Building the Photo Stream
The photos section is a static, zero-database photo stream. No CMS, no media server — just JPG files, JSON sidecars, and Astro’s build pipeline.
Sidecar files
Each photo has a companion .json file with the same base name:
```
img/
  2025-10-06-121017.jpg
  2025-10-06-121017.json
```
The JSON is generated by scripts/vision.ts, which calls the OpenAI Vision API to produce alt text and a title, then reads EXIF data with exiftool:
```json
{
  "id": "2025-10-06-121017",
  "title": ["Golden Temple Bell in Sunlit Foliage", ...],
  "alt": "A small brass bell hangs from a leafy branch...",
  "location": "18 deg 48' 16.92\" N, 98 deg 55' 18.92\" E",
  "date": "2025-10-06",
  "tags": ["bell", "brass", "leaves", "bokeh"],
  "exif": {
    "camera": "X-T3",
    "lens": "XF16-55mmF2.8 R LM WR",
    "aperture": "2.8",
    "iso": "400",
    "focal_length": "55.0",
    "shutter_speed": "1/250"
  }
}
```
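The exiftool half of that script can be sketched as a pure mapping from exiftool's `-json` output to the `exif` block above. The tag names used here (`Model`, `LensID`, `Aperture`, `ISO`, `FocalLength`, `ShutterSpeed`) are standard exiftool tags, but treating them as what `vision.ts` actually reads is an assumption:

```typescript
// Hypothetical mapping from exiftool's -json tag names to the sidecar's
// exif block. Missing tags default to empty strings.
function toSidecarExif(tags: Record<string, unknown>) {
  return {
    camera: String(tags.Model ?? ""),
    lens: String(tags.LensID ?? ""),
    aperture: String(tags.Aperture ?? ""),
    iso: String(tags.ISO ?? ""),
    focal_length: String(tags.FocalLength ?? ""),
    shutter_speed: String(tags.ShutterSpeed ?? ""),
  };
}
```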
This keeps metadata out of the repository’s content collections and out of the image files themselves. The build just reads whatever JSON files exist alongside the images.
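The sidecar shape can be described with an interface; this sketch is inferred from the JSON example above, and the project's actual `PhotoSidecar` definition may differ:

```typescript
// Sketch of the sidecar shape, inferred from the JSON example; the real
// PhotoSidecar definition in the project may differ.
interface PhotoExif {
  camera: string;
  lens: string;
  aperture: string;
  iso: string;
  focal_length: string;
  shutter_speed: string;
}

interface PhotoSidecar {
  id: string;
  title: string[];   // candidate titles from the Vision API
  alt: string;
  location: string;  // DMS coordinate string from exiftool
  date: string;      // ISO date, used for sorting
  tags: string[];
  exif: PhotoExif;
}
```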
Wiring images to sidecars at build time
Both are loaded with import.meta.glob — Vite resolves the paths at build time:
```typescript
const sidecars = import.meta.glob<PhotoSidecar>(
  "/src/content/photos/albums/**/*.json",
  { eager: true },
);
const imageModules = import.meta.glob<{ default: ImageMetadata }>(
  "/src/content/photos/albums/**/*.jpg",
  { eager: true },
);
```
Pairing them is a simple path replacement:
```typescript
const photos = Object.entries(sidecars)
  .map(([jsonPath, sidecar]) => {
    const imgPath = jsonPath.replace(".json", ".jpg");
    const image = imageModules[imgPath]?.default;
    return { sidecar, image };
  })
  .filter((p) => !!p.image)
  .sort(
    (a, b) =>
      new Date(b.sidecar.date).getTime() - new Date(a.sidecar.date).getTime(),
  );
```
Sidecars without a matching JPG are dropped. The stream is sorted newest-first by the sidecar date field.
Because eager: true is set, Vite inlines all resolved modules into the build. Astro’s <Image> component then handles resizing, format conversion, and srcset generation for each photo.
Justified layout
The grid uses justified-layout — the same library Flickr uses — to compute a row-based layout where every image reaches the same height within its row, and rows fill the container width:
```typescript
import justifiedLayout from "justified-layout";

const result = justifiedLayout(ratios, {
  containerWidth: grid.offsetWidth,
  targetRowHeight: 280,
  boxSpacing: 8,
  containerPadding: 0,
});
```
ratios is an array of width / height per visible photo. The result gives back top, left, width, height for each box. The grid container is set to position: relative with the computed containerHeight, and each item is positioned absolutely.
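The core idea behind row justification can be sketched as a small pure function; this is not the library's actual algorithm, just the arithmetic for a single row:

```typescript
// Sketch of the row-justification idea: scale one row of boxes so that,
// at a common height, their widths plus spacing exactly fill the container.
function justifyRow(
  ratios: number[],        // width / height per photo in the row
  containerWidth: number,
  targetRowHeight: number,
  spacing: number,
): { width: number; height: number }[] {
  const gaps = spacing * (ratios.length - 1);
  // Width the row would occupy at the target height, excluding gaps.
  const naturalWidth = ratios.reduce((sum, r) => sum + r * targetRowHeight, 0);
  // Scale the common height so the row fills the container exactly.
  const height = targetRowHeight * ((containerWidth - gaps) / naturalWidth);
  return ratios.map((r) => ({ width: r * height, height }));
}
```

Every box in the row gets the same height, and wider aspect ratios get proportionally more of the remaining width.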
On mobile (≤ 640px), the JS layout is bypassed entirely — the grid becomes a plain flex column with natural aspect ratios.
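That fallback can be expressed entirely in CSS. A sketch, assuming the `.photo-item` class from the markup (the grid container's class name is hypothetical):

```css
/* Hypothetical mobile fallback; selector names other than .photo-item
   are assumptions about the real stylesheet. */
@media (max-width: 640px) {
  .photo-grid {
    position: static;
    height: auto;
    display: flex;
    flex-direction: column;
    gap: 8px;
  }
  .photo-item {
    position: static !important;
    width: 100% !important;
    height: auto !important; /* natural aspect ratio */
  }
}
```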
Batch loading with IntersectionObserver
All photos are rendered in the HTML at build time but hidden beyond the first batch:
```astro
<div
  class="photo-item"
  data-ar={ar}
  data-batch={Math.floor(i / BATCH_SIZE)}
  data-hidden={i >= BATCH_SIZE ? "true" : undefined}
>
```
BATCH_SIZE is 15. A sentinel element sits below the grid. An IntersectionObserver watches it with a 400px root margin — as the user approaches the bottom, the next batch is revealed and the layout is recalculated:
```typescript
const observer = new IntersectionObserver(
  (entries) => {
    if (entries[0]?.isIntersecting) revealNextBatch();
  },
  { rootMargin: "400px" },
);
observer.observe(sentinel);
```
This avoids a paginated API entirely. All image URLs are in the HTML — the browser just needs to fetch them as they come into view, which lazy loading on the <Image> component handles.
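Stripped of the DOM toggling, revealNextBatch reduces to index bookkeeping. This helper is a hypothetical sketch of that computation, not the site's actual implementation:

```typescript
const BATCH_SIZE = 15;

// Hypothetical index bookkeeping behind revealNextBatch: given how many
// batches are already visible, return the item indices to un-hide next.
function nextBatchIndices(revealedBatches: number, totalItems: number): number[] {
  const start = revealedBatches * BATCH_SIZE;
  const count = Math.max(0, Math.min(BATCH_SIZE, totalItems - start));
  return Array.from({ length: count }, (_, i) => start + i);
}
```

The final batch is simply shorter, and once the start index passes the total the function returns an empty list, at which point the observer can be disconnected.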
Individual photo page
Each photo gets a static page at /photos/[id], generated from the same sidecar list:
```typescript
export async function getStaticPaths() {
  return photos.map((photo) => ({
    params: { id: photo.sidecar.id },
    props: { sidecar: photo.sidecar, image: photo.image },
  }));
}
```
The detail page shows the full-size image, title, date, EXIF data, and tags. Prev/next navigation is computed in the page component body (not in getStaticPaths) to avoid prop serialization issues with the sorted array:
```typescript
const sortedIds = Object.values(allSidecars)
  .sort((a, b) => new Date(b.date).getTime() - new Date(a.date).getTime())
  .map((s) => s.id);

const currentIndex = sortedIds.indexOf(sidecar.id);
const prevId = sortedIds[currentIndex - 1] ?? null;
const nextId = sortedIds[currentIndex + 1] ?? null;
```
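Because the list is newest-first, "prev" points at a newer photo and "next" at an older one, with nulls at both ends. A hypothetical standalone helper mirroring that logic makes the edge behaviour explicit:

```typescript
// Hypothetical helper mirroring the prev/next lookup; not the page's
// actual code. Out-of-range indices fall through to null.
function neighbours(
  sortedIds: string[],
  id: string,
): { prevId: string | null; nextId: string | null } {
  const i = sortedIds.indexOf(id);
  return {
    prevId: sortedIds[i - 1] ?? null, // newer photo, or null at the newest
    nextId: sortedIds[i + 1] ?? null, // older photo, or null at the oldest
  };
}
```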
What this avoids
There is no database, no image CDN, no media upload step, and no client-side data fetching. Everything is resolved at build time by Vite and Astro. The deployed site is entirely static HTML and pre-optimised images.
The only runtime work is the IntersectionObserver and the layout recalculation — both are lightweight vanilla JS with no framework dependency.