Building the Photo Stream
How the photos section works — JSON sidecars from Vision, import.meta.glob, Flickr's justified-layout, and batch loading with IntersectionObserver.
Category: Development
I wanted a photo section on this site that didn’t drag in a CMS, a media server, or client-side data fetching. The brief was simple: JPG files on disk, everything else resolved at build time.
The setup
- Site: Astro 6 static build, deployed as a single container.
- Source: JPG files plus JSON sidecars in src/content/photos/albums/.
- Goal: a chronological stream of photos, justified-grid layout, lazy batches on scroll — no runtime backend.
Sidecar files
Each photo has a companion .json file with the same base name:
img/
2025-10-06-121017.jpg
2025-10-06-121017.json
The JSON is generated by scripts/vision.ts, which calls the OpenAI Vision API to produce alt text and a title, then reads EXIF data with exiftool:
{
"id": "2025-10-06-121017",
"title": ["Golden Temple Bell in Sunlit Foliage", ...],
"alt": "A small brass bell hangs from a leafy branch...",
"location": "18 deg 48' 16.92\" N, 98 deg 55' 18.92\" E",
"date": "2025-10-06",
"tags": ["bell", "brass", "leaves", "bokeh"],
"exif": {
"camera": "X-T3",
"lens": "XF16-55mmF2.8 R LM WR",
"aperture": "2.8",
"iso": "400",
"focal_length": "55.0",
"shutter_speed": "1/250"
}
}
Keeping metadata in a sibling file means it stays out of the content collections and out of the image files themselves. The build just reads whatever JSON files exist alongside the images — no registration step, no indexer.
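For reference, the sidecar shape can be expressed as a TypeScript interface. This is inferred from the JSON example above, not taken from the site's source; the runtime guard is a hypothetical sketch of the kind of sanity check that helps when sidecars are hand-edited:

```typescript
// Shape inferred from the example sidecar above. The example shows
// "title" as an array (candidate titles from the Vision API), so it is
// typed as string[] here.
interface PhotoExif {
  camera: string;
  lens: string;
  aperture: string;
  iso: string;
  focal_length: string;
  shutter_speed: string;
}

interface PhotoSidecar {
  id: string;
  title: string[];
  alt: string;
  location: string;
  date: string; // ISO date, used for newest-first sorting
  tags: string[];
  exif: PhotoExif;
}

// Minimal runtime guard (hypothetical) — checks only the fields the
// build actually depends on: id, a parseable date, and tags.
function isPhotoSidecar(x: unknown): x is PhotoSidecar {
  const s = x as Partial<PhotoSidecar>;
  return (
    typeof s?.id === "string" &&
    typeof s?.date === "string" &&
    !Number.isNaN(new Date(s.date).getTime()) &&
    Array.isArray(s?.tags)
  );
}
```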
Wiring images to sidecars
Problem: The build needs to pair each sidecar with its matching JPG, run them through Astro’s image pipeline, and sort the lot by date — all without a database.
Implementation: Both are loaded with import.meta.glob — Vite resolves the paths at build time:
const sidecars = import.meta.glob<PhotoSidecar>(
"/src/content/photos/albums/**/*.json",
{ eager: true },
);
const imageModules = import.meta.glob<{ default: ImageMetadata }>(
"/src/content/photos/albums/**/*.jpg",
{ eager: true },
);
Pairing them is a straight path replacement:
const photos = Object.entries(sidecars)
.map(([jsonPath, sidecar]) => {
const imgPath = jsonPath.replace(".json", ".jpg");
const image = imageModules[imgPath]?.default;
return { sidecar, image };
})
.filter((p) => !!p.image)
.sort((a, b) =>
new Date(b.sidecar.date).getTime() - new Date(a.sidecar.date).getTime(),
);
Solution: Sidecars without a matching JPG are dropped. The stream is sorted newest-first by the sidecar date field. Because eager: true is set, Vite inlines all resolved modules into the build, and Astro’s <Image> component handles resizing, format conversion, and srcset generation for each photo.
Justified layout
The grid uses justified-layout — the same library Flickr uses — to compute a row-based layout where every image reaches the same height within its row, and rows fill the container width:
import justifiedLayout from "justified-layout";
const result = justifiedLayout(ratios, {
containerWidth: grid.offsetWidth,
targetRowHeight: 280,
boxSpacing: 8,
containerPadding: 0,
});
ratios is an array of width / height per visible photo. The result gives back top, left, width, height for each box. The grid container is set to position: relative with the computed containerHeight, and each item is positioned absolutely.
On mobile (≤ 640px), the JS layout is bypassed entirely — the grid becomes a plain flex column with natural aspect ratios.
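Applying the result is mostly bookkeeping. Here is a sketch (not the site's actual code) of positioning each grid item from the returned boxes, typed against the structural shape of justified-layout's return value:

```typescript
// Sketch of applying a justified-layout result: the container gets the
// computed height, each item is absolutely positioned from its box.
// Typed structurally so both real HTMLElements and plain objects fit.
type LayoutBox = { top: number; left: number; width: number; height: number };
type LayoutResult = { containerHeight: number; boxes: LayoutBox[] };

function applyLayout(
  grid: { style: Record<string, string> },
  items: Array<{ style: Record<string, string> }>,
  result: LayoutResult,
): void {
  grid.style.position = "relative";
  grid.style.height = `${result.containerHeight}px`;
  result.boxes.forEach((box, i) => {
    const el = items[i];
    if (!el) return; // more boxes than items shouldn't happen, but be safe
    el.style.position = "absolute";
    el.style.top = `${box.top}px`;
    el.style.left = `${box.left}px`;
    el.style.width = `${box.width}px`;
    el.style.height = `${box.height}px`;
  });
}
```

The same function reruns on resize and whenever a new batch is revealed, since both change the inputs to the layout.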
Batch loading with IntersectionObserver
All photos are rendered in the HTML at build time but hidden beyond the first batch:
<div
class="photo-item"
data-ar={ar}
data-batch={Math.floor(i / BATCH_SIZE)}
data-hidden={i >= BATCH_SIZE ? "true" : undefined}
>
BATCH_SIZE is 15. A sentinel element sits below the grid. An IntersectionObserver watches it with a 400px root margin — as the user approaches the bottom, the next batch is revealed and the layout is recalculated:
const observer = new IntersectionObserver(
(entries) => {
if (entries[0]?.isIntersecting) revealNextBatch();
},
{ rootMargin: "400px" },
);
observer.observe(sentinel);
This sidesteps a paginated API entirely. All image URLs are already in the HTML — the browser just fetches them as they come into view, which lazy loading on the <Image> component handles.
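revealNextBatch itself isn't shown above. A DOM-free sketch of the batching logic — parameterized for clarity, where the real version would query `.photo-item[data-batch=…]` and remove data-hidden — might look like:

```typescript
// Hypothetical sketch of the reveal step: unhide every item in the next
// batch and report how many were revealed, so the caller knows when to
// disconnect the observer. Items mirror the data-batch / data-hidden
// attributes rendered at build time.
type BatchedItem = { batch: number; hidden: boolean };

function revealBatch(items: BatchedItem[], nextBatch: number): number {
  let revealed = 0;
  for (const item of items) {
    if (item.batch === nextBatch && item.hidden) {
      item.hidden = false; // corresponds to removing data-hidden
      revealed++;
    }
  }
  return revealed; // 0 → no more batches left
}
```

After a non-zero reveal the justified layout is recalculated for the now-visible items; after a zero reveal the observer can disconnect, since there is nothing left to show.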
Individual photo page
Each photo gets a static page at /photos/[id], generated from the same sidecar list:
export async function getStaticPaths() {
return photos.map((photo) => ({
params: { id: photo.sidecar.id },
props: { sidecar: photo.sidecar, image: photo.image },
}));
}
The detail page shows the full-size image, title, date, EXIF data, and tags. Prev/next navigation is computed in the page component body — not in getStaticPaths — to avoid prop serialization issues with the sorted array:
const sortedIds = Object.values(allSidecars)
.sort((a, b) => new Date(b.date).getTime() - new Date(a.date).getTime())
.map((s) => s.id);
const currentIndex = sortedIds.indexOf(sidecar.id);
const prevId = sortedIds[currentIndex - 1] ?? null;
const nextId = sortedIds[currentIndex + 1] ?? null;
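The same lookup, extracted into a standalone helper as a sketch mirroring the page-body logic: with a newest-first list, "prev" is the newer neighbour and "next" the older one, and both fall back to null at the ends of the stream.

```typescript
// Sketch of the prev/next lookup against a newest-first id list.
function neighbours(
  sortedIds: string[],
  id: string,
): { prevId: string | null; nextId: string | null } {
  const i = sortedIds.indexOf(id);
  return {
    prevId: i > 0 ? sortedIds[i - 1] : null,
    nextId: i >= 0 && i < sortedIds.length - 1 ? sortedIds[i + 1] : null,
  };
}
```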
What to take away
- A JSON sidecar next to each JPG is enough metadata storage for a static photo stream — no content collection required.
- import.meta.glob with { eager: true } is the right primitive for build-time file pairing in Vite/Astro.
- justified-layout gives you the Flickr grid look without writing layout maths yourself; disable it entirely on mobile.
- Render every item at build time, then reveal in batches via IntersectionObserver — it replaces pagination with near-zero runtime code.
- Compute prev/next in the page body, not in getStaticPaths, to dodge prop serialization quirks.