How to Build Your Website: From Local Setup to VPS
A complete local setup for running an Astro SSR site in Podman with the Node standalone adapter.
Category: On-Premises & Private Cloud
I wanted a local stack for my Astro site that looked exactly like production — no “works on my laptop, fails on the server” surprises once I moved to a VPS. The answer was to run the site in Podman from day one, locally, with the Node standalone adapter doing the SSR.
This post documents the first half of that path: getting Astro to render server-side and run inside a container on my machine.
The setup
- Astro in SSR mode via @astrojs/node in standalone mode.
- Container entry point: dist/server/entry.mjs.
- Runtime: Podman on macOS, app reachable on port 4321.
Astro SSR configuration
The Node adapter in standalone mode ships a tiny HTTP server — no Express, no extra glue. Combined with output: "server" it gives you a build that boots from a single JS file.
// astro.config.mjs
import { defineConfig } from "astro/config";
import mdx from "@astrojs/mdx";
import node from "@astrojs/node";
export default defineConfig({
  site: "https://adrian-altner.de",
  output: "server",
  integrations: [mdx()],
  markdown: {
    shikiConfig: {
      theme: "github-light",
    },
  },
  adapter: node({
    mode: "standalone",
  }),
});
A matching start script in package.json keeps the container CMD honest:
{
  "scripts": {
    "build": "astro build",
    "start": "node dist/server/entry.mjs"
  }
}
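Before reaching for Podman at all, it's worth proving that the entry point boots on the host. A minimal smoke run, assuming pnpm and the config above (the two-second pause is an arbitrary grace period, not anything the adapter requires):

```shell
# Build, boot the standalone server in the background, probe it, tear it down.
pnpm run build
HOST=127.0.0.1 PORT=4321 node dist/server/entry.mjs &
server_pid=$!
sleep 2                        # give the server a moment to bind
curl -I http://127.0.0.1:4321  # expect an HTTP/1.1 200 OK status line
kill "$server_pid"
```

If this fails, fix it here; debugging a broken entry point through container logs is strictly worse.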
Containerfile
Problem: I wanted the runtime image to carry only what’s needed to serve the site — no dev dependencies, no source tree, no build toolchain.
Implementation: A two-stage build. The first stage installs with pnpm and runs astro build; the second copies only dist/ into a slim Node base image and drops privileges to the node user.
FROM node:20-bookworm-slim AS build
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build
FROM node:20-bookworm-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
ENV HOST=0.0.0.0
ENV PORT=4321
COPY --from=build --chown=node:node /app/dist ./dist
USER node
EXPOSE 4321
CMD ["node", "dist/server/entry.mjs"]
Solution: The runtime image contains nothing but dist/ and a Node interpreter. Boot time is dominated by Node itself, not by anything I ship on top.
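One way to verify that claim is to build both stages and compare them; a quick check (the image tags here are illustrative, not part of the Containerfile):

```shell
# Build the intermediate stage and the final image, then compare sizes.
podman build --target build -t website:build-stage -f Containerfile .
podman build -t website:runtime -f Containerfile .
podman image ls --format "{{.Repository}}:{{.Tag}} {{.Size}}" | grep website
# The runtime image's /app should contain dist/ and nothing else.
podman run --rm website:runtime ls /app
```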
Compose file
Compose is overkill for a single service, but it makes the local and production invocations identical — same file, same command, same shape.
# compose.yml
services:
  website:
    build:
      context: .
      dockerfile: Containerfile
    container_name: website
    ports:
      - "4321:4321"
    environment:
      NODE_ENV: production
      HOST: "0.0.0.0"
      PORT: 4321
    restart: unless-stopped
Local run
podman machine start
podman compose -f compose.yml up --build -d
podman ps
curl -I http://localhost:4321
An HTTP/1.1 200 OK back from that last curl is the whole acceptance criterion at this stage.
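A one-shot curl can race the container's startup. A small polling helper makes the check deterministic; a sketch, where the function name, retry count, and interval are arbitrary choices of mine:

```shell
# wait_for: poll a URL until it answers, instead of a single curl that can
# fire before the container has bound its port.
wait_for() {
  url="$1"
  for _ in $(seq 1 30); do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Usage after `podman compose up`:
# wait_for http://localhost:4321 && echo "site is up"
```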
Troubleshooting
Two things bit me on first boot:
- Wrong compose provider. If podman compose silently invokes docker-compose, pin the provider: export PODMAN_COMPOSE_PROVIDER=/opt/homebrew/bin/podman-compose
- VM wedged on startup. A quick podman machine stop && podman machine start clears it.
What to take away
- Run the same container locally that you intend to ship — the packaging is the spec.
- The Node adapter’s standalone mode is the lightest path to “one JS file, one port”.
- A two-stage Containerfile keeps the runtime image small without fighting the builder.
- Verify dist/server/entry.mjs boots early; once that path is stable, everything after is plumbing.