Operating Astro SSR in Production with Podman
A compact day-two operations runbook: start, stop, logs, deploy updates, and rollback strategy.
Category: On-Premises & Private Cloud
Shipping the site was the satisfying part. The part that actually matters long-term is the boring one — knowing, three months later, how to check logs, push an update, or roll back when something goes sideways. I wanted a runbook short enough to keep in my head, not a wiki I’d never open.
This is what I settled on.
The setup
- Host: Debian VPS, Podman with `podman-compose`.
- Working directory: `/opt/website`; the Git checkout is also the deploy root.
- Proxy: Caddy on 80/443, container on `127.0.0.1:4321`.
Start and stop
From the project directory:
cd /opt/website
podman-compose -f compose.yml up -d
podman-compose -f compose.yml down
When I only need to poke a single container — e.g. after a crash loop — the plain podman verbs are still the fastest thing:
podman stop website
podman start website
podman restart website
Health checks
podman ps
podman logs -f website
curl -I http://127.0.0.1:4321
curl -I https://adrian-altner.de
An HTTP status code is still the highest-signal, lowest-noise diagnostic I have — faster than reading logs, less ambiguous than ps.
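That status-code habit can be folded into a tiny script. A minimal sketch assuming only POSIX sh and curl; `check_status` and the script name are my inventions, not part of the original setup:

```shell
#!/bin/sh
# healthy.sh (hypothetical name): decide pass/fail purely on the
# HTTP status code, small enough to wire into cron later.
check_status() {
  case "$1" in
    2??|3??) return 0 ;;  # healthy enough
    *)       return 1 ;;  # 4xx/5xx, or 000 when curl could not connect
  esac
}

# usage on the VPS:
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:4321)
#   check_status "$code" || echo "site unhealthy: $code"
```

Treating redirects as healthy is a judgment call; tighten the pattern to `2??` if the container should never redirect.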
Deploy updates
Problem: “Deploy a new version” needs to be short enough that I’ll actually do it right at 11pm.
Implementation: Four commands, always the same:
cd /opt/website
git pull
podman-compose -f compose.yml up --build -d
podman image prune -f
Solution: git pull brings in the change, up --build -d rebuilds and swaps the running container, and image prune -f keeps disk from slowly filling with orphaned build layers.
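The four commands can also live in one function so the 11pm version of me types a single word. A sketch, not the post's actual setup; the `deploy` name and the `RUN=echo` dry-run switch are my additions:

```shell
#!/bin/sh
# deploy (hypothetical wrapper): the same four commands, chained with
# && so the loop stops at the first failing step. Set RUN=echo to
# print the commands instead of running them.
deploy() {
  ${RUN:-} cd /opt/website &&
  ${RUN:-} git pull &&
  ${RUN:-} podman-compose -f compose.yml up --build -d &&
  ${RUN:-} podman image prune -f
}
```

The `&&` chain matters: if `git pull` hits a conflict, nothing gets rebuilt and the running container stays untouched.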
Rolling back
When the newest build is worse than what came before it, the simplest recovery is to check out the previous commit and rebuild — no separate image registry required, because the image is reproducible from source:
cd /opt/website
git log --oneline -n 5
git checkout <previous-commit>
podman-compose -f compose.yml up --build -d
If a version turns out to be a regression worth remembering, I tag it afterwards so the next rollback doesn’t need commit-hash archaeology.
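The checkout-and-rebuild recovery can be wrapped the same way. A sketch under the same assumptions; `rollback` and the `RUN=echo` preview switch are my names, not the post's:

```shell
#!/bin/sh
# rollback (hypothetical wrapper): check out a known-good ref and
# rebuild. Accepts a commit hash or, better, a tag set earlier with
# `git tag`. RUN=echo previews the commands without running them.
rollback() {
  ${RUN:-} cd /opt/website &&
  ${RUN:-} git checkout "$1" &&
  ${RUN:-} podman-compose -f compose.yml up --build -d
}

# usage: rollback <previous-commit>   # or a tag name
```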
When Caddy looks wrong
Most HTTPS incidents on this box are one of three things — DNS drift, a port that silently changed, or a proxy target that moved. Caddy itself almost never breaks. Three commands cover all of it:
sudo systemctl status caddy --no-pager
sudo journalctl -u caddy -n 100 --no-pager
sudo caddy validate --config /etc/caddy/Caddyfile
Keeping it boring
A single-service production setup doesn’t need sophistication — it needs three properties:
- Observable: `logs`, `status`, `curl`, in that order.
- Repeatable: `up --build -d` does the same thing every time.
- Reversible: a commit hash and a rebuild are a complete rollback.
What to take away
- A deploy you can run half-asleep is more valuable than a clever pipeline.
- `git pull && up --build -d && image prune -f` is the whole update loop.
- Rolling back by commit + rebuild is free when the image is reproducible from source; no registry needed.
- Most HTTPS incidents on a small setup are DNS, ports, or proxy target — not the proxy itself.
- Keep the command surface small; the fewer verbs you need to remember, the fewer you forget.