Triggering VPS Deploys with GitHub Actions

Category: On-Premises & Private Cloud

Tags: podman, github-actions


Manual deploys work fine — until the one time you forget one and spend twenty minutes wondering why the fix isn’t live. Once the deploy path on the VPS was solid, the obvious next step was to let GitHub Actions pull the trigger on every push to main.

This post documents the wiring: a dedicated deploy key, the four repo secrets that feed the workflow, and the SSH step that runs the same commands I used to type by hand — pull-based, no registry credentials in CI.

The setup

  • VPS: Debian, Podman, project checked out at /opt/website, container managed by podman compose.
  • CI: GitHub Actions on ubuntu-latest with appleboy/ssh-action@v1.2.0.
  • Model: pull-based deploy — CI opens an SSH session, the server pulls from Git itself, and nothing is shipped from CI to the server beyond the signal to start.

Preconditions on the VPS

Before wiring anything up, the manual path has to work cleanly. If podman compose misbehaves under my fingers, it will misbehave under CI.

cd /opt/website
git pull --ff-only origin main
podman compose -f compose.yml up -d --build
curl -I http://127.0.0.1:4321

If this isn’t stable yet, automate later. CI will amplify bugs, not paper over them.

A dedicated deploy key

I generated a fresh ed25519 keypair for exactly this job — reusing my personal key from CI would be asking for trouble:

ssh-keygen -t ed25519 -C "gha-vps-deploy" -f ./gha_vps_deploy_key -N ""

The two halves go to opposite sides:

  • gha_vps_deploy_key.pub → the VPS user’s ~/.ssh/authorized_keys.
  • gha_vps_deploy_key (private key) → GitHub repo secrets as VPS_SSH_KEY.

Use a dedicated deploy user with access to /opt/website and Podman — not root, not your personal login.
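Landing the public key on the VPS is worth doing carefully, since sshd silently ignores authorized_keys files with sloppy permissions. A minimal sketch, run as the deploy user on the server (the key filename is the one generated above; the deploy user's name is whatever you chose):

```shell
# On the VPS, as the deploy user. sshd refuses keys if ~/.ssh or
# authorized_keys are group/world-writable, hence the explicit chmods.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat gha_vps_deploy_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

A quick `ssh -i ./gha_vps_deploy_key deploy@your-host true` from your workstation confirms the key works before CI ever tries it.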

Repo secrets

In Settings → Secrets and variables → Actions, four entries:

  • VPS_HOST — server IP or DNS.
  • VPS_PORT — usually 22.
  • VPS_USER — the deploy user.
  • VPS_SSH_KEY — the private key contents, multi-line.
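Clicking through the web UI works, but if you have the GitHub CLI installed and authenticated for the repo, the same four secrets can be set from the shell (the host, port, and user values here are placeholders):

```shell
# Assumes `gh` is authenticated and run from inside the repo checkout.
gh secret set VPS_HOST --body "203.0.113.10"
gh secret set VPS_PORT --body "22"
gh secret set VPS_USER --body "deploy"
# Multi-line private key: pipe the file in rather than pasting it.
gh secret set VPS_SSH_KEY < ./gha_vps_deploy_key
```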

The workflow

Create .github/workflows/deploy.yml:

name: Deploy VPS

on:
  push:
    branches: [main]
  workflow_dispatch:

concurrency:
  group: deploy-production
  cancel-in-progress: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.2.0
        with:
          host: ${{ secrets.VPS_HOST }}
          port: ${{ secrets.VPS_PORT }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script_stop: true
          script: |
            set -euo pipefail
            cd /opt/website

            git fetch --all --prune
            git checkout main
            git pull --ff-only origin main

            if ! podman compose -f compose.yml up -d --build; then
              podman rm -f website || true
              podman compose -f compose.yml up -d --build
            fi

            podman ps --filter name=website
            curl -fsS http://127.0.0.1:4321 >/dev/null

Two design choices worth flagging. The concurrency block with cancel-in-progress: false ensures a second push during an in-flight deploy queues behind the first rather than killing it — half-applied deploys are worse than slightly delayed ones. And the if ! podman compose … fallback covers the stale-container case without aborting the run: if the first attempt fails on a name conflict, podman rm -f website clears it and the second attempt succeeds.
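The retry shape is plain shell, nothing Podman-specific: attempt, clear stale state on failure, attempt exactly once more. A self-contained sketch with a simulated first-attempt failure standing in for the name conflict:

```shell
#!/bin/sh
# Retry-once pattern: first attempt may fail on stale state;
# clean up and retry exactly once rather than looping forever.
attempts=0
deploy() {
  attempts=$((attempts + 1))
  # Simulate failure on the first attempt only (stands in for
  # `podman compose up` hitting a container name conflict).
  [ "$attempts" -gt 1 ]
}
if ! deploy; then
  echo "first attempt failed; clearing stale state"  # stands in for podman rm -f
  deploy
fi
echo "attempts=$attempts"
```

The single retry is deliberate: if the second attempt also fails, the problem is not stale state and the workflow should go red.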

The server stays pull-based. CI never holds registry credentials, never ships an image — it just tells the VPS to update itself.

First run and validation

Run the workflow once via workflow_dispatch before relying on the push trigger, then verify from the VPS side:

curl -I http://127.0.0.1:4321
curl -I https://adrian-altner.de
sudo systemctl status caddy --no-pager

If both the local and public checks pass, automatic deploy is live.

Common failure modes

  • Permission denied (publickey) — wrong private key in VPS_SSH_KEY, or the public key never landed in authorized_keys. Both halves need checking, not just one.
  • fatal: Not possible to fast-forward — the server’s main branch has diverged. Clean or reset on the VPS; do not add --force to the workflow as a shortcut.
  • container name "website" is already in use — stale container state. The fallback in the script handles this automatically.
  • dial tcp 127.0.0.1:4321: connect: connection refused — the app container is down. podman logs website tells you why.
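For the diverged-main case specifically, one recovery path is to make the server's checkout match the remote exactly. This is destructive to any local changes on the VPS, which is the point — the server should never have local changes:

```shell
# On the VPS. WARNING: --hard discards local commits and working-tree edits.
cd /opt/website
git fetch origin
git status                      # inspect what diverged before discarding it
git reset --hard origin/main    # force local main to match the remote
git pull --ff-only origin main  # should now be a clean no-op
```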

What to take away

  • Get the manual path solid first. CI is an amplifier, not a fix — if the deploy is fragile by hand it will be worse in CI.
  • Pull-based beats push-based for a single VPS. No registry credentials in CI, no image shipping, one direction of trust.
  • Dedicated deploy key, dedicated deploy user. Blast radius stays small when either leaks.
  • concurrency with cancel-in-progress: false — queue deploys, never kill one mid-flight.
  • Keep the script idempotent. The stale-container fallback costs four extra lines and removes an entire class of 2 a.m. pages.