
Deployment Guide

This project ships with a GitLab CI/CD pipeline that builds a Docker image, pushes it to the GitLab Container Registry, and deploys it to a VPN-reachable VPS over SSH.

Overview

Deployment is driven by:

  • .gitlab-ci.yml for CI/CD stages
  • scripts/deploy.sh for remote deployment and rollback
  • docker-compose.deploy.yml for the production app container
  • scripts/healthcheck.py for post-deploy validation

The current production flow is:

  1. Run lint, tests, and type checks
  2. Build and push a Docker image to GitLab Container Registry
  3. Scan the image with Trivy
  4. SSH into the VPS
  5. Upload docker-compose.deploy.yml
  6. Write a remote .env
  7. Pull the new image and restart the service
  8. Poll /health
  9. Roll back to the last successful image if health checks fail
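
The remote portion of that flow (steps 4–8) boils down to roughly the following. This is an illustrative sketch, not scripts/deploy.sh itself; the host, path, and port values are placeholders, and run() only prints so the sketch is safe to execute anywhere:

```shell
# Illustrative sketch of the remote deploy steps; run() echoes instead of
# executing. The real logic lives in scripts/deploy.sh.
run() { echo "+ $*"; PLAN="$PLAN $*;"; }

DEPLOY="${DEPLOY_USER:-deploy}@${DEPLOY_HOST:-vps.vpn.internal}"   # placeholder host
DEPLOY_PATH="${DEPLOY_PATH:-/opt/vault-dash}"
APP_PORT="${APP_PORT:-8000}"

run scp docker-compose.deploy.yml "$DEPLOY:$DEPLOY_PATH/"
run ssh "$DEPLOY" "docker compose -f $DEPLOY_PATH/docker-compose.deploy.yml --env-file $DEPLOY_PATH/.env pull"
run ssh "$DEPLOY" "docker compose -f $DEPLOY_PATH/docker-compose.deploy.yml --env-file $DEPLOY_PATH/.env up -d"
run ssh "$DEPLOY" "curl -fsS http://127.0.0.1:$APP_PORT/health"
```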

1. Prerequisites

VPS requirements

Minimum recommended VPS baseline:

  • 2 vCPU
  • 2 GB RAM
  • 20 GB SSD
  • Linux host with systemd
  • Stable outbound internet access to:
    • GitLab Container Registry
    • Python package mirrors if you build locally on the server later
    • Market data providers if production uses live data
  • Docker Engine installed
  • Docker Compose plugin installed (docker compose)
  • curl installed
  • SSH access enabled

Recommended hardening:

  • Dedicated non-root deploy user
  • Host firewall enabled (ufw or equivalent)
  • Automatic security updates
  • Disk monitoring and log rotation
  • VPN-only access to SSH and application traffic

Software to install on the VPS

Example for Debian/Ubuntu:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

# Install Docker
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker deploy

Log out and back in after adding the deploy user to the docker group. If the deploy user does not exist yet, create it first (for example with sudo adduser deploy).
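
A quick, non-destructive way to confirm the tooling is in place afterwards (run as the deploy user):

```shell
# Report which required tools are visible to the current user.
check() { command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: missing"; }

REPORT="$(check docker; check curl; check ssh)"
echo "$REPORT"

# Membership in the docker group only takes effect after a fresh login.
id -nG | tr ' ' '\n' | grep -qx docker && echo "docker group: yes" || echo "docker group: no"
```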


2. GitLab runner setup

The repository uses three CI stages:

  • test
  • build
  • deploy

What the pipeline expects

From .gitlab-ci.yml:

  • Test jobs run in python:3.12-slim
  • Image builds run with docker:27 plus docker:27-dind
  • Deploy runs in python:3.12-alpine and installs:
    • bash
    • openssh-client
    • curl
    • docker-cli
    • docker-cli-compose

Runner options

You can use either:

  1. GitLab shared runners, if they support Docker-in-Docker for your project
  2. A dedicated self-hosted Docker runner

Use a Docker executor runner with privileged mode enabled for the build_image job.

Example config.toml excerpt:

[[runners]]
  name = "vault-dash-docker-runner"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "python:3.12-slim"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

Registering a runner

sudo gitlab-runner register

Recommended answers:

  • URL: your GitLab instance URL
  • Executor: docker
  • Default image: python:3.12-slim
  • Tags: optional, but useful if you want to target dedicated runners later

Runner permissions and networking

The runner must be able to:

  • Authenticate to the GitLab Container Registry
  • Reach the target VPS over SSH
  • Reach the target VPS VPN address during deploy validation
  • Pull base images from Docker Hub or your mirror
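
A rough connectivity check from the runner host (the registry hostname and VPS address below are examples; substitute your own):

```shell
# TCP reachability probes; each is bounded by a 5-second timeout, and the
# script succeeds either way so it can run before the VPN is up.
REGISTRY_HOST="registry.gitlab.com"          # or your self-hosted instance
VPS_HOST="${DEPLOY_HOST:-100.64.0.10}"       # e.g. a Tailscale address

nc -z -w5 "$REGISTRY_HOST" 443 2>/dev/null && echo "registry: reachable" || echo "registry: NOT reachable"
nc -z -w5 "$VPS_HOST" 22 2>/dev/null && echo "VPS SSH: reachable" || echo "VPS SSH: NOT reachable"
```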

3. SSH key configuration

Deployment authenticates with DEPLOY_SSH_PRIVATE_KEY, which the deploy job writes to ~/.ssh/id_ed25519 before running scripts/deploy.sh.
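
In the deploy job this typically looks like the sketch below (the exact commands live in .gitlab-ci.yml; the 700/600 permissions are what OpenSSH requires):

```shell
# Materialize the CI variable as a private key file with the permissions
# OpenSSH insists on. DEPLOY_SSH_PRIVATE_KEY comes from GitLab CI/CD settings.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
printf '%s\n' "$DEPLOY_SSH_PRIVATE_KEY" > "$HOME/.ssh/id_ed25519"
chmod 600 "$HOME/.ssh/id_ed25519"
```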

Generate a deployment keypair

On a secure admin machine:

ssh-keygen -t ed25519 -C "gitlab-deploy-vault-dash" -f ./vault_dash_deploy_key

This creates:

  • vault_dash_deploy_key — private key
  • vault_dash_deploy_key.pub — public key

Install the public key on the VPS

ssh-copy-id -i ./vault_dash_deploy_key.pub deploy@YOUR_VPN_HOST

Or manually append it to /home/deploy/.ssh/authorized_keys (the .ssh directory must be mode 700 and authorized_keys mode 600, both owned by the deploy user).

Add the private key to GitLab CI/CD variables

In Settings → CI/CD → Variables add:

  • DEPLOY_SSH_PRIVATE_KEY — contents of the private key

Recommended flags:

  • Masked: yes
  • Protected: yes
  • Environment scope: production if you use environment-specific variables

Known-host handling

The current deploy script uses:

-o StrictHostKeyChecking=no

That makes first connection easier, but it weakens SSH trust validation. For a stricter setup, update the pipeline to preload known_hosts and remove that option.
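
One way to preload known_hosts, sketched here, is to scan the host key in a before_script step. Note that ssh-keyscan on first contact is still trust-on-first-use; for real pinning, capture the host key out-of-band and store it in a CI variable:

```shell
# Record the VPS host key so StrictHostKeyChecking can stay enabled.
# DEPLOY_HOST and DEPLOY_PORT are the CI variables described below.
mkdir -p "$HOME/.ssh"
ssh-keyscan -p "${DEPLOY_PORT:-22}" "$DEPLOY_HOST" >> "$HOME/.ssh/known_hosts" 2>/dev/null || true
```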


4. VPN setup for access

The deployment is designed for private-network access.

  • The application container binds to loopback by default
  • DEPLOY_HOST is expected to be a VPN-reachable private IP or internal DNS name
  • SSH and HTTP traffic should not be exposed publicly unless a hardened reverse proxy is placed in front

Typical topology:

Admin / GitLab Runner
        |
        | VPN
        v
  VPS private address
        |
        +--> SSH (22)
        +--> reverse proxy or direct internal app access

Tailscale example

  1. Install Tailscale on the VPS
  2. Join the host to your tailnet
  3. Use the Tailscale IP or MagicDNS name as DEPLOY_HOST
  4. Restrict firewall rules to the Tailscale interface

Example UFW rules:

sudo ufw allow in on tailscale0 to any port 22 proto tcp
sudo ufw allow in on tailscale0 to any port 8000 proto tcp
sudo ufw deny 22/tcp
sudo ufw deny 8000/tcp
sudo ufw enable

WireGuard alternative

If you use WireGuard instead of Tailscale:

  • assign the VPS a stable private VPN IP
  • allow SSH and proxy traffic only on the WireGuard interface
  • set DEPLOY_HOST to that private IP
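
A minimal wg0.conf sketch for the VPS side (keys, IPs, and the port are placeholders, not working values):

```
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.2/24              # the VPS's private VPN IP
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]
# Admin workstation / GitLab runner
PublicKey = <peer-public-key>
AllowedIPs = 10.8.0.1/32
```

With this layout, DEPLOY_HOST would be 10.8.0.2.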

Access patterns

Preferred options:

  1. VPN access only, app bound to 127.0.0.1, reverse proxy on same host
  2. VPN access only, app published to private/VPN interface
  3. Public HTTPS only through reverse proxy, app still bound internally

Least preferred:

  • public direct access to port 8000

5. Environment variables

The deploy script supports two patterns:

  1. Provide a full APP_ENV_FILE variable containing the remote .env
  2. Provide individual CI variables and let scripts/deploy.sh assemble the .env

Required GitLab variables

SSH and deployment

  • DEPLOY_SSH_PRIVATE_KEY
  • DEPLOY_USER
  • DEPLOY_HOST
  • DEPLOY_PORT (optional, default 22)
  • DEPLOY_PATH (optional, default /opt/vault-dash)

Container registry

These are generally provided by GitLab automatically in CI:

  • CI_REGISTRY
  • CI_REGISTRY_IMAGE
  • CI_REGISTRY_USER
  • CI_REGISTRY_PASSWORD
  • CI_COMMIT_SHA

App runtime

  • APP_ENV
  • APP_NAME
  • APP_PORT
  • APP_BIND_ADDRESS
  • REDIS_URL
  • DEFAULT_SYMBOL
  • CACHE_TTL
  • WEBSOCKET_INTERVAL_SECONDS
  • NICEGUI_MOUNT_PATH
  • NICEGUI_STORAGE_SECRET
  • CORS_ORIGINS

Optional deployment controls

  • APP_ENV_FILE
  • COMPOSE_FILE
  • COMPOSE_SERVICE
  • DEPLOY_TIMEOUT
  • HEALTHCHECK_URL
  • REMOTE_ENV_FILE
  • EXTERNAL_HEALTHCHECK_URL
  • IMAGE_TAG
  • APP_IMAGE

Example .env

APP_IMAGE=registry.gitlab.com/your-group/vault-dash:main-123456
APP_ENV=production
APP_NAME=Vault Dashboard
APP_PORT=8000
APP_BIND_ADDRESS=127.0.0.1
REDIS_URL=
DEFAULT_SYMBOL=GLD
CACHE_TTL=300
WEBSOCKET_INTERVAL_SECONDS=5
NICEGUI_MOUNT_PATH=/
NICEGUI_STORAGE_SECRET=replace-with-long-random-secret
CORS_ORIGINS=https://vault.example.com

Variable behavior in the app

app/main.py loads runtime settings from environment variables and uses them for:

  • CORS configuration
  • Redis connection
  • cache TTL
  • default symbol
  • WebSocket publish interval
  • NiceGUI mount path
  • NiceGUI storage secret

Secret management guidance

Treat these as secrets or sensitive config:

  • DEPLOY_SSH_PRIVATE_KEY
  • NICEGUI_STORAGE_SECRET
  • REDIS_URL if it contains credentials
  • any future broker API credentials
  • any future OAuth client secrets

6. SSL/TLS configuration

SSL/TLS is strongly recommended, especially because future OAuth integrations require stable HTTPS callback URLs and secure cookie handling.

Current app behavior

The app itself listens on plain HTTP inside the container on port 8000.

Recommended production pattern:

Client -> HTTPS reverse proxy -> vault-dash container (HTTP on localhost/private network)

Common reverse proxy choices:

  • Caddy
  • Nginx
  • Traefik

Minimum TLS recommendations

  • TLS termination at the reverse proxy
  • Automatic certificate management with Let's Encrypt or internal PKI
  • Redirect HTTP to HTTPS
  • HSTS once the domain is stable
  • Forward standard proxy headers

Nginx example

server {
    listen 80;
    server_name vault.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name vault.example.com;

    ssl_certificate /etc/letsencrypt/live/vault.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vault.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

OAuth readiness checklist

Before adding OAuth:

  • serve the app only over HTTPS
  • use a stable public or internal FQDN
  • keep CORS_ORIGINS limited to trusted origins
  • ensure WebSocket upgrade headers pass through the reverse proxy
  • store OAuth client secrets in GitLab CI/CD variables or a secret manager
  • verify callback/redirect URLs exactly match the provider configuration

7. Deployment procedure

One-time server preparation

  1. Provision the VPS
  2. Install Docker and Compose
  3. Create a deploy user
  4. Install the SSH public key for that user
  5. Join the VPS to your VPN
  6. Configure firewall rules
  7. Create the deployment directory:
sudo mkdir -p /opt/vault-dash
sudo chown deploy:deploy /opt/vault-dash

GitLab CI/CD configuration

  1. Add all required variables in GitLab
  2. Protect production variables
  3. Ensure the deploy runner can reach the VPN host
  4. Push to the default branch

What happens during deploy

scripts/deploy.sh will:

  • connect to DEPLOY_USER@DEPLOY_HOST
  • create DEPLOY_PATH if it does not exist
  • write .env to REMOTE_ENV_FILE
  • upload docker-compose.deploy.yml
  • log into the GitLab registry on the VPS
  • pull APP_IMAGE
  • start the service with docker compose
  • check http://127.0.0.1:${APP_PORT}/health by default
  • restore the previous image if the health check never passes

Manual deploy from a workstation

You can also export the same variables locally and run:

bash scripts/deploy.sh

This is useful for smoke tests before enabling automated production deploys.
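
For example (values are placeholders; real ones come from your GitLab settings, and the registry password should be a deploy token):

```shell
# Minimal set of variables for a manual run of scripts/deploy.sh.
export DEPLOY_USER=deploy
export DEPLOY_HOST=100.64.0.10                   # VPN address of the VPS
export DEPLOY_PATH=/opt/vault-dash
export APP_IMAGE=registry.gitlab.com/your-group/vault-dash:main-123456
export CI_REGISTRY=registry.gitlab.com
export CI_REGISTRY_USER=deploy-token-user
export CI_REGISTRY_PASSWORD=deploy-token-value   # never a personal password
```

Then run bash scripts/deploy.sh as above.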


8. Troubleshooting

Pipeline fails during build_image

Possible causes:

  • runner is not privileged for Docker-in-Docker
  • registry auth failed
  • Docker Hub rate limits or base image pull failures

Checks:

docker info

Verify on the runner that privileged mode is enabled for Docker executor jobs.

Deploy job cannot SSH to the VPS

Possible causes:

  • wrong DEPLOY_HOST
  • VPN not connected
  • wrong private key
  • missing public key in authorized_keys
  • firewall blocking port 22

Checks:

ssh -i vault_dash_deploy_key deploy@YOUR_VPN_HOST

Deploy job connects but docker compose fails

Possible causes:

  • Docker not installed on VPS
  • deploy user not in docker group
  • remote filesystem permissions wrong
  • invalid .env content

Checks on VPS:

docker version
docker compose version
id
ls -la /opt/vault-dash

Health check never turns green

Possible causes:

  • app failed to start
  • container crashed
  • missing NICEGUI_STORAGE_SECRET
  • invalid env vars
  • reverse proxy misrouting traffic

Checks on VPS:

cd /opt/vault-dash
docker compose -f docker-compose.deploy.yml --env-file .env ps
docker compose -f docker-compose.deploy.yml --env-file .env logs --tail=200
curl -fsS http://127.0.0.1:8000/health

Redis warnings at startup

This app tolerates missing Redis and falls back to no-cache mode. If caching is expected, verify:

  • REDIS_URL is set
  • Redis is reachable from the container
  • the redis Python package is installed in the image
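
One way to test reachability from the Docker host, assuming a standard Redis image is available (the URL is an example):

```shell
# Ping Redis from a throwaway container. --network host lets a loopback
# REDIS_URL reach a Redis instance bound on the host itself.
REDIS_URL="${REDIS_URL:-redis://127.0.0.1:6379/0}"
docker run --rm --network host redis:7-alpine \
  redis-cli -u "$REDIS_URL" ping 2>/dev/null || echo "redis: unreachable"
```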

WebSocket issues behind a proxy

Possible causes:

  • missing Upgrade / Connection headers
  • idle timeout too low on the proxy
  • incorrect HTTPS termination config

Verify that /ws/updates supports WebSocket upgrades end-to-end.
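
A quick handshake probe through the proxy (the URL is an example); a healthy endpoint answers with HTTP 101 Switching Protocols:

```shell
# Send a WebSocket upgrade request and show the status line only.
STATUS=$(curl -i -s --max-time 10 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(head -c 16 /dev/urandom | base64)" \
  https://vault.example.com/ws/updates | head -n 1)
echo "${STATUS:-no response}"
```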

Rollback failed

The deploy script stores the last successful image at:

/opt/vault-dash/.last_successful_image

Manual rollback:

cd /opt/vault-dash
export PREVIOUS_IMAGE="$(cat .last_successful_image)"
sed -i.bak "/^APP_IMAGE=/d" .env
printf 'APP_IMAGE=%s\n' "$PREVIOUS_IMAGE" >> .env
rm -f .env.bak
docker pull "$PREVIOUS_IMAGE"
docker compose -f docker-compose.deploy.yml --env-file .env up -d --remove-orphans

9. Post-deploy validation

Minimum checks:

curl -fsS http://127.0.0.1:8000/health
python scripts/healthcheck.py https://vault.example.com/health --timeout 120 --expect-status ok

Recommended smoke checks:

  • load the NiceGUI dashboard in a browser
  • call /api/portfolio?symbol=GLD
  • call /api/options?symbol=GLD
  • call /api/strategies?symbol=GLD
  • verify /ws/updates emits connected then portfolio_update
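
The HTTP checks above can be scripted as a loop (BASE is an example URL; point it at your deployment):

```shell
# Run the smoke checks and report pass/fail per endpoint.
BASE="${BASE:-https://vault.example.com}"
RESULTS=$(for path in /health "/api/portfolio?symbol=GLD" \
                      "/api/options?symbol=GLD" "/api/strategies?symbol=GLD"; do
  code=$(curl -fsS -o /dev/null -w '%{http_code}' --max-time 10 "$BASE$path" 2>/dev/null) \
    && echo "OK   $path ($code)" || echo "FAIL $path"
done)
echo "$RESULTS"
```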

10. Future deployment improvements

Suggested follow-ups:

  • pin SSH host keys instead of disabling strict checking
  • add a production reverse proxy service to Compose
  • add Redis to the deploy Compose stack if caching is required in production
  • add metrics and centralized logging
  • split staging and production environments
  • move secrets to a dedicated secret manager