- FastAPI + NiceGUI web application
- QuantLib-based Black-Scholes pricing with Greeks
- Protective put, laddered, and LEAPS strategies
- Real-time WebSocket updates
- TradingView-style charts via Lightweight-Charts
- Docker containerization
- GitLab CI/CD pipeline for VPS deployment
- VPN-only access configuration
# Deployment Guide
This project ships with a GitLab CI/CD pipeline that builds a Docker image, pushes it to the GitLab Container Registry, and deploys it to a VPN-reachable VPS over SSH.
## Overview

Deployment is driven by:

- `.gitlab-ci.yml` for CI/CD stages
- `scripts/deploy.sh` for remote deployment and rollback
- `docker-compose.deploy.yml` for the production app container
- `scripts/healthcheck.py` for post-deploy validation
The current production flow is:
- Run lint, tests, and type checks
- Build and push a Docker image to GitLab Container Registry
- Scan the image with Trivy
- SSH into the VPS
- Upload `docker-compose.deploy.yml`
- Write a remote `.env`
- Pull the new image and restart the service
- Poll `/health`
- Roll back to the last successful image if health checks fail
## 1. Prerequisites

### VPS requirements
Minimum recommended VPS baseline:
- 2 vCPU
- 2 GB RAM
- 20 GB SSD
- Linux host with systemd
- Stable outbound internet access to:
  - GitLab Container Registry
  - Python package mirrors if you build locally on the server later
  - Market data providers if production uses live data
- Docker Engine installed
- Docker Compose plugin installed (`docker compose`)
- `curl` installed
- SSH access enabled
Recommended hardening:
- Dedicated non-root deploy user
- Host firewall enabled (`ufw` or equivalent)
- Automatic security updates
- Disk monitoring and log rotation
- VPN-only access to SSH and application traffic
### Software to install on the VPS

Example for Debian/Ubuntu:

```bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

# Install Docker
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker deploy
```
Log out and back in after adding the deploy user to the docker group.
## 2. GitLab runner setup

The repository uses three CI stages: `test`, `build`, and `deploy`.
### What the pipeline expects

From `.gitlab-ci.yml`:
- Test jobs run in `python:3.12-slim`
- Image builds run with `docker:27` plus `docker:27-dind`
- Deploy runs in `python:3.12-alpine` and installs `bash`, `openssh-client`, `curl`, `docker-cli`, and `docker-cli-compose`
### Runner options
You can use either:
- GitLab shared runners, if they support Docker-in-Docker for your project
- A dedicated self-hosted Docker runner
### Recommended self-hosted runner configuration

Use a Docker executor runner with privileged mode enabled for the `build_image` job.

Example `config.toml` excerpt:

```toml
[[runners]]
  name = "vault-dash-docker-runner"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "python:3.12-slim"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```
### Registering a runner

```bash
sudo gitlab-runner register
```
Recommended answers:
- URL: your GitLab instance URL
- Executor: `docker`
- Default image: `python:3.12-slim`
- Tags: optional, but useful if you want to target dedicated runners later
### Runner permissions and networking
The runner must be able to:
- Authenticate to the GitLab Container Registry
- Reach the target VPS over SSH
- Reach the target VPS VPN address during deploy validation
- Pull base images from Docker Hub or your mirror
## 3. SSH key configuration

Deployment authenticates with `DEPLOY_SSH_PRIVATE_KEY`, which the deploy job writes to `~/.ssh/id_ed25519` before running `scripts/deploy.sh`.
### Generate a deployment keypair

On a secure admin machine:

```bash
ssh-keygen -t ed25519 -C "gitlab-deploy-vault-dash" -f ./vault_dash_deploy_key
```
This creates:

- `vault_dash_deploy_key` — the private key
- `vault_dash_deploy_key.pub` — the public key
### Install the public key on the VPS

```bash
ssh-copy-id -i ./vault_dash_deploy_key.pub deploy@YOUR_VPN_HOST
```

Or manually append it to `/home/deploy/.ssh/authorized_keys`.
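If `ssh-copy-id` is not available, the manual append can be sketched as follows. Run it on the VPS as the `deploy` user; the key value below is a placeholder, not a real key:

```bash
# PUBKEY is a placeholder; paste the real contents of vault_dash_deploy_key.pub.
PUBKEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA-placeholder gitlab-deploy-vault-dash"

mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```

The permission bits matter: `sshd` refuses keys stored in group- or world-writable files.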
### Add the private key to GitLab CI/CD variables

In Settings → CI/CD → Variables add:

- `DEPLOY_SSH_PRIVATE_KEY` — the contents of the private key
Recommended flags:
- Masked: yes
- Protected: yes
- Environment scope: `production` if you use environment-specific variables
### Known-host handling

The current deploy script uses:

```
-o StrictHostKeyChecking=no
```

That makes the first connection easier, but it weakens SSH trust validation. For a stricter setup, update the pipeline to preload `known_hosts` and remove that option.
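One stricter pattern is to capture the host key once with `ssh-keyscan` and supply it through a CI variable. A hypothetical `.gitlab-ci.yml` excerpt (`SSH_KNOWN_HOSTS` is an assumed variable holding the scan output, not one this pipeline currently defines):

```yaml
deploy:
  before_script:
    # SSH_KNOWN_HOSTS holds the output of:
    #   ssh-keyscan -p 22 YOUR_VPN_HOST
    - mkdir -p ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
```

With the host key preloaded, `-o StrictHostKeyChecking=no` can be dropped from the SSH invocation.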
## 4. VPN setup for access
The deployment is designed for private-network access.
### Why VPN is recommended
- The application container binds to loopback by default
- `DEPLOY_HOST` is expected to be a VPN-reachable private IP or internal DNS name
- SSH and HTTP traffic should not be exposed publicly unless a hardened reverse proxy is placed in front
### Recommended topology

```text
Admin / GitLab Runner
        |
        | VPN
        v
VPS private address
        |
        +--> SSH (22)
        +--> reverse proxy or direct internal app access
```
### Tailscale example
- Install Tailscale on the VPS
- Join the host to your tailnet
- Use the Tailscale IP or MagicDNS name as `DEPLOY_HOST`
- Restrict firewall rules to the Tailscale interface
Example UFW rules:

```bash
sudo ufw allow in on tailscale0 to any port 22 proto tcp
sudo ufw allow in on tailscale0 to any port 8000 proto tcp
sudo ufw deny 22/tcp
sudo ufw deny 8000/tcp
sudo ufw enable
```
### WireGuard alternative
If you use WireGuard instead of Tailscale:
- assign the VPS a stable private VPN IP
- allow SSH and proxy traffic only on the WireGuard interface
- set `DEPLOY_HOST` to that private IP
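A minimal sketch of what the VPS side of such a tunnel could look like (all keys and addresses below are placeholders, not values from this project):

```ini
# /etc/wireguard/wg0.conf on the VPS (placeholder values)
[Interface]
Address = 10.8.0.2/24        # stable private VPN IP used as DEPLOY_HOST
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# Admin workstation / CI runner
PublicKey = <peer-public-key>
AllowedIPs = 10.8.0.1/32
```

Bring the interface up with `wg-quick up wg0` and point the firewall rules at `wg0` instead of `tailscale0`.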
### Access patterns
Preferred options:
- VPN access only, app bound to `127.0.0.1`, reverse proxy on the same host
- VPN access only, app published to a private/VPN interface
- Public HTTPS only through reverse proxy, app still bound internally
Least preferred:
- public direct access to port `8000`
## 5. Environment variables
The deploy script supports two patterns:
- Provide a full `APP_ENV_FILE` variable containing the remote `.env`
- Provide individual CI variables and let `scripts/deploy.sh` assemble the `.env`
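The two patterns can be sketched roughly as follows. This mirrors the description above; the real logic in `scripts/deploy.sh` may differ in detail, and the image tag is a placeholder:

```bash
# Pattern 1: APP_ENV_FILE contains the whole .env verbatim.
# Pattern 2: assemble .env from individual CI variables, with defaults.
: "${APP_IMAGE:=registry.gitlab.com/your-group/vault-dash:main-123456}"  # placeholder

if [ -n "${APP_ENV_FILE:-}" ]; then
  printf '%s\n' "$APP_ENV_FILE" > .env
else
  {
    printf 'APP_IMAGE=%s\n'        "$APP_IMAGE"
    printf 'APP_ENV=%s\n'          "${APP_ENV:-production}"
    printf 'APP_PORT=%s\n'         "${APP_PORT:-8000}"
    printf 'APP_BIND_ADDRESS=%s\n' "${APP_BIND_ADDRESS:-127.0.0.1}"
  } > .env
fi
```

Either way, the result is uploaded to the VPS as `REMOTE_ENV_FILE` before `docker compose` runs.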
### Required GitLab variables

#### SSH and deployment

- `DEPLOY_SSH_PRIVATE_KEY`
- `DEPLOY_USER`
- `DEPLOY_HOST`
- `DEPLOY_PORT` (optional, default `22`)
- `DEPLOY_PATH` (optional, default `/opt/vault-dash`)
#### Container registry

These are generally provided by GitLab automatically in CI:

- `CI_REGISTRY`
- `CI_REGISTRY_IMAGE`
- `CI_REGISTRY_USER`
- `CI_REGISTRY_PASSWORD`
- `CI_COMMIT_SHA`
#### App runtime

- `APP_ENV`
- `APP_NAME`
- `APP_PORT`
- `APP_BIND_ADDRESS`
- `REDIS_URL`
- `DEFAULT_SYMBOL`
- `CACHE_TTL`
- `WEBSOCKET_INTERVAL_SECONDS`
- `NICEGUI_MOUNT_PATH`
- `NICEGUI_STORAGE_SECRET`
- `CORS_ORIGINS`
#### Optional deployment controls

- `APP_ENV_FILE`
- `COMPOSE_FILE`
- `COMPOSE_SERVICE`
- `DEPLOY_TIMEOUT`
- `HEALTHCHECK_URL`
- `REMOTE_ENV_FILE`
- `EXTERNAL_HEALTHCHECK_URL`
- `IMAGE_TAG`
- `APP_IMAGE`
### Example `.env`

```dotenv
APP_IMAGE=registry.gitlab.com/your-group/vault-dash:main-123456
APP_ENV=production
APP_NAME=Vault Dashboard
APP_PORT=8000
APP_BIND_ADDRESS=127.0.0.1
REDIS_URL=
DEFAULT_SYMBOL=GLD
CACHE_TTL=300
WEBSOCKET_INTERVAL_SECONDS=5
NICEGUI_MOUNT_PATH=/
NICEGUI_STORAGE_SECRET=replace-with-long-random-secret
CORS_ORIGINS=https://vault.example.com
```
### Variable behavior in the app

`app/main.py` loads runtime settings from environment variables and uses them for:
- CORS configuration
- Redis connection
- cache TTL
- default symbol
- WebSocket publish interval
- NiceGUI mount path
- NiceGUI storage secret
### Secret management guidance
Treat these as secrets or sensitive config:
- `DEPLOY_SSH_PRIVATE_KEY`
- `NICEGUI_STORAGE_SECRET`
- `REDIS_URL` if it contains credentials
- any future broker API credentials
- any future OAuth client secrets
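`NICEGUI_STORAGE_SECRET` should be a long random value. One way to generate one (assumes `openssl` is available; any CSPRNG source works):

```bash
# 32 random bytes, hex-encoded: a 64-character secret
secret="$(openssl rand -hex 32)"
printf 'NICEGUI_STORAGE_SECRET=%s\n' "$secret"
```

Store the generated value in a masked GitLab CI/CD variable rather than committing it to the repository.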
## 6. SSL/TLS configuration
SSL/TLS is strongly recommended, especially because future OAuth integrations require stable HTTPS callback URLs and secure cookie handling.
### Current app behavior
The app itself listens on plain HTTP inside the container on port 8000.
Recommended production pattern:

```text
Client -> HTTPS reverse proxy -> vault-dash container (HTTP on localhost/private network)
```
### Recommended reverse proxy choices
- Caddy
- Nginx
- Traefik
### Minimum TLS recommendations
- TLS termination at the reverse proxy
- Automatic certificate management with Let's Encrypt or internal PKI
- Redirect HTTP to HTTPS
- HSTS once the domain is stable
- Forward standard proxy headers
### Nginx example

```nginx
server {
    listen 80;
    server_name vault.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name vault.example.com;

    ssl_certificate     /etc/letsencrypt/live/vault.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vault.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
### OAuth readiness checklist
Before adding OAuth:
- serve the app only over HTTPS
- use a stable public or internal FQDN
- keep `CORS_ORIGINS` limited to trusted origins
- ensure WebSocket upgrade headers pass through the reverse proxy
- store OAuth client secrets in GitLab CI/CD variables or a secret manager
- verify callback/redirect URLs exactly match the provider configuration
## 7. Deployment procedure

### One-time server preparation
- Provision the VPS
- Install Docker and Compose
- Create a deploy user
- Install the SSH public key for that user
- Join the VPS to your VPN
- Configure firewall rules
- Create the deployment directory:

```bash
sudo mkdir -p /opt/vault-dash
sudo chown deploy:deploy /opt/vault-dash
```
### GitLab CI/CD configuration
- Add all required variables in GitLab
- Protect production variables
- Ensure the deploy runner can reach the VPN host
- Push to the default branch
### What happens during deploy

`scripts/deploy.sh` will:
- connect to `DEPLOY_USER@DEPLOY_HOST`
- create `DEPLOY_PATH` if it does not exist
- write `.env` to `REMOTE_ENV_FILE`
- upload `docker-compose.deploy.yml`
- log into the GitLab registry on the VPS
- pull `APP_IMAGE`
- start the service with `docker compose`
- check `http://127.0.0.1:${APP_PORT}/health` by default
- restore the previous image if the health check never passes
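The health-polling step can be sketched as a small retry loop. The behavior is inferred from the description above; `wait_for_health` is an illustrative helper, not a function that exists in `scripts/deploy.sh`:

```bash
# Poll a URL until it returns success or the timeout expires.
wait_for_health() {
  url="$1"
  timeout="${2:-120}"   # seconds, mirroring DEPLOY_TIMEOUT's role
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if curl -fsS --max-time 5 "$url" > /dev/null 2>&1; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Example: give the app 10 seconds to come up after `docker compose up -d`
if wait_for_health "http://127.0.0.1:8000/health" 10; then
  echo "deploy healthy"
else
  echo "deploy unhealthy, rolling back" >&2
fi
```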
### Manual deploy from a workstation

You can also export the same variables locally and run:

```bash
bash scripts/deploy.sh
```
This is useful for smoke tests before enabling automated production deploys.
## 8. Troubleshooting

### Pipeline fails during `build_image`
Possible causes:
- runner is not privileged for Docker-in-Docker
- registry auth failed
- Docker Hub rate limits or base image pull failures
Checks:

```bash
docker info
```

Verify on the runner that privileged mode is enabled for Docker executor jobs.
### Deploy job cannot SSH to the VPS
Possible causes:
- wrong `DEPLOY_HOST`
- VPN not connected
- wrong private key
- missing public key in `authorized_keys`
- firewall blocking port 22
Checks:

```bash
ssh -i vault_dash_deploy_key deploy@YOUR_VPN_HOST
```
### Deploy job connects but `docker compose` fails
Possible causes:
- Docker not installed on the VPS
- deploy user not in the `docker` group
- remote filesystem permissions wrong
- invalid `.env` content
Checks on the VPS:

```bash
docker version
docker compose version
id
ls -la /opt/vault-dash
```
### Health check never turns green
Possible causes:
- app failed to start
- container crashed
- missing `NICEGUI_STORAGE_SECRET`
- invalid env vars
- reverse proxy misrouting traffic
Checks on the VPS:

```bash
cd /opt/vault-dash
docker compose -f docker-compose.deploy.yml --env-file .env ps
docker compose -f docker-compose.deploy.yml --env-file .env logs --tail=200
curl -fsS http://127.0.0.1:8000/health
```
### Redis warnings at startup
This app tolerates missing Redis and falls back to no-cache mode. If caching is expected, verify:
- `REDIS_URL` is set
- Redis is reachable from the container
- the `redis` Python package is installed in the image
### WebSocket issues behind a proxy
Possible causes:
- missing `Upgrade`/`Connection` headers
- idle timeout too low on the proxy
- incorrect HTTPS termination config
Verify that `/ws/updates` supports WebSocket upgrades end-to-end.
### Rollback failed

The deploy script stores the last successful image at `/opt/vault-dash/.last_successful_image`.
Manual rollback:

```bash
cd /opt/vault-dash
export PREVIOUS_IMAGE="$(cat .last_successful_image)"
# Replace the APP_IMAGE line in .env with the previous image
sed -i.bak "/^APP_IMAGE=/d" .env
printf 'APP_IMAGE=%s\n' "$PREVIOUS_IMAGE" >> .env
rm -f .env.bak
docker pull "$PREVIOUS_IMAGE"
docker compose -f docker-compose.deploy.yml --env-file .env up -d --remove-orphans
```
## 9. Post-deploy validation

Minimum checks:

```bash
curl -fsS http://127.0.0.1:8000/health
python scripts/healthcheck.py https://vault.example.com/health --timeout 120 --expect-status ok
```
Recommended smoke checks:
- load the NiceGUI dashboard in a browser
- call `/api/portfolio?symbol=GLD`
- call `/api/options?symbol=GLD`
- call `/api/strategies?symbol=GLD`
- verify `/ws/updates` emits `connected` then `portfolio_update`
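The HTTP endpoint checks above can be scripted. A hedged sketch that records one result line per endpoint (`BASE` defaults to the local bind address, an assumption):

```bash
# Hit each smoke-check endpoint and record OK/FAIL per path.
BASE="${BASE:-http://127.0.0.1:8000}"
: > smoke_results.txt
for path in "/health" \
            "/api/portfolio?symbol=GLD" \
            "/api/options?symbol=GLD" \
            "/api/strategies?symbol=GLD"; do
  if curl -fsS --max-time 10 "${BASE}${path}" > /dev/null 2>&1; then
    echo "OK   ${path}" >> smoke_results.txt
  else
    echo "FAIL ${path}" >> smoke_results.txt
  fi
done
cat smoke_results.txt
```

The WebSocket check (`/ws/updates`) needs a WebSocket-capable client and is not covered by this loop.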
## 10. Future deployment improvements
Suggested follow-ups:
- pin SSH host keys instead of disabling strict checking
- add a production reverse proxy service to Compose
- add Redis to the deploy Compose stack if caching is required in production
- add metrics and centralized logging
- split staging and production environments
- move secrets to a dedicated secret manager