Switch from GitLab CI to Forgejo Actions
- Add .forgejo/workflows/ci.yaml for lint/test/type-check
- Add .forgejo/workflows/deploy.yaml for build/deploy
- Update DEPLOYMENT.md with Forgejo-specific instructions
- Remove .gitlab-ci.yml
# Deployment Guide

This project uses Forgejo Actions for CI/CD, building a Docker image and deploying it to a VPN-reachable VPS over SSH.

## Overview

Deployment is driven by:

- `.forgejo/workflows/ci.yaml` for lint, test, and type-check on every push
- `.forgejo/workflows/deploy.yaml` for build, scan, and deploy on the main branch
- `scripts/deploy.sh` for remote deployment and rollback
- `docker-compose.deploy.yml` for the production app container
- `scripts/healthcheck.py` for post-deploy validation

The production flow is:

1. Run lint, tests, and type checks
2. Build and push a Docker image to the container registry
3. Scan the image with Trivy
4. SSH into the VPS
5. Upload `docker-compose.deploy.yml`
6. Write a remote `.env`
7. Pull the new image and restart the service
8. Poll `/health`
9. Roll back to the last successful image if health checks fail
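Forgejo Actions uses a GitHub-Actions-compatible workflow syntax. As a rough sketch of what `ci.yaml` might look like (illustrative only, not the committed file; the runner label and tool names such as `ruff` and `mypy` are assumptions):

```yaml
name: CI
on: [push]

jobs:
  ci:
    runs-on: docker            # label of a registered runner (assumed)
    container:
      image: python:3.12-slim
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt -r requirements-dev.txt
      - run: ruff check .       # lint (assumed tool)
      - run: pytest             # tests
      - run: mypy app           # type-check (assumed tool)
```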
---
## 1. Prerequisites

### VPS Requirements

- 2 vCPU, 2 GB RAM, 20 GB SSD
- Linux host with systemd
- Docker Engine and the Compose plugin (`docker compose`)
- `curl` installed
- SSH access via VPN
- Python 3.11+ (for the healthcheck script)
- Stable outbound access to the container registry and, if production uses live data, to market data providers

### Forgejo Instance Setup

1. Enable Actions in the Forgejo admin settings
2. Register a runner (or use Forgejo's built-in runner)

Recommended hardening:

- Dedicated non-root deploy user
- Host firewall enabled (`ufw` or equivalent)
- Automatic security updates
- Disk monitoring and log rotation
- VPN-only access to SSH and application traffic

### Runner Setup

Forgejo supports both built-in runners and self-hosted Docker runners. For Docker-in-Docker builds, ensure the runner has:

- Docker installed and accessible
- `docker` and `docker compose` commands available

Example runner registration:

```bash
# On your Forgejo server
forgejo actions generate-runner-token > token.txt
forgejo-runner register --instance http://localhost:3000 --token $(cat token.txt)
forgejo-runner daemon
```
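For a self-hosted runner, Docker-in-Docker builds need a privileged job container. A sketch of the relevant `config.yml` excerpt (field names follow the forgejo-runner defaults; verify against the file produced by `forgejo-runner generate-config`):

```yaml
runner:
  labels:
    - "docker:docker://python:3.12-slim"
container:
  privileged: true   # required for Docker-in-Docker builds
```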
---

## 2. Required Secrets
Configure in **Settings → Secrets and variables → Actions**:

| Secret | Description |
|--------|-------------|
| `DEPLOY_SSH_PRIVATE_KEY` | SSH private key for VPS access |
| `DEPLOY_HOST` | VPS IP/hostname (VPN-reachable) |
| `DEPLOY_USER` | Deploy user (default: `deploy`) |
| `DEPLOY_PORT` | SSH port (default: `22`) |
| `DEPLOY_PATH` | Deploy path (default: `/opt/vault-dash`) |
| `NICEGUI_STORAGE_SECRET` | Session secret |
| `REGISTRY_PASSWORD` | Container registry token (if needed) |

### Optional Variables

| Variable | Description |
|----------|-------------|
| `REGISTRY` | Container registry URL |
| `EXTERNAL_HEALTHCHECK_URL` | Public health check URL |

### Example remote `.env`

The deploy script assembles a remote `.env` for the app container:

```env
APP_IMAGE=registry.example.com/vault-dash:main-123456
APP_ENV=production
APP_NAME=Vault Dashboard
APP_PORT=8000
APP_BIND_ADDRESS=127.0.0.1
REDIS_URL=
DEFAULT_SYMBOL=GLD
CACHE_TTL=300
WEBSOCKET_INTERVAL_SECONDS=5
NICEGUI_MOUNT_PATH=/
NICEGUI_STORAGE_SECRET=replace-with-long-random-secret
CORS_ORIGINS=https://vault.example.com
```
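These secrets reach the deploy job through the standard `secrets` context. A minimal sketch of how `deploy.yaml` might consume them (illustrative; the actual step layout may differ):

```yaml
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - name: Deploy over SSH
        env:
          DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
          DEPLOY_USER: ${{ secrets.DEPLOY_USER }}
          DEPLOY_SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_PRIVATE_KEY }}
          NICEGUI_STORAGE_SECRET: ${{ secrets.NICEGUI_STORAGE_SECRET }}
        run: bash scripts/deploy.sh
```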

---
## 3. One-Time VPS Setup

```bash
# Create deploy user
sudo useradd -m -s /bin/bash deploy
sudo usermod -aG docker deploy

# Install Docker Engine and the Compose plugin
# (follow the official Docker install instructions for your distribution)

# Create the deployment directory
sudo mkdir -p /opt/vault-dash
sudo chown deploy:deploy /opt/vault-dash

# Prepare SSH access for the deploy user
sudo -u deploy mkdir -p /home/deploy/.ssh
# then append the public key to /home/deploy/.ssh/authorized_keys
```

Log out and back in after adding the deploy user to the `docker` group.

Generate a deployment keypair on a secure admin machine and install its public key:

```bash
ssh-keygen -t ed25519 -C "forgejo-deploy-vault-dash" -f ./vault_dash_deploy_key
ssh-copy-id -i ./vault_dash_deploy_key.pub deploy@YOUR_VPN_HOST
```

Store the private key contents in the `DEPLOY_SSH_PRIVATE_KEY` secret. The deploy script currently uses `-o StrictHostKeyChecking=no`, which eases the first connection but weakens SSH trust validation; for a stricter setup, preload `known_hosts` in the workflow and remove that option.
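On the secrets side, `NICEGUI_STORAGE_SECRET` (Section 2) should be a long random value. One simple way to generate one, using only the Python standard library:

```python
import secrets

# Print a URL-safe random string (~43 chars) suitable for session signing
print(secrets.token_urlsafe(32))
```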
---
## 4. Local Development

```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt

# Run tests
pytest

# Start development server
uvicorn app.main:app --reload --port 8000
```

### Docker Development

```bash
# Build and run
docker compose up --build

# Access at http://localhost:8000
```
---
## 5. Manual Deployment

```bash
# Set environment variables
export DEPLOY_HOST="10.100.0.10"
export DEPLOY_USER="deploy"
export DEPLOY_SSH_PRIVATE_KEY="$(cat ~/.ssh/deploy_key)"
export APP_IMAGE="registry.example.com/vault-dash:latest"

# Run deploy script
bash scripts/deploy.sh
```

The script connects to `DEPLOY_USER@DEPLOY_HOST`, creates `DEPLOY_PATH` if needed, writes the remote `.env`, uploads `docker-compose.deploy.yml`, pulls `APP_IMAGE`, restarts the service with `docker compose`, polls `http://127.0.0.1:${APP_PORT}/health`, and restores the previous image if the health check never passes.

This is useful for smoke tests before enabling automated production deploys.
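The health gate in `scripts/deploy.sh` amounts to a poll-until-deadline loop. A sketch of the same logic in Python (function name and parameters are illustrative, not from the repo):

```python
import time

def wait_healthy(check, timeout=120.0, interval=5.0):
    """Poll `check` (a callable returning True when healthy) until it
    succeeds or `timeout` seconds elapse; return the final status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

In the real script the check is an HTTP request against `http://127.0.0.1:${APP_PORT}/health`, and a `False` result triggers the rollback path.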
---
## 6. VPN-Only Access

The application binds to `127.0.0.1:8000` by default. Access it via:

1. **VPN directly**: set `APP_BIND_ADDRESS` to the VPS's VPN address, connect to the VPN, and open `http://VPS_IP:8000`
2. **Reverse proxy**: run Caddy or Nginx on the VPS for HTTPS

With Tailscale, use the Tailscale IP or MagicDNS name as `DEPLOY_HOST` and restrict the firewall to the VPN interface:

```bash
sudo ufw allow in on tailscale0 to any port 22 proto tcp
sudo ufw allow in on tailscale0 to any port 8000 proto tcp
sudo ufw deny 22/tcp
sudo ufw deny 8000/tcp
sudo ufw enable
```

### Caddy Example

```
# Caddyfile
vault.uncloud.vpn {
    reverse_proxy 127.0.0.1:8000
}
```
---
## 7. Future: OAuth Integration

When ready to expose the app publicly:

1. Set up an OAuth provider (Authentik, Keycloak, etc.)
2. Configure `CORS_ORIGINS` for the public URL
3. Add OAuth middleware to FastAPI
4. Enable HTTPS via Let's Encrypt
5. Ensure WebSocket upgrade headers pass through the reverse proxy
6. Verify that callback/redirect URLs exactly match the provider configuration
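For the last step, exact-match validation is the safe default: compare scheme and host, never substrings. A small illustration (the allowed host set is a placeholder):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"vault.example.com"}  # placeholder: your public FQDN(s)

def is_allowed_redirect(url: str) -> bool:
    """Accept only HTTPS callback URLs whose host matches exactly."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS
```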
---
## 8. Troubleshooting

### Runner can't build Docker images

Possible causes: the runner is not privileged for Docker-in-Docker, registry auth failed, or base image pulls hit Docker Hub rate limits.

Ensure the runner has Docker access:

```bash
docker info
docker run --rm hello-world
```

### SSH connection fails

Possible causes: wrong `DEPLOY_HOST`, VPN not connected, wrong private key, missing public key in `authorized_keys`, or a firewall blocking port 22.

```bash
ssh -i ~/.ssh/deploy_key deploy@YOUR_VPS
```

Check that the firewall allows VPN traffic on port 22.

### Deploy connects but `docker compose` fails

Possible causes: Docker not installed on the VPS, deploy user not in the `docker` group, wrong filesystem permissions, or invalid `.env` content.

Checks on the VPS:

```bash
docker version
docker compose version
id
ls -la /opt/vault-dash
```

### Health check fails

Possible causes: the app failed to start, the container crashed, `NICEGUI_STORAGE_SECRET` is missing, env vars are invalid, or a reverse proxy is misrouting traffic.

Checks on the VPS:

```bash
cd /opt/vault-dash
docker compose -f docker-compose.deploy.yml --env-file .env ps
docker compose -f docker-compose.deploy.yml --env-file .env logs --tail=200
curl -fsS http://127.0.0.1:8000/health
```

### Redis warnings at startup

The app tolerates a missing Redis and falls back to no-cache mode. If caching is expected, verify that `REDIS_URL` is set, Redis is reachable from the container, and the `redis` Python package is installed in the image.

### WebSocket issues behind a proxy

Possible causes: missing `Upgrade`/`Connection` headers, a proxy idle timeout that is too low, or incorrect HTTPS termination. Verify that `/ws/updates` supports WebSocket upgrades end-to-end.

### Rollback

The deploy script stores the last successful image at `/opt/vault-dash/.last_successful_image`. Manual rollback:

```bash
cd /opt/vault-dash
PREVIOUS_IMAGE="$(cat .last_successful_image)"
sed -i "s|^APP_IMAGE=.*|APP_IMAGE=$PREVIOUS_IMAGE|" .env
docker pull "$PREVIOUS_IMAGE"
docker compose -f docker-compose.deploy.yml --env-file .env up -d --remove-orphans
```
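The `sed` rewrite of `APP_IMAGE` is fragile if the value ever contains `|` characters (the `s|||` delimiter). The same edit in Python, as a sketch (`set_app_image` is an illustrative name, not a repo helper):

```python
def set_app_image(env_text: str, image: str) -> str:
    """Return the .env content with any APP_IMAGE= line replaced."""
    kept = [ln for ln in env_text.splitlines() if not ln.startswith("APP_IMAGE=")]
    return "\n".join([f"APP_IMAGE={image}"] + kept) + "\n"
```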

---

## 9. Post-Deploy Validation

Minimum checks:

```bash
curl -fsS http://127.0.0.1:8000/health
python scripts/healthcheck.py https://vault.example.com/health --timeout 120 --expect-status ok
```

Recommended smoke checks:

- load the NiceGUI dashboard in a browser
- call `/api/portfolio?symbol=GLD`, `/api/options?symbol=GLD`, and `/api/strategies?symbol=GLD`
- verify that `/ws/updates` emits `connected` followed by `portfolio_update`

---

## 10. Future Improvements

- pin SSH host keys instead of disabling strict checking
- add a production reverse proxy service to Compose
- add Redis to the deploy Compose stack if caching is required in production
- add metrics and centralized logging
- split staging and production environments
- move secrets to a dedicated secret manager