190 Commits

Author SHA1 Message Date
Bu5hm4nn
f6a85b9e42 test: remove Turnstile widget dependency from test mode
All checks were successful
CI / lint (push) Successful in 8s
CI / type-check (push) Successful in 15s
CI / test (push) Successful in 1m12s
2026-04-08 17:01:31 +02:00
Bu5hm4nn
e45c935eb6 test: bypass Turnstile network calls in test env
Some checks failed
CI / lint (push) Successful in 9s
CI / type-check (push) Successful in 16s
CI / test (push) Failing after 2m17s
2026-04-08 16:53:40 +02:00
Bu5hm4nn
eb262189cf test: force Turnstile test mode in CI
Some checks failed
CI / lint (push) Successful in 9s
CI / type-check (push) Successful in 16s
CI / test (push) Failing after 2m7s
2026-04-08 16:47:37 +02:00
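The three Turnstile commits above describe gating the widget and its network verification behind a test-mode check. A minimal sketch of that pattern, with all names illustrative (the project's actual helpers and env-variable handling may differ — the CI workflow is known to set `APP_ENV=test`):

```python
import os

def is_turnstile_test_mode() -> bool:
    """Treat APP_ENV=test (set by the CI workflow) or CI=true as test mode."""
    return os.environ.get("APP_ENV") == "test" or os.environ.get("CI") == "true"

def verify_turnstile(token: str, verify_fn) -> bool:
    """verify_fn performs the real siteverify network call; it is skipped
    entirely in test mode so tests have no Turnstile dependency."""
    if is_turnstile_test_mode():
        return True  # no widget render, no network call in tests
    return verify_fn(token)
```

The key property is that test runs never touch the network path at all, rather than stubbing its response.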
Bu5hm4nn
52f883fa8f ci: retrigger after Gitea runner image sync
Some checks failed
CI / lint (push) Successful in 9s
CI / type-check (push) Successful in 15s
CI / test (push) Failing after 2m17s
2026-04-08 16:39:15 +02:00
Bu5hm4nn
fa03f375a4 ci: use noble playwright image on Gitea
Some checks failed
CI / lint (push) Successful in 10s
CI / type-check (push) Successful in 16s
CI / test (push) Failing after 2m10s
2026-04-08 16:25:10 +02:00
Bu5hm4nn
78ff775aa5 ci: migrate workflows from Forgejo to Gitea Actions
Some checks failed
CI / lint (push) Successful in 45s
CI / type-check (push) Successful in 50s
CI / test (push) Failing after 5s
2026-04-08 12:41:29 +02:00
Bu5hm4nn
2a49a10d9a ci: install dev dependencies and pin Playwright 1.58.0 2026-04-07 14:40:30 +02:00
Bu5hm4nn
3c2b507cf8 ci: run Playwright tests in the proper image 2026-04-07 14:09:02 +02:00
Bu5hm4nn
4ca4752bfb ci: Don't re-run CI for deploy 2026-04-07 12:52:45 +02:00
Bu5hm4nn
b2bc4db41a Improve backtest lazy loading and test automation 2026-04-07 12:18:50 +02:00
Bu5hm4nn
ccc10923d9 chore: ignore local cache artifacts and uv lockfile 2026-04-07 10:13:58 +02:00
Bu5hm4nn
faa06a106e feat: add LTV unhedged column to daily results table
- Add LTV unhedged column before LTV hedged column
- Update both render_result and render_job_result tables
- Show both hedged and unhedged LTV for comparison
2026-04-06 22:28:03 +02:00
Bu5hm4nn
b546b59b33 feat: split portfolio chart into stacked bar chart above candle chart
- Create separate portfolio stacked bar chart (underlying + option value)
- Place portfolio chart above candle chart with same X axis alignment
- Make candle chart double height (h-[48rem] vs h-48 for portfolio)
- Portfolio chart shows underlying (gray) + option value (blue) as stacked bars
- Charts now render above the daily results table
2026-04-06 18:50:41 +02:00
Bu5hm4nn
f00b1b7755 feat: add candlestick chart with portfolio value line (BT-004)
- Add spot_open field to BacktestDailyPoint for complete OHLC data
- Replace line chart with candlestick chart showing price OHLC
- Add portfolio value line on secondary Y-axis
- Add _chart_options_from_dict for rendering job results
- Update both render_result and render_job_result to use new chart
2026-04-06 11:22:10 +02:00
Bu5hm4nn
aff4df325d feat: defer entry spot derivation to backtest run (BT-005)
- Remove async refresh_workspace_seeded_units from date change handlers
- Date changes now only call on_form_change() (updates cost estimates, marks results stale)
- Entry spot is derived only when user clicks Run button
- Form remains responsive during configuration
- No more API errors when changing dates while configuring other fields
2026-04-06 11:14:51 +02:00
Bu5hm4nn
4af7a09c6e feat: add option contracts to overview, fix default dates, add roadmap items
- Move option contracts from daily results table to overview cards (constant throughout backtest)
- Fix default dates to March 2026 (2026-03-02 to 2026-03-25)
- Add BT-004 backlog item: candlestick chart with portfolio value line on secondary axis
- Add BT-005 backlog item: defer entry spot derivation to backtest run (not on every date change)
2026-04-05 09:24:25 +02:00
Bu5hm4nn
6b8336ab7e feat: add Portfolio Value, Option Value, and Contracts columns to daily results
- Add option_contracts field to BacktestDailyPoint (number of contracts held)
- Update engine to calculate total option contracts from positions
- Update job serialization to include underlying_value, option_market_value, net_portfolio_value, option_contracts
- Update both render_result and render_job_result tables to show:
  - Low, High, Close (from previous commit)
  - Portfolio value (net_portfolio_value)
  - Option value (option_market_value)
  - Contracts (option_contracts)
  - LTV hedged
  - Margin call status
2026-04-05 08:54:38 +02:00
Bu5hm4nn
7a7b191a6d fix: correct type annotations in databento_source.py
- Fix return type of _load_from_cache and _df_to_daily_points to return list[DailyClosePoint]
- Import DailyClosePoint from historical_provider
- Use TYPE_CHECKING pattern for optional databento/pandas imports
2026-04-05 08:43:07 +02:00
Bu5hm4nn
5ffe5dd04e fix: restore clear_cache and get_cache_stats methods in DatabentoHistoricalPriceSource
- Add back clear_cache method that was accidentally removed
- Add file_count and total_size_bytes to get_cache_stats return value
- Update tests for fixed March 2026 default dates
2026-04-04 23:24:53 +02:00
Bu5hm4nn
063ccb6781 feat: default to March 2026 dates and show Low/High/Close in results
- Change default backtest date range to 2026-03-02 through 2026-03-25
- Add spot_low and spot_high to BacktestDailyPoint for intraday range
- Update engine to populate low/high from DailyClosePoint
- Update daily results table to show Low, High, Close columns instead of just Spot
- Update job serialization to include spot_low and spot_high
2026-04-04 23:18:01 +02:00
Bu5hm4nn
a8e710f790 feat: use day's low price for margin call evaluation
- Extend DailyClosePoint to include low, high, open (optional)
- Update Databento source to extract OHLC data from ohlcv-1d schema
- Update YFinance source to extract Low, High, Open from history
- Modify backtest engine to use worst-case (low) price for margin call detection

This ensures margin calls are evaluated at the day's worst price,
not just the closing price, providing more realistic risk assessment.
2026-04-04 23:06:15 +02:00
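The worst-case evaluation this commit describes can be sketched as follows — a margin call is triggered if the LTV computed at the day's low breaches the threshold, even when the close would not. Names here are illustrative, not the engine's actual API:

```python
from dataclasses import dataclass

@dataclass
class DayBar:
    low: float
    close: float

def ltv(loan: float, units: float, price: float) -> float:
    """Loan-to-value of the position at a given collateral price."""
    return loan / (units * price)

def margin_called(loan: float, units: float, bar: DayBar, max_ltv: float) -> bool:
    # Worst case for the borrower is the lowest collateral value of the day,
    # so the check uses bar.low rather than bar.close.
    return ltv(loan, units, bar.low) > max_ltv
```

For example, with a $50,000 loan against 1,000 units and a 0.8 threshold, a day with low 60 and close 80 breaches intraday (LTV ≈ 0.83 at the low) even though the closing LTV is only 0.625.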
Bu5hm4nn
1e567775f9 fix: also catch RuntimeError in derive_entry_spot exception handler
Databento can raise RuntimeError for API key issues, but derive_entry_spot
only caught ValueError and KeyError. This ensures Databento errors are
properly caught and displayed to the user.
2026-04-04 22:53:06 +02:00
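The fix amounts to widening the `except` clause so a RuntimeError from the data source is converted into the same user-facing message path as the existing error types. A hedged sketch (the wrapper name and return shape are assumptions):

```python
def derive_entry_spot_safe(derive_fn, symbol: str):
    """Return (spot, None) on success, or (None, message) on any of the
    error types the data source is known to raise — including RuntimeError,
    which Databento uses for API-key issues."""
    try:
        return derive_fn(symbol), None
    except (ValueError, KeyError, RuntimeError) as exc:
        return None, f"Could not derive entry spot for {symbol}: {exc}"
```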
Bu5hm4nn
4e9a610452 fix: update render_job_result to use correct result field names
The job serialization was fixed to use new field names, but the UI render
function was still using old field names (total_pnl, hedging_cost, etc.)
which don't exist anymore. Now uses:
- start_value, end_value_hedged_net, total_hedge_cost from summary_metrics
- template_results[0].daily_path for daily results table
- Added margin call metrics display
2026-04-04 22:40:39 +02:00
Bu5hm4nn
2de5966a4e refactor: move Playwright tests to tests/e2e/ with proper conftest
- Move conftest_playwright.py to tests/e2e/conftest.py for proper pytest discovery
- Move test_playwright_server.py to tests/e2e/
- Server fixture starts FastAPI with uvicorn for isolated E2E testing
2026-04-04 18:30:40 +02:00
Bu5hm4nn
d835544e58 fix: correct backtest job result serialization and add Playwright test fixtures
- Fix BacktestPageRunResult serialization in jobs.py to correctly access
  nested fields from scenario and run_result objects
- Add test_backtest_job.py with comprehensive tests for job execution
- Add conftest_playwright.py with ServerManager that starts FastAPI server
  for Playwright tests using uvicorn
- Add test_playwright_server.py with E2E tests using the server fixture

The job serialization bug was causing backtest results to fail silently
because it was trying to access non-existent fields on BacktestPageRunResult.
2026-04-04 18:27:34 +02:00
Bu5hm4nn
6c35efde0f fix: use selected data source for backtest historical prices
The backtest engine was always using fixture data (limited to 2024-01-02
through 2024-01-08) regardless of the data_source selection. The fix
fetches historical prices using the specified data source (Databento,
Yahoo Finance, or synthetic) and passes them directly to the engine.
2026-04-03 20:34:21 +02:00
Bu5hm4nn
99c7911b78 test: add e2e test for actual backtest scenario execution
- Add test_backtest_scenario_runs_and_displays_results that:
  - Creates workspace and navigates to backtests page
  - Selects Synthetic data source (uses deterministic fixture data)
  - Fills fixture-supported dates (2024-01-02 to 2024-01-08)
  - Fills scenario parameters (units, loan, LTV)
  - Runs backtest and verifies results display
  - Checks for Start value, End value, Daily Results table
  - Verifies no runtime errors

- Fix existing backtests page tests to create workspace first:
  - test_backtest_page_loads_with_valid_databento_dates
  - test_backtest_page_handles_invalid_dates_gracefully
  - Backtests page requires workspace_id in URL

- Add TODO comment about date_range_hint not updating for Databento
  on initial render (separate bug to fix)
2026-04-03 14:10:37 +02:00
Bu5hm4nn
dbd6e103c0 fix: pin black to 26.3.1 across all environments
- Pin black version in requirements-dev.txt (was >=24.0.0)
- Update pre-commit to use black 26.3.1 with Python 3.12
- Add language_version: python3.12 to pre-commit black hook
- Reformat files with new black version for consistency
2026-04-01 13:58:49 +02:00
Bu5hm4nn
6bcf78e5df style: format UI files and remove lint excludes
- Remove app/components/ and app/pages/ from ruff/black excludes
- Pre-commit reformatted multi-line strings for consistency
- All files now follow the same code style
2026-04-01 13:55:55 +02:00
Bu5hm4nn
9af654d9f2 feat: add pre-commit hooks for linting
- Add .pre-commit-config.yaml with ruff and black hooks
- Add pre-commit git hook script as fallback
- Add pre-commit to requirements-dev.txt

Running 'pre-commit install' will auto-lint on every commit.
2026-04-01 13:50:39 +02:00
Bu5hm4nn
79d19f14ef style: format backtesting files with black 2026-04-01 13:49:21 +02:00
Bu5hm4nn
f69e4b2d29 fix(ci): standardize runs-on labels and remove unused docker.io
- Changed all jobs from 'runs-on: docker' to 'runs-on: [linux, docker]'
  to match ci.yaml pattern and runner labels configuration
- Removed unnecessary docker.io package from deploy job since Docker
  commands run on remote SSH host, not inside CI container
- Aligned with Forgejo runner config having both 'linux' and 'docker' labels
2026-04-01 13:46:31 +02:00
Bu5hm4nn
ec16b76378 chore: trigger CI 2026-04-01 12:35:20 +02:00
Bu5hm4nn
c2e62972c6 chore: trigger CI 2026-04-01 12:28:42 +02:00
Bu5hm4nn
fa2b5c63da fix(ci): move env block inside container for Forgejo v12
Forgejo Actions requires env to be under container: block, not at job level.
This fixes:
- Unknown Property env
- Unknown Property steps
- Unknown Variable Access env
2026-04-01 12:24:52 +02:00
Bu5hm4nn
7ffa04709c fix(ci): use single label runs-on for Forgejo runner v12 compatibility
Runner v12.7.3 uses labels: [docker linux debian ubuntu-latest].
Changed from 'runs-on: [linux, docker]' to 'runs-on: docker' to fix:
- 'runs-on key not defined' error
- 'github.ref == refs/heads/main evaluated to false' error
2026-04-01 12:18:19 +02:00
Bu5hm4nn
a0f245b212 chore: re-trigger CI 2026-04-01 12:09:56 +02:00
Bu5hm4nn
02193c0131 chore: re-trigger CI 2026-04-01 12:09:30 +02:00
Bu5hm4nn
fae4d13c44 chore: trigger CI after Forgejo upgrade 2026-04-01 11:55:16 +02:00
Bu5hm4nn
07ea271971 chore: trigger CI rebuild 2026-04-01 09:50:49 +02:00
Bu5hm4nn
a2a816cc79 fix(backtest): use fixture provider ID for backtest scenario
The backtest engine uses a fixture provider (synthetic_v1) regardless of
the data_source used for price fetching. We must use the fixture provider's
ID for the scenario, not the data source's ID.

This fixes 'Unsupported provider/pricing combination' error when running
backtests with data_source='databento'.
2026-04-01 09:42:23 +02:00
Bu5hm4nn
27ade507cd feat(backtest): async job queue for non-blocking backtest execution
BREAKING CHANGE: Complete redesign of backtest execution

- Add BacktestJob system with progress stages (validating, fetching_prices, calculating)
- Run backtests in background threads, UI polls for status
- Show progress label with current stage during execution
- Remove synchronous Databento API calls from page load
- Use static default entry spot for initial render (defers API call)
- Make refresh_workspace_seeded_units async with run.io_bound

This fixes:
- 'Connection lost' WebSocket timeout errors
- Slow page load (30s initial load)
- Backtest never completing

The job system provides:
- Non-blocking execution
- Progress tracking with stages
- Error handling with user-friendly messages
- Result caching for retrieval after completion
2026-04-01 09:31:53 +02:00
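The job system described above — background execution, pollable progress stages, cached result or friendly error — can be sketched with plain threading. This is a minimal illustration of the pattern, not the project's `BacktestJob` implementation; all names are assumed:

```python
import threading
import time

class BacktestJob:
    STAGES = ("validating", "fetching_prices", "calculating", "done")

    def __init__(self, work_fn):
        self._work_fn = work_fn
        self.stage = "validating"   # the UI polls this attribute
        self.result = None           # cached for retrieval after completion
        self.error = None            # user-friendly message on failure

    def _set_stage(self, stage: str):
        self.stage = stage

    def _run(self):
        try:
            # work_fn reports progress by calling back with a stage name
            self.result = self._work_fn(self._set_stage)
        except Exception as exc:  # never crash the UI thread
            self.error = str(exc)
        finally:
            self.stage = "done"

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()
        return self
```

The WebSocket never blocks: the page issues `start()` and then polls `job.stage` on a timer until it reads `"done"`.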
Bu5hm4nn
66d6eb3df2 style: format test file with black 2026-03-31 23:55:16 +02:00
Bu5hm4nn
6f9e31a69e chore: trigger CI rebuild 2026-03-31 23:49:45 +02:00
Bu5hm4nn
c203dd9a83 fix(test): remove unused variable in e2e test 2026-03-31 23:40:12 +02:00
Bu5hm4nn
2b500dfcb3 fix(backtest): run backtest asynchronously to prevent WebSocket timeout
- Use run.io_bound() from NiceGUI to run Databento API calls in background thread
- Add loading state to Run Backtest button
- Show notification when backtest starts and completes
- Remove loading state on completion/error

This prevents 'Connection lost' errors when the backtest takes longer than the WebSocket timeout.
2026-03-31 23:31:07 +02:00
Bu5hm4nn
c650cec159 perf(backtest): reduce Databento API calls on input changes
- on_form_change: Only update cost estimate, skip expensive derive_entry_spot
- Only call derive_entry_spot on date changes (start/end inputs)
- Other inputs (template, units, loan, LTV) just mark results stale
- This reduces lag from constant API polling
2026-03-30 20:58:36 +02:00
Bu5hm4nn
aa22766ae3 test(e2e): add backtest page regression tests for CORE-003
- test_backtest_page_loads_with_valid_databento_dates: Verifies page loads with valid default dates
- test_backtest_page_handles_invalid_dates_gracefully: Ensures validation errors instead of 500

These tests catch regressions where:
- Default dates are before dataset availability
- Databento API errors cause 500 instead of validation
- Date validation is missing or broken
2026-03-30 17:52:09 +02:00
Bu5hm4nn
69109c9e36 fix(backtest): pass data_source to validate_preview_inputs in validate_current_scenario 2026-03-30 17:50:47 +02:00
Bu5hm4nn
b161c51109 fix(backtest): handle Databento errors gracefully during page load
- Set default dates to 2024-07-01 to 2024-12-31 (valid for XNAS.BASIC)
- Catch all exceptions during entry spot derivation, not just ValueError
- Don't auto-run backtest on page load - let user configure first
- Use recent GLD price (~30) as fallback
2026-03-30 14:48:08 +02:00
Bu5hm4nn
79980c33ec feat(backtest): add dataset-specific date validation and better error handling
- Add DATABENTO_DATASET_MIN_DATES for XNAS.BASIC (2024-07-01) and GLBX.MDP3 (2010-01-01)
- Validate start date against dataset minimum before running backtest
- Parse Databento API errors and show user-friendly messages
- Update date range hint to show dataset-specific availability
- Catch BentoClientError and show appropriate warning tone
2026-03-30 14:37:04 +02:00
Bu5hm4nn
f31b83668e fix(backtest): remove default data_source from get_historical_prices 2026-03-30 14:28:07 +02:00
Bu5hm4nn
2d1ecc2fcf fix(backtest): ensure data_source is passed through all validation calls
- Pass data_source to derive_entry_spot in backtests.py
- Remove default 'synthetic' value for data_source in derive_entry_spot and validate_preview_inputs
- Update all tests to explicitly pass data_source parameter
- Improve error message with helpful suggestion for Databento/Yahoo Finance
2026-03-30 09:21:49 +02:00
Bu5hm4nn
eaaf78cd12 fix(backtest): improve error message for dates outside fixture window
- Add helpful message suggesting Databento/Yahoo Finance for dates outside fixture range
- Update test to expect BOUNDED policy for backtest UI
2026-03-30 09:11:56 +02:00
Bu5hm4nn
70b09cbf0b fix(backtest): remove BT-001A exact window restriction now that full data access is available
- Change WindowPolicy from EXACT to BOUNDED for backtest fixture
- Pass data_source to run_read_only_scenario so real data can be used
- Fix injected provider identity preservation in BacktestPageService
- Add type: ignore for BacktestHistoricalProvider protocol assignment
- Revert TypedDict change to avoid cascading type issues in pages/
- Update tests to reflect new BOUNDED policy behavior
2026-03-30 08:57:15 +02:00
Bu5hm4nn
8e1aa4ad26 fix(lint): remove unused imports and reformat with black 2026-03-30 08:42:07 +02:00
Bu5hm4nn
98e3208b5e fix(review): address PR review findings for CORE-003
Critical fixes:
- Add math.isfinite() check to reject NaN/Infinity in _safe_quote_price
- Raise TypeError instead of silent 0.0 fallback in price_feed.py
- Use dict instead of Mapping for external data validation

Type improvements:
- Add PortfolioSnapshot TypedDict for type safety
- Add DisplayMode and EntryBasisMode Literal types
- Add explicit dict[str, Any] annotation in to_dict()
- Remove cast() in favor of type comment validation
2026-03-30 00:39:02 +02:00
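The two critical fixes — rejecting NaN/Infinity with `math.isfinite()` and raising `TypeError` instead of silently falling back to 0.0 — combine into a guard like this. The real `_safe_quote_price` lives in the project's price-feed code and may differ in detail:

```python
import math

def safe_quote_price(raw: object) -> float:
    """Validate an externally sourced quote before it enters calculations."""
    if not isinstance(raw, (int, float)) or isinstance(raw, bool):
        # Fail loudly rather than coercing bad data to 0.0
        raise TypeError(f"price must be numeric, got {type(raw).__name__}")
    value = float(raw)
    if not math.isfinite(value):  # rejects NaN, +inf, -inf
        raise ValueError(f"price must be finite, got {value!r}")
    return value
```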
Bu5hm4nn
1dce5bfd23 fix(ci): update type-check job to include app/domain and types-requests
- Add types-requests to CI dependencies for turnstile.py
- Add app/domain to mypy type-check scope
- Remove || true from deploy.yaml type-check job
2026-03-30 00:10:37 +02:00
Bu5hm4nn
0923dc473f chore: mark CORE-003 as done in roadmap 2026-03-30 00:06:00 +02:00
Bu5hm4nn
887565be74 fix(types): resolve all mypy type errors (CORE-003)
- Fix return type annotation for get_default_premium_for_product
- Add type narrowing for Weight|Money union using _as_money helper
- Add isinstance checks before float() calls for object types
- Add type guard for Decimal.exponent comparison
- Use _unit_typed and _currency_typed properties for type narrowing
- Cast option_type to OptionType Literal after validation
- Fix provider type hierarchy in backtesting services
- Add types-requests to dev dependencies
- Remove '|| true' from CI type-check job

All 36 mypy errors resolved across 15 files.
2026-03-30 00:05:09 +02:00
Bu5hm4nn
36ba8731e6 fix(types): core calculations mypy errors - isinstance checks, OptionType cast 2026-03-30 00:02:54 +02:00
Bu5hm4nn
8a00ae69d4 fix(ci): restore '|| true' for mypy to pass while CORE-003 is in backlog
Type errors documented in roadmap/backlog/CORE-003-mypy-type-safety.yaml
Will be fixed in a follow-up task.
2026-03-29 23:41:57 +02:00
Bu5hm4nn
367960772b chore: add CORE-003 roadmap task for mypy type safety
- Remove '|| true' from CI type-check job to enforce strict checking
- Begin type narrowing pattern in units.py with _typed property accessors
- Document all 42 type errors across 15 files in roadmap backlog
- Priority: medium, estimated 4-6 hours to complete

Type errors fall into categories:
- Union types not narrowed after __post_init__ coercion
- float() on object types
- Duplicate method definitions
- Provider interface type mismatches
2026-03-29 23:40:55 +02:00
Bu5hm4nn
1ad369727d chore: change local development port from 8000 to 8100
- Update docker-compose.yml to map host port 8100 -> container 8000
- Update all Playwright test BASE_URL to port 8100
- Update .env.example with documentation about port mapping
- This avoids conflicts with other services on port 8000
2026-03-29 20:36:17 +02:00
Bu5hm4nn
70e14e2a98 fix(e2e): update Playwright test for dynamic dates and UI changes
- Update 'Scenario Form' to 'Scenario Configuration' (correct label)
- Update Event Comparison test to use 'Initial portfolio value' instead of 'Underlying units'
- Make backtests test more flexible for dynamic default dates
- Increase timeout and retry count for second workspace settings check
- Update workspace-related assertions to be more lenient
2026-03-29 19:47:58 +02:00
Bu5hm4nn
269745cd3e fix: address PR review feedback for validation functions
1. Fix Friday logic edge case comment
   - Clarified get_default_backtest_dates() docstring
   - Removed confusing 'at least a week old' comment
   - Explicitly documented Friday behavior

2. Reorder validation checks in validate_date_range_for_symbol()
   - Now checks start > end first (most fundamental)
   - Then checks end > today (future dates)
   - Finally checks symbol-specific bounds
   - Users get most actionable error first

3. Add server-side numeric bounds validation
   - New validate_numeric_inputs() function
   - Validates units > 0, loan >= 0, 0 < LTV < 1
   - Called in run_backtest() before service call

4. Add boundary tests
   - Test start_date exactly at SYMBOL_MIN_DATES boundary
   - Test same-day date range (start == end)
   - Test end_date exactly today
   - Test end_date tomorrow (future)
   - Test validation order returns most actionable error
   - Test near-zero and large values for units calculation
   - Test LTV at boundaries (0, 1, 0.01, 0.99)

5. Add tests for validate_numeric_inputs
   - Valid inputs, zero/negative values
   - LTV boundary conditions
2026-03-29 19:29:46 +02:00
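Item 3 above states the exact bounds (units > 0, loan >= 0, 0 < LTV < 1); a sketch of that server-side check, with the signature and error-collection style being assumptions rather than the project's actual code:

```python
def validate_numeric_inputs(units: float, loan: float, ltv: float) -> list[str]:
    """Collect every violated bound so the user sees all problems at once."""
    errors: list[str] = []
    if units <= 0:
        errors.append("Underlying units must be greater than zero.")
    if loan < 0:
        errors.append("Loan amount cannot be negative.")
    if not (0 < ltv < 1):
        errors.append("Margin call LTV must be strictly between 0 and 1.")
    return errors
```

Note the LTV bounds are strict on both ends, which matches the boundary tests listed above (0 and 1 rejected, 0.01 and 0.99 accepted).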
Bu5hm4nn
f9ea7f0b67 fix: address PR review issues for event comparison and backtests
Critical fixes:
- Add validate_and_calculate_units() helper with proper error handling
- Handle division by zero for entry_spot in refresh_preview() and render_report()
- Add server-side validation for initial_value > 0
- Add try/except for derive_entry_spot() to handle fixture source limitations

Important improvements:
- Add dynamic default dates with get_default_backtest_dates()
- Add validate_date_range_for_symbol() for symbol-specific date bounds
- Add SYMBOL_MIN_DATES validation for backtests
- Update date_range_hint based on selected symbol

Tests:
- Add test_page_validation.py with 21 tests for:
  - validate_and_calculate_units edge cases
  - validate_date_range_for_symbol bounds checking
  - get_default_backtest_dates dynamic generation
  - SYMBOL_MIN_DATES constant verification
2026-03-29 18:45:29 +02:00
Bu5hm4nn
c2af363eef feat(backtests): expand default date range to full Databento availability
- Changed default date range from 5 days (Jan 2024) to 2 years (2022-2023)
- Added SYMBOL_MIN_DATES constant documenting data availability per symbol
- GLD minimum date: 2004-11-18 (ETF launch)
- GC futures minimum date: 1974-01-01
- XAU index minimum date: 1970-01-01
- Added UI hint showing GLD data availability from ETF launch
- Users can now run backtests across the full historical range
2026-03-29 17:53:03 +02:00
Bu5hm4nn
853c80d3a2 feat(event-comparison): use initial portfolio value instead of underlying units
- Changed UI input from 'Underlying units' to 'Initial portfolio value ($)'
- Underlying units are now calculated as initial_value / entry_spot
- Updated default value to workspace gold_value instead of gold_ounces * entry_spot
- Result summary now shows both 'Initial value' and 'Underlying units'
- This allows users to specify how much they invest on day 1, and the system
  automatically calculates the maximum purchasable shares/contracts
2026-03-29 16:12:33 +02:00
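The conversion this commit introduces — derive units from a day-1 portfolio value and the entry spot — is a single division, but it needs the positive-value guards that the later review-fix commits add for division by zero. A hedged sketch (the function name is illustrative):

```python
def units_from_initial_value(initial_value: float, entry_spot: float) -> float:
    """Units purchasable on day 1 for a given dollar value and entry price."""
    if initial_value <= 0:
        raise ValueError("Initial portfolio value must be positive.")
    if entry_spot <= 0:
        raise ValueError("Entry spot must be positive.")
    return initial_value / entry_spot
```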
Bu5hm4nn
7f347fa2a6 fix(tests): fix BacktestSettingsRepository.load() and workspace seeding tests
- BacktestSettingsRepository.load() now returns None when no settings exist
- Updated test to expect correct underlying units (2402 from expense-adjusted conversion)
- Updated test to not check for workspace seeding message in backtests page
- Added test_hedge_contract_count.py and test_backtest_settings.py to CI test suite
- Build job now depends on lint and test passing
2026-03-29 15:34:49 +02:00
Bu5hm4nn
561c31ffa4 chore: ignore .workspaces directories 2026-03-29 15:03:23 +02:00
Bu5hm4nn
2873a36082 fix(ci): remove needs array from build job to debug forgejo parsing 2026-03-29 14:59:08 +02:00
Bu5hm4nn
c96c66c844 fix(ci): set APP_ENV=test and clean up workflow YAML
- Set APP_ENV=test in test job to use Turnstile test keys
- Remove empty lines between steps in build job
- Add explicit 'Checkout' step name for clarity
2026-03-29 14:55:14 +02:00
Bu5hm4nn
aa0f96093c docs: add pre-merge checklist to AGENTS.md
- Run pytest locally before pushing
- Run /review for code quality and QA validation
- Verify CI passes on Forgejo
- Address review comments before merging
2026-03-29 14:48:39 +02:00
Bu5hm4nn
2e2a832b31 fix(tests): use GLD launch date in decay test
Use date(2004, 11, 18) instead of date(2004, 1, 1) since GLD didn't
exist before November 18, 2004. The validation now correctly raises
ValueError for pre-launch dates.
2026-03-29 14:47:36 +02:00
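The guard this test exercises (added in the CONV-001 commit further down) rejects reference dates before GLD's launch. `GLD_LAUNCH_DATE` is stated in the commit messages; the checking function here is an illustrative sketch of the validation inside `gld_ounces_per_share()`:

```python
from datetime import date

GLD_LAUNCH_DATE = date(2004, 11, 18)  # GLD ETF launch

def check_gld_reference_date(reference_date: date) -> None:
    """Raise ValueError for dates before the ETF existed; launch day is valid."""
    if reference_date < GLD_LAUNCH_DATE:
        raise ValueError(
            f"GLD did not exist before {GLD_LAUNCH_DATE.isoformat()}; "
            f"got {reference_date.isoformat()}"
        )
```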
Bu5hm4nn
092d710eeb fix(ci): add DATABENTO_API_KEY to deploy environment
- Add DATABENTO_API_KEY secret to deploy job environment
- Add DATABENTO_API_KEY to .env file creation in deploy script
- Add databento to test and type-check job dependencies
2026-03-29 12:11:12 +02:00
Bu5hm4nn
786953c403 docs: add verified Forgejo CI debugging guide to AGENTS.md
Verified:
- Web UI URL for viewing workflow runs
- SSH command to access runner logs
- Common failure patterns and fixes
2026-03-29 12:10:52 +02:00
Bu5hm4nn
9fed45ef9f fix(ci): add databento to CI dependencies for test and type-check
The test job runs tests that import DatabentoHistoricalPriceSource,
and type-check analyzes app/services/backtesting/databento_source.py.
Both need the databento package installed.
2026-03-29 12:05:22 +02:00
Bu5hm4nn
850be70fea fix(ci): add DATABENTO_API_KEY to deploy environment
- Add DATABENTO_API_KEY secret to deploy job environment
- Add DATABENTO_API_KEY to .env file creation in deploy script
- Matches DATABENTO_API_KEY in .env.example
2026-03-29 12:03:26 +02:00
Bu5hm4nn
b54bf9d228 docs: mark CONV-001 and DATA-DB-003 as done 2026-03-29 12:00:50 +02:00
Bu5hm4nn
dc4ee1f261 feat(CONV-001): add GLD launch date validation, feat(DATA-DB-003): add cache CLI
CONV-001:
- Add GLD_LAUNCH_DATE constant (November 18, 2004)
- Validate reference_date in gld_ounces_per_share()
- Raise ValueError for dates before GLD launch
- Update docstring with valid date range
- Add comprehensive test coverage for edge cases

DATA-DB-003:
- Create scripts/cache_cli.py with three commands:
  - vault-dash cache stats: Show cache statistics
  - vault-dash cache list: List cached entries
  - vault-dash cache clear: Clear all cache files
- Add Makefile targets: cache-stats, cache-list, cache-clear
- Integrate with DatabentoHistoricalPriceSource methods
2026-03-29 12:00:30 +02:00
Bu5hm4nn
ace6d67482 docs: mark DATA-DB-004 as done, update roadmap 2026-03-29 11:12:20 +02:00
Bu5hm4nn
9a3b835c95 feat(DATA-DB-004): add Databento settings UI and independent scenario config
- Updated backtests page with Data Source card
  - Data source selector (databento/yfinance/synthetic)
  - Dataset dropdown (XNAS.BASIC, GLBX.MDP3)
  - Resolution dropdown (ohlcv-1d, ohlcv-1h)
  - Cost estimate display (placeholder for now)

- Added Scenario Configuration card
  - Underlying symbol selector (GLD/GC/XAU)
  - Start/end date inputs
  - Start price input (0 = auto-derive)
  - Underlying units, loan amount, margin call LTV

- BacktestPageService updates:
  - get_historical_prices() with data_source parameter
  - get_cost_estimate() for Databento cost estimation
  - get_cache_stats() for cache status display
  - Support for injected custom provider identity
  - DataSourceInfo for provider metadata

- BacktestSettingsRepository integration:
  - Load/save settings per workspace
  - Default values from BacktestSettings.create_default()

- Test update: TLT validation message changed to reflect
  new multi-symbol support (GLD, GC, XAU)
2026-03-29 11:12:11 +02:00
Bu5hm4nn
52a0ed2d96 docs: mark DATA-DB-001 and DATA-DB-002 as done 2026-03-29 10:46:51 +02:00
Bu5hm4nn
43067bb275 feat(DATA-DB-002): add BacktestSettings model and repository
- BacktestSettings dataclass with all configuration fields
- BacktestSettingsRepository for persistence per workspace
- Settings independent of portfolio configuration
- Full validation for dates, symbols, LTV, etc.
- 16 comprehensive tests

Fields:
- settings_id, name: identification
- data_source: databento|yfinance|synthetic
- dataset, schema: Databento configuration
- start_date, end_date: date range
- underlying_symbol, start_price, underlying_units: position config
- loan_amount, margin_call_ltv: LTV analysis
- template_slugs: strategies to test
- cache_key, data_cost_usd: caching metadata
- provider_ref: provider configuration
2026-03-29 10:46:25 +02:00
Bu5hm4nn
f4c3cee91d docs: move DATA-DB-001 to in-progress, update roadmap 2026-03-29 09:58:11 +02:00
Bu5hm4nn
bf13ab5b46 feat(DATA-DB-001): add Databento historical price source for backtesting
- Add DatabentoHistoricalPriceSource implementing HistoricalPriceSource protocol
- Smart caching with Parquet storage and metadata tracking
- Auto symbol-to-dataset resolution (GLD→XNAS.BASIC, GC=F→GLBX.MDP3)
- Cache management with age threshold invalidation
- Cost estimation via metadata.get_cost()
- Add databento>=0.30.0 to requirements.txt
- Add DATABENTO_API_KEY to .env.example
- Full test coverage with 16 tests
2026-03-29 09:58:02 +02:00
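The "smart caching with age threshold invalidation" described above can be sketched as a deterministic cache key per request plus a freshness check on the stored Parquet file. This is an assumed shape only — the real `DatabentoHistoricalPriceSource` also tracks metadata and costs, which this sketch omits:

```python
import time
from pathlib import Path

def cache_key(symbol: str, dataset: str, start: str, end: str) -> str:
    """Deterministic file name for one (symbol, dataset, range) request."""
    return f"{symbol}_{dataset}_{start}_{end}.parquet".replace("/", "-")

def cache_is_fresh(path: Path, max_age_seconds: float) -> bool:
    """A cached file is usable if it exists and is younger than the threshold;
    otherwise the source re-fetches from the Databento API."""
    if not path.exists():
        return False
    age = time.time() - path.stat().st_mtime
    return age <= max_age_seconds
```

A stale or missing file falls through to a paid API call, so the age threshold directly trades data freshness against Databento cost.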
Bu5hm4nn
c02159481d docs: add Databento integration plan and roadmap items 2026-03-29 09:52:06 +02:00
Bu5hm4nn
8079ca58e7 docs: mark PORTFOLIO-002 and PORTFOLIO-003 done, update roadmap 2026-03-28 23:54:05 +01:00
Bu5hm4nn
bb06fa7e80 feat(PORTFOLIO-003): add premium and spread for physical gold positions 2026-03-28 23:53:46 +01:00
Bu5hm4nn
0e972e9dd6 feat(PORTFOLIO-002): add position storage costs 2026-03-28 23:48:41 +01:00
Bu5hm4nn
e148d55cda docs: mark DISPLAY-001 and DISPLAY-002 done, update roadmap 2026-03-28 21:59:52 +01:00
Bu5hm4nn
63a8482753 feat(DISPLAY-002): GLD mode shows real share prices 2026-03-28 21:59:15 +01:00
Bu5hm4nn
dac0463d55 feat(DISPLAY-002): GLD mode shows real share prices 2026-03-28 21:45:00 +01:00
Bu5hm4nn
20f5086507 feat(DISPLAY-001): add underlying mode switching 2026-03-28 21:44:32 +01:00
Bu5hm4nn
24c74cacbd docs: mark PORTFOLIO-001 done, update roadmap 2026-03-28 21:30:02 +01:00
Bu5hm4nn
1a39956757 feat(PORTFOLIO-001): add position-level portfolio entries 2026-03-28 21:29:30 +01:00
Bu5hm4nn
447f4bbd0d docs: add PORTFOLIO and DISPLAY roadmap items for multi-position mode switching 2026-03-28 20:59:29 +01:00
Bu5hm4nn
fd51f1e204 docs: mark DATA-004 done, update roadmap 2026-03-28 16:40:59 +01:00
Bu5hm4nn
3b98ebae69 feat(DATA-004): add underlying instrument selector 2026-03-28 16:40:18 +01:00
Bu5hm4nn
cdd091a468 docs: mark PRICING-002 and PRICING-003 done, update roadmap 2026-03-28 09:18:53 +01:00
Bu5hm4nn
3bf3774191 Merge branch 'feature/PRICING-002-basis-display' 2026-03-28 09:18:29 +01:00
Bu5hm4nn
de789f591e Merge branch 'feature/PRICING-003-hedge-correction' 2026-03-28 09:18:29 +01:00
Bu5hm4nn
9d06313480 feat(PRICING-002): add GLD/GC=F basis display on overview 2026-03-28 09:18:26 +01:00
Bu5hm4nn
966cee7963 feat(PRICING-003): use true GLD backing for hedge contract count 2026-03-28 09:18:26 +01:00
Bu5hm4nn
b30cfd7470 docs: mark PRICING-001 done, update roadmap 2026-03-28 09:05:28 +01:00
Bu5hm4nn
894d88f72f feat(PRICING-001): add GLD expense ratio decay correction 2026-03-28 09:04:35 +01:00
Bu5hm4nn
ff251b5ace docs: add GLD pricing and underlying selector roadmap items 2026-03-28 08:53:02 +01:00
Bu5hm4nn
e70e677612 Add GLD vs gold futures basis research for dashboard implementation 2026-03-28 08:48:49 +01:00
Bu5hm4nn
4620234967 feat(EXEC-001): add hedge strategy builder 2026-03-27 22:33:20 +01:00
Bu5hm4nn
554a41a060 refactor(BT-001C): share historical fixture provider 2026-03-27 21:41:50 +01:00
Bu5hm4nn
477514f838 feat(BT-002): add historical snapshot provider 2026-03-27 18:31:28 +01:00
Bu5hm4nn
1a6760bee3 feat(PORT-003): add historical ltv charts 2026-03-27 16:39:33 +01:00
Bu5hm4nn
b3418eed2e docs(BT-003B): record completed drilldown validation 2026-03-27 11:12:18 +01:00
Bu5hm4nn
3c9ff201e1 feat(BT-003B): add event comparison drilldown 2026-03-26 22:05:31 +01:00
Bu5hm4nn
bdf56ecebe fix(CORE-001D): close boundary review gaps 2026-03-26 17:34:09 +01:00
Bu5hm4nn
94f3c1ef83 feat(CORE-001D): close remaining boundary cleanup slices 2026-03-26 17:27:44 +01:00
Bu5hm4nn
99d22302ee fix(CORE-001D3B): validate alert history entry types 2026-03-26 15:19:42 +01:00
Bu5hm4nn
65da5b8f1d fix(CORE-001D3B): reject malformed alert history entries 2026-03-26 15:16:21 +01:00
Bu5hm4nn
ff76e326b1 feat(CORE-001D3B): surface alert history degraded state 2026-03-26 15:12:04 +01:00
Bu5hm4nn
09e03f96a8 chore(settings): drop unused last-saved helper 2026-03-26 15:05:28 +01:00
Bu5hm4nn
38d244356c refactor(settings): separate preview validation from internal failures 2026-03-26 15:00:53 +01:00
Bu5hm4nn
e860c40567 fix(settings): reject fractional refresh intervals 2026-03-26 14:05:49 +01:00
Bu5hm4nn
2759d9a36f fix(settings): track dirty state across all inputs 2026-03-26 13:59:56 +01:00
Bu5hm4nn
cfa3cfcc08 fix(settings): clarify last-saved status state 2026-03-26 13:54:56 +01:00
Bu5hm4nn
f7c134a709 fix(settings): preserve whole-dollar loan formatting 2026-03-26 13:34:34 +01:00
Bu5hm4nn
ea3b384103 fix(settings): fail closed on blank loan input 2026-03-26 13:28:30 +01:00
Bu5hm4nn
753e9d3146 fix(CORE-001D3A): accept decimal boundary inputs 2026-03-26 13:19:18 +01:00
Bu5hm4nn
bb557009c7 feat(CORE-001D3A): normalize alerts and settings service boundaries 2026-03-26 13:10:30 +01:00
Bu5hm4nn
91f67cd414 fix(pre-alpha): preserve injected provider identity 2026-03-26 12:32:52 +01:00
Bu5hm4nn
52d943e614 fix(pre-alpha): preserve injected template services 2026-03-26 12:26:38 +01:00
Bu5hm4nn
d7117bb6a3 fix(pre-alpha): preserve injected backtest services 2026-03-26 12:18:39 +01:00
Bu5hm4nn
18fd0681ca refactor(pre-alpha): align preview and runtime fixture validation 2026-03-26 12:11:45 +01:00
Bu5hm4nn
68275c4d18 refactor(pre-alpha): fail closed on historical fixture bounds 2026-03-26 12:04:42 +01:00
Bu5hm4nn
f38d0a53a9 refactor(pre-alpha): fail closed on historical preview fallbacks 2026-03-26 11:55:45 +01:00
Bu5hm4nn
4eec0127da fix(UX-001): reconcile preview validation behavior 2026-03-26 10:39:03 +01:00
Bu5hm4nn
82e52f7162 fix(UX-001): tighten historical stale state handling 2026-03-26 10:32:05 +01:00
Bu5hm4nn
78de8782c4 fix(UX-001): address layout review findings 2026-03-26 10:24:52 +01:00
Bu5hm4nn
a60c5fb1f2 feat(UX-001): add full-width two-pane dashboard layout 2026-03-25 23:19:09 +01:00
Bu5hm4nn
960e1e9215 docs: record CORE-002 completion 2026-03-25 21:59:34 +01:00
Bu5hm4nn
695f3d07ed fix(CORE-002C): explain undercollateralized historical seeds 2026-03-25 21:44:30 +01:00
Bu5hm4nn
87900b01bf fix(CORE-002C): align historical units with workspace weight 2026-03-25 21:37:55 +01:00
Bu5hm4nn
aae67dfd9b fix(workspaces): seed new defaults from live quote 2026-03-25 19:48:58 +01:00
Bu5hm4nn
782e8f692e fix(portfolio): default new workspaces to 100 oz 2026-03-25 19:42:54 +01:00
Bu5hm4nn
8d4216a6f8 fix(workspaces): persist workspace data across restarts 2026-03-25 19:27:26 +01:00
Bu5hm4nn
bfb6c71be3 fix(pricing): correct relative hedge payoff calculations 2026-03-25 19:27:26 +01:00
Bu5hm4nn
5217304624 feat(CORE-001D2B): normalize options cache boundaries 2026-03-25 19:05:00 +01:00
Bu5hm4nn
442a0cd702 feat(CORE-001D2A): tighten quote provider cache normalization 2026-03-25 17:10:11 +01:00
Bu5hm4nn
dbcc6a1ea0 docs: record CORE-002B completion 2026-03-25 15:53:59 +01:00
Bu5hm4nn
829c0b5da2 feat(CORE-002B): roll out hedge quote unit conversion 2026-03-25 15:46:44 +01:00
Bu5hm4nn
f00b58bba0 docs: split CORE-002 into rollout slices 2026-03-25 15:02:44 +01:00
Bu5hm4nn
f0d7ab5748 feat(CORE-002): add GLD share quote conversion seam 2026-03-25 14:52:48 +01:00
Bu5hm4nn
1a2dfaff01 docs: record CORE-001D1 completion 2026-03-25 13:35:54 +01:00
Bu5hm4nn
132aaed512 feat(CORE-001D1): harden unit-aware workspace persistence 2026-03-25 13:19:33 +01:00
Bu5hm4nn
cfb6abd842 docs: compact agent policy into yaml 2026-03-25 11:18:31 +01:00
Bu5hm4nn
691277dea2 docs: add instrument-aware quote unit story 2026-03-25 10:49:46 +01:00
Bu5hm4nn
8270e2dcbb docs: scope decimal boundary cleanup 2026-03-25 10:33:10 +01:00
Bu5hm4nn
b1e5cbd47e docs: close turnstile roadmap items 2026-03-25 10:29:50 +01:00
Bu5hm4nn
40f7e74a1b feat(SEC-001): protect workspace bootstrap with turnstile 2026-03-25 10:02:10 +01:00
Bu5hm4nn
f6667b6b63 docs: migrate roadmap to structured yaml tasks 2026-03-25 09:37:02 +01:00
Bu5hm4nn
7932148b73 docs: update production URL to lombard.uncloud.tech 2026-03-25 09:24:54 +01:00
Bu5hm4nn
c7c8654be7 feat(CORE-001C): type historical unit materialization 2026-03-24 22:30:36 +01:00
Bu5hm4nn
7c2729485c feat(CORE-001B): migrate overview and hedge math to unit types 2026-03-24 21:57:40 +01:00
Bu5hm4nn
a69fdf6762 feat(CORE-001A): add decimal unit value foundation 2026-03-24 21:33:17 +01:00
Bu5hm4nn
5ac66ea97b feat(PORT-004C): seed workspace routes from portfolio settings 2026-03-24 21:14:09 +01:00
Bu5hm4nn
2cbe4f274d fix: restore workspace nav and correct overview spot fallback 2026-03-24 20:54:45 +01:00
Bu5hm4nn
75f8e0a282 feat(PORT-004A): add workspace bootstrap and scoped settings 2026-03-24 20:18:12 +01:00
Bu5hm4nn
9d1a2f3fe8 docs: refine workspace bootstrap flow 2026-03-24 19:58:07 +01:00
Bu5hm4nn
54d12e2393 docs: add hashed workspace persistence story 2026-03-24 19:52:39 +01:00
Bu5hm4nn
ae08160b02 docs: review backlog after backtest ui sprint 2026-03-24 19:38:07 +01:00
Bu5hm4nn
24de006adb feat: show hedge starting position summary 2026-03-24 19:36:37 +01:00
Bu5hm4nn
021ce7dd99 fix: anchor hedge contribution bars at zero 2026-03-24 19:34:41 +01:00
Bu5hm4nn
98ecfb735e fix: correct hedge equity math at downside scenarios 2026-03-24 19:31:13 +01:00
Bu5hm4nn
ff4e565ee6 feat(BT-003A): add event comparison page 2026-03-24 19:20:35 +01:00
Bu5hm4nn
68cb2aa51a docs: update workflow and backtest UI backlog 2026-03-24 19:01:55 +01:00
Bu5hm4nn
d2d85bccdb feat(BT-001A): add backtest scenario runner page 2026-03-24 19:00:22 +01:00
Bu5hm4nn
8566cc203f feat(BT-003): add event preset backtest comparison 2026-03-24 17:49:58 +01:00
Bu5hm4nn
d4dc34d5ab feat(BT-001): add synthetic historical backtesting engine 2026-03-24 16:14:51 +01:00
Bu5hm4nn
2161e10626 feat(EXEC-001A): add named strategy templates 2026-03-24 12:27:39 +01:00
Bu5hm4nn
78a01d9fc5 docs: define strategy template and backtesting MVP 2026-03-24 11:23:12 +01:00
Bu5hm4nn
d0b1304b71 feat(PORT-002): add alert status and history 2026-03-24 11:04:32 +01:00
Bu5hm4nn
7c6b8ef2c6 docs: require review before merging worktrees 2026-03-24 10:50:20 +01:00
Bu5hm4nn
56e84680e8 feat(PORT-001A): add collateral entry basis settings 2026-03-24 00:38:13 +01:00
Bu5hm4nn
140a21c0b6 chore: enforce linting as part of build 2026-03-24 00:26:36 +01:00
Bu5hm4nn
de03bd0064 feat(DATA-003): calculate live option greeks 2026-03-23 23:46:40 +01:00
Bu5hm4nn
46ce81d2d6 ops: attach vault-dash to proxy-net and document vd1 route 2026-03-23 23:35:47 +01:00
Bu5hm4nn
ed6daf6d47 docs: add TDD red-orange-green workflow memo 2026-03-23 23:29:55 +01:00
Bu5hm4nn
133908dd36 feat: prioritize lazy options loading and live overview wiring
- queue OPS-001 Caddy route for vd1.uncloud.vpn
- lazy-load options expirations/chains per expiry
- wire overview to live quote data and persisted portfolio config
- extend browser test to verify live quote metadata
2026-03-23 23:23:59 +01:00
Bu5hm4nn
d51fa05d5a test: add Playwright browser tests and document test loop
- add real browser test for overview and options pages
- document engineering learnings in AGENTS.md
- commit NiceGUI header layout fix
- limit options initial expirations for faster first render
2026-03-23 23:11:38 +01:00
Bu5hm4nn
199ecb933f Merge DATA-002: Live options chain data 2026-03-23 22:53:18 +01:00
197 changed files with 26649 additions and 1176 deletions

View File

@@ -1,4 +1,9 @@
 APP_HOST=0.0.0.0
 APP_PORT=8000
+# For local development, docker-compose maps host port 8100 -> container port 8000
+# This avoids conflicts with other services on port 8000
 REDIS_URL=redis://localhost:6379
 CONFIG_PATH=/app/config/settings.yaml
+TURNSTILE_SITE_KEY=1x00000000000000000000AA
+TURNSTILE_SECRET_KEY=1x0000000000000000000000000000000AA
+DATABENTO_API_KEY=db-your-api-key-here
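The port-mapping comment added above corresponds to a compose override along these lines (a sketch only; the service and image names are assumptions, not taken from this repo):

```yaml
# Hypothetical docker-compose fragment illustrating the 8100 -> 8000 mapping
services:
  app:
    image: vault-dash:latest
    ports:
      - "8100:8000"   # host 8100 -> container APP_PORT 8000
    env_file: .env
```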

View File

@@ -1,155 +0,0 @@
name: Build and Deploy
on:
push:
branches: [main]
workflow_dispatch:
env:
REGISTRY: ${{ vars.REGISTRY || '10.100.0.2:3000' }}
IMAGE_NAME: ${{ github.repository }}
jobs:
lint:
runs-on: [linux, docker]
container:
image: catthehacker/ubuntu:act-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install ruff black
- name: Run ruff
run: ruff check app tests scripts
- name: Run black
run: black --check app tests scripts
test:
runs-on: [linux, docker]
container:
image: catthehacker/ubuntu:act-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
set -x
python -m pip install --upgrade pip
pip install pytest pytest-asyncio httpx
pip install nicegui fastapi uvicorn yfinance polars pandas pydantic pyyaml
pip list
- name: Run tests
run: pytest tests/test_pricing.py tests/test_strategies.py tests/test_portfolio.py -v --tb=short
type-check:
runs-on: [linux, docker]
container:
image: catthehacker/ubuntu:act-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
set -x
python -m pip install --upgrade pip
pip install mypy
pip install nicegui fastapi uvicorn yfinance polars pandas pydantic pyyaml
pip list
- name: Show working directory
run: pwd && ls -la
- name: Run mypy
run: |
echo "Running mypy on core modules..."
mypy app/core app/models app/strategies app/services --ignore-missing-imports --show-error-codes --show-traceback || true
echo "Type check completed (warnings allowed during development)"
build:
runs-on: [linux, docker]
needs: [lint, test, type-check]
container:
image: catthehacker/ubuntu:act-latest
steps:
- uses: actions/checkout@v4
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
registry: docker.io
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker
- name: Login to Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
provenance: false
deploy:
runs-on: [linux, docker]
needs: build
if: github.ref == 'refs/heads/main'
container:
image: catthehacker/ubuntu:act-latest
env:
DEPLOY_HOST: ${{ vars.DEPLOY_HOST }}
DEPLOY_USER: ${{ vars.DEPLOY_USER || 'deploy' }}
DEPLOY_PORT: ${{ vars.DEPLOY_PORT || '22' }}
DEPLOY_PATH: ${{ vars.DEPLOY_PATH || '/opt/vault-dash' }}
DEPLOY_SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_PRIVATE_KEY }}
APP_IMAGE: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
APP_ENV: production
APP_NAME: Vault Dashboard
APP_PORT: "8000"
steps:
- uses: actions/checkout@v4
- name: Install dependencies
run: |
apt-get update && apt-get install -y bash openssh-client curl docker.io
mkdir -p ~/.ssh
chmod 700 ~/.ssh
- name: Setup SSH key
run: |
# Handle base64-encoded key (recommended) or raw key
mkdir -p ~/.ssh && chmod 700 ~/.ssh
if echo "$DEPLOY_SSH_PRIVATE_KEY" | base64 -d > ~/.ssh/id_ed25519 2>/dev/null; then
echo "Decoded base64 key"
else
printf '%s\n' "$DEPLOY_SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519
fi
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan -p "${DEPLOY_PORT:-22}" "${DEPLOY_HOST}" >> ~/.ssh/known_hosts 2>/dev/null || true
- name: Deploy
run: |
test -n "$DEPLOY_HOST" || (echo "DEPLOY_HOST must be set" && exit 1)
test -n "$DEPLOY_SSH_PRIVATE_KEY" || (echo "DEPLOY_SSH_PRIVATE_KEY must be set" && exit 1)
bash scripts/deploy-forgejo.sh

View File

@@ -24,32 +24,12 @@ jobs:
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install ruff black
+          pip install -r requirements-dev.txt
       - name: Run ruff
         run: ruff check app tests scripts
       - name: Run black
         run: black --check app tests scripts
-  test:
-    runs-on: [linux, docker]
-    container:
-      image: catthehacker/ubuntu:act-latest
-    steps:
-      - uses: actions/checkout@v4
-      - name: Set up Python
-        uses: actions/setup-python@v5
-        with:
-          python-version: '3.12'
-      - name: Install dependencies
-        run: |
-          set -x
-          python -m pip install --upgrade pip
-          pip install pytest pytest-asyncio httpx
-          pip install nicegui fastapi uvicorn yfinance polars pandas pydantic pyyaml
-          pip list
-      - name: Run tests
-        run: pytest tests/test_pricing.py tests/test_strategies.py tests/test_portfolio.py -v --tb=short
   type-check:
     runs-on: [linux, docker]
     container:
@@ -64,13 +44,31 @@ jobs:
         run: |
           set -x
           python -m pip install --upgrade pip
-          pip install mypy
-          pip install nicegui fastapi uvicorn yfinance polars pandas pydantic pyyaml
+          pip install -r requirements-dev.txt
           pip list
-      - name: Show working directory
-        run: pwd && ls -la
       - name: Run mypy
         run: |
           echo "Running mypy on core modules..."
-          mypy app/core app/models app/strategies app/services --ignore-missing-imports --show-error-codes --show-traceback || true
-          echo "Type check completed (warnings allowed during development)"
+          mypy app/core app/models app/strategies app/services app/domain --show-error-codes --show-traceback
+          echo "Type check completed successfully"
+  test:
+    runs-on: [linux, docker]
+    container:
+      image: mcr.microsoft.com/playwright:v1.58.0-noble
+    env:
+      APP_ENV: test
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.12'
+      - name: Install dependencies
+        run: |
+          set -x
+          python -m pip install --upgrade pip
+          pip install -r requirements-dev.txt
+          pip list
+      - name: Run tests
+        run: pytest -v --tb=short
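The `APP_ENV: test` variable in the new test job is what the Turnstile test-mode commits at the top of this log key off. A minimal sketch of such an environment gate (hypothetical helper name, not this repo's actual code):

```python
import os

def turnstile_enabled() -> bool:
    # Hypothetical gate: skip the Turnstile network round-trip when the
    # CI container sets APP_ENV=test, as the workflow above does.
    return os.environ.get("APP_ENV", "production") != "test"

os.environ["APP_ENV"] = "test"
assert not turnstile_enabled()  # test job bypasses the widget
```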

View File

@@ -0,0 +1,103 @@
name: Build and Deploy
on:
workflow_run:
workflows: [CI]
types: [completed]
branches: [main]
workflow_dispatch:
env:
REGISTRY: ${{ vars.REGISTRY || '10.100.0.2:3000' }}
IMAGE_NAME: ${{ github.repository }}
RESOLVED_SHA: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.head_sha || github.sha }}
RESOLVED_REF: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.head_branch || github.ref_name }}
jobs:
build:
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success')
runs-on: [linux, docker]
container:
image: catthehacker/ubuntu:act-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ env.RESOLVED_SHA }}
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
registry: docker.io
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker
- name: Login to Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.RESOLVED_SHA }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
provenance: false
deploy:
runs-on: [linux, docker]
needs: build
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.head_branch == 'main')
container:
image: catthehacker/ubuntu:act-latest
env:
DEPLOY_HOST: ${{ vars.DEPLOY_HOST }}
DEPLOY_USER: ${{ vars.DEPLOY_USER || 'deploy' }}
DEPLOY_PORT: ${{ vars.DEPLOY_PORT || '22' }}
DEPLOY_PATH: ${{ vars.DEPLOY_PATH || '/opt/vault-dash' }}
DEPLOY_SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_PRIVATE_KEY }}
APP_IMAGE: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.RESOLVED_SHA }}
APP_ENV: production
APP_NAME: Vault Dashboard
APP_PORT: "8000"
TURNSTILE_SITE_KEY: ${{ vars.TURNSTILE_SITE_KEY }}
TURNSTILE_SECRET_KEY: ${{ secrets.TURNSTILE_SECRET_KEY }}
DATABENTO_API_KEY: ${{ secrets.DATABENTO_API_KEY }}
steps:
- uses: actions/checkout@v4
with:
ref: ${{ env.RESOLVED_SHA }}
- name: Install dependencies
run: |
apt-get update && apt-get install -y bash openssh-client curl
mkdir -p ~/.ssh
chmod 700 ~/.ssh
- name: Setup SSH key
run: |
# Handle base64-encoded key (recommended) or raw key
mkdir -p ~/.ssh && chmod 700 ~/.ssh
if echo "$DEPLOY_SSH_PRIVATE_KEY" | base64 -d > ~/.ssh/id_ed25519 2>/dev/null; then
echo "Decoded base64 key"
else
printf '%s\n' "$DEPLOY_SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519
fi
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan -p "${DEPLOY_PORT:-22}" "${DEPLOY_HOST}" >> ~/.ssh/known_hosts 2>/dev/null || true
- name: Deploy
run: |
test -n "$DEPLOY_HOST" || (echo "DEPLOY_HOST must be set" && exit 1)
test -n "$DEPLOY_SSH_PRIVATE_KEY" || (echo "DEPLOY_SSH_PRIVATE_KEY must be set" && exit 1)
bash scripts/deploy-actions.sh
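The decode-or-raw fallback in the `Setup SSH key` step can be exercised locally before storing the secret. A self-contained sketch using a throwaway file (a real key would use your actual key path):

```shell
set -eu
# Throwaway stand-in for a real private key file
key_file="$(mktemp)"
printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----' 'dummy-material' \
  '-----END OPENSSH PRIVATE KEY-----' > "$key_file"

# Recommended secret format: base64 on a single line (no wraps)
encoded="$(base64 < "$key_file" | tr -d '\n')"

# Mirrors the workflow's first branch: `base64 -d` restores the key
decoded="$(printf '%s' "$encoded" | base64 -d)"
[ "$decoded" = "$(cat "$key_file")" ] && echo "base64 round-trip OK"
rm -f "$key_file"
```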

.gitignore (vendored, 9 changed lines)
View File

@@ -4,5 +4,14 @@ __pycache__/
 .env
 config/secrets.yaml
 data/cache/
+data/workspaces/
+data/strategy_templates.json
 .idea/
 .vscode/
+.worktrees/
+tests/artifacts/
+secrets/
+.workspaces/
+.cache/
+uv.lock

.pre-commit-config.yaml (new file, 27 lines)
View File

@@ -0,0 +1,27 @@
# Pre-commit hooks for vault-dash
# Install: pip install pre-commit && pre-commit install
repos:
# Ruff - fast Python linter
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.8
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
files: ^(app|tests|scripts)/
# Black - Python formatter
- repo: https://github.com/psf/black
rev: 26.3.1
hooks:
- id: black
files: ^(app|tests|scripts)/
language_version: python3.12
# Type checking with mypy (optional, slower)
# Uncomment to enable:
# - repo: https://github.com/pre-commit/mirrors-mypy
# rev: v1.15.0
# hooks:
# - id: mypy
# args: [--ignore-missing-imports]
# files: ^app/

View File

@@ -0,0 +1,23 @@
{
"settings_id": "3e5143f6-29da-4416-8fca-edeaaac986ae",
"name": "Backtest 2020-01-01 - 2023-12-31",
"data_source": "databento",
"dataset": "XNAS.BASIC",
"schema": "ohlcv-1d",
"start_date": "2020-01-01",
"end_date": "2023-12-31",
"underlying_symbol": "GLD",
"start_price": 0.0,
"underlying_units": 1000.0,
"loan_amount": 0.0,
"margin_call_ltv": 0.75,
"template_slugs": [
"protective-put-atm-12m"
],
"cache_key": "",
"data_cost_usd": 0.0,
"provider_ref": {
"provider_id": "synthetic_v1",
"pricing_mode": "synthetic_bs_mid"
}
}

View File

@@ -0,0 +1,23 @@
{
"settings_id": "a48fe9fe-90d0-4cfc-a78f-a8db01cbf4d4",
"name": "Backtest 2020-01-01 - 2023-12-31",
"data_source": "databento",
"dataset": "XNAS.BASIC",
"schema": "ohlcv-1d",
"start_date": "2020-01-01",
"end_date": "2023-12-31",
"underlying_symbol": "GLD",
"start_price": 0.0,
"underlying_units": 1000.0,
"loan_amount": 0.0,
"margin_call_ltv": 0.75,
"template_slugs": [
"protective-put-atm-12m"
],
"cache_key": "",
"data_cost_usd": 0.0,
"provider_ref": {
"provider_id": "synthetic_v1",
"pricing_mode": "synthetic_bs_mid"
}
}

View File

@@ -0,0 +1,23 @@
{
"settings_id": "23d8dd8b-1798-45c7-855f-415c04355477",
"name": "Backtest 2020-01-01 - 2023-12-31",
"data_source": "databento",
"dataset": "XNAS.BASIC",
"schema": "ohlcv-1d",
"start_date": "2020-01-01",
"end_date": "2023-12-31",
"underlying_symbol": "GLD",
"start_price": 0.0,
"underlying_units": 1000.0,
"loan_amount": 0.0,
"margin_call_ltv": 0.75,
"template_slugs": [
"protective-put-atm-12m"
],
"cache_key": "",
"data_cost_usd": 0.0,
"provider_ref": {
"provider_id": "synthetic_v1",
"pricing_mode": "synthetic_bs_mid"
}
}

AGENTS.md (new file, 197 lines)
View File

@@ -0,0 +1,197 @@
policy:
subagent_usage:
required: true
rules:
- prefer sub-agents for parallelizable implementation work
- use sub-agents when implementing independent roadmap items
- create worktrees for sub-agents working on the same codebase
- review sub-agent output before merging to main
- use `agent: implementation-reviewer` for code quality checks
- use `agent: qa-validator` for end-to-end validation
- chain sub-agents for multi-step workflows (plan → implement → review)
- always use sub-agents unless the task is trivial or requires direct interaction
test_loop:
required: true
rules:
- run the app locally after changes
- run real tests against the running app
- for UI work, prefer Playwright/browser-visible checks
- verify the exact changed route/page
local_first:
required: true
rules:
- use local Docker/OrbStack before deploy
- deploy only after local behavior is verified
confidence:
rules:
- browser-visible behavior beats log-only confidence
- do not treat returned HTML as success if the page still has runtime/UI errors
- do not claim a feature is live unless the rendered UI consumes it
development_flow:
tdd: [red, orange, green]
build_rule:
- make build must enforce lint first
- if build is green, lint is already green
review:
required_before_merge: true
install_tool: review_install_agents
rules:
- install review agents with the review_install_agents tool before running the review workflow if needed
- use the full parallel review flow before merging worktree or sub-agent changes to main
- do not merge based only on compile/test results
backlog:
review_after_each_sprint: true
source_of_truth:
- docs/roadmap/ROADMAP.yaml
- docs/roadmap/backlog
- docs/roadmap/in-progress
- docs/roadmap/done
- docs/roadmap/blocked
- docs/roadmap/cancelled
rules:
- add newly discovered backlog items
- reorder priorities and dependencies based on new knowledge
- capture follow-up work explicitly
compatibility:
rules:
- preserve shared domain compatibility across parallel worktrees
- LombardPortfolio must remain available for strategy/core compatibility until intentionally removed everywhere
learnings:
nicegui:
- ui.header must be a top-level page layout element
- do not nest ui.header inside ui.column or similar containers
options_page:
- loading all expiries/chains before first paint can make the page appear broken
- render fast first, then load incrementally
nicegui_fastapi:
- pages should not assume request.app.state is the right access path for shared services
- prefer an explicit runtime/service registry
docker_dev:
- do not mount the whole repo over /app when the image contains required runtime scripts
- prefer narrower mounts like ./app and ./config
validation_checklist:
- local Docker stack starts cleanly
- /health returns OK
- changed page opens in browser automation
- no visible 500/runtime error
- screenshot artifact captured when useful
- relevant logs checked
gitea_ci:
repo:
base_url: "http://tea.uncloud.vpn"
api_base_url: "http://tea.uncloud.vpn/api/v1"
owner: "bu5hm4nn"
name: "vault-dash"
full_name: "bu5hm4nn/vault-dash"
ssh_remote: "ssh://git@tea.uncloud.vpn:2223/bu5hm4nn/vault-dash.git"
auth:
preferred_method: "tea login"
notes:
- "`tea` is already logged in for this repo and should be preferred for API access"
- "`tea api` works against this Gitea 1.25.5 instance even when higher-level `tea actions` commands are unavailable"
- A raw token may exist separately, but automation should not assume one unless the user says so
shell_setup: |
export GITEA_URL="${GITEA_URL:-http://tea.uncloud.vpn}"
export GITEA_API="${GITEA_API:-$GITEA_URL/api/v1}"
export GITEA_REPO="bu5hm4nn/vault-dash"
tea login list >/dev/null
command -v jq >/dev/null
triage:
preferred_method: "use `tea api` for run/job/log discovery; use the Gitea web UI as fallback"
tested_on: "2026-04-07"
tested_behavior:
- "`tea actions runs list` refuses to run because the server is older than 1.26.0"
- "`tea api` works and authenticates via the stored tea login"
- Gitea 1.25.5 exposes Actions run, job, log, workflow, artifact, and runner endpoints in swagger
- The new server supports `/repos/{owner}/{repo}/actions/runs/{run}/jobs` and `/repos/{owner}/{repo}/actions/jobs/{job_id}/logs`
response_shape_note:
- Gitea Actions list endpoints return object wrappers such as `.workflow_runs` or `.jobs`
- In jq, prefer `(.workflow_runs // .)` for runs and `(.jobs // .)` for jobs
default_scope:
- Prefer latest run for `git rev-parse HEAD`
- If there is no run for HEAD yet, inspect the latest failed run on the current branch
- If CI failed, Build and Deploy will not run because deploy is triggered after CI success
query_recipes:
list_recent_runs_current_branch: |
branch="$(git branch --show-current)"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/runs?branch=$branch&limit=20" | jq '(.workflow_runs // .) | map({id, workflow_id, status, event, head_branch, head_sha, created_at, html_url})'
latest_run_for_head_sha: |
branch="$(git branch --show-current)"
sha="$(git rev-parse HEAD)"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/runs?branch=$branch&limit=50" | jq -r --arg sha "$sha" '((.workflow_runs // .) | map(select((.head_sha // .commit_sha) == $sha)) | sort_by(.created_at // .run_started_at // .id) | reverse | .[0])'
latest_failed_run_current_branch: |
branch="$(git branch --show-current)"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/runs?branch=$branch&status=failure&limit=20" | jq -r '((.workflow_runs // .) | sort_by(.created_at // .run_started_at // .id) | reverse | .[0])'
list_jobs_for_run: |
run_id="<RUN_ID>"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/runs/$run_id/jobs" | jq '(.jobs // .) | map({id, name, status, conclusion, started_at, completed_at})'
first_failed_job_for_run: |
run_id="<RUN_ID>"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/runs/$run_id/jobs" | jq -r '((.jobs // .) | map(select((.conclusion // .status) == "failure" or .status == "failure")) | sort_by(.started_at // .id) | .[0])'
download_job_log: |
job_id="<JOB_ID>"
tea api -l tea.uncloud.vpn "/repos/bu5hm4nn/vault-dash/actions/jobs/$job_id/logs"
viewing_job_logs:
web_ui:
url: "http://tea.uncloud.vpn/bu5hm4nn/vault-dash/actions"
steps:
- Navigate to Actions tab in the Gitea UI
- Open the run from its `html_url` returned by `tea api`
- Expand the failing job (lint/test/type-check/build/deploy)
- Click the failed step to inspect detailed logs
api:
preferred_cli: "tea api"
notes:
- High-level `tea actions` commands may be version-gated by the CLI
- "`tea api` is the stable fallback for this Gitea instance"
workflows:
CI:
file: ".gitea/workflows/ci.yaml"
jobs: [lint, type-check, test]
triggers: [push main, pull_request main]
Build_and_Deploy:
file: ".gitea/workflows/deploy.yaml"
jobs: [build, deploy]
triggers:
- workflow_run after CI succeeds on main
- manual workflow_dispatch
common_failures:
missing_dependency:
symptom: "ModuleNotFoundError: No module named 'X'"
fix:
- Add runtime deps to requirements.txt
- Add dev/test tooling deps to requirements-dev.txt
- CI installs requirements-dev.txt, so keep CI-critical deps there
playwright_version_drift:
symptom: "Playwright browser/package mismatch or local vs CI behavior differs"
fix:
- Keep Python Playwright pinned in requirements-dev.txt
- Keep the pin aligned with `.gitea/workflows/ci.yaml` Playwright container image tag
tea_version_gate:
symptom: "`tea actions ...` says the server is older than 1.26.0"
fix: "Use `tea api` directly against the Actions endpoints instead of high-level tea subcommands"
type_error:
symptom: "error: Incompatible types..."
fix: "Run `mypy app/core app/models app/strategies app/services app/domain --show-error-codes --show-traceback` locally to reproduce"
test_failure:
symptom: "FAILED test_name"
fix: "Run failing test locally with `pytest -xvs <test_path_or_nodeid>`"
pre_merge_checklist:
- run `pytest tests/ -v --tb=short` locally and ensure all new/changed tests pass
- run `/review` to get implementation review and QA validation
- verify CI passes on Gitea (lint, test, type-check, build, deploy)
- address all review comments before merging to main

View File

@@ -1,13 +1,13 @@
# Deployment Guide # Deployment Guide
This project uses Forgejo Actions for CI/CD, building a Docker image and deploying to a VPN-reachable VPS over SSH. This project uses Gitea Actions for CI/CD, building a Docker image and deploying to a VPN-reachable VPS over SSH.
## Overview ## Overview
Deployment workflow: Deployment workflow:
1. **CI** (`.forgejo/workflows/ci.yaml`): Lint, test, type-check on every push 1. **CI** (`.gitea/workflows/ci.yaml`): lint, test, type-check on every push
2. **Deploy** (`.forgejo/workflows/deploy.yaml`): Build, scan, and deploy on main branch 2. **Build and Deploy** (`.gitea/workflows/deploy.yaml`): build and deploy on `main` after CI succeeds, or manually via workflow dispatch
--- ---
@@ -20,14 +20,15 @@ Deployment workflow:
- SSH access via VPN - SSH access via VPN
- Python 3.11+ (for healthcheck script) - Python 3.11+ (for healthcheck script)
### Forgejo Instance Setup ### Gitea Instance Setup
1. Enable Actions in Forgejo admin settings 1. Enable Actions in Gitea admin settings
2. Register a runner (or use Forgejo's built-in runner) 2. Enable Actions for the repository
3. Register an Actions runner
### Runner Setup ### Runner Setup
Forgejo supports both built-in runners and self-hosted Docker runners. For Docker-in-Docker builds, ensure the runner has: Gitea Actions uses `act_runner`. For Docker-based builds, ensure the runner host has:
- Docker installed and accessible - Docker installed and accessible
- `docker` and `docker compose` commands available - `docker` and `docker compose` commands available
@@ -35,34 +36,40 @@ Forgejo supports both built-in runners and self-hosted Docker runners. For Docke
Example runner registration: Example runner registration:
```bash ```bash
# On your Forgejo server # On the runner host
forgejo actions generate-runner-token > token.txt ./act_runner register --no-interactive --instance http://tea.uncloud.vpn --token <registration-token>
forgejo-runner register --instance-addr http://localhost:3000 --token $(cat token.txt) ./act_runner daemon
forgejo-runner daemon
``` ```
Repository, organization, and instance runner tokens can be created from the Gitea web UI under Actions runner settings.
--- ---
## 2. Required Secrets ## 2. Required Secrets
Configure in **Settings → Secrets and variables → Actions**: Configure in **Settings → Secrets and variables → Actions**.
### Secrets
| Secret | Description | | Secret | Description |
|--------|-------------| |--------|-------------|
| `DEPLOY_SSH_PRIVATE_KEY` | SSH key for VPS access | | `DEPLOY_SSH_PRIVATE_KEY` | SSH key for VPS access |
| `DEPLOY_HOST` | VPS IP/hostname (VPN-reachable) |
| `DEPLOY_USER` | Deploy user (default: `deploy`) |
| `DEPLOY_PORT` | SSH port (default: 22) |
| `DEPLOY_PATH` | Deploy path (default: `/opt/vault-dash`) |
| `NICEGUI_STORAGE_SECRET` | Session secret |
| `REGISTRY_PASSWORD` | Container registry token (if needed) | | `REGISTRY_PASSWORD` | Container registry token (if needed) |
| `DOCKERHUB_TOKEN` | Docker Hub token |
| `TURNSTILE_SECRET_KEY` | Turnstile secret key |
| `DATABENTO_API_KEY` | Databento API key |
### Variables
| Variable | Description |
|----------|-------------|
| `DEPLOY_HOST` | VPS IP/hostname (VPN-reachable) |
| `DEPLOY_USER` | Deploy user (default: `deploy`) |
| `DEPLOY_PORT` | SSH port (default: `22`) |
| `DEPLOY_PATH` | Deploy path (default: `/opt/vault-dash`) |
| `REGISTRY` | Container registry URL |
| `DOCKERHUB_USERNAME` | Docker Hub username |
| `TURNSTILE_SITE_KEY` | Turnstile site key |
---
@@ -127,7 +134,7 @@ export DEPLOY_SSH_PRIVATE_KEY="$(cat ~/.ssh/deploy_key)"
export APP_IMAGE="registry.example.com/vault-dash:latest"
# Run deploy script
bash scripts/deploy-actions.sh
```
---
@@ -150,22 +157,12 @@ vault.uncloud.vpn {
---
## 7. Troubleshooting
### Runner can't build Docker images
Ensure runner has Docker access:
```bash
docker run --rm hello-world
```


@@ -1,4 +1,4 @@
.PHONY: install dev lint test build deploy cache-stats cache-clear cache-list
install:
python3 -m venv .venv
@@ -7,11 +7,25 @@ install:
dev:
. .venv/bin/activate && python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
lint:
. .venv/bin/activate && ruff check app tests scripts
. .venv/bin/activate && black --check app tests scripts
test:
. .venv/bin/activate && pytest
build: lint
docker build -t vault-dash .
deploy:
./scripts/deploy.sh
# Cache management commands
cache-stats:
. .venv/bin/activate && python scripts/cache_cli.py stats
cache-list:
. .venv/bin/activate && python scripts/cache_cli.py list
cache-clear:
. .venv/bin/activate && python scripts/cache_cli.py clear --yes


@@ -42,6 +42,37 @@ docker run -p 8000:8000 vault-dash
docker-compose up -d
```
### Turnstile configuration
Workspace creation on the public welcome page is protected by Cloudflare Turnstile.
Local and test environments may use Cloudflare's published test keys:
```bash
TURNSTILE_SITE_KEY=1x00000000000000000000AA
TURNSTILE_SECRET_KEY=1x0000000000000000000000000000000AA
```
Negative-path testing can use the always-fail/blocked test keys:
```bash
TURNSTILE_SITE_KEY=2x00000000000000000000AB
TURNSTILE_SECRET_KEY=2x0000000000000000000000000000000AA
```
Production must provide real keys via environment variables:
```bash
TURNSTILE_SITE_KEY=...
TURNSTILE_SECRET_KEY=...
```
In Forgejo deployment:
- `vars.TURNSTILE_SITE_KEY` provides the public site key
- `secrets.TURNSTILE_SECRET_KEY` provides the server-side secret key
Browser tests run with `APP_ENV=test` and the Turnstile test keys.
## Architecture
```


@@ -0,0 +1,5 @@
"""Backtesting subsystem for historical hedge simulation."""
from .engine import SyntheticBacktestEngine
__all__ = ["SyntheticBacktestEngine"]

app/backtesting/engine.py

@@ -0,0 +1,167 @@
from __future__ import annotations
from dataclasses import dataclass
from app.models.backtest import (
BacktestDailyPoint,
BacktestScenario,
BacktestSummaryMetrics,
TemplateBacktestResult,
)
from app.models.strategy_template import StrategyTemplate
from app.services.backtesting.historical_provider import (
BacktestHistoricalProvider,
DailyClosePoint,
HistoricalOptionPosition,
)
@dataclass
class OpenSyntheticPosition(HistoricalOptionPosition):
pass
class SyntheticBacktestEngine:
def __init__(self, provider: BacktestHistoricalProvider) -> None:
self.provider = provider
def run_template(
self,
scenario: BacktestScenario,
template: StrategyTemplate,
history: list[DailyClosePoint],
) -> TemplateBacktestResult:
start_day = history[0]
cash_balance = scenario.initial_portfolio.cash_balance
total_hedge_cost = 0.0
total_option_payoff_realized = 0.0
warnings: list[str] = []
open_positions = self._open_positions(scenario, template, history, start_day)
opening_cost = sum(position.entry_price * position.quantity for position in open_positions)
cash_balance -= opening_cost
total_hedge_cost += opening_cost
daily_points: list[BacktestDailyPoint] = []
for day in history:
premium_cashflow = -opening_cost if day.date == start_day.date else 0.0
realized_option_cashflow = 0.0
option_market_value = 0.0
active_position_ids: list[str] = []
remaining_positions: list[HistoricalOptionPosition] = []
for position in open_positions:
valuation = self.provider.mark_position(
position,
symbol=scenario.symbol,
as_of_date=day.date,
spot=day.close,
)
self._append_warning(warnings, valuation.warning)
if not valuation.is_active:
cash_balance += valuation.realized_cashflow
realized_option_cashflow += valuation.realized_cashflow
total_option_payoff_realized += valuation.realized_cashflow
continue
option_market_value += valuation.mark * position.quantity
active_position_ids.append(position.position_id)
remaining_positions.append(position)
open_positions = remaining_positions
# Use closing price for portfolio value calculations
underlying_value_close = scenario.initial_portfolio.underlying_units * day.close
net_portfolio_value_close = underlying_value_close + option_market_value + cash_balance
# Use day's low for margin call evaluation (worst case during the day)
# If low is not available, fall back to close
worst_price = day.low if day.low is not None else day.close
underlying_value_worst = scenario.initial_portfolio.underlying_units * worst_price
net_portfolio_value_worst = underlying_value_worst + option_market_value + cash_balance
# LTVs for display (end-of-day at close)
ltv_unhedged = scenario.initial_portfolio.loan_amount / underlying_value_close
ltv_hedged = scenario.initial_portfolio.loan_amount / net_portfolio_value_close
# Margin calls use worst-case (low price) scenario
ltv_unhedged_worst = scenario.initial_portfolio.loan_amount / underlying_value_worst
ltv_hedged_worst = scenario.initial_portfolio.loan_amount / net_portfolio_value_worst
# Total option contracts held
option_contracts = sum(p.quantity for p in open_positions)
daily_points.append(
BacktestDailyPoint(
date=day.date,
spot_close=day.close,
spot_open=day.open if day.open is not None else day.close,
spot_low=day.low if day.low is not None else day.close,
spot_high=day.high if day.high is not None else day.close,
underlying_value=underlying_value_close,
option_market_value=option_market_value,
premium_cashflow=premium_cashflow,
realized_option_cashflow=realized_option_cashflow,
net_portfolio_value=net_portfolio_value_close,
loan_amount=scenario.initial_portfolio.loan_amount,
ltv_unhedged=ltv_unhedged,
ltv_hedged=ltv_hedged,
margin_call_unhedged=ltv_unhedged_worst >= scenario.initial_portfolio.margin_call_ltv,
margin_call_hedged=ltv_hedged_worst >= scenario.initial_portfolio.margin_call_ltv,
option_contracts=option_contracts,
active_position_ids=tuple(active_position_ids),
)
)
margin_call_days_unhedged = sum(1 for point in daily_points if point.margin_call_unhedged)
margin_call_days_hedged = sum(1 for point in daily_points if point.margin_call_hedged)
summary = BacktestSummaryMetrics(
start_value=scenario.initial_portfolio.start_value,
end_value_unhedged=daily_points[-1].underlying_value,
end_value_hedged_net=daily_points[-1].net_portfolio_value,
total_hedge_cost=total_hedge_cost,
total_option_payoff_realized=total_option_payoff_realized,
max_ltv_unhedged=max(point.ltv_unhedged for point in daily_points),
max_ltv_hedged=max(point.ltv_hedged for point in daily_points),
margin_call_days_unhedged=margin_call_days_unhedged,
margin_call_days_hedged=margin_call_days_hedged,
margin_threshold_breached_unhedged=margin_call_days_unhedged > 0,
margin_threshold_breached_hedged=margin_call_days_hedged > 0,
)
return TemplateBacktestResult(
template_slug=template.slug,
template_id=template.template_id,
template_version=template.version,
template_name=template.display_name,
summary_metrics=summary,
daily_path=tuple(daily_points),
warnings=tuple(warnings),
)
def _open_positions(
self,
scenario: BacktestScenario,
template: StrategyTemplate,
history: list[DailyClosePoint],
start_day: DailyClosePoint,
) -> list[HistoricalOptionPosition]:
positions: list[HistoricalOptionPosition] = []
for index, leg in enumerate(template.legs, start=1):
positions.append(
self.provider.open_position(
symbol=scenario.symbol,
leg=leg,
position_id=f"{template.slug}-position-{index}",
quantity=(
scenario.initial_portfolio.underlying_units * leg.allocation_weight * leg.target_coverage_pct
),
as_of_date=start_day.date,
spot=start_day.close,
trading_days=history,
)
)
return positions
@staticmethod
def _append_warning(warnings: list[str], warning: str | None) -> None:
if warning and warning not in warnings:
warnings.append(warning)


@@ -4,6 +4,12 @@ from typing import Any
from nicegui import ui
from app.domain.portfolio_math import (
strategy_benefit_per_unit,
strategy_protection_floor_bounds,
strategy_upside_cap_price,
)
class StrategyComparisonPanel:
"""Interactive strategy comparison with scenario slider and cost-benefit table."""
@@ -107,8 +113,6 @@ class StrategyComparisonPanel:
for strategy in self.strategies:
name = str(strategy.get("name", "strategy")).replace("_", " ").title()
cost = float(strategy.get("estimated_cost", 0.0))
scenario = self._scenario_benefit(strategy)
scenario_class = (
"text-emerald-600 dark:text-emerald-400" if scenario >= 0 else "text-rose-600 dark:text-rose-400"
@@ -117,8 +121,8 @@ class StrategyComparisonPanel:
<tr class=\"border-b border-slate-200 dark:border-slate-800\">
<td class=\"px-4 py-3 font-medium text-slate-900 dark:text-slate-100\">{name}</td>
<td class=\"px-4 py-3 text-slate-600 dark:text-slate-300\">${cost:,.2f}</td>
<td class=\"px-4 py-3 text-slate-600 dark:text-slate-300\">{self._format_floor(strategy)}</td>
<td class=\"px-4 py-3 text-slate-600 dark:text-slate-300\">{self._format_cap(strategy)}</td>
<td class=\"px-4 py-3 font-semibold {scenario_class}\">${scenario:,.2f}</td>
</tr>
""")
@@ -140,17 +144,26 @@ class StrategyComparisonPanel:
"""
def _scenario_benefit(self, strategy: dict[str, Any]) -> float:
return strategy_benefit_per_unit(
strategy,
current_spot=self.current_spot,
scenario_spot=self._scenario_spot(),
)
def _format_floor(self, strategy: dict[str, Any]) -> str:
bounds = strategy_protection_floor_bounds(strategy, current_spot=self.current_spot)
if bounds is None:
return self._fmt_optional_money(strategy.get("max_drawdown_floor"))
low, high = bounds
if abs(high - low) < 1e-9:
return f"${high:,.2f}"
return f"${low:,.2f}–${high:,.2f}"
def _format_cap(self, strategy: dict[str, Any]) -> str:
cap = strategy_upside_cap_price(strategy, current_spot=self.current_spot)
if cap is None:
return self._fmt_optional_money(strategy.get("upside_cap"))
return f"${cap:,.2f}"
@staticmethod
def _fmt_optional_money(value: Any) -> str:


@@ -1,7 +1,14 @@
from __future__ import annotations
from collections.abc import Iterable, Mapping
from datetime import date, datetime
from app.core.pricing.black_scholes import (
DEFAULT_RISK_FREE_RATE,
DEFAULT_VOLATILITY,
BlackScholesInputs,
black_scholes_price_and_greeks,
)
from app.models.option import OptionContract
from app.models.portfolio import LombardPortfolio
from app.models.strategy import HedgingStrategy
@@ -96,3 +103,74 @@ def portfolio_net_equity(
hedge_cost=hedge_cost,
option_payoff_value=payoff_value,
)
_ZERO_GREEKS = {"delta": 0.0, "gamma": 0.0, "theta": 0.0, "vega": 0.0, "rho": 0.0}
def option_row_greeks(
row: Mapping[str, object],
underlying_price: float,
*,
risk_free_rate: float = DEFAULT_RISK_FREE_RATE,
valuation_date: date | None = None,
) -> dict[str, float]:
"""Calculate Black-Scholes Greeks for an option-chain row.
Prefers live implied volatility when available. If it is missing or invalid,
a conservative default volatility is used. Invalid or expired rows return
zero Greeks instead of raising.
"""
if underlying_price <= 0:
return dict(_ZERO_GREEKS)
strike_raw = row.get("strike", 0.0)
strike = float(strike_raw) if isinstance(strike_raw, (int, float)) else 0.0
if strike <= 0:
return dict(_ZERO_GREEKS)
option_type = str(row.get("type", "")).lower()
if option_type not in {"call", "put"}:
return dict(_ZERO_GREEKS)
expiry_raw = row.get("expiry")
if not isinstance(expiry_raw, str) or not expiry_raw:
return dict(_ZERO_GREEKS)
try:
expiry = datetime.fromisoformat(expiry_raw).date()
except ValueError:
return dict(_ZERO_GREEKS)
valuation = valuation_date or date.today()
days_to_expiry = (expiry - valuation).days
if days_to_expiry <= 0:
return dict(_ZERO_GREEKS)
iv_raw = row.get("impliedVolatility", 0.0) or 0.0
implied_volatility = float(iv_raw) if isinstance(iv_raw, (int, float)) else 0.0
volatility = implied_volatility if implied_volatility > 0 else DEFAULT_VOLATILITY
# option_type is validated to be in {"call", "put"} above, so it's safe to pass
try:
pricing = black_scholes_price_and_greeks(
BlackScholesInputs(
spot=underlying_price,
strike=strike,
time_to_expiry=days_to_expiry / 365.0,
risk_free_rate=risk_free_rate,
volatility=volatility,
option_type=option_type, # type: ignore[arg-type]
valuation_date=valuation,
)
)
except ValueError:
return dict(_ZERO_GREEKS)
return {
"delta": pricing.delta,
"gamma": pricing.gamma,
"theta": pricing.theta,
"vega": pricing.vega,
"rho": pricing.rho,
}

app/domain/__init__.py

@@ -0,0 +1,51 @@
from app.domain.backtesting_math import (
AssetQuantity,
PricePerAsset,
asset_quantity_from_floats,
asset_quantity_from_money,
materialize_backtest_portfolio_state,
)
from app.domain.instruments import (
asset_quantity_from_weight,
instrument_metadata,
price_per_weight_from_asset_price,
weight_from_asset_quantity,
)
from app.domain.portfolio_math import (
build_alert_context,
portfolio_snapshot_from_config,
resolve_portfolio_spot_from_quote,
strategy_metrics_from_snapshot,
)
from app.domain.units import (
BaseCurrency,
Money,
PricePerWeight,
Weight,
WeightUnit,
decimal_from_float,
to_decimal,
)
__all__ = [
"BaseCurrency",
"WeightUnit",
"Money",
"Weight",
"PricePerWeight",
"AssetQuantity",
"PricePerAsset",
"asset_quantity_from_money",
"asset_quantity_from_floats",
"materialize_backtest_portfolio_state",
"to_decimal",
"decimal_from_float",
"portfolio_snapshot_from_config",
"build_alert_context",
"resolve_portfolio_spot_from_quote",
"strategy_metrics_from_snapshot",
"instrument_metadata",
"price_per_weight_from_asset_price",
"weight_from_asset_quantity",
"asset_quantity_from_weight",
]


@@ -0,0 +1,148 @@
from __future__ import annotations
from dataclasses import dataclass
from decimal import Decimal
from app.domain.units import BaseCurrency, Money, Weight, WeightUnit, _coerce_currency, decimal_from_float, to_decimal
from app.models.backtest import BacktestPortfolioState
from app.models.portfolio import PortfolioConfig
@dataclass(frozen=True, slots=True)
class AssetQuantity:
amount: Decimal
symbol: str
def __post_init__(self) -> None:
object.__setattr__(self, "amount", to_decimal(self.amount))
symbol = str(self.symbol).strip().upper()
if not symbol:
raise ValueError("Asset symbol is required")
object.__setattr__(self, "symbol", symbol)
def __mul__(self, other: object) -> Money:
if isinstance(other, PricePerAsset):
other.assert_symbol(self.symbol)
return Money(amount=self.amount * other.amount, currency=other.currency)
return NotImplemented
def __truediv__(self, other: object) -> AssetQuantity:
if isinstance(other, bool):
return NotImplemented
if isinstance(other, Decimal):
return AssetQuantity(amount=self.amount / other, symbol=self.symbol)
if isinstance(other, int):
return AssetQuantity(amount=self.amount / Decimal(other), symbol=self.symbol)
return NotImplemented
@dataclass(frozen=True, slots=True)
class PricePerAsset:
amount: Decimal
currency: BaseCurrency | str
symbol: str
def __post_init__(self) -> None:
amount = to_decimal(self.amount)
if amount < 0:
raise ValueError("PricePerAsset amount must be non-negative")
object.__setattr__(self, "amount", amount)
object.__setattr__(self, "currency", _coerce_currency(self.currency))
symbol = str(self.symbol).strip().upper()
if not symbol:
raise ValueError("Asset symbol is required")
object.__setattr__(self, "symbol", symbol)
@property
def _currency_typed(self) -> BaseCurrency:
"""Type-narrowed currency accessor for internal use."""
return self.currency # type: ignore[return-value]
def assert_symbol(self, symbol: str) -> PricePerAsset:
normalized = str(symbol).strip().upper()
if self.symbol != normalized:
raise ValueError(f"Asset symbol mismatch: {self.symbol} != {normalized}")
return self
def __mul__(self, other: object) -> Money | PricePerAsset:
if isinstance(other, bool):
return NotImplemented
if isinstance(other, AssetQuantity):
other_symbol = str(other.symbol).strip().upper()
self.assert_symbol(other_symbol)
return Money(amount=self.amount * other.amount, currency=self.currency)
if isinstance(other, Decimal):
return PricePerAsset(amount=self.amount * other, currency=self.currency, symbol=self.symbol)
if isinstance(other, int):
return PricePerAsset(amount=self.amount * Decimal(other), currency=self.currency, symbol=self.symbol)
return NotImplemented
def __rmul__(self, other: object) -> PricePerAsset:
if isinstance(other, bool):
return NotImplemented
if isinstance(other, Decimal):
return PricePerAsset(amount=self.amount * other, currency=self.currency, symbol=self.symbol)
if isinstance(other, int):
return PricePerAsset(amount=self.amount * Decimal(other), currency=self.currency, symbol=self.symbol)
return NotImplemented
def asset_quantity_from_money(value: Money, spot: PricePerAsset) -> AssetQuantity:
value.assert_currency(spot._currency_typed)
if spot.amount <= 0:
raise ValueError("Spot price per asset must be positive")
return AssetQuantity(amount=value.amount / spot.amount, symbol=spot.symbol)
def asset_quantity_from_floats(portfolio_value: float, entry_spot: float, symbol: str) -> float:
notional = Money(amount=decimal_from_float(portfolio_value), currency=BaseCurrency.USD)
spot = PricePerAsset(amount=decimal_from_float(entry_spot), currency=BaseCurrency.USD, symbol=symbol)
return float(asset_quantity_from_money(notional, spot).amount)
def asset_quantity_from_workspace_config(config: PortfolioConfig, *, entry_spot: float, symbol: str) -> float:
if config.gold_ounces is not None and config.gold_ounces > 0:
try:
from app.domain.instruments import asset_quantity_from_weight
quantity = asset_quantity_from_weight(
symbol,
Weight(amount=decimal_from_float(float(config.gold_ounces)), unit=WeightUnit.OUNCE_TROY),
)
return float(quantity.amount)
except ValueError:
pass
if config.gold_value is not None:
return asset_quantity_from_floats(float(config.gold_value), entry_spot, symbol)
raise ValueError("Workspace config must provide collateral weight or value")
def materialize_backtest_portfolio_state(
*,
symbol: str,
underlying_units: float,
entry_spot: float,
loan_amount: float,
margin_call_ltv: float,
currency: str = "USD",
cash_balance: float = 0.0,
financing_rate: float = 0.0,
) -> BacktestPortfolioState:
normalized_symbol = str(symbol).strip().upper()
quantity = AssetQuantity(amount=decimal_from_float(underlying_units), symbol=normalized_symbol)
spot = PricePerAsset(
amount=decimal_from_float(entry_spot),
currency=BaseCurrency.USD,
symbol=normalized_symbol,
)
loan = Money(amount=decimal_from_float(loan_amount), currency=currency)
_ = quantity * spot  # exercise unit checks: raises on symbol/currency mismatch
return BacktestPortfolioState(
currency=str(loan.currency),
underlying_units=float(quantity.amount),
entry_spot=float(spot.amount),
loan_amount=float(loan.amount),
margin_call_ltv=margin_call_ltv,
cash_balance=cash_balance,
financing_rate=financing_rate,
)
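The symbol-checked multiplication that `materialize_backtest_portfolio_state` exercises can be sketched in miniature. These are simplified stand-ins for `AssetQuantity` and `PricePerAsset`, not the real classes:

```python
from dataclasses import dataclass
from decimal import Decimal


# Miniature stand-ins: multiplying a quantity by a per-asset price yields a
# currency amount, but only when the asset symbols agree.
@dataclass(frozen=True)
class Qty:
    amount: Decimal
    symbol: str


@dataclass(frozen=True)
class Price:
    amount: Decimal
    currency: str
    symbol: str


def notional(qty: Qty, price: Price) -> Decimal:
    if qty.symbol != price.symbol:
        raise ValueError(f"Asset symbol mismatch: {qty.symbol} != {price.symbol}")
    return qty.amount * price.amount


value = notional(Qty(Decimal("2.5"), "GLD"), Price(Decimal("400"), "USD", "GLD"))
```

The point of the pattern is that mixing, say, a GLD quantity with a GC=F price fails loudly at the arithmetic layer instead of silently producing a wrong number.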

app/domain/conversions.py

@@ -0,0 +1,310 @@
"""Display mode conversion utilities for GLD/XAU views.
This module handles conversion between GLD share prices and physical gold prices
based on the user's display mode preference.
Key insight:
- In GLD mode: show share prices directly, no conversion to oz
- In XAU mode: convert GLD shares to oz-equivalent using expense-adjusted backing
"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
from app.domain.instruments import gld_ounces_per_share
from app.models.position import Position
@dataclass(frozen=True)
class DisplayContext:
"""Context for display mode conversions.
Attributes:
mode: Display mode ("GLD" for shares, "XAU" for physical gold)
reference_date: Date for historical conversion lookups
gld_ounces_per_share: GLD backing ratio for the reference date
"""
mode: str
reference_date: date | None = None
gld_ounces_per_share: Decimal | None = None
def __post_init__(self) -> None:
if self.mode not in {"GLD", "XAU"}:
raise ValueError(f"Invalid display mode: {self.mode!r}")
@classmethod
def create(cls, mode: str, reference_date: date | None = None) -> "DisplayContext":
"""Create a display context with computed GLD backing ratio."""
gld_backing = None
if mode == "XAU" and reference_date is not None:
gld_backing = gld_ounces_per_share(reference_date)
return cls(mode=mode, reference_date=reference_date, gld_ounces_per_share=gld_backing)
def is_gld_mode(display_mode: str) -> bool:
"""Check if display mode is GLD (share view)."""
return display_mode == "GLD"
def is_xau_mode(display_mode: str) -> bool:
"""Check if display mode is XAU (physical gold view)."""
return display_mode == "XAU"
def convert_position_to_display(
position: Position,
display_mode: str,
reference_date: date | None = None,
) -> tuple[Decimal, str, Decimal]:
"""Convert a position to display units based on display mode.
Args:
position: Position to convert
display_mode: "GLD" for shares, "XAU" for physical gold
reference_date: Date for historical conversion (for GLD->XAU)
Returns:
Tuple of (display_quantity, display_unit, display_entry_price)
Examples:
>>> # GLD position in GLD mode: show as-is
>>> from datetime import date
>>> from decimal import Decimal
>>> from app.models.position import create_position
>>> pos = create_position(
... underlying="GLD",
... quantity=Decimal("100"),
... unit="shares",
... entry_price=Decimal("400"),
... entry_date=date.today(),
... )
>>> qty, unit, price = convert_position_to_display(pos, "GLD")
>>> qty, unit, price
(Decimal('100'), 'shares', Decimal('400'))
>>> # GLD position in XAU mode: convert to oz
>>> qty, unit, price = convert_position_to_display(pos, "XAU", date.today())
>>> # qty will be shares * oz_per_share
"""
if display_mode == "GLD":
# GLD mode: show shares directly
if position.underlying == "GLD":
return position.quantity, position.unit, position.entry_price
# Non-GLD positions in GLD mode: would need conversion (not implemented yet)
return position.quantity, position.unit, position.entry_price
elif display_mode == "XAU":
# XAU mode: convert to physical gold ounces
if position.underlying == "GLD":
# Convert GLD shares to oz using expense-adjusted backing
backing = gld_ounces_per_share(reference_date or position.entry_date)
display_qty = position.quantity * backing
display_price = position.entry_price / backing # Price per oz
return display_qty, "oz", display_price
# XAU positions already in oz
return position.quantity, position.unit, position.entry_price
else:
raise ValueError(f"Unsupported display mode: {display_mode!r}")
def convert_price_to_display(
price: Decimal,
from_unit: str,
to_mode: str,
reference_date: date | None = None,
) -> tuple[Decimal, str]:
"""Convert a price to display mode units.
Args:
price: Price value to convert
from_unit: Source unit ("shares" or "oz")
to_mode: Target display mode ("GLD" or "XAU")
reference_date: Date for historical conversion
Returns:
Tuple of (converted_price, display_unit)
"""
if to_mode == "GLD":
if from_unit == "shares":
return price, "shares"
elif from_unit == "oz":
# Convert oz price to share price
backing = gld_ounces_per_share(reference_date or date.today())
return price * backing, "shares"
elif to_mode == "XAU":
if from_unit == "oz":
return price, "oz"
elif from_unit == "shares":
# Convert share price to oz price
backing = gld_ounces_per_share(reference_date or date.today())
return price / backing, "oz"
raise ValueError(f"Unsupported conversion: {from_unit} -> {to_mode}")
def convert_quantity_to_display(
quantity: Decimal,
from_unit: str,
to_mode: str,
reference_date: date | None = None,
) -> tuple[Decimal, str]:
"""Convert a quantity to display mode units.
Args:
quantity: Quantity value to convert
from_unit: Source unit ("shares" or "oz")
to_mode: Target display mode ("GLD" or "XAU")
reference_date: Date for historical conversion
Returns:
Tuple of (converted_quantity, display_unit)
"""
if to_mode == "GLD":
if from_unit == "shares":
return quantity, "shares"
elif from_unit == "oz":
# Convert oz to shares (inverse of backing)
backing = gld_ounces_per_share(reference_date or date.today())
return quantity / backing, "shares"
elif to_mode == "XAU":
if from_unit == "oz":
return quantity, "oz"
elif from_unit == "shares":
# Convert shares to oz using backing
backing = gld_ounces_per_share(reference_date or date.today())
return quantity * backing, "oz"
raise ValueError(f"Unsupported conversion: {from_unit} -> {to_mode}")
def get_display_unit_label(underlying: str, display_mode: str) -> str:
"""Get the display unit label for a position based on display mode.
Args:
underlying: Position underlying symbol
display_mode: Display mode ("GLD" or "XAU")
Returns:
Unit label string ("shares", "oz", etc.)
"""
if underlying == "GLD":
if display_mode == "GLD":
return "shares"
else: # XAU mode
return "oz"
elif underlying in ("XAU", "GC=F"):
return "oz"  # Physical gold is always displayed in oz
return "units"
def calculate_position_value_in_display_mode(
quantity: Decimal,
unit: str,
current_price: Decimal,
price_unit: str,
display_mode: str,
reference_date: date | None = None,
) -> Decimal:
"""Calculate position value in display mode.
Args:
quantity: Position quantity
unit: Position unit
current_price: Current market price
price_unit: Price unit ("shares" or "oz")
display_mode: Display mode ("GLD" or "XAU")
reference_date: Date for conversion
Returns:
Position value in USD
"""
if display_mode == "GLD" and unit == "shares":
# GLD mode: shares × share_price
return quantity * current_price
elif display_mode == "XAU" and unit == "oz":
# XAU mode: oz × oz_price
return quantity * current_price
elif display_mode == "GLD" and unit == "oz":
# Convert oz to shares, then calculate
backing = gld_ounces_per_share(reference_date or date.today())
shares = quantity / backing
share_price = current_price * backing
return shares * share_price
elif display_mode == "XAU" and unit == "shares":
# Convert shares to oz, then calculate
backing = gld_ounces_per_share(reference_date or date.today())
oz = quantity * backing
oz_price = current_price / backing
return oz * oz_price
# Fallback: direct multiplication
return quantity * current_price
def calculate_pnl_in_display_mode(
quantity: Decimal,
unit: str,
entry_price: Decimal,
current_price: Decimal,
display_mode: str,
reference_date: date | None = None,
) -> Decimal:
"""Calculate P&L in display mode.
Args:
quantity: Position quantity
unit: Position unit
entry_price: Entry price per unit
current_price: Current price per unit
display_mode: Display mode ("GLD" or "XAU")
reference_date: Date for conversion
Returns:
P&L in USD
"""
if display_mode == "GLD" and unit == "shares":
# GLD mode: (current_share_price - entry_share_price) × shares
return (current_price - entry_price) * quantity
elif display_mode == "XAU" and unit == "oz":
# XAU mode: (current_oz_price - entry_oz_price) × oz
return (current_price - entry_price) * quantity
elif display_mode == "GLD" and unit == "oz":
# Convert to share basis: shares = oz / backing, share price = oz price * backing
backing = gld_ounces_per_share(reference_date or date.today())
shares = quantity / backing
share_entry = entry_price * backing
share_current = current_price * backing
return (share_current - share_entry) * shares
elif display_mode == "XAU" and unit == "shares":
# Convert to oz basis: oz = shares * backing, oz price = share price / backing
backing = gld_ounces_per_share(reference_date or date.today())
oz = quantity * backing
oz_entry = entry_price / backing
oz_current = current_price / backing
return (oz_current - oz_entry) * oz
# Fallback
return (current_price - entry_price) * quantity
def get_display_mode_options() -> dict[str, str]:
"""Return available display mode options for the settings UI.
Returns:
Dict mapping mode value to display label for NiceGUI select.
"""
return {
"GLD": "GLD Shares (show share prices directly)",
"XAU": "Physical Gold (oz) (convert to gold ounces)",
}
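The conversion rules in this module can be checked with a self-contained round trip. The backing value `0.092` oz/share is a hypothetical stand-in for what `gld_ounces_per_share` would return on a given date:

```python
from decimal import Decimal

# Illustrative round trip of the GLD <-> XAU conversion rules, using an
# assumed backing of 0.092 oz of gold per GLD share.
backing = Decimal("0.092")          # oz of gold per GLD share
shares = Decimal("100")             # position quantity in GLD shares
share_price = Decimal("400")        # USD per share

oz = shares * backing               # shares -> oz
oz_price = share_price / backing    # share price -> price per oz

# Position value should be invariant under the display-mode change.
value_gld = shares * share_price
value_xau = oz * oz_price
```

Quantities and prices convert in opposite directions (quantity multiplies by the backing where the price divides, and vice versa), which is what keeps position value and P&L the same in either display mode.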

app/domain/instruments.py

@@ -0,0 +1,186 @@
from __future__ import annotations

import math
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
from enum import Enum

from app.domain.backtesting_math import AssetQuantity, PricePerAsset
from app.domain.units import BaseCurrency, PricePerWeight, Weight, WeightUnit


class Underlying(str, Enum):
    """Supported underlying instruments for options evaluation."""

    GLD = "GLD"
    GC_F = "GC=F"

    def display_name(self) -> str:
        """Human-readable display name."""
        return {
            Underlying.GLD: "SPDR Gold Shares ETF",
            Underlying.GC_F: "Gold Futures (COMEX)",
        }.get(self, self.value)

    def description(self) -> str:
        """Description of the underlying and data source."""
        return {
            Underlying.GLD: "SPDR Gold Shares ETF (live data via yfinance)",
            Underlying.GC_F: "Gold Futures (coming soon)",
        }.get(self, "")


# GLD expense ratio decay parameters (from docs/GLD_BASIS_RESEARCH.md)
# Formula: ounces_per_share = 0.10 * e^(-0.004 * years_since_2004)
GLD_INITIAL_OUNCES_PER_SHARE = Decimal("0.10")
GLD_EXPENSE_DECAY_RATE = Decimal("0.004")  # 0.4% annual decay
GLD_LAUNCH_YEAR = 2004
GLD_LAUNCH_DATE = date(2004, 11, 18)  # GLD IPO date on NYSE

# GC=F contract specifications
GC_F_OUNCES_PER_CONTRACT = Decimal("100")  # 100 troy oz per contract
GC_F_QUOTE_CURRENCY = BaseCurrency.USD


def gld_ounces_per_share(reference_date: date | None = None) -> Decimal:
    """Calculate GLD's gold backing per share for a specific date.

    GLD's expense ratio (0.40% annually) causes the gold backing per share to
    decay exponentially from the initial 0.10 oz/share at launch (November 18, 2004).

    Formula: ounces_per_share = 0.10 * e^(-0.004 * years_since_2004)

    Args:
        reference_date: Date to calculate backing for. Must be on or after
            the GLD launch date (2004-11-18). Defaults to today.

    Returns:
        Decimal representing troy ounces of gold backing per GLD share.

    Raises:
        ValueError: If reference_date is before GLD launch (2004-11-18).

    Examples:
        >>> from datetime import date
        >>> # Launch date returns the initial backing (no decay yet)
        >>> gld_ounces_per_share(date(2004, 11, 18))
        Decimal('0.100')
        >>> # 2026 backing should be ~0.0916 oz/share (8.4% decay)
        >>> result = gld_ounces_per_share(date(2026, 1, 1))
        >>> float(result)  # doctest: +SKIP
        0.0916...
    """
    if reference_date is None:
        reference_date = date.today()
    if reference_date < GLD_LAUNCH_DATE:
        raise ValueError(
            f"GLD backing data unavailable before {GLD_LAUNCH_DATE}. GLD launched on November 18, 2004."
        )
    years_since_launch = Decimal(reference_date.year - GLD_LAUNCH_YEAR)
    decay_factor = Decimal(str(math.exp(-float(GLD_EXPENSE_DECAY_RATE * years_since_launch))))
    return GLD_INITIAL_OUNCES_PER_SHARE * decay_factor
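The decay formula can be checked standalone. This sketch mirrors the function above with the constants inlined (whole calendar years, as in the source):

```python
import math
from datetime import date
from decimal import Decimal

def backing_oz_per_share(reference_date: date) -> Decimal:
    # ounces_per_share = 0.10 * e^(-0.004 * whole calendar years since 2004)
    years = reference_date.year - 2004
    return Decimal("0.10") * Decimal(str(math.exp(-0.004 * years)))

print(float(backing_oz_per_share(date(2004, 11, 18))))  # 0.1 (no decay at launch)
print(float(backing_oz_per_share(date(2026, 1, 1))))    # ~0.0916 after 22 years
```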
@dataclass(frozen=True, slots=True)
class InstrumentMetadata:
    symbol: str
    quote_currency: BaseCurrency | str
    weight_per_share: Weight

    def __post_init__(self) -> None:
        normalized_symbol = str(self.symbol).strip().upper()
        if not normalized_symbol:
            raise ValueError("Instrument symbol is required")
        object.__setattr__(self, "symbol", normalized_symbol)
        object.__setattr__(self, "quote_currency", BaseCurrency(self.quote_currency))
        object.__setattr__(self, "weight_per_share", self.weight_per_share)

    def assert_symbol(self, symbol: str) -> InstrumentMetadata:
        normalized = str(symbol).strip().upper()
        if self.symbol != normalized:
            raise ValueError(f"Instrument symbol mismatch: {self.symbol} != {normalized}")
        return self

    def assert_currency(self, currency: BaseCurrency | str) -> InstrumentMetadata:
        normalized = BaseCurrency(currency)
        if self.quote_currency is not normalized:
            raise ValueError(f"Instrument currency mismatch: {self.quote_currency} != {normalized}")
        return self

    def price_per_weight_from_asset_price(
        self,
        price: PricePerAsset,
        *,
        per_unit: WeightUnit = WeightUnit.OUNCE_TROY,
    ) -> PricePerWeight:
        self.assert_symbol(price.symbol)
        self.assert_currency(price.currency)
        weight_per_share = self.weight_per_share.to_unit(per_unit)
        if weight_per_share.amount <= 0:
            raise ValueError("Instrument weight_per_share must be positive")
        return PricePerWeight(
            amount=price.amount / weight_per_share.amount,
            currency=price.currency,
            per_unit=per_unit,
        )

    def weight_from_asset_quantity(self, quantity: AssetQuantity) -> Weight:
        self.assert_symbol(quantity.symbol)
        return Weight(amount=quantity.amount * self.weight_per_share.amount, unit=self.weight_per_share.unit)

    def asset_quantity_from_weight(self, weight: Weight) -> AssetQuantity:
        normalized_weight = weight.to_unit(self.weight_per_share._unit_typed)
        if self.weight_per_share.amount <= 0:
            raise ValueError("Instrument weight_per_share must be positive")
        return AssetQuantity(amount=normalized_weight.amount / self.weight_per_share.amount, symbol=self.symbol)


_GLD = InstrumentMetadata(
    symbol="GLD",
    quote_currency=BaseCurrency.USD,
    weight_per_share=Weight(amount=gld_ounces_per_share(), unit=WeightUnit.OUNCE_TROY),
)
_GC_F = InstrumentMetadata(
    symbol="GC=F",
    quote_currency=GC_F_QUOTE_CURRENCY,
    weight_per_share=Weight(amount=GC_F_OUNCES_PER_CONTRACT, unit=WeightUnit.OUNCE_TROY),
)
_INSTRUMENTS: dict[str, InstrumentMetadata] = {
    _GLD.symbol: _GLD,
    _GC_F.symbol: _GC_F,
}


def supported_underlyings() -> list[Underlying]:
    """Return list of supported underlying instruments."""
    return list(Underlying)


def instrument_metadata(symbol: str) -> InstrumentMetadata:
    normalized = str(symbol).strip().upper()
    metadata = _INSTRUMENTS.get(normalized)
    if metadata is None:
        raise ValueError(f"Unsupported instrument metadata: {normalized or symbol!r}")
    return metadata


def price_per_weight_from_asset_price(
    price: PricePerAsset,
    *,
    per_unit: WeightUnit = WeightUnit.OUNCE_TROY,
) -> PricePerWeight:
    return instrument_metadata(price.symbol).price_per_weight_from_asset_price(price, per_unit=per_unit)


def weight_from_asset_quantity(quantity: AssetQuantity) -> Weight:
    return instrument_metadata(quantity.symbol).weight_from_asset_quantity(quantity)


def asset_quantity_from_weight(symbol: str, weight: Weight) -> AssetQuantity:
    return instrument_metadata(symbol).asset_quantity_from_weight(weight)
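The conversion rule behind `price_per_weight_from_asset_price` is a single division: share price over oz-per-share backing. A worked sketch, with an assumed 0.0916 oz/share backing:

```python
from decimal import Decimal

weight_per_share = Decimal("0.0916")  # assumed GLD backing in troy oz per share
share_price = Decimal("305.00")       # USD per GLD share

price_per_oz = share_price / weight_per_share  # USD per troy oz of gold exposure
print(round(float(price_per_oz), 2))  # ~3329.69
```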

app/domain/portfolio_math.py Normal file
View File

@@ -0,0 +1,453 @@
from __future__ import annotations

import math
from datetime import date
from decimal import Decimal, InvalidOperation
from typing import Any, Mapping, TypedDict

from app.domain.backtesting_math import PricePerAsset
from app.domain.conversions import is_gld_mode
from app.domain.instruments import gld_ounces_per_share, instrument_metadata
from app.domain.units import BaseCurrency, Money, PricePerWeight, Weight, WeightUnit, decimal_from_float
from app.models.portfolio import PortfolioConfig

_DEFAULT_CASH_BUFFER = 18_500.0
_DECIMAL_ZERO = Decimal("0")
_DECIMAL_ONE = Decimal("1")
_DECIMAL_HUNDRED = Decimal("100")


class PortfolioSnapshot(TypedDict):
    """Typed snapshot of portfolio state for metrics calculations."""

    gold_value: float
    loan_amount: float
    ltv_ratio: float
    net_equity: float
    spot_price: float
    gold_units: float
    margin_call_ltv: float
    margin_call_price: float
    cash_buffer: float
    hedge_budget: float
    display_mode: str


def _decimal_ratio(numerator: Decimal, denominator: Decimal) -> Decimal:
    if denominator == 0:
        return _DECIMAL_ZERO
    return numerator / denominator


def _pct_factor(pct: int) -> Decimal:
    return _DECIMAL_ONE + (Decimal(pct) / _DECIMAL_HUNDRED)


def _money_to_float(value: Money) -> float:
    return float(value.amount)


def _as_money(value: Weight | Money) -> Money:
    """Narrow Weight | Money to Money after multiplication."""
    if isinstance(value, Money):
        return value
    raise TypeError(f"Expected Money, got {type(value).__name__}")


def _decimal_to_float(value: Decimal) -> float:
    return float(value)


def _spot_price(spot_price: float) -> PricePerWeight:
    return PricePerWeight(
        amount=decimal_from_float(spot_price),
        currency=BaseCurrency.USD,
        per_unit=WeightUnit.OUNCE_TROY,
    )


def _gold_weight(gold_ounces: float) -> Weight:
    return Weight(amount=decimal_from_float(gold_ounces), unit=WeightUnit.OUNCE_TROY)


def _safe_quote_price(value: object) -> float:
    """Parse a price value, returning 0.0 for invalid/non-finite inputs.

    Rejects NaN, Infinity, and non-positive values by returning 0.0.
    This defensive helper is used for quote data that may come from
    untrusted sources like APIs or user input.
    """
    try:
        if isinstance(value, (int, float)):
            parsed = float(value)
        elif isinstance(value, str):
            parsed = float(value.strip())
        else:
            return 0.0
    except (TypeError, ValueError):
        return 0.0
    if not math.isfinite(parsed) or parsed <= 0:
        return 0.0
    return parsed
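A close standalone sketch of this guard; note this variant also rejects bools explicitly, which the `isinstance(value, (int, float))` check above would otherwise let through as 0/1:

```python
import math

def safe_quote_price(value: object) -> float:
    """Return a finite positive float, or 0.0 for anything else."""
    try:
        if isinstance(value, bool):
            return 0.0
        if isinstance(value, (int, float)):
            parsed = float(value)
        elif isinstance(value, str):
            parsed = float(value.strip())
        else:
            return 0.0
    except (TypeError, ValueError):
        return 0.0
    return parsed if math.isfinite(parsed) and parsed > 0 else 0.0

print([safe_quote_price(v) for v in (" 215.5 ", "nan", "inf", -3, 0, None, "abc")])
# [215.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```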
def _strategy_decimal(value: object) -> Decimal | None:
    if value is None or isinstance(value, bool):
        return None
    if isinstance(value, Decimal):
        return value if value.is_finite() else None
    if isinstance(value, int):
        return Decimal(value)
    if isinstance(value, float):
        return decimal_from_float(value)
    if isinstance(value, str):
        stripped = value.strip()
        if not stripped:
            return None
        try:
            parsed = Decimal(stripped)
        except InvalidOperation:
            return None
        return parsed if parsed.is_finite() else None
    return None


def _strategy_downside_put_legs(strategy: Mapping[str, Any], current_spot: Decimal) -> list[tuple[Decimal, Decimal]]:
    raw_legs = strategy.get("downside_put_legs")
    if isinstance(raw_legs, (list, tuple)):
        parsed_legs: list[tuple[Decimal, Decimal]] = []
        for leg in raw_legs:
            if not isinstance(leg, Mapping):
                continue
            weight = _strategy_decimal(leg.get("allocation_weight", leg.get("weight")))
            strike_pct = _strategy_decimal(leg.get("strike_pct"))
            if weight is None or strike_pct is None or weight <= 0 or strike_pct <= 0:
                continue
            parsed_legs.append((weight, current_spot * strike_pct))
        if parsed_legs:
            return parsed_legs
    protection_floor_pct = _strategy_decimal(strategy.get("protection_floor_pct"))
    if protection_floor_pct is not None and protection_floor_pct > 0:
        return [(_DECIMAL_ONE, current_spot * protection_floor_pct)]
    absolute_floor = _strategy_decimal(strategy.get("max_drawdown_floor"))
    if absolute_floor is not None and absolute_floor > 0:
        return [(_DECIMAL_ONE, absolute_floor)]
    return [(_DECIMAL_ONE, current_spot * Decimal("0.95"))]


def _strategy_upside_cap_decimal(strategy: Mapping[str, Any], current_spot: Decimal) -> Decimal | None:
    upside_cap_pct = _strategy_decimal(strategy.get("upside_cap_pct"))
    if upside_cap_pct is not None and upside_cap_pct > 0:
        return current_spot * upside_cap_pct
    absolute_cap = _strategy_decimal(strategy.get("upside_cap"))
    if absolute_cap is not None and absolute_cap > 0:
        return absolute_cap
    return None


def _strategy_option_payoff_per_unit(
    strategy: Mapping[str, Any], current_spot: Decimal, scenario_spot: Decimal
) -> Decimal:
    return sum(
        weight * max(strike_price - scenario_spot, _DECIMAL_ZERO)
        for weight, strike_price in _strategy_downside_put_legs(strategy, current_spot)
    ) or Decimal("0")


def _strategy_upside_cap_effect_per_unit(
    strategy: Mapping[str, Any], current_spot: Decimal, scenario_spot: Decimal
) -> Decimal:
    cap = _strategy_upside_cap_decimal(strategy, current_spot)
    if cap is None or scenario_spot <= cap:
        return _DECIMAL_ZERO
    return -(scenario_spot - cap)


def strategy_protection_floor_bounds(strategy: Mapping[str, Any], *, current_spot: float) -> tuple[float, float] | None:
    current_spot_decimal = decimal_from_float(current_spot)
    legs = _strategy_downside_put_legs(strategy, current_spot_decimal)
    if not legs:
        return None
    floor_prices = [strike_price for _, strike_price in legs]
    return _decimal_to_float(min(floor_prices)), _decimal_to_float(max(floor_prices))


def strategy_upside_cap_price(strategy: Mapping[str, Any], *, current_spot: float) -> float | None:
    cap = _strategy_upside_cap_decimal(strategy, decimal_from_float(current_spot))
    if cap is None:
        return None
    return _decimal_to_float(cap)


def strategy_benefit_per_unit(strategy: Mapping[str, Any], *, current_spot: float, scenario_spot: float) -> float:
    current_spot_decimal = decimal_from_float(current_spot)
    scenario_spot_decimal = decimal_from_float(scenario_spot)
    cost = _strategy_decimal(strategy.get("estimated_cost")) or _DECIMAL_ZERO
    benefit = (
        _strategy_option_payoff_per_unit(strategy, current_spot_decimal, scenario_spot_decimal)
        + _strategy_upside_cap_effect_per_unit(strategy, current_spot_decimal, scenario_spot_decimal)
        - cost
    )
    return round(float(benefit), 2)
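The benefit arithmetic above reduces to: each put leg pays weight × max(strike − scenario spot, 0), the cap subtracts any appreciation beyond it, and the hedge cost is deducted. A compact sketch with hypothetical leg and cost values:

```python
from decimal import Decimal

def benefit_per_unit(legs, cost, cap, current_spot, scenario_spot):
    # legs: (weight, strike_pct) pairs, strikes expressed relative to current spot
    payoff = sum(w * max(current_spot * pct - scenario_spot, Decimal("0")) for w, pct in legs)
    cap_effect = Decimal("0") if cap is None or scenario_spot <= cap else -(scenario_spot - cap)
    return payoff + cap_effect - cost

legs = [(Decimal("1"), Decimal("0.95"))]  # one put struck at 95% of spot
spot = Decimal("200")

# 20% drawdown: put pays 190 - 160 = 30 per unit; minus 4 cost -> 26
print(benefit_per_unit(legs, Decimal("4"), None, spot, Decimal("160")))
# 10% rally with a cap at 105%: cap claws back 220 - 210 = 10; 0 - 10 - 4 -> -14
print(benefit_per_unit(legs, Decimal("4"), spot * Decimal("1.05"), spot, Decimal("220")))
```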
def resolve_collateral_spot_from_quote(
    quote: Mapping[str, object],
    *,
    fallback_symbol: str | None = None,
) -> tuple[float, str, str] | None:
    quote_price = _safe_quote_price(quote.get("price"))
    quote_source = str(quote.get("source", "unknown"))
    quote_updated_at = str(quote.get("updated_at", ""))
    quote_symbol = str(quote.get("symbol", fallback_symbol or "")).strip().upper()
    quote_unit = str(quote.get("quote_unit", "")).strip().lower()
    if quote_price <= 0 or not quote_symbol or quote_unit != "share":
        return None
    try:
        metadata = instrument_metadata(quote_symbol)
    except ValueError:
        return None
    converted_spot = metadata.price_per_weight_from_asset_price(
        PricePerAsset(amount=decimal_from_float(quote_price), currency=BaseCurrency.USD, symbol=quote_symbol),
        per_unit=WeightUnit.OUNCE_TROY,
    )
    return _decimal_to_float(converted_spot.amount), quote_source, quote_updated_at


def resolve_portfolio_spot_from_quote(
    config: PortfolioConfig,
    quote: Mapping[str, object],
    *,
    fallback_symbol: str | None = None,
) -> tuple[float, str, str]:
    """Resolve spot price from quote based on display mode.

    In GLD display mode: return the GLD share price directly (no conversion).
    In XAU display mode: convert the GLD share price to an oz-equivalent
    using the expense-adjusted backing.

    Args:
        config: Portfolio configuration with display_mode setting.
        quote: Quote data from the data service.
        fallback_symbol: Fallback symbol if the quote lacks one.

    Returns:
        Tuple of (spot_price, source, updated_at).
    """
    display_mode = getattr(config, "display_mode", "XAU")
    # Try to resolve from the quote first.
    resolved = resolve_collateral_spot_from_quote(quote, fallback_symbol=fallback_symbol)
    if resolved is None:
        # No valid quote; fall back to the configured entry price.
        configured_price = float(config.entry_price or 0.0)
        return configured_price, "configured_entry_price", ""
    spot_price, source, updated_at = resolved
    # In GLD mode, return the share price directly (no conversion to oz).
    if is_gld_mode(display_mode):
        quote_price = _safe_quote_price(quote.get("price"))
        if quote_price > 0:
            return quote_price, source, updated_at
    # XAU mode: use the oz-equivalent price already computed by resolve_collateral_spot_from_quote.
    return spot_price, source, updated_at


def portfolio_snapshot_from_config(
    config: PortfolioConfig | None = None,
    *,
    runtime_spot_price: float | None = None,
) -> dict[str, float | str]:
    """Build portfolio snapshot with display-mode-aware calculations.

    In GLD mode:
        - gold_units: shares (not oz)
        - spot_price: GLD share price
        - gold_value: shares × share_price

    In XAU mode:
        - gold_units: oz
        - spot_price: USD/oz
        - gold_value: oz × oz_price
    """
    if config is None:
        gold_weight = Weight(amount=Decimal("1000"), unit=WeightUnit.OUNCE_TROY)
        spot = PricePerWeight(amount=Decimal("215"), currency=BaseCurrency.USD, per_unit=WeightUnit.OUNCE_TROY)
        loan_amount = Money(amount=Decimal("145000"), currency=BaseCurrency.USD)
        margin_call_ltv = Decimal("0.75")
        hedge_budget = Money(amount=Decimal("8000"), currency=BaseCurrency.USD)
        display_mode = "XAU"
    else:
        display_mode = getattr(config, "display_mode", "XAU")
        resolved_spot = runtime_spot_price if runtime_spot_price is not None else float(config.entry_price or 0.0)
        if is_gld_mode(display_mode):
            # GLD mode: work with shares directly.
            # Use positions if available, otherwise fall back to legacy gold_ounces.
            if config.positions:
                # Sum GLD position quantities in shares.
                total_shares = Decimal("0")
                for pos in config.positions:
                    if pos.underlying == "GLD" and pos.unit == "shares":
                        total_shares += pos.quantity
                    elif pos.underlying == "GLD" and pos.unit == "oz":
                        # Convert oz to shares using the current backing.
                        backing = gld_ounces_per_share(date.today())
                        total_shares += pos.quantity / backing
                    else:
                        # Non-GLD positions: treat as oz and convert to shares.
                        backing = gld_ounces_per_share(date.today())
                        total_shares += pos.quantity / backing
                gold_weight = Weight(amount=total_shares, unit=WeightUnit.OUNCE_TROY)  # Stores shares in a Weight for now
            else:
                # Legacy: treat gold_ounces as oz and convert to shares.
                backing = gld_ounces_per_share(date.today())
                shares = Decimal(str(config.gold_ounces or 0.0)) / backing
                gold_weight = Weight(amount=shares, unit=WeightUnit.OUNCE_TROY)
            spot = PricePerWeight(
                amount=decimal_from_float(resolved_spot), currency=BaseCurrency.USD, per_unit=WeightUnit.OUNCE_TROY
            )
        else:
            # XAU mode: work with oz.
            gold_weight = _gold_weight(float(config.gold_ounces or 0.0))
            spot = _spot_price(resolved_spot)
        loan_amount = Money(amount=decimal_from_float(float(config.loan_amount)), currency=BaseCurrency.USD)
        margin_call_ltv = decimal_from_float(float(config.margin_threshold))
        hedge_budget = Money(amount=decimal_from_float(float(config.monthly_budget)), currency=BaseCurrency.USD)
    gold_value = _as_money(gold_weight * spot)
    net_equity = gold_value - loan_amount
    ltv_ratio = _decimal_ratio(loan_amount.amount, gold_value.amount)
    margin_call_price = loan_amount.amount / (margin_call_ltv * gold_weight.amount)
    return {
        "gold_value": _money_to_float(gold_value),
        "loan_amount": _money_to_float(loan_amount),
        "ltv_ratio": _decimal_to_float(ltv_ratio),
        "net_equity": _money_to_float(net_equity),
        "spot_price": _decimal_to_float(spot.amount),
        "gold_units": _decimal_to_float(gold_weight.amount),
        "margin_call_ltv": _decimal_to_float(margin_call_ltv),
        "margin_call_price": _decimal_to_float(margin_call_price),
        "cash_buffer": _DEFAULT_CASH_BUFFER,
        "hedge_budget": _money_to_float(hedge_budget),
        "display_mode": display_mode,
    }
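With the fallback defaults above (1000 oz, $215/oz spot, $145,000 loan, 0.75 margin-call LTV), the key ratios work out as follows:

```python
from decimal import Decimal

ounces = Decimal("1000")
spot = Decimal("215")
loan = Decimal("145000")
margin_call_ltv = Decimal("0.75")

gold_value = ounces * spot                             # 215000
ltv = loan / gold_value                                # ~0.674
# Spot price at which LTV would reach the margin-call threshold:
margin_call_price = loan / (margin_call_ltv * ounces)  # ~193.33
net_equity = gold_value - loan                         # 70000

print(float(ltv), float(margin_call_price), float(net_equity))
```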
def build_alert_context(
    config: PortfolioConfig,
    *,
    spot_price: float,
    source: str,
    updated_at: str,
) -> dict[str, float | str]:
    """Build alert context with display-mode-aware calculations."""
    display_mode = getattr(config, "display_mode", "XAU")
    if is_gld_mode(display_mode):
        # GLD mode: work with shares.
        backing = gld_ounces_per_share(date.today())
        shares = Decimal(str(config.gold_ounces or 0.0)) / backing
        gold_weight = Weight(amount=shares, unit=WeightUnit.OUNCE_TROY)
    else:
        # XAU mode: work with oz.
        gold_weight = _gold_weight(float(config.gold_ounces or 0.0))
    live_spot = _spot_price(spot_price)
    gold_value = _as_money(gold_weight * live_spot)
    loan_amount = Money(amount=decimal_from_float(float(config.loan_amount)), currency=BaseCurrency.USD)
    margin_call_ltv = decimal_from_float(float(config.margin_threshold))
    margin_call_price = (
        loan_amount.amount / (margin_call_ltv * gold_weight.amount) if gold_weight.amount > 0 else _DECIMAL_ZERO
    )
    return {
        "spot_price": _decimal_to_float(live_spot.amount),
        "gold_units": _decimal_to_float(gold_weight.amount),
        "gold_value": _money_to_float(gold_value),
        "loan_amount": _money_to_float(loan_amount),
        "ltv_ratio": _decimal_to_float(_decimal_ratio(loan_amount.amount, gold_value.amount)),
        "net_equity": _money_to_float(gold_value - loan_amount),
        "margin_call_ltv": _decimal_to_float(margin_call_ltv),
        "margin_call_price": _decimal_to_float(margin_call_price),
        "quote_source": source,
        "quote_updated_at": updated_at,
        "display_mode": display_mode,
    }


def strategy_metrics_from_snapshot(
    strategy: dict[str, Any], scenario_pct: int, snapshot: dict[str, Any]
) -> dict[str, Any]:
    spot = decimal_from_float(float(snapshot["spot_price"]))
    gold_weight = _gold_weight(float(snapshot["gold_units"]))
    current_spot = PricePerWeight(amount=spot, currency=BaseCurrency.USD, per_unit=WeightUnit.OUNCE_TROY)
    loan_amount = Money(amount=decimal_from_float(float(snapshot["loan_amount"])), currency=BaseCurrency.USD)
    base_equity = Money(amount=decimal_from_float(float(snapshot["net_equity"])), currency=BaseCurrency.USD)
    cost = _strategy_decimal(strategy.get("estimated_cost")) or _DECIMAL_ZERO
    scenario_prices = [spot * _pct_factor(pct) for pct in range(-25, 30, 5)]
    benefits = [
        strategy_benefit_per_unit(
            strategy,
            current_spot=_decimal_to_float(spot),
            scenario_spot=_decimal_to_float(price),
        )
        for price in scenario_prices
    ]
    scenario_price = spot * _pct_factor(scenario_pct)
    scenario_gold_value = _as_money(
        gold_weight
        * PricePerWeight(
            amount=scenario_price,
            currency=BaseCurrency.USD,
            per_unit=WeightUnit.OUNCE_TROY,
        )
    )
    current_gold_value = _as_money(gold_weight * current_spot)
    unhedged_equity = scenario_gold_value - loan_amount
    scenario_payoff_per_unit = _strategy_option_payoff_per_unit(strategy, spot, scenario_price)
    capped_upside_per_unit = _strategy_upside_cap_effect_per_unit(strategy, spot, scenario_price)
    option_payoff_cash = Money(amount=gold_weight.amount * scenario_payoff_per_unit, currency=BaseCurrency.USD)
    capped_upside_cash = Money(amount=gold_weight.amount * capped_upside_per_unit, currency=BaseCurrency.USD)
    hedge_cost_cash = Money(amount=gold_weight.amount * cost, currency=BaseCurrency.USD)
    hedged_equity = unhedged_equity + option_payoff_cash + capped_upside_cash - hedge_cost_cash
    waterfall_steps = [
        ("Base equity", round(_money_to_float(base_equity), 2)),
        ("Spot move", round(_money_to_float(scenario_gold_value - current_gold_value), 2)),
        ("Option payoff", round(_money_to_float(option_payoff_cash), 2)),
        ("Call cap", round(_money_to_float(capped_upside_cash), 2)),
        ("Hedge cost", round(_money_to_float(-hedge_cost_cash), 2)),
        ("Net equity", round(_money_to_float(hedged_equity), 2)),
    ]
    return {
        "strategy": strategy,
        "scenario_pct": scenario_pct,
        "scenario_price": round(float(scenario_price), 2),
        "scenario_series": [
            {"price": round(float(price), 2), "benefit": benefit}
            for price, benefit in zip(scenario_prices, benefits, strict=True)
        ],
        "waterfall_steps": waterfall_steps,
        "unhedged_equity": round(_money_to_float(unhedged_equity), 2),
        "hedged_equity": round(_money_to_float(hedged_equity), 2),
    }

283
app/domain/units.py Normal file
View File

@@ -0,0 +1,283 @@
from __future__ import annotations

from dataclasses import dataclass
from decimal import Decimal
from enum import StrEnum
from typing import TYPE_CHECKING


class BaseCurrency(StrEnum):
    USD = "USD"
    EUR = "EUR"
    CHF = "CHF"


class WeightUnit(StrEnum):
    GRAM = "g"
    KILOGRAM = "kg"
    OUNCE_TROY = "ozt"


GRAMS_PER_KILOGRAM = Decimal("1000")
GRAMS_PER_TROY_OUNCE = Decimal("31.1034768")

DecimalLike = Decimal | int | str


def to_decimal(value: DecimalLike) -> Decimal:
    if isinstance(value, bool):
        raise TypeError("Boolean values are not valid Decimal inputs")
    if isinstance(value, Decimal):
        amount = value
    elif isinstance(value, int):
        amount = Decimal(value)
    elif isinstance(value, str):
        amount = Decimal(value)
    else:
        raise TypeError(f"Unsupported decimal input type: {type(value)!r}")
    if not amount.is_finite():
        raise ValueError("Decimal value must be finite")
    return amount


def decimal_from_float(value: float) -> Decimal:
    if not isinstance(value, float):
        raise TypeError(f"Expected float, got {type(value)!r}")
    amount = Decimal(str(value))
    if not amount.is_finite():
        raise ValueError("Decimal value must be finite")
    return amount


def _coerce_currency(value: BaseCurrency | str) -> BaseCurrency:
    if isinstance(value, BaseCurrency):
        return value
    try:
        return BaseCurrency(value)
    except ValueError as exc:
        raise ValueError(f"Invalid currency: {value!r}") from exc


def _coerce_weight_unit(value: WeightUnit | str) -> WeightUnit:
    if isinstance(value, WeightUnit):
        return value
    try:
        return WeightUnit(value)
    except ValueError as exc:
        raise ValueError(f"Invalid weight unit: {value!r}") from exc


def weight_unit_factor(unit: WeightUnit) -> Decimal:
    if unit is WeightUnit.GRAM:
        return Decimal("1")
    if unit is WeightUnit.KILOGRAM:
        return GRAMS_PER_KILOGRAM
    if unit is WeightUnit.OUNCE_TROY:
        return GRAMS_PER_TROY_OUNCE
    raise ValueError(f"Unsupported weight unit: {unit}")


def convert_weight(amount: Decimal, from_unit: WeightUnit, to_unit: WeightUnit) -> Decimal:
    if from_unit is to_unit:
        return amount
    grams = amount * weight_unit_factor(from_unit)
    return grams / weight_unit_factor(to_unit)


def convert_price_per_weight(amount: Decimal, from_unit: WeightUnit, to_unit: WeightUnit) -> Decimal:
    if from_unit is to_unit:
        return amount
    return amount * weight_unit_factor(to_unit) / weight_unit_factor(from_unit)
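Note that the two converters scale in opposite directions: weights multiply by the source-unit factor, prices by the target-unit factor, so total market value is unit-independent. A quick standalone check with the same factor logic inlined:

```python
from decimal import Decimal

GRAMS_PER_TROY_OUNCE = Decimal("31.1034768")

def convert_weight(amount, f_from, f_to):
    return amount * f_from / f_to

def convert_price_per_weight(amount, f_from, f_to):
    return amount * f_to / f_from

OZT, G = GRAMS_PER_TROY_OUNCE, Decimal("1")  # grams-per-unit factors

weight_ozt = Decimal("2")
price_per_ozt = Decimal("3100")
weight_g = convert_weight(weight_ozt, OZT, G)                  # 62.2069536 g
price_per_g = convert_price_per_weight(price_per_ozt, OZT, G)  # ~99.67 USD/g

# Same market value on either basis (up to Decimal rounding):
assert abs(weight_ozt * price_per_ozt - weight_g * price_per_g) < Decimal("1e-18")
```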
@dataclass(frozen=True, slots=True)
class Money:
    amount: Decimal
    currency: BaseCurrency | str

    def __post_init__(self) -> None:
        object.__setattr__(self, "amount", to_decimal(self.amount))
        object.__setattr__(self, "currency", _coerce_currency(self.currency))

    @property
    def _currency_typed(self) -> BaseCurrency:
        """Type-narrowed currency accessor for internal use."""
        return self.currency  # type: ignore[return-value]

    @classmethod
    def zero(cls, currency: BaseCurrency) -> Money:
        return cls(amount=Decimal("0"), currency=currency)

    def assert_currency(self, currency: BaseCurrency) -> Money:
        if self.currency is not currency:
            raise ValueError(f"Currency mismatch: {self.currency} != {currency}")
        return self

    def __add__(self, other: object) -> Money:
        if not isinstance(other, Money):
            return NotImplemented
        if self.currency is not other.currency:
            raise ValueError(f"Currency mismatch: {self.currency} != {other.currency}")
        return Money(amount=self.amount + other.amount, currency=self.currency)

    def __sub__(self, other: object) -> Money:
        if not isinstance(other, Money):
            return NotImplemented
        if self.currency is not other.currency:
            raise ValueError(f"Currency mismatch: {self.currency} != {other.currency}")
        return Money(amount=self.amount - other.amount, currency=self.currency)

    def __mul__(self, other: object) -> Money:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return Money(amount=self.amount * other, currency=self.currency)
        if isinstance(other, int):
            return Money(amount=self.amount * Decimal(other), currency=self.currency)
        return NotImplemented

    def __rmul__(self, other: object) -> Money:
        return self.__mul__(other)

    def __truediv__(self, other: object) -> Money:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return Money(amount=self.amount / other, currency=self.currency)
        if isinstance(other, int):
            return Money(amount=self.amount / Decimal(other), currency=self.currency)
        return NotImplemented

    def __neg__(self) -> Money:
        return Money(amount=-self.amount, currency=self.currency)


if TYPE_CHECKING:
    # Type narrowing: after __post_init__, these are the actual types
    Money_amount: Decimal
    Money_currency: BaseCurrency
else:
    Money_amount = Decimal
    Money_currency = BaseCurrency | str


@dataclass(frozen=True, slots=True)
class Weight:
    amount: Decimal
    unit: WeightUnit | str

    def __post_init__(self) -> None:
        object.__setattr__(self, "amount", to_decimal(self.amount))
        object.__setattr__(self, "unit", _coerce_weight_unit(self.unit))

    @property
    def _unit_typed(self) -> WeightUnit:
        """Type-narrowed unit accessor for internal use."""
        return self.unit  # type: ignore[return-value]

    def to_unit(self, unit: WeightUnit) -> Weight:
        return Weight(amount=convert_weight(self.amount, self._unit_typed, unit), unit=unit)

    def __add__(self, other: object) -> Weight:
        if not isinstance(other, Weight):
            return NotImplemented
        other_converted = other.to_unit(self._unit_typed)
        return Weight(amount=self.amount + other_converted.amount, unit=self._unit_typed)

    def __sub__(self, other: object) -> Weight:
        if not isinstance(other, Weight):
            return NotImplemented
        other_converted = other.to_unit(self._unit_typed)
        return Weight(amount=self.amount - other_converted.amount, unit=self._unit_typed)

    def __mul__(self, other: object) -> Weight | Money:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return Weight(amount=self.amount * other, unit=self._unit_typed)
        if isinstance(other, int):
            return Weight(amount=self.amount * Decimal(other), unit=self._unit_typed)
        if isinstance(other, PricePerWeight):
            adjusted_weight = self.to_unit(other._per_unit_typed)
            return Money(amount=adjusted_weight.amount * other.amount, currency=other._currency_typed)
        return NotImplemented

    def __rmul__(self, other: object) -> Weight:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return Weight(amount=self.amount * other, unit=self._unit_typed)
        if isinstance(other, int):
            return Weight(amount=self.amount * Decimal(other), unit=self._unit_typed)
        return NotImplemented

    def __truediv__(self, other: object) -> Weight:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return Weight(amount=self.amount / other, unit=self._unit_typed)
        if isinstance(other, int):
            return Weight(amount=self.amount / Decimal(other), unit=self._unit_typed)
        return NotImplemented


@dataclass(frozen=True, slots=True)
class PricePerWeight:
    amount: Decimal
    currency: BaseCurrency | str
    per_unit: WeightUnit | str

    def __post_init__(self) -> None:
        amount = to_decimal(self.amount)
        if amount < 0:
            raise ValueError("PricePerWeight amount must be non-negative")
        object.__setattr__(self, "amount", amount)
        object.__setattr__(self, "currency", _coerce_currency(self.currency))
        object.__setattr__(self, "per_unit", _coerce_weight_unit(self.per_unit))

    @property
    def _currency_typed(self) -> BaseCurrency:
        """Type-narrowed currency accessor for internal use."""
        return self.currency  # type: ignore[return-value]

    @property
    def _per_unit_typed(self) -> WeightUnit:
        """Type-narrowed unit accessor for internal use."""
        return self.per_unit  # type: ignore[return-value]

    def to_unit(self, unit: WeightUnit) -> PricePerWeight:
        return PricePerWeight(
            amount=convert_price_per_weight(self.amount, self._per_unit_typed, unit),
            currency=self._currency_typed,
            per_unit=unit,
        )

    def __mul__(self, other: object) -> Money | PricePerWeight:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Weight):
            adjusted_weight = other.to_unit(self._per_unit_typed)
            return Money(amount=adjusted_weight.amount * self.amount, currency=self._currency_typed)
        if isinstance(other, Decimal):
            return PricePerWeight(
                amount=self.amount * other, currency=self._currency_typed, per_unit=self._per_unit_typed
            )
        if isinstance(other, int):
            return PricePerWeight(
                amount=self.amount * Decimal(other), currency=self._currency_typed, per_unit=self._per_unit_typed
            )
        return NotImplemented

    def __rmul__(self, other: object) -> PricePerWeight:
        if isinstance(other, bool):
            return NotImplemented
        if isinstance(other, Decimal):
            return PricePerWeight(
                amount=self.amount * other, currency=self._currency_typed, per_unit=self._per_unit_typed
            )
        if isinstance(other, int):
            return PricePerWeight(
                amount=self.amount * Decimal(other), currency=self._currency_typed, per_unit=self._per_unit_typed
            )
        return NotImplemented
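The `Weight × PricePerWeight → Money` pairing relies on Python's operator dispatch: the left operand's `__mul__` handles the case directly, and a reflected `__rmul__` covers the other order. A minimal standalone sketch of the pattern with simplified types (not the module's classes):

```python
from dataclasses import dataclass
from decimal import Decimal

GRAMS_PER_TROY_OUNCE = Decimal("31.1034768")
_FACTORS = {"g": Decimal("1"), "ozt": GRAMS_PER_TROY_OUNCE}

@dataclass(frozen=True)
class Weight:
    amount: Decimal
    unit: str

    def to_grams(self) -> Decimal:
        return self.amount * _FACTORS[self.unit]

@dataclass(frozen=True)
class PricePerWeight:
    amount: Decimal
    per_unit: str

    def __mul__(self, other: object):
        # Normalize the weight to this price's unit, then multiply out to cash.
        if isinstance(other, Weight):
            return other.to_grams() / _FACTORS[self.per_unit] * self.amount
        return NotImplemented

    __rmul__ = __mul__  # Weight * PricePerWeight resolves via this reflected form

value = Weight(Decimal("2"), "ozt") * PricePerWeight(Decimal("3100"), "ozt")
print(value)  # 6200 (units already match, so no rounding enters)
```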

View File

@@ -10,15 +10,20 @@ from contextlib import asynccontextmanager
from dataclasses import dataclass from dataclasses import dataclass
from typing import Any from typing import Any
from fastapi import FastAPI, Request, WebSocket, WebSocketDisconnect from fastapi import FastAPI, Form, Request, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import RedirectResponse, Response
from nicegui import ui # type: ignore[attr-defined] from nicegui import ui # type: ignore[attr-defined]
import app.pages # noqa: F401 import app.pages # noqa: F401
from app.api.routes import router as api_router from app.api.routes import router as api_router
from app.domain.portfolio_math import resolve_collateral_spot_from_quote
from app.models.portfolio import build_default_portfolio_config
from app.models.workspace import WORKSPACE_COOKIE, get_workspace_repository
from app.services import turnstile as turnstile_service
from app.services.cache import CacheService from app.services.cache import CacheService
from app.services.data_service import DataService from app.services.data_service import DataService
from app.services.runtime import set_data_service from app.services.runtime import get_data_service, set_data_service
logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO")) logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO"))
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -35,11 +40,14 @@ class Settings:
websocket_interval_seconds: int = 5 websocket_interval_seconds: int = 5
nicegui_mount_path: str = "/" nicegui_mount_path: str = "/"
nicegui_storage_secret: str = "vault-dash-dev-secret" nicegui_storage_secret: str = "vault-dash-dev-secret"
turnstile_site_key: str = ""
turnstile_secret_key: str = ""
@classmethod @classmethod
def load(cls) -> Settings: def load(cls) -> Settings:
cls._load_dotenv() cls._load_dotenv()
origins = os.getenv("CORS_ORIGINS", "*") origins = os.getenv("CORS_ORIGINS", "*")
turnstile = turnstile_service.load_turnstile_settings()
return cls( return cls(
app_name=os.getenv("APP_NAME", cls.app_name), app_name=os.getenv("APP_NAME", cls.app_name),
environment=os.getenv("APP_ENV", os.getenv("ENVIRONMENT", cls.environment)), environment=os.getenv("APP_ENV", os.getenv("ENVIRONMENT", cls.environment)),
@@ -50,6 +58,8 @@ class Settings:
websocket_interval_seconds=int(os.getenv("WEBSOCKET_INTERVAL_SECONDS", cls.websocket_interval_seconds)),
nicegui_mount_path=os.getenv("NICEGUI_MOUNT_PATH", cls.nicegui_mount_path),
nicegui_storage_secret=os.getenv("NICEGUI_STORAGE_SECRET", cls.nicegui_storage_secret),
turnstile_site_key=turnstile.site_key,
turnstile_secret_key=turnstile.secret_key,
)
@staticmethod
@@ -110,7 +120,7 @@ async def lifespan(app: FastAPI):
app.state.settings = settings
app.state.cache = CacheService(settings.redis_url, default_ttl=settings.cache_ttl)
await app.state.cache.connect()
app.state.data_service = DataService(app.state.cache, default_underlying=settings.default_symbol)
set_data_service(app.state.data_service)
app.state.ws_manager = ConnectionManager()
app.state.publisher_task = asyncio.create_task(publish_updates(app))
@@ -146,6 +156,45 @@ async def health(request: Request) -> dict[str, Any]:
}
@app.get("/workspaces/bootstrap", tags=["workspace"])
async def bootstrap_workspace_redirect() -> RedirectResponse:
return RedirectResponse(url="/", status_code=303)
@app.post("/workspaces/bootstrap", tags=["workspace"])
async def bootstrap_workspace(
request: Request,
turnstile_response: str = Form(alias="cf-turnstile-response", default=""),
) -> Response:
if not turnstile_service.verify_turnstile_token(
turnstile_response, request.client.host if request.client else None
):
return RedirectResponse(url="/?captcha_error=1", status_code=303)
repo = get_workspace_repository()
config = build_default_portfolio_config()
try:
data_service = get_data_service()
quote = await data_service.get_quote(data_service.default_symbol)
resolved_spot = resolve_collateral_spot_from_quote(quote, fallback_symbol=data_service.default_symbol)
if resolved_spot is not None:
config = build_default_portfolio_config(entry_price=resolved_spot[0])
except Exception as exc:
logger.warning("Falling back to static default workspace seed: %s", exc)
workspace_id = repo.create_workspace_id(config=config)
response = RedirectResponse(url=f"/{workspace_id}", status_code=303)
response.set_cookie(
key=WORKSPACE_COOKIE,
value=workspace_id,
httponly=True,
samesite="lax",
max_age=60 * 60 * 24 * 365,
path="/",
)
return response
@app.websocket("/ws/updates")
async def websocket_updates(websocket: WebSocket) -> None:
manager: ConnectionManager = websocket.app.state.ws_manager


@@ -1,15 +1,26 @@
"""Application domain models."""
from .event_preset import EventPreset, EventScenarioOverrides
from .option import Greeks, OptionContract, OptionMoneyness
from .portfolio import LombardPortfolio
from .position import Position, create_position
from .strategy import HedgingStrategy, ScenarioResult, StrategyType
from .strategy_template import EntryPolicy, RollPolicy, StrategyTemplate, TemplateLeg
__all__ = [
"EventPreset",
"EventScenarioOverrides",
"Greeks",
"HedgingStrategy",
"LombardPortfolio",
"OptionContract",
"OptionMoneyness",
"Position",
"ScenarioResult",
"StrategyType",
"StrategyTemplate",
"TemplateLeg",
"RollPolicy",
"EntryPolicy",
"create_position",
]

app/models/alerts.py Normal file

@@ -0,0 +1,102 @@
"""Alert notification domain models."""
from __future__ import annotations
import json
import math
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Any
class AlertHistoryLoadError(RuntimeError):
def __init__(self, history_path: Path, message: str) -> None:
super().__init__(message)
self.history_path = history_path
@dataclass
class AlertEvent:
severity: str
message: str
ltv_ratio: float
warning_threshold: float
critical_threshold: float
spot_price: float
updated_at: str
email_alerts_enabled: bool
def __post_init__(self) -> None:
for field_name in ("severity", "message", "updated_at"):
value = getattr(self, field_name)
if not isinstance(value, str):
raise TypeError(f"{field_name} must be a string")
for field_name in ("ltv_ratio", "warning_threshold", "critical_threshold", "spot_price"):
value = getattr(self, field_name)
if isinstance(value, bool) or not isinstance(value, (int, float)):
raise TypeError(f"{field_name} must be numeric")
numeric_value = float(value)
if not math.isfinite(numeric_value):
raise ValueError(f"{field_name} must be finite")
setattr(self, field_name, numeric_value)
if not isinstance(self.email_alerts_enabled, bool):
raise TypeError("email_alerts_enabled must be a bool")
def to_dict(self) -> dict[str, Any]:
return asdict(self)
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "AlertEvent":
return cls(**{k: v for k, v in data.items() if k in cls.__dataclass_fields__})
@dataclass
class AlertStatus:
severity: str
message: str
ltv_ratio: float
warning_threshold: float
critical_threshold: float
email_alerts_enabled: bool
history: list[AlertEvent]
history_unavailable: bool = False
history_notice: str | None = None
class AlertHistoryRepository:
"""File-backed alert history store."""
HISTORY_PATH = Path("data/alert_history.json")
def __init__(self, history_path: Path | None = None) -> None:
self.history_path = history_path or self.HISTORY_PATH
self.history_path.parent.mkdir(parents=True, exist_ok=True)
def load(self) -> list[AlertEvent]:
if not self.history_path.exists():
return []
try:
with self.history_path.open() as f:
data = json.load(f)
except json.JSONDecodeError as exc:
raise AlertHistoryLoadError(self.history_path, f"Alert history is not valid JSON: {exc}") from exc
except OSError as exc:
raise AlertHistoryLoadError(self.history_path, f"Alert history could not be read: {exc}") from exc
if not isinstance(data, list):
raise AlertHistoryLoadError(self.history_path, "Alert history payload must be a list")
events: list[AlertEvent] = []
for index, item in enumerate(data):
if not isinstance(item, dict):
raise AlertHistoryLoadError(self.history_path, f"Alert history entry {index} must be an object")
try:
events.append(AlertEvent.from_dict(item))
except (TypeError, ValueError) as exc:
raise AlertHistoryLoadError(
self.history_path,
f"Alert history entry {index} is invalid: {exc}",
) from exc
return events
def save(self, events: list[AlertEvent]) -> None:
with self.history_path.open("w") as f:
json.dump([event.to_dict() for event in events], f, indent=2)
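
The `AlertEvent.__post_init__` above leans on the fact that `bool` is a subclass of `int` in Python, so booleans must be rejected explicitly before the numeric `isinstance` check would otherwise accept them. A standalone sketch of that validation pattern (helper name is illustrative, not from the codebase):

```python
import math

def coerce_numeric(field_name: str, value: object) -> float:
    # bool subclasses int, so reject it explicitly before the numeric check
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"{field_name} must be numeric")
    numeric = float(value)
    if not math.isfinite(numeric):
        raise ValueError(f"{field_name} must be finite")
    return numeric
```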

app/models/backtest.py Normal file

@@ -0,0 +1,178 @@
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date
from app.models.event_preset import EventPreset
@dataclass(frozen=True)
class BacktestPortfolioState:
currency: str
underlying_units: float
entry_spot: float
loan_amount: float
margin_call_ltv: float
cash_balance: float = 0.0
financing_rate: float = 0.0
def __post_init__(self) -> None:
if self.currency.upper() != "USD":
raise ValueError("USD is the only supported currency in the MVP")
if self.underlying_units <= 0:
raise ValueError("underlying_units must be positive")
if self.entry_spot <= 0:
raise ValueError("entry_spot must be positive")
if self.loan_amount < 0:
raise ValueError("loan_amount must be non-negative")
if not 0 < self.margin_call_ltv < 1:
raise ValueError("margin_call_ltv must be between 0 and 1")
if self.loan_amount >= self.underlying_units * self.entry_spot:
raise ValueError("loan_amount must be less than initial collateral value")
@property
def start_value(self) -> float:
return self.underlying_units * self.entry_spot
@dataclass(frozen=True)
class TemplateRef:
slug: str
version: int
def __post_init__(self) -> None:
if not self.slug:
raise ValueError("template slug is required")
if self.version <= 0:
raise ValueError("template version must be positive")
@dataclass(frozen=True)
class ProviderRef:
provider_id: str
pricing_mode: str
def __post_init__(self) -> None:
if not self.provider_id:
raise ValueError("provider_id is required")
if not self.pricing_mode:
raise ValueError("pricing_mode is required")
@dataclass(frozen=True)
class BacktestScenario:
scenario_id: str
display_name: str
symbol: str
start_date: date
end_date: date
initial_portfolio: BacktestPortfolioState
template_refs: tuple[TemplateRef, ...]
provider_ref: ProviderRef
def __post_init__(self) -> None:
if not self.scenario_id:
raise ValueError("scenario_id is required")
if not self.display_name:
raise ValueError("display_name is required")
if not self.symbol:
raise ValueError("symbol is required")
if self.start_date > self.end_date:
raise ValueError("start_date must be on or before end_date")
if not self.template_refs:
raise ValueError("at least one template ref is required")
@dataclass(frozen=True)
class BacktestDailyPoint:
date: date
spot_close: float
underlying_value: float
option_market_value: float
premium_cashflow: float
realized_option_cashflow: float
net_portfolio_value: float
loan_amount: float
ltv_unhedged: float
ltv_hedged: float
margin_call_unhedged: bool
margin_call_hedged: bool
active_position_ids: tuple[str, ...] = field(default_factory=tuple)
# OHLC fields for chart and margin call evaluation
spot_open: float | None = None # Day's open
spot_low: float | None = None # Day's low for margin call evaluation
spot_high: float | None = None # Day's high
# Option position info
option_contracts: float = 0.0 # Number of option contracts held
@dataclass(frozen=True)
class BacktestSummaryMetrics:
start_value: float
end_value_unhedged: float
end_value_hedged_net: float
total_hedge_cost: float
total_option_payoff_realized: float
max_ltv_unhedged: float
max_ltv_hedged: float
margin_call_days_unhedged: int
margin_call_days_hedged: int
margin_threshold_breached_unhedged: bool
margin_threshold_breached_hedged: bool
@dataclass(frozen=True)
class TemplateBacktestResult:
template_slug: str
template_id: str
template_version: int
template_name: str
summary_metrics: BacktestSummaryMetrics
daily_path: tuple[BacktestDailyPoint, ...]
warnings: tuple[str, ...] = field(default_factory=tuple)
@dataclass(frozen=True)
class BacktestRunResult:
scenario_id: str
template_results: tuple[TemplateBacktestResult, ...]
@dataclass(frozen=True)
class EventComparisonRanking:
rank: int
template_slug: str
template_name: str
survived_margin_call: bool
margin_call_days_hedged: int
max_ltv_hedged: float
hedge_cost: float
final_equity: float
result: TemplateBacktestResult
@dataclass(frozen=True)
class EventComparisonReport:
event_preset: EventPreset
scenario: BacktestScenario
rankings: tuple[EventComparisonRanking, ...]
run_result: BacktestRunResult
@dataclass(frozen=True)
class BacktestPortfolioPreset:
"""User-facing preset for quick scenario configuration."""
preset_id: str
name: str
description: str
underlying_symbol: str
start_date: date
end_date: date
entry_spot: float | None = None # If None, derive from historical data
underlying_units: float = 1000.0
loan_amount: float = 50000.0
margin_call_ltv: float = 0.80
template_slug: str = "protective-put-atm-12m"
# Event-specific overrides
scenario_overrides: dict[str, object] | None = None
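
`BacktestDailyPoint` tracks `ltv_unhedged` and `ltv_hedged` side by side, but the engine's formula is not part of this diff. A minimal sketch of the distinction, under the assumption that the hedged variant counts the option book's market value toward the collateral base:

```python
def loan_to_value(loan_amount: float, collateral_value: float, option_market_value: float = 0.0) -> float:
    # Unhedged LTV divides the loan by collateral alone; the hedged variant
    # adds option market value to the denominator (assumption, not shown in the diff).
    denominator = collateral_value + option_market_value
    if denominator <= 0:
        raise ValueError("collateral must be positive")
    return loan_amount / denominator
```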


@@ -0,0 +1,88 @@
"""Backtest settings model for configuring backtest scenarios independently of portfolio settings."""
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from typing import Literal
from uuid import UUID, uuid4
from app.models.backtest import ProviderRef
@dataclass(frozen=True)
class BacktestSettings:
"""Configuration for running backtests independent of portfolio settings.
These settings determine what data to fetch and how to run backtests,
separate from the actual portfolio configurations being tested.
"""
settings_id: UUID
name: str
data_source: Literal["databento", "yfinance", "synthetic"]
dataset: str
schema: str
start_date: date
end_date: date
underlying_symbol: Literal["GLD", "GC", "XAU"]
start_price: float
underlying_units: float
loan_amount: float
margin_call_ltv: float
template_slugs: tuple[str, ...]
cache_key: str
data_cost_usd: float
provider_ref: ProviderRef
def __post_init__(self) -> None:
if not self.settings_id:
raise ValueError("settings_id is required")
if not self.name:
raise ValueError("name is required")
if self.start_date > self.end_date:
raise ValueError("start_date must be on or before end_date")
if self.data_source not in ("databento", "yfinance", "synthetic"):
raise ValueError("data_source must be 'databento', 'yfinance', or 'synthetic'")
if self.underlying_symbol not in ("GLD", "GC", "XAU"):
raise ValueError("underlying_symbol must be 'GLD', 'GC', or 'XAU'")
if self.start_price < 0:
raise ValueError("start_price must be non-negative")
if self.underlying_units <= 0:
raise ValueError("underlying_units must be positive")
if self.loan_amount < 0:
raise ValueError("loan_amount must be non-negative")
if not 0 < self.margin_call_ltv < 1:
raise ValueError("margin_call_ltv must be between 0 and 1")
if not self.template_slugs:
raise ValueError("at least one template slug is required")
if self.data_cost_usd < 0:
raise ValueError("data_cost_usd must be non-negative")
@classmethod
def create_default(cls, name: str = "Default Backtest Settings") -> BacktestSettings:
"""Create default backtest settings configuration."""
return cls(
settings_id=uuid4(),
name=name,
data_source="databento",
dataset="XNAS.BASIC",
schema="ohlcv-1d",
start_date=date(2020, 1, 1),
end_date=date(2023, 12, 31),
underlying_symbol="GLD",
start_price=0.0,
underlying_units=1000.0,
loan_amount=0.0,
margin_call_ltv=0.75,
template_slugs=("default-template",),
cache_key="",
data_cost_usd=0.0,
provider_ref=ProviderRef(provider_id="default", pricing_mode="standard"),
)
# For backward compatibility - re-export the canonical model
from app.models.backtest import BacktestScenario  # noqa: E402,F401
# TemplateRef and ProviderRef imported from app.models.backtest


@@ -0,0 +1,128 @@
"""Repository for persisting backtest settings configuration."""
from __future__ import annotations
import json
from datetime import date
from pathlib import Path
from typing import Any
from uuid import UUID
from app.models.backtest_settings import BacktestSettings
class BacktestSettingsRepository:
"""Repository for persisting backtest settings configuration.
Persists to `.workspaces/{workspace_id}/backtest_settings.json`
"""
def __init__(self, base_path: Path | str = Path(".workspaces")) -> None:
self.base_path = Path(base_path)
self.base_path.mkdir(parents=True, exist_ok=True)
def load(self, workspace_id: str) -> BacktestSettings | None:
"""Load backtest settings for a workspace.
Args:
workspace_id: The workspace ID to load settings for
Returns:
BacktestSettings: The loaded settings, or None if no settings exist
Raises:
ValueError: If settings file is invalid
"""
settings_path = self._settings_path(workspace_id)
if not settings_path.exists():
return None
try:
with open(settings_path) as f:
data = json.load(f)
return self._settings_from_dict(data)
except (json.JSONDecodeError, KeyError) as e:
raise ValueError(f"Invalid backtest settings file: {e}") from e
def save(self, workspace_id: str, settings: BacktestSettings) -> None:
"""Save backtest settings for a workspace.
Args:
workspace_id: The workspace ID to save settings for
settings: The settings to save
"""
settings_path = self._settings_path(workspace_id)
settings_path.parent.mkdir(parents=True, exist_ok=True)
payload = self._to_dict(settings)
tmp_path = settings_path.with_name(f"{settings_path.name}.tmp")
with open(tmp_path, "w") as f:
json.dump(payload, f, indent=2)
f.flush()
# Atomic replace
tmp_path.replace(settings_path)
def _settings_path(self, workspace_id: str) -> Path:
"""Get the path to the settings file for a workspace."""
return self.base_path / workspace_id / "backtest_settings.json"
def _to_dict(self, settings: BacktestSettings) -> dict[str, Any]:
"""Convert BacktestSettings to a dictionary for serialization."""
return {
"settings_id": str(settings.settings_id),
"name": settings.name,
"data_source": settings.data_source,
"dataset": settings.dataset,
"schema": settings.schema,
"start_date": settings.start_date.isoformat(),
"end_date": settings.end_date.isoformat(),
"underlying_symbol": settings.underlying_symbol,
"start_price": settings.start_price,
"underlying_units": settings.underlying_units,
"loan_amount": settings.loan_amount,
"margin_call_ltv": settings.margin_call_ltv,
"template_slugs": list(settings.template_slugs),
"cache_key": settings.cache_key,
"data_cost_usd": settings.data_cost_usd,
"provider_ref": {
"provider_id": settings.provider_ref.provider_id,
"pricing_mode": settings.provider_ref.pricing_mode,
},
}
def _settings_from_dict(self, data: dict[str, Any]) -> BacktestSettings:
"""Create BacktestSettings from a dictionary."""
# Handle potential string dates
start_date = data["start_date"]
if isinstance(start_date, str):
start_date = date.fromisoformat(start_date)
end_date = data["end_date"]
if isinstance(end_date, str):
end_date = date.fromisoformat(end_date)
# Import here to avoid circular import issues at module level
from app.models.backtest import ProviderRef
return BacktestSettings(
settings_id=UUID(data["settings_id"]),
name=data["name"],
data_source=data["data_source"],
dataset=data["dataset"],
schema=data["schema"],
start_date=start_date,
end_date=end_date,
underlying_symbol=data["underlying_symbol"],
start_price=data["start_price"],
underlying_units=data["underlying_units"],
loan_amount=data["loan_amount"],
margin_call_ltv=data["margin_call_ltv"],
template_slugs=tuple(data["template_slugs"]),
cache_key=data["cache_key"],
data_cost_usd=data["data_cost_usd"],
provider_ref=ProviderRef(
provider_id=data["provider_ref"]["provider_id"],
pricing_mode=data["provider_ref"]["pricing_mode"],
),
)
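
The `save` method above uses the write-to-temp-then-`replace` idiom, so a crash mid-write never leaves a truncated settings file behind. The pattern in isolation:

```python
import json
from pathlib import Path

def atomic_write_json(path: Path, payload: dict) -> None:
    # Write the full payload to a sibling temp file first...
    tmp_path = path.with_name(f"{path.name}.tmp")
    with open(tmp_path, "w") as f:
        json.dump(payload, f, indent=2)
        f.flush()
    # ...then swap it in; Path.replace is atomic on POSIX filesystems
    tmp_path.replace(path)
```

Readers of the target path therefore always see either the previous complete file or the new complete file, never a partial write.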

app/models/event_preset.py Normal file

@@ -0,0 +1,105 @@
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date, datetime, timezone
from typing import Literal
EventType = Literal["selloff", "recovery", "stress_test"]
@dataclass(frozen=True)
class EventScenarioOverrides:
lookback_days: int | None = None
recovery_days: int | None = None
default_template_slugs: tuple[str, ...] = field(default_factory=tuple)
def __post_init__(self) -> None:
if self.lookback_days is not None and self.lookback_days < 0:
raise ValueError("lookback_days must be non-negative")
if self.recovery_days is not None and self.recovery_days < 0:
raise ValueError("recovery_days must be non-negative")
if any(not slug for slug in self.default_template_slugs):
raise ValueError("default_template_slugs must not contain empty values")
def to_dict(self) -> dict[str, object]:
return {
"lookback_days": self.lookback_days,
"recovery_days": self.recovery_days,
"default_template_slugs": list(self.default_template_slugs),
}
@classmethod
def from_dict(cls, payload: dict[str, object] | None) -> EventScenarioOverrides:
if payload is None:
return cls()
return cls(
lookback_days=payload.get("lookback_days"), # type: ignore[arg-type]
recovery_days=payload.get("recovery_days"), # type: ignore[arg-type]
default_template_slugs=tuple(payload.get("default_template_slugs", [])), # type: ignore[arg-type]
)
@dataclass(frozen=True)
class EventPreset:
event_preset_id: str
slug: str
display_name: str
symbol: str
window_start: date
window_end: date
anchor_date: date | None
event_type: EventType
tags: tuple[str, ...] = field(default_factory=tuple)
description: str = ""
scenario_overrides: EventScenarioOverrides = field(default_factory=EventScenarioOverrides)
created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
def __post_init__(self) -> None:
if not self.event_preset_id:
raise ValueError("event_preset_id is required")
if not self.slug:
raise ValueError("slug is required")
if not self.display_name:
raise ValueError("display_name is required")
if not self.symbol:
raise ValueError("symbol is required")
if self.window_start > self.window_end:
raise ValueError("window_start must be on or before window_end")
if self.anchor_date is not None and not (self.window_start <= self.anchor_date <= self.window_end):
raise ValueError("anchor_date must fall inside the event window")
if self.event_type not in {"selloff", "recovery", "stress_test"}:
raise ValueError("unsupported event_type")
def to_dict(self) -> dict[str, object]:
return {
"event_preset_id": self.event_preset_id,
"slug": self.slug,
"display_name": self.display_name,
"symbol": self.symbol,
"window_start": self.window_start.isoformat(),
"window_end": self.window_end.isoformat(),
"anchor_date": self.anchor_date.isoformat() if self.anchor_date is not None else None,
"event_type": self.event_type,
"tags": list(self.tags),
"description": self.description,
"scenario_overrides": self.scenario_overrides.to_dict(),
"created_at": self.created_at.isoformat(),
}
@classmethod
def from_dict(cls, payload: dict[str, object]) -> EventPreset:
anchor_date = payload.get("anchor_date")
return cls(
event_preset_id=str(payload["event_preset_id"]),
slug=str(payload["slug"]),
display_name=str(payload["display_name"]),
symbol=str(payload["symbol"]),
window_start=date.fromisoformat(str(payload["window_start"])),
window_end=date.fromisoformat(str(payload["window_end"])),
anchor_date=date.fromisoformat(str(anchor_date)) if anchor_date else None,
event_type=payload["event_type"], # type: ignore[arg-type]
tags=tuple(payload.get("tags", [])), # type: ignore[arg-type]
description=str(payload.get("description", "")),
scenario_overrides=EventScenarioOverrides.from_dict(payload.get("scenario_overrides")), # type: ignore[arg-type]
created_at=datetime.fromisoformat(str(payload["created_at"])),
)

app/models/ltv_history.py Normal file

@@ -0,0 +1,198 @@
from __future__ import annotations
import json
from dataclasses import dataclass
from datetime import date, datetime
from decimal import Decimal, InvalidOperation
from pathlib import Path
from typing import Any
class LtvHistoryLoadError(RuntimeError):
def __init__(self, history_path: Path, message: str) -> None:
super().__init__(message)
self.history_path = history_path
@dataclass(frozen=True)
class LtvSnapshot:
snapshot_date: str
captured_at: str
ltv_ratio: Decimal
margin_threshold: Decimal
loan_amount: Decimal
collateral_value: Decimal
spot_price: Decimal
source: str
def __post_init__(self) -> None:
for field_name in ("snapshot_date", "captured_at", "source"):
value = getattr(self, field_name)
if not isinstance(value, str) or not value.strip():
raise ValueError(f"{field_name} must be a non-empty string")
date.fromisoformat(self.snapshot_date)
datetime.fromisoformat(self.captured_at.replace("Z", "+00:00"))
for field_name in (
"ltv_ratio",
"margin_threshold",
"loan_amount",
"collateral_value",
"spot_price",
):
value = getattr(self, field_name)
if not isinstance(value, Decimal) or not value.is_finite():
raise TypeError(f"{field_name} must be a finite Decimal")
if self.ltv_ratio < 0:
raise ValueError("ltv_ratio must be zero or greater")
if not Decimal("0") < self.margin_threshold < Decimal("1"):
raise ValueError("margin_threshold must be between 0 and 1")
if self.loan_amount < 0:
raise ValueError("loan_amount must be zero or greater")
if self.collateral_value <= 0:
raise ValueError("collateral_value must be positive")
if self.spot_price <= 0:
raise ValueError("spot_price must be positive")
def to_dict(self) -> dict[str, Any]:
return {
"snapshot_date": self.snapshot_date,
"captured_at": self.captured_at,
"ltv_ratio": _structured_ratio_payload(self.ltv_ratio),
"margin_threshold": _structured_ratio_payload(self.margin_threshold),
"loan_amount": _structured_money_payload(self.loan_amount),
"collateral_value": _structured_money_payload(self.collateral_value),
"spot_price": _structured_price_payload(self.spot_price),
"source": self.source,
}
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "LtvSnapshot":
return cls(
snapshot_date=_require_non_empty_string(data, "snapshot_date"),
captured_at=_require_non_empty_string(data, "captured_at"),
ltv_ratio=_parse_ratio_payload(data.get("ltv_ratio"), field_name="ltv_ratio"),
margin_threshold=_parse_ratio_payload(data.get("margin_threshold"), field_name="margin_threshold"),
loan_amount=_parse_money_payload(data.get("loan_amount"), field_name="loan_amount"),
collateral_value=_parse_money_payload(data.get("collateral_value"), field_name="collateral_value"),
spot_price=_parse_price_payload(data.get("spot_price"), field_name="spot_price"),
source=_require_non_empty_string(data, "source"),
)
class LtvHistoryRepository:
def __init__(self, base_path: Path | str = Path("data/workspaces")) -> None:
self.base_path = Path(base_path)
self.base_path.mkdir(parents=True, exist_ok=True)
def load(self, workspace_id: str) -> list[LtvSnapshot]:
history_path = self.history_path(workspace_id)
if not history_path.exists():
return []
try:
payload = json.loads(history_path.read_text())
except json.JSONDecodeError as exc:
raise LtvHistoryLoadError(history_path, f"LTV history is not valid JSON: {exc}") from exc
except OSError as exc:
raise LtvHistoryLoadError(history_path, f"LTV history could not be read: {exc}") from exc
if not isinstance(payload, list):
raise LtvHistoryLoadError(history_path, "LTV history payload must be a list")
snapshots: list[LtvSnapshot] = []
for index, item in enumerate(payload):
if not isinstance(item, dict):
raise LtvHistoryLoadError(history_path, f"LTV history entry {index} must be an object")
try:
snapshots.append(LtvSnapshot.from_dict(item))
except (TypeError, ValueError, KeyError) as exc:
raise LtvHistoryLoadError(history_path, f"LTV history entry {index} is invalid: {exc}") from exc
return snapshots
def save(self, workspace_id: str, snapshots: list[LtvSnapshot]) -> None:
history_path = self.history_path(workspace_id)
history_path.parent.mkdir(parents=True, exist_ok=True)
history_path.write_text(json.dumps([snapshot.to_dict() for snapshot in snapshots], indent=2))
def history_path(self, workspace_id: str) -> Path:
return self.base_path / workspace_id / "ltv_history.json"
def _require_non_empty_string(data: dict[str, Any], field_name: str) -> str:
value = data.get(field_name)
if not isinstance(value, str) or not value.strip():
raise ValueError(f"{field_name} must be a non-empty string")
return value
def _decimal_text(value: Decimal) -> str:
if value == value.to_integral():
return str(value.quantize(Decimal("1")))
normalized = value.normalize()
exponent = normalized.as_tuple().exponent
if isinstance(exponent, int) and exponent < 0:
return format(normalized, "f")
return str(normalized)
def _parse_decimal_payload(
payload: object,
*,
field_name: str,
expected_tag_key: str,
expected_tag_value: str,
expected_currency: str | None = None,
expected_per_weight_unit: str | None = None,
) -> Decimal:
if not isinstance(payload, dict):
raise TypeError(f"{field_name} must be an object")
if payload.get(expected_tag_key) != expected_tag_value:
raise ValueError(f"{field_name} must declare {expected_tag_key}={expected_tag_value!r}")
if expected_currency is not None and payload.get("currency") != expected_currency:
raise ValueError(f"{field_name} must declare currency={expected_currency!r}")
if expected_per_weight_unit is not None and payload.get("per_weight_unit") != expected_per_weight_unit:
raise ValueError(f"{field_name} must declare per_weight_unit={expected_per_weight_unit!r}")
raw_value = payload.get("value")
if not isinstance(raw_value, str) or not raw_value.strip():
raise ValueError(f"{field_name}.value must be a non-empty string")
try:
value = Decimal(raw_value)
except InvalidOperation as exc:
raise ValueError(f"{field_name}.value must be numeric") from exc
if not value.is_finite():
raise ValueError(f"{field_name}.value must be finite")
return value
def _parse_ratio_payload(payload: object, *, field_name: str) -> Decimal:
return _parse_decimal_payload(payload, field_name=field_name, expected_tag_key="unit", expected_tag_value="ratio")
def _parse_money_payload(payload: object, *, field_name: str) -> Decimal:
return _parse_decimal_payload(
payload,
field_name=field_name,
expected_tag_key="currency",
expected_tag_value="USD",
expected_currency="USD",
)
def _parse_price_payload(payload: object, *, field_name: str) -> Decimal:
return _parse_decimal_payload(
payload,
field_name=field_name,
expected_tag_key="currency",
expected_tag_value="USD",
expected_currency="USD",
expected_per_weight_unit="ozt",
)
def _structured_ratio_payload(value: Decimal) -> dict[str, str]:
return {"value": str(value), "unit": "ratio"}
def _structured_money_payload(value: Decimal) -> dict[str, str]:
return {"value": _decimal_text(value), "currency": "USD"}
def _structured_price_payload(value: Decimal) -> dict[str, str]:
return {"value": _decimal_text(value), "currency": "USD", "per_weight_unit": "ozt"}
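
The structured payloads above serialize `Decimal` values as strings tagged with a unit or currency, so a JSON round trip neither loses precision nor drops dimensional context. A self-contained sketch of the round trip for the ratio case:

```python
from decimal import Decimal, InvalidOperation

def dump_ratio(value: Decimal) -> dict[str, str]:
    # Strings survive JSON intact; a float would distort values like 0.755
    return {"value": str(value), "unit": "ratio"}

def load_ratio(payload: dict) -> Decimal:
    # Reject payloads missing the unit tag or carrying a non-numeric value
    if payload.get("unit") != "ratio":
        raise ValueError("expected unit='ratio'")
    try:
        value = Decimal(payload["value"])
    except InvalidOperation as exc:
        raise ValueError("value must be numeric") from exc
    if not value.is_finite():
        raise ValueError("value must be finite")
    return value
```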


@@ -3,9 +3,35 @@
from __future__ import annotations
import json
import os
from dataclasses import dataclass, field
from datetime import date
from decimal import Decimal
from pathlib import Path
from typing import Any, Literal
from app.models.position import Position, create_position
# Type aliases for display mode and entry basis
DisplayMode = Literal["GLD", "XAU"]
EntryBasisMode = Literal["value_price", "weight"]
_DEFAULT_GOLD_VALUE = 215_000.0
_DEFAULT_ENTRY_PRICE = 2_150.0
_LEGACY_DEFAULT_ENTRY_PRICE = 215.0
_DEFAULT_GOLD_OUNCES = 100.0
_LEGACY_DEFAULT_GOLD_OUNCES = 1_000.0
def build_default_portfolio_config(*, entry_price: float | None = None) -> "PortfolioConfig":
resolved_entry_price = float(entry_price) if entry_price is not None else _DEFAULT_ENTRY_PRICE
gold_value = resolved_entry_price * _DEFAULT_GOLD_OUNCES
return PortfolioConfig(
gold_value=gold_value,
entry_price=resolved_entry_price,
gold_ounces=_DEFAULT_GOLD_OUNCES,
entry_basis_mode="value_price",
)
@dataclass(frozen=True)
@@ -66,14 +92,21 @@ class PortfolioConfig:
"""User portfolio configuration with validation.
Attributes:
gold_value: Collateral baseline value in USD at entry
entry_price: Gold entry price per ounce in USD
gold_ounces: Canonical gold collateral weight in ounces
entry_basis_mode: Preferred settings UI input mode
loan_amount: Outstanding loan amount in USD
margin_threshold: LTV threshold for margin call (default 0.75)
monthly_budget: Approved monthly hedge budget
ltv_warning: LTV warning level for alerts (default 0.70)
positions: List of position entries (multi-position support)
"""
gold_value: float | None = None
entry_price: float | None = _DEFAULT_ENTRY_PRICE
gold_ounces: float | None = None
entry_basis_mode: EntryBasisMode = "value_price"
loan_amount: float = 145000.0
margin_threshold: float = 0.75
monthly_budget: float = 8000.0
@@ -84,19 +117,147 @@ class PortfolioConfig:
fallback_source: str = "yfinance"
refresh_interval: int = 5
# Underlying instrument selection
underlying: str = "GLD"
# Display mode: how to show positions (GLD shares vs physical gold)
display_mode: DisplayMode = "XAU" # "GLD" for share view, "XAU" for physical gold view
# Alert settings
volatility_spike: float = 0.25
spot_drawdown: float = 7.5
email_alerts: bool = False
# Multi-position support
positions: list[Position] = field(default_factory=list)
def __post_init__(self) -> None:
"""Normalize entry basis fields and validate configuration."""
self._normalize_entry_basis()
self.validate()
def migrate_to_positions_if_needed(self) -> None:
"""Migrate legacy single-entry portfolios to multi-position format.
Call this after loading from persistence to migrate legacy configs.
If positions list is empty but gold_ounces exists, create one Position
representing the legacy single entry.
"""
if self.positions:
# Already has positions, no migration needed
return
if self.gold_ounces is None or self.entry_price is None:
return
# Create a single position from legacy fields
position = create_position(
underlying=self.underlying,
quantity=Decimal(str(self.gold_ounces)),
unit="oz",
entry_price=Decimal(str(self.entry_price)),
entry_date=date.today(),
entry_basis_mode=self.entry_basis_mode,
)
# PortfolioConfig is not frozen, so we can set directly
self.positions = [position]
def _normalize_entry_basis(self) -> None:
"""Resolve user input into canonical weight + entry price representation."""
if self.entry_basis_mode not in {"value_price", "weight"}:
raise ValueError("Entry basis mode must be 'value_price' or 'weight'")
if self.entry_price is None or self.entry_price <= 0:
raise ValueError("Entry price must be positive")
if self.gold_value is not None and self.gold_value <= 0:
raise ValueError("Gold value must be positive")
if self.gold_ounces is not None and self.gold_ounces <= 0:
raise ValueError("Gold weight must be positive")
if self.gold_value is None and self.gold_ounces is None:
default = build_default_portfolio_config(entry_price=self.entry_price)
self.gold_value = default.gold_value
self.gold_ounces = default.gold_ounces
return
if self.gold_value is None and self.gold_ounces is not None:
self.gold_value = self.gold_ounces * self.entry_price
return
if self.gold_ounces is None and self.gold_value is not None:
self.gold_ounces = self.gold_value / self.entry_price
return
assert self.gold_value is not None
assert self.gold_ounces is not None
derived_gold_value = self.gold_ounces * self.entry_price
tolerance = max(0.01, abs(derived_gold_value) * 1e-9)
if abs(self.gold_value - derived_gold_value) > tolerance:
raise ValueError("Gold value and weight contradict each other")
self.gold_value = derived_gold_value
def _migrate_legacy_to_positions(self) -> None:
"""Migrate legacy single-entry portfolios to multi-position format.
If positions list is empty but gold_ounces exists, create one Position
representing the legacy single entry.
"""
if self.positions:
# Already has positions, no migration needed
return
if self.gold_ounces is None or self.entry_price is None:
return
# Create a single position from legacy fields
position = create_position(
underlying=self.underlying,
quantity=Decimal(str(self.gold_ounces)),
unit="oz",
entry_price=Decimal(str(self.entry_price)),
entry_date=date.today(),
entry_basis_mode=self.entry_basis_mode,
)
# PortfolioConfig is not frozen, so we can set directly
self.positions = [position]
def _sync_legacy_fields_from_positions(self) -> None:
"""Sync legacy gold_ounces, entry_price, gold_value from positions.
For backward compatibility, compute aggregate values from positions list.
"""
if not self.positions:
return
# For now, assume homogeneous positions (same underlying and unit)
# Sum quantities and compute weighted average entry price
total_quantity = Decimal("0")
total_value = Decimal("0")
for pos in self.positions:
if pos.unit == "oz":
total_quantity += pos.quantity
total_value += pos.entry_value
if total_quantity > 0:
avg_entry_price = total_value / total_quantity
self.gold_ounces = float(total_quantity)
self.entry_price = float(avg_entry_price)
self.gold_value = float(total_value)
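The aggregation in `_sync_legacy_fields_from_positions` reduces to a value-weighted average entry price over the ounce-denominated lots. A toy sketch with two hypothetical lots (the numbers are illustrative only):

```python
from decimal import Decimal

# Toy sketch of the aggregation in _sync_legacy_fields_from_positions:
# sum oz quantities, then derive the value-weighted average entry price.
# The two lots below are hypothetical.
lots = [
    (Decimal("60"), Decimal("2000")),  # (ounces, entry price per oz)
    (Decimal("40"), Decimal("2300")),
]
total_qty = sum(q for q, _ in lots)
total_value = sum(q * p for q, p in lots)
avg_entry_price = total_value / total_qty
print(total_qty, total_value, avg_entry_price)  # 100 212000 2120
```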
def validate(self) -> None:
"""Validate configuration values."""
assert self.gold_value is not None
assert self.entry_price is not None
assert self.gold_ounces is not None
if self.gold_value <= 0:
raise ValueError("Gold value must be positive")
if self.entry_price <= 0:
raise ValueError("Entry price must be positive")
if self.gold_ounces <= 0:
raise ValueError("Gold weight must be positive")
if self.loan_amount < 0:
raise ValueError("Loan amount cannot be negative")
if self.loan_amount >= self.gold_value:
@@ -105,12 +266,15 @@ class PortfolioConfig:
raise ValueError("Margin threshold must be between 10% and 95%") raise ValueError("Margin threshold must be between 10% and 95%")
if not 0.1 <= self.ltv_warning <= 0.95: if not 0.1 <= self.ltv_warning <= 0.95:
raise ValueError("LTV warning level must be between 10% and 95%") raise ValueError("LTV warning level must be between 10% and 95%")
if self.ltv_warning >= self.margin_threshold:
raise ValueError("LTV warning level must be less than the margin threshold")
if self.refresh_interval < 1:
raise ValueError("Refresh interval must be at least 1 second")
@property
def current_ltv(self) -> float:
"""Calculate current loan-to-value ratio."""
assert self.gold_value is not None
if self.gold_value == 0:
return 0.0
return self.loan_amount / self.gold_value
@@ -123,19 +287,29 @@ class PortfolioConfig:
@property
def net_equity(self) -> float:
"""Calculate net equity (gold value - loan)."""
assert self.gold_value is not None
return self.gold_value - self.loan_amount
@property
def margin_call_price(self) -> float:
"""Calculate gold price per ounce at which margin call occurs."""
assert self.gold_ounces is not None
if self.margin_threshold == 0 or self.gold_ounces == 0:
return float("inf")
return self.loan_amount / (self.margin_threshold * self.gold_ounces)
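Worked number for the formula above: with the dataclass defaults (145000 loan, 0.75 threshold) and an assumed canonical weight of 100 oz (the legacy 215000 / 2150 derivation), the margin call triggers near $1933/oz:

```python
# Sketch of the margin_call_price formula: the per-ounce gold price at which
# loan / (price × ounces) first reaches the margin threshold.
# loan_amount and margin_threshold are the dataclass defaults; gold_ounces = 100
# is an assumed canonical weight for illustration.
loan_amount = 145_000.0
margin_threshold = 0.75
gold_ounces = 100.0

margin_call_price = loan_amount / (margin_threshold * gold_ounces)
print(round(margin_call_price, 2))  # 1933.33
```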
def to_dict(self) -> dict[str, Any]:
"""Convert configuration to dictionary."""
assert self.gold_value is not None
assert self.entry_price is not None
assert self.gold_ounces is not None
# Sync legacy fields from positions before serializing
self._sync_legacy_fields_from_positions()
result: dict[str, Any] = {
"gold_value": self.gold_value, "gold_value": self.gold_value,
"entry_price": self.entry_price,
"gold_ounces": self.gold_ounces,
"entry_basis_mode": self.entry_basis_mode,
"loan_amount": self.loan_amount, "loan_amount": self.loan_amount,
"margin_threshold": self.margin_threshold, "margin_threshold": self.margin_threshold,
"monthly_budget": self.monthly_budget, "monthly_budget": self.monthly_budget,
@@ -143,15 +317,55 @@ class PortfolioConfig:
"primary_source": self.primary_source, "primary_source": self.primary_source,
"fallback_source": self.fallback_source, "fallback_source": self.fallback_source,
"refresh_interval": self.refresh_interval, "refresh_interval": self.refresh_interval,
"underlying": self.underlying,
"display_mode": self.display_mode,
"volatility_spike": self.volatility_spike, "volatility_spike": self.volatility_spike,
"spot_drawdown": self.spot_drawdown, "spot_drawdown": self.spot_drawdown,
"email_alerts": self.email_alerts, "email_alerts": self.email_alerts,
} }
# Include positions if any exist
if self.positions:
result["positions"] = [pos.to_dict() for pos in self.positions]
return result
@classmethod
def from_dict(cls, data: dict[str, Any]) -> PortfolioConfig:
"""Create configuration from dictionary."""
# Extract positions if present (may already be Position objects from deserialization)
positions_data = data.get("positions")
config_data = {k: v for k, v in data.items() if k in cls.__dataclass_fields__ and k != "positions"}
# Create config without positions first (use get, not pop, so the caller's dict
# is not mutated); positions are attached after initialization
config = cls(**config_data)
# Set positions after initialization
if positions_data:
if isinstance(positions_data[0], Position):
# Already deserialized by _deserialize_value
positions = positions_data
else:
positions = [Position.from_dict(p) for p in positions_data]
config.positions = positions
return config
def _coerce_persisted_decimal(value: Any) -> Decimal:
if isinstance(value, bool):
raise TypeError("Boolean values are not valid decimal persistence inputs")
if isinstance(value, Decimal):
amount = value
elif isinstance(value, int):
amount = Decimal(value)
elif isinstance(value, float):
amount = Decimal(str(value))
elif isinstance(value, str):
amount = Decimal(value)
else:
raise TypeError(f"Unsupported persisted decimal input type: {type(value)!r}")
if not amount.is_finite():
raise ValueError("Decimal persistence value must be finite")
return amount
class PortfolioRepository:
@@ -161,36 +375,297 @@ class PortfolioRepository:
""" """
CONFIG_PATH = Path("data/portfolio_config.json") CONFIG_PATH = Path("data/portfolio_config.json")
SCHEMA_VERSION = 2
PERSISTENCE_CURRENCY = "USD"
PERSISTENCE_WEIGHT_UNIT = "ozt"
_WEIGHT_FACTORS = {
"g": Decimal("1"),
"kg": Decimal("1000"),
"ozt": Decimal("31.1034768"),
}
_MONEY_FIELDS = {"gold_value", "loan_amount", "monthly_budget"}
_WEIGHT_FIELDS = {"gold_ounces"}
_PRICE_PER_WEIGHT_FIELDS = {"entry_price"}
_RATIO_FIELDS = {"margin_threshold", "ltv_warning", "volatility_spike"}
_PERCENT_FIELDS = {"spot_drawdown"}
_INTEGER_FIELDS = {"refresh_interval"}
_PERSISTED_FIELDS = {
"gold_value",
"entry_price",
"gold_ounces",
"entry_basis_mode",
"loan_amount",
"margin_threshold",
"monthly_budget",
"ltv_warning",
"primary_source",
"fallback_source",
"refresh_interval",
"underlying", # optional with default "GLD"
"display_mode", # optional with default "XAU"
"volatility_spike",
"spot_drawdown",
"email_alerts",
"positions", # multi-position support
}
def __init__(self, config_path: Path | None = None) -> None:
self.config_path = config_path or self.CONFIG_PATH
self.config_path.parent.mkdir(parents=True, exist_ok=True)
def save(self, config: PortfolioConfig) -> None:
"""Save configuration to disk."""
payload = self._to_persistence_payload(config)
tmp_path = self.config_path.with_name(f"{self.config_path.name}.tmp")
with open(tmp_path, "w") as f:
json.dump(payload, f, indent=2)
f.flush()
os.fsync(f.fileno())
os.replace(tmp_path, self.config_path)
def load(self) -> PortfolioConfig:
"""Load configuration from disk.
Returns default configuration if file doesn't exist.
"""
if not self.config_path.exists():
default = PortfolioConfig()
self.save(default)
return default
try:
with open(self.config_path) as f:
data = json.load(f)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid portfolio config JSON: {e}") from e
return self._config_from_payload(data)
@classmethod
def _to_persistence_payload(cls, config: PortfolioConfig) -> dict[str, Any]:
# Serialize positions separately before calling to_dict
positions_data = [pos.to_dict() for pos in config.positions] if config.positions else []
config_dict = config.to_dict()
# Remove positions from config_dict since we handle it separately
config_dict.pop("positions", None)
return {
"schema_version": cls.SCHEMA_VERSION,
"portfolio": {
**{key: cls._serialize_value(key, value) for key, value in config_dict.items()},
**({"positions": positions_data} if positions_data else {}),
},
}
@classmethod
def _serialize_value(cls, key: str, value: Any) -> Any:
if key in cls._MONEY_FIELDS:
return {"value": cls._decimal_to_string(value), "currency": cls.PERSISTENCE_CURRENCY}
if key in cls._WEIGHT_FIELDS:
return {"value": cls._decimal_to_string(value), "unit": cls.PERSISTENCE_WEIGHT_UNIT}
if key in cls._PRICE_PER_WEIGHT_FIELDS:
return {
"value": cls._decimal_to_string(value),
"currency": cls.PERSISTENCE_CURRENCY,
"per_weight_unit": cls.PERSISTENCE_WEIGHT_UNIT,
}
if key in cls._RATIO_FIELDS:
return {"value": cls._decimal_to_string(value), "unit": "ratio"}
if key in cls._PERCENT_FIELDS:
return {"value": cls._decimal_to_string(value), "unit": "percent"}
if key in cls._INTEGER_FIELDS:
return cls._serialize_integer(value, unit="seconds")
if key == "positions" and isinstance(value, list):
# Already serialized as dicts from _to_persistence_payload
return value
return value
@classmethod
def _config_from_payload(cls, data: dict[str, Any]) -> PortfolioConfig:
if not isinstance(data, dict):
raise TypeError("portfolio config payload must be an object")
schema_version = data.get("schema_version")
if schema_version != cls.SCHEMA_VERSION:
raise ValueError(f"Unsupported portfolio schema_version: {schema_version}")
portfolio = data.get("portfolio")
if not isinstance(portfolio, dict):
raise TypeError("portfolio payload must be an object")
cls._validate_portfolio_fields(portfolio)
deserialized = cls._deserialize_portfolio_payload(portfolio)
upgraded = cls._upgrade_legacy_default_workspace(deserialized)
config = PortfolioConfig.from_dict(upgraded)
# Migrate legacy configs without positions to single position
config.migrate_to_positions_if_needed()
return config
# Fields that must be present in persisted payloads
# (underlying is optional with default "GLD")
# (positions is optional - legacy configs won't have it)
_REQUIRED_FIELDS = (_PERSISTED_FIELDS - {"underlying", "display_mode"}) - {"positions"}
@classmethod
def _validate_portfolio_fields(cls, payload: dict[str, Any]) -> None:
keys = set(payload.keys())
missing = sorted(cls._REQUIRED_FIELDS - keys)
unknown = sorted(keys - cls._PERSISTED_FIELDS)
if missing or unknown:
details: list[str] = []
if missing:
details.append(f"missing={missing}")
if unknown:
details.append(f"unknown={unknown}")
raise ValueError(f"Invalid portfolio payload fields: {'; '.join(details)}")
@classmethod
def _deserialize_portfolio_payload(cls, payload: dict[str, Any]) -> dict[str, Any]:
return {key: cls._deserialize_value(key, value) for key, value in payload.items()}
@classmethod
def _upgrade_legacy_default_workspace(cls, payload: dict[str, Any]) -> dict[str, Any]:
if not cls._looks_like_legacy_default_workspace(payload):
return payload
upgraded = dict(payload)
upgraded["entry_price"] = _DEFAULT_ENTRY_PRICE
upgraded["gold_ounces"] = _DEFAULT_GOLD_OUNCES
upgraded["gold_value"] = _DEFAULT_GOLD_VALUE
return upgraded
@staticmethod
def _looks_like_legacy_default_workspace(payload: dict[str, Any]) -> bool:
def _close(key: str, expected: float) -> bool:
value = payload.get(key)
return isinstance(value, (int, float)) and abs(float(value) - expected) <= 1e-9
return (
_close("gold_value", _DEFAULT_GOLD_VALUE)
and _close("entry_price", _LEGACY_DEFAULT_ENTRY_PRICE)
and _close("gold_ounces", _LEGACY_DEFAULT_GOLD_OUNCES)
and payload.get("entry_basis_mode") == "value_price"
and _close("loan_amount", 145_000.0)
and _close("margin_threshold", 0.75)
and _close("monthly_budget", 8_000.0)
and _close("ltv_warning", 0.70)
and payload.get("primary_source") == "yfinance"
and payload.get("fallback_source") == "yfinance"
and payload.get("refresh_interval") == 5
and _close("volatility_spike", 0.25)
and _close("spot_drawdown", 7.5)
and payload.get("email_alerts") is False
)
@classmethod
def _deserialize_value(cls, key: str, value: Any) -> Any:
if key in cls._MONEY_FIELDS:
return float(cls._deserialize_money(value))
if key in cls._WEIGHT_FIELDS:
return float(cls._deserialize_weight(value))
if key in cls._PRICE_PER_WEIGHT_FIELDS:
return float(cls._deserialize_price_per_weight(value))
if key in cls._RATIO_FIELDS:
return float(cls._deserialize_ratio(value))
if key in cls._PERCENT_FIELDS:
return float(cls._deserialize_percent(value))
if key in cls._INTEGER_FIELDS:
return cls._deserialize_integer(value, expected_unit="seconds")
if key == "positions" and isinstance(value, list):
return [Position.from_dict(p) for p in value]
return value
@classmethod
def _deserialize_money(cls, value: Any) -> Decimal:
if not isinstance(value, dict):
raise TypeError("money field must be an object")
currency = value.get("currency")
if currency != cls.PERSISTENCE_CURRENCY:
raise ValueError(f"Unsupported currency: {currency!r}")
return _coerce_persisted_decimal(value.get("value"))
@classmethod
def _deserialize_weight(cls, value: Any) -> Decimal:
if not isinstance(value, dict):
raise TypeError("weight field must be an object")
amount = _coerce_persisted_decimal(value.get("value"))
unit = value.get("unit")
return cls._convert_weight(amount, unit, cls.PERSISTENCE_WEIGHT_UNIT)
@classmethod
def _deserialize_price_per_weight(cls, value: Any) -> Decimal:
if not isinstance(value, dict):
raise TypeError("price-per-weight field must be an object")
currency = value.get("currency")
if currency != cls.PERSISTENCE_CURRENCY:
raise ValueError(f"Unsupported currency: {currency!r}")
amount = _coerce_persisted_decimal(value.get("value"))
unit = value.get("per_weight_unit")
return cls._convert_price_per_weight(amount, unit, cls.PERSISTENCE_WEIGHT_UNIT)
@classmethod
def _deserialize_ratio(cls, value: Any) -> Decimal:
if not isinstance(value, dict):
raise TypeError("ratio field must be an object")
amount = _coerce_persisted_decimal(value.get("value"))
unit = value.get("unit")
if unit == "ratio":
return amount
if unit == "percent":
return amount / Decimal("100")
raise ValueError(f"Unsupported ratio unit: {unit!r}")
@classmethod
def _deserialize_percent(cls, value: Any) -> Decimal:
if not isinstance(value, dict):
raise TypeError("percent field must be an object")
amount = _coerce_persisted_decimal(value.get("value"))
unit = value.get("unit")
if unit == "percent":
return amount
if unit == "ratio":
return amount * Decimal("100")
raise ValueError(f"Unsupported percent unit: {unit!r}")
@staticmethod
def _serialize_integer(value: Any, *, unit: str) -> dict[str, Any]:
if isinstance(value, bool) or not isinstance(value, int):
raise TypeError("integer field value must be an int")
return {"value": value, "unit": unit}
@staticmethod
def _deserialize_integer(value: Any, *, expected_unit: str) -> int:
if not isinstance(value, dict):
raise TypeError("integer field must be an object")
unit = value.get("unit")
if unit != expected_unit:
raise ValueError(f"Unsupported integer unit: {unit!r}")
raw = value.get("value")
if isinstance(raw, bool) or not isinstance(raw, int):
raise TypeError("integer field value must be an int")
return raw
@classmethod
def _convert_weight(cls, amount: Decimal, from_unit: Any, to_unit: str) -> Decimal:
if from_unit not in cls._WEIGHT_FACTORS or to_unit not in cls._WEIGHT_FACTORS:
raise ValueError(f"Unsupported weight unit conversion: {from_unit!r} -> {to_unit!r}")
if from_unit == to_unit:
return amount
grams = amount * cls._WEIGHT_FACTORS[from_unit]
return grams / cls._WEIGHT_FACTORS[to_unit]
@classmethod
def _convert_price_per_weight(cls, amount: Decimal, from_unit: Any, to_unit: str) -> Decimal:
if from_unit not in cls._WEIGHT_FACTORS or to_unit not in cls._WEIGHT_FACTORS:
raise ValueError(f"Unsupported price-per-weight unit conversion: {from_unit!r} -> {to_unit!r}")
if from_unit == to_unit:
return amount
return amount * cls._WEIGHT_FACTORS[to_unit] / cls._WEIGHT_FACTORS[from_unit]
@staticmethod
def _decimal_to_string(value: Any) -> str:
decimal_value = _coerce_persisted_decimal(value)
normalized = format(decimal_value, "f")
if "." not in normalized:
normalized = f"{normalized}.0"
return normalized
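All weight conversions in the repository route through grams using a single factor table. A standalone sketch mirroring `_convert_weight` (`convert_weight` and `WEIGHT_FACTORS` are illustrative names; the factors are the ones from the diff, with "ozt" the troy ounce):

```python
from decimal import Decimal

# Standalone sketch of PortfolioRepository._convert_weight: convert the source
# amount to grams, then divide by the target unit's grams-per-unit factor.
WEIGHT_FACTORS = {
    "g": Decimal("1"),
    "kg": Decimal("1000"),
    "ozt": Decimal("31.1034768"),
}

def convert_weight(amount: Decimal, from_unit: str, to_unit: str) -> Decimal:
    if from_unit not in WEIGHT_FACTORS or to_unit not in WEIGHT_FACTORS:
        raise ValueError(f"Unsupported weight unit conversion: {from_unit!r} -> {to_unit!r}")
    if from_unit == to_unit:
        return amount
    grams = amount * WEIGHT_FACTORS[from_unit]
    return grams / WEIGHT_FACTORS[to_unit]

print(convert_weight(Decimal("1"), "kg", "ozt"))  # roughly 32.1507 troy ounces
```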
# Singleton repository instance
_portfolio_repo: PortfolioRepository | None = None

app/models/position.py (new file)

@@ -0,0 +1,155 @@
"""Position model for multi-position portfolio entries."""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import UTC, date, datetime
from decimal import Decimal
from typing import Any
from uuid import UUID, uuid4
@dataclass(frozen=True)
class Position:
"""A single position entry in a portfolio.
Attributes:
id: Unique identifier for this position
underlying: Underlying instrument symbol (e.g., "GLD", "GC=F", "XAU")
quantity: Number of units held (shares, contracts, grams, or oz)
unit: Unit of quantity (e.g., "shares", "contracts", "g", "oz")
entry_price: Price per unit at purchase (in USD)
entry_date: Date of position entry (for historical conversion lookups)
entry_basis_mode: Entry basis mode ("weight" or "value_price")
purchase_premium: Dealer markup over spot as percentage (e.g., Decimal("0.04") for 4%)
bid_ask_spread: Expected sale discount below spot as percentage (e.g., Decimal("0.03") for 3%)
notes: Optional notes about this position
storage_cost_basis: Annual storage cost as percentage (e.g., Decimal("0.12") for 0.12%) or fixed amount
storage_cost_period: Period for storage cost ("annual" or "monthly")
storage_cost_currency: Currency for fixed amount costs (default "USD")
created_at: Timestamp when position was created
"""
id: UUID
underlying: str
quantity: Decimal
unit: str
entry_price: Decimal
entry_date: date
entry_basis_mode: str = "weight"
purchase_premium: Decimal | None = None
bid_ask_spread: Decimal | None = None
notes: str = ""
storage_cost_basis: Decimal | None = None
storage_cost_period: str | None = None
storage_cost_currency: str = "USD"
created_at: datetime = field(default_factory=lambda: datetime.now(UTC))
def __post_init__(self) -> None:
"""Validate position fields."""
if not self.underlying:
raise ValueError("underlying must be non-empty")
# Read via object.__getattribute__ to get the raw Decimal values on this frozen dataclass
quantity = object.__getattribute__(self, "quantity")
entry_price = object.__getattribute__(self, "entry_price")
if quantity <= 0:
raise ValueError("quantity must be positive")
if not self.unit:
raise ValueError("unit must be non-empty")
if entry_price <= 0:
raise ValueError("entry_price must be positive")
if self.entry_basis_mode not in {"weight", "value_price"}:
raise ValueError("entry_basis_mode must be 'weight' or 'value_price'")
@property
def entry_value(self) -> Decimal:
"""Calculate total entry value (quantity × entry_price)."""
return self.quantity * self.entry_price
def to_dict(self) -> dict[str, Any]:
"""Convert position to dictionary for serialization."""
return {
"id": str(self.id),
"underlying": self.underlying,
"quantity": str(self.quantity),
"unit": self.unit,
"entry_price": str(self.entry_price),
"entry_date": self.entry_date.isoformat(),
"entry_basis_mode": self.entry_basis_mode,
"purchase_premium": str(self.purchase_premium) if self.purchase_premium is not None else None,
"bid_ask_spread": str(self.bid_ask_spread) if self.bid_ask_spread is not None else None,
"notes": self.notes,
"storage_cost_basis": str(self.storage_cost_basis) if self.storage_cost_basis is not None else None,
"storage_cost_period": self.storage_cost_period,
"storage_cost_currency": self.storage_cost_currency,
"created_at": self.created_at.isoformat(),
}
@classmethod
def from_dict(cls, data: dict[str, Any]) -> Position:
"""Create position from dictionary."""
return cls(
id=UUID(data["id"]) if isinstance(data["id"], str) else data["id"],
underlying=data["underlying"],
quantity=Decimal(data["quantity"]),
unit=data["unit"],
entry_price=Decimal(data["entry_price"]),
entry_date=date.fromisoformat(data["entry_date"]),
entry_basis_mode=data.get("entry_basis_mode", "weight"),
purchase_premium=(Decimal(data["purchase_premium"]) if data.get("purchase_premium") is not None else None),
bid_ask_spread=(Decimal(data["bid_ask_spread"]) if data.get("bid_ask_spread") is not None else None),
notes=data.get("notes", ""),
storage_cost_basis=(
Decimal(data["storage_cost_basis"]) if data.get("storage_cost_basis") is not None else None
),
storage_cost_period=data.get("storage_cost_period"),
storage_cost_currency=data.get("storage_cost_currency", "USD"),
created_at=datetime.fromisoformat(data["created_at"]) if "created_at" in data else datetime.now(UTC),
)
def create_position(
underlying: str = "GLD",
quantity: Decimal | None = None,
unit: str = "oz",
entry_price: Decimal | None = None,
entry_date: date | None = None,
entry_basis_mode: str = "weight",
purchase_premium: Decimal | None = None,
bid_ask_spread: Decimal | None = None,
notes: str = "",
storage_cost_basis: Decimal | None = None,
storage_cost_period: str | None = None,
storage_cost_currency: str = "USD",
) -> Position:
"""Create a new position with sensible defaults.
Args:
underlying: Underlying instrument (default: "GLD")
quantity: Position quantity (default: Decimal("100"))
unit: Unit of quantity (default: "oz")
entry_price: Entry price per unit (default: Decimal("2150"))
entry_date: Entry date (default: today)
entry_basis_mode: Entry basis mode (default: "weight")
purchase_premium: Dealer markup over spot as percentage (default: None)
bid_ask_spread: Expected sale discount below spot as percentage (default: None)
notes: Optional notes
storage_cost_basis: Annual storage cost as percentage or fixed amount (default: None)
storage_cost_period: Period for storage cost ("annual" or "monthly", default: None)
storage_cost_currency: Currency for fixed amount costs (default: "USD")
"""
return Position(
id=uuid4(),
underlying=underlying,
quantity=quantity if quantity is not None else Decimal("100"),
unit=unit,
entry_price=entry_price if entry_price is not None else Decimal("2150"),
entry_date=entry_date or date.today(),
entry_basis_mode=entry_basis_mode,
purchase_premium=purchase_premium,
bid_ask_spread=bid_ask_spread,
notes=notes,
storage_cost_basis=storage_cost_basis,
storage_cost_period=storage_cost_period,
storage_cost_currency=storage_cost_currency,
)
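The `entry_value` property above is simply `quantity × entry_price` in `Decimal` arithmetic. A toy sketch using the `create_position` defaults — `MiniPosition` is illustrative, not the real class, which also carries id, dates, and cost fields:

```python
from dataclasses import dataclass
from decimal import Decimal

# Toy sketch of Position.entry_value (quantity × entry_price), exercised with
# the create_position defaults (100 oz at 2150).
@dataclass(frozen=True)
class MiniPosition:
    quantity: Decimal
    entry_price: Decimal

    @property
    def entry_value(self) -> Decimal:
        return self.quantity * self.entry_price

p = MiniPosition(quantity=Decimal("100"), entry_price=Decimal("2150"))
print(p.entry_value)  # 215000
```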


@@ -0,0 +1,309 @@
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Literal
StrategyTemplateKind = Literal["protective_put", "laddered_put"]
StrategyTemplateStatus = Literal["draft", "active", "archived"]
ContractMode = Literal["continuous_units", "listed_contracts"]
LegSide = Literal["long", "short"]
LegOptionType = Literal["put", "call"]
StrikeRuleType = Literal["spot_pct"]
QuantityRule = Literal["target_coverage_pct"]
RollPolicyType = Literal["hold_to_expiry", "roll_n_days_before_expiry"]
EntryTiming = Literal["scenario_start_close"]
@dataclass(frozen=True)
class StrikeRule:
rule_type: StrikeRuleType
value: float
def __post_init__(self) -> None:
if self.rule_type != "spot_pct":
raise ValueError("unsupported strike rule")
if self.value <= 0:
raise ValueError("strike rule value must be positive")
def to_dict(self) -> dict[str, Any]:
return {"rule_type": self.rule_type, "value": self.value}
@classmethod
def from_dict(cls, payload: dict[str, Any]) -> StrikeRule:
return cls(rule_type=payload["rule_type"], value=float(payload["value"]))
@dataclass(frozen=True)
class TemplateLeg:
leg_id: str
side: LegSide
option_type: LegOptionType
allocation_weight: float
strike_rule: StrikeRule
target_expiry_days: int
quantity_rule: QuantityRule
target_coverage_pct: float = 1.0
def __post_init__(self) -> None:
if self.side not in {"long", "short"}:
raise ValueError("unsupported leg side")
if self.option_type not in {"put", "call"}:
raise ValueError("unsupported option type")
if self.allocation_weight <= 0:
raise ValueError("allocation_weight must be positive")
if self.target_expiry_days <= 0:
raise ValueError("target_expiry_days must be positive")
if self.quantity_rule != "target_coverage_pct":
raise ValueError("unsupported quantity rule")
if self.target_coverage_pct <= 0:
raise ValueError("target_coverage_pct must be positive")
def to_dict(self) -> dict[str, Any]:
return {
"leg_id": self.leg_id,
"side": self.side,
"option_type": self.option_type,
"allocation_weight": self.allocation_weight,
"strike_rule": self.strike_rule.to_dict(),
"target_expiry_days": self.target_expiry_days,
"quantity_rule": self.quantity_rule,
"target_coverage_pct": self.target_coverage_pct,
}
@classmethod
def from_dict(cls, payload: dict[str, Any]) -> TemplateLeg:
return cls(
leg_id=payload["leg_id"],
side=payload["side"],
option_type=payload["option_type"],
allocation_weight=float(payload["allocation_weight"]),
strike_rule=StrikeRule.from_dict(payload["strike_rule"]),
target_expiry_days=int(payload["target_expiry_days"]),
quantity_rule=payload.get("quantity_rule", "target_coverage_pct"),
target_coverage_pct=float(payload.get("target_coverage_pct", 1.0)),
)
@dataclass(frozen=True)
class RollPolicy:
policy_type: RollPolicyType
days_before_expiry: int | None = None
rebalance_on_new_deposit: bool = False
def __post_init__(self) -> None:
if self.policy_type not in {"hold_to_expiry", "roll_n_days_before_expiry"}:
raise ValueError("unsupported roll policy")
if self.policy_type == "roll_n_days_before_expiry" and (self.days_before_expiry or 0) <= 0:
raise ValueError("days_before_expiry is required for rolling policies")
def to_dict(self) -> dict[str, Any]:
return {
"policy_type": self.policy_type,
"days_before_expiry": self.days_before_expiry,
"rebalance_on_new_deposit": self.rebalance_on_new_deposit,
}
@classmethod
def from_dict(cls, payload: dict[str, Any]) -> RollPolicy:
return cls(
policy_type=payload["policy_type"],
days_before_expiry=payload.get("days_before_expiry"),
rebalance_on_new_deposit=bool(payload.get("rebalance_on_new_deposit", False)),
)
@dataclass(frozen=True)
class EntryPolicy:
entry_timing: EntryTiming
stagger_days: int | None = None
def __post_init__(self) -> None:
if self.entry_timing != "scenario_start_close":
raise ValueError("unsupported entry timing")
if self.stagger_days is not None and self.stagger_days < 0:
raise ValueError("stagger_days must be non-negative")
def to_dict(self) -> dict[str, Any]:
return {"entry_timing": self.entry_timing, "stagger_days": self.stagger_days}
@classmethod
def from_dict(cls, payload: dict[str, Any]) -> EntryPolicy:
return cls(entry_timing=payload["entry_timing"], stagger_days=payload.get("stagger_days"))
@dataclass(frozen=True)
class StrategyTemplate:
template_id: str
slug: str
display_name: str
description: str
template_kind: StrategyTemplateKind
status: StrategyTemplateStatus
version: int
underlying_symbol: str
contract_mode: ContractMode
legs: tuple[TemplateLeg, ...]
roll_policy: RollPolicy
entry_policy: EntryPolicy
tags: tuple[str, ...] = field(default_factory=tuple)
created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
def __post_init__(self) -> None:
if self.template_kind not in {"protective_put", "laddered_put"}:
raise ValueError("unsupported template_kind")
if self.status not in {"draft", "active", "archived"}:
raise ValueError("unsupported template status")
if self.contract_mode not in {"continuous_units", "listed_contracts"}:
raise ValueError("unsupported contract mode")
if not self.slug:
raise ValueError("slug is required")
if not self.display_name:
raise ValueError("display_name is required")
if self.version <= 0:
raise ValueError("version must be positive")
if not self.legs:
raise ValueError("at least one template leg is required")
if self.template_kind in {"protective_put", "laddered_put"}:
if any(leg.side != "long" or leg.option_type != "put" for leg in self.legs):
raise ValueError("put templates support only long put legs")
total_weight = sum(leg.allocation_weight for leg in self.legs)
if abs(total_weight - 1.0) > 1e-9:
raise ValueError("weights must sum to 1.0")
expiry_days = {leg.target_expiry_days for leg in self.legs}
if len(expiry_days) != 1:
raise ValueError("all template legs must share target_expiry_days in MVP")
@property
def target_expiry_days(self) -> int:
return self.legs[0].target_expiry_days
def to_dict(self) -> dict[str, Any]:
return {
"template_id": self.template_id,
"slug": self.slug,
"display_name": self.display_name,
"description": self.description,
"template_kind": self.template_kind,
"status": self.status,
"version": self.version,
"underlying_symbol": self.underlying_symbol,
"contract_mode": self.contract_mode,
"legs": [leg.to_dict() for leg in self.legs],
"roll_policy": self.roll_policy.to_dict(),
"entry_policy": self.entry_policy.to_dict(),
"tags": list(self.tags),
"created_at": self.created_at.isoformat(),
"updated_at": self.updated_at.isoformat(),
}
@classmethod
def from_dict(cls, payload: dict[str, Any]) -> StrategyTemplate:
return cls(
template_id=payload["template_id"],
slug=payload["slug"],
display_name=payload["display_name"],
description=payload.get("description", ""),
template_kind=payload["template_kind"],
status=payload.get("status", "active"),
version=int(payload.get("version", 1)),
underlying_symbol=payload.get("underlying_symbol", "GLD"),
contract_mode=payload.get("contract_mode", "continuous_units"),
legs=tuple(TemplateLeg.from_dict(leg) for leg in payload["legs"]),
roll_policy=RollPolicy.from_dict(payload["roll_policy"]),
entry_policy=EntryPolicy.from_dict(payload["entry_policy"]),
tags=tuple(payload.get("tags", [])),
created_at=datetime.fromisoformat(payload["created_at"]),
updated_at=datetime.fromisoformat(payload["updated_at"]),
)
@classmethod
def protective_put(
cls,
*,
template_id: str,
slug: str,
display_name: str,
description: str,
strike_pct: float,
target_expiry_days: int,
underlying_symbol: str = "GLD",
tags: tuple[str, ...] = (),
) -> StrategyTemplate:
now = datetime.now(timezone.utc)
return cls(
template_id=template_id,
slug=slug,
display_name=display_name,
description=description,
template_kind="protective_put",
status="active",
version=1,
underlying_symbol=underlying_symbol,
contract_mode="continuous_units",
legs=(
TemplateLeg(
leg_id=f"{template_id}-leg-1",
side="long",
option_type="put",
allocation_weight=1.0,
strike_rule=StrikeRule(rule_type="spot_pct", value=strike_pct),
target_expiry_days=target_expiry_days,
quantity_rule="target_coverage_pct",
target_coverage_pct=1.0,
),
),
roll_policy=RollPolicy(policy_type="hold_to_expiry"),
entry_policy=EntryPolicy(entry_timing="scenario_start_close"),
tags=tags,
created_at=now,
updated_at=now,
)
@classmethod
def laddered_put(
cls,
*,
template_id: str,
slug: str,
display_name: str,
description: str,
strike_pcts: tuple[float, ...],
weights: tuple[float, ...],
target_expiry_days: int,
underlying_symbol: str = "GLD",
tags: tuple[str, ...] = (),
) -> StrategyTemplate:
if len(strike_pcts) != len(weights):
raise ValueError("strike_pcts and weights must have the same length")
now = datetime.now(timezone.utc)
return cls(
template_id=template_id,
slug=slug,
display_name=display_name,
description=description,
template_kind="laddered_put",
status="active",
version=1,
underlying_symbol=underlying_symbol,
contract_mode="continuous_units",
legs=tuple(
TemplateLeg(
leg_id=f"{template_id}-leg-{index}",
side="long",
option_type="put",
allocation_weight=weight,
strike_rule=StrikeRule(rule_type="spot_pct", value=strike_pct),
target_expiry_days=target_expiry_days,
quantity_rule="target_coverage_pct",
target_coverage_pct=1.0,
)
for index, (strike_pct, weight) in enumerate(zip(strike_pcts, weights, strict=True), start=1)
),
roll_policy=RollPolicy(policy_type="hold_to_expiry"),
entry_policy=EntryPolicy(entry_timing="scenario_start_close"),
tags=tags,
created_at=now,
updated_at=now,
)
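An aside on the invariant enforced above: `StrategyTemplate.__post_init__` checks leg weights against a 1e-9 tolerance rather than exact equality. A standalone sketch of just that rule (hypothetical minimal `Leg`, not the repo's `TemplateLeg`):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Leg:
    allocation_weight: float


def check_weights(legs: tuple[Leg, ...]) -> None:
    # Same rule as StrategyTemplate.__post_init__: allocation weights
    # must sum to 1.0 within a 1e-9 tolerance, so ladders built from
    # floats like 0.1 + 0.2 + 0.7 are not rejected by rounding noise.
    total = sum(leg.allocation_weight for leg in legs)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")


check_weights((Leg(0.1), Leg(0.2), Leg(0.7)))  # accepted despite float rounding
```

The tolerance matters precisely because `laddered_put` takes caller-supplied weight tuples whose float sum is rarely bit-exact 1.0.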

app/models/workspace.py (new file, +141 lines)
from __future__ import annotations
import re
from pathlib import Path
from uuid import UUID, uuid4
from app.models.portfolio import PortfolioConfig, PortfolioRepository, build_default_portfolio_config
from app.models.position import Position
WORKSPACE_COOKIE = "workspace_id"
_WORKSPACE_ID_RE = re.compile(
r"^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$",
re.IGNORECASE,
)
class WorkspaceRepository:
"""Persist workspace-scoped portfolio configuration on disk."""
def __init__(self, base_path: Path | str = Path("data/workspaces")) -> None:
self.base_path = Path(base_path)
self.base_path.mkdir(parents=True, exist_ok=True)
def is_valid_workspace_id(self, workspace_id: str) -> bool:
return bool(_WORKSPACE_ID_RE.match(workspace_id))
def workspace_exists(self, workspace_id: str) -> bool:
if not self.is_valid_workspace_id(workspace_id):
return False
portfolio_path = self._portfolio_path(workspace_id)
if not portfolio_path.exists():
return False
try:
PortfolioRepository(portfolio_path).load()
except (ValueError, TypeError, FileNotFoundError):
return False
return True
def create_workspace(
self,
workspace_id: str | None = None,
*,
config: PortfolioConfig | None = None,
) -> PortfolioConfig:
resolved_workspace_id = workspace_id or str(uuid4())
if not self.is_valid_workspace_id(resolved_workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
created_config = config or build_default_portfolio_config()
self.save_portfolio_config(resolved_workspace_id, created_config)
return created_config
def create_workspace_id(self, *, config: PortfolioConfig | None = None) -> str:
workspace_id = str(uuid4())
self.create_workspace(workspace_id, config=config)
return workspace_id
def load_portfolio_config(self, workspace_id: str) -> PortfolioConfig:
if not self.workspace_exists(workspace_id):
raise FileNotFoundError(f"Unknown workspace: {workspace_id}")
return PortfolioRepository(self._portfolio_path(workspace_id)).load()
def save_portfolio_config(self, workspace_id: str, config: PortfolioConfig) -> None:
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
PortfolioRepository(self._portfolio_path(workspace_id)).save(config)
def add_position(self, workspace_id: str, position: Position) -> None:
"""Add a position to the workspace portfolio."""
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
config = self.load_portfolio_config(workspace_id)
# Use object.__setattr__ because positions is in a frozen dataclass
object.__setattr__(config, "positions", list(config.positions) + [position])
self.save_portfolio_config(workspace_id, config)
def remove_position(self, workspace_id: str, position_id: UUID) -> None:
"""Remove a position from the workspace portfolio."""
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
config = self.load_portfolio_config(workspace_id)
updated_positions = [p for p in config.positions if p.id != position_id]
object.__setattr__(config, "positions", updated_positions)
self.save_portfolio_config(workspace_id, config)
def update_position(
self,
workspace_id: str,
position_id: UUID,
updates: dict[str, object],
) -> None:
"""Update a position's fields."""
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
config = self.load_portfolio_config(workspace_id)
updated_positions = []
for pos in config.positions:
if pos.id == position_id:
# Create updated position (Position is frozen, so create new instance)
update_kwargs: dict[str, object] = {}
for key, value in updates.items():
if key in {"id", "created_at"}:
continue # Skip immutable fields
update_kwargs[key] = value
# Use dataclass replace-like pattern
pos_dict = pos.to_dict()
pos_dict.update(update_kwargs)
updated_positions.append(Position.from_dict(pos_dict))
else:
updated_positions.append(pos)
object.__setattr__(config, "positions", updated_positions)
self.save_portfolio_config(workspace_id, config)
def get_position(self, workspace_id: str, position_id: UUID) -> Position | None:
"""Get a specific position by ID."""
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
config = self.load_portfolio_config(workspace_id)
for pos in config.positions:
if pos.id == position_id:
return pos
return None
def list_positions(self, workspace_id: str) -> list[Position]:
"""List all positions in the workspace portfolio."""
if not self.is_valid_workspace_id(workspace_id):
raise ValueError("workspace_id must be a UUID4 string")
config = self.load_portfolio_config(workspace_id)
return list(config.positions)
def _portfolio_path(self, workspace_id: str) -> Path:
return self.base_path / workspace_id / "portfolio_config.json"
_workspace_repo: WorkspaceRepository | None = None
def get_workspace_repository() -> WorkspaceRepository:
global _workspace_repo
if _workspace_repo is None:
_workspace_repo = WorkspaceRepository()
return _workspace_repo
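Worth noting: `_WORKSPACE_ID_RE` is stricter than generic UUID parsing; it accepts only the canonical UUID4 layout (version nibble `4`, variant nibble `8`-`b`). A quick standalone check of that behaviour, reusing the same pattern:

```python
import re
from uuid import uuid4

WORKSPACE_ID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$",
    re.IGNORECASE,
)

# Freshly generated v4 ids always match.
assert WORKSPACE_ID_RE.match(str(uuid4()))

# Arbitrary strings and non-v4 layouts are rejected, so a hostile path
# segment can never reach the filesystem join in _portfolio_path.
assert not WORKSPACE_ID_RE.match("not-a-uuid")
assert not WORKSPACE_ID_RE.match("00000000-0000-1000-8000-000000000000")  # version-1 layout
```

Because every public method re-validates the id, the cookie value from `WORKSPACE_COOKIE` is safe to use directly as a directory name.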

app/pages/__init__.py
@@ -1,3 +1,3 @@
-from . import hedge, options, overview, settings
+from . import backtests, event_comparison, hedge, options, overview, settings
-__all__ = ["overview", "hedge", "options", "settings"]
+__all__ = ["overview", "hedge", "options", "backtests", "event_comparison", "settings"]

app/pages/backtests.py (new file, +1337 lines; diff suppressed because it is too large)

app/pages/common.py
@@ -6,74 +6,52 @@ from typing import Any
 from nicegui import ui
+from app.domain.portfolio_math import portfolio_snapshot_from_config, strategy_metrics_from_snapshot
+from app.models.portfolio import PortfolioConfig
+from app.services.strategy_templates import StrategyTemplateService
 
 NAV_ITEMS: list[tuple[str, str, str]] = [
-    ("overview", "/", "Overview"),
-    ("hedge", "/hedge", "Hedge Analysis"),
+    ("welcome", "/", "Welcome"),
     ("options", "/options", "Options Chain"),
-    ("settings", "/settings", "Settings"),
 ]
+
+
+def nav_items(workspace_id: str | None = None) -> list[tuple[str, str, str]]:
+    if not workspace_id:
+        return NAV_ITEMS
+    return [
+        ("overview", f"/{workspace_id}", "Overview"),
+        ("hedge", f"/{workspace_id}/hedge", "Hedge Analysis"),
+        ("options", "/options", "Options Chain"),
+        ("backtests", f"/{workspace_id}/backtests", "Backtests"),
+        ("event-comparison", f"/{workspace_id}/event-comparison", "Event Comparison"),
+        ("settings", f"/{workspace_id}/settings", "Settings"),
+    ]
 
 
 def demo_spot_price() -> float:
     return 215.0
 
 
-def portfolio_snapshot() -> dict[str, float]:
-    gold_units = 1_000.0
-    spot = demo_spot_price()
-    gold_value = gold_units * spot
-    loan_amount = 145_000.0
-    margin_call_ltv = 0.75
-    return {
-        "gold_value": gold_value,
-        "loan_amount": loan_amount,
-        "ltv_ratio": loan_amount / gold_value,
-        "net_equity": gold_value - loan_amount,
-        "spot_price": spot,
-        "margin_call_ltv": margin_call_ltv,
-        "margin_call_price": loan_amount / (margin_call_ltv * gold_units),
-        "cash_buffer": 18_500.0,
-        "hedge_budget": 8_000.0,
-    }
+def portfolio_snapshot(
+    config: PortfolioConfig | None = None,
+    *,
+    runtime_spot_price: float | None = None,
+) -> dict[str, float]:
+    return portfolio_snapshot_from_config(config, runtime_spot_price=runtime_spot_price)
 
 
 def strategy_catalog() -> list[dict[str, Any]]:
-    return [
-        {
-            "name": "protective_put",
-            "label": "Protective Put",
-            "description": "Full downside protection below the hedge strike with uncapped upside.",
-            "estimated_cost": 6.25,
-            "max_drawdown_floor": 210.0,
-            "coverage": "High",
-        },
-        {
-            "name": "collar",
-            "label": "Collar",
-            "description": "Lower premium by financing puts with covered call upside caps.",
-            "estimated_cost": 2.10,
-            "max_drawdown_floor": 208.0,
-            "upside_cap": 228.0,
-            "coverage": "Balanced",
-        },
-        {
-            "name": "laddered_puts",
-            "label": "Laddered Puts",
-            "description": "Multiple maturities and strikes reduce roll concentration and smooth protection.",
-            "estimated_cost": 4.45,
-            "max_drawdown_floor": 205.0,
-            "coverage": "Layered",
-        },
-    ]
+    return StrategyTemplateService().catalog_items()
 
 
-def quick_recommendations() -> list[dict[str, str]]:
-    portfolio = portfolio_snapshot()
+def quick_recommendations(portfolio: dict[str, Any] | None = None) -> list[dict[str, str]]:
+    portfolio = portfolio or portfolio_snapshot()
     ltv_gap = (portfolio["margin_call_ltv"] - portfolio["ltv_ratio"]) * 100
     return [
         {
             "title": "Balanced hedge favored",
-            "summary": "A collar keeps the current LTV comfortably below the margin threshold while limiting upfront spend.",
+            "summary": "A 95% protective put balances margin-call protection with a lower upfront hedge cost.",
             "tone": "positive",
         },
         {
@@ -131,77 +109,53 @@ def option_chain() -> list[dict[str, Any]]:
     return rows
 
 
-def strategy_metrics(strategy_name: str, scenario_pct: int) -> dict[str, Any]:
+def strategy_metrics(
+    strategy_key: str,
+    scenario_pct: int,
+    *,
+    portfolio: dict[str, Any] | None = None,
+) -> dict[str, Any]:
+    catalog = strategy_catalog()
     strategy = next(
-        (item for item in strategy_catalog() if item["name"] == strategy_name),
-        strategy_catalog()[0],
+        (item for item in catalog if item.get("template_slug") == strategy_key or item.get("name") == strategy_key),
+        catalog[0],
     )
-    spot = demo_spot_price()
-    floor = float(strategy.get("max_drawdown_floor", spot * 0.95))
-    cap = strategy.get("upside_cap")
-    cost = float(strategy["estimated_cost"])
-    scenario_prices = [round(spot * (1 + pct / 100), 2) for pct in range(-25, 30, 5)]
-    benefits: list[float] = []
-    for price in scenario_prices:
-        payoff = max(floor - price, 0.0)
-        if isinstance(cap, (int, float)) and price > float(cap):
-            payoff -= price - float(cap)
-        benefits.append(round(payoff - cost, 2))
-    scenario_price = round(spot * (1 + scenario_pct / 100), 2)
-    unhedged_equity = scenario_price * 1_000 - 145_000.0
-    scenario_payoff = max(floor - scenario_price, 0.0)
-    capped_upside = 0.0
-    if isinstance(cap, (int, float)) and scenario_price > float(cap):
-        capped_upside = -(scenario_price - float(cap))
-    hedged_equity = unhedged_equity + scenario_payoff + capped_upside - cost * 1_000
-    waterfall_steps = [
-        ("Base equity", round(70_000.0, 2)),
-        ("Spot move", round((scenario_price - spot) * 1_000, 2)),
-        ("Option payoff", round(scenario_payoff * 1_000, 2)),
-        ("Call cap", round(capped_upside * 1_000, 2)),
-        ("Hedge cost", round(-cost * 1_000, 2)),
-        ("Net equity", round(hedged_equity, 2)),
-    ]
-    return {
-        "strategy": strategy,
-        "scenario_pct": scenario_pct,
-        "scenario_price": scenario_price,
-        "scenario_series": [
-            {"price": price, "benefit": benefit} for price, benefit in zip(scenario_prices, benefits, strict=True)
-        ],
-        "waterfall_steps": waterfall_steps,
-        "unhedged_equity": round(unhedged_equity, 2),
-        "hedged_equity": round(hedged_equity, 2),
-    }
+    portfolio = portfolio or portfolio_snapshot()
+    return strategy_metrics_from_snapshot(strategy, scenario_pct, portfolio)
+
+
+def split_page_panes(*, left_testid: str, right_testid: str) -> tuple[ui.column, ui.column]:
+    """Render responsive page panes with a desktop 1:2 flex split and stable test hooks."""
+    with ui.row().classes("w-full items-start gap-6 max-lg:flex-col lg:flex-nowrap"):
+        left = ui.column().classes("min-w-0 w-full gap-6 lg:flex-[1_1_0%]").props(f"data-testid={left_testid}")
+        right = ui.column().classes("min-w-0 w-full gap-6 lg:flex-[2_1_0%]").props(f"data-testid={right_testid}")
+    return left, right
 
 
 @contextmanager
-def dashboard_page(title: str, subtitle: str, current: str) -> Iterator[ui.column]:
+def dashboard_page(title: str, subtitle: str, current: str, workspace_id: str | None = None) -> Iterator[ui.column]:
     ui.colors(primary="#0f172a", secondary="#1e293b", accent="#0ea5e9")
-    with ui.column().classes("mx-auto w-full max-w-7xl gap-6 bg-slate-50 p-6 dark:bg-slate-950") as container:
+    # Header must be at page level, not inside container
     with ui.header(elevated=False).classes(
         "items-center justify-between border-b border-slate-200 bg-white/90 px-6 py-4 backdrop-blur dark:border-slate-800 dark:bg-slate-950/90"
     ):
         with ui.row().classes("items-center gap-3"):
             ui.icon("shield").classes("text-2xl text-sky-500")
             with ui.column().classes("gap-0"):
                 ui.label("Vault Dashboard").classes("text-lg font-bold text-slate-900 dark:text-slate-50")
                 ui.label("NiceGUI hedging cockpit").classes("text-xs text-slate-500 dark:text-slate-400")
         with ui.row().classes("items-center gap-2 max-sm:flex-wrap"):
-            for key, href, label in NAV_ITEMS:
+            for key, href, label in nav_items(workspace_id):
                 active = key == current
                 link_classes = "rounded-lg px-4 py-2 text-sm font-medium no-underline transition " + (
                     "bg-slate-900 text-white dark:bg-slate-100 dark:text-slate-900"
                     if active
                     else "text-slate-600 hover:bg-slate-100 dark:text-slate-300 dark:hover:bg-slate-800"
                 )
                 ui.link(label, href).classes(link_classes)
+    with ui.column().classes("w-full gap-6 bg-slate-50 p-6 dark:bg-slate-950") as container:
         with ui.row().classes("w-full items-end justify-between gap-4 max-md:flex-col max-md:items-start"):
             with ui.column().classes("gap-1"):
                 ui.label(title).classes("text-3xl font-bold text-slate-900 dark:text-slate-50")
@@ -209,6 +163,23 @@ def dashboard_page(title: str, subtitle: str, current: str) -> Iterator[ui.colum
    yield container
def render_workspace_recovery(title: str = "Workspace not found", message: str | None = None) -> None:
resolved_message = (
message or "The requested workspace is unavailable. Start a new workspace or return to the welcome page."
)
with ui.column().classes("mx-auto mt-24 w-full max-w-2xl gap-6 px-6 text-center"):
ui.icon("folder_off").classes("mx-auto text-6xl text-slate-400")
ui.label(title).classes("text-3xl font-bold text-slate-900 dark:text-slate-50")
ui.label(resolved_message).classes("text-base text-slate-500 dark:text-slate-400")
with ui.row().classes("mx-auto gap-3"):
ui.link("Get started", "/").classes(
"rounded-lg bg-slate-900 px-5 py-3 text-sm font-semibold text-white no-underline dark:bg-slate-100 dark:text-slate-900"
)
ui.link("Go to welcome page", "/").classes(
"rounded-lg border border-slate-300 px-5 py-3 text-sm font-semibold text-slate-700 no-underline dark:border-slate-700 dark:text-slate-200"
)
def recommendation_style(tone: str) -> str:
    return {
        "positive": "border-emerald-200 bg-emerald-50 dark:border-emerald-900/60 dark:bg-emerald-950/30",

app/pages/event_comparison.py (new file, +623 lines)
from __future__ import annotations
import logging
from fastapi.responses import RedirectResponse
from nicegui import ui
from app.domain.backtesting_math import asset_quantity_from_workspace_config
from app.models.workspace import get_workspace_repository
from app.pages.common import dashboard_page, split_page_panes
from app.services.event_comparison_ui import EventComparisonPageService
logger = logging.getLogger(__name__)
def validate_and_calculate_units(initial_value: float, entry_spot: float) -> tuple[float, str | None]:
"""Validate inputs and calculate underlying units.
Returns (units, error_message). If error_message is not None, units is 0.0.
"""
if initial_value <= 0:
return 0.0, "Initial portfolio value must be positive."
if entry_spot <= 0:
return 0.0, "Cannot calculate units: entry spot is invalid. Please select a valid preset."
return initial_value / entry_spot, None
def _chart_options(dates: tuple[str, ...], series: tuple[dict[str, object], ...]) -> dict:
return {
"tooltip": {"trigger": "axis"},
"legend": {"type": "scroll"},
"xAxis": {"type": "category", "data": list(dates)},
"yAxis": {"type": "value", "name": "Net value"},
"series": [
{
"name": item["name"],
"type": "line",
"smooth": True,
"data": item["values"],
}
for item in series
],
}
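`_chart_options` is a pure mapping from seeded series tuples to an ECharts line-chart config; a standalone copy makes the output shape concrete (series names and values here are hypothetical):

```python
def chart_options(dates: tuple[str, ...], series: tuple[dict[str, object], ...]) -> dict:
    # Mirrors _chart_options above: one smoothed line per named series,
    # category x-axis from the scenario dates, net value on the y-axis.
    return {
        "tooltip": {"trigger": "axis"},
        "legend": {"type": "scroll"},
        "xAxis": {"type": "category", "data": list(dates)},
        "yAxis": {"type": "value", "name": "Net value"},
        "series": [
            {"name": item["name"], "type": "line", "smooth": True, "data": item["values"]}
            for item in series
        ],
    }


opts = chart_options(("2020-02-20", "2020-02-21"), ({"name": "protective_put", "values": [100.0, 98.4]},))
# opts["series"][0] is the "protective_put" line with its two data points
```

Keeping this builder free of UI state is what lets the comparison results be re-rendered from cached report data without rerunning the backtest.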
@ui.page("/{workspace_id}/event-comparison")
def workspace_event_comparison_page(workspace_id: str) -> None:
repo = get_workspace_repository()
if not repo.workspace_exists(workspace_id):
return RedirectResponse(url="/", status_code=307)
_render_event_comparison_page(workspace_id=workspace_id)
def _render_event_comparison_page(workspace_id: str | None = None) -> None:
service = EventComparisonPageService()
preset_options = service.preset_options("GLD")
template_options = service.template_options("GLD")
repo = get_workspace_repository()
config = repo.load_portfolio_config(workspace_id) if workspace_id else None
default_preset_slug = str(preset_options[0]["slug"]) if preset_options else None
default_template_slugs = list(preset_options[0]["default_template_slugs"]) if preset_options else []
default_entry_spot = 100.0
if default_preset_slug is not None:
default_preview = service.preview_scenario(
preset_slug=default_preset_slug,
template_slugs=tuple(default_template_slugs),
underlying_units=1.0,
loan_amount=0.0,
margin_call_ltv=0.75,
)
default_entry_spot = default_preview.initial_portfolio.entry_spot
default_units = (
asset_quantity_from_workspace_config(config, entry_spot=default_entry_spot, symbol="GLD")
if config is not None and default_entry_spot > 0
else 1000.0
)
default_loan = float(config.loan_amount) if config else 68000.0
default_margin_call_ltv = float(config.margin_threshold) if config else 0.75
preset_select_options = {str(option["slug"]): str(option["label"]) for option in preset_options}
template_select_options = {str(option["slug"]): str(option["label"]) for option in template_options}
preset_lookup = {str(option["slug"]): option for option in preset_options}
with dashboard_page(
"Event Comparison",
"Thin BT-003A read-only UI over EventComparisonService with deterministic seeded GLD presets.",
"event-comparison",
workspace_id=workspace_id,
):
left_pane, right_pane = split_page_panes(
left_testid="event-comparison-left-pane",
right_testid="event-comparison-right-pane",
)
with left_pane:
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Comparison Form").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(
"Preset selection is deterministic and read-only in the sense that runs reuse seeded event windows and existing BT-003 ranking logic."
).classes("text-sm text-slate-500 dark:text-slate-400")
if workspace_id:
ui.label("Workspace defaults seed underlying units, loan amount, and margin threshold.").classes(
"text-sm text-slate-500 dark:text-slate-400"
)
preset_select = ui.select(
preset_select_options,
value=default_preset_slug,
label="Event preset",
).classes("w-full")
template_select = ui.select(
template_select_options,
value=default_template_slugs,
label="Strategy templates",
multiple=True,
).classes("w-full")
ui.label(
"Changing the preset resets strategy templates to that preset's default comparison set."
).classes("text-xs text-slate-500 dark:text-slate-400")
ui.label("Underlying units will be calculated from initial value ÷ entry spot.").classes(
"text-xs text-slate-500 dark:text-slate-400"
)
initial_value_input = ui.number(
"Initial portfolio value ($)", value=default_units * default_entry_spot, min=0.01, step=1000
).classes("w-full")
loan_input = ui.number("Loan amount", value=default_loan, min=0, step=1000).classes("w-full")
ltv_input = ui.number(
"Margin call LTV",
value=default_margin_call_ltv,
min=0.01,
max=0.99,
step=0.01,
).classes("w-full")
metadata_label = ui.label("").classes("text-sm text-slate-500 dark:text-slate-400")
scenario_label = ui.label("").classes("text-sm text-slate-500 dark:text-slate-400")
validation_label = ui.label("").classes("text-sm text-rose-600 dark:text-rose-300")
run_button = ui.button("Run comparison").props("color=primary")
selected_summary = ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
)
with right_pane:
result_panel = ui.column().classes("w-full gap-6")
syncing_controls = {"value": False}
def selected_template_slugs() -> tuple[str, ...]:
raw_value = template_select.value or []
if isinstance(raw_value, str):
return (raw_value,) if raw_value else ()
return tuple(str(item) for item in raw_value if item)
def render_selected_summary(entry_spot: float | None = None, entry_spot_error: str | None = None) -> None:
selected_summary.clear()
with selected_summary:
ui.label("Scenario Summary").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
# Calculate underlying units with validation
initial_value = float(initial_value_input.value or 0.0)
computed_units, units_error = (
validate_and_calculate_units(initial_value, entry_spot)
if entry_spot is not None
else (0.0, "Entry spot unavailable.")
)
with ui.grid(columns=1).classes("w-full gap-4 sm:grid-cols-2 lg:grid-cols-1 xl:grid-cols-2"):
cards = [
(
"Initial portfolio value",
f"${float(initial_value_input.value or 0.0):,.0f}",
),
("Templates", str(len(selected_template_slugs()))),
("Underlying units", f"{computed_units:,.0f}" if computed_units > 0 else ""),
("Loan amount", f"${float(loan_input.value or 0.0):,.0f}"),
("Margin call LTV", f"{float(ltv_input.value or 0.0):.1%}"),
("Entry spot", f"${entry_spot:,.2f}" if entry_spot is not None else "Unavailable"),
]
for label, value in cards:
with ui.card().classes(
"rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
):
ui.label(label).classes("text-sm text-slate-500 dark:text-slate-400")
ui.label(value).classes("text-xl font-bold text-slate-900 dark:text-slate-100")
# Show validation errors (units_error takes priority, then entry_spot_error)
display_error = units_error or entry_spot_error
if display_error:
tone_class = (
"text-rose-600 dark:text-rose-300"
if "must be positive" in display_error
else "text-amber-700 dark:text-amber-300"
)
ui.label(display_error).classes(f"text-sm {tone_class}")
def render_result_state(title: str, message: str, *, tone: str = "info") -> None:
tone_classes = {
"info": "border-sky-200 bg-sky-50 dark:border-sky-900/60 dark:bg-sky-950/30",
"warning": "border-amber-200 bg-amber-50 dark:border-amber-900/60 dark:bg-amber-950/30",
"error": "border-rose-200 bg-rose-50 dark:border-rose-900/60 dark:bg-rose-950/30",
}
text_classes = {
"info": "text-sky-800 dark:text-sky-200",
"warning": "text-amber-800 dark:text-amber-200",
"error": "text-rose-800 dark:text-rose-200",
}
result_panel.clear()
with result_panel:
with ui.card().classes(
f"w-full rounded-2xl border shadow-sm {tone_classes.get(tone, tone_classes['info'])}"
):
ui.label(title).classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(message).classes(f"text-sm {text_classes.get(tone, text_classes['info'])}")
def mark_results_stale() -> None:
render_result_state(
"Results out of date",
"Inputs changed. Run comparison again to refresh rankings and portfolio paths for the current scenario.",
tone="info",
)
def refresh_preview(*, reset_templates: bool = False, reseed_units: bool = False) -> str | None:
option = preset_lookup.get(str(preset_select.value or ""))
if option is None:
metadata_label.set_text("")
scenario_label.set_text("")
render_selected_summary(entry_spot=None)
return None
if reset_templates:
syncing_controls["value"] = True
try:
template_select.value = list(service.default_template_selection(str(option["slug"])))
finally:
syncing_controls["value"] = False
template_slugs = selected_template_slugs()
try:
# Get initial portfolio value from UI and derive entry spot
preview_initial_value = float(initial_value_input.value or 0.0)
preview_entry_spot = service.derive_entry_spot(
preset_slug=str(option["slug"]),
template_slugs=template_slugs,
)
# Validate and calculate underlying units
preview_units, units_error = validate_and_calculate_units(preview_initial_value, preview_entry_spot)
if units_error:
metadata_label.set_text(f"Preset: {option['label']} · {option['description']}")
scenario_label.set_text(units_error)
render_selected_summary(entry_spot=preview_entry_spot, entry_spot_error=units_error)
return units_error
if workspace_id and config is not None and reseed_units:
# Recalculate from workspace config
workspace_units = asset_quantity_from_workspace_config(
config,
entry_spot=preview_entry_spot,
symbol="GLD",
)
syncing_controls["value"] = True
try:
initial_value_input.value = workspace_units * preview_entry_spot
preview_units = workspace_units
preview_initial_value = workspace_units * preview_entry_spot
finally:
syncing_controls["value"] = False
scenario = service.preview_scenario(
preset_slug=str(option["slug"]),
template_slugs=template_slugs,
underlying_units=preview_units,
loan_amount=float(loan_input.value or 0.0),
margin_call_ltv=float(ltv_input.value or 0.0),
)
except (ValueError, KeyError) as exc:
metadata_label.set_text(f"Preset: {option['label']} · {option['description']}")
scenario_label.set_text(str(exc))
render_selected_summary(entry_spot=None, entry_spot_error=str(exc))
return str(exc)
except Exception:
logger.exception(
"Event comparison preview failed for workspace=%s preset=%s templates=%s initial_value=%s loan=%s margin_call_ltv=%s",
workspace_id,
preset_select.value,
selected_template_slugs(),
initial_value_input.value,
loan_input.value,
ltv_input.value,
)
message = "Event comparison preview failed. Please verify the seeded inputs and try again."
metadata_label.set_text(f"Preset: {option['label']} · {option['description']}")
scenario_label.set_text(message)
render_selected_summary(entry_spot=None, entry_spot_error=message)
return message
preset = service.event_preset_service.get_preset(str(option["slug"]))
metadata_label.set_text(f"Preset: {option['label']} · {option['description']}")
scenario_label.set_text(
"Scenario preview: "
f"{scenario.start_date.isoformat()} → {scenario.end_date.isoformat()}"
+ (
f" · Anchor date: {preset.anchor_date.isoformat()}"
if preset.anchor_date is not None
else " · Anchor date: none"
)
+ f" · Entry spot: ${scenario.initial_portfolio.entry_spot:,.2f}"
)
render_selected_summary(entry_spot=float(scenario.initial_portfolio.entry_spot))
return None
def render_report() -> None:
validation_label.set_text("")
result_panel.clear()
template_slugs = selected_template_slugs()
try:
# Get initial portfolio value and calculate underlying units with validation
initial_value = float(initial_value_input.value or 0.0)
# Get entry spot from preview
option = preset_lookup.get(str(preset_select.value or ""))
if option is None:
validation_label.set_text("Select a preset to run comparison.")
return
entry_spot = service.derive_entry_spot(
preset_slug=str(option["slug"]),
template_slugs=template_slugs,
)
# Validate and calculate underlying units
underlying_units, units_error = validate_and_calculate_units(initial_value, entry_spot)
if units_error:
validation_label.set_text(units_error)
render_result_state("Input validation failed", units_error, tone="warning")
return
report = service.run_read_only_comparison(
preset_slug=str(preset_select.value or ""),
template_slugs=template_slugs,
underlying_units=underlying_units,
loan_amount=float(loan_input.value or 0.0),
margin_call_ltv=float(ltv_input.value or 0.0),
)
except (ValueError, KeyError) as exc:
validation_label.set_text(str(exc))
render_result_state("Scenario validation failed", str(exc), tone="warning")
return
except Exception:
message = "Event comparison failed. Please verify the seeded inputs and try again."
logger.exception(
"Event comparison page run failed for workspace=%s preset=%s templates=%s initial_value=%s loan=%s margin_call_ltv=%s",
workspace_id,
preset_select.value,
selected_template_slugs(),
initial_value_input.value,
loan_input.value,
ltv_input.value,
)
validation_label.set_text(message)
render_result_state("Event comparison failed", message, tone="error")
return
preset = report.event_preset
scenario = report.scenario
metadata_label.set_text(
f"Preset: {preset.display_name} ({preset.event_type}) · Tags: {', '.join(preset.tags) or 'none'}"
)
scenario_label.set_text(
"Scenario dates used: "
f"{scenario.start_date.isoformat()} → {scenario.end_date.isoformat()} · "
f"Entry spot: ${scenario.initial_portfolio.entry_spot:,.2f}"
)
render_selected_summary(entry_spot=float(scenario.initial_portfolio.entry_spot))
chart_model = service.chart_model(report)
drilldown_options = service.drilldown_options(report)
initial_drilldown_slug = next(iter(drilldown_options), None)
with result_panel:
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Scenario Results").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
with ui.grid(columns=4).classes("w-full gap-4 max-lg:grid-cols-2 max-sm:grid-cols-1"):
cards = [
("Symbol", scenario.symbol),
("Event window", f"{preset.window_start.isoformat()} → {preset.window_end.isoformat()}"),
(
"Anchor date",
preset.anchor_date.isoformat() if preset.anchor_date is not None else "None",
),
(
"Scenario dates used",
f"{scenario.start_date.isoformat()} → {scenario.end_date.isoformat()}",
),
("Initial value", f"${float(initial_value_input.value or 0.0):,.0f}"),
("Underlying units", f"{scenario.initial_portfolio.underlying_units:,.0f}"),
("Loan amount", f"${scenario.initial_portfolio.loan_amount:,.0f}"),
("Margin call LTV", f"{scenario.initial_portfolio.margin_call_ltv:.1%}"),
("Templates compared", str(len(report.rankings))),
]
for label, value in cards:
with ui.card().classes(
"rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
):
ui.label(label).classes("text-sm text-slate-500 dark:text-slate-400")
ui.label(value).classes("text-xl font-bold text-slate-900 dark:text-slate-100")
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Ranked Results").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.table(
columns=[
{"name": "rank", "label": "Rank", "field": "rank", "align": "right"},
{"name": "template_name", "label": "Template", "field": "template_name", "align": "left"},
{
"name": "survived_margin_call",
"label": "Survived margin call",
"field": "survived_margin_call",
"align": "center",
},
{
"name": "margin_call_days_hedged",
"label": "Hedged margin call days",
"field": "margin_call_days_hedged",
"align": "right",
},
{
"name": "max_ltv_hedged",
"label": "Max hedged LTV",
"field": "max_ltv_hedged",
"align": "right",
},
{"name": "hedge_cost", "label": "Hedge cost", "field": "hedge_cost", "align": "right"},
{
"name": "final_equity",
"label": "Final equity",
"field": "final_equity",
"align": "right",
},
],
rows=[
{
"rank": item.rank,
"template_name": item.template_name,
"survived_margin_call": "Yes" if item.survived_margin_call else "No",
"margin_call_days_hedged": item.margin_call_days_hedged,
"max_ltv_hedged": f"{item.max_ltv_hedged:.1%}",
"hedge_cost": f"${item.hedge_cost:,.0f}",
"final_equity": f"${item.final_equity:,.0f}",
}
for item in report.rankings
],
row_key="rank",
).classes("w-full")
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Strategy Drilldown").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(
"Select a ranked strategy to inspect margin-call pressure, payoff realization, and the full seeded daily path."
).classes("text-sm text-slate-500 dark:text-slate-400")
drilldown_select = ui.select(
drilldown_options,
value=initial_drilldown_slug,
label="Strategy drilldown",
).classes("w-full")
drilldown_container = ui.column().classes("w-full gap-4")
def render_drilldown() -> None:
drilldown_container.clear()
if drilldown_select.value is None:
return
drilldown = service.drilldown_model(report, template_slug=str(drilldown_select.value))
breach_dates = ", ".join(drilldown.breach_dates) if drilldown.breach_dates else "None"
worst_ltv_point = (
f"{drilldown.worst_ltv_date} · {drilldown.worst_ltv_hedged:.1%}"
if drilldown.worst_ltv_date is not None
else "Unavailable"
)
with drilldown_container:
ui.label(f"Selected strategy: {drilldown.template_name}").classes(
"text-lg font-semibold text-slate-900 dark:text-slate-100"
)
ui.label(
f"Rank #{drilldown.rank} · {'Survived margin call' if drilldown.survived_margin_call else 'Breached margin threshold'}"
).classes("text-sm text-slate-500 dark:text-slate-400")
with ui.grid(columns=4).classes("w-full gap-4 max-lg:grid-cols-2 max-sm:grid-cols-1"):
cards = [
("Margin-call days", str(drilldown.margin_call_days_hedged)),
("Payoff realized", f"${drilldown.total_option_payoff_realized:,.0f}"),
("Hedge cost", f"${drilldown.hedge_cost:,.0f}"),
("Final equity", f"${drilldown.final_equity:,.0f}"),
]
for label, value in cards:
with ui.card().classes(
"rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
):
ui.label(label).classes("text-sm text-slate-500 dark:text-slate-400")
ui.label(value).classes("text-xl font-bold text-slate-900 dark:text-slate-100")
with ui.grid(columns=2).classes("w-full gap-4 max-md:grid-cols-1"):
with ui.card().classes(
"rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
):
ui.label("Worst LTV point").classes("text-sm text-slate-500 dark:text-slate-400")
ui.label(worst_ltv_point).classes(
"text-xl font-bold text-slate-900 dark:text-slate-100"
)
with ui.card().classes(
"rounded-xl border border-amber-200 bg-amber-50 p-4 shadow-none dark:border-amber-900/60 dark:bg-amber-950/30"
):
ui.label("Margin threshold breach dates").classes(
"text-sm text-amber-700 dark:text-amber-300"
)
ui.label(breach_dates).classes(
"text-base font-semibold text-amber-800 dark:text-amber-200"
)
with ui.card().classes(
"w-full rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
):
ui.label("Daily path details").classes(
"text-base font-semibold text-slate-900 dark:text-slate-100"
)
ui.table(
columns=[
{"name": "date", "label": "Date", "field": "date", "align": "left"},
{
"name": "spot_close",
"label": "Spot",
"field": "spot_close",
"align": "right",
},
{
"name": "net_portfolio_value",
"label": "Net hedged",
"field": "net_portfolio_value",
"align": "right",
},
{
"name": "option_market_value",
"label": "Option value",
"field": "option_market_value",
"align": "right",
},
{
"name": "realized_option_cashflow",
"label": "Payoff realized",
"field": "realized_option_cashflow",
"align": "right",
},
{
"name": "ltv_hedged",
"label": "Hedged LTV",
"field": "ltv_hedged",
"align": "right",
},
{
"name": "margin_call_hedged",
"label": "Breach",
"field": "margin_call_hedged",
"align": "center",
},
],
rows=[
{
"date": row.date,
"spot_close": f"${row.spot_close:,.2f}",
"net_portfolio_value": f"${row.net_portfolio_value:,.0f}",
"option_market_value": f"${row.option_market_value:,.0f}",
"realized_option_cashflow": f"${row.realized_option_cashflow:,.0f}",
"ltv_hedged": f"{row.ltv_hedged:.1%}",
"margin_call_hedged": "Yes" if row.margin_call_hedged else "No",
}
for row in drilldown.rows
],
row_key="date",
).classes("w-full")
drilldown_select.on_value_change(lambda _: render_drilldown())
render_drilldown()
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Portfolio Value Paths").classes(
"text-lg font-semibold text-slate-900 dark:text-slate-100"
)
ui.label(
"Baseline series shows the unhedged collateral value path for the same seeded event window."
).classes("text-sm text-slate-500 dark:text-slate-400")
ui.echart(
_chart_options(
chart_model.dates,
tuple({"name": item.name, "values": list(item.values)} for item in chart_model.series),
)
).classes("h-96 w-full")
def on_preset_change() -> None:
if syncing_controls["value"]:
return
validation_label.set_text("")
preview_error = refresh_preview(reset_templates=True, reseed_units=False)
if preview_error:
validation_label.set_text(preview_error)
render_result_state("Scenario validation failed", preview_error, tone="warning")
else:
mark_results_stale()
def on_preview_input_change() -> None:
if syncing_controls["value"]:
return
validation_label.set_text("")
preview_error = refresh_preview()
if preview_error:
validation_label.set_text(preview_error)
render_result_state("Scenario validation failed", preview_error, tone="warning")
else:
mark_results_stale()
preset_select.on_value_change(lambda _: on_preset_change())
template_select.on_value_change(lambda _: on_preview_input_change())
initial_value_input.on_value_change(lambda _: on_preview_input_change())
loan_input.on_value_change(lambda _: on_preview_input_change())
ltv_input.on_value_change(lambda _: on_preview_input_change())
run_button.on_click(lambda: render_report())
refresh_preview(reset_templates=False, reseed_units=False)
render_report()


@@ -1,24 +1,35 @@
 from __future__ import annotations

+import logging
+
+from fastapi.responses import RedirectResponse
 from nicegui import ui

+from app.domain.portfolio_math import resolve_portfolio_spot_from_quote
+from app.models.workspace import get_workspace_repository
 from app.pages.common import (
     dashboard_page,
     demo_spot_price,
-    strategy_catalog,
+    portfolio_snapshot,
+    split_page_panes,
     strategy_metrics,
 )
+from app.services.runtime import get_data_service
+from app.services.strategy_templates import StrategyTemplateService
+
+logger = logging.getLogger(__name__)


 def _cost_benefit_options(metrics: dict) -> dict:
     return {
         "tooltip": {"trigger": "axis"},
+        "grid": {"left": 64, "right": 24, "top": 24, "bottom": 48},
         "xAxis": {
             "type": "category",
             "data": [f"${point['price']:.0f}" for point in metrics["scenario_series"]],
-            "name": "GLD spot",
+            "name": "Collateral spot",
         },
-        "yAxis": {"type": "value", "name": "Net benefit / oz"},
+        "yAxis": {"type": "value", "name": "Net hedge benefit / oz"},
         "series": [
             {
                 "type": "bar",
@@ -26,6 +37,11 @@ def _cost_benefit_options(metrics: dict) -> dict:
                 "itemStyle": {
                     "color": "#0ea5e9",
                 },
+                "markLine": {
+                    "symbol": "none",
+                    "lineStyle": {"color": "#94a3b8", "type": "dashed"},
+                    "data": [{"yAxis": 0}],
+                },
             }
         ],
     }
@@ -33,98 +49,232 @@ def _cost_benefit_options(metrics: dict) -> dict:
 def _waterfall_options(metrics: dict) -> dict:
     steps = metrics["waterfall_steps"]
-    running = 0.0
-    base: list[float] = []
-    values: list[float] = []
-    for index, (_, amount) in enumerate(steps):
-        if index == 0:
-            base.append(0)
-            values.append(amount)
-            running = amount
-        elif index == len(steps) - 1:
-            base.append(0)
-            values.append(amount)
-        else:
-            base.append(running)
-            values.append(amount)
-            running += amount
+    values: list[dict[str, object]] = []
+    for label, amount in steps:
+        color = "#0ea5e9" if label == "Net equity" else ("#22c55e" if amount >= 0 else "#ef4444")
+        values.append({"value": amount, "itemStyle": {"color": color}})
     return {
         "tooltip": {"trigger": "axis", "axisPointer": {"type": "shadow"}},
+        "grid": {"left": 80, "right": 24, "top": 24, "bottom": 48},
         "xAxis": {"type": "category", "data": [label for label, _ in steps]},
         "yAxis": {"type": "value", "name": "USD"},
         "series": [
             {
                 "type": "bar",
-                "stack": "total",
-                "data": base,
-                "itemStyle": {"color": "rgba(0,0,0,0)"},
-            },
-            {
-                "type": "bar",
-                "stack": "total",
                 "data": values,
-                "itemStyle": {
-                    "color": "#22c55e",
-                },
+                "label": {"show": True, "position": "top", "formatter": "{c}"},
             },
         ],
     }
-@ui.page("/hedge")
-def hedge_page() -> None:
-    strategies = strategy_catalog()
-    strategy_map = {strategy["label"]: strategy["name"] for strategy in strategies}
-    selected = {"strategy": strategies[0]["name"], "scenario_pct": 0}
+@ui.page("/{workspace_id}/hedge")
+async def workspace_hedge_page(workspace_id: str) -> None:
+    repo = get_workspace_repository()
+    if not repo.workspace_exists(workspace_id):
+        return RedirectResponse(url="/", status_code=307)
+    await _render_hedge_page(workspace_id=workspace_id)
+
+
+async def _resolve_hedge_spot(workspace_id: str | None = None) -> tuple[dict[str, float], str, str]:
+    """Resolve hedge page spot price using the same quote-unit seam as overview."""
+    repo = get_workspace_repository()
+    config = repo.load_portfolio_config(workspace_id) if workspace_id else None
+    if config is None:
+        return {"spot_price": demo_spot_price()}, "demo", ""
+    try:
+        data_service = get_data_service()
+        underlying = config.underlying or "GLD"
+        quote = await data_service.get_quote(underlying)
+        spot, source, updated_at = resolve_portfolio_spot_from_quote(config, quote, fallback_symbol=underlying)
+        portfolio = portfolio_snapshot(config, runtime_spot_price=spot)
+        return portfolio, source, updated_at
+    except Exception as exc:
+        logger.warning("Falling back to configured hedge spot for workspace %s: %s", workspace_id, exc)
+        portfolio = portfolio_snapshot(config)
+        return portfolio, "configured_entry_price", ""
+
+
+async def _render_hedge_page(workspace_id: str | None = None) -> None:
+    portfolio, quote_source, quote_updated_at = await _resolve_hedge_spot(workspace_id)
+    template_service = StrategyTemplateService()
+    strategies_state = {"items": template_service.catalog_items()}
+
+    def strategy_map() -> dict[str, str]:
+        return {strategy["label"]: strategy["template_slug"] for strategy in strategies_state["items"]}
+
+    selected = {
+        "strategy": strategies_state["items"][0]["template_slug"],
+        "label": strategies_state["items"][0]["label"],
+        "scenario_pct": 0,
+    }
+    display_mode = portfolio.get("display_mode", "XAU")
+    if display_mode == "GLD":
+        spot_unit = "/share"
+        spot_desc = "GLD share price"
+    else:
+        spot_unit = "/oz"
+        spot_desc = "converted collateral spot"
+    if quote_source == "configured_entry_price":
+        spot_label = f"Current spot reference: ${portfolio['spot_price']:,.2f}{spot_unit} (configured entry price)"
+    else:
+        spot_label = (
+            f"Current spot reference: ${portfolio['spot_price']:,.2f}{spot_unit} ({spot_desc} via {quote_source})"
+        )
+    updated_label = f"Quote timestamp: {quote_updated_at}" if quote_updated_at else "Quote timestamp: unavailable"
+    # Get underlying for display
+    underlying = "GLD"
+    if workspace_id:
+        try:
+            repo = get_workspace_repository()
+            config = repo.load_portfolio_config(workspace_id)
+            underlying = config.underlying or "GLD"
+        except Exception:
+            pass
     with dashboard_page(
         "Hedge Analysis",
-        "Compare hedge structures across scenarios, visualize cost-benefit tradeoffs, and inspect net equity impacts.",
+        f"Compare hedge structures across scenarios, visualize cost-benefit tradeoffs, and inspect net equity impacts for {underlying}.",
         "hedge",
+        workspace_id=workspace_id,
     ):
-        with ui.row().classes("w-full gap-6 max-lg:flex-col"):
+        with ui.row().classes("w-full items-center justify-between gap-4 max-md:flex-col max-md:items-start"):
+            ui.label(f"Active underlying: {underlying}").classes("text-sm text-slate-500 dark:text-slate-400")
+        left_pane, right_pane = split_page_panes(
+            left_testid="hedge-left-pane",
+            right_testid="hedge-right-pane",
+        )
+        with left_pane:
             with ui.card().classes(
                 "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
             ):
                 ui.label("Strategy Controls").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
-                selector = ui.select(strategy_map, value=selected["strategy"], label="Strategy selector").classes(
-                    "w-full"
-                )
+                selector = ui.select(
+                    list(strategy_map().keys()), value=selected["label"], label="Strategy selector"
+                ).classes("w-full")
                 slider_value = ui.label("Scenario move: +0%").classes("text-sm text-slate-500 dark:text-slate-400")
                 slider = ui.slider(min=-25, max=25, value=0, step=5).classes("w-full")
-                ui.label(f"Current spot reference: ${demo_spot_price():,.2f}").classes(
-                    "text-sm text-slate-500 dark:text-slate-400"
-                )
+                ui.label(spot_label).classes("text-sm text-slate-500 dark:text-slate-400")
+                ui.label(updated_label).classes("text-xs text-slate-500 dark:text-slate-400")
+                if workspace_id:
+                    ui.label(f"Workspace route: /{workspace_id}/hedge").classes(
+                        "text-xs text-slate-500 dark:text-slate-400"
+                    )
+                else:
+                    ui.label(f"Demo spot reference: ${demo_spot_price():,.2f}").classes(
+                        "text-xs text-slate-500 dark:text-slate-400"
+                    )
+            with (
+                ui.card()
+                .classes(
+                    "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
+                )
+                .props("data-testid=strategy-builder-card")
+            ):
+                ui.label("Strategy Builder").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
+                ui.label(
+                    "Save a custom protective put or equal-weight two-leg ladder for reuse across hedge, backtests, and event comparison."
+                ).classes("text-sm text-slate-500 dark:text-slate-400")
+                builder_type_options = {"Protective put": "protective_put", "Two-leg ladder": "laddered_put"}
+                builder_name = ui.input("Template name", placeholder="Crash Guard 97").classes("w-full")
+                builder_type = ui.select(
+                    list(builder_type_options.keys()),
+                    value="Protective put",
+                    label="Strategy type",
+                ).classes("w-full")
+                builder_expiry_days = ui.number("Expiration days", value=365, min=30, step=30).classes("w-full")
+                builder_primary_strike = ui.number(
+                    "Primary strike (% of spot)",
+                    value=100,
+                    min=1,
+                    max=150,
+                    step=1,
+                ).classes("w-full")
+                builder_secondary_strike = ui.number(
+                    "Secondary strike (% of spot)",
+                    value=95,
+                    min=1,
+                    max=150,
+                    step=1,
+                ).classes("w-full")
+                ui.label("Two-leg ladders currently save with equal 50/50 weights.").classes(
+                    "text-xs text-slate-500 dark:text-slate-400"
+                )
+                builder_status = ui.label("").classes("text-sm text-slate-600 dark:text-slate-300")
+                save_template_button = ui.button("Save template").props("color=primary outline")
             summary = ui.card().classes(
                 "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
             )
-            charts_row = ui.row().classes("w-full gap-6 max-lg:flex-col")
-            with charts_row:
-                cost_chart = ui.echart(
-                    _cost_benefit_options(strategy_metrics(selected["strategy"], selected["scenario_pct"]))
-                ).classes(
-                    "h-96 w-full rounded-2xl border border-slate-200 bg-white p-4 shadow-sm dark:border-slate-800 dark:bg-slate-900"
-                )
-                waterfall_chart = ui.echart(
-                    _waterfall_options(strategy_metrics(selected["strategy"], selected["scenario_pct"]))
-                ).classes(
-                    "h-96 w-full rounded-2xl border border-slate-200 bg-white p-4 shadow-sm dark:border-slate-800 dark:bg-slate-900"
-                )
+        with right_pane:
+            scenario_results = ui.card().classes(
+                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
+            )
+            with ui.row().classes("w-full gap-6 max-xl:flex-col"):
+                initial_metrics = strategy_metrics(selected["strategy"], selected["scenario_pct"], portfolio=portfolio)
+                cost_chart = ui.echart(_cost_benefit_options(initial_metrics)).classes(
+                    "h-96 w-full rounded-2xl border border-slate-200 bg-white p-4 shadow-sm dark:border-slate-800 dark:bg-slate-900"
+                )
+                waterfall_chart = ui.echart(_waterfall_options(initial_metrics)).classes(
+                    "h-96 w-full rounded-2xl border border-slate-200 bg-white p-4 shadow-sm dark:border-slate-800 dark:bg-slate-900"
+                )
+    syncing_controls = {"value": False}
+
+    def refresh_available_strategies() -> None:
+        strategies_state["items"] = template_service.catalog_items()
+        options = strategy_map()
+        syncing_controls["value"] = True
+        try:
+            selector.options = list(options.keys())
+            if selected["label"] not in options:
+                first_label = next(iter(options))
+                selected["label"] = first_label
+                selected["strategy"] = options[first_label]
+                selector.value = first_label
+            selector.update()
+        finally:
+            syncing_controls["value"] = False
+
     def render_summary() -> None:
-        metrics = strategy_metrics(selected["strategy"], selected["scenario_pct"])
+        metrics = strategy_metrics(selected["strategy"], selected["scenario_pct"], portfolio=portfolio)
         strategy = metrics["strategy"]
+        # Display mode-aware labels
+        if display_mode == "GLD":
+            weight_unit = "shares"
+            price_unit = "/share"
+            hedge_cost_unit = "/share"
+        else:
+            weight_unit = "oz"
+            price_unit = "/oz"
+            hedge_cost_unit = "/oz"
         summary.clear()
         with summary:
             ui.label("Scenario Summary").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
-            with ui.grid(columns=2).classes("w-full gap-4 max-sm:grid-cols-1"):
+            ui.label(f"Selected template: {strategy['label']}").classes(
+                "text-sm text-slate-500 dark:text-slate-400"
+            )
+            ui.label(strategy["description"]).classes("text-sm text-slate-600 dark:text-slate-300")
+            with ui.grid(columns=1).classes("w-full gap-4 sm:grid-cols-2 lg:grid-cols-1 xl:grid-cols-2"):
                 cards = [
-                    ("Scenario spot", f"${metrics['scenario_price']:,.2f}"),
-                    ("Hedge cost", f"${strategy['estimated_cost']:,.2f}/oz"),
-                    ("Unhedged equity", f"${metrics['unhedged_equity']:,.0f}"),
-                    ("Hedged equity", f"${metrics['hedged_equity']:,.0f}"),
+                    ("Start value", f"${portfolio['gold_value']:,.0f}"),
+                    ("Start price", f"${portfolio['spot_price']:,.2f}{price_unit}"),
+                    ("Weight", f"{portfolio['gold_units']:,.0f} {weight_unit}"),
+                    ("Loan amount", f"${portfolio['loan_amount']:,.0f}"),
+                    ("Margin call LTV", f"{portfolio['margin_call_ltv']:.1%}"),
+                    ("Monthly hedge budget", f"${portfolio['hedge_budget']:,.0f}"),
                 ]
                 for label, value in cards:
                     with ui.card().classes(
@@ -132,15 +282,38 @@ def hedge_page() -> None:
                     ):
                         ui.label(label).classes("text-sm text-slate-500 dark:text-slate-400")
                         ui.label(value).classes("text-2xl font-bold text-slate-900 dark:text-slate-100")
-            ui.label(strategy["description"]).classes("text-sm text-slate-600 dark:text-slate-300")
-        cost_chart.options = _cost_benefit_options(metrics)
+        scenario_results.clear()
+        with scenario_results:
+            ui.label("Scenario Results").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
+            with ui.grid(columns=2).classes("w-full gap-4 max-md:grid-cols-1"):
+                result_cards = [
+                    ("Scenario spot", f"${metrics['scenario_price']:,.2f}{price_unit}"),
+                    ("Hedge cost", f"${float(strategy.get('estimated_cost', 0.0)):,.2f}{hedge_cost_unit}"),
+                    ("Unhedged equity", f"${metrics['unhedged_equity']:,.0f}"),
+                    ("Hedged equity", f"${metrics['hedged_equity']:,.0f}"),
+                    ("Net hedge benefit", f"${metrics['hedged_equity'] - metrics['unhedged_equity']:,.0f}"),
+                    ("Scenario move", f"{selected['scenario_pct']:+d}%"),
+                ]
+                for label, value in result_cards:
+                    with ui.card().classes(
+                        "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
+                    ):
+                        ui.label(label).classes("text-sm text-slate-500 dark:text-slate-400")
+                        ui.label(value).classes("text-2xl font-bold text-slate-900 dark:text-slate-100")
+        cost_chart.options.clear()
+        cost_chart.options.update(_cost_benefit_options(metrics))
         cost_chart.update()
-        waterfall_chart.options = _waterfall_options(metrics)
+        waterfall_chart.options.clear()
+        waterfall_chart.options.update(_waterfall_options(metrics))
         waterfall_chart.update()

     def refresh_from_selector(event) -> None:
-        selected["strategy"] = event.value
+        if syncing_controls["value"]:
+            return
+        selected["label"] = str(event.value)
+        selected["strategy"] = strategy_map()[selected["label"]]
         render_summary()

     def refresh_from_slider(event) -> None:
@@ -149,6 +322,43 @@ def hedge_page() -> None:
         slider_value.set_text(f"Scenario move: {sign}{selected['scenario_pct']}%")
         render_summary()

+    def save_template() -> None:
+        builder_status.set_text("")
+        try:
+            builder_kind = builder_type_options[str(builder_type.value)]
+            strikes = (float(builder_primary_strike.value or 0.0) / 100.0,)
+            weights: tuple[float, ...] | None = None
+            if builder_kind == "laddered_put":
+                strikes = (
+                    float(builder_primary_strike.value or 0.0) / 100.0,
+                    float(builder_secondary_strike.value or 0.0) / 100.0,
+                )
+                weights = (0.5, 0.5)
+            template = template_service.create_custom_template(
+                display_name=str(builder_name.value or ""),
+                template_kind=builder_kind,
+                target_expiry_days=int(builder_expiry_days.value or 0),
+                strike_pcts=strikes,
+                weights=weights,
+            )
+        except (ValueError, KeyError) as exc:
+            builder_status.set_text(str(exc))
+            return
+        refresh_available_strategies()
+        selected["label"] = template.display_name
+        selected["strategy"] = template.slug
+        syncing_controls["value"] = True
+        try:
+            selector.value = template.display_name
+            selector.update()
+        finally:
+            syncing_controls["value"] = False
+        builder_status.set_text(
+            f"Saved template {template.display_name}. Reusable on hedge, backtests, and event comparison."
+        )
+        render_summary()
+
     selector.on_value_change(refresh_from_selector)
     slider.on_value_change(refresh_from_slider)
+    save_template_button.on_click(lambda: save_template())
     render_summary()


@@ -5,80 +5,90 @@ from typing import Any
 from nicegui import ui

 from app.components import GreeksTable
-from app.pages.common import dashboard_page, strategy_catalog
+from app.pages.common import dashboard_page, split_page_panes, strategy_catalog
 from app.services.runtime import get_data_service


 @ui.page("/options")
 async def options_page() -> None:
-    chain_data = await get_data_service().get_options_chain("GLD")
-    chain = list(chain_data.get("rows") or [*chain_data.get("calls", []), *chain_data.get("puts", [])])
-    expiries = list(chain_data.get("expirations") or sorted({row["expiry"] for row in chain}))
-    strike_values = sorted({float(row["strike"]) for row in chain})
-
-    selected_expiry = {"value": expiries[0] if expiries else None}
-    strike_range = {
-        "min": strike_values[0] if strike_values else 0.0,
-        "max": strike_values[-1] if strike_values else 0.0,
+    data_service = get_data_service()
+    expirations_data = await data_service.get_option_expirations("GLD")
+    expiries = list(expirations_data.get("expirations") or [])
+    default_expiry = expiries[0] if expiries else None
+    chain_data = await data_service.get_options_chain_for_expiry("GLD", default_expiry)
+    chain_state = {
+        "data": chain_data,
+        "rows": list(chain_data.get("rows") or [*chain_data.get("calls", []), *chain_data.get("puts", [])]),
     }
+    selected_expiry = {"value": chain_data.get("selected_expiry") or default_expiry}
     selected_strategy = {"value": strategy_catalog()[0]["label"]}
     chosen_contracts: list[dict[str, Any]] = []

+    def strike_bounds(rows: list[dict[str, Any]]) -> tuple[float, float]:
+        strike_values = sorted({float(row["strike"]) for row in rows})
+        if not strike_values:
+            return 0.0, 0.0
+        return strike_values[0], strike_values[-1]
+
+    initial_min_strike, initial_max_strike = strike_bounds(chain_state["rows"])
+    strike_range = {"min": initial_min_strike, "max": initial_max_strike}
+
     with dashboard_page(
         "Options Chain",
         "Browse GLD contracts, filter by expiry and strike range, inspect Greeks, and attach contracts to hedge workflows.",
         "options",
     ):
-        with ui.row().classes("w-full gap-6 max-lg:flex-col"):
+        left_pane, right_pane = split_page_panes(
+            left_testid="options-left-pane",
+            right_testid="options-right-pane",
+        )
+        with left_pane:
             with ui.card().classes(
                 "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
             ):
                 ui.label("Filters").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
                 expiry_select = ui.select(expiries, value=selected_expiry["value"], label="Expiry").classes("w-full")
-                min_strike = ui.number(
-                    "Min strike",
-                    value=strike_range["min"],
-                    min=strike_values[0] if strike_values else 0.0,
-                    max=strike_values[-1] if strike_values else 0.0,
-                    step=5,
-                ).classes("w-full")
-                max_strike = ui.number(
-                    "Max strike",
-                    value=strike_range["max"],
-                    min=strike_values[0] if strike_values else 0.0,
-                    max=strike_values[-1] if strike_values else 0.0,
-                    step=5,
-                ).classes("w-full")
+                min_strike = ui.number("Min strike", value=strike_range["min"], step=5).classes("w-full")
+                max_strike = ui.number("Max strike", value=strike_range["max"], step=5).classes("w-full")
                 strategy_select = ui.select(
                     [item["label"] for item in strategy_catalog()],
                     value=selected_strategy["value"],
                     label="Add to hedge strategy",
                 ).classes("w-full")
-                source_label = f"Source: {chain_data.get('source', 'unknown')}"
-                if chain_data.get("updated_at"):
-                    source_label += f" · Updated {chain_data['updated_at']}"
-                ui.label(source_label).classes("text-xs text-slate-500 dark:text-slate-400")
-                if chain_data.get("error"):
-                    ui.label(f"Options data unavailable: {chain_data['error']}").classes(
-                        "text-xs text-amber-700 dark:text-amber-300"
-                    )
+                source_html = ui.html("").classes("text-xs text-slate-500 dark:text-slate-400")
+                error_html = ui.html("").classes("text-xs text-amber-700 dark:text-amber-300")
+                loading_html = ui.html("").classes("text-xs text-sky-700 dark:text-sky-300")
             selection_card = ui.card().classes(
                 "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
             )
-            chain_table = ui.html("").classes("w-full")
-            greeks = GreeksTable([])
+        with right_pane:
+            chain_table = ui.html("").classes("w-full")
+            with ui.row().classes("w-full gap-6 max-xl:flex-col"):
+                greeks = GreeksTable([])
+                quick_add = ui.card().classes(
+                    "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
+                )

+    def sync_status() -> None:
+        current_data = chain_state["data"]
+        source_label = f"Source: {current_data.get('source', 'unknown')}"
+        if current_data.get("updated_at"):
+            source_label += f" · Updated {current_data['updated_at']}"
+        source_html.content = source_label
+        source_html.update()
+        error_message = current_data.get("error") or expirations_data.get("error")
+        error_html.content = f"Options data unavailable: {error_message}" if error_message else ""
+        error_html.update()
+
     def filtered_rows() -> list[dict[str, Any]]:
-        return [
-            row
+        if not selected_expiry["value"]:
+            return []
+        return [
+            row for row in chain_state["rows"] if strike_range["min"] <= float(row["strike"]) <= strike_range["max"]
for row in chain
if row["expiry"] == selected_expiry["value"]
and strike_range["min"] <= float(row["strike"]) <= strike_range["max"]
] ]
def render_selection() -> None: def render_selection() -> None:
@@ -91,19 +101,21 @@ async def options_page() -> None:
        if not chosen_contracts:
            ui.label("No contracts added yet.").classes("text-sm text-slate-500 dark:text-slate-400")
            return
        with ui.column().classes("w-full gap-3"):
            for contract in chosen_contracts[-3:]:
                with ui.card().classes(
                    "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
                ):
                    ui.label(contract["symbol"]).classes("font-semibold text-slate-900 dark:text-slate-100")
                    ui.label(
                        f"Premium ${float(contract['premium']):.2f} · IV {float(contract.get('impliedVolatility', 0.0)):.1%}"
                    ).classes("text-sm text-slate-600 dark:text-slate-300")

    def add_to_strategy(contract: dict[str, Any]) -> None:
        chosen_contracts.append(contract)
        render_selection()
        greeks.set_options(chosen_contracts[-6:])
        ui.notify(f"Added {contract['symbol']} to {selected_strategy['value']}", color="positive")

    def render_chain() -> None:
        rows = filtered_rows()
@@ -133,7 +145,7 @@ async def options_page() -> None:
                <td class='px-4 py-3 text-slate-600 dark:text-slate-300'>${float(row['bid']):.2f} / ${float(row['ask']):.2f}</td>
                <td class='px-4 py-3 text-slate-600 dark:text-slate-300'>${float(row.get('lastPrice', row.get('premium', 0.0))):.2f}</td>
                <td class='px-4 py-3 text-slate-600 dark:text-slate-300'>{float(row.get('impliedVolatility', 0.0)):.1%}</td>
                <td class='px-4 py-3 text-slate-600 dark:text-slate-300'>Δ {float(row.get('delta', 0.0)):+.3f} · Γ {float(row.get('gamma', 0.0)):.3f} · Θ {float(row.get('theta', 0.0)):+.3f} · V {float(row.get('vega', 0.0)):.3f}</td>
                <td class='px-4 py-3 text-sky-600 dark:text-sky-300'>Use quick-add buttons below</td>
            </tr>
            """ for row in rows)
@@ -160,34 +172,52 @@ async def options_page() -> None:
                    f"Add {row['type'].upper()} {float(row['strike']):.0f}",
                    on_click=lambda _, contract=row: add_to_strategy(contract),
                ).props("outline color=primary")
        greeks.set_options(chosen_contracts[-6:] if chosen_contracts else rows[:6])

    async def load_expiry_chain(expiry: str | None) -> None:
        selected_expiry["value"] = expiry
        loading_html.content = "Loading selected expiry…" if expiry else ""
        loading_html.update()
        next_chain = await data_service.get_options_chain_for_expiry("GLD", expiry)
        chain_state["data"] = next_chain
        chain_state["rows"] = list(
            next_chain.get("rows") or [*next_chain.get("calls", []), *next_chain.get("puts", [])]
        )
        min_value, max_value = strike_bounds(chain_state["rows"])
        strike_range["min"] = min_value
        strike_range["max"] = max_value
        min_strike.value = min_value
        max_strike.value = max_value
        loading_html.content = ""
        loading_html.update()
        sync_status()
        render_chain()

    def update_filters() -> None:
        strike_range["min"] = float(min_strike.value or 0.0)
        strike_range["max"] = float(max_strike.value or 0.0)
        if strike_range["min"] > strike_range["max"]:
            strike_range["min"], strike_range["max"] = (strike_range["max"], strike_range["min"])
        min_strike.value = strike_range["min"]
        max_strike.value = strike_range["max"]
        render_chain()

    async def on_expiry_change(event: Any) -> None:
        await load_expiry_chain(event.value)

    expiry_select.on_value_change(on_expiry_change)
    min_strike.on_value_change(lambda _: update_filters())
    max_strike.on_value_change(lambda _: update_filters())

    def on_strategy_change(event: Any) -> None:
        selected_strategy["value"] = event.value
        render_selection()

    strategy_select.on_value_change(on_strategy_change)
    sync_status()
    render_selection()
    render_chain()

View File

@@ -1,80 +1,500 @@
from __future__ import annotations

import logging
from datetime import datetime, timezone
from decimal import Decimal

from fastapi import Request
from fastapi.responses import RedirectResponse
from nicegui import ui

from app.components import PortfolioOverview
from app.domain.portfolio_math import resolve_portfolio_spot_from_quote
from app.models.ltv_history import LtvHistoryRepository
from app.models.workspace import WORKSPACE_COOKIE, get_workspace_repository
from app.pages.common import (
    dashboard_page,
    quick_recommendations,
    recommendation_style,
    split_page_panes,
    strategy_catalog,
)
from app.services.alerts import AlertService, build_portfolio_alert_context
from app.services.ltv_history import LtvHistoryChartModel, LtvHistoryService
from app.services.runtime import get_data_service
from app.services.storage_costs import calculate_total_storage_cost
from app.services.turnstile import load_turnstile_settings

logger = logging.getLogger(__name__)

_DEFAULT_CASH_BUFFER = 18_500.0
def _resolve_overview_spot(
    config, quote: dict[str, object], *, fallback_symbol: str | None = None
) -> tuple[float, str, str]:
    return resolve_portfolio_spot_from_quote(config, quote, fallback_symbol=fallback_symbol)


def _format_timestamp(value: str | None) -> str:
    if not value:
        return "Unavailable"
    try:
        timestamp = datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:
        return value
    return timestamp.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")


def _alert_badge_classes(severity: str) -> str:
    return {
        "critical": "rounded-full bg-rose-100 px-3 py-1 text-xs font-semibold text-rose-700 dark:bg-rose-500/15 dark:text-rose-300",
        "warning": "rounded-full bg-amber-100 px-3 py-1 text-xs font-semibold text-amber-700 dark:bg-amber-500/15 dark:text-amber-300",
        "ok": "rounded-full bg-emerald-100 px-3 py-1 text-xs font-semibold text-emerald-700 dark:bg-emerald-500/15 dark:text-emerald-300",
    }.get(severity, "rounded-full bg-slate-100 px-3 py-1 text-xs font-semibold text-slate-700")


def _ltv_chart_options(model: LtvHistoryChartModel) -> dict:
    return {
        "tooltip": {"trigger": "axis", "valueFormatter": "function (value) { return value + '%'; }"},
        "legend": {"data": ["LTV", "Margin threshold"]},
        "xAxis": {"type": "category", "data": list(model.labels)},
        "yAxis": {"type": "value", "name": "LTV %", "axisLabel": {"formatter": "{value}%"}},
        "series": [
            {
                "name": "LTV",
                "type": "line",
                "smooth": True,
                "data": list(model.ltv_values),
                "lineStyle": {"width": 3},
            },
            {
                "name": "Margin threshold",
                "type": "line",
                "data": list(model.threshold_values),
                "lineStyle": {"type": "dashed", "width": 2},
                "symbol": "none",
            },
        ],
    }


def _render_workspace_recovery(title: str, message: str) -> None:
    with ui.column().classes("mx-auto mt-24 w-full max-w-2xl gap-6 px-6 text-center"):
        ui.icon("folder_off").classes("mx-auto text-6xl text-slate-400")
        ui.label(title).classes("text-3xl font-bold text-slate-900 dark:text-slate-50")
        ui.label(message).classes("text-base text-slate-500 dark:text-slate-400")
        with ui.row().classes("mx-auto gap-3"):
            ui.link("Get started", "/").classes(
                "rounded-lg bg-slate-900 px-5 py-3 text-sm font-semibold text-white no-underline dark:bg-slate-100 dark:text-slate-900"
            )
            ui.link("Go to welcome page", "/").classes(
                "rounded-lg border border-slate-300 px-5 py-3 text-sm font-semibold text-slate-700 no-underline dark:border-slate-700 dark:text-slate-200"
            )
@ui.page("/")
def welcome_page(request: Request):
    repo = get_workspace_repository()
    workspace_id = request.cookies.get(WORKSPACE_COOKIE, "")
    if workspace_id and repo.workspace_exists(workspace_id):
        return RedirectResponse(url=f"/{workspace_id}", status_code=307)
    captcha_error = request.query_params.get("captcha_error") == "1"
    with ui.column().classes("mx-auto mt-24 w-full max-w-3xl gap-8 px-6"):
        with ui.card().classes(
            "w-full rounded-3xl border border-slate-200 bg-white p-8 shadow-sm dark:border-slate-800 dark:bg-slate-900"
        ):
            ui.label("Vault Dashboard").classes("text-sm font-semibold uppercase tracking-[0.2em] text-sky-600")
            ui.label("Create a private workspace URL").classes("text-4xl font-bold text-slate-900 dark:text-slate-50")
            ui.label(
                "Start with a workspace-scoped overview and settings area. Your portfolio defaults are stored server-side and your browser keeps a workspace cookie for quick return visits."
            ).classes("text-base text-slate-500 dark:text-slate-400")
            if captcha_error:
                ui.label("CAPTCHA verification failed. Please retry the Turnstile challenge.").classes(
                    "rounded-lg border border-rose-200 bg-rose-50 px-4 py-3 text-sm font-medium text-rose-700 dark:border-rose-900/60 dark:bg-rose-950/30 dark:text-rose-300"
                )
            with ui.row().classes("items-center gap-4 pt-4"):
                turnstile = load_turnstile_settings()
                if turnstile.uses_test_keys:
                    ui.html("""<form method="post" action="/workspaces/bootstrap" class="flex items-center gap-4">
<input type="hidden" name="cf-turnstile-response" value="test-token" />
<button type="submit" class="rounded-lg bg-slate-900 px-5 py-3 text-sm font-semibold text-white no-underline dark:bg-slate-100 dark:text-slate-900">Get started</button>
</form>""")
                else:
                    ui.add_body_html(
                        '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>'
                    )
                    ui.html(f"""<form method="post" action="/workspaces/bootstrap" class="flex items-center gap-4">
<div class="cf-turnstile" data-sitekey="{turnstile.site_key}"></div>
<button type="submit" class="rounded-lg bg-slate-900 px-5 py-3 text-sm font-semibold text-white no-underline dark:bg-slate-100 dark:text-slate-900">Get started</button>
</form>""")
            ui.label("You can always create a fresh workspace later if a link is lost.").classes(
                "text-sm text-slate-500 dark:text-slate-400"
            )
@ui.page("/{workspace_id}")
@ui.page("/{workspace_id}/overview")
async def overview_page(workspace_id: str) -> None:
    repo = get_workspace_repository()
    if not repo.workspace_exists(workspace_id):
        return RedirectResponse(url="/", status_code=307)
    config = repo.load_portfolio_config(workspace_id)
    data_service = get_data_service()
    underlying = config.underlying or "GLD"
    symbol = underlying
    quote = await data_service.get_quote(symbol)
    overview_spot_price, overview_source, overview_updated_at = _resolve_overview_spot(
        config, quote, fallback_symbol=symbol
    )
    portfolio = build_portfolio_alert_context(
        config,
        spot_price=overview_spot_price,
        source=overview_source,
        updated_at=overview_updated_at,
    )
    # Fetch basis data for GLD/GC=F comparison
    try:
        basis_data = await data_service.get_basis_data()
    except Exception:
        logger.exception("Failed to fetch basis data")
        basis_data = None
    configured_gold_value = float(config.gold_value or 0.0)
    portfolio["cash_buffer"] = max(float(portfolio["gold_value"]) - configured_gold_value, 0.0) + _DEFAULT_CASH_BUFFER
    portfolio["hedge_budget"] = float(config.monthly_budget)
    # Calculate storage costs for positions
    positions = config.positions
    current_values: dict[str, Decimal] = {}
    for pos in positions:
        # Use entry value as proxy for current value (would need live prices for accurate calc)
        current_values[str(pos.id)] = pos.entry_value
    total_annual_storage_cost = calculate_total_storage_cost(positions, current_values)
    portfolio["annual_storage_cost"] = float(total_annual_storage_cost)
    portfolio["storage_cost_pct"] = (
        (float(total_annual_storage_cost) / float(portfolio["gold_value"]) * 100)
        if portfolio["gold_value"] > 0
        else 0.0
    )
    alert_status = AlertService().evaluate(config, portfolio)
    ltv_history_service = LtvHistoryService(repository=LtvHistoryRepository(base_path=repo.base_path))
    ltv_history_notice: str | None = None
    try:
        ltv_history = ltv_history_service.record_workspace_snapshot(workspace_id, portfolio)
        ltv_chart_models = tuple(
            ltv_history_service.chart_model(
                ltv_history,
                days=days,
                current_margin_threshold=config.margin_threshold,
            )
            for days in (7, 30, 90)
        )
        ltv_history_csv = ltv_history_service.export_csv(ltv_history) if ltv_history else ""
    except Exception:
        logger.exception("Failed to prepare LTV history for workspace %s", workspace_id)
        ltv_history = []
        ltv_chart_models = ()
        ltv_history_csv = ""
        ltv_history_notice = "Historical LTV is temporarily unavailable due to a storage error."
    display_mode = portfolio.get("display_mode", "XAU")
    if portfolio["quote_source"] == "configured_entry_price":
        if display_mode == "GLD":
            quote_status = "Live quote source: configured entry price fallback (GLD shares) · Last updated Unavailable"
        else:
            quote_status = "Live quote source: configured entry price fallback · Last updated Unavailable"
    else:
        if display_mode == "GLD":
            quote_status = (
                f"Live quote source: {portfolio['quote_source']} (GLD share price) · "
                f"Last updated {_format_timestamp(str(portfolio['quote_updated_at']))}"
            )
        else:
            quote_status = (
                f"Live quote source: {portfolio['quote_source']} · "
                f"GLD share quote converted to ozt-equivalent spot · "
                f"Last updated {_format_timestamp(str(portfolio['quote_updated_at']))}"
            )
    if display_mode == "GLD":
        spot_caption = (
            f"{symbol} share price via {portfolio['quote_source']}"
            if portfolio["quote_source"] != "configured_entry_price"
            else "Configured GLD share entry price"
        )
    else:
        spot_caption = (
            f"{symbol} share quote converted to USD/ozt via {portfolio['quote_source']}"
            if portfolio["quote_source"] != "configured_entry_price"
            else "Configured entry price fallback in USD/ozt"
        )
    with dashboard_page(
        "Overview",
        f"Portfolio health, LTV risk, and quick strategy guidance for the current {underlying}-backed loan.",
        "overview",
        workspace_id=workspace_id,
    ):
        with ui.row().classes("w-full items-center justify-between gap-4 max-md:flex-col max-md:items-start"):
            ui.label(quote_status).classes("text-sm text-slate-500 dark:text-slate-400")
            ui.label(
                f"Active underlying: {underlying} · Configured collateral baseline: ${config.gold_value:,.0f} · Loan ${config.loan_amount:,.0f}"
            ).classes("text-sm text-slate-500 dark:text-slate-400")
        left_pane, right_pane = split_page_panes(
            left_testid="overview-left-pane",
            right_testid="overview-right-pane",
        )
        with left_pane:
            # GLD/GC=F Basis Card
            with ui.card().classes(
                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
            ):
                with ui.row().classes("w-full items-center justify-between gap-3"):
                    ui.label("GLD/GC=F Basis").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
                    if basis_data:
                        basis_badge_class = {
                            "green": "rounded-full bg-emerald-100 px-3 py-1 text-xs font-semibold text-emerald-700 dark:bg-emerald-500/15 dark:text-emerald-300",
                            "yellow": "rounded-full bg-amber-100 px-3 py-1 text-xs font-semibold text-amber-700 dark:bg-amber-500/15 dark:text-amber-300",
                            "red": "rounded-full bg-rose-100 px-3 py-1 text-xs font-semibold text-rose-700 dark:bg-rose-500/15 dark:text-rose-300",
                        }.get(
                            basis_data["basis_status"],
                            "rounded-full bg-slate-100 px-3 py-1 text-xs font-semibold text-slate-700",
                        )
                        ui.label(f"{basis_data['basis_label']} ({basis_data['basis_bps']:+.1f} bps)").classes(
                            basis_badge_class
                        )
                if basis_data:
                    with ui.grid(columns=2).classes("w-full gap-4 mt-4"):
                        # GLD Implied Spot
                        with ui.card().classes(
                            "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
                        ):
                            ui.label("GLD Implied Spot").classes(
                                "text-sm font-medium text-slate-500 dark:text-slate-400"
                            )
                            ui.label(f"${basis_data['gld_implied_spot']:,.2f}/oz").classes(
                                "text-2xl font-bold text-slate-900 dark:text-slate-50"
                            )
                            ui.label(
                                f"GLD ${basis_data['gld_price']:.2f} ÷ {basis_data['gld_ounces_per_share']:.4f} oz/share"
                            ).classes("text-xs text-slate-500 dark:text-slate-400")
                        # GC=F Adjusted
                        with ui.card().classes(
                            "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
                        ):
                            ui.label("GC=F Adjusted").classes("text-sm font-medium text-slate-500 dark:text-slate-400")
                            ui.label(f"${basis_data['gc_f_adjusted']:,.2f}/oz").classes(
                                "text-2xl font-bold text-slate-900 dark:text-slate-50"
                            )
                            ui.label(
                                f"GC=F ${basis_data['gc_f_price']:.2f} - ${basis_data['contango_estimate']:.0f} contango"
                            ).classes("text-xs text-slate-500 dark:text-slate-400")
                    # Basis explanation and after-hours notice
                    with ui.row().classes("w-full items-start gap-2 mt-4"):
                        ui.icon("info", size="xs").classes("text-slate-400 mt-0.5")
                        ui.label(
                            "Basis shows the premium/discount between GLD-implied gold and futures-adjusted spot. "
                            "Green < 25 bps (normal), Yellow 25-50 bps (elevated), Red > 50 bps (unusual)."
                        ).classes("text-xs text-slate-500 dark:text-slate-400")
                    if basis_data["after_hours"]:
                        with ui.row().classes("w-full items-start gap-2 mt-2"):
                            ui.icon("schedule", size="xs").classes("text-amber-500 mt-0.5")
                            ui.label(
                                f"{basis_data['after_hours_note']} · GLD: {_format_timestamp(basis_data['gld_updated_at'])} · "
                                f"GC=F: {_format_timestamp(basis_data['gc_f_updated_at'])}"
                            ).classes("text-xs text-amber-700 dark:text-amber-300")
                    # Warning for elevated basis
                    if basis_data["basis_status"] == "red":
                        ui.label(
                            f"⚠️ Elevated basis detected: {basis_data['basis_bps']:+.1f} bps. "
                            "This may indicate after-hours pricing gaps, physical stress, or arbitrage disruption."
                        ).classes("text-sm font-medium text-rose-700 dark:text-rose-300 mt-3")
                else:
                    ui.label("Basis data temporarily unavailable").classes("text-sm text-slate-500 dark:text-slate-400")
            with ui.card().classes(
                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
            ):
                with ui.row().classes("w-full items-center justify-between gap-3"):
                    ui.label("Alert Status").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
                    ui.label(alert_status.severity.upper()).classes(_alert_badge_classes(alert_status.severity))
                ui.label(alert_status.message).classes("text-sm text-slate-600 dark:text-slate-300")
                ui.label(
                    f"Warning at {alert_status.warning_threshold:.0%} · Critical at {alert_status.critical_threshold:.0%} · "
                    f"Email alerts {'enabled' if alert_status.email_alerts_enabled else 'disabled'}"
                ).classes("text-sm text-slate-500 dark:text-slate-400")
                if alert_status.history_notice:
                    ui.label(alert_status.history_notice).classes("text-sm text-amber-700 dark:text-amber-300")
                if alert_status.history:
                    latest = alert_status.history[0]
                    ui.label(
                        f"Latest alert logged {_format_timestamp(latest.updated_at)} at collateral spot ${latest.spot_price:,.2f}"
                    ).classes("text-xs text-slate-500 dark:text-slate-400")
            with ui.card().classes(
                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
            ):
                ui.label("Portfolio Snapshot").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
                # Display mode-aware labels
                if display_mode == "GLD":
                    spot_label = "GLD Share Price"
                    spot_unit = "/share"
                    margin_label = "Margin Call Share Price"
                else:
                    spot_label = "Collateral Spot Price"
                    spot_unit = "/oz"
                    margin_label = "Margin Call Price"
                with ui.grid(columns=1).classes("w-full gap-4 sm:grid-cols-2 lg:grid-cols-1 xl:grid-cols-2"):
                    summary_cards = [
                        (
                            spot_label,
                            f"${portfolio['spot_price']:,.2f}{spot_unit}",
                            spot_caption,
                        ),
                        (
                            margin_label,
                            f"${portfolio['margin_call_price']:,.2f}",
                            "Implied trigger level from persisted portfolio settings",
                        ),
                        (
                            "Cash Buffer",
                            f"${portfolio['cash_buffer']:,.0f}",
                            "Base liquidity plus unrealized gain cushion vs configured baseline",
                        ),
                        (
                            "Hedge Budget",
                            f"${portfolio['hedge_budget']:,.0f}",
                            "Monthly budget from saved settings",
                        ),
                        (
                            "Storage Costs",
                            f"${portfolio['annual_storage_cost']:,.2f}/yr ({portfolio['storage_cost_pct']:.2f}%)",
                            "Annual vault storage for physical positions (GLD expense ratio baked into share price)",
                        ),
                    ]
                    for title, value, caption in summary_cards:
                        with ui.card().classes(
                            "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
                        ):
                            ui.label(title).classes("text-sm font-medium text-slate-500 dark:text-slate-400")
                            ui.label(value).classes("text-2xl font-bold text-slate-900 dark:text-slate-50")
                            ui.label(caption).classes("text-sm text-slate-500 dark:text-slate-400")
            with ui.card().classes(f"w-full rounded-2xl border shadow-sm {recommendation_style('info')}"):
                ui.label("Quick Strategy Recommendations").classes(
                    "text-lg font-semibold text-slate-900 dark:text-slate-100"
                )
                for rec in quick_recommendations(portfolio):
                    with ui.card().classes(f"rounded-xl border shadow-none {recommendation_style(rec['tone'])}"):
                        ui.label(rec["title"]).classes("text-base font-semibold text-slate-900 dark:text-slate-100")
                        ui.label(rec["summary"]).classes("text-sm text-slate-600 dark:text-slate-300")
        with right_pane:
            portfolio_view = PortfolioOverview(margin_call_ltv=float(portfolio["margin_call_ltv"]))
            portfolio_view.update(portfolio)
            with ui.row().classes("w-full gap-6 max-xl:flex-col"):
                with ui.card().classes(
                    "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
                ):
                    with ui.row().classes("w-full items-center justify-between"):
                        ui.label("Current LTV Status").classes(
                            "text-lg font-semibold text-slate-900 dark:text-slate-100"
                        )
                        ui.label(f"Threshold {float(portfolio['margin_call_ltv']) * 100:.0f}%").classes(
                            "rounded-full bg-rose-100 px-3 py-1 text-xs font-semibold text-rose-700 dark:bg-rose-500/15 dark:text-rose-300"
                        )
                    ui.linear_progress(
                        value=float(portfolio["ltv_ratio"]) / max(float(portfolio["margin_call_ltv"]), 0.01),
                        show_value=False,
                    ).props("color=warning track-color=grey-3 rounded")
                    ui.label(
                        f"Current LTV is {float(portfolio['ltv_ratio']) * 100:.1f}% with a margin buffer of {(float(portfolio['margin_call_ltv']) - float(portfolio['ltv_ratio'])) * 100:.1f} percentage points."
                    ).classes("text-sm text-slate-600 dark:text-slate-300")
                    ui.label(
                        "Warning: if GLD approaches the margin-call price, collateral remediation or hedge monetization will be required."
                    ).classes("text-sm font-medium text-amber-700 dark:text-amber-300")
                with ui.card().classes(
                    "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
                ):
                    with ui.row().classes(
                        "w-full items-center justify-between gap-3 max-sm:flex-col max-sm:items-start"
                    ):
                        with ui.column().classes("gap-1"):
                            ui.label("Historical LTV").classes(
                                "text-lg font-semibold text-slate-900 dark:text-slate-100"
                            )
                            ui.label(
                                "Stored workspace snapshots show how LTV trended against the current margin threshold over 7, 30, and 90 day windows."
                            ).classes("text-sm text-slate-500 dark:text-slate-400")
                        if ltv_history:
                            ui.button(
                                "Export CSV",
                                icon="download",
                                on_click=lambda: ui.download.content(
                                    ltv_history_csv,
                                    filename=f"{workspace_id}-ltv-history.csv",
                                    media_type="text/csv",
                                ),
                            ).props("outline color=primary")
                    if ltv_history_notice:
                        ui.label(ltv_history_notice).classes("text-sm text-amber-700 dark:text-amber-300")
                    elif ltv_history:
                        with ui.grid(columns=1).classes("w-full gap-4 xl:grid-cols-3"):
                            for chart_model, chart_testid in zip(
                                ltv_chart_models,
                                ("ltv-history-chart-7d", "ltv-history-chart-30d", "ltv-history-chart-90d"),
                                strict=True,
                            ):
                                with ui.card().classes(
                                    "rounded-xl border border-slate-200 bg-slate-50 p-4 shadow-none dark:border-slate-800 dark:bg-slate-950"
                                ):
                                    ui.label(chart_model.title).classes(
                                        "text-base font-semibold text-slate-900 dark:text-slate-100"
                                    )
                                    ui.echart(_ltv_chart_options(chart_model)).props(
                                        f"data-testid={chart_testid}"
                                    ).classes("h-56 w-full")
                    else:
                        ui.label("No LTV snapshots recorded yet.").classes("text-sm text-slate-500 dark:text-slate-400")
            with ui.card().classes(
                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
            ):
                ui.label("Recent Alert History").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
                if alert_status.history:
                    for event in alert_status.history[:5]:
                        with ui.row().classes(
                            "w-full items-start justify-between gap-4 border-b border-slate-100 py-3 last:border-b-0 dark:border-slate-800"
                        ):
                            with ui.column().classes("gap-1"):
                                ui.label(event.message).classes(
                                    "text-sm font-medium text-slate-900 dark:text-slate-100"
                                )
                                ui.label(
                                    f"Logged {_format_timestamp(event.updated_at)} · Spot ${event.spot_price:,.2f} · LTV {event.ltv_ratio:.1%}"
                                ).classes("text-xs text-slate-500 dark:text-slate-400")
                            ui.label(event.severity.upper()).classes(_alert_badge_classes(event.severity))
                elif alert_status.history_notice:
                    ui.label(alert_status.history_notice).classes("text-sm text-amber-700 dark:text-amber-300")
                else:
                    ui.label(
                        "No alert history yet. Alerts will be logged once the warning threshold is crossed."
                    ).classes("text-sm text-slate-500 dark:text-slate-400")
            with ui.card().classes(
                "w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
@@ -90,10 +510,3 @@ def overview_page() -> None:
                        ui.label(f"${strategy['estimated_cost']:.2f}/oz").classes(
                            "rounded-full bg-sky-100 px-3 py-1 text-xs font-semibold text-sky-700 dark:bg-sky-500/15 dark:text-sky-300"
                        )


@@ -1,73 +1,290 @@
from __future__ import annotations
import logging
from datetime import date
from decimal import Decimal
from uuid import uuid4
from fastapi.responses import RedirectResponse
from nicegui import ui
from app.domain.conversions import get_display_mode_options
from app.models.portfolio import PortfolioConfig
from app.models.position import Position
from app.models.workspace import get_workspace_repository
from app.pages.common import dashboard_page, split_page_panes
from app.services.alerts import AlertService, build_portfolio_alert_context
from app.services.settings_status import save_status_text
from app.services.storage_costs import get_default_storage_cost_for_underlying
logger = logging.getLogger(__name__)
def _alert_badge_classes(severity: str) -> str:
return {
"critical": "rounded-full bg-rose-100 px-3 py-1 text-xs font-semibold text-rose-700 dark:bg-rose-500/15 dark:text-rose-300",
"warning": "rounded-full bg-amber-100 px-3 py-1 text-xs font-semibold text-amber-700 dark:bg-amber-500/15 dark:text-amber-300",
"ok": "rounded-full bg-emerald-100 px-3 py-1 text-xs font-semibold text-emerald-700 dark:bg-emerald-500/15 dark:text-emerald-300",
}.get(severity, "rounded-full bg-slate-100 px-3 py-1 text-xs font-semibold text-slate-700")
def _save_card_status_text(
last_saved_config: PortfolioConfig,
*,
preview_config: PortfolioConfig | None = None,
invalid: bool = False,
save_failed: bool = False,
) -> str:
base = save_status_text(last_saved_config).replace("Saved:", "Last saved:", 1)
if save_failed:
return f"Save failed — {base}"
if invalid:
return f"Unsaved invalid changes — {base}"
if preview_config is not None and preview_config.to_dict() != last_saved_config.to_dict():
return f"Unsaved changes — {base}"
return base
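The precedence in `_save_card_status_text` (failed beats invalid beats unsaved beats clean) can be exercised without the app; a minimal sketch, stubbing the externally defined `save_status_text` result as a plain string:

```python
def status_text(base: str, *, dirty: bool = False, invalid: bool = False, save_failed: bool = False) -> str:
    # Mirrors _save_card_status_text's precedence: failed > invalid > unsaved > clean.
    # `base` stands in for save_status_text(last_saved_config), which is assumed
    # to start with "Saved:".
    base = base.replace("Saved:", "Last saved:", 1)
    if save_failed:
        return f"Save failed — {base}"
    if invalid:
        return f"Unsaved invalid changes — {base}"
    if dirty:
        return f"Unsaved changes — {base}"
    return base

print(status_text("Saved: gold=$1,000,000"))                 # Last saved: gold=$1,000,000
print(status_text("Saved: gold=$1,000,000", save_failed=True))
```

The `dirty` flag plays the role of the `preview_config.to_dict() != last_saved_config.to_dict()` comparison in the real helper.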
def _render_workspace_recovery() -> None:
with ui.column().classes("mx-auto mt-24 w-full max-w-2xl gap-6 px-6 text-center"):
ui.icon("folder_off").classes("mx-auto text-6xl text-slate-400")
ui.label("Workspace not found").classes("text-3xl font-bold text-slate-900 dark:text-slate-50")
ui.label(
"The requested workspace is unavailable. Start a new workspace or return to the welcome page."
).classes("text-base text-slate-500 dark:text-slate-400")
with ui.row().classes("mx-auto gap-3"):
ui.link("Get started", "/").classes(
"rounded-lg bg-slate-900 px-5 py-3 text-sm font-semibold text-white no-underline dark:bg-slate-100 dark:text-slate-900"
)
ui.link("Go to welcome page", "/").classes(
"rounded-lg border border-slate-300 px-5 py-3 text-sm font-semibold text-slate-700 no-underline dark:border-slate-700 dark:text-slate-200"
)
@ui.page("/{workspace_id}/settings")
def settings_page(workspace_id: str) -> None:
"""Settings page with workspace-scoped persistent portfolio configuration."""
workspace_repo = get_workspace_repository()
if not workspace_repo.workspace_exists(workspace_id):
return RedirectResponse(url="/", status_code=307)
config = workspace_repo.load_portfolio_config(workspace_id)
last_saved_config = config
alert_service = AlertService()
syncing_entry_basis = False
def as_positive_float(value: object) -> float | None:
try:
parsed = float(value)
except (TypeError, ValueError):
return None
return parsed if parsed > 0 else None
def as_non_negative_float(value: object) -> float | None:
try:
parsed = float(value)
except (TypeError, ValueError):
return None
return parsed if parsed >= 0 else None
def display_number_input_value(value: object) -> str:
try:
parsed = float(value)
except (TypeError, ValueError):
return ""
if parsed.is_integer():
return str(int(parsed))
return str(parsed)
def as_positive_int(value: object) -> int | None:
try:
parsed = float(value)
except (TypeError, ValueError):
return None
if parsed < 1 or not parsed.is_integer():
return None
return int(parsed)
def build_preview_config() -> PortfolioConfig:
parsed_loan_amount = as_non_negative_float(loan_amount.value)
if parsed_loan_amount is None:
raise ValueError("Loan amount must be zero or greater")
parsed_refresh_interval = as_positive_int(refresh_interval.value)
if parsed_refresh_interval is None:
raise ValueError("Refresh interval must be a whole number of seconds")
return PortfolioConfig(
gold_value=as_positive_float(gold_value.value),
entry_price=as_positive_float(entry_price.value),
gold_ounces=as_positive_float(gold_ounces.value),
entry_basis_mode=str(entry_basis_mode.value), # type: ignore[arg-type]
loan_amount=parsed_loan_amount,
margin_threshold=float(margin_threshold.value),
monthly_budget=float(monthly_budget.value),
ltv_warning=float(ltv_warning.value),
primary_source=str(primary_source.value),
fallback_source=str(fallback_source.value),
refresh_interval=parsed_refresh_interval,
underlying=str(underlying.value),
display_mode=str(display_mode.value), # type: ignore[arg-type]
volatility_spike=float(vol_alert.value),
spot_drawdown=float(price_alert.value),
email_alerts=bool(email_alerts.value),
)
with dashboard_page(
"Settings",
"Configure portfolio assumptions, collateral entry basis, preferred market data inputs, and alert thresholds.",
"settings",
workspace_id=workspace_id,
):
left_pane, right_pane = split_page_panes(
left_testid="settings-left-pane",
right_testid="settings-right-pane",
)
with left_pane:
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Portfolio Parameters").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(
"Choose whether collateral entry is keyed by start value or by gold weight. The paired field is derived automatically from the entry price."
).classes("text-sm text-slate-500 dark:text-slate-400")
entry_basis_mode = ui.select(
{"value_price": "Start value + entry price", "weight": "Gold weight + entry price"},
value=config.entry_basis_mode,
label="Collateral entry basis",
).classes("w-full")
entry_price = ui.number(
"Entry price ($/oz)",
value=config.entry_price,
min=0.01,
step=0.01,
).classes("w-full")
gold_value = ui.number(
"Collateral start value ($)",
value=config.gold_value,
min=0.01,
step=1000,
).classes("w-full")
gold_ounces = ui.number(
"Gold weight (oz)",
value=config.gold_ounces,
min=0.0001,
step=0.01,
).classes("w-full")
loan_amount = (
ui.input(
"Loan amount ($)",
value=display_number_input_value(config.loan_amount),
)
.props("type=number min=0 step=1000")
.classes("w-full")
)
margin_threshold = ui.number(
"Margin call LTV threshold",
value=config.margin_threshold,
min=0.1,
max=0.95,
step=0.01,
).classes("w-full")
monthly_budget = ui.number(
"Monthly hedge budget ($)",
value=config.monthly_budget,
min=0,
step=500,
).classes("w-full")
derived_hint = ui.label().classes("text-sm text-slate-500 dark:text-slate-400")
with ui.row().classes("w-full gap-2 mt-4 rounded-lg bg-slate-50 p-4 dark:bg-slate-800"):
ui.label("Current LTV:").classes("font-medium")
ltv_display = ui.label()
ui.label("Margin buffer:").classes("ml-4 font-medium")
buffer_display = ui.label()
ui.label("Margin call at:").classes("ml-4 font-medium")
margin_price_display = ui.label()
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Alert Thresholds").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ltv_warning = ui.number(
"LTV warning level",
value=config.ltv_warning,
min=0.1,
max=0.95,
step=0.01,
).classes("w-full")
vol_alert = ui.number(
"Volatility spike alert",
value=config.volatility_spike,
min=0.01,
max=2.0,
step=0.01,
).classes("w-full")
price_alert = ui.number(
"Spot drawdown alert (%)",
value=config.spot_drawdown,
min=0.1,
max=50.0,
step=0.5,
).classes("w-full")
email_alerts = ui.switch("Email alerts", value=config.email_alerts)
ui.label("Defaults remain warn at 70% and critical at 75% unless you override them.").classes(
"text-sm text-slate-500 dark:text-slate-400"
)
with right_pane:
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Display Mode").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(
"Choose how to view your portfolio: GLD shares (financial instrument view) or physical gold ounces."
).classes("text-sm text-slate-500 dark:text-slate-400 mb-3")
display_mode = ui.select(
{
"GLD": "GLD Shares (show share prices directly)",
"XAU": "Physical Gold (oz) (convert to gold ounces)",
},
value=config.display_mode,
label="Display mode",
).classes("w-full")
ui.separator().classes("my-4")
ui.label("Data Sources").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
underlying = ui.select(
{
"GLD": "SPDR Gold Shares ETF (live data via yfinance)",
"GC=F": "Gold Futures (coming soon)",
},
value=config.underlying,
label="Underlying instrument",
).classes("w-full")
display_mode = ui.select(
get_display_mode_options(),
value=config.display_mode,
label="Display Mode",
).classes("w-full")
ui.label("Choose how to display positions and collateral values.").classes(
"text-xs text-slate-500 dark:text-slate-400 -mt-2"
)
primary_source = ui.select(
["yfinance", "ibkr", "alpaca"],
value=config.primary_source,
@@ -82,114 +299,452 @@ def settings_page():
"Refresh interval (seconds)",
value=config.refresh_interval,
min=1,
step=1,
).classes("w-full")
# Position Management Card
"""Update calculated displays when values change."""
try:
gold = gold_value.value or 1 # Avoid division by zero
loan = loan_amount.value or 0
margin = margin_threshold.value or 0.75
ltv = (loan / gold) * 100
buffer = (margin - loan / gold) * 100
margin_price = loan / margin if margin > 0 else 0
ltv_display.set_text(f"{ltv:.1f}%")
buffer_display.set_text(f"{buffer:.1f}%")
margin_price_display.set_text(f"${margin_price:,.2f}")
except Exception:
pass # Ignore calculation errors during editing
# Connect update function to value changes
gold_value.on_value_change(update_calculations)
loan_amount.on_value_change(update_calculations)
margin_threshold.on_value_change(update_calculations)
with ui.row().classes("w-full gap-6 max-lg:flex-col"):
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Portfolio Positions").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.label(
"Manage individual position entries. Each position tracks its own entry date and price."
).classes("text-sm text-slate-500 dark:text-slate-400")
# Position list container
position_list_container = ui.column().classes("w-full gap-2 mt-3")
# Add position form (hidden by default)
with (
ui.dialog() as add_position_dialog,
ui.card().classes(
"w-full max-w-md rounded-2xl border border-slate-200 bg-white p-6 shadow-lg dark:border-slate-800 dark:bg-slate-900"
),
):
ui.label("Add New Position").classes(
"text-lg font-semibold text-slate-900 dark:text-slate-100 mb-4"
)
pos_underlying = ui.select(
{
"GLD": "SPDR Gold Shares ETF",
"XAU": "Physical Gold (oz)",
"GC=F": "Gold Futures",
},
value="GLD",
label="Underlying",
).classes("w-full")
def update_storage_cost_default() -> None:
"""Update storage cost defaults based on underlying selection."""
underlying = str(pos_underlying.value)
default_basis, default_period = get_default_storage_cost_for_underlying(underlying)
if default_basis is not None:
pos_storage_cost_basis.value = float(default_basis)
pos_storage_cost_period.value = default_period or "annual"
else:
pos_storage_cost_basis.value = 0.0
pos_storage_cost_period.value = "annual"
pos_underlying.on_value_change(lambda _: update_storage_cost_default())
pos_quantity = ui.number(
"Quantity",
value=100.0,
min=0.0001,
step=0.01,
).classes("w-full")
pos_unit = ui.select(
{"oz": "Troy Ounces", "shares": "Shares", "g": "Grams", "contracts": "Contracts"},
value="oz",
label="Unit",
).classes("w-full")
pos_entry_price = ui.number(
"Entry Price ($/unit)",
value=2150.0,
min=0.01,
step=0.01,
).classes("w-full")
with ui.row().classes("w-full items-center gap-2"):
ui.label("Entry Date").classes("text-sm font-medium")
pos_entry_date = (
ui.date(
value=date.today().isoformat(),
)
.classes("w-full")
.props("stack-label")
)
pos_notes = ui.textarea(
label="Notes (optional)",
placeholder="Add notes about this position...",
).classes("w-full")
ui.separator().classes("my-3")
ui.label("Storage Costs (optional)").classes(
"text-sm font-semibold text-slate-700 dark:text-slate-300"
)
ui.label("For physical gold (XAU), defaults to 0.12% annual vault storage.").classes(
"text-xs text-slate-500 dark:text-slate-400 mb-2"
)
pos_storage_cost_basis = ui.number(
"Storage cost (% per year or fixed $)",
value=0.0,
min=0.0,
step=0.01,
).classes("w-full")
pos_storage_cost_period = ui.select(
{"annual": "Annual", "monthly": "Monthly"},
value="annual",
label="Cost period",
).classes("w-full")
ui.separator().classes("my-3")
ui.label("Premium & Spread (optional)").classes(
"text-sm font-semibold text-slate-700 dark:text-slate-300"
)
ui.label("For physical gold, accounts for dealer markup and bid/ask spread.").classes(
"text-xs text-slate-500 dark:text-slate-400 mb-2"
)
pos_purchase_premium = ui.number(
"Purchase premium over spot (%)",
value=0.0,
min=0.0,
max=100.0,
step=0.1,
).classes("w-full")
pos_bid_ask_spread = ui.number(
"Bid/ask spread on exit (%)",
value=0.0,
min=0.0,
max=100.0,
step=0.1,
).classes("w-full")
with ui.row().classes("w-full gap-3 mt-4"):
ui.button("Cancel", on_click=lambda: add_position_dialog.close()).props("outline")
ui.button("Add Position", on_click=lambda: add_position_from_form()).props("color=primary")
def add_position_from_form() -> None:
"""Add a new position from the form."""
try:
underlying = str(pos_underlying.value)
storage_cost_basis_val = float(pos_storage_cost_basis.value)
storage_cost_basis = (
Decimal(str(storage_cost_basis_val)) if storage_cost_basis_val > 0 else None
)
storage_cost_period = str(pos_storage_cost_period.value) if storage_cost_basis else None
purchase_premium_val = float(pos_purchase_premium.value)
purchase_premium = (
Decimal(str(purchase_premium_val / 100)) if purchase_premium_val > 0 else None
)
bid_ask_spread_val = float(pos_bid_ask_spread.value)
bid_ask_spread = Decimal(str(bid_ask_spread_val / 100)) if bid_ask_spread_val > 0 else None
new_position = Position(
id=uuid4(),
underlying=underlying,
quantity=Decimal(str(pos_quantity.value)),
unit=str(pos_unit.value),
entry_price=Decimal(str(pos_entry_price.value)),
entry_date=date.fromisoformat(str(pos_entry_date.value)),
entry_basis_mode="weight",
purchase_premium=purchase_premium,
bid_ask_spread=bid_ask_spread,
notes=str(pos_notes.value or ""),
storage_cost_basis=storage_cost_basis,
storage_cost_period=storage_cost_period,
)
workspace_repo.add_position(workspace_id, new_position)
add_position_dialog.close()
render_positions()
ui.notify("Position added successfully", color="positive")
except Exception as e:
logger.exception("Failed to add position")
ui.notify(f"Failed to add position: {e}", color="negative")
def render_positions() -> None:
"""Render the list of positions."""
position_list_container.clear()
positions = workspace_repo.list_positions(workspace_id)
if not positions:
with position_list_container:
ui.label("No positions yet. Click 'Add Position' to create one.").classes(
"text-sm text-slate-500 dark:text-slate-400 italic"
)
return
for pos in positions:
with ui.card().classes(
"w-full rounded-lg border border-slate-200 bg-slate-50 p-3 dark:border-slate-700 dark:bg-slate-800"
):
with ui.row().classes("w-full items-start justify-between gap-3"):
with ui.column().classes("gap-1"):
ui.label(f"{pos.underlying} · {float(pos.quantity):,.4f} {pos.unit}").classes(
"text-sm font-medium text-slate-900 dark:text-slate-100"
)
ui.label(
f"Entry: ${float(pos.entry_price):,.2f}/{pos.unit} · Date: {pos.entry_date}"
).classes("text-xs text-slate-500 dark:text-slate-400")
if pos.notes:
ui.label(pos.notes).classes("text-xs text-slate-500 dark:text-slate-400 italic")
ui.label(f"Value: ${float(pos.entry_value):,.2f}").classes(
"text-xs font-semibold text-emerald-600 dark:text-emerald-400"
)
# Show storage cost if configured
if pos.storage_cost_basis is not None:
basis_val = float(pos.storage_cost_basis)
period = pos.storage_cost_period or "annual"
if basis_val < 1:
# Percentage
storage_label = f"{basis_val:.2f}% {period} storage"
else:
# Fixed amount
storage_label = f"${basis_val:,.2f} {period} storage"
ui.label(f"Storage: {storage_label}").classes(
"text-xs text-slate-500 dark:text-slate-400"
)
with ui.row().classes("gap-1"):
ui.button(
icon="delete",
on_click=lambda p=pos: remove_position(p.id),
).props(
"flat dense color=negative size=sm"
).classes("self-start")
def remove_position(position_id) -> None:
"""Remove a position."""
try:
workspace_repo.remove_position(workspace_id, position_id)
render_positions()
ui.notify("Position removed", color="positive")
except Exception as e:
logger.exception("Failed to remove position")
ui.notify(f"Failed to remove position: {e}", color="negative")
with ui.row().classes("w-full mt-3"):
ui.button("Add Position", icon="add", on_click=lambda: add_position_dialog.open()).props(
"color=primary"
)
# Initial render
render_positions()
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Current Alert State").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
with ui.row().classes("w-full items-center justify-between gap-3"):
alert_state_container = ui.row().classes("items-center")
email_state_label = ui.label().classes("text-xs text-slate-500 dark:text-slate-400")
alert_message = ui.label().classes("text-sm text-slate-600 dark:text-slate-300")
alert_history_column = ui.column().classes("w-full gap-2")
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Export / Import").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
ui.select(["json", "csv", "yaml"], value="json", label="Export format").classes("w-full")
ui.switch("Include scenario history", value=True)
ui.switch("Include option selections", value=True)
with ui.row().classes("w-full gap-3 max-sm:flex-col"):
ui.button("Import settings", icon="upload").props("outline color=primary")
ui.button("Export settings", icon="download").props("outline color=primary")
with ui.card().classes(
"w-full rounded-2xl border border-slate-200 bg-white shadow-sm dark:border-slate-800 dark:bg-slate-900"
):
ui.label("Save Workspace Settings").classes("text-lg font-semibold text-slate-900 dark:text-slate-100")
status = ui.label(_save_card_status_text(last_saved_config)).classes(
"text-sm text-slate-500 dark:text-slate-400"
)
ui.button("Save settings", on_click=lambda: save_settings()).props("color=primary")
def apply_entry_basis_mode() -> None:
mode = str(entry_basis_mode.value or "value_price")
if mode == "weight":
gold_value.props("readonly")
gold_ounces.props(remove="readonly")
derived_hint.set_text(
"Gold weight is the editable basis; start value is derived from weight × entry price."
)
else:
gold_ounces.props("readonly")
gold_value.props(remove="readonly")
derived_hint.set_text(
"Start value is the editable basis; gold weight is derived from start value ÷ entry price."
)
def render_alert_state() -> None:
try:
preview_config = build_preview_config()
except (ValueError, TypeError) as exc:
alert_state_container.clear()
with alert_state_container:
ui.label("INVALID").classes(_alert_badge_classes("critical"))
email_state_label.set_text("Fix validation errors to preview alert state")
alert_message.set_text(str(exc))
status.set_text(_save_card_status_text(last_saved_config, invalid=True))
alert_history_column.clear()
return
try:
alert_status = alert_service.evaluate(
preview_config,
build_portfolio_alert_context(
preview_config,
spot_price=float(preview_config.entry_price or 0.0),
source="settings-preview",
updated_at="",
),
persist=False,
)
except Exception:
logger.exception("Settings alert preview failed for workspace %s", workspace_id)
alert_state_container.clear()
with alert_state_container:
ui.label("UNAVAILABLE").classes(_alert_badge_classes("critical"))
email_state_label.set_text("Preview unavailable due to an internal error")
alert_message.set_text(
"Preview unavailable due to an internal error. Last saved settings remain unchanged."
)
status.set_text(_save_card_status_text(last_saved_config, preview_config=preview_config))
alert_history_column.clear()
return
alert_state_container.clear()
with alert_state_container:
ui.label(alert_status.severity.upper()).classes(_alert_badge_classes(alert_status.severity))
email_state_label.set_text(
f"Email alerts {'enabled' if alert_status.email_alerts_enabled else 'disabled'} · Warning {alert_status.warning_threshold:.0%} · Critical {alert_status.critical_threshold:.0%}"
)
alert_message.set_text(alert_status.message)
status.set_text(_save_card_status_text(last_saved_config, preview_config=preview_config))
alert_history_column.clear()
if alert_status.history_notice:
with alert_history_column:
ui.label(alert_status.history_notice).classes("text-sm text-amber-700 dark:text-amber-300")
elif alert_status.history:
for event in alert_status.history[:5]:
with alert_history_column:
with ui.row().classes(
"w-full items-start justify-between gap-3 rounded-lg bg-slate-50 p-3 dark:bg-slate-800"
):
with ui.column().classes("gap-1"):
ui.label(event.message).classes(
"text-sm font-medium text-slate-900 dark:text-slate-100"
)
ui.label(f"Spot ${event.spot_price:,.2f} · LTV {event.ltv_ratio:.1%}").classes(
"text-xs text-slate-500 dark:text-slate-400"
)
ui.label(event.severity.upper()).classes(_alert_badge_classes(event.severity))
else:
with alert_history_column:
ui.label("No alert history yet.").classes("text-sm text-slate-500 dark:text-slate-400")
def update_entry_basis(*_args: object) -> None:
nonlocal syncing_entry_basis
apply_entry_basis_mode()
if syncing_entry_basis:
return
price = as_positive_float(entry_price.value)
if price is None:
update_calculations()
return
syncing_entry_basis = True
try:
mode = str(entry_basis_mode.value or "value_price")
if mode == "weight":
ounces = as_positive_float(gold_ounces.value)
if ounces is not None:
gold_value.value = round(ounces * price, 2)
else:
start_value = as_positive_float(gold_value.value)
if start_value is not None:
gold_ounces.value = round(start_value / price, 6)
finally:
syncing_entry_basis = False
update_calculations()
def update_calculations(*_args: object) -> None:
price = as_positive_float(entry_price.value)
collateral_value = as_positive_float(gold_value.value)
ounces = as_positive_float(gold_ounces.value)
loan = as_non_negative_float(loan_amount.value)
margin = as_positive_float(margin_threshold.value)
if collateral_value is not None and collateral_value > 0 and loan is not None:
ltv = (loan / collateral_value) * 100
ltv_display.set_text(f"{ltv:.1f}%")
if margin is not None:
buffer = (margin - loan / collateral_value) * 100
buffer_display.set_text(f"{buffer:.1f}%")
else:
buffer_display.set_text("")
else:
ltv_display.set_text("")
buffer_display.set_text("")
if loan is not None and margin is not None and ounces is not None and ounces > 0:
margin_price_display.set_text(f"${loan / (margin * ounces):,.2f}/oz")
elif (
loan is not None
and margin is not None
and price is not None
and collateral_value is not None
and collateral_value > 0
):
implied_ounces = collateral_value / price
margin_price_display.set_text(f"${loan / (margin * implied_ounces):,.2f}/oz")
else:
margin_price_display.set_text("")
render_alert_state()
for element in (entry_basis_mode, entry_price, gold_value, gold_ounces):
element.on_value_change(update_entry_basis)
for element in (
loan_amount,
margin_threshold,
monthly_budget,
ltv_warning,
vol_alert,
price_alert,
email_alerts,
primary_source,
fallback_source,
refresh_interval,
display_mode,
):
element.on_value_change(update_calculations)
apply_entry_basis_mode()
update_entry_basis()
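The three figures that `update_calculations` renders follow directly from its formulas; a worked sketch with hypothetical inputs (the numbers below are illustrative, not app defaults):

```python
# Hypothetical portfolio inputs.
loan = 400_000.0          # loan_amount ($)
collateral = 1_000_000.0  # gold_value ($)
margin = 0.75             # margin_threshold (critical LTV)
ounces = 465.0            # gold_ounces

ltv = loan / collateral * 100                # current LTV, in percent
buffer = (margin - loan / collateral) * 100  # headroom to the threshold, in pp
# Spot price per ounce at which collateral value falls to loan / margin,
# i.e. the point where LTV hits the margin-call threshold.
margin_call_price = loan / (margin * ounces)

print(f"LTV {ltv:.1f}% · buffer {buffer:.1f}pp · margin call at ${margin_call_price:,.2f}/oz")
# LTV 40.0% · buffer 35.0pp · margin call at $1,146.95/oz
```

When `gold_ounces` is missing, the page falls back to `collateral_value / entry_price` as the implied ounce count, which yields the same per-ounce trigger price.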
def save_settings() -> None:
nonlocal last_saved_config
try:
new_config = build_preview_config()
workspace_repo.save_portfolio_config(workspace_id, new_config)
last_saved_config = new_config
render_alert_state()
status.set_text(_save_card_status_text(last_saved_config))
ui.notify("Settings saved successfully", color="positive")
except ValueError as e:
status.set_text(_save_card_status_text(last_saved_config, invalid=True))
ui.notify(f"Validation error: {e}", color="negative")
except Exception:
logger.exception("Failed to save settings for workspace %s", workspace_id)
status.set_text(_save_card_status_text(last_saved_config, save_failed=True))
ui.notify("Failed to save settings. Check logs for details.", color="negative")
status = ui.label(
f"Current: gold=${config.gold_value:,.0f}, loan=${config.loan_amount:,.0f}, "
f"current LTV={config.current_ltv:.1%}"
).classes("text-sm text-slate-500 dark:text-slate-400")
ui.button("Save settings", on_click=save_settings).props("color=primary")

app/services/alerts.py (new file, 155 lines)

@@ -0,0 +1,155 @@
"""Alert evaluation and history persistence."""
from __future__ import annotations
import logging
from dataclasses import dataclass
from decimal import Decimal
from typing import Mapping
from app.domain.portfolio_math import build_alert_context
from app.models.alerts import AlertEvent, AlertHistoryLoadError, AlertHistoryRepository, AlertStatus
from app.models.portfolio import PortfolioConfig
from app.services.boundary_values import boundary_decimal
logger = logging.getLogger(__name__)
@dataclass(frozen=True, slots=True)
class AlertEvaluationInput:
ltv_ratio: Decimal
spot_price: Decimal
updated_at: str
warning_threshold: Decimal
critical_threshold: Decimal
email_alerts_enabled: bool
def _normalize_alert_evaluation_input(
config: PortfolioConfig,
portfolio: Mapping[str, object],
) -> AlertEvaluationInput:
return AlertEvaluationInput(
ltv_ratio=boundary_decimal(
portfolio.get("ltv_ratio"),
field_name="portfolio.ltv_ratio",
),
spot_price=boundary_decimal(
portfolio.get("spot_price"),
field_name="portfolio.spot_price",
),
updated_at=str(portfolio.get("quote_updated_at", "")),
warning_threshold=boundary_decimal(
config.ltv_warning,
field_name="config.ltv_warning",
),
critical_threshold=boundary_decimal(
config.margin_threshold,
field_name="config.margin_threshold",
),
email_alerts_enabled=bool(config.email_alerts),
)
def _ratio_text(value: Decimal) -> str:
return f"{float(value):.1%}"
def build_portfolio_alert_context(
config: PortfolioConfig,
*,
spot_price: float,
source: str,
updated_at: str,
) -> dict[str, float | str]:
return build_alert_context(
config,
spot_price=spot_price,
source=source,
updated_at=updated_at,
)
class AlertService:
def __init__(self, history_path=None) -> None:
self.repository = AlertHistoryRepository(history_path=history_path)
def evaluate(
self, config: PortfolioConfig, portfolio: Mapping[str, object], *, persist: bool = True
) -> AlertStatus:
history: list[AlertEvent] = []
history_unavailable = False
history_notice: str | None = None
try:
history = self.repository.load()
except AlertHistoryLoadError as exc:
history_unavailable = True
history_notice = (
"Alert history is temporarily unavailable due to a storage error. New alerts are not being recorded."
)
logger.warning("Alert history unavailable at %s: %s", exc.history_path, exc)
evaluation = _normalize_alert_evaluation_input(config, portfolio)
if evaluation.ltv_ratio >= evaluation.critical_threshold:
severity = "critical"
message = (
f"Current LTV {_ratio_text(evaluation.ltv_ratio)} is above the critical threshold of "
f"{_ratio_text(evaluation.critical_threshold)}."
)
elif evaluation.ltv_ratio >= evaluation.warning_threshold:
severity = "warning"
message = (
f"Current LTV {_ratio_text(evaluation.ltv_ratio)} is above the warning threshold of "
f"{_ratio_text(evaluation.warning_threshold)}."
)
else:
severity = "ok"
message = "LTV is within configured thresholds."
preview_history: list[AlertEvent] = []
if severity != "ok":
event = AlertEvent(
severity=severity,
message=message,
ltv_ratio=float(evaluation.ltv_ratio),
warning_threshold=float(evaluation.warning_threshold),
critical_threshold=float(evaluation.critical_threshold),
spot_price=float(evaluation.spot_price),
updated_at=evaluation.updated_at,
email_alerts_enabled=evaluation.email_alerts_enabled,
)
if persist:
if not history_unavailable and self._should_record(history, event):
history.append(event)
self.repository.save(history)
else:
preview_history = [event]
if not persist:
resolved_history = preview_history
elif history_unavailable:
resolved_history = []
elif severity != "ok":
resolved_history = list(reversed(self.repository.load()))
else:
resolved_history = history
return AlertStatus(
severity=severity,
message=message,
ltv_ratio=float(evaluation.ltv_ratio),
warning_threshold=float(evaluation.warning_threshold),
critical_threshold=float(evaluation.critical_threshold),
email_alerts_enabled=evaluation.email_alerts_enabled,
history=resolved_history,
history_unavailable=history_unavailable,
history_notice=history_notice,
)
@staticmethod
def _should_record(history: list[AlertEvent], event: AlertEvent) -> bool:
if not history:
return True
latest = history[-1]
return latest.severity != event.severity
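The `evaluate` flow above reduces to a two-rung threshold ladder plus a consecutive-duplicate guard. A minimal self-contained sketch of just that logic (function names are illustrative, not the app's API):

```python
from decimal import Decimal

def classify(ltv: Decimal, warning: Decimal, critical: Decimal) -> str:
    """Mirror AlertService.evaluate's severity ladder: critical outranks warning."""
    if ltv >= critical:
        return "critical"
    if ltv >= warning:
        return "warning"
    return "ok"

def should_record(history: list[str], severity: str) -> bool:
    """Mirror _should_record: only append when severity differs from the latest event."""
    return not history or history[-1] != severity
```

As in `_should_record`, a repeated `warning` is suppressed while a `warning` to `critical` escalation is recorded.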


@@ -0,0 +1,12 @@
"""Backtesting services and historical market-data adapters."""
from .comparison import EventComparisonService
from .historical_provider import SyntheticHistoricalProvider, YFinanceHistoricalPriceSource
from .service import BacktestService
__all__ = [
"BacktestService",
"EventComparisonService",
"SyntheticHistoricalProvider",
"YFinanceHistoricalPriceSource",
]


@@ -0,0 +1,222 @@
from __future__ import annotations
from datetime import timedelta
from app.domain.backtesting_math import materialize_backtest_portfolio_state
from app.models.backtest import (
BacktestPortfolioState,
BacktestScenario,
EventComparisonRanking,
EventComparisonReport,
ProviderRef,
TemplateRef,
)
from app.models.event_preset import EventPreset
from app.services.backtesting.fixture_source import FixtureBoundSyntheticHistoricalProvider
from app.services.backtesting.historical_provider import (
DailyClosePoint,
SyntheticHistoricalProvider,
)
from app.services.backtesting.input_normalization import normalize_historical_scenario_inputs
from app.services.backtesting.service import BacktestService
from app.services.event_presets import EventPresetService
from app.services.strategy_templates import StrategyTemplateService
class EventComparisonService:
def __init__(
self,
provider: SyntheticHistoricalProvider | FixtureBoundSyntheticHistoricalProvider | None = None,
template_service: StrategyTemplateService | None = None,
event_preset_service: EventPresetService | None = None,
backtest_service: BacktestService | None = None,
) -> None:
self.provider = provider or SyntheticHistoricalProvider()
self.template_service = template_service or StrategyTemplateService()
self.event_preset_service = event_preset_service or EventPresetService()
self.backtest_service = backtest_service or BacktestService(
provider=self.provider,
template_service=self.template_service,
)
def compare_event(
self,
*,
preset_slug: str,
initial_portfolio: BacktestPortfolioState,
template_slugs: tuple[str, ...] | None = None,
provider_ref: ProviderRef | None = None,
) -> EventComparisonReport:
preset = self.event_preset_service.get_preset(preset_slug)
scenario = self.materialize_scenario(
preset,
initial_portfolio=initial_portfolio,
template_slugs=template_slugs,
provider_ref=provider_ref,
)
return self._compare_materialized_event(preset=preset, scenario=scenario)
def compare_event_from_inputs(
self,
*,
preset_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
template_slugs: tuple[str, ...] | None = None,
currency: str = "USD",
cash_balance: float = 0.0,
financing_rate: float = 0.0,
provider_ref: ProviderRef | None = None,
) -> EventComparisonReport:
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
currency=currency,
cash_balance=cash_balance,
financing_rate=financing_rate,
)
preset = self.event_preset_service.get_preset(preset_slug)
scenario = self.preview_scenario_from_inputs(
preset_slug=preset_slug,
underlying_units=normalized_inputs.underlying_units,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
template_slugs=template_slugs,
currency=normalized_inputs.currency,
cash_balance=normalized_inputs.cash_balance,
financing_rate=normalized_inputs.financing_rate,
provider_ref=provider_ref,
)
return self._compare_materialized_event(preset=preset, scenario=scenario)
def preview_scenario_from_inputs(
self,
*,
preset_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
template_slugs: tuple[str, ...] | None = None,
currency: str = "USD",
cash_balance: float = 0.0,
financing_rate: float = 0.0,
provider_ref: ProviderRef | None = None,
) -> BacktestScenario:
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
currency=currency,
cash_balance=cash_balance,
financing_rate=financing_rate,
)
preset = self.event_preset_service.get_preset(preset_slug)
history = self._load_preset_history(preset)
entry_spot = history[0].close
initial_portfolio = materialize_backtest_portfolio_state(
symbol=preset.symbol,
underlying_units=normalized_inputs.underlying_units,
entry_spot=entry_spot,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
currency=normalized_inputs.currency,
cash_balance=normalized_inputs.cash_balance,
financing_rate=normalized_inputs.financing_rate,
)
return self.materialize_scenario(
preset,
initial_portfolio=initial_portfolio,
template_slugs=template_slugs,
provider_ref=provider_ref,
history=history,
)
def materialize_scenario(
self,
preset: EventPreset,
*,
initial_portfolio: BacktestPortfolioState,
template_slugs: tuple[str, ...] | None = None,
provider_ref: ProviderRef | None = None,
history: list[DailyClosePoint] | None = None,
) -> BacktestScenario:
selected_template_slugs = (
tuple(preset.scenario_overrides.default_template_slugs) if template_slugs is None else tuple(template_slugs)
)
if not selected_template_slugs:
raise ValueError("Event comparison requires at least one template slug")
resolved_history = self._load_preset_history(preset) if history is None else history
if not resolved_history:
raise ValueError("Event comparison history must not be empty")
scenario_portfolio = materialize_backtest_portfolio_state(
symbol=preset.symbol,
underlying_units=initial_portfolio.underlying_units,
entry_spot=resolved_history[0].close,
loan_amount=initial_portfolio.loan_amount,
margin_call_ltv=initial_portfolio.margin_call_ltv,
currency=initial_portfolio.currency,
cash_balance=initial_portfolio.cash_balance,
financing_rate=initial_portfolio.financing_rate,
)
template_refs = tuple(
TemplateRef(slug=slug, version=self.template_service.get_template(slug).version)
for slug in selected_template_slugs
)
return BacktestScenario(
scenario_id=f"event-{preset.slug}",
display_name=preset.display_name,
symbol=preset.symbol,
start_date=resolved_history[0].date,
end_date=resolved_history[-1].date,
initial_portfolio=scenario_portfolio,
template_refs=template_refs,
provider_ref=provider_ref
or ProviderRef(
provider_id=self.provider.provider_id,
pricing_mode=self.provider.pricing_mode,
),
)
def _compare_materialized_event(self, *, preset: EventPreset, scenario: BacktestScenario) -> EventComparisonReport:
run_result = self.backtest_service.run_scenario(scenario)
ranked_results = sorted(
run_result.template_results,
key=lambda result: (
result.summary_metrics.margin_call_days_hedged,
result.summary_metrics.max_ltv_hedged,
result.summary_metrics.total_hedge_cost,
-result.summary_metrics.end_value_hedged_net,
result.template_slug,
),
)
rankings = tuple(
EventComparisonRanking(
rank=index,
template_slug=result.template_slug,
template_name=result.template_name,
survived_margin_call=not result.summary_metrics.margin_threshold_breached_hedged,
margin_call_days_hedged=result.summary_metrics.margin_call_days_hedged,
max_ltv_hedged=result.summary_metrics.max_ltv_hedged,
hedge_cost=result.summary_metrics.total_hedge_cost,
final_equity=result.summary_metrics.end_value_hedged_net,
result=result,
)
for index, result in enumerate(ranked_results, start=1)
)
return EventComparisonReport(
event_preset=preset,
scenario=scenario,
rankings=rankings,
run_result=run_result,
)
def _load_preset_history(self, preset: EventPreset) -> list[DailyClosePoint]:
requested_start = preset.window_start - timedelta(days=preset.scenario_overrides.lookback_days or 0)
requested_end = preset.window_end + timedelta(days=preset.scenario_overrides.recovery_days or 0)
history = self.provider.load_history(preset.symbol, requested_start, requested_end)
if not history:
raise ValueError(f"No historical prices found for event preset: {preset.slug}")
return history
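`_compare_materialized_event` ranks templates lexicographically: fewest hedged margin-call days, then lowest peak hedged LTV, then cheapest hedge, then highest net final equity (negated so that ascending `sorted` prefers larger values), with the slug as a deterministic tiebreak. The same key in isolation (field names are illustrative stand-ins for the summary metrics):

```python
from typing import NamedTuple

class Summary(NamedTuple):
    slug: str
    margin_call_days: int
    max_ltv: float
    hedge_cost: float
    final_equity: float

def rank(results: list[Summary]) -> list[str]:
    # Tuple comparison walks left to right, so earlier fields dominate later ones
    ordered = sorted(
        results,
        key=lambda r: (r.margin_call_days, r.max_ltv, r.hedge_cost, -r.final_equity, r.slug),
    )
    return [r.slug for r in ordered]
```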


@@ -0,0 +1,388 @@
"""Databento historical price source for backtesting."""
from __future__ import annotations
import hashlib
import json
import logging
from dataclasses import dataclass
from datetime import date, timedelta
from pathlib import Path
from typing import TYPE_CHECKING, Any
from app.services.backtesting.historical_provider import DailyClosePoint
if TYPE_CHECKING:
import databento as db
import pandas as pd
else:
db = None
pd = None
DATABENTO_AVAILABLE = False
logger = logging.getLogger(__name__)
# Try to import databento, gracefully degrade if not available
try:
import databento as _db
import pandas as _pd
db = _db
pd = _pd
DATABENTO_AVAILABLE = True
except ImportError:
pass
@dataclass
class DatabentoSourceConfig:
"""Configuration for Databento data source."""
api_key: str | None = None # Falls back to DATABENTO_API_KEY env var
cache_dir: Path = Path(".cache/databento")
dataset: str = "XNAS.BASIC"
schema: str = "ohlcv-1d"
stype_in: str = "raw_symbol"
# Re-download threshold
max_cache_age_days: int = 30
def __post_init__(self) -> None:
# Accept plain strings for convenience; this dataclass is not frozen, so direct assignment works
if isinstance(self.cache_dir, str):
self.cache_dir = Path(self.cache_dir)
@dataclass(frozen=True)
class DatabentoCacheKey:
"""Cache key for Databento data."""
dataset: str
symbol: str
schema: str
start_date: date
end_date: date
def _key_hash(self) -> str:
key_str = f"{self.dataset}_{self.symbol}_{self.schema}_{self.start_date}_{self.end_date}"
return hashlib.sha256(key_str.encode()).hexdigest()[:16]
def cache_path(self, cache_dir: Path) -> Path:
return cache_dir / f"dbn_{self._key_hash()}.parquet"
def metadata_path(self, cache_dir: Path) -> Path:
return cache_dir / f"dbn_{self._key_hash()}_meta.json"
class DatabentoHistoricalPriceSource:
"""Databento-based historical price source for backtesting.
This provider fetches historical daily OHLCV data from Databento's API
and caches it locally to minimize API calls and costs.
Key features:
- Smart caching with configurable age threshold
- Cost estimation before fetching
- Symbol-to-dataset resolution (GLD→XNAS.BASIC, GC=F→GLBX.MDP3)
- Parquet storage for fast loading
Example usage:
source = DatabentoHistoricalPriceSource()
prices = source.load_daily_closes("GLD", date(2024, 1, 1), date(2024, 1, 31))
"""
def __init__(self, config: DatabentoSourceConfig | None = None) -> None:
if not DATABENTO_AVAILABLE:
raise RuntimeError("databento package required: pip install databento>=0.30.0")
self.config = config or DatabentoSourceConfig()
self.config.cache_dir.mkdir(parents=True, exist_ok=True)
self._client: Any = None # db.Historical
@property
def client(self) -> Any:
"""Get or create Databento client."""
if self._client is None:
if db is None:
raise RuntimeError("databento package not installed")
self._client = db.Historical(key=self.config.api_key)
return self._client
def _load_from_cache(self, key: DatabentoCacheKey) -> list[DailyClosePoint] | None:
"""Load cached data if available and fresh."""
cache_file = key.cache_path(self.config.cache_dir)
meta_file = key.metadata_path(self.config.cache_dir)
if not cache_file.exists() or not meta_file.exists():
return None
try:
with open(meta_file) as f:
meta = json.load(f)
# Check dataset and symbol match (for cache invalidation)
if meta.get("dataset") != key.dataset or meta.get("symbol") != key.symbol:
return None
cache_age = (date.today() - date.fromisoformat(meta["download_date"])).days
if cache_age > self.config.max_cache_age_days:
return None
if meta.get("start_date") != key.start_date.isoformat() or meta.get("end_date") != key.end_date.isoformat():
return None
if meta.get("dataset") != key.dataset or meta.get("symbol") != key.symbol:
return None
# Load parquet and convert
df = pd.read_parquet(cache_file)
return self._df_to_daily_points(df)
except Exception:
return None
def _save_to_cache(self, key: DatabentoCacheKey, df: Any, cost_usd: float = 0.0) -> None:
"""Save data to cache."""
if pd is None:
return
cache_file = key.cache_path(self.config.cache_dir)
meta_file = key.metadata_path(self.config.cache_dir)
df.to_parquet(cache_file, index=False)
meta = {
"download_date": date.today().isoformat(),
"dataset": key.dataset,
"symbol": key.symbol,
"schema": key.schema,
"start_date": key.start_date.isoformat(),
"end_date": key.end_date.isoformat(),
"rows": len(df),
"cost_usd": cost_usd,
}
with open(meta_file, "w") as f:
json.dump(meta, f, indent=2)
def _fetch_from_databento(self, key: DatabentoCacheKey) -> Any:
"""Fetch data from Databento API."""
data = self.client.timeseries.get_range(
dataset=key.dataset,
symbols=key.symbol,
schema=key.schema,
start=key.start_date.isoformat(),
end=(key.end_date + timedelta(days=1)).isoformat(), # Exclusive end
stype_in=self.config.stype_in,
)
return data.to_df()
def _df_to_daily_points(self, df: Any) -> list[DailyClosePoint]:
"""Convert DataFrame to DailyClosePoint list with OHLC data."""
if pd is None:
return []
def parse_price(raw_val: Any) -> float | None:
"""Parse Databento price (int64 scaled by 1e9)."""
if raw_val is None or (isinstance(raw_val, float) and pd.isna(raw_val)):
return None
if isinstance(raw_val, (int, float)):
return float(raw_val) / 1e9 if raw_val > 1e9 else float(raw_val)
return float(raw_val) if raw_val else None
points = []
for idx, row in df.iterrows():
# Databento ohlcv schema has ts_event as timestamp
ts = row.get("ts_event", row.get("ts_recv", idx))
if hasattr(ts, "date"):
row_date = ts.date()
else:
# Parse ISO date string
ts_str = str(ts)
row_date = date.fromisoformat(ts_str[:10])
close = parse_price(row.get("close"))
low = parse_price(row.get("low"))
high = parse_price(row.get("high"))
open_price = parse_price(row.get("open"))
if close and close > 0:
points.append(
DailyClosePoint(
date=row_date,
close=close,
low=low,
high=high,
open=open_price,
)
)
return sorted(points, key=lambda p: p.date)
def load_daily_closes(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
"""Load daily closing prices from Databento (with caching).
Args:
symbol: Trading symbol (GLD, GC=F, XAU)
start_date: Inclusive start date
end_date: Inclusive end date
Returns:
List of DailyClosePoint sorted by date
"""
# Map symbols to datasets
dataset = self._resolve_dataset(symbol)
databento_symbol = self._resolve_symbol(symbol)
key = DatabentoCacheKey(
dataset=dataset,
symbol=databento_symbol,
schema=self.config.schema,
start_date=start_date,
end_date=end_date,
)
# Try cache first
cached = self._load_from_cache(key)
if cached is not None:
return cached
# Fetch from Databento
df = self._fetch_from_databento(key)
# Get cost estimate (approximate)
try:
cost_usd = self.get_cost_estimate(symbol, start_date, end_date)
except Exception:
cost_usd = 0.0
# Cache results
self._save_to_cache(key, df, cost_usd)
return self._df_to_daily_points(df)
def _resolve_dataset(self, symbol: str) -> str:
"""Resolve symbol to Databento dataset."""
symbol_upper = symbol.upper()
if symbol_upper in ("GLD", "GLDM", "IAU"):
return "XNAS.BASIC" # ETFs on Nasdaq
elif symbol_upper in ("GC=F", "GC", "GOLD"):
return "GLBX.MDP3" # CME gold futures
elif symbol_upper == "XAU":
return "XNAS.BASIC" # Treat as GLD proxy
else:
return self.config.dataset # Use configured default
def _resolve_symbol(self, symbol: str) -> str:
"""Resolve vault-dash symbol to Databento symbol."""
symbol_upper = symbol.upper()
if symbol_upper == "XAU":
return "GLD" # Proxy XAU via GLD prices
elif symbol_upper == "GC=F":
return "GC" # Use parent symbol for continuous contracts
return symbol_upper
def get_cost_estimate(self, symbol: str, start_date: date, end_date: date) -> float:
"""Estimate cost in USD for a data request.
Args:
symbol: Trading symbol
start_date: Start date
end_date: End date
Returns:
Estimated cost in USD
"""
dataset = self._resolve_dataset(symbol)
databento_symbol = self._resolve_symbol(symbol)
try:
cost = self.client.metadata.get_cost(
dataset=dataset,
symbols=databento_symbol,
schema=self.config.schema,
start=start_date.isoformat(),
end=(end_date + timedelta(days=1)).isoformat(),
)
return float(cost)
except Exception:
return 0.0 # Return 0 if cost estimation fails
def get_available_range(self, symbol: str) -> tuple[date | None, date | None]:
"""Get the available date range for a symbol from Databento.
Args:
symbol: Trading symbol
Returns:
Tuple of (start_date, end_date) or (None, None) if unavailable
"""
# Note: Databento availability depends on the dataset
# For now, return None to indicate we should try fetching
return None, None
def get_cache_stats(self) -> dict[str, Any]:
"""Get cache statistics."""
cache_dir = self.config.cache_dir
if not cache_dir.exists():
return {"status": "empty", "entries": [], "file_count": 0, "total_size_bytes": 0}
entries = []
total_size = 0
file_count = 0
for meta_file in cache_dir.glob("*_meta.json"):
try:
with open(meta_file) as f:
meta = json.load(f)
entries.append(
{
"symbol": meta.get("symbol"),
"dataset": meta.get("dataset"),
"start_date": meta.get("start_date"),
"end_date": meta.get("end_date"),
"download_date": meta.get("download_date"),
"rows": meta.get("rows"),
"cost_usd": meta.get("cost_usd"),
}
)
total_size += meta_file.stat().st_size
file_count += 1
except Exception:
continue
# Count parquet files too
for parquet_file in cache_dir.glob("dbn_*.parquet"):
total_size += parquet_file.stat().st_size
file_count += 1
return {
"status": "populated" if entries else "empty",
"entries": entries,
"file_count": file_count,
"total_size_bytes": total_size,
}
def clear_cache(self) -> int:
"""Clear all cache files.
Returns:
Number of files deleted.
"""
cache_dir = self.config.cache_dir
if not cache_dir.exists():
return 0
count = 0
for cache_file in cache_dir.glob("dbn_*.parquet"):
cache_file.unlink()
count += 1
for meta_file in cache_dir.glob("dbn_*_meta.json"):
meta_file.unlink()
count += 1
return count


@@ -0,0 +1,98 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from enum import StrEnum
from typing import Any
from app.services.backtesting.historical_provider import DailyClosePoint, SyntheticHistoricalProvider
SEEDED_GLD_2024_FIXTURE_HISTORY: tuple[DailyClosePoint, ...] = (
DailyClosePoint(date=date(2024, 1, 2), close=100.0),
DailyClosePoint(date=date(2024, 1, 3), close=96.0),
DailyClosePoint(date=date(2024, 1, 4), close=92.0),
DailyClosePoint(date=date(2024, 1, 5), close=88.0),
DailyClosePoint(date=date(2024, 1, 8), close=85.0),
)
class WindowPolicy(StrEnum):
EXACT = "exact"
BOUNDED = "bounded"
@dataclass(frozen=True)
class SharedHistoricalFixtureSource:
feature_label: str
supported_symbol: str
history: tuple[DailyClosePoint, ...]
window_policy: WindowPolicy
@property
def start_date(self) -> date:
return self.history[0].date
@property
def end_date(self) -> date:
return self.history[-1].date
def load_daily_closes(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
if start_date > end_date:
raise ValueError("start_date must be on or before end_date")
normalized_symbol = symbol.strip().upper()
if normalized_symbol != self.supported_symbol.strip().upper():
raise ValueError(
f"{self.feature_label} deterministic fixture data only supports {self.supported_symbol} on this page"
)
if self.window_policy is WindowPolicy.EXACT:
if start_date != self.start_date or end_date != self.end_date:
raise ValueError(
f"{self.feature_label} deterministic fixture data only supports {self.supported_symbol} "
f"on the seeded {self.start_date.isoformat()} through {self.end_date.isoformat()} window"
)
else:
if start_date < self.start_date or end_date > self.end_date:
raise ValueError(
f"{self.feature_label} deterministic fixture data only supports the seeded "
f"{self.start_date.isoformat()} through {self.end_date.isoformat()} window. "
f"For dates outside this range, please use Databento or Yahoo Finance data source."
)
return [point for point in self.history if start_date <= point.date <= end_date]
class FixtureBoundSyntheticHistoricalProvider:
def __init__(self, base_provider: SyntheticHistoricalProvider, source: SharedHistoricalFixtureSource) -> None:
self.base_provider = base_provider
self.source = source
def load_history(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
rows = self.source.load_daily_closes(symbol, start_date, end_date)
return sorted(rows, key=lambda row: row.date)
def __getattr__(self, name: str) -> Any:
return getattr(self.base_provider, name)
def build_backtest_ui_fixture_source() -> SharedHistoricalFixtureSource:
return SharedHistoricalFixtureSource(
feature_label="BT-001A",
supported_symbol="GLD",
history=SEEDED_GLD_2024_FIXTURE_HISTORY,
window_policy=WindowPolicy.BOUNDED,
)
def build_event_comparison_fixture_source() -> SharedHistoricalFixtureSource:
return SharedHistoricalFixtureSource(
feature_label="BT-003A",
supported_symbol="GLD",
history=SEEDED_GLD_2024_FIXTURE_HISTORY,
window_policy=WindowPolicy.BOUNDED,
)
def bind_fixture_source(
base_provider: SyntheticHistoricalProvider,
source: SharedHistoricalFixtureSource,
) -> FixtureBoundSyntheticHistoricalProvider:
return FixtureBoundSyntheticHistoricalProvider(base_provider=base_provider, source=source)
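The two `WindowPolicy` modes above differ only in how strictly the requested window must match the seeded fixture: EXACT requires both endpoints to coincide, BOUNDED only requires containment. A stripped-down sketch of the BOUNDED containment check (bare dates stand in for `DailyClosePoint`s):

```python
from datetime import date

FIXTURE_DATES = [date(2024, 1, 2), date(2024, 1, 3), date(2024, 1, 4),
                 date(2024, 1, 5), date(2024, 1, 8)]

def bounded_window(start: date, end: date) -> list[date]:
    """BOUNDED policy: reject windows outside the seeded range, else filter inclusively."""
    if start > end:
        raise ValueError("start must be on or before end")
    if start < FIXTURE_DATES[0] or end > FIXTURE_DATES[-1]:
        raise ValueError("window exceeds the seeded fixture range")
    return [d for d in FIXTURE_DATES if start <= d <= end]
```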


@@ -0,0 +1,560 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, timedelta
from math import isfinite
from typing import Protocol, cast
from app.models.backtest import ProviderRef
try:
import yfinance as yf
except ImportError: # pragma: no cover - optional in tests
yf = None
from app.core.pricing.black_scholes import BlackScholesInputs, OptionType, black_scholes_price_and_greeks
from app.models.strategy_template import TemplateLeg
@dataclass(frozen=True)
class DailyClosePoint:
date: date
close: float
low: float | None = None # Day's low for margin call evaluation
high: float | None = None # Day's high
open: float | None = None # Day's open
def __post_init__(self) -> None:
if self.close <= 0:
raise ValueError("close must be positive")
if self.low is not None and self.low <= 0:
raise ValueError("low must be positive")
if self.high is not None and self.high <= 0:
raise ValueError("high must be positive")
if self.open is not None and self.open <= 0:
raise ValueError("open must be positive")
@dataclass(frozen=True)
class SyntheticOptionQuote:
position_id: str
leg_id: str
spot: float
strike: float
expiry: date
quantity: float
mark: float
def __post_init__(self) -> None:
for field_name in ("position_id", "leg_id"):
value = getattr(self, field_name)
if not isinstance(value, str) or not value:
raise ValueError(f"{field_name} is required")
for field_name in ("spot", "strike", "quantity", "mark"):
value = getattr(self, field_name)
if not isinstance(value, (int, float)) or isinstance(value, bool) or not isfinite(float(value)):
raise TypeError(f"{field_name} must be a finite number")
if self.spot <= 0:
raise ValueError("spot must be positive")
if self.strike <= 0:
raise ValueError("strike must be positive")
if self.quantity <= 0:
raise ValueError("quantity must be positive")
if self.mark < 0:
raise ValueError("mark must be non-negative")
@dataclass(frozen=True)
class DailyOptionSnapshot:
contract_key: str
symbol: str
snapshot_date: date
expiry: date
option_type: str
strike: float
mid: float
def __post_init__(self) -> None:
if not self.contract_key:
raise ValueError("contract_key is required")
if not self.symbol:
raise ValueError("symbol is required")
if self.option_type not in {"put", "call"}:
raise ValueError("unsupported option_type")
if self.strike <= 0:
raise ValueError("strike must be positive")
if self.mid < 0:
raise ValueError("mid must be non-negative")
@dataclass
class HistoricalOptionPosition:
position_id: str
leg_id: str
contract_key: str
option_type: str
strike: float
expiry: date
quantity: float
entry_price: float
current_mark: float
last_mark_date: date
source_snapshot_date: date
def __post_init__(self) -> None:
for field_name in ("position_id", "leg_id", "contract_key"):
value = getattr(self, field_name)
if not isinstance(value, str) or not value:
raise ValueError(f"{field_name} is required")
if self.option_type not in {"put", "call"}:
raise ValueError("unsupported option_type")
for field_name in ("strike", "quantity", "entry_price", "current_mark"):
value = getattr(self, field_name)
if not isinstance(value, (int, float)) or isinstance(value, bool) or not isfinite(float(value)):
raise TypeError(f"{field_name} must be a finite number")
if self.strike <= 0:
raise ValueError("strike must be positive")
if self.quantity <= 0:
raise ValueError("quantity must be positive")
if self.entry_price < 0:
raise ValueError("entry_price must be non-negative")
if self.current_mark < 0:
raise ValueError("current_mark must be non-negative")
@dataclass(frozen=True)
class HistoricalOptionMark:
contract_key: str
mark: float
source: str
is_active: bool
realized_cashflow: float = 0.0
warning: str | None = None
def __post_init__(self) -> None:
if not self.contract_key:
raise ValueError("contract_key is required")
for field_name in ("mark", "realized_cashflow"):
value = getattr(self, field_name)
if not isinstance(value, (int, float)) or isinstance(value, bool) or not isfinite(float(value)):
raise TypeError(f"{field_name} must be a finite number")
if self.mark < 0:
raise ValueError("mark must be non-negative")
if self.realized_cashflow < 0:
raise ValueError("realized_cashflow must be non-negative")
class HistoricalPriceSource(Protocol):
def load_daily_closes(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
raise NotImplementedError
class OptionSnapshotSource(Protocol):
def load_option_chain(self, symbol: str, snapshot_date: date) -> list[DailyOptionSnapshot]:
raise NotImplementedError
class BacktestHistoricalProvider(Protocol):
provider_id: str
pricing_mode: str
def load_history(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
raise NotImplementedError
def validate_provider_ref(self, provider_ref: ProviderRef) -> None:
raise NotImplementedError
def open_position(
self,
*,
symbol: str,
leg: TemplateLeg,
position_id: str,
quantity: float,
as_of_date: date,
spot: float,
trading_days: list[DailyClosePoint],
) -> HistoricalOptionPosition:
raise NotImplementedError
def mark_position(
self,
position: HistoricalOptionPosition,
*,
symbol: str,
as_of_date: date,
spot: float,
) -> HistoricalOptionMark:
raise NotImplementedError
class YFinanceHistoricalPriceSource:
@staticmethod
def _normalize_daily_close_row(
*, row_date: object, close: object, low: object = None, high: object = None, open_price: object = None
) -> DailyClosePoint | None:
if close is None:
return None
if not hasattr(row_date, "date"):
raise TypeError(f"historical row date must support .date(), got {type(row_date)!r}")
if isinstance(close, (int, float)):
normalized_close = float(close)
else:
raise TypeError(f"close must be numeric, got {type(close)!r}")
if not isfinite(normalized_close):
raise ValueError("historical close must be finite")
# Parse optional OHLC fields
def parse_optional(val: object) -> float | None:
if val is None:
return None
if isinstance(val, (int, float)):
result = float(val)
return result if isfinite(result) and result > 0 else None
return None
return DailyClosePoint(
date=row_date.date(),
close=normalized_close,
low=parse_optional(low),
high=parse_optional(high),
open=parse_optional(open_price),
)
def load_daily_closes(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
if yf is None:
raise RuntimeError("yfinance is required to load historical backtest prices")
ticker = yf.Ticker(symbol)
inclusive_end_date = end_date + timedelta(days=1)
history = ticker.history(start=start_date.isoformat(), end=inclusive_end_date.isoformat(), interval="1d")
rows: list[DailyClosePoint] = []
for index, row in history.iterrows():
point = self._normalize_daily_close_row(
row_date=index,
close=row.get("Close"),
low=row.get("Low"),
high=row.get("High"),
open_price=row.get("Open"),
)
if point is not None:
rows.append(point)
return rows
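The `SyntheticHistoricalProvider` below snaps a target expiry forward to the first trading day on or after it, falling back to the raw calendar date when the target lies past the available history. The rule in isolation (bare dates stand in for `DailyClosePoint`s):

```python
from datetime import date, timedelta

def resolve_expiry(trading_days: list[date], as_of: date, target_days: int) -> date:
    """Snap as_of + target_days forward to the first available trading day,
    or return the calendar date when the target is past the data."""
    target = as_of + timedelta(days=target_days)
    for day in trading_days:
        if day >= target:
            return day
    return target
```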
class SyntheticHistoricalProvider:
provider_id = "synthetic_v1"
pricing_mode = "synthetic_bs_mid"
def __init__(
self,
source: HistoricalPriceSource | None = None,
implied_volatility: float = 0.16,
risk_free_rate: float = 0.045,
) -> None:
if implied_volatility <= 0:
raise ValueError("implied_volatility must be positive")
self.source = source or YFinanceHistoricalPriceSource()
self.implied_volatility = implied_volatility
self.risk_free_rate = risk_free_rate
def load_history(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
rows = self.source.load_daily_closes(symbol, start_date, end_date)
filtered = [row for row in rows if start_date <= row.date <= end_date]
return sorted(filtered, key=lambda row: row.date)
def validate_provider_ref(self, provider_ref: ProviderRef) -> None:
if provider_ref.provider_id != self.provider_id or provider_ref.pricing_mode != self.pricing_mode:
raise ValueError(
"Unsupported provider/pricing combination for synthetic MVP engine: "
f"{provider_ref.provider_id}/{provider_ref.pricing_mode}"
)
def resolve_expiry(self, trading_days: list[DailyClosePoint], as_of_date: date, target_expiry_days: int) -> date:
target_date = as_of_date + timedelta(days=target_expiry_days)
for day in trading_days:
if day.date >= target_date:
return day.date
return target_date
def open_position(
self,
*,
symbol: str,
leg: TemplateLeg,
position_id: str,
quantity: float,
as_of_date: date,
spot: float,
trading_days: list[DailyClosePoint],
) -> HistoricalOptionPosition:
expiry = self.resolve_expiry(trading_days, as_of_date, leg.target_expiry_days)
strike = spot * leg.strike_rule.value
quote = self.price_option(
position_id=position_id,
leg=leg,
spot=spot,
strike=strike,
expiry=expiry,
quantity=quantity,
valuation_date=as_of_date,
)
return HistoricalOptionPosition(
position_id=position_id,
leg_id=leg.leg_id,
contract_key=f"{symbol}-{expiry.isoformat()}-{leg.option_type}-{strike:.4f}",
option_type=leg.option_type,
strike=strike,
expiry=expiry,
quantity=quantity,
entry_price=quote.mark,
current_mark=quote.mark,
last_mark_date=as_of_date,
source_snapshot_date=as_of_date,
)
def mark_position(
self,
position: HistoricalOptionPosition,
*,
symbol: str,
as_of_date: date,
spot: float,
) -> HistoricalOptionMark:
if as_of_date >= position.expiry:
intrinsic = self.intrinsic_value(option_type=position.option_type, spot=spot, strike=position.strike)
return HistoricalOptionMark(
contract_key=position.contract_key,
mark=0.0,
source="intrinsic_expiry",
is_active=False,
realized_cashflow=intrinsic * position.quantity,
)
quote = self.price_option_by_type(
position_id=position.position_id,
leg_id=position.leg_id,
option_type=position.option_type,
spot=spot,
strike=position.strike,
expiry=position.expiry,
quantity=position.quantity,
valuation_date=as_of_date,
)
position.current_mark = quote.mark
position.last_mark_date = as_of_date
return HistoricalOptionMark(
contract_key=position.contract_key,
mark=quote.mark,
source="synthetic_bs_mid",
is_active=True,
)
def price_option(
self,
*,
position_id: str,
leg: TemplateLeg,
spot: float,
strike: float,
expiry: date,
quantity: float,
valuation_date: date,
) -> SyntheticOptionQuote:
return self.price_option_by_type(
position_id=position_id,
leg_id=leg.leg_id,
option_type=leg.option_type,
spot=spot,
strike=strike,
expiry=expiry,
quantity=quantity,
valuation_date=valuation_date,
)
def price_option_by_type(
self,
*,
position_id: str,
leg_id: str,
option_type: str,
spot: float,
strike: float,
expiry: date,
quantity: float,
valuation_date: date,
) -> SyntheticOptionQuote:
remaining_days = max(1, expiry.toordinal() - valuation_date.toordinal())
mark = black_scholes_price_and_greeks(
BlackScholesInputs(
spot=spot,
strike=strike,
time_to_expiry=remaining_days / 365.0,
risk_free_rate=self.risk_free_rate,
volatility=self.implied_volatility,
option_type=cast(OptionType, option_type),
valuation_date=valuation_date,
)
).price
return SyntheticOptionQuote(
position_id=position_id,
leg_id=leg_id,
spot=spot,
strike=strike,
expiry=expiry,
quantity=quantity,
mark=mark,
)
@staticmethod
def intrinsic_value(*, option_type: str, spot: float, strike: float) -> float:
if option_type == "put":
return max(strike - spot, 0.0)
if option_type == "call":
return max(spot - strike, 0.0)
raise ValueError(f"Unsupported option type: {option_type}")
class EmptyOptionSnapshotSource:
def load_option_chain(self, symbol: str, snapshot_date: date) -> list[DailyOptionSnapshot]:
return []
class DailyOptionsSnapshotProvider:
provider_id = "daily_snapshots_v1"
pricing_mode = "snapshot_mid"
def __init__(
self,
price_source: HistoricalPriceSource | None = None,
snapshot_source: OptionSnapshotSource | None = None,
) -> None:
self.price_source = price_source or YFinanceHistoricalPriceSource()
self.snapshot_source = snapshot_source or EmptyOptionSnapshotSource()
def load_history(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
rows = self.price_source.load_daily_closes(symbol, start_date, end_date)
filtered = [row for row in rows if start_date <= row.date <= end_date]
return sorted(filtered, key=lambda row: row.date)
def validate_provider_ref(self, provider_ref: ProviderRef) -> None:
if provider_ref.provider_id != self.provider_id or provider_ref.pricing_mode != self.pricing_mode:
raise ValueError(
"Unsupported provider/pricing combination for historical snapshot engine: "
f"{provider_ref.provider_id}/{provider_ref.pricing_mode}"
)
def open_position(
self,
*,
symbol: str,
leg: TemplateLeg,
position_id: str,
quantity: float,
as_of_date: date,
spot: float,
trading_days: list[DailyClosePoint],
) -> HistoricalOptionPosition:
del trading_days # selection must use only the entry-day snapshot, not future state
selected_snapshot = self._select_entry_snapshot(symbol=symbol, leg=leg, as_of_date=as_of_date, spot=spot)
return HistoricalOptionPosition(
position_id=position_id,
leg_id=leg.leg_id,
contract_key=selected_snapshot.contract_key,
option_type=selected_snapshot.option_type,
strike=selected_snapshot.strike,
expiry=selected_snapshot.expiry,
quantity=quantity,
entry_price=selected_snapshot.mid,
current_mark=selected_snapshot.mid,
last_mark_date=as_of_date,
source_snapshot_date=as_of_date,
)
def mark_position(
self,
position: HistoricalOptionPosition,
*,
symbol: str,
as_of_date: date,
spot: float,
) -> HistoricalOptionMark:
if as_of_date >= position.expiry:
intrinsic = SyntheticHistoricalProvider.intrinsic_value(
option_type=position.option_type,
spot=spot,
strike=position.strike,
)
return HistoricalOptionMark(
contract_key=position.contract_key,
mark=0.0,
source="intrinsic_expiry",
is_active=False,
realized_cashflow=intrinsic * position.quantity,
)
exact_snapshot = next(
(
snapshot
for snapshot in self.snapshot_source.load_option_chain(symbol, as_of_date)
if snapshot.contract_key == position.contract_key
),
None,
)
if exact_snapshot is not None:
position.current_mark = exact_snapshot.mid
position.last_mark_date = as_of_date
return HistoricalOptionMark(
contract_key=position.contract_key,
mark=exact_snapshot.mid,
source="snapshot_mid",
is_active=True,
)
if position.current_mark < 0:
raise ValueError(f"Missing historical mark for {position.contract_key} on {as_of_date.isoformat()}")
return HistoricalOptionMark(
contract_key=position.contract_key,
mark=position.current_mark,
source="carry_forward",
is_active=True,
warning=(
f"Missing historical mark for {position.contract_key} on {as_of_date.isoformat()}; "
f"carrying forward prior mark from {position.last_mark_date.isoformat()}."
),
)
def _select_entry_snapshot(
self,
*,
symbol: str,
leg: TemplateLeg,
as_of_date: date,
spot: float,
) -> DailyOptionSnapshot:
target_expiry = date.fromordinal(as_of_date.toordinal() + leg.target_expiry_days)
target_strike = spot * leg.strike_rule.value
chain = [
snapshot
for snapshot in self.snapshot_source.load_option_chain(symbol, as_of_date)
if snapshot.symbol.strip().upper() == symbol.strip().upper() and snapshot.option_type == leg.option_type
]
eligible_expiries = [snapshot for snapshot in chain if snapshot.expiry >= target_expiry]
if not eligible_expiries:
raise ValueError(
f"No eligible historical option snapshots found for {symbol} on {as_of_date.isoformat()} "
f"at or beyond target expiry {target_expiry.isoformat()}"
)
selected_expiry = min(
eligible_expiries,
key=lambda snapshot: ((snapshot.expiry - target_expiry).days, snapshot.expiry),
).expiry
expiry_matches = [snapshot for snapshot in eligible_expiries if snapshot.expiry == selected_expiry]
return min(
expiry_matches, key=lambda snapshot: self._strike_sort_key(snapshot.strike, target_strike, leg.option_type)
)
@staticmethod
def _strike_sort_key(strike: float, target_strike: float, option_type: str) -> tuple[float, float]:
if option_type == "put":
return (abs(strike - target_strike), -strike)
return (abs(strike - target_strike), strike)
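The `_strike_sort_key` tie-break above is easiest to see with equidistant strikes: puts prefer the higher strike on a distance tie, calls the lower. A minimal standalone sketch (the chain values are hypothetical; `strike_sort_key` mirrors the static method in the listing):

```python
def strike_sort_key(strike: float, target_strike: float, option_type: str) -> tuple[float, float]:
    # Puts break distance ties toward the higher strike (more protection);
    # calls break ties toward the lower strike.
    if option_type == "put":
        return (abs(strike - target_strike), -strike)
    return (abs(strike - target_strike), strike)

chain = [95.0, 105.0, 110.0]
target = 100.0
best_put = min(chain, key=lambda s: strike_sort_key(s, target, "put"))    # 105.0
best_call = min(chain, key=lambda s: strike_sort_key(s, target, "call"))  # 95.0
```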


@@ -0,0 +1,51 @@
from __future__ import annotations
from dataclasses import dataclass
from app.services.boundary_values import boundary_decimal
@dataclass(frozen=True, slots=True)
class NormalizedHistoricalScenarioInputs:
underlying_units: float
loan_amount: float
margin_call_ltv: float
currency: str = "USD"
cash_balance: float = 0.0
financing_rate: float = 0.0
def normalize_historical_scenario_inputs(
*,
underlying_units: object,
loan_amount: object,
margin_call_ltv: object,
currency: object = "USD",
cash_balance: object = 0.0,
financing_rate: object = 0.0,
) -> NormalizedHistoricalScenarioInputs:
normalized_currency = str(currency).strip().upper()
if not normalized_currency:
raise ValueError("Currency is required")
units = float(boundary_decimal(underlying_units, field_name="underlying_units"))
normalized_loan_amount = float(boundary_decimal(loan_amount, field_name="loan_amount"))
ltv = float(boundary_decimal(margin_call_ltv, field_name="margin_call_ltv"))
normalized_cash_balance = float(boundary_decimal(cash_balance, field_name="cash_balance"))
normalized_financing_rate = float(boundary_decimal(financing_rate, field_name="financing_rate"))
if units <= 0:
raise ValueError("Underlying units must be positive")
if normalized_loan_amount < 0:
raise ValueError("Loan amount must be non-negative")
if not 0 < ltv < 1:
raise ValueError("Margin call LTV must be between 0 and 1")
return NormalizedHistoricalScenarioInputs(
underlying_units=units,
loan_amount=normalized_loan_amount,
margin_call_ltv=ltv,
currency=normalized_currency,
cash_balance=normalized_cash_balance,
financing_rate=normalized_financing_rate,
)
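The bounds enforced by `normalize_historical_scenario_inputs` can be sketched standalone; here `boundary_decimal` is replaced with plain floats purely for illustration:

```python
def check_inputs(units: float, loan: float, ltv: float) -> None:
    # Same bounds as normalize_historical_scenario_inputs, minus Decimal coercion.
    if units <= 0:
        raise ValueError("Underlying units must be positive")
    if loan < 0:
        raise ValueError("Loan amount must be non-negative")
    if not 0 < ltv < 1:  # margin-call LTV is a fraction, exclusive on both ends
        raise ValueError("Margin call LTV must be between 0 and 1")

check_inputs(10.0, 50_000.0, 0.85)  # valid: no exception
for bad in [(0.0, 0.0, 0.5), (1.0, -1.0, 0.5), (1.0, 0.0, 1.0)]:
    try:
        check_inputs(*bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```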


@@ -0,0 +1,359 @@
"""Async backtest job execution with progress tracking.
This module provides a non-blocking backtest execution system:
1. Jobs are submitted and run in background threads
2. Progress is tracked with stages (validating, fetching_prices, calculating, complete)
3. UI polls for status updates
4. Results are cached for retrieval
"""
from __future__ import annotations
import logging
import threading
import uuid
from dataclasses import dataclass, field
from datetime import date, datetime
from enum import Enum
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from app.services.backtesting.ui_service import BacktestPageService
logger = logging.getLogger(__name__)
class JobStatus(str, Enum):
"""Status of a backtest job."""
PENDING = "pending"
RUNNING = "running"
COMPLETE = "complete"
FAILED = "failed"
class JobStage(str, Enum):
"""Execution stages with user-friendly labels."""
VALIDATING = "validating"
FETCHING_PRICES = "fetching_prices"
CALCULATING = "calculating"
COMPLETE = "complete"
FAILED = "failed"
@property
def label(self) -> str:
"""Human-readable stage label."""
labels = {
JobStage.VALIDATING: "Validating inputs...",
JobStage.FETCHING_PRICES: "Fetching historical prices...",
JobStage.CALCULATING: "Running backtest calculations...",
JobStage.COMPLETE: "Complete",
JobStage.FAILED: "Failed",
}
return labels.get(self, self.value)
@dataclass
class BacktestJob:
"""Represents a backtest job with progress tracking."""
job_id: str
status: JobStatus = JobStatus.PENDING
stage: JobStage = JobStage.VALIDATING
progress: int = 0 # 0-100
message: str = ""
result: dict[str, Any] | None = None
error: str | None = None
created_at: datetime = field(default_factory=datetime.utcnow)
completed_at: datetime | None = None
def to_dict(self) -> dict[str, Any]:
"""Serialize job for JSON response."""
return {
"job_id": self.job_id,
"status": self.status.value,
"stage": self.stage.value,
"stage_label": self.stage.label,
"progress": self.progress,
"message": self.message,
"has_result": self.result is not None,
"error": self.error,
"created_at": self.created_at.isoformat(),
"completed_at": self.completed_at.isoformat() if self.completed_at else None,
}
class BacktestJobStore:
"""In-memory store for backtest jobs.
Jobs are stored in a dict keyed by workspace_id -> job_id.
Each workspace only has one active job at a time (latest replaces previous).
"""
def __init__(self) -> None:
self._jobs: dict[str, BacktestJob] = {} # workspace_id -> job
self._results: dict[str, dict[str, Any]] = {} # job_id -> result
self._lock = threading.Lock()
def create_job(self, workspace_id: str) -> BacktestJob:
"""Create a new job for a workspace, replacing any existing one."""
job = BacktestJob(job_id=str(uuid.uuid4()))
with self._lock:
self._jobs[workspace_id] = job
logger.info(f"Created job {job.job_id} for workspace {workspace_id}")
return job
def get_job(self, workspace_id: str) -> BacktestJob | None:
"""Get the current job for a workspace."""
with self._lock:
return self._jobs.get(workspace_id)
def update_job(
self,
workspace_id: str,
*,
status: JobStatus | None = None,
stage: JobStage | None = None,
progress: int | None = None,
message: str | None = None,
result: dict[str, Any] | None = None,
error: str | None = None,
) -> None:
"""Update job state."""
with self._lock:
job = self._jobs.get(workspace_id)
if not job:
return
if status:
job.status = status
if stage:
job.stage = stage
if progress is not None:
job.progress = progress
if message is not None:
job.message = message
if result is not None:
job.result = result
self._results[job.job_id] = result
if error:
job.error = error
if stage in (JobStage.COMPLETE, JobStage.FAILED):
job.completed_at = datetime.utcnow()
def get_result(self, job_id: str) -> dict[str, Any] | None:
"""Get cached result by job ID."""
with self._lock:
return self._results.get(job_id)
def clear_job(self, workspace_id: str) -> None:
"""Remove job from store."""
with self._lock:
if workspace_id in self._jobs:
del self._jobs[workspace_id]
# Global job store singleton
job_store = BacktestJobStore()
def run_backtest_job(
workspace_id: str,
job: BacktestJob,
service: "BacktestPageService",
symbol: str,
start_date: date,
end_date: date,
template_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
data_source: str,
) -> None:
"""Execute backtest in background thread with progress updates.
This function runs in a background thread and updates the job state
as it progresses through stages.
"""
try:
# Stage 1: Validating
job_store.update_job(
workspace_id,
status=JobStatus.RUNNING,
stage=JobStage.VALIDATING,
progress=10,
message="Validating inputs...",
)
# Stage 2: Fetching prices
job_store.update_job(
workspace_id,
stage=JobStage.FETCHING_PRICES,
progress=30,
message=f"Fetching prices for {symbol}...",
)
# Run the backtest (this includes price fetching)
result = service.run_read_only_scenario(
symbol=symbol,
start_date=start_date,
end_date=end_date,
template_slug=template_slug,
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
data_source=data_source,
)
# Stage 3: Calculating (already done by run_read_only_scenario)
job_store.update_job(
workspace_id,
stage=JobStage.CALCULATING,
progress=70,
message="Processing results...",
)
# Convert result to dict for serialization
# BacktestPageRunResult has: scenario, run_result, entry_spot, data_source, data_cost_usd
template_results = result.run_result.template_results
first_template = template_results[0] if template_results else None
summary = first_template.summary_metrics if first_template else None
result_dict = {
"scenario_id": result.scenario.scenario_id,
"scenario_name": result.scenario.display_name,
"symbol": result.scenario.symbol,
"start_date": result.scenario.start_date.isoformat(),
"end_date": result.scenario.end_date.isoformat(),
"entry_spot": result.entry_spot,
"underlying_units": result.scenario.initial_portfolio.underlying_units,
"loan_amount": result.scenario.initial_portfolio.loan_amount,
"margin_call_ltv": result.scenario.initial_portfolio.margin_call_ltv,
"data_source": result.data_source,
"data_cost_usd": result.data_cost_usd,
# Summary metrics from first template result
"start_value": summary.start_value if summary else 0.0,
"end_value_hedged_net": summary.end_value_hedged_net if summary else 0.0,
"total_hedge_cost": summary.total_hedge_cost if summary else 0.0,
"max_ltv_hedged": summary.max_ltv_hedged if summary else 0.0,
"max_ltv_unhedged": summary.max_ltv_unhedged if summary else 0.0,
"margin_call_days_hedged": summary.margin_call_days_hedged if summary else 0,
"margin_call_days_unhedged": summary.margin_call_days_unhedged if summary else 0,
"margin_threshold_breached_hedged": summary.margin_threshold_breached_hedged if summary else False,
"margin_threshold_breached_unhedged": summary.margin_threshold_breached_unhedged if summary else False,
# Template results with full daily path
"template_results": [
{
"template_slug": tr.template_slug,
"template_name": tr.template_name,
"summary_metrics": {
"start_value": tr.summary_metrics.start_value,
"end_value_hedged_net": tr.summary_metrics.end_value_hedged_net,
"total_hedge_cost": tr.summary_metrics.total_hedge_cost,
"max_ltv_hedged": tr.summary_metrics.max_ltv_hedged,
"max_ltv_unhedged": tr.summary_metrics.max_ltv_unhedged,
"margin_call_days_hedged": tr.summary_metrics.margin_call_days_hedged,
"margin_call_days_unhedged": tr.summary_metrics.margin_call_days_unhedged,
},
"daily_path": [
{
"date": dp.date.isoformat(),
"spot_close": dp.spot_close,
"spot_open": dp.spot_open if dp.spot_open is not None else dp.spot_close,
"spot_low": dp.spot_low if dp.spot_low is not None else dp.spot_close,
"spot_high": dp.spot_high if dp.spot_high is not None else dp.spot_close,
"underlying_value": dp.underlying_value,
"option_market_value": dp.option_market_value,
"net_portfolio_value": dp.net_portfolio_value,
"option_contracts": dp.option_contracts,
"ltv_hedged": dp.ltv_hedged,
"ltv_unhedged": dp.ltv_unhedged,
"margin_call_hedged": dp.margin_call_hedged,
"margin_call_unhedged": dp.margin_call_unhedged,
}
for dp in tr.daily_path
],
}
for tr in template_results
],
}
# Stage 4: Complete
job_store.update_job(
workspace_id,
status=JobStatus.COMPLETE,
stage=JobStage.COMPLETE,
progress=100,
message="Backtest complete!",
result=result_dict,
)
logger.info(f"Job {job.job_id} completed successfully")
except Exception as e:
logger.exception(f"Job {job.job_id} failed: {e}")
error_msg = str(e)
# Check for Databento API errors
if "data_start_before_available_start" in error_msg:
import re
match = re.search(r"available start of dataset [^(]+\('([^']+)'", error_msg)
if match:
available_start = match.group(1).split()[0]
error_msg = (
f"Data not available before {available_start}. Please set start date to {available_start} or later."
)
else:
error_msg = "Selected start date is before data is available for this dataset."
elif "BentoClientError" in error_msg or "422" in error_msg:
error_msg = f"Data source error: {error_msg}"
job_store.update_job(
workspace_id,
status=JobStatus.FAILED,
stage=JobStage.FAILED,
progress=0,
error=error_msg,
)
def start_backtest_job(
workspace_id: str,
service: "BacktestPageService",
symbol: str,
start_date: date,
end_date: date,
template_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
data_source: str,
) -> BacktestJob:
"""Start a backtest job in a background thread.
Returns immediately with the job ID. The job runs in the background
and can be polled for status.
"""
job = job_store.create_job(workspace_id)
thread = threading.Thread(
target=run_backtest_job,
kwargs={
"workspace_id": workspace_id,
"job": job,
"service": service,
"symbol": symbol,
"start_date": start_date,
"end_date": end_date,
"template_slug": template_slug,
"underlying_units": underlying_units,
"loan_amount": loan_amount,
"margin_call_ltv": margin_call_ltv,
"data_source": data_source,
},
daemon=True,
)
thread.start()
return job
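The submit/poll lifecycle can be exercised end to end with a stripped-down, in-memory stand-in for the store (the worker body here is hypothetical; it stands in for `run_backtest_job`):

```python
import threading
import time
import uuid

class MiniJobStore:
    """Stripped-down mirror of BacktestJobStore: one job per workspace, lock-guarded."""

    def __init__(self) -> None:
        self._jobs: dict[str, dict] = {}
        self._lock = threading.Lock()

    def create(self, workspace_id: str) -> str:
        job_id = str(uuid.uuid4())
        with self._lock:
            self._jobs[workspace_id] = {"job_id": job_id, "status": "pending", "progress": 0}
        return job_id

    def update(self, workspace_id: str, **fields) -> None:
        with self._lock:
            self._jobs[workspace_id].update(fields)

    def get(self, workspace_id: str) -> dict:
        with self._lock:
            return dict(self._jobs[workspace_id])

store = MiniJobStore()
job_id = store.create("ws-1")

def worker() -> None:  # stands in for run_backtest_job
    store.update("ws-1", status="running", progress=30)
    store.update("ws-1", status="complete", progress=100)

threading.Thread(target=worker, daemon=True).start()
while store.get("ws-1")["status"] != "complete":  # what the UI poll loop does
    time.sleep(0.01)
```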


@@ -0,0 +1,73 @@
from __future__ import annotations
from math import isclose
from app.backtesting.engine import SyntheticBacktestEngine
from app.models.backtest import BacktestRunResult, BacktestScenario
from app.models.strategy_template import StrategyTemplate
from app.services.backtesting.historical_provider import BacktestHistoricalProvider, SyntheticHistoricalProvider
from app.services.strategy_templates import StrategyTemplateService
class BacktestService:
ENTRY_SPOT_ABS_TOLERANCE = 0.01
ENTRY_SPOT_REL_TOLERANCE = 1e-6
def __init__(
self,
provider: BacktestHistoricalProvider | None = None,
template_service: StrategyTemplateService | None = None,
) -> None:
self.provider = provider or SyntheticHistoricalProvider()
self.template_service = template_service or StrategyTemplateService()
self.engine = SyntheticBacktestEngine(self.provider)
def run_scenario(self, scenario: BacktestScenario) -> BacktestRunResult:
self.provider.validate_provider_ref(scenario.provider_ref)
scenario_symbol = scenario.symbol.strip().upper()
history = self.provider.load_history(scenario_symbol, scenario.start_date, scenario.end_date)
if not history:
raise ValueError("No historical prices found for scenario window")
if history[0].date != scenario.start_date:
raise ValueError(
"Scenario start_date must match the first available historical price point for "
"entry_timing='scenario_start_close'"
)
if not isclose(
scenario.initial_portfolio.entry_spot,
history[0].close,
rel_tol=self.ENTRY_SPOT_REL_TOLERANCE,
abs_tol=self.ENTRY_SPOT_ABS_TOLERANCE,
):
raise ValueError(
"initial_portfolio.entry_spot must match the first historical close used for entry "
"when entry_timing='scenario_start_close'"
)
template_results = []
for template_ref in scenario.template_refs:
template = self.template_service.get_template(template_ref.slug)
if template.version != template_ref.version:
raise ValueError(
f"Template version mismatch for {template_ref.slug}: expected {template_ref.version}, got {template.version}"
)
template_symbol = template.underlying_symbol.strip().upper()
if template_symbol not in {scenario_symbol, "*"}:
raise ValueError(f"Template {template.slug} does not support symbol {scenario_symbol}")
self._validate_template_for_mvp(template)
template_results.append(self.engine.run_template(scenario, template, history))
return BacktestRunResult(scenario_id=scenario.scenario_id, template_results=tuple(template_results))
def _validate_template_for_mvp(self, template: StrategyTemplate) -> None:
provider_label = (
"historical snapshot engine" if self.provider.pricing_mode == "snapshot_mid" else "synthetic MVP engine"
)
if template.contract_mode != "continuous_units":
raise ValueError(f"Unsupported contract_mode for {provider_label}: {template.contract_mode}")
if template.roll_policy.policy_type != "hold_to_expiry":
raise ValueError(f"Unsupported roll_policy for {provider_label}: {template.roll_policy.policy_type}")
if template.entry_policy.entry_timing != "scenario_start_close":
raise ValueError(f"Unsupported entry_timing for {provider_label}: {template.entry_policy.entry_timing}")
if template.entry_policy.stagger_days is not None:
raise ValueError(f"Unsupported entry_policy configuration for {provider_label}")
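The `ENTRY_SPOT_*` tolerances above translate to `math.isclose` as follows; a one-cent absolute band dominates for typical spot levels (values chosen for illustration):

```python
from math import isclose

ENTRY_SPOT_ABS_TOLERANCE = 0.01  # one cent
ENTRY_SPOT_REL_TOLERANCE = 1e-6

def entry_spot_matches(supplied: float, derived: float) -> bool:
    # Mirrors the isclose call in BacktestService.run_scenario.
    return isclose(supplied, derived,
                   rel_tol=ENTRY_SPOT_REL_TOLERANCE,
                   abs_tol=ENTRY_SPOT_ABS_TOLERANCE)

assert entry_spot_matches(100.005, 100.0)      # within one cent: accepted
assert not entry_spot_matches(100.02, 100.0)   # two cents off: rejected
```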


@@ -0,0 +1,388 @@
from __future__ import annotations
from copy import copy
from dataclasses import dataclass
from datetime import date
from math import isclose
from typing import Any
from app.backtesting.engine import SyntheticBacktestEngine
from app.domain.backtesting_math import materialize_backtest_portfolio_state
from app.models.backtest import (
BacktestRunResult,
BacktestScenario,
ProviderRef,
TemplateRef,
)
from app.services.backtesting.databento_source import DatabentoHistoricalPriceSource, DatabentoSourceConfig
from app.services.backtesting.fixture_source import (
FixtureBoundSyntheticHistoricalProvider,
build_backtest_ui_fixture_source,
)
from app.services.backtesting.historical_provider import (
DailyClosePoint,
SyntheticHistoricalProvider,
YFinanceHistoricalPriceSource,
)
from app.services.backtesting.input_normalization import normalize_historical_scenario_inputs
from app.services.backtesting.service import BacktestService
from app.services.strategy_templates import StrategyTemplateService
SUPPORTED_BACKTEST_PAGE_SYMBOLS = ("GLD", "GC", "XAU")
SUPPORTED_DATABENTO_BACKTEST_PAGE_SYMBOLS = ("GLD", "XAU")
def _validate_initial_collateral(underlying_units: float, entry_spot: float, loan_amount: float) -> None:
initial_collateral_value = underlying_units * entry_spot
if loan_amount >= initial_collateral_value:
raise ValueError(
"Historical scenario starts undercollateralized: "
f"loan ${loan_amount:,.0f} meets or exceeds initial collateral ${initial_collateral_value:,.0f} "
f"at entry spot ${entry_spot:,.2f}. Reduce loan amount or increase underlying units."
)
@dataclass(frozen=True)
class BacktestPageRunResult:
scenario: BacktestScenario
run_result: BacktestRunResult
entry_spot: float
data_source: str = "synthetic"
data_cost_usd: float = 0.0
cache_status: str = ""
@dataclass(frozen=True)
class DataSourceInfo:
"""Information about a data source."""
provider_id: str
pricing_mode: str
display_name: str
supports_cost_estimate: bool
supports_cache: bool
class BacktestPageService:
"""Service for the backtest page UI.
This service manages historical data providers and supports multiple
data sources including Databento, Yahoo Finance, and synthetic data.
"""
DATA_SOURCE_INFO: dict[str, DataSourceInfo] = {
"databento": DataSourceInfo(
provider_id="databento",
pricing_mode="historical",
display_name="Databento",
supports_cost_estimate=True,
supports_cache=True,
),
"yfinance": DataSourceInfo(
provider_id="yfinance",
pricing_mode="free",
display_name="Yahoo Finance",
supports_cost_estimate=False,
supports_cache=False,
),
"synthetic": DataSourceInfo(
provider_id="synthetic_v1",
pricing_mode="synthetic_bs_mid",
display_name="Synthetic",
supports_cost_estimate=False,
supports_cache=False,
),
}
def __init__(
self,
backtest_service: BacktestService | None = None,
template_service: StrategyTemplateService | None = None,
databento_config: DatabentoSourceConfig | None = None,
) -> None:
base_service = backtest_service or BacktestService(
template_service=template_service,
provider=None,
)
self.template_service = template_service or base_service.template_service
self.databento_config = databento_config
# Use the injected provider if available, otherwise create a new one
base_provider = base_service.provider
if base_provider is None:
base_provider = SyntheticHistoricalProvider()
fixture_provider = FixtureBoundSyntheticHistoricalProvider(
base_provider=base_provider, # type: ignore[arg-type]
source=build_backtest_ui_fixture_source(),
)
self.backtest_service = copy(base_service)
self.backtest_service.provider = fixture_provider
self.backtest_service.template_service = self.template_service
self.backtest_service.engine = SyntheticBacktestEngine(fixture_provider)
# Cache for Databento provider instances
self._databento_provider: DatabentoHistoricalPriceSource | None = None
self._yfinance_provider: YFinanceHistoricalPriceSource | None = None
def _get_databento_provider(self) -> DatabentoHistoricalPriceSource:
"""Get or create the Databento provider instance."""
if self._databento_provider is None:
self._databento_provider = DatabentoHistoricalPriceSource(config=self.databento_config)
return self._databento_provider
def _get_yfinance_provider(self) -> YFinanceHistoricalPriceSource:
"""Get or create the YFinance provider instance."""
if self._yfinance_provider is None:
self._yfinance_provider = YFinanceHistoricalPriceSource()
return self._yfinance_provider
@staticmethod
def validate_data_source_support(symbol: str, data_source: str) -> None:
normalized_symbol = symbol.strip().upper()
if data_source == "databento" and normalized_symbol not in SUPPORTED_DATABENTO_BACKTEST_PAGE_SYMBOLS:
raise ValueError(
"Databento backtests currently support GLD and XAU only. "
"GC futures remain unavailable on the backtest page until contract mapping is wired."
)
def get_historical_prices(
self, symbol: str, start_date: date, end_date: date, data_source: str
) -> list[DailyClosePoint]:
"""Load historical prices from the specified data source.
Args:
symbol: Trading symbol (GLD, GC, XAU)
start_date: Start date
end_date: End date
data_source: One of "databento", "yfinance", "synthetic"
Returns:
List of daily close points sorted by date
"""
self.validate_data_source_support(symbol, data_source)
if data_source == "databento":
return self._get_databento_provider().load_daily_closes(symbol, start_date, end_date)
elif data_source == "yfinance":
return self._get_yfinance_provider().load_daily_closes(symbol, start_date, end_date)
else:
# Use synthetic fixture data
return self.backtest_service.provider.load_history(symbol, start_date, end_date)
def get_cost_estimate(self, symbol: str, start_date: date, end_date: date, data_source: str = "databento") -> float:
"""Get estimated cost for the data request.
Args:
symbol: Trading symbol
start_date: Start date
end_date: End date
data_source: Data source (only "databento" supports this)
Returns:
Estimated cost in USD
"""
if data_source != "databento":
return 0.0
self.validate_data_source_support(symbol, data_source)
try:
provider = self._get_databento_provider()
return provider.get_cost_estimate(symbol, start_date, end_date)
except Exception:
return 0.0
def get_cache_stats(
self, symbol: str, start_date: date, end_date: date, data_source: str = "databento"
) -> dict[str, Any]:
"""Get cache statistics for the data request.
Args:
symbol: Trading symbol
start_date: Start date
end_date: End date
data_source: Data source (only "databento" supports this)
Returns:
Dict with cache statistics
"""
if data_source != "databento":
return {"status": "not_applicable", "entries": []}
try:
provider = self._get_databento_provider()
return provider.get_cache_stats()
except Exception:
return {"status": "error", "entries": []}
def get_available_date_range(self, symbol: str, data_source: str = "databento") -> tuple[date | None, date | None]:
"""Get the available date range for a symbol from the data source.
Args:
symbol: Trading symbol
data_source: Data source (only "databento" supports this)
Returns:
Tuple of (start_date, end_date) or (None, None) if unavailable
"""
if data_source != "databento":
return None, None
self.validate_data_source_support(symbol, data_source)
try:
provider = self._get_databento_provider()
return provider.get_available_range(symbol)
except Exception:
return None, None
def template_options(self, symbol: str = "GLD") -> list[dict[str, str | int]]:
return [
{
"label": template.display_name,
"slug": template.slug,
"version": template.version,
"description": template.description,
}
for template in self.template_service.list_active_templates(symbol)
]
def derive_entry_spot(self, symbol: str, start_date: date, end_date: date, data_source: str) -> float:
history = self.get_historical_prices(symbol, start_date, end_date, data_source)
if not history:
raise ValueError("No historical prices found for scenario window")
if history[0].date != start_date:
raise ValueError(
"Scenario start date must match the first available historical close for entry-at-start backtests"
)
return history[0].close
def validate_preview_inputs(
self,
*,
symbol: str,
start_date: date,
end_date: date,
template_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
entry_spot: float | None = None,
data_source: str,
) -> float:
normalized_symbol = symbol.strip().upper()
if not normalized_symbol:
raise ValueError("Symbol is required")
if normalized_symbol not in SUPPORTED_BACKTEST_PAGE_SYMBOLS:
raise ValueError(f"Backtests support symbols: {', '.join(SUPPORTED_BACKTEST_PAGE_SYMBOLS)}")
self.validate_data_source_support(normalized_symbol, data_source)
if start_date > end_date:
raise ValueError("Start date must be on or before end date")
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
)
if not template_slug:
raise ValueError("Template selection is required")
self.template_service.get_template(template_slug)
derived_entry_spot = self.derive_entry_spot(normalized_symbol, start_date, end_date, data_source)
if entry_spot is not None and not isclose(
entry_spot,
derived_entry_spot,
rel_tol=BacktestService.ENTRY_SPOT_REL_TOLERANCE,
abs_tol=BacktestService.ENTRY_SPOT_ABS_TOLERANCE,
):
raise ValueError(
f"Supplied entry spot ${entry_spot:,.2f} does not match derived historical entry spot ${derived_entry_spot:,.2f}"
)
_validate_initial_collateral(
normalized_inputs.underlying_units,
derived_entry_spot,
normalized_inputs.loan_amount,
)
return derived_entry_spot
def run_read_only_scenario(
self,
*,
symbol: str,
start_date: date,
end_date: date,
template_slug: str,
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
data_source: str = "synthetic",
) -> BacktestPageRunResult:
normalized_symbol = symbol.strip().upper()
entry_spot = self.validate_preview_inputs(
symbol=normalized_symbol,
start_date=start_date,
end_date=end_date,
template_slug=template_slug,
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
data_source=data_source,
)
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
)
template = self.template_service.get_template(template_slug)
initial_portfolio = materialize_backtest_portfolio_state(
symbol=normalized_symbol,
underlying_units=normalized_inputs.underlying_units,
entry_spot=entry_spot,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
)
# Fetch historical prices using the specified data source
history = self.get_historical_prices(normalized_symbol, start_date, end_date, data_source)
if not history:
raise ValueError("No historical prices found for scenario window")
if history[0].date != start_date:
raise ValueError(
"Scenario start date must match the first available historical close for entry-at-start backtests"
)
# Use the fixture provider's ID for the scenario (for pricing mode)
# The actual price data comes from the specified data_source
provider_id = self.backtest_service.provider.provider_id
pricing_mode = self.backtest_service.provider.pricing_mode
scenario = BacktestScenario(
scenario_id=(
f"{normalized_symbol.lower()}-{start_date.isoformat()}-{end_date.isoformat()}-{template.slug}"
),
display_name=f"{normalized_symbol} backtest {start_date.isoformat()}{end_date.isoformat()}",
symbol=normalized_symbol,
start_date=start_date,
end_date=end_date,
initial_portfolio=initial_portfolio,
template_refs=(TemplateRef(slug=template.slug, version=template.version),),
provider_ref=ProviderRef(
provider_id=provider_id,
pricing_mode=pricing_mode,
),
)
# Get cost estimate for Databento
data_cost_usd = 0.0
if data_source == "databento":
data_cost_usd = self.get_cost_estimate(normalized_symbol, start_date, end_date, data_source)
# Run the backtest engine directly with pre-fetched history
# This bypasses the fixture provider in BacktestService.run_scenario
template_result = self.backtest_service.engine.run_template(scenario, template, history)
run_result = BacktestRunResult(scenario_id=scenario.scenario_id, template_results=(template_result,))
return BacktestPageRunResult(
scenario=scenario,
run_result=run_result,
entry_spot=entry_spot,
data_source=data_source,
data_cost_usd=data_cost_usd,
)


@@ -0,0 +1,25 @@
from __future__ import annotations
from decimal import Decimal, InvalidOperation
from app.domain.units import decimal_from_float, to_decimal
def boundary_decimal(value: object, *, field_name: str) -> Decimal:
if value is None:
raise ValueError(f"{field_name} must be present")
if isinstance(value, bool):
raise TypeError(f"{field_name} must be numeric, got bool")
if isinstance(value, float):
return decimal_from_float(value)
if isinstance(value, (Decimal, int)):
return to_decimal(value)
if isinstance(value, str):
stripped = value.strip()
if not stripped:
raise ValueError(f"{field_name} must be present")
try:
return to_decimal(stripped)
except InvalidOperation as exc:
raise ValueError(f"{field_name} must be numeric, got {value!r}") from exc
raise TypeError(f"{field_name} must be numeric, got {type(value)!r}")
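The adapter above must reject `bool` before the numeric branches because `bool` is a subclass of `int`. A self-contained sketch of the same coercion rules, using stand-ins for the project's `decimal_from_float`/`to_decimal` helpers (assumed here to behave like `Decimal(str(x))` and `Decimal(x)`):

```python
from decimal import Decimal, InvalidOperation

def boundary_decimal_sketch(value: object, *, field_name: str) -> Decimal:
    if value is None:
        raise ValueError(f"{field_name} must be present")
    # bool is a subclass of int, so it must be rejected before the int branch.
    if isinstance(value, bool):
        raise TypeError(f"{field_name} must be numeric, got bool")
    if isinstance(value, float):
        return Decimal(str(value))  # stand-in for decimal_from_float
    if isinstance(value, (Decimal, int)):
        return Decimal(value)       # stand-in for to_decimal
    if isinstance(value, str):
        stripped = value.strip()
        if not stripped:
            raise ValueError(f"{field_name} must be present")
        try:
            return Decimal(stripped)
        except InvalidOperation as exc:
            raise ValueError(f"{field_name} must be numeric, got {value!r}") from exc
    raise TypeError(f"{field_name} must be numeric, got {type(value)!r}")

print(boundary_decimal_sketch(" 2.5 ", field_name="ltv"))  # 2.5
print(boundary_decimal_sketch(3, field_name="units"))      # 3
```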


@@ -35,8 +35,9 @@ class CacheService:
             return
         try:
-            self._client = RedisClient.from_url(self.url, decode_responses=True)  # type: ignore[misc]
-            await self._client.ping()
+            if self.url:
+                self._client = RedisClient.from_url(self.url, decode_responses=True)  # type: ignore[misc]
+                await self._client.ping()  # type: ignore[union-attr]
             logger.info("Connected to Redis cache")
         except Exception as exc:  # pragma: no cover - network dependent
             logger.warning("Redis unavailable, cache disabled: %s", exc)
@@ -81,6 +82,7 @@ def get_cache() -> CacheService:
     global _cache_instance
     if _cache_instance is None:
         import os
+
         redis_url = os.environ.get("REDIS_URL")
         _cache_instance = CacheService(redis_url)
     return _cache_instance


@@ -5,9 +5,11 @@ from __future__ import annotations
 import asyncio
 import logging
 import math
-from datetime import UTC, datetime
+from datetime import date, datetime, timezone
 from typing import Any

+from app.core.calculations import option_row_greeks
+from app.domain.instruments import gld_ounces_per_share
 from app.services.cache import CacheService
 from app.strategies.engine import StrategySelectionEngine
@@ -22,9 +24,15 @@ except ImportError: # pragma: no cover - optional dependency
 class DataService:
     """Fetches portfolio and market data, using Redis when available."""

-    def __init__(self, cache: CacheService, default_symbol: str = "GLD") -> None:
+    def __init__(self, cache: CacheService, default_underlying: str = "GLD") -> None:
         self.cache = cache
-        self.default_symbol = default_symbol
+        self.default_underlying = default_underlying
+        self.gc_f_symbol = "GC=F"  # COMEX Gold Futures
+
+    @property
+    def default_symbol(self) -> str:
+        """Backward compatibility alias for default_underlying."""
+        return self.default_underlying

     async def get_portfolio(self, symbol: str | None = None) -> dict[str, Any]:
         ticker = (symbol or self.default_symbol).upper()
@@ -41,94 +49,403 @@ class DataService:
             "portfolio_value": round(quote["price"] * 1000, 2),
             "loan_amount": 600_000.0,
             "ltv_ratio": round(600_000.0 / max(quote["price"] * 1000, 1), 4),
-            "updated_at": datetime.now(UTC).isoformat(),
+            "updated_at": datetime.now(timezone.utc).isoformat(),
             "source": quote["source"],
         }
         await self.cache.set_json(cache_key, portfolio)
         return portfolio

     async def get_quote(self, symbol: str) -> dict[str, Any]:
-        cache_key = f"quote:{symbol}"
+        """Fetch quote for the given symbol, routing to appropriate data source.
+
+        For GLD: fetches from yfinance (ETF share price)
+        For GC=F: fetches from yfinance (futures price) or returns placeholder
+        """
+        normalized_symbol = symbol.upper()
+        cache_key = f"quote:{normalized_symbol}"
         cached = await self.cache.get_json(cache_key)
         if cached and isinstance(cached, dict):
-            return cached
+            try:
+                normalized_cached = self._normalize_quote_payload(cached, normalized_symbol)
+            except ValueError:
+                normalized_cached = None
+            if normalized_cached is not None:
+                if normalized_cached != cached:
+                    await self.cache.set_json(cache_key, normalized_cached)
+                return normalized_cached

-        quote = await self._fetch_quote(symbol)
+        # Route based on underlying
+        if normalized_symbol == "GC=F":
+            quote = self._normalize_quote_payload(await self._fetch_gc_futures(), normalized_symbol)
+        else:
+            quote = self._normalize_quote_payload(await self._fetch_quote(normalized_symbol), normalized_symbol)
         await self.cache.set_json(cache_key, quote)
         return quote

-    async def get_options_chain(self, symbol: str | None = None) -> dict[str, Any]:
-        ticker_symbol = (symbol or self.default_symbol).upper()
-        cache_key = f"options:{ticker_symbol}"
+    async def get_option_expirations(self, symbol: str | None = None) -> dict[str, Any]:
+        ticker_symbol = (symbol or self.default_underlying).upper()
+        cache_key = f"options:{ticker_symbol}:expirations"
         cached = await self.cache.get_json(cache_key)
         if cached and isinstance(cached, dict):
-            return cached
+            malformed_list_shape = (
+                not isinstance(cached.get("expirations"), list) and cached.get("expirations") is not None
+            )
+            try:
+                normalized_cached = self._normalize_option_expirations_payload(cached, ticker_symbol)
+            except ValueError as exc:
+                logger.warning("Discarding cached option expirations payload for %s: %s", ticker_symbol, exc)
+                normalized_cached = None
+            if malformed_list_shape:
+                logger.warning("Discarding malformed cached option expirations payload for %s", ticker_symbol)
+                normalized_cached = None
+            if normalized_cached is not None:
+                if normalized_cached != cached:
+                    await self.cache.set_json(cache_key, normalized_cached)
+                return normalized_cached
+
+        # GC=F options not yet implemented - return placeholder
+        if ticker_symbol == "GC=F":
+            quote = await self.get_quote(ticker_symbol)
+            payload = self._fallback_option_expirations(
+                ticker_symbol,
+                quote,
+                source="placeholder",
+                error="Options data for GC=F coming soon",
+            )
+            await self.cache.set_json(cache_key, payload)
+            return payload

         quote = await self.get_quote(ticker_symbol)
         if yf is None:
-            options_chain = self._fallback_options_chain(ticker_symbol, quote, source="fallback")
-            await self.cache.set_json(cache_key, options_chain)
-            return options_chain
+            payload = self._fallback_option_expirations(
+                ticker_symbol,
+                quote,
+                source="fallback",
+                error="yfinance is not installed",
+            )
+            await self.cache.set_json(cache_key, payload)
+            return payload

         try:
             ticker = yf.Ticker(ticker_symbol)
             expirations = await asyncio.to_thread(lambda: list(ticker.options or []))
             if not expirations:
-                options_chain = self._fallback_options_chain(
+                payload = self._fallback_option_expirations(
                     ticker_symbol,
                     quote,
                     source="fallback",
                     error="No option expirations returned by yfinance",
                 )
-                await self.cache.set_json(cache_key, options_chain)
-                return options_chain
+                await self.cache.set_json(cache_key, payload)
+                return payload

-            calls: list[dict[str, Any]] = []
-            puts: list[dict[str, Any]] = []
-
-            for expiry in expirations:
-                try:
-                    chain = await asyncio.to_thread(ticker.option_chain, expiry)
-                except Exception as exc:  # pragma: no cover - network dependent
-                    logger.warning("Failed to fetch option chain for %s %s: %s", ticker_symbol, expiry, exc)
-                    continue
-
-                calls.extend(self._normalize_option_rows(chain.calls, ticker_symbol, expiry, "call"))
-                puts.extend(self._normalize_option_rows(chain.puts, ticker_symbol, expiry, "put"))
-
-            if not calls and not puts:
-                options_chain = self._fallback_options_chain(
-                    ticker_symbol,
-                    quote,
-                    source="fallback",
-                    error="No option contracts returned by yfinance",
-                )
-                await self.cache.set_json(cache_key, options_chain)
-                return options_chain
-
-            options_chain = {
-                "symbol": ticker_symbol,
-                "updated_at": datetime.now(UTC).isoformat(),
-                "expirations": expirations,
-                "calls": calls,
-                "puts": puts,
-                "rows": sorted(calls + puts, key=lambda row: (row["expiry"], row["strike"], row["type"])),
-                "underlying_price": quote["price"],
-                "source": "yfinance",
-            }
-            await self.cache.set_json(cache_key, options_chain)
-            return options_chain
+            payload = self._normalize_option_expirations_payload(
+                {
+                    "symbol": ticker_symbol,
+                    "updated_at": datetime.now(timezone.utc).isoformat(),
+                    "expirations": expirations,
+                    "underlying_price": quote["price"],
+                    "source": "yfinance",
+                },
+                ticker_symbol,
+            )
+            await self.cache.set_json(cache_key, payload)
+            return payload
         except Exception as exc:  # pragma: no cover - network dependent
-            logger.warning("Failed to fetch options chain for %s from yfinance: %s", ticker_symbol, exc)
-            options_chain = self._fallback_options_chain(
+            logger.warning("Failed to fetch option expirations for %s from yfinance: %s", ticker_symbol, exc)
+            payload = self._fallback_option_expirations(
                 ticker_symbol,
                 quote,
                 source="fallback",
                 error=str(exc),
             )
-            await self.cache.set_json(cache_key, options_chain)
-            return options_chain
+            await self.cache.set_json(cache_key, payload)
+            return payload
async def get_options_chain_for_expiry(
self, symbol: str | None = None, expiry: str | None = None
) -> dict[str, Any]:
ticker_symbol = (symbol or self.default_underlying).upper()
expirations_data = await self.get_option_expirations(ticker_symbol)
expirations = list(expirations_data.get("expirations") or [])
target_expiry = expiry or (expirations[0] if expirations else None)
quote = await self.get_quote(ticker_symbol)
if not target_expiry:
return self._fallback_options_chain(
ticker_symbol,
quote,
expirations=expirations,
selected_expiry=None,
source=expirations_data.get("source", quote.get("source", "fallback")),
error=expirations_data.get("error"),
)
cache_key = f"options:{ticker_symbol}:{target_expiry}"
cached = await self.cache.get_json(cache_key)
if cached and isinstance(cached, dict):
malformed_list_shape = any(
not isinstance(cached.get(field), list) and cached.get(field) is not None
for field in ("expirations", "calls", "puts", "rows")
)
try:
normalized_cached = self._normalize_options_chain_payload(cached, ticker_symbol)
except ValueError as exc:
logger.warning(
"Discarding cached options chain payload for %s %s: %s", ticker_symbol, target_expiry, exc
)
normalized_cached = None
if malformed_list_shape:
logger.warning(
"Discarding malformed cached options chain payload for %s %s", ticker_symbol, target_expiry
)
normalized_cached = None
if normalized_cached is not None:
if normalized_cached != cached:
await self.cache.set_json(cache_key, normalized_cached)
return normalized_cached
# GC=F options not yet implemented - return placeholder
if ticker_symbol == "GC=F":
payload = self._fallback_options_chain(
ticker_symbol,
quote,
expirations=expirations,
selected_expiry=target_expiry,
source="placeholder",
error="Options data for GC=F coming soon",
)
await self.cache.set_json(cache_key, payload)
return payload
if yf is None:
payload = self._fallback_options_chain(
ticker_symbol,
quote,
expirations=expirations,
selected_expiry=target_expiry,
source="fallback",
error="yfinance is not installed",
)
await self.cache.set_json(cache_key, payload)
return payload
try:
ticker = yf.Ticker(ticker_symbol)
chain = await asyncio.to_thread(ticker.option_chain, target_expiry)
calls = self._normalize_option_rows(chain.calls, ticker_symbol, target_expiry, "call", quote["price"])
puts = self._normalize_option_rows(chain.puts, ticker_symbol, target_expiry, "put", quote["price"])
if not calls and not puts:
payload = self._fallback_options_chain(
ticker_symbol,
quote,
expirations=expirations,
selected_expiry=target_expiry,
source="fallback",
error="No option contracts returned by yfinance",
)
await self.cache.set_json(cache_key, payload)
return payload
payload = self._normalize_options_chain_payload(
{
"symbol": ticker_symbol,
"selected_expiry": target_expiry,
"updated_at": datetime.now(timezone.utc).isoformat(),
"expirations": expirations,
"calls": calls,
"puts": puts,
"rows": sorted(calls + puts, key=lambda row: (row["strike"], row["type"])),
"underlying_price": quote["price"],
"source": "yfinance",
},
ticker_symbol,
)
await self.cache.set_json(cache_key, payload)
return payload
except Exception as exc: # pragma: no cover - network dependent
logger.warning(
"Failed to fetch options chain for %s %s from yfinance: %s", ticker_symbol, target_expiry, exc
)
payload = self._fallback_options_chain(
ticker_symbol,
quote,
expirations=expirations,
selected_expiry=target_expiry,
source="fallback",
error=str(exc),
)
await self.cache.set_json(cache_key, payload)
return payload
async def get_options_chain(self, symbol: str | None = None) -> dict[str, Any]:
ticker_symbol = (symbol or self.default_symbol).upper()
expirations_data = await self.get_option_expirations(ticker_symbol)
expirations = list(expirations_data.get("expirations") or [])
if not expirations:
quote = await self.get_quote(ticker_symbol)
return self._fallback_options_chain(
ticker_symbol,
quote,
expirations=[],
selected_expiry=None,
source=expirations_data.get("source", quote.get("source", "fallback")),
error=expirations_data.get("error"),
)
return await self.get_options_chain_for_expiry(ticker_symbol, expirations[0])
async def get_gc_futures(self) -> dict[str, Any]:
"""Fetch GC=F (COMEX Gold Futures) quote.
Returns a quote dict similar to get_quote but for gold futures.
Falls back gracefully if GC=F is unavailable.
"""
cache_key = f"quote:{self.gc_f_symbol}"
cached = await self.cache.get_json(cache_key)
if cached and isinstance(cached, dict):
try:
normalized_cached = self._normalize_quote_payload(cached, self.gc_f_symbol)
except ValueError:
normalized_cached = None
if normalized_cached is not None:
if normalized_cached != cached:
await self.cache.set_json(cache_key, normalized_cached)
return normalized_cached
quote = self._normalize_quote_payload(await self._fetch_gc_futures(), self.gc_f_symbol)
await self.cache.set_json(cache_key, quote)
return quote
async def _fetch_gc_futures(self) -> dict[str, Any]:
"""Fetch GC=F from yfinance with graceful fallback."""
if yf is None:
return self._fallback_gc_futures(source="fallback", error="yfinance is not installed")
try:
ticker = yf.Ticker(self.gc_f_symbol)
history = await asyncio.to_thread(ticker.history, period="5d", interval="1d")
if history.empty:
return self._fallback_gc_futures(source="fallback", error="No history returned for GC=F")
closes = history["Close"]
last = float(closes.iloc[-1])
previous = float(closes.iloc[-2]) if len(closes) > 1 else last
change = round(last - previous, 4)
change_percent = round((change / previous) * 100, 4) if previous else 0.0
# Try to get more recent price from fast_info if available
try:
fast_price = ticker.fast_info.get("lastPrice", last)
if fast_price and fast_price > 0:
last = float(fast_price)
except Exception:
pass # Keep history close if fast_info unavailable
return {
"symbol": self.gc_f_symbol,
"price": round(last, 4),
"quote_unit": "ozt", # Gold futures are per troy ounce
"change": change,
"change_percent": change_percent,
"updated_at": datetime.now(timezone.utc).isoformat(),
"source": "yfinance",
}
except Exception as exc: # pragma: no cover - network dependent
logger.warning("Failed to fetch %s from yfinance: %s", self.gc_f_symbol, exc)
return self._fallback_gc_futures(source="fallback", error=str(exc))
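The change/change_percent arithmetic in `_fetch_gc_futures` only needs the last two daily closes; isolated, it can be sketched as:

```python
def close_to_close_change(closes: list[float]) -> tuple[float, float]:
    """Return (change, change_percent) from the last two closes.

    With a single close there is no prior reference, so change is zero.
    """
    last = closes[-1]
    previous = closes[-2] if len(closes) > 1 else last
    change = round(last - previous, 4)
    change_percent = round((change / previous) * 100, 4) if previous else 0.0
    return change, change_percent

print(close_to_close_change([2690.0, 2700.0]))  # (10.0, 0.3717)
```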
@staticmethod
def _fallback_gc_futures(source: str, error: str | None = None) -> dict[str, Any]:
"""Fallback GC=F quote when live data unavailable."""
payload = {
"symbol": "GC=F",
"price": 2700.0, # Fallback estimate
"quote_unit": "ozt",
"change": 0.0,
"change_percent": 0.0,
"updated_at": datetime.now(timezone.utc).isoformat(),
"source": source,
}
if error:
payload["error"] = error
return payload
async def get_basis_data(self) -> dict[str, Any]:
"""Get GLD/GC=F basis data for comparison.
Returns:
Dict with GLD implied spot, GC=F adjusted price, basis in bps, and status info.
"""
gld_quote = await self.get_quote("GLD")
gc_f_quote = await self.get_gc_futures()
# Use current date for GLD ounces calculation
ounces_per_share = float(gld_ounces_per_share(date.today()))
# GLD implied spot = GLD_price / ounces_per_share
gld_price = gld_quote.get("price", 0.0)
gld_implied_spot = gld_price / ounces_per_share if ounces_per_share > 0 and gld_price > 0 else 0.0
# GC=F adjusted = (GC=F - contango_estimate) / 10 for naive comparison
# But actually GC=F is already per oz, so we just adjust for contango
gc_f_price = gc_f_quote.get("price", 0.0)
contango_estimate = 10.0 # Typical contango ~$10/oz
gc_f_adjusted = gc_f_price - contango_estimate if gc_f_price > 0 else 0.0
# Basis in bps = (GLD_implied_spot / GC=F_adjusted - 1) * 10000
basis_bps = 0.0
if gc_f_adjusted > 0 and gld_implied_spot > 0:
basis_bps = (gld_implied_spot / gc_f_adjusted - 1) * 10000
# Determine basis status
abs_basis = abs(basis_bps)
if abs_basis < 25:
basis_status = "green"
basis_label = "Normal"
elif abs_basis < 50:
basis_status = "yellow"
basis_label = "Elevated"
else:
basis_status = "red"
basis_label = "Warning"
# After-hours check: compare timestamps
gld_updated = gld_quote.get("updated_at", "")
gc_f_updated = gc_f_quote.get("updated_at", "")
after_hours = False
after_hours_note = ""
try:
gld_time = datetime.fromisoformat(gld_updated.replace("Z", "+00:00"))
gc_f_time = datetime.fromisoformat(gc_f_updated.replace("Z", "+00:00"))
# If GC=F updated much more recently, likely after-hours
time_diff = (gc_f_time - gld_time).total_seconds()
if time_diff > 3600: # More than 1 hour difference
after_hours = True
after_hours_note = "GLD quote may be stale (after-hours)"
except Exception:
pass
return {
"gld_implied_spot": round(gld_implied_spot, 2),
"gld_price": round(gld_price, 2),
"gld_ounces_per_share": round(ounces_per_share, 4),
"gc_f_price": round(gc_f_price, 2),
"gc_f_adjusted": round(gc_f_adjusted, 2),
"contango_estimate": contango_estimate,
"basis_bps": round(basis_bps, 1),
"basis_status": basis_status,
"basis_label": basis_label,
"after_hours": after_hours,
"after_hours_note": after_hours_note,
"gld_updated_at": gld_updated,
"gc_f_updated_at": gc_f_updated,
"gld_source": gld_quote.get("source", "unknown"),
"gc_f_source": gc_f_quote.get("source", "unknown"),
}
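Worked through with round numbers, the basis math in `get_basis_data` reduces to the sketch below. The prices are hypothetical, and the same flat $10/oz contango estimate as above is assumed:

```python
def basis_bps(gld_price: float, ounces_per_share: float, gc_f_price: float, contango: float = 10.0) -> float:
    """GLD-implied spot vs contango-adjusted GC=F, in basis points."""
    gld_implied_spot = gld_price / ounces_per_share   # $/oz implied by the ETF share price
    gc_f_adjusted = gc_f_price - contango             # strip the futures carry estimate
    return round((gld_implied_spot / gc_f_adjusted - 1) * 10_000, 1)

# Hypothetical inputs: GLD at $250.00, ~0.0926 oz/share, GC=F at $2710/oz.
print(basis_bps(250.0, 0.0926, 2710.0))  # -0.8 -> well inside the "green" < 25 bps band
```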
     async def get_strategies(self, symbol: str | None = None) -> dict[str, Any]:
         ticker = (symbol or self.default_symbol).upper()
@@ -137,7 +454,7 @@ class DataService:
         return {
             "symbol": ticker,
-            "updated_at": datetime.now(UTC).isoformat(),
+            "updated_at": datetime.now(timezone.utc).isoformat(),
             "paper_parameters": {
                 "portfolio_value": engine.portfolio_value,
                 "loan_amount": engine.loan_amount,
@@ -172,27 +489,50 @@ class DataService:
             return {
                 "symbol": symbol,
                 "price": round(last, 4),
+                "quote_unit": "share",
                 "change": change,
                 "change_percent": change_percent,
-                "updated_at": datetime.now(UTC).isoformat(),
+                "updated_at": datetime.now(timezone.utc).isoformat(),
                 "source": "yfinance",
             }
         except Exception as exc:  # pragma: no cover - network dependent
             logger.warning("Failed to fetch %s from yfinance: %s", symbol, exc)
             return self._fallback_quote(symbol, source="fallback")

-    def _fallback_options_chain(
+    def _fallback_option_expirations(
         self,
         symbol: str,
         quote: dict[str, Any],
         *,
         source: str,
         error: str | None = None,
+    ) -> dict[str, Any]:
+        payload = {
+            "symbol": symbol,
+            "updated_at": datetime.now(timezone.utc).isoformat(),
+            "expirations": [],
+            "underlying_price": quote["price"],
+            "source": source,
+        }
+        if error:
+            payload["error"] = error
+        return payload
+
+    def _fallback_options_chain(
+        self,
+        symbol: str,
+        quote: dict[str, Any],
+        *,
+        expirations: list[str],
+        selected_expiry: str | None,
+        source: str,
+        error: str | None = None,
     ) -> dict[str, Any]:
         options_chain = {
             "symbol": symbol,
-            "updated_at": datetime.now(UTC).isoformat(),
-            "expirations": [],
+            "selected_expiry": selected_expiry,
+            "updated_at": datetime.now(timezone.utc).isoformat(),
+            "expirations": expirations,
             "calls": [],
             "puts": [],
             "rows": [],
@@ -203,7 +543,107 @@ class DataService:
             options_chain["error"] = error
         return options_chain

-    def _normalize_option_rows(self, frame: Any, symbol: str, expiry: str, option_type: str) -> list[dict[str, Any]]:
+    @staticmethod
def _normalize_option_expirations_payload(payload: dict[str, Any], symbol: str) -> dict[str, Any]:
"""Normalize option expirations payload to explicit contract.
This is the named boundary adapter between external provider/cache
payloads and internal option expirations handling. It ensures:
- symbol is always present and uppercased
- expirations is always a list (empty if None/missing)
- Explicit symbol mismatches are rejected (fail-closed)
Args:
payload: Raw expirations dict from cache or provider
symbol: Expected symbol (used as fallback if missing from payload)
Returns:
Normalized expirations dict with explicit symbol and list type
Raises:
ValueError: If payload symbol explicitly conflicts with requested symbol
"""
normalized: dict[str, Any] = dict(payload)
normalized_symbol = symbol.upper()
# Ensure symbol is always present and normalized.
# Missing symbol is repaired from the requested key; explicit mismatches are rejected.
raw_symbol = normalized.get("symbol", normalized_symbol)
normalized_payload_symbol = str(raw_symbol).upper() if raw_symbol is not None else normalized_symbol
if raw_symbol is not None and normalized_payload_symbol != normalized_symbol:
raise ValueError(
f"Option expirations symbol mismatch: expected {normalized_symbol}, got {normalized_payload_symbol}"
)
normalized["symbol"] = normalized_payload_symbol
# Ensure expirations is always a list
expirations = normalized.get("expirations")
if not isinstance(expirations, list):
logger.warning(
"Repairing malformed option expirations payload for %s: expirations was %r",
normalized_symbol,
type(expirations).__name__,
)
normalized["expirations"] = []
return normalized
@staticmethod
def _normalize_options_chain_payload(payload: dict[str, Any], symbol: str) -> dict[str, Any]:
"""Normalize options chain payload to explicit contract.
This is the named boundary adapter between external provider/cache
payloads and internal options chain handling. It ensures:
- symbol is always present and uppercased
- calls, puts, rows, and expirations are always lists (empty if None/missing)
- Explicit symbol mismatches are rejected (fail-closed)
Args:
payload: Raw options chain dict from cache or provider
symbol: Expected symbol (used as fallback if missing from payload)
Returns:
Normalized options chain dict with explicit symbol and list types
Raises:
ValueError: If payload symbol explicitly conflicts with requested symbol
"""
normalized: dict[str, Any] = dict(payload)
normalized_symbol = symbol.upper()
# Ensure symbol is always present and normalized.
# Missing symbol is repaired from the requested key; explicit mismatches are rejected.
raw_symbol = normalized.get("symbol", normalized_symbol)
normalized_payload_symbol = str(raw_symbol).upper() if raw_symbol is not None else normalized_symbol
if raw_symbol is not None and normalized_payload_symbol != normalized_symbol:
raise ValueError(
f"Options chain symbol mismatch: expected {normalized_symbol}, got {normalized_payload_symbol}"
)
normalized["symbol"] = normalized_payload_symbol
# Ensure list fields are always lists
for field in ("expirations", "calls", "puts", "rows"):
if not isinstance(normalized.get(field), list):
logger.warning(
"Repairing malformed options chain payload for %s: %s was %r",
normalized_symbol,
field,
type(normalized.get(field)).__name__,
)
normalized[field] = []
return normalized
def _normalize_option_rows(
self,
frame: Any,
symbol: str,
expiry: str,
option_type: str,
underlying_price: float,
) -> list[dict[str, Any]]:
        if frame is None or getattr(frame, "empty", True):
            return []
@@ -219,27 +659,22 @@ class DataService:
             implied_volatility = self._safe_float(item.get("impliedVolatility"))
             contract_symbol = str(item.get("contractSymbol") or "").strip()

-            rows.append(
-                {
-                    "contractSymbol": contract_symbol,
-                    "symbol": contract_symbol or f"{symbol} {expiry} {option_type.upper()} {strike:.2f}",
-                    "strike": strike,
-                    "bid": bid,
-                    "ask": ask,
-                    "premium": last_price or self._midpoint(bid, ask),
-                    "lastPrice": last_price,
-                    "impliedVolatility": implied_volatility,
-                    "expiry": expiry,
-                    "type": option_type,
-                    "openInterest": int(self._safe_float(item.get("openInterest"))),
-                    "volume": int(self._safe_float(item.get("volume"))),
-                    "delta": 0.0,
-                    "gamma": 0.0,
-                    "theta": 0.0,
-                    "vega": 0.0,
-                    "rho": 0.0,
-                }
-            )
+            row = {
+                "contractSymbol": contract_symbol,
+                "symbol": contract_symbol or f"{symbol} {expiry} {option_type.upper()} {strike:.2f}",
+                "strike": strike,
+                "bid": bid,
+                "ask": ask,
+                "premium": last_price or self._midpoint(bid, ask),
+                "lastPrice": last_price,
+                "impliedVolatility": implied_volatility,
+                "expiry": expiry,
+                "type": option_type,
+                "openInterest": int(self._safe_float(item.get("openInterest"))),
+                "volume": int(self._safe_float(item.get("volume"))),
+            }
+            row.update(option_row_greeks(row, underlying_price))
+            rows.append(row)
         return rows

     @staticmethod
@@ -256,13 +691,55 @@ class DataService:
            return round((bid + ask) / 2, 4)
        return max(bid, ask, 0.0)
@staticmethod
def _normalize_quote_payload(payload: dict[str, Any], symbol: str) -> dict[str, Any]:
"""Normalize provider/cache quote payload to explicit contract.
This is the named boundary adapter between external float-heavy provider
payloads and internal quote handling. It ensures:
- symbol is always present and uppercased
- GLD quotes have explicit quote_unit='share' metadata
- Non-GLD symbols pass through without auto-assigned units
Fail-closed: missing/invalid fields are preserved for upstream handling
rather than silently defaulted. Type conversion is not performed here.
Args:
payload: Raw quote dict from cache or provider (float-heavy)
symbol: Expected symbol (used as fallback if missing from payload)
Returns:
Normalized quote dict with explicit symbol and GLD quote_unit
"""
normalized: dict[str, Any] = dict(payload)
normalized_symbol = symbol.upper()
# Ensure symbol is always present and normalized.
# Missing symbol is repaired from the requested key; explicit mismatches are rejected.
raw_symbol = normalized.get("symbol", normalized_symbol)
normalized_payload_symbol = str(raw_symbol).upper() if raw_symbol is not None else normalized_symbol
if raw_symbol is not None and normalized_payload_symbol != normalized_symbol:
raise ValueError(
f"Quote payload symbol mismatch: expected {normalized_symbol}, got {normalized_payload_symbol}"
)
normalized["symbol"] = normalized_payload_symbol
# Add explicit quote_unit for GLD (CORE-002A/B compatibility)
# Repair missing or empty unit metadata, but preserve explicit non-empty values
if normalized["symbol"] == "GLD" and not normalized.get("quote_unit"):
normalized["quote_unit"] = "share"
return normalized
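Assuming the repair/reject behavior documented above, the adapter's contract can be sketched in isolation: a missing symbol is repaired from the requested key, a conflicting one raises, and GLD quotes gain explicit unit metadata. This is a stand-in for `_normalize_quote_payload`, not the service method itself:

```python
from typing import Any

def normalize_quote_sketch(payload: dict[str, Any], symbol: str) -> dict[str, Any]:
    """Repair missing symbol, reject explicit mismatches, tag GLD with quote_unit."""
    normalized = dict(payload)
    expected = symbol.upper()
    raw = normalized.get("symbol", expected)
    got = str(raw).upper() if raw is not None else expected
    if raw is not None and got != expected:
        raise ValueError(f"Quote payload symbol mismatch: expected {expected}, got {got}")
    normalized["symbol"] = got
    if got == "GLD" and not normalized.get("quote_unit"):
        normalized["quote_unit"] = "share"
    return normalized

print(normalize_quote_sketch({"price": 215.0}, "gld"))
# A cached payload for a different symbol is rejected rather than silently reused:
try:
    normalize_quote_sketch({"symbol": "SPY", "price": 500.0}, "GLD")
except ValueError as exc:
    print(exc)
```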
     @staticmethod
     def _fallback_quote(symbol: str, source: str) -> dict[str, Any]:
         return {
             "symbol": symbol,
             "price": 215.0,
+            "quote_unit": "share",
             "change": 0.0,
             "change_percent": 0.0,
-            "updated_at": datetime.now(UTC).isoformat(),
+            "updated_at": datetime.now(timezone.utc).isoformat(),
             "source": source,
         }


@@ -0,0 +1,310 @@
from __future__ import annotations
from dataclasses import dataclass
from app.models.backtest import BacktestScenario, EventComparisonRanking, EventComparisonReport
from app.services.backtesting.comparison import EventComparisonService
from app.services.backtesting.fixture_source import bind_fixture_source, build_event_comparison_fixture_source
from app.services.backtesting.historical_provider import SyntheticHistoricalProvider
from app.services.backtesting.input_normalization import normalize_historical_scenario_inputs
from app.services.event_presets import EventPresetService
from app.services.strategy_templates import StrategyTemplateService
SUPPORTED_EVENT_COMPARISON_SYMBOL = "GLD"
def _validate_initial_collateral(underlying_units: float, entry_spot: float, loan_amount: float) -> None:
initial_collateral_value = underlying_units * entry_spot
if loan_amount >= initial_collateral_value:
raise ValueError(
"Historical scenario starts undercollateralized: "
f"loan ${loan_amount:,.0f} exceeds initial collateral ${initial_collateral_value:,.0f} "
f"at entry spot ${entry_spot:,.2f}. Reduce loan amount or increase underlying units."
)
EventComparisonFixtureHistoricalPriceSource = build_event_comparison_fixture_source
@dataclass(frozen=True)
class EventComparisonChartSeries:
name: str
values: tuple[float, ...]
@dataclass(frozen=True)
class EventComparisonChartModel:
dates: tuple[str, ...]
series: tuple[EventComparisonChartSeries, ...]
@dataclass(frozen=True)
class EventComparisonDrilldownRow:
date: str
spot_close: float
net_portfolio_value: float
option_market_value: float
realized_option_cashflow: float
ltv_unhedged: float
ltv_hedged: float
margin_call_hedged: bool
active_position_ids: tuple[str, ...]
@dataclass(frozen=True)
class EventComparisonDrilldownModel:
rank: int
template_slug: str
template_name: str
survived_margin_call: bool
margin_call_days_hedged: int
total_option_payoff_realized: float
hedge_cost: float
final_equity: float
worst_ltv_hedged: float
worst_ltv_date: str | None
breach_dates: tuple[str, ...]
rows: tuple[EventComparisonDrilldownRow, ...]
class EventComparisonPageService:
def __init__(
self,
comparison_service: EventComparisonService | None = None,
event_preset_service: EventPresetService | None = None,
template_service: StrategyTemplateService | None = None,
) -> None:
self.event_preset_service = event_preset_service or EventPresetService()
self.template_service = template_service or StrategyTemplateService()
if comparison_service is None:
provider = bind_fixture_source(
SyntheticHistoricalProvider(),
build_event_comparison_fixture_source(),
)
comparison_service = EventComparisonService(
provider=provider,
event_preset_service=self.event_preset_service,
template_service=self.template_service,
)
self.comparison_service = comparison_service
def preset_options(self, symbol: str = SUPPORTED_EVENT_COMPARISON_SYMBOL) -> list[dict[str, object]]:
return [
{
"slug": preset.slug,
"label": preset.display_name,
"description": preset.description,
"default_template_slugs": list(preset.scenario_overrides.default_template_slugs),
}
for preset in self.event_preset_service.list_presets(symbol)
]
def template_options(self, symbol: str = SUPPORTED_EVENT_COMPARISON_SYMBOL) -> list[dict[str, object]]:
return [
{
"slug": template.slug,
"label": template.display_name,
"description": template.description,
}
for template in self.template_service.list_active_templates(symbol)
]
def default_template_selection(self, preset_slug: str) -> tuple[str, ...]:
preset = self.event_preset_service.get_preset(preset_slug)
return tuple(preset.scenario_overrides.default_template_slugs)
def derive_entry_spot(self, *, preset_slug: str, template_slugs: tuple[str, ...]) -> float:
if not template_slugs:
raise ValueError("Select at least one strategy template.")
scenario = self.comparison_service.preview_scenario_from_inputs(
preset_slug=preset_slug,
template_slugs=template_slugs,
underlying_units=1.0,
loan_amount=0.0,
margin_call_ltv=0.75,
)
return float(scenario.initial_portfolio.entry_spot)
def preview_scenario(
self,
*,
preset_slug: str,
template_slugs: tuple[str, ...],
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
) -> BacktestScenario:
if not template_slugs:
raise ValueError("Select at least one strategy template.")
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
)
try:
scenario = self.comparison_service.preview_scenario_from_inputs(
preset_slug=preset_slug,
template_slugs=template_slugs,
underlying_units=normalized_inputs.underlying_units,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
)
except ValueError as exc:
if str(exc) == "loan_amount must be less than initial collateral value":
preset = self.event_preset_service.get_preset(preset_slug)
preview = self.comparison_service.provider.load_history(
preset.symbol.strip().upper(),
preset.window_start,
preset.window_end,
)
if preview:
_validate_initial_collateral(
normalized_inputs.underlying_units,
preview[0].close,
normalized_inputs.loan_amount,
)
raise
_validate_initial_collateral(
normalized_inputs.underlying_units,
scenario.initial_portfolio.entry_spot,
normalized_inputs.loan_amount,
)
return scenario
def run_read_only_comparison(
self,
*,
preset_slug: str,
template_slugs: tuple[str, ...],
underlying_units: float,
loan_amount: float,
margin_call_ltv: float,
) -> EventComparisonReport:
if not preset_slug:
raise ValueError("Preset selection is required")
if not template_slugs:
raise ValueError("Select at least one strategy template.")
normalized_inputs = normalize_historical_scenario_inputs(
underlying_units=underlying_units,
loan_amount=loan_amount,
margin_call_ltv=margin_call_ltv,
)
preset = self.event_preset_service.get_preset(preset_slug)
normalized_symbol = preset.symbol.strip().upper()
if normalized_symbol != SUPPORTED_EVENT_COMPARISON_SYMBOL:
raise ValueError("BT-003A event comparison is currently limited to GLD on this page")
try:
preview = self.comparison_service.preview_scenario_from_inputs(
preset_slug=preset.slug,
template_slugs=template_slugs,
underlying_units=normalized_inputs.underlying_units,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
)
except ValueError as exc:
if str(exc) == "loan_amount must be less than initial collateral value":
preview_history = self.comparison_service.provider.load_history(
normalized_symbol,
preset.window_start,
preset.window_end,
)
if preview_history:
_validate_initial_collateral(
normalized_inputs.underlying_units,
preview_history[0].close,
normalized_inputs.loan_amount,
)
raise
_validate_initial_collateral(
normalized_inputs.underlying_units,
preview.initial_portfolio.entry_spot,
normalized_inputs.loan_amount,
)
return self.comparison_service.compare_event_from_inputs(
preset_slug=preset.slug,
template_slugs=template_slugs,
underlying_units=normalized_inputs.underlying_units,
loan_amount=normalized_inputs.loan_amount,
margin_call_ltv=normalized_inputs.margin_call_ltv,
)
@staticmethod
def chart_model(report: EventComparisonReport, max_ranked_series: int = 3) -> EventComparisonChartModel:
ranked = report.rankings[:max_ranked_series]
if not ranked:
return EventComparisonChartModel(dates=(), series=())
dates = tuple(point.date.isoformat() for point in ranked[0].result.daily_path)
series = [
EventComparisonChartSeries(
name="Unhedged collateral baseline",
values=tuple(round(point.underlying_value, 2) for point in ranked[0].result.daily_path),
)
]
for item in ranked:
series.append(
EventComparisonChartSeries(
name=item.template_name,
values=tuple(round(point.net_portfolio_value, 2) for point in item.result.daily_path),
)
)
return EventComparisonChartModel(dates=dates, series=tuple(series))
@staticmethod
def drilldown_model(
report: EventComparisonReport,
*,
template_slug: str | None = None,
) -> EventComparisonDrilldownModel:
ranking = EventComparisonPageService._select_ranking(report, template_slug=template_slug)
daily_path = ranking.result.daily_path
worst_ltv_point = max(daily_path, key=lambda point: point.ltv_hedged, default=None)
breach_dates = tuple(point.date.isoformat() for point in daily_path if point.margin_call_hedged)
return EventComparisonDrilldownModel(
rank=ranking.rank,
template_slug=ranking.template_slug,
template_name=ranking.template_name,
survived_margin_call=ranking.survived_margin_call,
margin_call_days_hedged=ranking.margin_call_days_hedged,
total_option_payoff_realized=ranking.result.summary_metrics.total_option_payoff_realized,
hedge_cost=ranking.hedge_cost,
final_equity=ranking.final_equity,
worst_ltv_hedged=ranking.max_ltv_hedged,
worst_ltv_date=worst_ltv_point.date.isoformat() if worst_ltv_point is not None else None,
breach_dates=breach_dates,
rows=tuple(
EventComparisonDrilldownRow(
date=point.date.isoformat(),
spot_close=point.spot_close,
net_portfolio_value=point.net_portfolio_value,
option_market_value=point.option_market_value,
realized_option_cashflow=point.realized_option_cashflow,
ltv_unhedged=point.ltv_unhedged,
ltv_hedged=point.ltv_hedged,
margin_call_hedged=point.margin_call_hedged,
active_position_ids=point.active_position_ids,
)
for point in daily_path
),
)
@staticmethod
def drilldown_options(report: EventComparisonReport) -> dict[str, str]:
return {ranking.template_slug: f"#{ranking.rank} {ranking.template_name}" for ranking in report.rankings}
@staticmethod
def _select_ranking(
report: EventComparisonReport,
*,
template_slug: str | None = None,
) -> EventComparisonRanking:
if not report.rankings:
raise ValueError("Event comparison report has no ranked results")
if template_slug is None:
return report.rankings[0]
for ranking in report.rankings:
if ranking.template_slug == template_slug:
return ranking
raise ValueError(f"Unknown ranked template: {template_slug}")
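Both `preview_scenario` and `run_read_only_comparison` guard their inputs with the same undercollateralization check from `_validate_initial_collateral`: a scenario is rejected when the loan equals or exceeds the starting collateral value. A minimal standalone sketch of that guard with made-up numbers:

```python
def validate_initial_collateral(
    underlying_units: float, entry_spot: float, loan_amount: float
) -> None:
    """Reject scenarios that start undercollateralized (loan >= units * spot)."""
    initial_collateral_value = underlying_units * entry_spot
    if loan_amount >= initial_collateral_value:
        raise ValueError(
            "Historical scenario starts undercollateralized: "
            f"loan ${loan_amount:,.0f} exceeds initial collateral "
            f"${initial_collateral_value:,.0f}"
        )


# 100 units at $185/unit is $18,500 of collateral: a $10,000 loan is fine,
# a $20,000 loan is rejected before any backtest runs.
validate_initial_collateral(100.0, 185.0, 10_000.0)
```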


@@ -0,0 +1,116 @@
from __future__ import annotations
import json
from datetime import date
from pathlib import Path
from app.models.event_preset import EventPreset, EventScenarioOverrides
DEFAULT_EVENT_PRESET_FILE = Path(__file__).resolve().parents[2] / "config" / "event_presets.json"
def default_event_presets() -> list[EventPreset]:
return [
EventPreset(
event_preset_id="gld-jan-2024-selloff-v1",
slug="gld-jan-2024-selloff",
display_name="GLD January 2024 Selloff",
symbol="GLD",
window_start=date(2024, 1, 2),
window_end=date(2024, 1, 8),
anchor_date=date(2024, 1, 4),
event_type="selloff",
tags=("system", "selloff", "macro"),
description="Short January 2024 selloff window for deterministic synthetic event comparisons.",
scenario_overrides=EventScenarioOverrides(
default_template_slugs=(
"protective-put-atm-12m",
"protective-put-95pct-12m",
"protective-put-90pct-12m",
"ladder-50-50-atm-95pct-12m",
)
),
),
EventPreset(
event_preset_id="gld-jan-2024-drawdown-v1",
slug="gld-jan-2024-drawdown",
display_name="GLD January 2024 Drawdown",
symbol="GLD",
window_start=date(2024, 1, 2),
window_end=date(2024, 1, 8),
anchor_date=date(2024, 1, 5),
event_type="selloff",
tags=("system", "drawdown"),
description="January 2024 drawdown preset for deterministic synthetic event comparison runs.",
scenario_overrides=EventScenarioOverrides(
lookback_days=0,
recovery_days=0,
default_template_slugs=(
"protective-put-atm-12m",
"ladder-50-50-atm-95pct-12m",
"ladder-33-33-33-atm-95pct-90pct-12m",
),
),
),
EventPreset(
event_preset_id="gld-jan-2024-stress-window-v1",
slug="gld-jan-2024-stress-window",
display_name="GLD January 2024 Stress Window",
symbol="GLD",
window_start=date(2024, 1, 2),
window_end=date(2024, 1, 8),
anchor_date=None,
event_type="stress_test",
tags=("system", "stress_test"),
description="Stress-window preset with a modest warmup and recovery tail for report scaffolding.",
scenario_overrides=EventScenarioOverrides(
lookback_days=0,
recovery_days=0,
default_template_slugs=(
"protective-put-atm-12m",
"protective-put-95pct-12m",
),
),
),
]
class FileEventPresetRepository:
def __init__(self, path: str | Path = DEFAULT_EVENT_PRESET_FILE) -> None:
self.path = Path(path)
def list_presets(self) -> list[EventPreset]:
self._ensure_seeded()
payload = json.loads(self.path.read_text())
return [EventPreset.from_dict(item) for item in payload.get("presets", [])]
def get_by_slug(self, slug: str) -> EventPreset | None:
return next((preset for preset in self.list_presets() if preset.slug == slug), None)
def save_all(self, presets: list[EventPreset]) -> None:
self.path.parent.mkdir(parents=True, exist_ok=True)
payload = {"presets": [preset.to_dict() for preset in presets]}
self.path.write_text(json.dumps(payload, indent=2) + "\n")
def _ensure_seeded(self) -> None:
if self.path.exists():
return
self.save_all(default_event_presets())
class EventPresetService:
def __init__(self, repository: FileEventPresetRepository | None = None) -> None:
self.repository = repository or FileEventPresetRepository()
def list_presets(self, symbol: str | None = None) -> list[EventPreset]:
presets = self.repository.list_presets()
if symbol is None:
return presets
normalized_symbol = symbol.strip().upper()
return [preset for preset in presets if preset.symbol.strip().upper() == normalized_symbol]
def get_preset(self, slug: str) -> EventPreset:
preset = self.repository.get_by_slug(slug)
if preset is None:
raise KeyError(f"Unknown event preset: {slug}")
return preset
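`FileEventPresetRepository` seeds its JSON file with the default presets on first read (`_ensure_seeded`), so callers never see an empty store. A minimal sketch of that seed-on-first-read pattern, with a generic item shape and hypothetical class name standing in for the preset model:

```python
import json
import tempfile
from pathlib import Path


class SeededJsonRepository:
    """Minimal sketch of the seed-on-first-read pattern."""

    def __init__(self, path: Path, defaults: list[dict]) -> None:
        self.path = path
        self.defaults = defaults

    def list_items(self) -> list[dict]:
        self._ensure_seeded()
        return json.loads(self.path.read_text())["items"]

    def save_all(self, items: list[dict]) -> None:
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps({"items": items}, indent=2) + "\n")

    def _ensure_seeded(self) -> None:
        # Only seed when the file does not exist; an existing (even empty) file wins.
        if not self.path.exists():
            self.save_all(self.defaults)


with tempfile.TemporaryDirectory() as tmp:
    repo = SeededJsonRepository(
        Path(tmp) / "config" / "presets.json", [{"slug": "gld-jan-2024-selloff"}]
    )
    items = repo.list_items()  # first read creates and seeds the file
```

One design consequence worth noting: because seeding only happens when the file is absent, editing the JSON by hand is safe, but deleting it silently restores the defaults on the next read.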

app/services/ltv_history.py

@@ -0,0 +1,137 @@
from __future__ import annotations
import csv
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from decimal import Decimal
from io import StringIO
from typing import Mapping
from app.models.ltv_history import LtvHistoryRepository, LtvSnapshot
from app.services.boundary_values import boundary_decimal
@dataclass(frozen=True)
class LtvHistoryChartModel:
title: str
labels: tuple[str, ...]
ltv_values: tuple[float, ...]
threshold_values: tuple[float, ...]
class LtvHistoryService:
def __init__(self, repository: LtvHistoryRepository | None = None) -> None:
self.repository = repository or LtvHistoryRepository()
def record_workspace_snapshot(self, workspace_id: str, portfolio: Mapping[str, object]) -> list[LtvSnapshot]:
snapshots = self.repository.load(workspace_id)
snapshot = self._build_snapshot(portfolio)
updated: list[LtvSnapshot] = []
replaced = False
for existing in snapshots:
if existing.snapshot_date == snapshot.snapshot_date:
updated.append(snapshot)
replaced = True
else:
updated.append(existing)
if not replaced:
updated.append(snapshot)
updated.sort(key=lambda item: (item.snapshot_date, item.captured_at))
self.repository.save(workspace_id, updated)
return updated
@staticmethod
def chart_model(
snapshots: list[LtvSnapshot],
*,
days: int,
current_margin_threshold: Decimal | float | str | None = None,
) -> LtvHistoryChartModel:
if days <= 0:
raise ValueError("days must be positive")
title = f"{days} Day"
if not snapshots:
return LtvHistoryChartModel(title=title, labels=(), ltv_values=(), threshold_values=())
latest_date = max(datetime.fromisoformat(item.snapshot_date).date() for item in snapshots)
cutoff_date = latest_date - timedelta(days=days - 1)
filtered = [item for item in snapshots if datetime.fromisoformat(item.snapshot_date).date() >= cutoff_date]
threshold = (
boundary_decimal(current_margin_threshold, field_name="current_margin_threshold")
if current_margin_threshold is not None
else filtered[-1].margin_threshold
)
threshold_value = round(float(threshold * Decimal("100")), 1)
return LtvHistoryChartModel(
title=title,
labels=tuple(item.snapshot_date for item in filtered),
ltv_values=tuple(round(float(item.ltv_ratio * Decimal("100")), 1) for item in filtered),
threshold_values=tuple(threshold_value for _ in filtered),
)
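The window filter in `chart_model` is inclusive of the latest snapshot day: the cutoff is `latest_date - timedelta(days=days - 1)`, so the latest day itself counts as day one of the window. A small sketch with made-up dates:

```python
from datetime import date, timedelta

snapshot_dates = ["2026-03-28", "2026-04-01", "2026-04-05", "2026-04-08"]
days = 7

latest = max(date.fromisoformat(d) for d in snapshot_dates)
cutoff = latest - timedelta(days=days - 1)  # 2026-04-02; the latest day counts as day 1
window = [d for d in snapshot_dates if date.fromisoformat(d) >= cutoff]
```

With a 7-day window ending 2026-04-08, the cutoff lands on 2026-04-02, so 2026-04-01 falls just outside the window.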
@staticmethod
def export_csv(snapshots: list[LtvSnapshot]) -> str:
output = StringIO()
writer = csv.DictWriter(
output,
fieldnames=[
"snapshot_date",
"captured_at",
"ltv_ratio_pct",
"margin_threshold_pct",
"loan_amount_usd",
"collateral_value_usd",
"spot_price_usd_per_ozt",
"source",
],
)
writer.writeheader()
for snapshot in snapshots:
writer.writerow(
{
"snapshot_date": snapshot.snapshot_date,
"captured_at": snapshot.captured_at,
"ltv_ratio_pct": f"{float(snapshot.ltv_ratio * Decimal('100')):.1f}",
"margin_threshold_pct": f"{float(snapshot.margin_threshold * Decimal('100')):.1f}",
"loan_amount_usd": _decimal_text(snapshot.loan_amount),
"collateral_value_usd": _decimal_text(snapshot.collateral_value),
"spot_price_usd_per_ozt": _decimal_text(snapshot.spot_price),
"source": snapshot.source,
}
)
return output.getvalue()
@staticmethod
def _build_snapshot(portfolio: Mapping[str, object]) -> LtvSnapshot:
captured_at = _normalize_timestamp(str(portfolio.get("quote_updated_at", "")))
return LtvSnapshot(
snapshot_date=captured_at[:10],
captured_at=captured_at,
ltv_ratio=boundary_decimal(portfolio.get("ltv_ratio"), field_name="portfolio.ltv_ratio"),
margin_threshold=boundary_decimal(
portfolio.get("margin_call_ltv"),
field_name="portfolio.margin_call_ltv",
),
loan_amount=boundary_decimal(portfolio.get("loan_amount"), field_name="portfolio.loan_amount"),
collateral_value=boundary_decimal(portfolio.get("gold_value"), field_name="portfolio.gold_value"),
spot_price=boundary_decimal(portfolio.get("spot_price"), field_name="portfolio.spot_price"),
source=str(portfolio.get("quote_source", "unknown")) or "unknown",
)
def _normalize_timestamp(value: str) -> str:
if value:
try:
return datetime.fromisoformat(value.replace("Z", "+00:00")).astimezone(UTC).isoformat()
except ValueError:
pass
return datetime.now(UTC).replace(microsecond=0).isoformat()
def _decimal_text(value: Decimal) -> str:
if value == value.to_integral():
return str(value.quantize(Decimal("1")))
normalized = value.normalize()
exponent = normalized.as_tuple().exponent
if isinstance(exponent, int) and exponent < 0:
return format(normalized, "f")
return str(normalized)
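`_decimal_text` keeps CSV exports readable: integral values lose their fractional part, fractional values lose trailing zeros, and small magnitudes are forced into fixed-point notation rather than `Decimal`'s scientific form. A standalone copy of the helper showing those three paths:

```python
from decimal import Decimal


def decimal_text(value: Decimal) -> str:
    """Render a Decimal for CSV: trim trailing zeros, avoid scientific notation."""
    if value == value.to_integral():
        # Integral values: drop any fractional zeros, e.g. 1200.00 -> "1200".
        return str(value.quantize(Decimal("1")))
    normalized = value.normalize()
    exponent = normalized.as_tuple().exponent
    if isinstance(exponent, int) and exponent < 0:
        # normalize() can yield forms like 1E-6; format "f" restores fixed-point.
        return format(normalized, "f")
    return str(normalized)
```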


@@ -0,0 +1,120 @@
"""Position cost calculations for premium, spread, and storage costs."""
from __future__ import annotations
from decimal import Decimal
from typing import Any
from app.models.position import Position
def calculate_effective_entry(
entry_price: Decimal,
purchase_premium: Decimal | None = None,
) -> Decimal:
"""Calculate effective entry cost including dealer premium.
Args:
entry_price: Spot price at entry (per unit)
purchase_premium: Dealer markup over spot as percentage (e.g., 0.04 for 4%)
Returns:
Effective entry cost per unit
"""
if purchase_premium is None or purchase_premium == 0:
return entry_price
return entry_price * (Decimal("1") + purchase_premium)
def calculate_effective_exit(
current_spot: Decimal,
bid_ask_spread: Decimal | None = None,
) -> Decimal:
"""Calculate effective exit value after bid/ask spread.
Args:
current_spot: Current spot price (per unit)
bid_ask_spread: Expected sale discount below spot as percentage (e.g., 0.03 for 3%)
Returns:
Effective exit value per unit
"""
if bid_ask_spread is None or bid_ask_spread == 0:
return current_spot
return current_spot * (Decimal("1") - bid_ask_spread)
def calculate_true_pnl(
position: Position,
current_spot: Decimal,
) -> dict[str, Any]:
"""Calculate true P&L accounting for premium and spread.
Args:
position: Position to calculate P&L for
current_spot: Current spot price per unit
Returns:
Dict with paper_pnl, realized_pnl, effective_entry, effective_exit, entry_value, exit_value
"""
# Effective entry cost (includes premium)
effective_entry = calculate_effective_entry(position.entry_price, position.purchase_premium)
# Effective exit value (after spread)
effective_exit = calculate_effective_exit(current_spot, position.bid_ask_spread)
# Paper P&L (without premium/spread)
paper_pnl = (current_spot - position.entry_price) * position.quantity
# True P&L (with premium/spread)
true_pnl = (effective_exit - effective_entry) * position.quantity
# Entry and exit values
entry_value = position.entry_price * position.quantity
exit_value = current_spot * position.quantity
return {
"paper_pnl": float(paper_pnl),
"true_pnl": float(true_pnl),
"effective_entry": float(effective_entry),
"effective_exit": float(effective_exit),
"entry_value": float(entry_value),
"exit_value": float(exit_value),
"premium_impact": float((position.purchase_premium or 0) * entry_value),
"spread_impact": float((position.bid_ask_spread or 0) * exit_value),
}
def get_default_premium_for_product(
underlying: str, product_type: str = "default"
) -> tuple[Decimal | None, Decimal | None]:
"""Get default premium/spread for common gold products.
Args:
underlying: Underlying instrument ("GLD", "GC=F", "XAU")
product_type: Product type ("default", "coin_1oz", "bar_1kg", "allocated")
Returns:
Tuple of (purchase_premium, bid_ask_spread) or None if not applicable
"""
# GLD/GLDM: ETF is liquid, minimal spread
if underlying in ("GLD", "GLDM"):
# ETF spread is minimal, premium is 0
return Decimal("0"), Decimal("0.001") # 0% premium, 0.1% spread
# GC=F: Futures roll costs are handled separately (GCF-001)
if underlying == "GC=F":
return None, None
# XAU: Physical gold
if underlying == "XAU":
defaults = {
"default": (Decimal("0.04"), Decimal("0.03")), # 4% premium, 3% spread
"coin_1oz": (Decimal("0.04"), Decimal("0.03")), # 1oz coins: 4% premium, 3% spread
"bar_1kg": (Decimal("0.015"), Decimal("0.015")), # 1kg bars: 1.5% premium, 1.5% spread
"allocated": (Decimal("0.001"), Decimal("0.003")), # Allocated: 0.1% premium, 0.3% spread
}
return defaults.get(product_type, defaults["default"])
# Unknown underlying
return None, None
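Worked numbers make the premium/spread asymmetry in `calculate_true_pnl` concrete: the dealer premium inflates the effective entry while the bid/ask spread deflates the effective exit, so a position can show a paper gain and a true loss at the same time. A sketch using the XAU defaults above (4% premium, 3% spread) and made-up prices, without the `Position` model:

```python
from decimal import Decimal

entry_price = Decimal("2000")       # spot at entry, per oz
purchase_premium = Decimal("0.04")  # 4% dealer markup over spot
current_spot = Decimal("2100")
bid_ask_spread = Decimal("0.03")    # 3% sale discount below spot
quantity = Decimal("10")

effective_entry = entry_price * (Decimal("1") + purchase_premium)  # $2,080/oz paid
effective_exit = current_spot * (Decimal("1") - bid_ask_spread)    # $2,037/oz received
paper_pnl = (current_spot - entry_price) * quantity                # ignores both costs
true_pnl = (effective_exit - effective_entry) * quantity           # net of both costs
```

Here spot moved up $100/oz, yet the position is still $43/oz underwater once the $80 premium and $63 spread haircut are counted.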


@@ -1,10 +1,12 @@
"""Live price feed service for fetching real-time GLD and other asset prices.""" """Live price feed service for fetching real-time GLD and other asset prices."""
from __future__ import annotations
import asyncio import asyncio
import logging import logging
import math
from dataclasses import dataclass from dataclasses import dataclass
from datetime import datetime, timedelta from datetime import datetime
from typing import Optional
import yfinance as yf import yfinance as yf
@@ -13,15 +15,31 @@ from app.services.cache import get_cache
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@dataclass @dataclass(frozen=True)
class PriceData: class PriceData:
"""Price data for a symbol.""" """Price data for a symbol."""
symbol: str symbol: str
price: float price: float
currency: str currency: str
timestamp: datetime timestamp: datetime
source: str = "yfinance" source: str = "yfinance"
def __post_init__(self) -> None:
normalized_symbol = self.symbol.strip().upper()
if not normalized_symbol:
raise ValueError("symbol is required")
if not math.isfinite(self.price) or self.price <= 0:
raise ValueError("price must be a finite positive number")
normalized_currency = self.currency.strip().upper()
if not normalized_currency:
raise ValueError("currency is required")
if not isinstance(self.timestamp, datetime):
raise TypeError("timestamp must be a datetime")
object.__setattr__(self, "symbol", normalized_symbol)
object.__setattr__(self, "currency", normalized_currency)
object.__setattr__(self, "source", self.source.strip() or "yfinance")
class PriceFeed: class PriceFeed:
"""Live price feed service using yfinance with Redis caching.""" """Live price feed service using yfinance with Redis caching."""
@@ -32,69 +50,118 @@ class PriceFeed:
def __init__(self): def __init__(self):
self._cache = get_cache() self._cache = get_cache()
async def get_price(self, symbol: str) -> Optional[PriceData]: @staticmethod
"""Get current price for a symbol, with caching. def _required_payload_value(payload: dict[str, object], key: str, *, context: str) -> object:
if key not in payload:
raise TypeError(f"{context} is missing required field: {key}")
return payload[key]
Args: @classmethod
symbol: Yahoo Finance symbol (e.g., "GLD", "BTC-USD") def _normalize_cached_price_payload(cls, payload: object, *, expected_symbol: str) -> PriceData:
if not isinstance(payload, dict):
raise TypeError("cached price payload must be a plain dict")
payload_symbol = str(payload.get("symbol", expected_symbol)).strip().upper()
normalized_symbol = expected_symbol.strip().upper()
if payload_symbol != normalized_symbol:
raise ValueError(f"cached symbol mismatch: {payload_symbol} != {normalized_symbol}")
timestamp = cls._required_payload_value(payload, "timestamp", context="cached price payload")
if not isinstance(timestamp, str) or not timestamp.strip():
raise TypeError("cached timestamp must be a non-empty ISO string")
price_val = cls._required_payload_value(payload, "price", context="cached price payload")
if not isinstance(price_val, (int, float)):
raise TypeError(f"cached price must be numeric, got {type(price_val).__name__}")
price = float(price_val)
return PriceData(
symbol=payload_symbol,
price=price,
currency=str(payload.get("currency", "USD")),
timestamp=datetime.fromisoformat(timestamp),
source=str(payload.get("source", "yfinance")),
)
@classmethod
def _normalize_provider_price_payload(cls, payload: object, *, expected_symbol: str) -> PriceData:
if not isinstance(payload, dict):
raise TypeError("provider price payload must be a plain dict")
payload_symbol = str(payload.get("symbol", expected_symbol)).strip().upper()
normalized_symbol = expected_symbol.strip().upper()
if payload_symbol != normalized_symbol:
raise ValueError(f"provider symbol mismatch: {payload_symbol} != {normalized_symbol}")
timestamp = cls._required_payload_value(payload, "timestamp", context="provider price payload")
if not isinstance(timestamp, datetime):
raise TypeError("provider timestamp must be a datetime")
price_val = cls._required_payload_value(payload, "price", context="provider price payload")
if not isinstance(price_val, (int, float)):
raise TypeError(f"provider price must be numeric, got {type(price_val).__name__}")
price = float(price_val)
return PriceData(
symbol=payload_symbol,
price=price,
currency=str(payload.get("currency", "USD")),
timestamp=timestamp,
source=str(payload.get("source", "yfinance")),
)
@staticmethod
def _price_data_to_cache_payload(data: PriceData) -> dict[str, object]:
return {
"symbol": data.symbol,
"price": data.price,
"currency": data.currency,
"timestamp": data.timestamp.isoformat(),
"source": data.source,
}
async def get_price(self, symbol: str) -> PriceData | None:
"""Get current price for a symbol, with caching."""
normalized_symbol = symbol.strip().upper()
cache_key = f"price:{normalized_symbol}"
Returns:
PriceData or None if fetch fails
"""
# Check cache first
if self._cache.enabled: if self._cache.enabled:
cache_key = f"price:{symbol}" cached = await self._cache.get_json(cache_key)
cached = await self._cache.get(cache_key) if cached is not None:
if cached: try:
return PriceData(**cached) return self._normalize_cached_price_payload(cached, expected_symbol=normalized_symbol)
except (TypeError, ValueError) as exc:
logger.warning("Discarding cached price payload for %s: %s", normalized_symbol, exc)
# Fetch from yfinance
try: try:
data = await self._fetch_yfinance(symbol) payload = await self._fetch_yfinance(normalized_symbol)
if data: if payload is None:
# Cache the result return None
if self._cache.enabled: data = self._normalize_provider_price_payload(payload, expected_symbol=normalized_symbol)
await self._cache.set( if self._cache.enabled:
cache_key, await self._cache.set_json(
{ cache_key, self._price_data_to_cache_payload(data), ttl=self.CACHE_TTL_SECONDS
"symbol": data.symbol, )
"price": data.price, return data
"currency": data.currency, except Exception as exc:
"timestamp": data.timestamp.isoformat(), logger.error("Failed to fetch price for %s: %s", normalized_symbol, exc)
"source": data.source return None
},
ttl=self.CACHE_TTL_SECONDS
)
return data
except Exception as e:
logger.error(f"Failed to fetch price for {symbol}: {e}")
return None async def _fetch_yfinance(self, symbol: str) -> dict[str, object] | None:
async def _fetch_yfinance(self, symbol: str) -> Optional[PriceData]:
"""Fetch price from yfinance (run in thread pool to avoid blocking).""" """Fetch price from yfinance (run in thread pool to avoid blocking)."""
loop = asyncio.get_event_loop() loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, self._sync_fetch_yfinance, symbol) return await loop.run_in_executor(None, self._sync_fetch_yfinance, symbol)
def _sync_fetch_yfinance(self, symbol: str) -> Optional[PriceData]: def _sync_fetch_yfinance(self, symbol: str) -> dict[str, object] | None:
"""Synchronous yfinance fetch.""" """Synchronous yfinance fetch."""
ticker = yf.Ticker(symbol) ticker = yf.Ticker(symbol)
hist = ticker.history(period="1d", interval="1m") hist = ticker.history(period="1d", interval="1m")
if not hist.empty: if hist.empty:
last_price = hist["Close"].iloc[-1] return None
currency = ticker.info.get("currency", "USD") last_price = hist["Close"].iloc[-1]
return {
"symbol": symbol,
"price": float(last_price),
"currency": ticker.info.get("currency", "USD"),
"timestamp": datetime.utcnow(),
"source": "yfinance",
}
return PriceData( async def get_prices(self, symbols: list[str]) -> dict[str, PriceData | None]:
symbol=symbol,
price=float(last_price),
currency=currency,
timestamp=datetime.utcnow()
)
return None
async def get_prices(self, symbols: list[str]) -> dict[str, Optional[PriceData]]:
"""Get prices for multiple symbols concurrently.""" """Get prices for multiple symbols concurrently."""
tasks = [self.get_price(s) for s in symbols] tasks = [self.get_price(symbol) for symbol in symbols]
results = await asyncio.gather(*tasks) results = await asyncio.gather(*tasks)
return {s: r for s, r in zip(symbols, results)} return {symbol: result for symbol, result in zip(symbols, results, strict=True)}


@@ -0,0 +1,56 @@
from __future__ import annotations
from dataclasses import dataclass
from decimal import Decimal
from typing import Protocol
from app.models.portfolio import PortfolioConfig
from app.services.boundary_values import boundary_decimal
class _SaveStatusConfig(Protocol):
entry_basis_mode: str
gold_value: float | None
entry_price: float | None
gold_ounces: float | None
current_ltv: float
margin_call_price: object
@dataclass(frozen=True, slots=True)
class SaveStatusSnapshot:
entry_basis_mode: str
gold_value: Decimal
entry_price: Decimal
gold_ounces: Decimal
current_ltv: Decimal
margin_call_price: Decimal
def _normalize_save_status_snapshot(config: PortfolioConfig | _SaveStatusConfig) -> SaveStatusSnapshot:
margin_call_price = config.margin_call_price
resolved_margin_call_price = margin_call_price() if callable(margin_call_price) else margin_call_price
return SaveStatusSnapshot(
entry_basis_mode=config.entry_basis_mode,
gold_value=boundary_decimal(config.gold_value, field_name="config.gold_value"),
entry_price=boundary_decimal(config.entry_price, field_name="config.entry_price"),
gold_ounces=boundary_decimal(config.gold_ounces, field_name="config.gold_ounces"),
current_ltv=boundary_decimal(config.current_ltv, field_name="config.current_ltv"),
margin_call_price=boundary_decimal(
resolved_margin_call_price,
field_name="config.margin_call_price",
),
)
def margin_call_price_value(config: PortfolioConfig | _SaveStatusConfig) -> float:
return float(_normalize_save_status_snapshot(config).margin_call_price)
def save_status_text(config: PortfolioConfig | _SaveStatusConfig) -> str:
snapshot = _normalize_save_status_snapshot(config)
return (
f"Saved: basis={snapshot.entry_basis_mode}, start=${float(snapshot.gold_value):,.0f}, "
f"entry=${float(snapshot.entry_price):,.2f}/oz, weight={float(snapshot.gold_ounces):,.2f} oz, "
f"LTV={float(snapshot.current_ltv):.1%}, trigger=${float(snapshot.margin_call_price):,.2f}/oz"
)
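The format specs in `save_status_text` carry meaning: `:,.0f` for whole-dollar amounts with thousands separators, `:,.2f` for per-ounce prices, and `:.1%` to render the LTV fraction as a percentage. A quick sketch of the same specs on made-up values (the numbers are illustrative, not defaults from the app):

```python
gold_value = 1_000_000.0   # starting position value, whole dollars
entry_price = 2350.25      # per-oz price, cents matter
current_ltv = 0.45         # stored as a fraction, rendered as a percent
trigger = 1567.5           # margin-call price per oz

text = (
    f"Saved: start=${gold_value:,.0f}, entry=${entry_price:,.2f}/oz, "
    f"LTV={current_ltv:.1%}, trigger=${trigger:,.2f}/oz"
)
```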


@@ -0,0 +1,105 @@
"""Storage cost calculation service for positions with physical storage requirements."""
from __future__ import annotations
from decimal import Decimal
from app.models.position import Position
_DECIMAL_ZERO = Decimal("0")
_DECIMAL_ONE = Decimal("1")
_DECIMAL_HUNDRED = Decimal("100")
_DECIMAL_TWELVE = Decimal("12")
def calculate_annual_storage_cost(position: Position, current_value: Decimal) -> Decimal:
"""Calculate annual storage cost for a single position.
Args:
position: Position with optional storage_cost_basis and storage_cost_period
current_value: Current market value of the position (quantity × current_price)
Returns:
Annual storage cost in position's storage_cost_currency (default USD)
Notes:
- If storage_cost_basis is None, returns 0 (no storage cost)
- If storage_cost_period is "monthly", annualizes the cost (×12)
- If storage_cost_basis is a percentage, applies it to current_value
- If storage_cost_basis is a fixed amount, uses it directly
"""
if position.storage_cost_basis is None:
return _DECIMAL_ZERO
basis = position.storage_cost_basis
period = position.storage_cost_period or "annual"
# Determine if basis is a percentage (e.g., 0.12 for 0.12%) or fixed amount
# Heuristic: if basis < 1, treat as percentage; otherwise as fixed amount
if basis < _DECIMAL_ONE:
# Percentage-based cost
if period == "monthly":
# Monthly percentage, annualize it
annual_rate = basis * _DECIMAL_TWELVE
else:
# Already annual
annual_rate = basis
# Apply percentage to current value
return (current_value * annual_rate) / _DECIMAL_HUNDRED
else:
# Fixed amount
if period == "monthly":
# Monthly fixed cost, annualize it
return basis * _DECIMAL_TWELVE
else:
# Already annual fixed cost
return basis
def calculate_total_storage_cost(
positions: list[Position],
current_values: dict[str, Decimal],
) -> Decimal:
"""Calculate total annual storage cost across all positions.
Args:
positions: List of positions with optional storage costs
current_values: Mapping of position ID (str) to current market value
Returns:
Total annual storage cost in USD (assumes all positions use USD)
"""
total = _DECIMAL_ZERO
for position in positions:
current_value = current_values.get(str(position.id), _DECIMAL_ZERO)
cost = calculate_annual_storage_cost(position, current_value)
total += cost
return total
def get_default_storage_cost_for_underlying(underlying: str) -> tuple[Decimal | None, str | None]:
"""Get default storage cost settings for a given underlying instrument.
Args:
underlying: Instrument symbol (e.g., "XAU", "GLD", "GC=F")
Returns:
Tuple of (storage_cost_basis, storage_cost_period) or (None, None) if no default
Notes:
- XAU (physical gold): 0.12% annual for allocated vault storage
- GLD: None (expense ratio baked into share price)
- GC=F: None (roll costs are the storage analog, handled separately)
"""
if underlying == "XAU":
# Physical gold: 0.12% annual storage cost for allocated vault storage
return Decimal("0.12"), "annual"
elif underlying == "GLD":
# GLD: expense ratio is implicit in share price, no separate storage cost
return None, None
elif underlying == "GC=F":
# Futures: roll costs are the storage analog (deferred to GCF-001)
return None, None
else:
return None, None
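To sanity-check the heuristic above (a basis below 1 is read as a percentage of current value, anything else as a fixed fee, and monthly figures are annualized ×12), here is a minimal standalone sketch of the same arithmetic — `annual_storage_cost` is a hypothetical helper mirroring `calculate_annual_storage_cost` without the `Position` model:

```python
from decimal import Decimal

def annual_storage_cost(basis, period, current_value):
    # Mirrors calculate_annual_storage_cost: basis < 1 is a percentage of
    # current value, otherwise a fixed amount; monthly figures are x12.
    if basis is None:
        return Decimal("0")
    if basis < Decimal("1"):
        rate = basis * Decimal("12") if period == "monthly" else basis
        return current_value * rate / Decimal("100")
    return basis * Decimal("12") if period == "monthly" else basis

# XAU default of 0.12% annual on a $1,000,000 position -> $1,200/year
print(annual_storage_cost(Decimal("0.12"), "annual", Decimal("1000000")))
# Fixed $250/month vault fee -> $3,000/year regardless of position value
print(annual_storage_cost(Decimal("250"), "monthly", Decimal("1000000")))
```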


@@ -0,0 +1,346 @@
from __future__ import annotations
import json
import re
from pathlib import Path
from typing import Any
from uuid import uuid4
from app.models.strategy_template import StrategyTemplate
from app.strategies.base import BaseStrategy, StrategyConfig
from app.strategies.laddered_put import LadderedPutStrategy, LadderSpec
from app.strategies.protective_put import ProtectivePutSpec, ProtectivePutStrategy
CONFIG_TEMPLATE_FILE = Path(__file__).resolve().parents[2] / "config" / "strategy_templates.json"
DATA_TEMPLATE_FILE = Path("data/strategy_templates.json")
_SLUGIFY_RE = re.compile(r"[^a-z0-9]+")
def default_strategy_templates() -> list[StrategyTemplate]:
return [
StrategyTemplate.protective_put(
template_id="protective-put-atm-12m-v1",
slug="protective-put-atm-12m",
display_name="Protective Put ATM",
description="Full downside protection using a 12-month at-the-money put.",
strike_pct=1.0,
target_expiry_days=365,
tags=("system", "protective_put", "conservative"),
),
StrategyTemplate.protective_put(
template_id="protective-put-95pct-12m-v1",
slug="protective-put-95pct-12m",
display_name="Protective Put 95%",
description="Lower-cost 12-month protective put using a 95% spot strike.",
strike_pct=0.95,
target_expiry_days=365,
tags=("system", "protective_put", "balanced"),
),
StrategyTemplate.protective_put(
template_id="protective-put-90pct-12m-v1",
slug="protective-put-90pct-12m",
display_name="Protective Put 90%",
description="Cost-sensitive 12-month protective put using a 90% spot strike.",
strike_pct=0.90,
target_expiry_days=365,
tags=("system", "protective_put", "cost_sensitive"),
),
StrategyTemplate.laddered_put(
template_id="ladder-50-50-atm-95pct-12m-v1",
slug="ladder-50-50-atm-95pct-12m",
display_name="Laddered Puts 50/50 ATM + 95%",
description="Split hedge evenly across ATM and 95% strike 12-month puts.",
strike_pcts=(1.0, 0.95),
weights=(0.5, 0.5),
target_expiry_days=365,
tags=("system", "laddered_put", "balanced"),
),
StrategyTemplate.laddered_put(
template_id="ladder-33-33-33-atm-95pct-90pct-12m-v1",
slug="ladder-33-33-33-atm-95pct-90pct-12m",
display_name="Laddered Puts 33/33/33 ATM + 95% + 90%",
description="Three-layer 12-month put ladder across ATM, 95%, and 90% strikes.",
strike_pcts=(1.0, 0.95, 0.90),
weights=(1 / 3, 1 / 3, 1 / 3),
target_expiry_days=365,
tags=("system", "laddered_put", "cost_sensitive"),
),
]
class FileStrategyTemplateRepository:
def __init__(
self,
path: str | Path = DATA_TEMPLATE_FILE,
*,
seed_path: str | Path | None = CONFIG_TEMPLATE_FILE,
) -> None:
self.path = Path(path)
self.seed_path = Path(seed_path) if seed_path is not None else None
def list_templates(self) -> list[StrategyTemplate]:
self._ensure_store()
defaults = self._seed_templates()
payload = json.loads(self.path.read_text())
customs = [StrategyTemplate.from_dict(item) for item in payload.get("templates", [])]
merged: dict[str, StrategyTemplate] = {template.slug: template for template in defaults}
for template in customs:
merged[template.slug] = template
return list(merged.values())
def get_by_slug(self, slug: str) -> StrategyTemplate | None:
return next((template for template in self.list_templates() if template.slug == slug), None)
def save_all(self, templates: list[StrategyTemplate]) -> None:
self.path.parent.mkdir(parents=True, exist_ok=True)
default_slugs = {template.slug for template in self._seed_templates()}
payload = {
"templates": [
template.to_dict()
for template in templates
if template.slug not in default_slugs or "system" not in template.tags
]
}
self.path.write_text(json.dumps(payload, indent=2) + "\n")
def _ensure_store(self) -> None:
if self.path.exists():
return
self.path.parent.mkdir(parents=True, exist_ok=True)
self.path.write_text(json.dumps({"templates": []}, indent=2) + "\n")
def _seed_templates(self) -> list[StrategyTemplate]:
if self.seed_path is not None and self.seed_path.exists():
payload = json.loads(self.seed_path.read_text())
return [StrategyTemplate.from_dict(item) for item in payload.get("templates", [])]
return default_strategy_templates()
class StrategyTemplateService:
def __init__(self, repository: FileStrategyTemplateRepository | None = None) -> None:
self.repository = repository or FileStrategyTemplateRepository()
def list_active_templates(self, underlying_symbol: str = "GLD") -> list[StrategyTemplate]:
symbol = underlying_symbol.upper()
return [
template
for template in self.repository.list_templates()
if template.status == "active" and template.underlying_symbol.upper() in {symbol, "*"}
]
def get_template(self, slug: str) -> StrategyTemplate:
template = self.repository.get_by_slug(slug)
if template is None:
raise KeyError(f"Unknown strategy template: {slug}")
return template
def create_custom_template(
self,
*,
display_name: str,
template_kind: str,
target_expiry_days: int,
strike_pcts: tuple[float, ...],
weights: tuple[float, ...] | None = None,
underlying_symbol: str = "GLD",
) -> StrategyTemplate:
name = display_name.strip()
if not name:
raise ValueError("Template name is required")
if target_expiry_days <= 0:
raise ValueError("Expiration days must be positive")
if not strike_pcts:
raise ValueError("At least one strike is required")
if any(strike_pct <= 0 for strike_pct in strike_pcts):
raise ValueError("Strike percentages must be positive")
templates = self.repository.list_templates()
normalized_name = name.casefold()
if any(template.display_name.casefold() == normalized_name for template in templates):
raise ValueError("Template name already exists")
slug = self._slugify(name)
if any(template.slug == slug for template in templates):
raise ValueError("Template slug already exists; choose a different name")
template_id = f"custom-{uuid4()}"
if template_kind == "protective_put":
if len(strike_pcts) != 1:
raise ValueError("Protective put builder expects exactly one strike")
template = StrategyTemplate.protective_put(
template_id=template_id,
slug=slug,
display_name=name,
description=f"Custom {target_expiry_days}-day protective put at {strike_pcts[0] * 100:.0f}% strike.",
strike_pct=strike_pcts[0],
target_expiry_days=target_expiry_days,
underlying_symbol=underlying_symbol,
tags=("custom", "protective_put"),
)
elif template_kind == "laddered_put":
if len(strike_pcts) < 2:
raise ValueError("Laddered put builder expects at least two strikes")
resolved_weights = weights or self._equal_weights(len(strike_pcts))
if len(resolved_weights) != len(strike_pcts):
raise ValueError("Weights must match the number of strikes")
template = StrategyTemplate.laddered_put(
template_id=template_id,
slug=slug,
display_name=name,
description=(
f"Custom {target_expiry_days}-day put ladder at "
+ ", ".join(f"{strike_pct * 100:.0f}%" for strike_pct in strike_pcts)
+ " strikes."
),
strike_pcts=strike_pcts,
weights=resolved_weights,
target_expiry_days=target_expiry_days,
underlying_symbol=underlying_symbol,
tags=("custom", "laddered_put"),
)
else:
raise ValueError(f"Unsupported strategy type: {template_kind}")
templates.append(template)
self.repository.save_all(templates)
return template
def build_strategy(self, config: StrategyConfig, slug: str) -> BaseStrategy:
return self.build_strategy_from_template(config, self.get_template(slug))
def build_strategy_from_template(self, config: StrategyConfig, template: StrategyTemplate) -> BaseStrategy:
months = max(1, round(template.target_expiry_days / 30.4167))
if template.template_kind == "protective_put":
leg = template.legs[0]
return ProtectivePutStrategy(
config,
ProtectivePutSpec(
label=self._protective_label(leg.strike_rule.value),
strike_pct=leg.strike_rule.value,
months=months,
),
)
if template.template_kind == "laddered_put":
return LadderedPutStrategy(
config,
LadderSpec(
label=self._ladder_label(template),
weights=tuple(leg.allocation_weight for leg in template.legs),
strike_pcts=tuple(leg.strike_rule.value for leg in template.legs),
months=months,
),
)
raise ValueError(f"Unsupported template kind: {template.template_kind}")
def catalog_items(self) -> list[dict[str, Any]]:
ui_defaults = {
"protective_put_atm": {"estimated_cost": 6.25, "coverage": "High"},
"protective_put_otm_95": {"estimated_cost": 4.95, "coverage": "Balanced"},
"protective_put_otm_90": {"estimated_cost": 3.7, "coverage": "Cost-efficient"},
"laddered_put_50_50_atm_otm95": {
"estimated_cost": 4.45,
"coverage": "Layered",
},
"laddered_put_33_33_33_atm_otm95_otm90": {
"estimated_cost": 3.85,
"coverage": "Layered",
},
}
items: list[dict[str, Any]] = []
for template in self.list_active_templates():
strategy_name = self.strategy_name(template)
downside_put_legs = [
{
"allocation_weight": leg.allocation_weight,
"strike_pct": leg.strike_rule.value,
}
for leg in template.legs
if leg.side == "long" and leg.option_type == "put"
]
defaults = ui_defaults.get(strategy_name, {}) if "system" in template.tags else {}
items.append(
{
"name": strategy_name,
"template_slug": template.slug,
"label": template.display_name,
"description": template.description,
"downside_put_legs": downside_put_legs,
"estimated_cost": defaults.get("estimated_cost", self._estimated_cost(template)),
"coverage": defaults.get("coverage", self._coverage_label(template)),
}
)
return items
def strategy_name(self, template: StrategyTemplate) -> str:
strategy = self.build_strategy_from_template(
StrategyConfig(portfolio=self._stub_portfolio(), spot_price=1.0, volatility=0.16, risk_free_rate=0.045),
template,
)
return strategy.name
@staticmethod
def _slugify(display_name: str) -> str:
slug = _SLUGIFY_RE.sub("-", display_name.strip().lower()).strip("-")
if not slug:
raise ValueError("Template name must contain letters or numbers")
return slug
@staticmethod
def _equal_weights(count: int) -> tuple[float, ...]:
if count <= 0:
raise ValueError("count must be positive")
base = round(1.0 / count, 10)
weights = [base for _ in range(count)]
weights[-1] = 1.0 - sum(weights[:-1])
return tuple(weights)
@staticmethod
def _estimated_cost(template: StrategyTemplate) -> float:
weighted_cost = sum(
leg.allocation_weight * max(1.1, 6.25 - ((1.0 - leg.strike_rule.value) * 25.5)) for leg in template.legs
)
expiry_factor = max(0.45, (template.target_expiry_days / 365) ** 0.5)
weighted_cost *= expiry_factor
if len(template.legs) > 1:
weighted_cost *= 0.8
return round(weighted_cost, 2)
@staticmethod
def _coverage_label(template: StrategyTemplate) -> str:
if len(template.legs) > 1:
return "Layered"
strike_pct = template.legs[0].strike_rule.value
if strike_pct >= 0.99:
return "High"
if strike_pct >= 0.95:
return "Balanced"
return "Cost-efficient"
@staticmethod
def _protective_label(strike_pct: float) -> str:
if abs(strike_pct - 1.0) < 1e-9:
return "ATM"
return f"OTM_{int(round(strike_pct * 100))}"
def _ladder_label(self, template: StrategyTemplate) -> str:
weight_labels = "_".join(str(int(round(leg.allocation_weight * 100))) for leg in template.legs)
strike_labels = "_".join(self._strike_label(leg.strike_rule.value) for leg in template.legs)
return f"{weight_labels}_{strike_labels}"
@staticmethod
def _strike_label(strike_pct: float) -> str:
if abs(strike_pct - 1.0) < 1e-9:
return "ATM"
return f"OTM{int(round(strike_pct * 100))}"
@staticmethod
def _stub_portfolio():
from app.models.portfolio import LombardPortfolio
return LombardPortfolio(
gold_ounces=1.0,
gold_price_per_ounce=1.0,
loan_amount=0.5,
initial_ltv=0.5,
margin_call_ltv=0.75,
)
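`_equal_weights` avoids the classic 33/33/33 rounding gap by letting the last leg absorb the residual; a standalone sketch of that trick:

```python
def equal_weights(count: int) -> tuple[float, ...]:
    # Same residual trick as StrategyTemplateService._equal_weights:
    # round the base weight, then let the last leg absorb the remainder
    # so the tuple always sums to exactly 1.0.
    if count <= 0:
        raise ValueError("count must be positive")
    base = round(1.0 / count, 10)
    weights = [base] * count
    weights[-1] = 1.0 - sum(weights[:-1])
    return tuple(weights)

print(equal_weights(3))  # last weight differs from the others far past the 9th decimal
```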

app/services/turnstile.py

@@ -0,0 +1,92 @@
from __future__ import annotations
import logging
import os
from dataclasses import dataclass
import requests
TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"
DEFAULT_TURNSTILE_TEST_SITE_KEY = "1x00000000000000000000AA"
DEFAULT_TURNSTILE_TEST_SECRET_KEY = "1x0000000000000000000000000000000AA"
ALWAYS_FAIL_TURNSTILE_TEST_SITE_KEY = "2x00000000000000000000AB"
ALWAYS_FAIL_TURNSTILE_TEST_SECRET_KEY = "2x0000000000000000000000000000000AA"
logger = logging.getLogger(__name__)
@dataclass(frozen=True)
class TurnstileSettings:
site_key: str
secret_key: str
enabled: bool
uses_test_keys: bool
def _environment() -> str:
return os.getenv("APP_ENV", os.getenv("ENVIRONMENT", "development")).lower()
def load_turnstile_settings() -> TurnstileSettings:
site_key = os.getenv("TURNSTILE_SITE_KEY", "")
secret_key = os.getenv("TURNSTILE_SECRET_KEY", "")
enabled = os.getenv("TURNSTILE_ENABLED", "true").lower() not in {"0", "false", "no"}
env = _environment()
known_test_pairs = {
(DEFAULT_TURNSTILE_TEST_SITE_KEY, DEFAULT_TURNSTILE_TEST_SECRET_KEY),
(ALWAYS_FAIL_TURNSTILE_TEST_SITE_KEY, ALWAYS_FAIL_TURNSTILE_TEST_SECRET_KEY),
}
if env == "test":
if (site_key, secret_key) not in known_test_pairs:
if site_key or secret_key:
logger.info("Ignoring configured Turnstile credentials in test environment and using test keys")
site_key = DEFAULT_TURNSTILE_TEST_SITE_KEY
secret_key = DEFAULT_TURNSTILE_TEST_SECRET_KEY
elif not site_key or not secret_key:
if env == "development":
site_key = site_key or DEFAULT_TURNSTILE_TEST_SITE_KEY
secret_key = secret_key or DEFAULT_TURNSTILE_TEST_SECRET_KEY
else:
raise RuntimeError("Turnstile keys must be configured outside development/test environments")
uses_test_keys = site_key == DEFAULT_TURNSTILE_TEST_SITE_KEY and secret_key == DEFAULT_TURNSTILE_TEST_SECRET_KEY
return TurnstileSettings(
site_key=site_key,
secret_key=secret_key,
enabled=enabled,
uses_test_keys=uses_test_keys,
)
def verify_turnstile_token(token: str, remote_ip: str | None = None) -> bool:
settings = load_turnstile_settings()
if not settings.enabled:
return True
if not token.strip():
return False
if _environment() == "test":
if (
settings.site_key == ALWAYS_FAIL_TURNSTILE_TEST_SITE_KEY
and settings.secret_key == ALWAYS_FAIL_TURNSTILE_TEST_SECRET_KEY
):
return False
if settings.uses_test_keys:
return True
try:
response = requests.post(
TURNSTILE_VERIFY_URL,
data={
"secret": settings.secret_key,
"response": token,
"remoteip": remote_ip or "",
},
timeout=10,
)
response.raise_for_status()
payload = response.json()
except (requests.RequestException, ValueError) as exc:
logger.warning("Turnstile verification failed: %s", exc)
return False
return bool(payload.get("success"))
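The environment-dependent key resolution above amounts to a small decision table. The sketch below is a simplified standalone mirror of `load_turnstile_settings` (it drops the `TURNSTILE_ENABLED` flag and the known-test-pair check), using Cloudflare's documented dummy keys:

```python
TEST_SITE_KEY = "1x00000000000000000000AA"
TEST_SECRET_KEY = "1x0000000000000000000000000000000AA"

def resolve_turnstile_keys(env: str, site_key: str, secret_key: str) -> tuple[str, str]:
    # test: always force the dummy pass-through keys (no network calls)
    if env == "test":
        return TEST_SITE_KEY, TEST_SECRET_KEY
    if not site_key or not secret_key:
        # development: fill in whichever key is missing with a dummy key
        if env == "development":
            return site_key or TEST_SITE_KEY, secret_key or TEST_SECRET_KEY
        # staging/production: refuse to start without real credentials
        raise RuntimeError("Turnstile keys must be configured")
    return site_key, secret_key
```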


@@ -1,5 +1,4 @@
 from .base import BaseStrategy, StrategyConfig
-from .engine import StrategySelectionEngine
 from .laddered_put import LadderedPutStrategy, LadderSpec
 from .lease import LeaseAnalysisSpec, LeaseStrategy
 from .protective_put import ProtectivePutSpec, ProtectivePutStrategy
@@ -15,3 +14,11 @@ __all__ = [
     "LeaseStrategy",
     "StrategySelectionEngine",
 ]
+
+
+def __getattr__(name: str):
+    if name == "StrategySelectionEngine":
+        from .engine import StrategySelectionEngine
+
+        return StrategySelectionEngine
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
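This diff swaps the eager `StrategySelectionEngine` import for a module-level `__getattr__` (PEP 562), so importing `app.strategies` no longer pulls in the engine's dependency chain. A minimal demo of the mechanism with a synthetic module:

```python
import sys
import types

# Build a throwaway module to demonstrate PEP 562 lazy attribute access.
mod = types.ModuleType("lazy_demo")

def _module_getattr(name):
    # Called only when normal attribute lookup on the module fails.
    if name == "Engine":
        return "expensive-object-created-on-first-access"
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")

mod.__getattr__ = _module_getattr
sys.modules["lazy_demo"] = mod

import lazy_demo  # noqa: E402

print(lazy_demo.Engine)  # resolved lazily via the module's __getattr__
```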


@@ -9,10 +9,9 @@ from app.core.pricing.black_scholes import (
     DEFAULT_VOLATILITY,
 )
 from app.models.portfolio import LombardPortfolio
+from app.services.strategy_templates import StrategyTemplateService
 from app.strategies.base import BaseStrategy, StrategyConfig
-from app.strategies.laddered_put import LadderedPutStrategy, LadderSpec
 from app.strategies.lease import LeaseStrategy
-from app.strategies.protective_put import ProtectivePutSpec, ProtectivePutStrategy
 
 RiskProfile = Literal["conservative", "balanced", "cost_sensitive"]
@@ -34,6 +33,7 @@ class StrategySelectionEngine:
     spot_price: float = RESEARCH_GLD_SPOT
     volatility: float = RESEARCH_VOLATILITY
     risk_free_rate: float = RESEARCH_RISK_FREE_RATE
+    template_service: StrategyTemplateService | None = None
 
     def _config(self) -> StrategyConfig:
         portfolio = LombardPortfolio(
@@ -52,30 +52,12 @@
     def _strategies(self) -> list[BaseStrategy]:
         config = self._config()
-        return [
-            ProtectivePutStrategy(config, ProtectivePutSpec(label="ATM", strike_pct=1.0, months=12)),
-            ProtectivePutStrategy(config, ProtectivePutSpec(label="OTM_95", strike_pct=0.95, months=12)),
-            ProtectivePutStrategy(config, ProtectivePutSpec(label="OTM_90", strike_pct=0.90, months=12)),
-            LadderedPutStrategy(
-                config,
-                LadderSpec(
-                    label="50_50_ATM_OTM95",
-                    weights=(0.5, 0.5),
-                    strike_pcts=(1.0, 0.95),
-                    months=12,
-                ),
-            ),
-            LadderedPutStrategy(
-                config,
-                LadderSpec(
-                    label="33_33_33_ATM_OTM95_OTM90",
-                    weights=(1 / 3, 1 / 3, 1 / 3),
-                    strike_pcts=(1.0, 0.95, 0.90),
-                    months=12,
-                ),
-            ),
-            LeaseStrategy(config),
-        ]
+        template_service = self.template_service or StrategyTemplateService()
+        template_strategies = [
+            template_service.build_strategy_from_template(config, template)
+            for template in template_service.list_active_templates("GLD")
+        ]
+        return [*template_strategies, LeaseStrategy(config)]
 
     def compare_all_strategies(self) -> list[dict]:
         comparisons: list[dict] = []
@@ -149,6 +131,7 @@
                 spot_price=self.spot_price,
                 volatility=volatility,
                 risk_free_rate=self.risk_free_rate,
+                template_service=self.template_service,
             )
             recommendation = engine.recommend("balanced")
             results["volatility"].append(
@@ -169,6 +152,7 @@
                 spot_price=spot_price,
                 volatility=DEFAULT_VOLATILITY,
                 risk_free_rate=DEFAULT_RISK_FREE_RATE,
+                template_service=self.template_service,
             )
             recommendation = engine.recommend("balanced")
             results["spot_price"].append(


@@ -3,10 +3,13 @@ from __future__ import annotations
 from dataclasses import dataclass
 
 from app.strategies.base import BaseStrategy, StrategyConfig
+
+# Re-export for test access
 from app.strategies.protective_put import (
     DEFAULT_SCENARIO_CHANGES,
     ProtectivePutSpec,
     ProtectivePutStrategy,
+    gld_ounces_per_share,  # noqa: F401
 )
@@ -87,7 +90,7 @@ class LadderedPutStrategy(BaseStrategy):
             contract = leg.build_contract()
             weighted_payoff = contract.payoff(threshold_price) * weight
             total_payoff += weighted_payoff
-            floor_value += contract.strike * leg.hedge_units * weight
+            floor_value += contract.strike * contract.notional_units * weight
             leg_protection.append(
                 {
                     "weight": weight,


@@ -1,5 +1,6 @@
 from __future__ import annotations
 
+import math
 from dataclasses import dataclass
 from datetime import date, timedelta
@@ -7,6 +8,7 @@ from app.core.pricing.black_scholes import (
     BlackScholesInputs,
     black_scholes_price_and_greeks,
 )
+from app.domain.instruments import gld_ounces_per_share
 from app.models.option import Greeks, OptionContract
 from app.models.strategy import HedgingStrategy
 from app.strategies.base import BaseStrategy, StrategyConfig
@@ -47,7 +49,8 @@ class ProtectivePutStrategy(BaseStrategy):
     @property
     def hedge_units(self) -> float:
-        return self.config.portfolio.gold_value / self.config.spot_price
+        """Gold ounces to hedge (canonical portfolio weight)."""
+        return self.config.portfolio.gold_ounces
 
     @property
     def strike(self) -> float:
@@ -57,6 +60,20 @@
     def term_years(self) -> float:
         return self.spec.months / 12.0
 
+    @property
+    def gld_backing(self) -> float:
+        """GLD ounces per share for contract count calculation."""
+        return float(gld_ounces_per_share())
+
+    @property
+    def contract_count(self) -> int:
+        """Number of GLD option contracts needed.
+
+        GLD options cover 100 shares each. Each share represents ~0.0919 oz
+        (expense-ratio adjusted). Formula: ceil(gold_ounces / (100 * backing)).
+        """
+        return math.ceil(self.hedge_units / (100 * self.gld_backing))
+
     def build_contract(self) -> OptionContract:
         pricing = black_scholes_price_and_greeks(
             BlackScholesInputs(
@@ -73,8 +90,8 @@
             strike=self.strike,
             expiry=date.today() + timedelta(days=max(1, round(365 * self.term_years))),
             premium=pricing.price,
-            quantity=1.0,
-            contract_size=self.hedge_units,
+            quantity=float(self.contract_count),
+            contract_size=100 * self.gld_backing,
             underlying_price=self.config.spot_price,
             greeks=Greeks(
                 delta=pricing.delta,
@@ -114,7 +131,7 @@
             payoff_at_threshold = contract.payoff(threshold_price)
             hedged_value_at_threshold = self.config.portfolio.gold_value_at_price(threshold_price) + payoff_at_threshold
             protected_ltv = self.config.portfolio.loan_amount / hedged_value_at_threshold
-            floor_value = contract.strike * self.hedge_units
+            floor_value = contract.strike * contract.notional_units
             return {
                 "strategy": self.name,
                 "threshold_price": round(threshold_price, 2),

config/event_presets.json

@@ -0,0 +1,70 @@
{
"presets": [
{
"event_preset_id": "gld-jan-2024-selloff-v1",
"slug": "gld-jan-2024-selloff",
"display_name": "GLD January 2024 Selloff",
"symbol": "GLD",
"window_start": "2024-01-02",
"window_end": "2024-01-08",
"anchor_date": "2024-01-04",
"event_type": "selloff",
"tags": ["system", "selloff", "macro"],
"description": "Short January 2024 selloff window for deterministic synthetic event comparisons.",
"scenario_overrides": {
"lookback_days": null,
"recovery_days": null,
"default_template_slugs": [
"protective-put-atm-12m",
"protective-put-95pct-12m",
"protective-put-90pct-12m",
"ladder-50-50-atm-95pct-12m"
]
},
"created_at": "2026-03-24T00:00:00+00:00"
},
{
"event_preset_id": "gld-jan-2024-drawdown-v1",
"slug": "gld-jan-2024-drawdown",
"display_name": "GLD January 2024 Drawdown",
"symbol": "GLD",
"window_start": "2024-01-02",
"window_end": "2024-01-08",
"anchor_date": "2024-01-05",
"event_type": "selloff",
"tags": ["system", "drawdown"],
"description": "January 2024 drawdown preset for deterministic synthetic event comparison runs.",
"scenario_overrides": {
"lookback_days": 0,
"recovery_days": 0,
"default_template_slugs": [
"protective-put-atm-12m",
"ladder-50-50-atm-95pct-12m",
"ladder-33-33-33-atm-95pct-90pct-12m"
]
},
"created_at": "2026-03-24T00:00:00+00:00"
},
{
"event_preset_id": "gld-jan-2024-stress-window-v1",
"slug": "gld-jan-2024-stress-window",
"display_name": "GLD January 2024 Stress Window",
"symbol": "GLD",
"window_start": "2024-01-02",
"window_end": "2024-01-08",
"anchor_date": null,
"event_type": "stress_test",
"tags": ["system", "stress_test"],
"description": "Stress-window preset with a modest warmup and recovery tail for report scaffolding.",
"scenario_overrides": {
"lookback_days": 0,
"recovery_days": 0,
"default_template_slugs": [
"protective-put-atm-12m",
"protective-put-95pct-12m"
]
},
"created_at": "2026-03-24T00:00:00+00:00"
}
]
}


@@ -0,0 +1,253 @@
{
"templates": [
{
"template_id": "protective-put-atm-12m-v1",
"slug": "protective-put-atm-12m",
"display_name": "Protective Put ATM",
"description": "Full downside protection using a 12-month at-the-money put.",
"template_kind": "protective_put",
"status": "active",
"version": 1,
"underlying_symbol": "GLD",
"contract_mode": "continuous_units",
"legs": [
{
"leg_id": "protective-put-atm-12m-v1-leg-1",
"side": "long",
"option_type": "put",
"allocation_weight": 1.0,
"strike_rule": {
"rule_type": "spot_pct",
"value": 1.0
},
"target_expiry_days": 365,
"quantity_rule": "target_coverage_pct",
"target_coverage_pct": 1.0
}
],
"roll_policy": {
"policy_type": "hold_to_expiry",
"days_before_expiry": null,
"rebalance_on_new_deposit": false
},
"entry_policy": {
"entry_timing": "scenario_start_close",
"stagger_days": null
},
"tags": [
"system",
"protective_put",
"conservative"
],
"created_at": "2026-03-24T00:00:00+00:00",
"updated_at": "2026-03-24T00:00:00+00:00"
},
{
"template_id": "protective-put-95pct-12m-v1",
"slug": "protective-put-95pct-12m",
"display_name": "Protective Put 95%",
"description": "Lower-cost 12-month protective put using a 95% spot strike.",
"template_kind": "protective_put",
"status": "active",
"version": 1,
"underlying_symbol": "GLD",
"contract_mode": "continuous_units",
"legs": [
{
"leg_id": "protective-put-95pct-12m-v1-leg-1",
"side": "long",
"option_type": "put",
"allocation_weight": 1.0,
"strike_rule": {
"rule_type": "spot_pct",
"value": 0.95
},
"target_expiry_days": 365,
"quantity_rule": "target_coverage_pct",
"target_coverage_pct": 1.0
}
],
"roll_policy": {
"policy_type": "hold_to_expiry",
"days_before_expiry": null,
"rebalance_on_new_deposit": false
},
"entry_policy": {
"entry_timing": "scenario_start_close",
"stagger_days": null
},
"tags": [
"system",
"protective_put",
"balanced"
],
"created_at": "2026-03-24T00:00:00+00:00",
"updated_at": "2026-03-24T00:00:00+00:00"
},
{
"template_id": "protective-put-90pct-12m-v1",
"slug": "protective-put-90pct-12m",
"display_name": "Protective Put 90%",
"description": "Cost-sensitive 12-month protective put using a 90% spot strike.",
"template_kind": "protective_put",
"status": "active",
"version": 1,
"underlying_symbol": "GLD",
"contract_mode": "continuous_units",
"legs": [
{
"leg_id": "protective-put-90pct-12m-v1-leg-1",
"side": "long",
"option_type": "put",
"allocation_weight": 1.0,
"strike_rule": {
"rule_type": "spot_pct",
"value": 0.9
},
"target_expiry_days": 365,
"quantity_rule": "target_coverage_pct",
"target_coverage_pct": 1.0
}
],
"roll_policy": {
"policy_type": "hold_to_expiry",
"days_before_expiry": null,
"rebalance_on_new_deposit": false
},
"entry_policy": {
"entry_timing": "scenario_start_close",
"stagger_days": null
},
"tags": [
"system",
"protective_put",
"cost_sensitive"
],
"created_at": "2026-03-24T00:00:00+00:00",
"updated_at": "2026-03-24T00:00:00+00:00"
},
{
"template_id": "ladder-50-50-atm-95pct-12m-v1",
"slug": "ladder-50-50-atm-95pct-12m",
"display_name": "Laddered Puts 50/50 ATM + 95%",
"description": "Split hedge evenly across ATM and 95% strike 12-month puts.",
"template_kind": "laddered_put",
"status": "active",
      "version": 1,
      "underlying_symbol": "GLD",
      "contract_mode": "continuous_units",
      "legs": [
        {
          "leg_id": "ladder-50-50-atm-95pct-12m-v1-leg-1",
          "side": "long",
          "option_type": "put",
          "allocation_weight": 0.5,
          "strike_rule": {
            "rule_type": "spot_pct",
            "value": 1.0
          },
          "target_expiry_days": 365,
          "quantity_rule": "target_coverage_pct",
          "target_coverage_pct": 1.0
        },
        {
          "leg_id": "ladder-50-50-atm-95pct-12m-v1-leg-2",
          "side": "long",
          "option_type": "put",
          "allocation_weight": 0.5,
          "strike_rule": {
            "rule_type": "spot_pct",
            "value": 0.95
          },
          "target_expiry_days": 365,
          "quantity_rule": "target_coverage_pct",
          "target_coverage_pct": 1.0
        }
      ],
      "roll_policy": {
        "policy_type": "hold_to_expiry",
        "days_before_expiry": null,
        "rebalance_on_new_deposit": false
      },
      "entry_policy": {
        "entry_timing": "scenario_start_close",
        "stagger_days": null
      },
      "tags": [
        "system",
        "laddered_put",
        "balanced"
      ],
      "created_at": "2026-03-24T00:00:00+00:00",
      "updated_at": "2026-03-24T00:00:00+00:00"
    },
    {
      "template_id": "ladder-33-33-33-atm-95pct-90pct-12m-v1",
      "slug": "ladder-33-33-33-atm-95pct-90pct-12m",
      "display_name": "Laddered Puts 33/33/33 ATM + 95% + 90%",
      "description": "Three-layer 12-month put ladder across ATM, 95%, and 90% strikes.",
      "template_kind": "laddered_put",
      "status": "active",
      "version": 1,
      "underlying_symbol": "GLD",
      "contract_mode": "continuous_units",
      "legs": [
        {
          "leg_id": "ladder-33-33-33-atm-95pct-90pct-12m-v1-leg-1",
          "side": "long",
          "option_type": "put",
          "allocation_weight": 0.3333333333333333,
          "strike_rule": {
            "rule_type": "spot_pct",
            "value": 1.0
          },
          "target_expiry_days": 365,
          "quantity_rule": "target_coverage_pct",
          "target_coverage_pct": 1.0
        },
        {
          "leg_id": "ladder-33-33-33-atm-95pct-90pct-12m-v1-leg-2",
          "side": "long",
          "option_type": "put",
          "allocation_weight": 0.3333333333333333,
          "strike_rule": {
            "rule_type": "spot_pct",
            "value": 0.95
          },
          "target_expiry_days": 365,
          "quantity_rule": "target_coverage_pct",
          "target_coverage_pct": 1.0
        },
        {
          "leg_id": "ladder-33-33-33-atm-95pct-90pct-12m-v1-leg-3",
          "side": "long",
          "option_type": "put",
          "allocation_weight": 0.3333333333333333,
          "strike_rule": {
            "rule_type": "spot_pct",
            "value": 0.9
          },
          "target_expiry_days": 365,
          "quantity_rule": "target_coverage_pct",
          "target_coverage_pct": 1.0
        }
      ],
      "roll_policy": {
        "policy_type": "hold_to_expiry",
        "days_before_expiry": null,
        "rebalance_on_new_deposit": false
      },
      "entry_policy": {
        "entry_timing": "scenario_start_close",
        "stagger_days": null
      },
      "tags": [
        "system",
        "laddered_put",
        "cost_sensitive"
      ],
      "created_at": "2026-03-24T00:00:00+00:00",
      "updated_at": "2026-03-24T00:00:00+00:00"
    }
  ]
}


```diff
@@ -15,11 +15,31 @@ services:
       NICEGUI_MOUNT_PATH: ${NICEGUI_MOUNT_PATH:-/}
       NICEGUI_STORAGE_SECRET: ${NICEGUI_STORAGE_SECRET}
       CORS_ORIGINS: ${CORS_ORIGINS:-*}
+      TURNSTILE_SITE_KEY: ${TURNSTILE_SITE_KEY:-}
+      TURNSTILE_SECRET_KEY: ${TURNSTILE_SECRET_KEY:-}
     ports:
       - "${APP_BIND_ADDRESS:-127.0.0.1}:${APP_PORT:-8000}:8000"
+    networks:
+      - default
+      - proxy-net
+    volumes:
+      - vault-dash-data:/app/data
     healthcheck:
-      test: ["CMD", "curl", "-fsS", "http://127.0.0.1:8000/health"]
+      test:
+        [
+          "CMD",
+          "python",
+          "-c",
+          "import sys, urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/health', timeout=3); sys.exit(0)",
+        ]
       interval: 30s
       timeout: 5s
       retries: 5
       start_period: 20s
+
+networks:
+  proxy-net:
+    external: true
+
+volumes:
+  vault-dash-data:
```


```diff
@@ -5,7 +5,7 @@ services:
       dockerfile: Dockerfile
     image: vault-dash:dev
     ports:
-      - "8000:8000"
+      - "8100:8000"
     environment:
       APP_ENV: development
       APP_HOST: 0.0.0.0
@@ -20,6 +20,7 @@ services:
     volumes:
       - ./app:/app/app
       - ./config:/app/config
+      - vault-dash-data:/app/data
     depends_on:
       redis:
         condition: service_healthy
@@ -46,4 +47,5 @@ services:
     restart: unless-stopped

 volumes:
+  vault-dash-data:
   redis-data:
```


@@ -0,0 +1,66 @@
# BT-002 Historical Options Snapshot Provider
## What shipped
BT-002 adds a point-in-time historical options snapshot provider for backtests.
The new provider lives in `app/services/backtesting/historical_provider.py` and plugs into the same `BacktestService` / engine flow as the existing synthetic provider.
## Provider contract
The snapshot provider exposes the same backtest-facing behaviors as the synthetic provider:
- load underlying daily closes for the scenario window
- validate `ProviderRef`
- open positions at scenario start using only the entry-day snapshot
- mark open positions later using the exact same contract identity
This lets backtests swap:
- synthetic pricing: `synthetic_v1 / synthetic_bs_mid`
- observed snapshot pricing: `daily_snapshots_v1 / snapshot_mid`
## Contract-selection rules
The provider uses explicit, deterministic, point-in-time rules:
1. filter to the entry-day option chain only
2. keep contracts with expiry at or beyond the target expiry date
3. choose the nearest eligible expiry
4. within that expiry, choose the nearest strike to the target strike
5. on equal-distance strike ties:
- puts prefer the higher strike
- calls prefer the lower strike
These rules avoid lookahead bias because later snapshots are not consulted for entry selection.
## Daily mark-to-market rules
After entry, the provider marks positions using the exact same `contract_key`.
It does **not** silently substitute a different strike or expiry when the original contract is missing.
Current fallback policy:
1. use the exact same contract from the same-day snapshot
2. if missing before expiry, carry forward the previous mark for that same contract and emit a warning
3. if the valuation date is at or after expiry, settle to intrinsic value and close the position
## Data-quality tradeoffs
The current BT-002 slice intentionally keeps the data model simple:
- snapshots are assumed to provide a precomputed daily `mid`
- the provider does not currently derive mids from bid/ask pairs
- missing exact-contract marks are explicit warnings, not silent substitutions
- the engine still supports `continuous_units` sizing for snapshot-backed runs
## Known limitations / follow-up
This slice does **not** yet include:
- file-backed or external ingestion of real historical snapshot datasets
- listed-contract rounding / contract-size-aware position sizing
- persistent run-status objects beyond template-level warnings
Those follow-ups should remain explicit roadmap work rather than being implied by BT-002.


@@ -0,0 +1,543 @@
# CORE-001A Decimal Unit Value Object Architecture
## Scope
This document defines the first implementation slice for:
- **CORE-001** — Explicit Unit/Value Classes for Domain Quantities
- **CORE-001A** — Decimal Unit Value Object Foundation
The goal is to introduce a small, strict, reusable domain layer that prevents silent unit confusion across portfolio, hedge, and backtesting code.
This slice is intentionally limited to:
- enums / typed constants for currency and weight units
- immutable Decimal-based value objects
- explicit conversion methods
- explicitly allowed arithmetic operators
- fail-closed defaults for invalid or ambiguous arithmetic
This slice should **not** yet migrate every page or calculation path. That belongs to:
- **CORE-001B** — overview and hedge migration
- **CORE-001C** — backtests and event-comparison migration
- **CORE-001D** — persistence / API / integration cleanup
---
## Design goals
1. **Eliminate unit ambiguity by construction.** Values must carry unit metadata.
2. **Use Decimal for bookkeeping accuracy.** No binary floating point in core domain value objects.
3. **Fail closed by default.** Arithmetic only works when explicitly defined and unit-safe.
4. **Keep the first slice small.** Add primitives first, then migrate consumers incrementally.
5. **Make edge conversions explicit.** Float-heavy libraries remain at the boundaries only.
---
## Core design decisions
### 1. Canonical numeric type: `Decimal`
All domain value objects introduced in this slice should use Python `Decimal` as the canonical numeric representation.
Implementation guidance:
- never construct `Decimal` directly from `float`
- construct from:
- `str`
- `int`
- existing `Decimal`
- introduce a small helper such as `to_decimal(value: Decimal | int | str) -> Decimal`
- if a float enters from an external provider, convert it at the edge using a deliberate helper, e.g. `decimal_from_float(value: float) -> Decimal`
### 2. Immutable value objects
Use frozen dataclasses for predictable behavior and easy testing.
Recommended style:
- `@dataclass(frozen=True, slots=True)`
- validation in `__post_init__`
- methods return new values rather than mutating in place
### 3. Unit metadata is mandatory
A raw numeric value without unit/currency metadata must not be considered a domain quantity.
Examples:
- `Money(amount=Decimal("1000"), currency=BaseCurrency.USD)`
- `GoldQuantity(amount=Decimal("220"), unit=WeightUnit.OUNCE_TROY)`
- `PricePerWeight(amount=Decimal("4400"), currency=BaseCurrency.USD, per_unit=WeightUnit.OUNCE_TROY)`
### 4. Unsupported operators should fail
Do not make these classes behave like plain numbers.
Examples of operations that should fail unless explicitly defined:
- `Money + GoldQuantity`
- `GoldQuantity + PricePerWeight`
- `Money * Money`
- `PricePerWeight + Money`
- adding values with different currencies without explicit conversion support
### 5. Explicit conversions only
Unit changes must be requested directly.
Examples:
- `gold.to_unit(WeightUnit.GRAM)`
- `price.to_unit(WeightUnit.KILOGRAM)`
- `money.assert_currency(BaseCurrency.USD)`
---
## Proposed module layout
Recommended first location:
- `app/domain/units.py`
Optional supporting files if it grows:
- `app/domain/__init__.py`
- `app/domain/decimal_utils.py`
- `app/domain/exceptions.py`
Reasoning:
- this is core domain logic, not page logic
- it should be usable from models, services, calculations, and backtesting
- avoid burying it inside `app/models/` if the types are broader than persistence models
---
## Proposed enums
### `BaseCurrency`
```python
from enum import StrEnum


class BaseCurrency(StrEnum):
    USD = "USD"
    EUR = "EUR"
    CHF = "CHF"
```
Notes:
- omit `Invalid`; prefer validation failure over sentinel invalid values
- add currencies only when needed
### `WeightUnit`
```python
from enum import StrEnum


class WeightUnit(StrEnum):
    GRAM = "g"
    KILOGRAM = "kg"
    OUNCE_TROY = "ozt"
```
Notes:
- use **troy ounce**, not generic `oz`, because gold math should be explicit
- naming should be domain-precise to avoid ounce ambiguity
---
## Proposed conversion constants
Use Decimal constants, not floats.
```python
GRAMS_PER_KILOGRAM = Decimal("1000")
GRAMS_PER_TROY_OUNCE = Decimal("31.1034768")
```
Recommended helper:
```python
def weight_unit_factor(unit: WeightUnit) -> Decimal:
    ...
```
Interpretation:
- factor returns grams per given unit
- conversions can normalize through grams
Example:
```python
def convert_weight(amount: Decimal, from_unit: WeightUnit, to_unit: WeightUnit) -> Decimal:
    grams = amount * weight_unit_factor(from_unit)
    return grams / weight_unit_factor(to_unit)
```
---
## Proposed value objects
### `Money`
```python
@dataclass(frozen=True, slots=True)
class Money:
    amount: Decimal
    currency: BaseCurrency
```
#### Allowed operations
- `Money + Money -> Money` if same currency
- `Money - Money -> Money` if same currency
- `Money * Decimal -> Money`
- `Money / Decimal -> Money`
- unary negation
- equality on same currency and amount
#### Must fail
- addition/subtraction across different currencies
- multiplication by `Money`
- addition to non-money quantities
- division by `Money` unless a future ratio type is added explicitly
#### Recommended methods
- `zero(currency: BaseCurrency) -> Money`
- `assert_currency(currency: BaseCurrency) -> Money`
- `quantize_cents() -> Money` (optional, for display/persistence edges only)
### `Weight`
Use a neutral weight-bearing quantity rather than a `Gold` type for the foundation layer.
```python
@dataclass(frozen=True, slots=True)
class Weight:
    amount: Decimal
    unit: WeightUnit
```
#### Allowed operations
- `Weight + Weight -> Weight` after explicit normalization or same-unit conversion inside method
- `Weight - Weight -> Weight`
- `Weight * Decimal -> Weight`
- `Weight / Decimal -> Weight`
- `to_unit(unit: WeightUnit) -> Weight`
#### Must fail
- implicit multiplication with `Weight`
- addition to `Money`
- comparison without normalization if not explicitly handled
### `PricePerWeight`
```python
@dataclass(frozen=True, slots=True)
class PricePerWeight:
    amount: Decimal
    currency: BaseCurrency
    per_unit: WeightUnit
```
Interpretation:
- `4400 USD / ozt`
- `141.46 USD / g`
#### Allowed operations
- `PricePerWeight.to_unit(unit: WeightUnit) -> PricePerWeight`
- `Weight * PricePerWeight -> Money`
- `PricePerWeight * Weight -> Money`
- `PricePerWeight * Decimal -> PricePerWeight` (optional; acceptable if useful)
#### Must fail
- adding prices of different currencies without explicit conversion
- adding `PricePerWeight` to `Money`
- multiplying `PricePerWeight * PricePerWeight`
### `AssetQuantity`
Backtesting currently uses neutral `underlying_units`, while live pages also use physical-gold semantics. For the foundation layer, keep this explicit.
Two viable approaches:
#### Option A: only introduce `Weight` in CORE-001A
Pros:
- simpler first slice
- cleanly solves current gold/spot confusion first
Cons:
- historical `underlying_units` remain primitive a bit longer
#### Option B: also introduce a neutral counted-asset quantity
```python
@dataclass(frozen=True, slots=True)
class AssetQuantity:
    amount: Decimal
    symbol: str
```
Recommendation:
- **Option A for CORE-001A**
- defer `AssetQuantity` to CORE-001C where historical scenario boundaries are migrated
---
## Explicit operator design
The main rule is:
> only define operators that are unquestionably unit-safe and domain-obvious.
Recommended first operator set:
```python
Weight * PricePerWeight -> Money
PricePerWeight * Weight -> Money
Money + Money -> Money
Money - Money -> Money
Weight + Weight -> Weight
Weight - Weight -> Weight
```
Example pseudocode:
```python
def __mul__(self, other: object) -> Money:
    if isinstance(other, PricePerWeight):
        adjusted_price = other.to_unit(self.unit)
        return Money(
            amount=self.amount * adjusted_price.amount,
            currency=adjusted_price.currency,
        )
    return NotImplemented
```
Important note:
- returning `NotImplemented` is correct for unsupported operator pairs
- if the reverse operation is also unsupported, Python will raise a `TypeError`
- that is the desired fail-closed behavior
---
## Validation rules
Recommended invariants:
### Money
- currency required
- amount finite Decimal
### Weight
- unit required
- amount finite Decimal
- negative values allowed only if the domain needs them; otherwise reject at construction
Recommendation:
- allow negative values in the primitive type
- enforce business positivity at higher-level models where appropriate
### PricePerWeight
- currency required
- per-unit required
- amount must be non-negative for current use cases
---
## Serialization guidance
For CORE-001A, serialization can remain explicit and boring.
Recommended shape:
```python
{
    "amount": "4400.00",
    "currency": "USD",
    "per_unit": "ozt"
}
```
Guidelines:
- serialize `Decimal` as string, not float
- keep enum serialization stable and human-readable
- avoid hidden coercion in JSON helpers
Persistence migration itself belongs primarily to **CORE-001D**, but the foundational classes should be serialization-friendly.
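A minimal sketch of the Decimal-as-string round trip (the helper names are hypothetical; the key point is that `Decimal(str(x))` round-trips exactly, while a JSON float would not):

```python
from decimal import Decimal


def money_to_json_dict(amount: Decimal, currency: str) -> dict:
    """Serialize a money-like value with Decimal rendered as a string."""
    return {"amount": str(amount), "currency": currency}


def money_from_json_dict(payload: dict) -> tuple[Decimal, str]:
    """Parse the payload back without hidden coercion."""
    amount = payload["amount"]
    if not isinstance(amount, str):
        raise TypeError("amount must be serialized as a string, not a JSON number")
    return Decimal(amount), payload["currency"]
```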
---
## Interop boundaries
Several existing libraries/services are float-heavy:
- yfinance payloads
- chart libraries
- some existing calculations in `app/core/`
- option pricing inputs/outputs
CORE-001A should establish a clear policy:
### Inside the core domain
- use Decimal-bearing unit-safe types
### At external edges
- accept floats only in adapters
- immediately convert to Decimal-bearing domain types
- convert back to floats only when required by third-party/chart APIs
Recommended helper names:
- `decimal_from_provider_float(...)`
- `money_from_float_usd(...)`
- `price_per_ounce_usd_from_float(...)`
- `to_chart_float(...)`
The helper names should make the boundary obvious.
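A sketch of the two edge directions, using the suggested naming style (the `places` default is an assumption; routing floats through `str()` avoids dragging full binary-float expansions into Decimals):

```python
from decimal import Decimal


def decimal_from_provider_float(value: float, places: int = 9) -> Decimal:
    """Edge-only conversion: go through str() and quantize so binary
    float noise never leaks into domain Decimals."""
    return Decimal(str(value)).quantize(Decimal(1).scaleb(-places))


def to_chart_float(value: Decimal) -> float:
    """Reverse edge conversion for float-only chart APIs."""
    return float(value)
```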
---
## Suggested implementation order
### Step 1: add foundational helpers
- `to_decimal(...)`
- `decimal_from_float(...)`
- weight conversion constants and helpers
### Step 2: add enums
- `BaseCurrency`
- `WeightUnit`
### Step 3: add frozen value objects
- `Money`
- `Weight`
- `PricePerWeight`
### Step 4: add unit tests first
Test at least:
- `Weight(Decimal("1"), OUNCE_TROY).to_unit(GRAM)`
- `PricePerWeight(USD/ozt).to_unit(GRAM)`
- `Weight * PricePerWeight -> Money`
- `Money + Money` same currency succeeds
- `Money + Money` different currency fails
- invalid operator combinations raise `TypeError`
- decimal construction helpers reject unsafe or ambiguous inputs, if implemented
### Step 5: add one thin usage seam
Before full migration, wire one small non-invasive helper or calculation path to prove ergonomics.
Recommendation:
- introduce a helper used by the overview quote fallback path or a standalone calculation test first
- keep page-wide migration for CORE-001B
---
## Proposed first test file
- `tests/test_units.py`
Recommended test groups:
1. Decimal normalization helpers
2. Weight conversions
3. Price-per-weight conversions
4. Unit-safe multiplication
5. Currency mismatch failures
6. Unsupported operator failures
7. Serialization shape helpers if implemented in this slice
---
## Migration notes for later slices
### CORE-001B
Migrate:
- overview spot resolution / fallback
- margin-call math
- hedge starting-position math
- hedge scenario contribution math where unit-bearing values are mixed
### CORE-001C
Migrate:
- workspace `gold_value -> historical underlying_units` conversion
- backtest scenario portfolio construction
- event comparison scenario materialization
- explicit distinction between:
- physical gold weight
- USD notional collateral value
- historical underlying units
### CORE-001D
Clean up:
- persistence schemas
- API serialization
- cache payloads
- third-party provider adapters
---
## Non-goals for CORE-001A
Do not attempt all of this in the first slice:
- full currency FX conversion
- replacing every float in the app
- redesigning all existing Pydantic models at once
- full options contract quantity modeling
- chart formatting overhaul
- database migration complexity
---
## Recommendation
Implement `CORE-001A` as a small, strict, test-first domain package introducing:
- `BaseCurrency`
- `WeightUnit`
- `Money`
- `Weight`
- `PricePerWeight`
all backed by `Decimal`, immutable, with explicit conversions and only a tiny allowed operator surface.
That creates the foundation needed to safely migrate the visible calculation paths in `CORE-001B` and the historical scenario paths in `CORE-001C` without repeating the unit-confusion bugs already discovered in overview and backtesting.


@@ -0,0 +1,139 @@
# CORE-001D Boundary and Persistence Cleanup Plan
## Goal
Make Decimal/unit-safe domain types reliable at external boundaries without forcing a risky full-model rewrite.
This slice follows:
- `CORE-001A` Decimal/unit foundations
- `CORE-001B` overview + hedge migration
- `CORE-001C` backtests + event comparison migration
## Why this exists
The visible math paths now use unit-safe domain helpers, but several persistence and adapter seams still store or pass raw floats. That is acceptable at the edge for now, but the boundaries should be explicit, tested, and stable.
## Main hotspots found in the current codebase
### 1. Portfolio persistence still serializes raw floats
Files:
- `app/models/portfolio.py`
- workspace-backed `portfolio_config.json` payloads
Current state:
- `PortfolioConfig` stores float fields
- `PortfolioRepository.save/load` round-trips plain JSON numbers
Risk:
- persistence format is implicit
- Decimal-safe internal math can be silently rounded or reinterpreted at reload boundaries
### 2. Legacy portfolio/domain types still expose float-heavy APIs
Files:
- `app/models/portfolio.py`
- `app/services/settings_status.py`
- `app/services/alerts.py`
Current state:
- `LombardPortfolio` remains float-based for compatibility
- several services convert Decimal-backed results back to float for formatting/threshold checks
Risk:
- domain-safe calculations exist, but callers can still drift into ambiguous float semantics
### 3. Backtesting UI/service seams still take float inputs
Files:
- `app/services/backtesting/ui_service.py`
- `app/services/backtesting/comparison.py`
- `app/services/event_comparison_ui.py`
- `app/domain/backtesting_math.py`
Current state:
- typed materialization exists, but service entrypoints still accept `float`
- conversions back to float happen for model compatibility
Risk:
- callers can bypass intent and reintroduce unit ambiguity at service boundaries
### 4. Provider/cache adapters use generic JSON and float payloads
Files:
- `app/services/cache.py`
- `app/services/price_feed.py`
- `app/services/data_service.py`
- `app/services/backtesting/historical_provider.py`
Current state:
- cache serialization supports datetime only via custom default
- provider payloads are mostly raw floats, dicts, and lists
Risk:
- external payloads are fine to keep float-heavy, but conversion into domain-safe structures should happen at named boundaries and be test-covered
## Recommended implementation order
### Step 1: make persistence format explicit
Target:
- `PortfolioConfig` JSON shape
- workspace portfolio JSON shape
Deliverables:
- explicit serialization helpers for persisted money/price/weight-like fields
- tests proving stable round-trip behavior
- docs for JSON number vs string decisions
Preferred near-term approach:
- keep external JSON ergonomic
- document exact persisted field meanings and units
- ensure reload path normalizes through a single constructor/adapter
### Step 2: add named boundary adapters
Target:
- portfolio persistence load/save
- price feed quote ingestion
- historical close ingestion
- options-chain normalization
Deliverables:
- helper functions with explicit names such as `*_from_provider_payload(...)` or `*_to_persistence_dict(...)`
- tests proving conversion behavior and fail-closed validation
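A sketch of one such named boundary pair for persisted money values, with the fail-closed validation called for above. The helper names follow the suggested `*_to_persistence_dict` / `*_from_*` convention but are otherwise hypothetical:

```python
from decimal import Decimal, InvalidOperation


def money_to_persistence_dict(amount: Decimal, currency: str) -> dict:
    """Named persistence seam: unit-aware, string-encoded storage."""
    return {"amount": str(amount), "currency": currency}


def money_from_persistence_dict(payload: dict) -> tuple[Decimal, str]:
    """Fail closed: flat legacy payloads or JSON floats are rejected loudly."""
    amount = payload.get("amount")
    currency = payload.get("currency")
    if not isinstance(amount, str) or not isinstance(currency, str):
        raise ValueError(f"unsupported persisted money payload: {payload!r}")
    try:
        return Decimal(amount), currency
    except InvalidOperation as exc:
        raise ValueError(f"invalid persisted amount: {amount!r}") from exc
```

Rejecting non-string amounts instead of coercing them matches the pre-launch rollout policy below: old-format payloads fail loudly rather than being silently normalized.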
### Step 3: reduce raw-float service entrypoints where practical
Target:
- backtesting UI/comparison service inputs
- settings/alerts status helpers
Deliverables:
- services accept typed or normalized values earlier
- float conversion, where still required, happens at the last compatibility seam
## Non-goals
- replacing every float in every Pydantic/dataclass immediately
- redesigning third-party payload models wholesale
- changing public UI formatting behavior just for type purity
## First candidate sub-slices
### CORE-001D1 — Portfolio persistence serialization seam
- make `PortfolioConfig` persistence round-trip explicit
- add serialization tests for workspace-scoped config files
### CORE-001D2 — Provider and cache adapter boundaries
- document/test cache + provider conversion seams
- ensure raw external floats are normalized before domain math
### CORE-001D3 — Service entrypoint tightening
- narrow float-heavy internal service APIs where easy and low-risk
## Success criteria
- persistence schema for bookkeeping-sensitive fields is explicit and tested
- Decimal/unit-safe values cross boundaries through named adapters
- remaining float-heavy hotspots are either removed or intentionally documented as edge-only
- no regression in existing browser-visible flows
## Pre-launch rollout policy
For the current pre-launch stage, the storage schema may make a clean breaking transition.
That means:
- newly persisted numeric domain values should use explicit structured unit-aware storage
- old flat storage payloads do not need compatibility or migration yet
- invalid or old-format payloads should fail loudly instead of being silently normalized
A real migration path should be introduced later, once persistence is considered live for users.


@@ -0,0 +1,780 @@
# Databento Historical Data Integration Plan
## Overview
Integrate Databento historical API for backtesting and scenario comparison pages, replacing yfinance for historical data on these pages. The integration will support configurable start prices/values independent of portfolio settings, with intelligent caching to avoid redundant downloads.
## Architecture
### Current State
- **Backtest page** (`app/pages/backtests.py`): Uses `YFinanceHistoricalPriceSource` via `BacktestPageService`
- **Event comparison** (`app/pages/event_comparison.py`): Uses seeded event presets with yfinance data
- **Historical provider** (`app/services/backtesting/historical_provider.py`): Protocol-based architecture with `YFinanceHistoricalPriceSource` and `SyntheticHistoricalProvider`
### Target State
- Add `DatabentoHistoricalPriceSource` implementing `HistoricalPriceSource` protocol
- Add `DatabentoHistoricalOptionSource` implementing `OptionSnapshotSource` protocol (future)
- Smart caching layer: only re-download when parameters change
- Pre-seeded scenario data via batch downloads
## Databento Data Sources
### Underlyings and Datasets
| Instrument | Dataset | Symbol Format | Notes |
|------------|---------|----------------|-------|
| GLD ETF | `XNAS.BASIC` or `EQUS.PLUS` | `GLD` | US equities consolidated |
| GC=F Futures | `GLBX.MDP3` | `GC` + continuous or `GC=F` raw | Gold futures |
| Gold Options | `OPRA.PILLAR` | `GLD` underlying | Options on GLD ETF |
### Schemas
| Schema | Use Case | Fields |
|--------|----------|--------|
| `ohlcv-1d` | Daily backtesting | open, high, low, close, volume |
| `ohlcv-1h` | Intraday scenarios | Hourly bars |
| `trades` | Tick-level analysis | Full trade data |
| `definition` | Instrument metadata | Expiries, strike prices, tick sizes |
## Implementation Plan
### Phase 1: Historical Price Source (DATA-DB-001)
**File:** `app/services/backtesting/databento_source.py`
```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import date, timedelta
from pathlib import Path
import hashlib
import json

from app.services.backtesting.historical_provider import DailyClosePoint, HistoricalPriceSource

try:
    import databento as db

    DATABENTO_AVAILABLE = True
except ImportError:
    DATABENTO_AVAILABLE = False


@dataclass(frozen=True)
class DatabentoCacheKey:
    """Cache key for Databento data requests."""

    dataset: str
    symbol: str
    schema: str
    start_date: date
    end_date: date

    def cache_path(self, cache_dir: Path) -> Path:
        key_str = f"{self.dataset}_{self.symbol}_{self.schema}_{self.start_date}_{self.end_date}"
        key_hash = hashlib.sha256(key_str.encode()).hexdigest()[:16]
        return cache_dir / f"dbn_{key_hash}.parquet"

    def metadata_path(self, cache_dir: Path) -> Path:
        key_str = f"{self.dataset}_{self.symbol}_{self.schema}_{self.start_date}_{self.end_date}"
        key_hash = hashlib.sha256(key_str.encode()).hexdigest()[:16]
        return cache_dir / f"dbn_{key_hash}_meta.json"


@dataclass
class DatabentoSourceConfig:
    """Configuration for Databento data source."""

    api_key: str | None = None  # Falls back to DATABENTO_API_KEY env var
    cache_dir: Path = Path(".cache/databento")
    dataset: str = "XNAS.BASIC"
    schema: str = "ohlcv-1d"
    stype_in: str = "raw_symbol"
    # Re-download threshold
    max_cache_age_days: int = 30


class DatabentoHistoricalPriceSource(HistoricalPriceSource):
    """Databento-based historical price source for backtesting."""

    def __init__(self, config: DatabentoSourceConfig | None = None) -> None:
        if not DATABENTO_AVAILABLE:
            raise RuntimeError("databento package required: pip install databento")
        self.config = config or DatabentoSourceConfig()
        self.config.cache_dir.mkdir(parents=True, exist_ok=True)
        self._client: db.Historical | None = None

    @property
    def client(self) -> db.Historical:
        if self._client is None:
            self._client = db.Historical(key=self.config.api_key)
        return self._client
    def _load_from_cache(self, key: DatabentoCacheKey) -> list[DailyClosePoint] | None:
        """Load cached data if available and fresh."""
        cache_file = key.cache_path(self.config.cache_dir)
        meta_file = key.metadata_path(self.config.cache_dir)
        if not cache_file.exists() or not meta_file.exists():
            return None
        try:
            with open(meta_file) as f:
                meta = json.load(f)
            # Check cache age
            download_date = date.fromisoformat(meta["download_date"])
            age_days = (date.today() - download_date).days
            if age_days > self.config.max_cache_age_days:
                return None
            # Check parameters match
            if meta["dataset"] != key.dataset or meta["symbol"] != key.symbol:
                return None
            # Load parquet and convert
            import pandas as pd

            df = pd.read_parquet(cache_file)
            return self._df_to_daily_points(df)
        except Exception:
            return None

    def _save_to_cache(self, key: DatabentoCacheKey, df: pd.DataFrame) -> None:
        """Save data to cache."""
        cache_file = key.cache_path(self.config.cache_dir)
        meta_file = key.metadata_path(self.config.cache_dir)
        df.to_parquet(cache_file, index=False)
        meta = {
            "download_date": date.today().isoformat(),
            "dataset": key.dataset,
            "symbol": key.symbol,
            "schema": key.schema,
            "start_date": key.start_date.isoformat(),
            "end_date": key.end_date.isoformat(),
            "rows": len(df),
        }
        with open(meta_file, "w") as f:
            json.dump(meta, f, indent=2)

    def _fetch_from_databento(self, key: DatabentoCacheKey) -> pd.DataFrame:
        """Fetch data from Databento API."""
        data = self.client.timeseries.get_range(
            dataset=key.dataset,
            symbols=key.symbol,
            schema=key.schema,
            start=key.start_date.isoformat(),
            end=(key.end_date + timedelta(days=1)).isoformat(),  # Exclusive end
            stype_in=self.config.stype_in,
        )
        return data.to_df()

    def _df_to_daily_points(self, df: pd.DataFrame) -> list[DailyClosePoint]:
        """Convert DataFrame to DailyClosePoint list."""
        points = []
        for idx, row in df.iterrows():
            # Databento ohlcv schema has ts_event as timestamp
            ts = row.get("ts_event", row.get("ts_recv", idx))
            if hasattr(ts, "date"):
                row_date = ts.date()
            else:
                row_date = date.fromisoformat(str(ts)[:10])
            # to_df(pretty_px=True), the library default, already converts
            # the raw 1e-9-scaled int64 prices to float dollars, so no
            # further scaling is applied here.
            close = float(row["close"])
            points.append(DailyClosePoint(date=row_date, close=close))
        return sorted(points, key=lambda p: p.date)
    def load_daily_closes(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
        """Load daily closing prices from Databento (with caching)."""
        # Map symbols to datasets
        dataset = self._resolve_dataset(symbol)
        databento_symbol = self._resolve_symbol(symbol)
        key = DatabentoCacheKey(
            dataset=dataset,
            symbol=databento_symbol,
            schema=self.config.schema,
            start_date=start_date,
            end_date=end_date,
        )
        # Try cache first
        cached = self._load_from_cache(key)
        if cached is not None:
            return cached
        # Fetch from Databento and cache the result
        df = self._fetch_from_databento(key)
        self._save_to_cache(key, df)
        return self._df_to_daily_points(df)

    def _resolve_dataset(self, symbol: str) -> str:
        """Resolve symbol to Databento dataset."""
        symbol_upper = symbol.upper()
        if symbol_upper in ("GLD", "GLDM", "IAU"):
            return "XNAS.BASIC"  # ETFs on Nasdaq
        elif symbol_upper in ("GC=F", "GC", "GOLD"):
            return "GLBX.MDP3"  # CME gold futures
        elif symbol_upper == "XAU":
            return "XNAS.BASIC"  # Treat as GLD proxy
        return self.config.dataset  # Use configured default

    def _resolve_symbol(self, symbol: str) -> str:
        """Resolve vault-dash symbol to Databento symbol."""
        symbol_upper = symbol.upper()
        if symbol_upper == "XAU":
            return "GLD"  # Proxy XAU via GLD prices
        elif symbol_upper == "GC=F":
            return "GC"  # Use parent symbol for continuous contracts
        return symbol_upper

    def get_cost_estimate(self, symbol: str, start_date: date, end_date: date) -> float:
        """Estimate cost in USD for a data request."""
        dataset = self._resolve_dataset(symbol)
        databento_symbol = self._resolve_symbol(symbol)
        try:
            return self.client.metadata.get_cost(
                dataset=dataset,
                symbols=databento_symbol,
                schema=self.config.schema,
                start=start_date.isoformat(),
                end=(end_date + timedelta(days=1)).isoformat(),
            )
        except Exception:
            return 0.0  # Return 0 if cost estimation fails

class DatabentoBacktestProvider:
    """Databento-backed historical provider for synthetic backtesting."""

    provider_id = "databento_v1"
    pricing_mode = "synthetic_bs_mid"

    def __init__(
        self,
        price_source: DatabentoHistoricalPriceSource,
        implied_volatility: float = 0.16,
        risk_free_rate: float = 0.045,
    ) -> None:
        self.price_source = price_source
        self.implied_volatility = implied_volatility
        self.risk_free_rate = risk_free_rate

    def load_history(self, symbol: str, start_date: date, end_date: date) -> list[DailyClosePoint]:
        return self.price_source.load_daily_closes(symbol, start_date, end_date)

    # ... rest delegates to SyntheticHistoricalProvider logic
```
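The "only re-download when parameters change" behavior falls out of the deterministic cache-file naming. The standalone function below mirrors the `DatabentoCacheKey.cache_path` naming scheme from the plan (it is a pure illustration, not the module itself):

```python
import hashlib
from datetime import date


def cache_file_name(dataset: str, symbol: str, schema: str, start: date, end: date) -> str:
    """Mirror of the DatabentoCacheKey.cache_path naming scheme."""
    key_str = f"{dataset}_{symbol}_{schema}_{start}_{end}"
    key_hash = hashlib.sha256(key_str.encode()).hexdigest()[:16]
    return f"dbn_{key_hash}.parquet"


# Identical parameters map to the same cache file; changing any
# parameter (here the end date) produces a different file, which
# triggers a fresh download.
a = cache_file_name("XNAS.BASIC", "GLD", "ohlcv-1d", date(2024, 1, 1), date(2024, 12, 31))
b = cache_file_name("XNAS.BASIC", "GLD", "ohlcv-1d", date(2024, 1, 1), date(2024, 12, 31))
c = cache_file_name("XNAS.BASIC", "GLD", "ohlcv-1d", date(2024, 1, 1), date(2025, 6, 30))
```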
### Phase 2: Backtest Settings Model (DATA-DB-002)
**File:** `app/models/backtest_settings.py`
```python
from dataclasses import dataclass, field
from datetime import date
from uuid import UUID

from app.models.backtest import ProviderRef


@dataclass(frozen=True)
class BacktestSettings:
    """User-configurable backtest settings (independent of portfolio)."""

    # Scenario identification
    settings_id: UUID
    name: str

    # Data source configuration
    data_source: str = "databento"  # "databento", "yfinance", "synthetic"
    dataset: str = "XNAS.BASIC"
    schema: str = "ohlcv-1d"

    # Date range
    start_date: date = date(2024, 1, 1)
    end_date: date = date(2024, 12, 31)

    # Independent scenario configuration (not derived from portfolio)
    underlying_symbol: str = "GLD"
    start_price: float = 0.0  # 0 = auto-derive from first close
    underlying_units: float = 1000.0  # Independent of portfolio
    loan_amount: float = 0.0  # Debt position for LTV analysis
    margin_call_ltv: float = 0.75

    # Templates to test
    template_slugs: tuple[str, ...] = field(default_factory=lambda: ("protective-put-atm-12m",))

    # Provider reference
    provider_ref: ProviderRef = field(default_factory=lambda: ProviderRef(
        provider_id="databento_v1",
        pricing_mode="synthetic_bs_mid",
    ))

    # Cache metadata
    cache_key: str = ""  # Populated when data is fetched
    data_cost_usd: float = 0.0  # Cost of last data fetch
```
### Phase 3: Cache Management (DATA-DB-003)
**File:** `app/services/backtesting/databento_cache.py`
```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, timedelta
from pathlib import Path
import hashlib
import json
from app.services.backtesting.databento_source import DatabentoCacheKey
@dataclass
class CacheEntry:
"""Metadata for a cached Databento dataset."""
cache_key: DatabentoCacheKey
file_path: Path
download_date: date
size_bytes: int
cost_usd: float
class DatabentoCacheManager:
"""Manages Databento data cache lifecycle."""
def __init__(self, cache_dir: Path = Path(".cache/databento")) -> None:
self.cache_dir = cache_dir
self.cache_dir.mkdir(parents=True, exist_ok=True)
def list_entries(self) -> list[CacheEntry]:
"""List all cached entries."""
entries = []
for meta_file in self.cache_dir.glob("*_meta.json"):
with open(meta_file) as f:
meta = json.load(f)
cache_file = meta_file.with_name(meta_file.stem.replace("_meta", "") + ".parquet")
if cache_file.exists():
entries.append(CacheEntry(
cache_key=DatabentoCacheKey(
dataset=meta["dataset"],
symbol=meta["symbol"],
schema=meta["schema"],
start_date=date.fromisoformat(meta["start_date"]),
end_date=date.fromisoformat(meta["end_date"]),
),
file_path=cache_file,
download_date=date.fromisoformat(meta["download_date"]),
size_bytes=cache_file.stat().st_size,
cost_usd=0.0, # Would need to track separately
))
return entries
def invalidate_expired(self, max_age_days: int = 30) -> list[Path]:
"""Remove cache entries older than max_age_days."""
removed = []
cutoff = date.today() - timedelta(days=max_age_days)
for entry in self.list_entries():
if entry.download_date < cutoff:
entry.file_path.unlink(missing_ok=True)
meta_file = entry.file_path.with_name(entry.file_path.stem + "_meta.json")
meta_file.unlink(missing_ok=True)
removed.append(entry.file_path)
return removed
def clear_all(self) -> int:
"""Clear all cached data."""
count = 0
for file in self.cache_dir.glob("*"):
if file.is_file():
file.unlink()
count += 1
return count
def get_cache_size(self) -> int:
"""Get total cache size in bytes."""
return sum(f.stat().st_size for f in self.cache_dir.glob("*") if f.is_file())
    def should_redownload(self, key: DatabentoCacheKey, params_changed: bool, max_age_days: int = 30) -> bool:
        """Determine if data should be re-downloaded."""
        cache_file = key.cache_path(self.cache_dir)
        meta_file = key.metadata_path(self.cache_dir)
        if params_changed:
            return True
        if not cache_file.exists() or not meta_file.exists():
            return True
        try:
            with open(meta_file) as f:
                meta = json.load(f)
            download_date = date.fromisoformat(meta["download_date"])
            age_days = (date.today() - download_date).days
            return age_days > max_age_days
        except (OSError, ValueError, KeyError):
            return True
```
### Phase 4: Backtest Page UI Updates (DATA-DB-004)
**Key changes to `app/pages/backtests.py`:**
1. Add Databento configuration section
2. Add independent start price/units inputs
3. Show estimated data cost before fetching
4. Add a cache status indicator
```python
# In backtests.py
with ui.card().classes("w-full ..."):
ui.label("Data Source").classes("text-lg font-semibold")
data_source = ui.select(
{"databento": "Databento (historical market data)", "yfinance": "Yahoo Finance (free, limited)"},
value="databento",
label="Data source",
).classes("w-full")
# Databento-specific settings
with ui.column().classes("w-full gap-2").bind_visibility_from(data_source, "value", lambda v: v == "databento"):
ui.label("Dataset configuration").classes("text-sm text-slate-500")
dataset_select = ui.select(
{"XNAS.BASIC": "Nasdaq Basic (GLD)", "GLBX.MDP3": "CME Globex (GC=F)"},
value="XNAS.BASIC",
label="Dataset",
).classes("w-full")
schema_select = ui.select(
{"ohlcv-1d": "Daily bars", "ohlcv-1h": "Hourly bars"},
value="ohlcv-1d",
label="Resolution",
).classes("w-full")
# Cost estimate
cost_label = ui.label("Estimated cost: $0.00").classes("text-sm text-slate-500")
# Cache status
cache_status = ui.label("").classes("text-xs text-slate-400")
# Independent scenario settings
with ui.card().classes("w-full ..."):
ui.label("Scenario Configuration").classes("text-lg font-semibold")
ui.label("Configure start values independent of portfolio settings").classes("text-sm text-slate-500")
start_price_input = ui.number(
"Start price",
value=0.0,
min=0.0,
step=0.01,
).classes("w-full")
ui.label("Set to 0 to auto-derive from first historical close").classes("text-xs text-slate-400 -mt-2")
underlying_units_input = ui.number(
"Underlying units",
value=1000.0,
min=0.0001,
step=0.0001,
).classes("w-full")
loan_amount_input = ui.number(
"Loan amount ($)",
value=0.0,
min=0.0,
step=1000,
).classes("w-full")
```
### Phase 5: Scenario Pre-Seeding (DATA-DB-005)
**File:** `app/services/backtesting/scenario_bulk_download.py`
```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from pathlib import Path
import json
from app.services.backtesting.databento_source import DatabentoCacheKey
try:
import databento as db
DATABENTO_AVAILABLE = True
except ImportError:
DATABENTO_AVAILABLE = False
@dataclass
class ScenarioPreset:
"""Pre-configured scenario ready for backtesting."""
preset_id: str
display_name: str
symbol: str
dataset: str
window_start: date
window_end: date
default_start_price: float # First close in window
default_templates: tuple[str, ...]
event_type: str
tags: tuple[str, ...]
description: str
def download_historical_presets(
client: db.Historical,
presets: list[ScenarioPreset],
output_dir: Path,
) -> dict[str, Path]:
"""Bulk download historical data for all presets.
Returns mapping of preset_id to cached file path.
"""
results = {}
for preset in presets:
cache_key = DatabentoCacheKey(
dataset=preset.dataset,
symbol=preset.symbol,
schema="ohlcv-1d",
start_date=preset.window_start,
end_date=preset.window_end,
)
cache_file = cache_key.cache_path(output_dir)
# Download if not cached
if not cache_file.exists():
data = client.timeseries.get_range(
dataset=preset.dataset,
symbols=preset.symbol,
schema="ohlcv-1d",
start=preset.window_start.isoformat(),
end=preset.window_end.isoformat(),
)
data.to_parquet(cache_file)
results[preset.preset_id] = cache_file
return results
def create_default_presets() -> list[ScenarioPreset]:
"""Create default scenario presets for gold hedging research."""
return [
ScenarioPreset(
preset_id="gld-2020-covid-crash",
display_name="GLD March 2020 COVID Crash",
symbol="GLD",
dataset="XNAS.BASIC",
window_start=date(2020, 2, 15),
window_end=date(2020, 4, 15),
default_start_price=143.0, # Approx GLD close on 2020-02-15
default_templates=("protective-put-atm-12m", "protective-put-95pct-12m"),
event_type="crash",
tags=("covid", "crash", "high-vol"),
description="March 2020 COVID market crash - extreme volatility event",
),
ScenarioPreset(
preset_id="gld-2022-rate-hike-cycle",
display_name="GLD 2022 Rate Hike Cycle",
symbol="GLD",
dataset="XNAS.BASIC",
window_start=date(2022, 1, 1),
window_end=date(2022, 12, 31),
default_start_price=168.0,
default_templates=("protective-put-atm-12m", "ladder-50-50-atm-95pct-12m"),
event_type="rate_cycle",
tags=("rates", "fed", "extended"),
description="Full year 2022 - aggressive Fed rate hikes",
),
ScenarioPreset(
preset_id="gcf-2024-rally",
display_name="GC=F 2024 Gold Rally",
symbol="GC",
dataset="GLBX.MDP3",
window_start=date(2024, 1, 1),
window_end=date(2024, 12, 31),
default_start_price=2060.0,
default_templates=("protective-put-atm-12m",),
event_type="rally",
tags=("gold", "futures", "rally"),
description="Gold futures rally in 2024",
),
]
```
### Phase 6: Settings Persistence (DATA-DB-002)
**File:** `app/models/backtest_settings_repository.py`
```python
from dataclasses import asdict
from datetime import date
from pathlib import Path
from uuid import UUID, uuid4
import json
from app.models.backtest import ProviderRef
from app.models.backtest_settings import BacktestSettings
class BacktestSettingsRepository:
    """Persistence for backtest settings."""
    def __init__(self, base_path: Path | None = None) -> None:
        self.base_path = base_path or Path(".workspaces")
    def _settings_path(self, workspace_id: str) -> Path:
        return self.base_path / workspace_id / "backtest_settings.json"
    def load(self, workspace_id: str) -> BacktestSettings:
        """Load backtest settings, creating defaults if not found."""
        path = self._settings_path(workspace_id)
        if path.exists():
            with open(path) as f:
                data = json.load(f)
            provider_data = data.get("provider_ref", {})
            return BacktestSettings(
                settings_id=UUID(data["settings_id"]),
                name=data.get("name", "Default Backtest"),
                data_source=data.get("data_source", "databento"),
                dataset=data.get("dataset", "XNAS.BASIC"),
                schema=data.get("schema", "ohlcv-1d"),
                start_date=date.fromisoformat(data["start_date"]),
                end_date=date.fromisoformat(data["end_date"]),
                underlying_symbol=data.get("underlying_symbol", "GLD"),
                start_price=data.get("start_price", 0.0),
                underlying_units=data.get("underlying_units", 1000.0),
                loan_amount=data.get("loan_amount", 0.0),
                margin_call_ltv=data.get("margin_call_ltv", 0.75),
                template_slugs=tuple(data.get("template_slugs", ("protective-put-atm-12m",))),
                provider_ref=ProviderRef(
                    provider_id=provider_data.get("provider_id", "databento_v1"),
                    pricing_mode=provider_data.get("pricing_mode", "synthetic_bs_mid"),
                ),
                cache_key=data.get("cache_key", ""),
                data_cost_usd=data.get("data_cost_usd", 0.0),
            )
# Return defaults
return BacktestSettings(
settings_id=uuid4(),
name="Default Backtest",
)
def save(self, workspace_id: str, settings: BacktestSettings) -> None:
"""Persist backtest settings."""
path = self._settings_path(workspace_id)
path.parent.mkdir(parents=True, exist_ok=True)
data = asdict(settings)
data["settings_id"] = str(data["settings_id"])
data["start_date"] = data["start_date"].isoformat()
data["end_date"] = data["end_date"].isoformat()
data["template_slugs"] = list(data["template_slugs"])
data["provider_ref"] = {
"provider_id": settings.provider_ref.provider_id,
"pricing_mode": settings.provider_ref.pricing_mode,
}
with open(path, "w") as f:
json.dump(data, f, indent=2)
```
## Roadmap Items
### DATA-DB-001: Databento Historical Price Source
**Dependencies:** None
**Estimated effort:** 2-3 days
**Deliverables:**
- `app/services/backtesting/databento_source.py`
- `tests/test_databento_source.py` (mocked API)
- Environment variable `DATABENTO_API_KEY` support
### DATA-DB-002: Backtest Settings Model
**Dependencies:** None
**Estimated effort:** 1 day
**Deliverables:**
- `app/models/backtest_settings.py`
- Repository for persistence
### DATA-DB-003: Cache Management
**Dependencies:** DATA-DB-001
**Estimated effort:** 1 day
**Deliverables:**
- `app/services/backtesting/databento_cache.py`
- Cache cleanup CLI command
### DATA-DB-004: Backtest Page UI Updates
**Dependencies:** DATA-DB-001, DATA-DB-002
**Estimated effort:** 2 days
**Deliverables:**
- Updated `app/pages/backtests.py`
- Updated `app/pages/event_comparison.py`
- Cost estimation display
### DATA-DB-005: Scenario Pre-Seeding
**Dependencies:** DATA-DB-001
**Estimated effort:** 1-2 days
**Deliverables:**
- `app/services/backtesting/scenario_bulk_download.py`
- Pre-configured presets for gold hedging research
- Bulk download script
### DATA-DB-006: Options Data Source (Future)
**Dependencies:** DATA-DB-001
**Estimated effort:** 3-5 days
**Deliverables:**
- `DatabentoOptionSnapshotSource` implementing `OptionSnapshotSource`
- OPRA.PILLAR integration for historical options chains
## Configuration
Add to `.env`:
```
DATABENTO_API_KEY=db-xxxxxxxxxxxxxxxxxxxxxxxx
```
Add to `requirements.txt`:
```
databento>=0.30.0
```
Add to `pyproject.toml`:
```toml
[project.optional-dependencies]
databento = ["databento>=0.30.0"]
```
## Testing Strategy
1. **Unit tests** with mocked Databento responses (`tests/test_databento_source.py`)
2. **Integration tests** with recorded VCR cassettes (`tests/cassettes/*.yaml`)
3. **E2E tests** using cached data (`tests/test_backtest_databento_playwright.py`)
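For the mocked unit tests, the pattern below sketches how a stub client can stand in for the Databento API so no network call ever runs. The `estimate_cost` helper here is hypothetical, mirroring the fail-soft cost path shown earlier; it is not the shipped implementation:

```python
from unittest.mock import MagicMock

def estimate_cost(client, dataset: str, symbol: str) -> float:
    """Hypothetical helper mirroring the provider's fail-soft cost path."""
    try:
        return float(client.metadata.get_cost(dataset=dataset, symbols=symbol))
    except Exception:
        return 0.0  # same fail-soft behavior as the provider sketch above

def test_cost_estimation_uses_client() -> None:
    client = MagicMock()
    client.metadata.get_cost.return_value = 1.25
    assert estimate_cost(client, "XNAS.BASIC", "GLD") == 1.25

def test_cost_estimation_fails_soft() -> None:
    client = MagicMock()
    client.metadata.get_cost.side_effect = RuntimeError("network down")
    assert estimate_cost(client, "XNAS.BASIC", "GLD") == 0.0
```

The same `MagicMock` pattern extends to `timeseries.get_range` for the price-source tests.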
## Cost Management
- Use `metadata.get_cost()` before fetching to show estimated cost
- Default to cached data when available
- Batch download for large historical ranges (>1 year)
- Consider Databento flat rate plans for heavy usage
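One way to wire the first two points together is a small cost gate in front of the fetch. This is illustrative only; `budget_usd` is an assumed knob that does not exist in the current code:

```python
def fetch_with_cost_gate(estimate_cost, fetch, budget_usd: float, warnings: list[str]):
    """Refuse to fetch when the Databento cost estimate exceeds the budget."""
    cost = estimate_cost()
    if cost > budget_usd:
        warnings.append(
            f"estimated cost ${cost:.2f} exceeds budget ${budget_usd:.2f}; using cache"
        )
        return None  # caller falls back to cached data
    return fetch()
```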
## Security Considerations
- API key stored in environment variable, never in code
- Cache files contain only market data (no PII)
- Rate limiting respected (100 requests/second per IP)

---
4. **Configure Docker on the VPS**:
- Ensure Docker and Docker Compose are installed
- The deploy script will pull the container image from the registry
- Ensure the shared external Docker network `proxy-net` exists so Caddy can reverse proxy the deployment by container name
5. **Publish public route through Caddy**:
- Add `lombard.uncloud.tech` to `/opt/caddy/Caddyfile`
- Reverse proxy to `vault-dash:8000` on `proxy-net`
- Reload Caddy and verify `https://lombard.uncloud.tech/health`
- Remove the retired `vd1.uncloud.vpn` route if it still exists
6. **Verify network connectivity**:
- Forgejo runner must be able to reach the VPS via SSH
- VPS must be able to pull images from the registry
## Instructions for the DevOps Agent
When setting up the deployment:

---
# EXEC-001A / BT-001 MVP Architecture
## Scope
This document defines the MVP design for four related roadmap items:
- **EXEC-001A** — Named Strategy Templates
- **BT-001** — Synthetic Historical Backtesting
- **BT-002** — Historical Daily Options Snapshot Provider
- **BT-003** — Selloff Event Comparison Report
The goal is to give implementation agents a concrete architecture without requiring a database or a full UI rewrite. The MVP should fit the current codebase shape:
- domain models in `app/models/`
- IO and orchestration in `app/services/`
- strategy math in `app/strategies/` or a new `app/backtesting/` package
- lightweight docs under `docs/`
## Design goals
1. **Keep current live quote/options flows working.** Do not overload `app/services/data_service.py` with historical backtest state.
2. **Make templates reusable and named.** A strategy definition should be saved once and referenced by many backtests.
3. **Support synthetic-first backtests.** BT-001 must work before BT-002 exists.
4. **Prevent lookahead bias by design.** Providers and the run engine must expose only data available at each `as_of_date`.
5. **Preserve a migration path to real daily options snapshots.** Synthetic pricing and snapshot-based pricing must share the same provider contract.
6. **Stay file-backed for MVP persistence.** Repositories may use JSON files under `data/` first, behind interfaces.
## Terminology decision
The current code uses `LombardPortfolio.gold_ounces`, but the strategy engine effectively treats that field as generic underlying units. For historical backtesting, implementation agents should **not** extend that ambiguity.
### Recommendation
- Keep `LombardPortfolio` unchanged for existing live pages.
- Introduce backtesting-specific portfolio state using the neutral term **`underlying_units`**.
- Treat `symbol` + `underlying_units` as the canonical tradable exposure.
This avoids mixing physical ounces, GLD shares, and synthetic units in the backtest engine.
---
## MVP architecture summary
### Main decision
Create a new isolated subsystem:
- `app/models/strategy_template.py`
- `app/models/backtest.py`
- `app/models/event_preset.py`
- `app/services/historical/`
- `app/services/backtesting/`
- optional thin adapters in `app/strategies/` for reusing existing payoff logic
### Why isolate it
The current `DataService` is a live/synthetic read service with cache-oriented payload shaping. Historical backtesting needs:
- versioned saved definitions
- run lifecycle state
- daily path simulation
- historical provider abstraction
- reproducible result storage
Those concerns should not be mixed into the current request-time quote service.
---
## Domain model proposals
## 1. Strategy templates (EXEC-001A)
A strategy template is a **named, versioned, reusable hedge definition**. It is not a run result and it is not a specific dated option contract.
### `StrategyTemplate`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `template_id` | `str` | Stable UUID/string key |
| `slug` | `str` | Human-readable unique name, e.g. `protective-put-atm-12m` |
| `display_name` | `str` | UI/report label |
| `description` | `str` | Short rationale |
| `template_kind` | enum | `protective_put`, `laddered_put`, `collar` (future-safe) |
| `status` | enum | `draft`, `active`, `archived` |
| `version` | `int` | Increment on material rule changes |
| `underlying_symbol` | `str` | MVP may allow one symbol per template |
| `contract_mode` | enum | `continuous_units` for synthetic MVP, `listed_contracts` for BT-002+ |
| `legs` | `list[TemplateLeg]` | One or more parametric legs |
| `roll_policy` | `RollPolicy` | How/when to replace expiring hedges |
| `entry_policy` | `EntryPolicy` | When the initial hedge is entered |
| `tags` | `list[str]` | e.g. `conservative`, `income-safe` |
| `created_at` | `datetime` | Audit |
| `updated_at` | `datetime` | Audit |
### `TemplateLeg`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `leg_id` | `str` | Stable within template version |
| `side` | enum | `long` or `short`; MVP uses `long` only for puts |
| `option_type` | enum | `put` or `call` |
| `allocation_weight` | `float` | Must sum to `1.0` across active hedge legs in MVP |
| `strike_rule` | `StrikeRule` | MVP: `spot_pct` only |
| `target_expiry_days` | `int` | e.g. `365`, `180`, `90` |
| `quantity_rule` | enum | MVP: `target_coverage_pct` |
| `target_coverage_pct` | `float` | Usually `1.0` for full hedge, but supports partial hedges later |
### `StrikeRule`
MVP shape:
| Field | Type | Notes |
|---|---|---|
| `rule_type` | enum | `spot_pct` |
| `value` | `float` | e.g. `1.00`, `0.95`, `0.90` |
Future-safe, but not in MVP:
- `delta_target`
- `fixed_strike`
- `moneyness_bucket`
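As a reference for implementation agents, the MVP rule reduces to something like the sketch below. Names follow the table above, but the exact module layout and the helper `resolve_strike` are up to the implementer:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StrikeRule:
    rule_type: str   # MVP: "spot_pct" only
    value: float     # e.g. 1.00 (ATM), 0.95, 0.90

def resolve_strike(rule: StrikeRule, spot_close: float) -> float:
    """Resolve a parametric strike rule against the current day's close."""
    if rule.rule_type != "spot_pct":
        raise ValueError(f"unsupported rule_type: {rule.rule_type}")
    # Deterministic rounding keeps runs reproducible across environments.
    return round(rule.value * spot_close, 2)
```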
### `RollPolicy`
Recommended MVP fields:
| Field | Type | Notes |
|---|---|---|
| `policy_type` | enum | `hold_to_expiry`, `roll_n_days_before_expiry` |
| `days_before_expiry` | `int` | Required for rolling mode |
| `rebalance_on_new_deposit` | `bool` | Default `false` in MVP |
### `EntryPolicy`
Recommended MVP fields:
| Field | Type | Notes |
|---|---|---|
| `entry_timing` | enum | `scenario_start_close` |
| `stagger_days` | `int \| None` | Not used in MVP, keep nullable |
### MVP template invariants
Implementation agents should enforce:
- `slug` unique among active templates
- template versions immutable once referenced by a completed run
- weights sum to `1.0` for `protective_put`/`laddered_put` templates
- all legs use the same `target_expiry_days` in MVP unless explicitly marked as a ladder with shared roll policy
- `underlying_symbol` on the template must either match the scenario symbol or be `*`/generic if generic templates are later supported
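A minimal validation pass over the weight and tenor invariants might look like the sketch below (the `Leg` type is a throwaway stand-in; the real models live in `app/models/strategy_template.py`, and slug uniqueness is a repository-level check not shown here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Leg:
    """Stand-in for TemplateLeg; only the fields the checks need."""
    allocation_weight: float
    target_expiry_days: int

def validate_template(legs: list[Leg], is_ladder: bool = False) -> list[str]:
    """Return human-readable violations; an empty list means the template is valid."""
    errors: list[str] = []
    total = sum(leg.allocation_weight for leg in legs)
    if abs(total - 1.0) > 1e-9:
        errors.append(f"leg weights sum to {total}, expected 1.0")
    tenors = {leg.target_expiry_days for leg in legs}
    if len(tenors) > 1 and not is_ladder:
        errors.append("mixed target_expiry_days require an explicit ladder")
    return errors
```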
### Template examples
- `protective-put-atm-12m`
- `protective-put-95pct-12m`
- `ladder-50-50-atm-95pct-12m`
- `ladder-33-33-33-atm-95pct-90pct-12m`
These map cleanly onto the existing strategy set in `app/strategies/engine.py`.
---
## 2. Backtest scenarios
A backtest scenario is the **saved experiment definition**. It says what portfolio, time window, templates, provider, and execution rules are used.
### `BacktestScenario`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `scenario_id` | `str` | Stable UUID/string key |
| `slug` | `str` | Human-readable name |
| `display_name` | `str` | Report label |
| `description` | `str` | Optional scenario intent |
| `symbol` | `str` | Underlying being hedged |
| `start_date` | `date` | Inclusive |
| `end_date` | `date` | Inclusive |
| `initial_portfolio` | `BacktestPortfolioState` | Portfolio at day 0 |
| `template_refs` | `list[TemplateRef]` | One or more template versions to compare |
| `provider_ref` | `ProviderRef` | Which historical provider to use |
| `execution_model` | `ExecutionModel` | Daily close-to-close for MVP |
| `valuation_frequency` | enum | `daily` in MVP |
| `benchmark_mode` | enum | `unhedged_only` in MVP |
| `event_preset_id` | `str \| None` | Optional link for BT-003 |
| `notes` | `list[str]` | Optional warnings/assumptions |
| `created_at` | `datetime` | Audit |
### `BacktestPortfolioState`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `currency` | `str` | `USD` in MVP |
| `underlying_units` | `float` | Canonical exposure size |
| `entry_spot` | `float` | Starting spot reference |
| `loan_amount` | `float` | Outstanding loan |
| `margin_call_ltv` | `float` | Stress threshold |
| `cash_balance` | `float` | Usually `0.0` in MVP |
| `financing_rate` | `float` | Optional, default `0.0` in MVP |
### `TemplateRef`
Use a small immutable reference object:
| Field | Type | Notes |
|---|---|---|
| `template_id` | `str` | Stable template key |
| `version` | `int` | Required for reproducibility |
| `display_name_override` | `str \| None` | Optional report label |
### `ProviderRef`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `provider_id` | `str` | e.g. `synthetic_v1`, `daily_snapshots_v1` |
| `config_key` | `str` | Named config/profile used by the run |
| `pricing_mode` | enum | `synthetic_bs_mid` or `snapshot_mid` |
### `ExecutionModel`
MVP decision:
- **Daily close-to-close engine**
- Positions are evaluated once per trading day
- If a template rule triggers on date `T`, entry/roll is executed using provider data **as of date `T` close**
- Mark-to-market for date `T` uses the same `T` snapshot
This is a simplification, but it is deterministic and compatible with BT-002 daily snapshots.
### Scenario invariants
- `start_date <= end_date`
- at least one `template_ref`
- all referenced template versions must exist before run submission
- `initial_portfolio.loan_amount < initial_portfolio.underlying_units * entry_spot`
- scenario must declare the provider explicitly; no hidden global default inside the engine
---
## 3. Backtest runs and results
A run is the **execution record** of one scenario against one or more templates under one provider.
### `BacktestRun`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `run_id` | `str` | Stable UUID |
| `scenario_id` | `str` | Source scenario |
| `status` | enum | `queued`, `running`, `completed`, `failed`, `cancelled` |
| `provider_snapshot` | `ProviderSnapshot` | Frozen provider config used at run time |
| `submitted_at` | `datetime` | Audit |
| `started_at` | `datetime \| None` | Audit |
| `completed_at` | `datetime \| None` | Audit |
| `engine_version` | `str` | Git SHA or app version |
| `rules_version` | `str` | Semantic rules hash for reproducibility |
| `warnings` | `list[str]` | Missing data fallback, skipped dates, etc. |
| `error` | `str \| None` | Failure detail |
### `ProviderSnapshot`
Freeze the provider state used by a run:
| Field | Type | Notes |
|---|---|---|
| `provider_id` | `str` | Resolved provider implementation |
| `config` | `dict[str, Any]` | Frozen provider config used for the run |
| `source_version` | `str \| None` | Optional data snapshot/build hash |
### `BacktestRunResult`
Top-level recommended fields:
| Field | Type | Notes |
|---|---|---|
| `run_id` | `str` | Foreign key |
| `scenario_snapshot` | `BacktestScenario` or frozen subset | Freeze used inputs |
| `template_results` | `list[TemplateBacktestResult]` | One per template |
| `comparison_summary` | `RunComparisonSummary` | Ranked table |
| `generated_at` | `datetime` | Audit |
### `TemplateBacktestResult`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `template_id` | `str` | Identity |
| `template_version` | `int` | Reproducibility |
| `template_name` | `str` | Display |
| `summary_metrics` | `BacktestSummaryMetrics` | Compact ranking metrics |
| `daily_path` | `list[BacktestDailyPoint]` | Daily timeseries |
| `position_log` | `list[BacktestPositionRecord]` | Open/roll/expire events |
| `trade_log` | `list[BacktestTradeRecord]` | Cashflow events |
| `validation_notes` | `list[str]` | e.g. synthetic IV fallback used |
### `BacktestSummaryMetrics`
Recommended MVP metrics:
| Field | Type | Notes |
|---|---|---|
| `start_value` | `float` | Initial collateral value |
| `end_value_unhedged` | `float` | Baseline terminal collateral |
| `end_value_hedged_net` | `float` | After hedge P&L and premiums |
| `total_hedge_cost` | `float` | Sum of paid premiums |
| `total_option_payoff_realized` | `float` | Expiry/close realized payoff |
| `max_ltv_unhedged` | `float` | Path max |
| `max_ltv_hedged` | `float` | Path max |
| `margin_call_days_unhedged` | `int` | Count |
| `margin_call_days_hedged` | `int` | Count |
| `worst_drawdown_unhedged` | `float` | Optional but useful |
| `worst_drawdown_hedged` | `float` | Optional but useful |
| `days_protected_below_threshold` | `int` | Optional convenience metric |
| `roll_count` | `int` | Operational complexity |
### `BacktestDailyPoint`
Recommended daily path fields:
| Field | Type | Notes |
|---|---|---|
| `date` | `date` | Trading date |
| `spot_close` | `float` | Underlying close |
| `underlying_value` | `float` | `underlying_units * spot_close` |
| `option_market_value` | `float` | Mark-to-market of open hedge |
| `premium_cashflow` | `float` | Negative on entry/roll |
| `realized_option_cashflow` | `float` | Expiry/sale value |
| `net_portfolio_value` | `float` | Underlying + option MTM + cash |
| `loan_amount` | `float` | Constant in MVP |
| `ltv_unhedged` | `float` | Baseline |
| `ltv_hedged` | `float` | Hedge-aware |
| `margin_call_unhedged` | `bool` | Baseline |
| `margin_call_hedged` | `bool` | Hedge-aware |
| `active_position_ids` | `list[str]` | Traceability |
### `BacktestTradeRecord`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `trade_id` | `str` | Stable key |
| `date` | `date` | Execution date |
| `action` | enum | `buy_open`, `sell_close`, `expire`, `roll` |
| `leg_id` | `str` | Template leg link |
| `instrument_key` | `HistoricalInstrumentKey` | Strike/expiry/type |
| `quantity` | `float` | Continuous or discrete |
| `price` | `float` | Fill price |
| `cashflow` | `float` | Signed |
| `reason` | enum | `initial_entry`, `scheduled_roll`, `expiry`, `scenario_end` |
### Run/result invariants
- runs are append-only after completion
- results must freeze template versions and scenario inputs used at execution time
- failed runs may omit `template_results` but must preserve `warnings`/`error`
- ranking should never rely on a metric that can be absent without a fallback rule
---
## 4. Event presets (BT-003)
An event preset is a **named reusable market window** used to compare strategy behavior across selloffs.
### `EventPreset`
Recommended fields:
| Field | Type | Notes |
|---|---|---|
| `event_preset_id` | `str` | Stable key |
| `slug` | `str` | e.g. `covid-crash-2020` |
| `display_name` | `str` | Report label |
| `symbol` | `str` | Underlying symbol |
| `window_start` | `date` | Inclusive |
| `window_end` | `date` | Inclusive |
| `anchor_date` | `date \| None` | Optional focal date |
| `event_type` | enum | `selloff`, `recovery`, `stress_test` |
| `tags` | `list[str]` | e.g. `macro`, `liquidity`, `vol-spike` |
| `description` | `str` | Why this event exists |
| `scenario_overrides` | `EventScenarioOverrides` | Optional defaults |
| `created_at` | `datetime` | Audit |
### `EventScenarioOverrides`
MVP fields:
| Field | Type | Notes |
|---|---|---|
| `lookback_days` | `int \| None` | Optional pre-window warmup |
| `recovery_days` | `int \| None` | Optional post-event tail |
| `default_template_slugs` | `list[str]` | Suggested comparison set |
| `normalize_start_value` | `bool` | Default `true` for event comparison charts |
### BT-003 usage pattern
- a report selects one or more `EventPreset`s
- each preset materializes a `BacktestScenario`
- the same template set is run across all events
- report compares normalized daily paths and summary metrics
### MVP event decision
Use **manual date windows only**. Do not attempt automatic peak/trough detection in the first slice.
---
## Historical provider abstraction
## Core interface
Create a provider contract that exposes only **point-in-time historical data**.
### `HistoricalMarketDataProvider`
Recommended methods:
```python
from datetime import date
from typing import Protocol

class HistoricalMarketDataProvider(Protocol):
    provider_id: str
    def get_trading_days(self, symbol: str, start_date: date, end_date: date) -> list[date]: ...
    def get_underlying_bars(
        self, symbol: str, start_date: date, end_date: date
    ) -> list[UnderlyingBar]: ...
    def get_option_snapshot(
        self, query: OptionSnapshotQuery
    ) -> OptionSnapshot: ...
    def price_open_position(
        self, position: HistoricalOptionPosition, as_of_date: date
    ) -> HistoricalOptionMark: ...
### Why this interface
It cleanly supports both provider types:
- **BT-001 synthetic provider** — generate option values from deterministic assumptions
- **BT-002 snapshot provider** — read real daily option quotes/surfaces from stored snapshots
It also makes lookahead control explicit: every method is asked for data **as of** a specific date.
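One hypothetical way to make that contract enforceable rather than conventional is a thin guard the engine can wrap around any provider. `LookaheadGuard` is not part of the proposed file layout, just an illustration of the as-of boundary:

```python
from datetime import date
from typing import Any

class LookaheadError(RuntimeError):
    """Raised when a query asks for data past the engine's simulation clock."""

class LookaheadGuard:
    """Wraps any historical provider and rejects future-dated queries."""
    def __init__(self, provider: Any, clock_date: date) -> None:
        self._provider = provider
        self.clock_date = clock_date

    def advance(self, new_date: date) -> None:
        # The engine moves the clock forward once per simulated trading day.
        self.clock_date = new_date

    def get_option_snapshot(self, query: Any) -> Any:
        if query.as_of_date > self.clock_date:
            raise LookaheadError(
                f"snapshot for {query.as_of_date} requested at clock {self.clock_date}"
            )
        return self._provider.get_option_snapshot(query)
```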
## Supporting provider models
### `UnderlyingBar`
| Field | Type | Notes |
|---|---|---|
| `date` | `date` | Trading day |
| `open` | `float` | Optional for future use |
| `high` | `float` | Optional |
| `low` | `float` | Optional |
| `close` | `float` | Required |
| `volume` | `float \| None` | Optional |
| `source` | `str` | Provider/source tag |
### `OptionSnapshotQuery`
| Field | Type | Notes |
|---|---|---|
| `symbol` | `str` | Underlying |
| `as_of_date` | `date` | Point-in-time date |
| `option_type` | enum | `put`/`call` |
| `target_expiry_days` | `int` | Desired tenor |
| `strike_rule` | `StrikeRule` | Resolved against current spot |
| `pricing_side` | enum | `mid` in MVP |
### `OptionSnapshot`
| Field | Type | Notes |
|---|---|---|
| `as_of_date` | `date` | Snapshot date |
| `symbol` | `str` | Underlying |
| `underlying_close` | `float` | Spot used for selection/pricing |
| `selected_contract` | `HistoricalOptionQuote` | Resolved contract |
| `selection_notes` | `list[str]` | e.g. nearest expiry/nearest strike |
| `source` | `str` | Provider ID |
### `HistoricalOptionQuote`
| Field | Type | Notes |
|---|---|---|
| `instrument_key` | `HistoricalInstrumentKey` | Canonical contract identity |
| `bid` | `float` | Optional for snapshot provider |
| `ask` | `float` | Optional |
| `mid` | `float` | Required for MVP valuation |
| `implied_volatility` | `float \| None` | Required for BT-002, synthetic-derived for BT-001 |
| `delta` | `float \| None` | Optional now, useful later |
| `open_interest` | `int \| None` | Optional now |
| `volume` | `int \| None` | Optional now |
| `source` | `str` | Provider/source tag |
### `HistoricalInstrumentKey`
| Field | Type | Notes |
|---|---|---|
| `symbol` | `str` | Underlying |
| `option_type` | enum | `put`/`call` |
| `expiry` | `date` | Contract expiry |
| `strike` | `float` | Contract strike |
---
## Provider implementations
## A. `SyntheticHistoricalProvider` (BT-001 first)
Purpose:
- generate deterministic historical backtests without requiring stored historical options chains
- use historical underlying closes plus a synthetic volatility/rates regime
- resolve template legs into synthetic option quotes on each rebalance date
- reprice open positions daily using the same model family
### Recommended behavior
Inputs:
- underlying close series (from yfinance file cache, CSV fixture, or another deterministic source)
- configured implied volatility regime, e.g. fixed `0.16` or dated step regime
- configured risk-free rate regime
- optional stress spread for transaction cost realism
Entry and valuation:
- on a rebalance date, compute strike from `spot_pct * spot_close`
- set expiry by nearest trading day to `as_of_date + target_expiry_days`
- price using Black-Scholes with the current day's spot, configured IV, remaining time, and option type
- on later dates, reprice the same contract using current spot and remaining time only
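Concretely, the valuation step is plain Black-Scholes with the configured IV and a shrinking time to expiry; a stdlib-only put-pricing sketch (the function name and signature are illustrative, not the final API):

```python
from math import erf, exp, log, sqrt


def _norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (no scipy dependency)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bs_put_price(spot: float, strike: float, iv: float, rate: float, years: float) -> float:
    """European put under Black-Scholes. `years` is remaining time and
    shrinks on each daily reprice; at/after expiry only intrinsic remains."""
    if years <= 0.0:  # never reprice with negative time to expiry
        return max(strike - spot, 0.0)
    d1 = (log(spot / strike) + (rate + 0.5 * iv * iv) * years) / (iv * sqrt(years))
    d2 = d1 - iv * sqrt(years)
    return strike * exp(-rate * years) * _norm_cdf(-d2) - spot * _norm_cdf(-d1)
```

Entry price uses the rebalance-day spot; later marks call the same function with the same strike and a smaller `years`, which is the "same model family" requirement above.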
### MVP synthetic assumptions
- constant or schedule-based implied volatility; no future realized volatility leakage
- no stochastic volatility process in first slice
- no early exercise modeling
- no assignment modeling
- `mid` price only
- deterministic rounding/selection rules
### Why synthetic-first is acceptable
It validates:
- template persistence
- run lifecycle
- path valuation
- daily result rendering
- anti-lookahead contract boundaries
before adding BT-002 data ingestion complexity.
## B. `DailyOptionsSnapshotProvider` (BT-002)
Purpose:
- load historical option quotes for each trading day
- resolve actual listed contracts closest to template rules
- mark open positions to historical daily mids thereafter
### Recommended behavior
- selection on entry day uses nearest eligible expiry and nearest eligible strike from that day's chain only
- mark-to-market later uses the exact same contract key if a quote exists on later dates
- if the contract is missing on a later date, provider returns a missing-data result and the engine applies a documented fallback policy
### MVP fallback policy for missing marks
Implementation agents should choose one explicit fallback and test it. Recommended order:
1. exact contract from same-day snapshot
2. if unavailable, previous available mark from same contract with warning
3. if unavailable and contract is expired, intrinsic value at expiry or zero afterward
4. otherwise fail the run or mark the template result incomplete
Do **not** silently substitute a different strike/expiry for an already-open position.
---
## Backtest engine flow
Create a dedicated engine under `app/backtesting/engine.py`. Keep orchestration and repository wiring in `app/services/backtesting/`.
### High-level loop
For each template in the scenario:
1. load trading days from provider
2. create baseline unhedged path
3. resolve initial hedge on `start_date`
4. for each trading day:
   - read underlying close for day `T`
   - mark open option positions as of `T`
   - compute unhedged and hedged portfolio value
   - compute LTV and margin-call flags
   - check roll/expiry rules using only `T` data
   - if a roll is due, close/expire old position and open replacement using `T` snapshot
5. liquidate remaining position at scenario end if still open
6. calculate summary metrics
7. rank templates inside `comparison_summary`
### Position model recommendation
Use a separate open-position model rather than reusing `OptionContract` directly.
Recommended `HistoricalOptionPosition` fields:
- `position_id`
- `instrument_key`
- `opened_at`
- `expiry`
- `quantity`
- `entry_price`
- `current_mark`
- `template_leg_id`
- `source_snapshot_date`
Reason: backtests need lifecycle state and audit fields that the current `OptionContract` model does not carry.
### Ranking recommendation
For MVP comparison views, rank templates by:
1. fewer `margin_call_days_hedged`
2. lower `max_ltv_hedged`
3. lower `total_hedge_cost`
4. higher `end_value_hedged_net`
This is easier to explain than a single opaque score.
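The four-metric ordering maps directly onto a tuple sort key (a sketch; result keys follow the metrics listed above):

```python
def ranking_key(result: dict[str, float]) -> tuple[float, float, float, float]:
    """Lower tuple sorts first: fewer margin-call days, lower max LTV,
    lower hedge cost; net end value is negated so higher wins."""
    return (
        result["margin_call_days_hedged"],
        result["max_ltv_hedged"],
        result["total_hedge_cost"],
        -result["end_value_hedged_net"],
    )


def rank_templates(results: list[dict[str, float]]) -> list[dict[str, float]]:
    return sorted(results, key=ranking_key)
```

Because tuples compare lexicographically, ties on earlier metrics fall through to later ones deterministically, which is exactly the tiebreak behavior the comparison tests should assert.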
---
## Data realism constraints
Implementation agents should treat the following as mandatory MVP rules.
## 1. Point-in-time only
On day `T`, the engine may use only:
- underlying bar for `T`
- option snapshot for `T`
- provider configuration known before the run starts
- open positions created earlier or on `T`
It may **not** use:
- future closes
- future implied vols
- terminal event windows beyond `T` for trading decisions
- any provider helper that precomputes the whole path and leaks future state into contract selection
## 2. Stable contract identity after entry
Once a contract is opened, daily valuation must use that exact contract identity:
- same symbol
- same expiry
- same strike
- same option type
No rolling relabeling of a live position to a “nearest” contract.
## 3. Explicit selection rules
Template rules must resolve to contracts with deterministic tiebreakers:
- nearest expiry at or beyond target DTE
- nearest strike to rule target
- if tied, prefer more conservative strike for puts (higher strike) and earliest expiry
Tiebreakers must be documented and unit-tested.
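A deterministic resolution of these rules can be encoded as a tuple-key selection (a sketch for puts; the chain representation is simplified to `(expiry, strike)` pairs from the entry-day snapshot only):

```python
from datetime import date


def select_put_contract(
    chain: list[tuple[date, float]],
    entry: date,
    target_dte: int,
    target_strike: float,
) -> tuple[date, float]:
    """Nearest expiry at or beyond target DTE, then nearest strike;
    strike ties prefer the higher (more conservative) put strike."""
    eligible = [(e, k) for e, k in chain if (e - entry).days >= target_dte]
    if not eligible:
        raise ValueError("no expiry at or beyond target DTE")
    best_expiry = min(e for e, _ in eligible)  # earliest eligible expiry
    strikes = [k for e, k in eligible if e == best_expiry]
    # (abs distance, -strike): closest wins, ties go to the higher strike
    return best_expiry, min(strikes, key=lambda k: (abs(k - target_strike), -k))
```

Encoding the tiebreak in a single sort key makes the rule trivially unit-testable with an equidistant-strike fixture.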
## 4. Execution timing must be fixed
MVP should use **same-day close execution** consistently.
Do not mix:
- signal at close / fill next open
- signal at close / fill same close
- signal intraday / mark at close
If this changes later, it must be a scenario-level parameter.
## 5. Continuous-vs-listed quantity must be explicit
MVP synthetic runs may use `continuous_units`.
The shipped BT-002 provider slice also remains `continuous_units`-only.
`listed_contracts` with contract-size rounding is deferred to follow-up slice `BT-002A`.
Do not hide rounding rules inside providers.
They belong in the position sizing logic and must be recorded in the result.
## 6. Costs must be recorded as cashflows
Premiums and close/expiry proceeds must be stored as dated cashflows.
Do not collapse the entire hedge economics into end-of-period payoff only.
## 7. Missing data cannot be silent
Any missing snapshot/mark fallback must add:
- a run or template warning
- a template validation note
- and, in a fuller follow-up slice, a deterministic result status if the template becomes incomplete
---
## Anti-lookahead rules
These should be copied into tests and implementation notes verbatim.
1. **Contract selection rule**: select options using only the entry-day snapshot.
2. **Daily MTM rule**: mark open positions using only same-day data for the same contract.
3. **Expiry rule**: once `as_of_date >= expiry`, option value becomes intrinsic-at-expiry or zero after expiry according to the provider contract; it is not repriced with negative time-to-expiry.
4. **Event preset rule**: event presets may define scenario dates, but the strategy engine may not inspect future event endpoints when deciding to roll or exit.
5. **Synthetic vol rule**: synthetic providers may use fixed or date-indexed IV schedules, but never realized future path statistics from dates after `as_of_date`.
6. **Metric rule**: comparison metrics may summarize the whole run only after the run completes; they may not feed back into trading decisions during the run.
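One way these rules become executable assertions is a spy wrapper that records every date the engine requests (a sketch; the spy class, the toy `mark_day`, and all names here are hypothetical test scaffolding, not the real provider API):

```python
from datetime import date


class DateAccessSpy:
    """Wraps a close-price lookup and records every requested date,
    so tests can assert the engine never read beyond `as_of`."""

    def __init__(self, closes: dict[date, float]) -> None:
        self._closes = closes
        self.requested: list[date] = []

    def close(self, day: date) -> float:
        self.requested.append(day)
        return self._closes[day]


def mark_day(provider: DateAccessSpy, as_of: date, strike: float) -> float:
    """Toy day-T put valuation: intrinsic on the day-T close only (rule 2)."""
    return max(strike - provider.close(as_of), 0.0)
```

After a run, `assert all(d <= as_of for d in spy.requested)` turns the point-in-time rule into a mechanical check rather than a code-review convention.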
---
## Phased implementation plan with TDD slices
Each slice should leave behind tests and a minimal implementation path.
## Slice 0 — Red tests for model invariants
Target:
- create tests for `StrategyTemplate`, `BacktestScenario`, `BacktestRun`, `EventPreset`
- validate weights, dates, versioned references, and uniqueness assumptions
Suggested tests:
- invalid ladder weights rejected
- scenario with end before start rejected
- template ref requires explicit version
- loan amount cannot exceed initial collateral value
## Slice 1 — Named template repository (EXEC-001A core)
Target:
- file-backed `StrategyTemplateRepository`
- save/load/list active templates
- version bump on immutable update
Suggested tests:
- saving template round-trips cleanly
- updating active template creates version 2, not in-place mutation
- archived template stays loadable for historical runs
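The version-bump invariant in these tests can be sketched with a plain in-memory structure (the real repository is file-backed and uses richer template models; this only shows the append-never-mutate shape):

```python
def bump_version(templates: dict[str, list[dict]], slug: str, changes: dict) -> dict:
    """Immutable update: append version N+1, never mutate version N in place."""
    history = templates.setdefault(slug, [])
    latest = history[-1] if history else {"version": 0}
    new = {**latest, **changes, "version": latest["version"] + 1}
    history.append(new)
    return new
```

Because old versions are never rewritten, historical runs that reference `(slug, version)` stay reproducible even after the active template changes.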
## Slice 2 — Synthetic provider contract (BT-001 foundation)
Target:
- `HistoricalMarketDataProvider` protocol
- `SyntheticHistoricalProvider`
- deterministic underlying fixture input + synthetic option pricing
Suggested tests:
- provider returns stable trading day list
- spot-pct strike resolution uses same-day spot only
- repricing uses decreasing time to expiry
- no future bar access required for day-`T` pricing
## Slice 3 — Single-template backtest engine
Target:
- run one protective-put template across a short scenario
- output daily path + summary metrics
Suggested tests:
- hedge premium paid on entry day
- option MTM increases when spot falls materially below strike
- hedged max LTV is <= unhedged max LTV in a monotonic selloff fixture
- completed run freezes scenario and template version snapshots
## Slice 4 — Multi-template comparison runs
Target:
- compare ATM, 95% put, 50/50 ladder on same scenario
- produce ranked `comparison_summary`
Suggested tests:
- all template results share same scenario snapshot
- ranking uses documented metric order
- equal primary metric falls back to next metric deterministically
## Slice 5 — Roll logic and expiry behavior
Target:
- support `roll_n_days_before_expiry`
- support expiry settlement and position replacement
Suggested tests:
- roll occurs exactly on configured trading-day offset
- expired contracts stop carrying time value
- no contract identity mutation between entry and close
## Slice 6 — Event presets and BT-003 scenario materialization
Target:
- repository for `EventPreset`
- materialize preset -> scenario
- run comparison over multiple named events
Suggested tests:
- preset dates map cleanly into scenario dates
- scenario overrides are applied explicitly
- normalized event series start from common baseline
## Slice 7 — Daily snapshot provider (BT-002)
Target:
- add `DailyOptionsSnapshotProvider` behind same contract
- reuse existing engine with provider swap only
Suggested tests:
- entry picks nearest valid listed contract from snapshot
- later MTM uses same contract key
- missing mark generates warning and applies documented fallback
- synthetic and snapshot providers both satisfy same provider test suite
## Slice 8 — Thin API/UI integration after engine is proven
Not part of this docs implementation scope, but the natural next step is:
- `/api/templates`
- `/api/backtests`
- `/api/backtests/{run_id}`
- later a NiceGUI page for listing templates and runs
Per project rules, do not claim this feature is live until the UI consumes real run data.
---
## Recommended file/module layout
Recommended minimal layout for this codebase:
```text
app/
  backtesting/
    __init__.py
    engine.py                 # run loop, ranking, metric aggregation
    position_sizer.py         # continuous vs listed quantity rules
    result_metrics.py         # path -> summary metrics
    scenario_materializer.py  # event preset -> scenario
    selection.py              # strike/expiry resolution helpers
  models/
    strategy_template.py      # StrategyTemplate, TemplateLeg, RollPolicy, EntryPolicy
    backtest.py               # BacktestScenario, BacktestRun, results, daily points
    event_preset.py           # EventPreset, overrides
    historical_data.py        # UnderlyingBar, OptionSnapshot, InstrumentKey, marks
  services/
    backtesting/
      __init__.py
      orchestrator.py         # submit/load/list runs
      repositories.py         # file-backed run repository helpers
    historical/
      __init__.py
      base.py                 # HistoricalMarketDataProvider protocol
      synthetic.py            # BT-001 provider
      snapshots.py            # BT-002 provider
    templates/
      __init__.py
      repository.py           # save/load/list/version templates
    events/
      __init__.py
      repository.py           # save/load/list presets
```
### Persistence recommendation for MVP
Use file-backed repositories first:
```text
data/
  strategy_templates.json
  event_presets.json
  backtests/
    <run_id>.json
```
Reason:
- aligns with current `PortfolioRepository` style
- keeps the MVP small
- allows deterministic fixtures in tests
- can later move behind the same repository interfaces
---
## Code reuse guidance
Implementation agents should reuse existing code selectively.
### Safe to reuse
- pricing helpers in `app/core/pricing/`
- payoff logic concepts from `app/models/option.py`
- existing strategy presets from `app/strategies/engine.py` as seed templates
### Do not reuse directly without adaptation
- `StrategySelectionEngine` as the backtest engine
- `DataService` as a historical run orchestrator
- `LombardPortfolio.gold_ounces` as the canonical backtest exposure field
Reason: these current types are optimized for present-time research payloads, not dated position lifecycle state.
---
## Open implementation decisions to settle before coding
1. **Underlying source for synthetic BT-001**: use yfinance historical closes directly, local fixture CSVs, or both?
2. **Quantity mode in first runnable slice**: support only `continuous_units` first, or implement listed contract rounding immediately?
3. **Scenario end behavior**: liquidate remaining option at final close, or leave terminal MTM only?
4. **Missing snapshot policy**: hard-fail vs warn-and-carry-forward?
5. **Provider metadata freezing**: store config only, or config + source data hash?
Recommended answers for MVP:
- yfinance historical closes with deterministic test fixtures for unit tests
- `continuous_units` first
- liquidate at final close for clearer realized P&L
- warn-and-carry-forward only for same-contract marks, otherwise fail
- freeze provider config plus app/git version
---
## Implementation-ready recommendations
1. **Build BT-001 around a new provider interface, not around `DataService`.**
2. **Treat templates as immutable versioned definitions.** Runs must reference template versions, not mutable slugs only.
3. **Use a daily close-to-close engine for MVP and document it everywhere.**
4. **Record every hedge premium and payoff as dated cashflows.**
5. **Keep synthetic provider and daily snapshot provider behind the same contract.**
6. **Introduce `underlying_units` in backtesting models to avoid `gold_ounces` ambiguity.**
7. **Make missing data warnings explicit and persistent in run results.**

---
**File:** `docs/GLD_BASIS_RESEARCH.md` (new, 170 lines)
# GLD ETF vs Gold Futures Basis: Implementation Guide
## Executive Summary
GLD (SPDR Gold Shares ETF) does **not** track gold at a simple 10:1 ratio. Two mechanical factors affect the conversion between GLD and physical gold/futures.
## Key Findings for Dashboard Implementation
### 1. Expense Ratio Decay (Permanent, Predictable)
| Metric | Launch (2004) | Current (2026) |
|--------|----------------|-----------------|
| Gold per share | 0.1000 oz | ~0.0919 oz |
| Effective ratio | 10:1 | **10.9:1** |
| Cumulative decay | — | 8.1% |
**Formula:**
```
ounces_per_share = 0.10 * e^(-0.004 * years_since_2004)
```
**Dashboard Implementation:**
```python
# Current GLD backing (as of 2026)
GLD_OUNCES_PER_SHARE = 0.0919


def gld_price_to_gold_spot(gld_price: float) -> float:
    """Convert GLD price to implied gold spot price."""
    return gld_price / GLD_OUNCES_PER_SHARE


def gold_spot_to_gld_price(gold_spot: float) -> float:
    """Convert gold spot price to GLD equivalent."""
    return gold_spot * GLD_OUNCES_PER_SHARE


# Example: At $4,600/oz gold
# GLD ≈ $4,600 × 0.0919 ≈ $423 (NOT $460)
```
### 2. Futures-Spot Basis (Variable, Market-Dependent)
| Market State | Futures vs. Spot | GLD vs. GC=F÷10 |
|--------------|------------------|-----------------|
| **Contango** (normal) | Futures > Spot by $10-15/oz | GLD appears at "discount" |
| **Backwardation** (stress) | Spot > Futures | GLD appears at "premium" |
**During stress events:**
- March 2020 COVID: COMEX futures $50-70 above London spot
- GLD holders benefited (physical-backed)
- Futures traders suffered negative roll yield
### 3. After-Hours Pricing Gap
| Instrument | Trading Hours |
|------------|---------------|
| GLD | US market hours (9:30 AM - 4 PM ET) |
| GC=F (futures) | 23 hours/day, 6 days/week |
**Implication:** GLD opens with gaps after weekend/overnight gold moves. Dashboard should show "last regular session close" vs. "current futures indication."
## Dashboard Recommendations
### Data Display
```python
class GoldPriceDisplay:
    """Recommended price display for vault-dash."""

    def __init__(self, gld_price: float, gold_futures_price: float):
        self.gld_price = gld_price
        self.gold_futures_price = gold_futures_price
        self.ounces_per_share = 0.0919  # Current GLD backing

    @property
    def implied_spot_from_gld(self) -> float:
        """Gold spot implied from GLD price."""
        return self.gld_price / self.ounces_per_share

    @property
    def gld_fair_value(self) -> float:
        """What GLD 'should' trade at based on futures."""
        # Adjust for contango (~$10/oz typically)
        spot_estimate = self.gold_futures_price - 10
        return spot_estimate * self.ounces_per_share

    @property
    def basis_bps(self) -> float:
        """Basis between GLD and fair value in basis points."""
        return (self.gld_price / self.gld_fair_value - 1) * 10000
```
### Warning Thresholds
```python
# Raise warnings when basis exceeds normal bounds
BASIS_WARNING_THRESHOLD = 50  # 50 bps = 0.5%

if abs(basis_bps) > BASIS_WARNING_THRESHOLD:
    # GLD trading at unusual premium/discount
    # Possible causes: after-hours gap, physical stress, AP arb failure
    show_warning(f"GLD basis elevated: {basis_bps:.0f} bps")
```
### Options Pricing
For Lombard hedge calculations:
```python
import math  # needed for math.ceil below


def calculate_hedge_strikes(
    portfolio_gold_ounces: float,
    margin_call_price: float,
    current_gold_spot: float,
) -> dict:
    """
    Calculate appropriate GLD option strikes for hedge.

    IMPORTANT: Use GLD price directly, not converted from futures.
    """
    # Convert gold price thresholds to GLD strikes
    gld_current = current_gold_spot * GLD_OUNCES_PER_SHARE
    gld_margin_call = margin_call_price * GLD_OUNCES_PER_SHARE

    # Recommended strikes
    atm_strike = round(gld_current)            # e.g., $423
    otm_10_strike = round(gld_current * 0.90)  # 10% OTM: ~$381
    otm_5_strike = round(gld_current * 0.95)   # 5% OTM: ~$402

    return {
        "current_gld": gld_current,
        "margin_call_gld": gld_margin_call,
        "atm_strike": atm_strike,
        "otm_5pct_strike": otm_5_strike,
        "otm_10pct_strike": otm_10_strike,
        "contracts_needed": math.ceil(
            portfolio_gold_ounces / (100 * GLD_OUNCES_PER_SHARE)
        ),
    }
```
## Data Sources
| Data Point | Source | Notes |
|------------|--------|-------|
| GLD NAV/ounce | spdrgoldshares.com | Daily updated |
| GLD price | Market data API | Real-time |
| GC=F price | CME/futures API | Extended hours |
| Contango estimate | Futures curve | Calculate from term structure |
## Key Takeaways for Vault-Dash
1. **Never use a fixed 10:1 ratio** — always use current GLD backing (~0.092 oz/share)
2. **Display both measures:**
- GLD implied spot = GLD ÷ 0.0919
- GC=F adjusted = GC=F ÷ 10 (naive) for comparison
3. **Show basis indicator:**
- Green: basis within ±25 bps (normal)
- Yellow: basis ±25-50 bps (elevated)
- Red: basis > 50 bps (unusual — possible stress)
4. **For hedging:**
- Use GLD's actual price for strike selection
- Contract count = gold ounces ÷ (100 × 0.0919)
- Don't convert from GC=F to GLD — use GLD directly
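The contract-count arithmetic in point 4, made explicit (assumes the ~0.0919 oz/share backing stated above; the function name is illustrative):

```python
import math

GLD_OUNCES_PER_SHARE = 0.0919   # current backing, not the launch-era 0.10
SHARES_PER_CONTRACT = 100


def contracts_for_ounces(gold_ounces: float) -> int:
    """Contracts needed so 100-share GLD contracts cover the gold exposure.
    Round up: a partial contract leaves exposure unhedged."""
    return math.ceil(gold_ounces / (SHARES_PER_CONTRACT * GLD_OUNCES_PER_SHARE))
```

For 100 oz of gold this gives 11 contracts, not the 10 a naive 10:1 ratio would suggest, which is exactly the expense-ratio-decay effect the guide warns about.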
## References
- SEC GLD Registration Statement (2004)
- SPDR Gold Shares: GLDM methodology updates
- CME Gold Futures specifications
- Research paper: "Optimal Hedging Strategies for Gold-Backed Lombard Loans"

---
**File:** (deleted, 249 lines)
# Vault Dashboard Roadmap
## Overview
A prioritized roadmap for the Vault Dashboard Lombard loan hedging platform.
## Legend
- **Priority**: P0 (Critical), P1 (High), P2 (Medium), P3 (Low)
- **Dependencies**: Features tagged with `[depends: ID]` require the named feature to be completed first
- **Effort**: S (Small), M (Medium), L (Large)
---
## Phase 1: Data Foundation (Foundation Layer)
### DATA-001: Live Price Feed Integration [P0, M] **[foundation]**
**As a** portfolio manager, **I want** real-time gold price updates **so that** my LTV calculations reflect current market conditions.
**Acceptance Criteria:**
- Integrate yfinance/live data for GLD spot price
- Update prices every 30 seconds via WebSocket
- Display last update timestamp
- Fallback to cached data if feed fails
**Technical Notes:**
- Create `app/services/price_feed.py` with async price fetching
- Extend existing WebSocket manager in `app/services/websocket.py`
- Store prices in Redis with 60s TTL
**Dependencies:** None
---
### DATA-002: Options Chain Data [P0, L] **[depends: DATA-001]**
**As a** trader, **I want** live options chain data for GLD **so that** I can evaluate protective put strikes and premiums.
**Acceptance Criteria:**
- Fetch options chains from yfinance or IBKR API
- Display strikes, expiration dates, bid/ask, implied volatility
- Cache chain data for 5 minutes
- Support filtering by expiration (30/60/90 days)
**Technical Notes:**
- Create `app/services/options_chain.py`
- Add `/api/options/chain` endpoint
- Update Options Chain page (`app/pages/options.py`)
**Dependencies:** DATA-001
---
### DATA-003: Greeks Calculation [P1, M] **[depends: DATA-002]**
**As a** risk manager, **I want** real-time Greeks calculations **so that** I understand my hedge sensitivity.
**Acceptance Criteria:**
- Calculate Delta, Gamma, Theta, Vega for selected options
- Display Greeks in options chain view
- Show portfolio-level Greeks if positions held
- Use Black-Scholes model with live IV
**Technical Notes:**
- Create `app/services/pricing.py` with B-S model
- Add QuantLib integration (optional dependency)
- Cache calculations for performance
**Dependencies:** DATA-002
---
## Phase 2: Portfolio & Risk (Core Features)
### PORT-001: Portfolio State Management [P0, M] **[depends: DATA-001]**
**As a** user, **I want** to configure my actual portfolio (gold value, loan amount) **so that** LTV calculations match my real position.
**Acceptance Criteria:**
- Settings page with editable portfolio parameters
- Store config in Redis/database
- Validate LTV < 100%
- Show current vs recommended collateral
**Technical Notes:**
- Extend `app/pages/settings.py`
- Create `app/models/portfolio.py` with Pydantic models
- Add persistence layer (Redis JSON or SQLite)
**Dependencies:** DATA-001
---
### PORT-002: Alert Notifications [P1, M] **[depends: PORT-001]**
**As a** risk manager, **I want** alerts when LTV approaches margin call thresholds **so that** I can take action before liquidation.
**Acceptance Criteria:**
- Configurable alert thresholds (default: 70%, 75%)
- Browser push notifications
- Email notifications (optional)
- Alert history log
**Technical Notes:**
- Create `app/services/alerts.py`
- Integrate browser notifications API
- Add `/api/alerts/configure` endpoint
- Background task to check thresholds
**Dependencies:** PORT-001
---
### PORT-003: Historical LTV Chart [P2, M] **[depends: PORT-001]**
**As a** user, **I want** to see my LTV history over time **so that** I can identify trends and stress periods.
**Acceptance Criteria:**
- Store LTV snapshots every hour
- Display 7/30/90 day charts
- Show margin call threshold line
- Export data as CSV
**Technical Notes:**
- Create `app/services/history.py`
- Use TimescaleDB or Redis TimeSeries (optional)
- Integrate with existing chart components
**Dependencies:** PORT-001
---
## Phase 3: Strategy Execution (Advanced Features)
### EXEC-001: Strategy Builder [P1, L] **[depends: DATA-003]**
**As a** trader, **I want** to build and compare hedging strategies **so that** I can choose optimal protection.
**Acceptance Criteria:**
- Select strategy type (protective put, collar, laddered)
- Choose strikes and expirations
- See P&L payoff diagrams
- Compare cost vs protection level
**Technical Notes:**
- Extend `app/pages/hedge.py`
- Create `app/services/strategy_builder.py`
- Add payoff chart visualization
- Store strategy templates
**Dependencies:** DATA-003
---
### EXEC-002: IBKR Order Integration [P2, L] **[depends: EXEC-001]**
**As an** authorized user, **I want** to execute hedge trades directly from the dashboard **so that** I can act quickly on recommendations.
**Acceptance Criteria:**
- IBKR API connection (paper trading first)
- Preview order with estimated fill
- One-click execution
- Order tracking and status updates
**Technical Notes:**
- Create `app/services/broker.py` with IBKR API
- Add paper/live mode toggle
- Store credentials securely
- Order audit log
**Dependencies:** EXEC-001
---
### EXEC-003: Position Monitoring [P2, M] **[depends: EXEC-002]**
**As a** portfolio manager, **I want** to see my open hedge positions **so that** I know my current protection status.
**Acceptance Criteria:**
- Display open options positions
- Show expiration countdown
- Calculate net Greeks exposure
- Alert on approaching expiration
**Technical Notes:**
- Create positions table/view
- Sync with IBKR positions
- Update portfolio Greeks calculation
**Dependencies:** EXEC-002
---
## Phase 4: Reporting & Analytics (Polish)
### RPT-001: Strategy Report Generation [P3, M] **[depends: EXEC-001]**
**As a** compliance officer, **I want** PDF reports of hedging decisions **so that** I can document risk management.
**Acceptance Criteria:**
- Generate PDF with strategy rationale
- Include P&L scenarios
- Date range selection
- Export to email/share
**Technical Notes:**
- Use reportlab or weasyprint
- Create `app/services/reporting.py`
- Add download endpoint
**Dependencies:** EXEC-001
---
### RPT-002: What-If Analysis [P3, L] **[depends: DATA-003]**
**As a** risk manager, **I want** to simulate gold price drops **so that** I can stress test my protection.
**Acceptance Criteria:**
- Slider to adjust gold price scenarios (-10%, -20%, etc.)
- Show portfolio P&L impact
- Display hedge payoff under scenarios
- Compare protected vs unprotected
**Technical Notes:**
- Extend strategy builder with scenario mode
- Add sensitivity analysis
- Interactive chart updates
**Dependencies:** DATA-003
---
## Dependency Graph
```
DATA-001 (Price Feed)
├── DATA-002 (Options Chain)
│   ├── DATA-003 (Greeks)
│   │   ├── EXEC-001 (Strategy Builder)
│   │   │   ├── EXEC-002 (IBKR Orders)
│   │   │   │   └── EXEC-003 (Position Monitoring)
│   │   │   └── RPT-001 (Reports)
│   │   └── RPT-002 (What-If)
│   └── PORT-001 (Portfolio Config)
│       ├── PORT-002 (Alerts)
│       └── PORT-003 (History)
```
---
## Implementation Priority Queue
1. **DATA-001** - Unblock all other features
2. **PORT-001** - Enable user-specific calculations
3. **DATA-002** - Core options data
4. **DATA-003** - Risk metrics
5. **PORT-002** - Risk management safety
6. **EXEC-001** - Core user workflow
7. **EXEC-002** - Execution capability
8. Remaining features

---
**File:** `docs/roadmap/ROADMAP.yaml` (new, 117 lines)
version: 1
updated_at: 2026-04-07
structure:
  backlog_dir: docs/roadmap/backlog
  in_progress_dir: docs/roadmap/in-progress
  done_dir: docs/roadmap/done
  blocked_dir: docs/roadmap/blocked
  cancelled_dir: docs/roadmap/cancelled
notes:
- The roadmap source of truth is this index plus the per-task YAML files in the status folders.
- One task lives in one YAML file and changes state by moving between status folders.
- Priority ordering is maintained here so agents can parse one short file first.
- Pre-alpha policy: we may cut or replace old features without backward compatibility until alpha is declared.
- Alpha migration policy: once alpha is declared, compatibility only needs to move forward; backward migrations are not required.
priority_queue:
- DATA-DB-007
- DATA-002A
- DATA-001A
- DATA-DB-005
- OPS-001
- BT-003
- BT-002A
- GCF-001
- DATA-DB-006
- EXEC-002
recently_completed:
- UX-002
- BT-004
- BT-005
- CORE-003
- CONV-001
- DATA-DB-004
- PORTFOLIO-003
- PORTFOLIO-002
- DISPLAY-002
- PRICING-002
- BT-001C
- BT-002
- PORT-003
- BT-003B
- CORE-001D
- CORE-001D3C
- CORE-001D2D
- CORE-001D2C
- CORE-001D3B
- CORE-001D3A
- UX-001
- CORE-002
- CORE-002C
- CORE-001D2B
- CORE-001D2A
- CORE-002B
states:
  backlog:
    - DATA-DB-007
    - DATA-DB-005
    - DATA-DB-006
    - DATA-002A
    - DATA-001A
    - OPS-001
    - BT-003
    - BT-002A
    - GCF-001
    - EXEC-002
  in_progress: []
  done:
    - BT-004
    - BT-005
    - CORE-003
    - CONV-001
    - DATA-DB-003
    - DATA-DB-004
    - DATA-DB-002
    - DATA-DB-001
    - PORTFOLIO-003
    - PORTFOLIO-002
    - DISPLAY-002
    - PRICING-003
    - PRICING-002
    - PRICING-001
    - DATA-001
    - DATA-002
    - DATA-003
    - PORT-001
    - PORT-001A
    - PORT-002
    - PORT-003
    - PORT-004
    - SEC-001
    - SEC-001A
    - EXEC-001A
    - EXEC-001
    - BT-001
    - BT-001A
    - BT-001C
    - BT-002
    - BT-003A
    - BT-003B
    - CORE-001A
    - CORE-001B
    - CORE-001C
    - CORE-001D
    - CORE-001D2A
    - CORE-001D2B
    - CORE-001D2C
    - CORE-001D2D
    - CORE-001D3A
    - CORE-001D3B
    - CORE-001D3C
    - CORE-002
    - CORE-002A
    - CORE-002B
    - CORE-002C
    - UX-001
    - UX-002
  blocked: []
  cancelled: []

---
id: BT-002A
title: Snapshot Ingestion and Listed Contract Sizing
status: backlog
priority: P3
effort: M
depends_on:
- BT-002
tags:
- backtesting
- data
summary: Extend BT-002 from provider support to file-backed/external snapshot ingestion and listed-contract sizing semantics.
acceptance_criteria:
- Historical snapshot data can be loaded from a documented file-backed or external source, not only injected in-memory fixtures.
- Snapshot-backed runs can size positions in listed contract units with explicit contract-size rounding rules.
- Snapshot data-quality warnings and incomplete-run behavior are persisted/reportable, not only template-local warnings.
- Provider configuration and snapshot-source assumptions are documented for reproducible runs.

---
id: BT-003
title: Selloff Event Comparison Report
status: backlog
priority: P2
effort: M
depends_on:
- BT-001
tags: [backtesting, events]
summary: Rank named strategies across historical selloff events.
acceptance_criteria:
- Event presets define named windows.
- Reports rank strategies by survival, max LTV, cost, and final equity.
- UI can show unhedged vs hedged path comparisons.

---
id: CORE-002B
title: Hedge and Strategy Runtime Quote Unit Rollout
status: backlog
priority: P0
effort: M
depends_on:
- CORE-002A
- CORE-001B
tags:
- core
- units
- hedge
- pricing
summary: Apply explicit instrument-aware quote-unit conversions to the next visible ounce-based hedge/runtime paths so they no longer assume ounce-native spot prices.
acceptance_criteria:
- Hedge/runtime displays that consume live or configured GLD spots use explicit share->ozt conversions where needed.
- Visible strategy/hedge labels distinguish converted collateral spot from raw share quotes when relevant.
- Unsupported or missing quote-unit metadata fails closed rather than silently applying raw share prices as ounce prices.
- Tests cover the changed hedge/runtime math and browser-visible route behavior.
technical_notes:
- Likely file targets include `app/pages/hedge.py`, `app/pages/common.py`, and any service/helpers feeding hedge summary/runtime spot values.
- Reuse the new instrument metadata seam rather than introducing new ad hoc scale factors.
- Keep backtesting/event share-based paths compatible while tightening visible ounce-based paths.

---
name: CORE-003 Mypy Type Safety
description: |
Fix all mypy type errors to enable strict type checking in CI.
Currently 42 errors in 15 files. The CI uses `|| true` to allow warnings,
but we should fix these properly with strong types and conversion functions.
status: backlog
priority: medium
created_at: 2026-03-29
dependencies: []
acceptance_criteria:
- mypy passes with 0 errors on app/core app/models app/strategies app/services
- CI type-check job passes without `|| true`
- All type narrowing uses proper patterns (properties, cast(), or isinstance checks)
- No duplicate method definitions
scope:
  in_scope:
    - Fix type errors in app/domain/units.py
    - Fix type errors in app/domain/portfolio_math.py
    - Fix type errors in app/models/portfolio.py
    - Fix type errors in app/domain/backtesting_math.py
    - Fix type errors in app/domain/instruments.py
    - Fix type errors in app/services/*.py
    - Fix type errors in app/pages/*.py
    - Remove `|| true` from type-check job in CI
  out_of_scope:
    - Adding new type annotations to previously untyped code
    - Refactoring business logic
files_with_errors:
  - file: app/domain/units.py
    errors: 6
    pattern: '"WeightUnit | str" not narrowed after __post_init__'
    fix: Use _unit_typed property for type-narrowed access
  - file: app/models/portfolio.py
    errors: 1
    pattern: "Duplicate _serialize_value definition"
    fix: Remove duplicate method definition
  - file: app/domain/backtesting_math.py
    errors: 1
    pattern: "assert_currency argument type"
    fix: Use Money.assert_currency properly or add type narrowing
  - file: app/domain/instruments.py
    errors: 1
    pattern: "to_unit argument type"
    fix: Use _unit_typed property or explicit coercion
  - file: app/domain/portfolio_math.py
    errors: 11
    pattern: "float(object), Weight | Money union, dict type mismatch"
    fix: Add proper type guards and conversion functions
  - file: app/services/backtesting/ui_service.py
    errors: 2
    pattern: "Provider type mismatch, YFinance vs Databento source"
    fix: Use proper union types for provider interface
  - file: app/services/event_comparison_ui.py
    errors: 1
    pattern: "FixtureBoundSyntheticHistoricalProvider type"
    fix: Update type annotations for provider hierarchy
  - file: app/services/cache.py
    errors: 1
    pattern: "str | None to Redis URL"
    fix: Add None check or use assertion
  - file: app/services/price_feed.py
    errors: 2
    pattern: "float(object)"
    fix: Add explicit type coercion
  - file: app/pages/settings.py
    errors: 1
    pattern: "Return value on ui.button scope"
    fix: Proper return type annotation
implementation_notes: |
The root cause is that frozen dataclass fields declared with a union type
(e.g. `unit: WeightUnit | str`) are not narrowed by `__post_init__`
coercion: mypy sees the declared type, not the runtime type.
Solutions:
1. Add `@property def _field_typed(self) -> NarrowType:` for internal use
2. Use `cast(NarrowType, self.field)` at call sites
3. Use `isinstance` checks before operations requiring the narrow type
Pattern example from the units.py fix:
```python
@property
def _unit_typed(self) -> WeightUnit:
    """Type-narrowed unit accessor for internal use."""
    return self.unit  # type: ignore[return-value]

def to_unit(self, unit: WeightUnit) -> Weight:
    return Weight(amount=convert_weight(self.amount, self._unit_typed, unit), unit=unit)
```
estimated_effort: 4-6 hours
tags:
- type-safety
- mypy
- technical-debt
- ci-quality

View File

@@ -0,0 +1,15 @@
id: DATA-001A
title: Live Overview Price Wiring
status: backlog
priority: P0
effort: S
depends_on:
- DATA-001
- PORT-001
tags: [overview, pricing]
summary: Use the live price service directly on the overview page.
acceptance_criteria:
- Overview uses live quote data instead of a hardcoded spot.
- Source and last-updated metadata are displayed.
- Margin-call and LTV values use configured portfolio inputs.
- Browser test verifies visible live data metadata.

View File

@@ -0,0 +1,15 @@
id: DATA-002A
title: Lazy Options Loading
status: backlog
priority: P0
effort: S
depends_on:
- DATA-002
tags: [options, performance]
summary: Render the options page quickly by loading only the minimum data initially.
acceptance_criteria:
- Initial page load fetches expirations plus one default expiry chain.
- Changing expiry fetches only that expiry on demand.
- Browser test verifies /options becomes visible quickly with no visible runtime error.
technical_notes:
- Keep initial render fast and move additional data loading behind user selection.
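
The on-demand loading described above can be sketched as a small memoizing loader; `fetch_chain` here is a hypothetical callable standing in for the real per-expiry fetch:

```python
from typing import Callable


class LazyOptionsLoader:
    """Fetch only the expirations list plus one default chain up front;
    load other expiries on demand and memoize them."""

    def __init__(self, expirations: list[str],
                 fetch_chain: Callable[[str], list[dict]]) -> None:
        self.expirations = expirations
        self._fetch_chain = fetch_chain
        self._chains: dict[str, list[dict]] = {}
        if expirations:
            # Initial render needs exactly one chain: the nearest expiry.
            self.get_chain(expirations[0])

    def get_chain(self, expiry: str) -> list[dict]:
        # Changing expiry in the UI triggers at most one fetch per expiry.
        if expiry not in self._chains:
            self._chains[expiry] = self._fetch_chain(expiry)
        return self._chains[expiry]
```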

View File

@@ -0,0 +1,48 @@
id: DATA-DB-005
title: Scenario Pre-Seeding from Bulk Downloads
status: backlog
priority: medium
dependencies:
- DATA-DB-001
estimated_effort: 1-2 days
created: 2026-03-28
updated: 2026-03-28
description: |
Create pre-configured scenario presets for gold hedging research and implement
bulk download capability to pre-seed event comparison pages. This allows quick
testing against historical events without per-event data fetching.
acceptance_criteria:
- Default presets include COVID crash, rate hike cycle, gold rally events
- Bulk download script fetches all preset data
- Presets stored in config file (JSON/YAML)
- Event comparison page shows preset data availability
- One-click "Download All Presets" button
- Progress indicator during bulk download
implementation_notes: |
Default presets:
- GLD March 2020 COVID Crash (extreme volatility)
- GLD 2022 Rate Hike Cycle (full year)
- GC=F 2024 Gold Rally (futures data)
Bulk download flow:
1. Create batch job for each preset
2. Show progress per preset
3. Store in cache directory
4. Update preset availability status
Preset format:
- preset_id: unique identifier
- display_name: human-readable name
- symbol: GLD, GC, etc.
- dataset: Databento dataset
- window_start/end: date range
- default_start_price: first close
- default_templates: hedging strategies
- event_type: crash, rally, rate_cycle
- tags: for filtering
dependencies_detail:
- DATA-DB-001: Needs cache infrastructure
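
A minimal sketch of the preset format above as a frozen dataclass; the field names follow the list in the implementation_notes, while the example values (dataset id, start price) are placeholders, not real data:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ScenarioPreset:
    """One entry of the preset config file (JSON/YAML) described above."""
    preset_id: str
    display_name: str
    symbol: str
    dataset: str
    window_start: str
    window_end: str
    default_start_price: float
    default_templates: list[str] = field(default_factory=list)
    event_type: str = "crash"  # crash, rally, rate_cycle
    tags: list[str] = field(default_factory=list)


COVID_CRASH = ScenarioPreset(
    preset_id="gld-2020-covid-crash",
    display_name="GLD March 2020 COVID Crash",
    symbol="GLD",
    dataset="EXAMPLE.DATASET",   # placeholder Databento dataset id
    window_start="2020-02-15",
    window_end="2020-04-15",
    default_start_price=0.0,     # placeholder; real value is the first close
    default_templates=["protective_put"],
    event_type="crash",
    tags=["extreme-volatility"],
)
```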

View File

@@ -0,0 +1,46 @@
id: DATA-DB-006
title: Databento Options Data Source
status: backlog
priority: low
dependencies:
- DATA-DB-001
estimated_effort: 3-5 days
created: 2026-03-28
updated: 2026-03-28
description: |
Implement historical options data source using Databento's OPRA.PILLAR dataset.
This enables historical options chain lookups for accurate backtesting with
real options prices, replacing synthetic Black-Scholes pricing.
acceptance_criteria:
- DatabentoOptionSnapshotSource implements OptionSnapshotSource protocol
- OPRA.PILLAR dataset used for GLD/SPY options
- Option chain lookup by snapshot_date and symbol
- Strike and expiry filtering supported
- Cached per-date for efficiency
- Fallback to synthetic pricing when data unavailable
implementation_notes: |
OPRA.PILLAR provides consolidated options data from all US options exchanges.
Key challenges:
1. OPRA data volume is large - need efficient caching
2. Option symbology differs from regular symbols
3. Need strike/expiry resolution in symbology
Implementation approach:
- Use 'definition' schema to get instrument metadata
- Use 'trades' or 'ohlcv-1d' for price history
- Cache per (symbol, expiration, strike, option_type, date)
- Use continuous contracts for futures options (GC=F)
Symbology:
- GLD options: Use underlying symbol "GLD" with OPRA
- GC options: Use parent symbology "GC" for continuous contracts
This is a future enhancement - not required for initial backtesting
which uses synthetic Black-Scholes pricing.
dependencies_detail:
- DATA-DB-001: Needs base cache infrastructure
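
The per-(symbol, expiration, strike, option_type, date) cache entry mentioned above implies a deterministic key; a minimal sketch, where the key layout is an assumption of this note rather than anything Databento prescribes:

```python
from datetime import date


def option_cache_key(symbol: str, expiration: date, strike: float,
                     option_type: str, snapshot_date: date) -> str:
    """Deterministic per-contract, per-date cache key.

    Strike is rendered with fixed decimals so 150 and 150.0 map to the
    same cache entry.
    """
    if option_type not in ("C", "P"):
        raise ValueError(f"option_type must be 'C' or 'P', got {option_type!r}")
    return (f"{symbol.upper()}:{expiration.isoformat()}:"
            f"{strike:.2f}:{option_type}:{snapshot_date.isoformat()}")
```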

View File

@@ -0,0 +1,20 @@
id: DATA-DB-007
title: Databento GC Contract Mapping for Backtests
status: backlog
priority: P1
effort: M
depends_on:
- DATA-DB-001
tags:
- databento
- futures
- backtests
summary: Add real Databento futures contract mapping for GC backtests so the page can support gold futures without fail-closed restrictions.
acceptance_criteria:
- Backtest-page Databento runs support GC without requiring users to know raw contract symbols.
- Contract selection or front-month rollover rules are explicit and test-covered.
- The selected contract path yields non-empty historical price data for supported windows.
- Browser validation confirms the GC path works from `/{workspace_id}/backtests` with no visible runtime error.
technical_notes:
- Current hardening work intentionally fail-closes GC on the backtest page because the raw `GC` symbol does not resolve reliably in Databento historical requests.
- Follow-up work should decide between explicit contract selection, continuous mapping, or deterministic rollover logic before re-enabling GC in the Databento path.
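
One way to make the front-month rollover rule explicit, as the technical notes suggest, is a deterministic month-code mapping. This sketch uses the standard futures month codes and the GC active contract months, and deliberately ignores first-notice-day nuances:

```python
from datetime import date

# COMEX gold (GC) active contract months and standard futures month codes.
GC_MONTHS = (2, 4, 6, 8, 10, 12)
MONTH_CODES = {2: "G", 4: "J", 6: "M", 8: "Q", 10: "V", 12: "Z"}


def gc_front_month(as_of: date) -> str:
    """Map a date to a concrete GC contract symbol, e.g. 'GCM6'.

    Simplified rule: take the first active month at or after the current
    month; real rollover logic also depends on first-notice dates.
    """
    for month in GC_MONTHS:
        if as_of.month <= month:
            return f"GC{MONTH_CODES[month]}{as_of.year % 10}"
    # Defensive only: unreachable while December is an active month.
    return f"GC{MONTH_CODES[2]}{(as_of.year + 1) % 10}"
```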

View File

@@ -0,0 +1,13 @@
id: EXEC-002
title: IBKR Order Integration
status: backlog
priority: P2
effort: L
depends_on:
- EXEC-001
tags: [broker, execution]
summary: Execute hedge trades directly from the dashboard.
acceptance_criteria:
- Support IBKR paper trading first.
- Preview order, execute, and track status.
- Securely store credentials and maintain audit history.

View File

@@ -0,0 +1,25 @@
id: GCF-001
title: GC=F Options Data Source
status: backlog
priority: P2
size: L
depends_on:
- DATA-004
tags: [data-source, options, futures]
summary: Wire GC=F futures options data for users who choose GC=F as primary underlying.
acceptance_criteria:
- GC=F underlying fetches live options chain from CME or equivalent source
- Options chain includes: strikes, expirations, bid/ask, IV, delta
- Options displayed in futures contract units (100 oz per contract)
- Strike selection in GC=F mode uses futures prices directly
- Fallback to estimated options if live data unavailable
notes:
- GC=F is COMEX Gold Futures, contract size = 100 troy oz
- Options on futures have different quoting than equity options
- May need paid data feed (CME, ICE, broker API)
- Alternative: estimate from GLD options + basis
implementation_hints:
- Add `get_gcf_options_chain()` to `DataService`
- Contract size: 100 oz per futures option
- Explore yfinance GC=F options (limited) vs paid sources
- Cache aggressively to minimize API calls
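
Since GC=F options are quoted per troy ounce with a 100 oz contract size, the futures-unit display in the acceptance criteria reduces to a small scaling helper; a minimal sketch:

```python
CONTRACT_SIZE_OZ = 100  # one COMEX GC futures option covers 100 troy oz


def gcf_strike_notional(strike_per_oz: float, contracts: int = 1) -> float:
    """Dollar notional controlled at a GC=F strike: futures options are
    quoted per troy ounce, so scale by the 100 oz contract size."""
    return strike_per_oz * CONTRACT_SIZE_OZ * contracts
```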

View File

@@ -0,0 +1,15 @@
id: OPS-001
title: Public Caddy Route for Lombard Dashboard
status: backlog
priority: P1
effort: S
depends_on: []
tags: [ops, deploy, routing]
summary: Move the production route to public HTTPS at lombard.uncloud.tech.
acceptance_criteria:
- Caddy proxies lombard.uncloud.tech to the deployment container.
- HTTPS works with a valid certificate.
- Health check succeeds through Caddy.
- Deployment docs note that vd1.uncloud.vpn was retired in favor of the public route.
technical_notes:
- Keep public-exposure controls aligned with SEC-001 Turnstile bootstrap protection.

View File

@@ -0,0 +1,9 @@
id: BT-001
title: Synthetic Historical Backtesting
status: done
priority: P1
effort: L
depends_on:
- EXEC-001A
- PORT-001
summary: Synthetic historical backtesting engine ships with deterministic and optional provider-backed paths.

View File

@@ -0,0 +1,10 @@
id: BT-001A
title: Backtest Scenario Runner UI
status: done
priority: P1
effort: M
depends_on:
- BT-001
- EXEC-001A
- PORT-001
summary: Thin read-only /backtests UI over the synthetic backtest engine.

View File

@@ -0,0 +1,18 @@
id: BT-001C
title: Shared Historical Fixture/Test Provider Cleanup
status: done
priority: P2
effort: S
depends_on:
- BT-001A
- BT-003A
tags:
- backtesting
- test-infra
summary: Deterministic historical fixture logic for browser-tested backtest UIs is now centralized behind a shared fixture source used by both `/backtests` and `/event-comparison`.
completed_notes:
- Added `app/services/backtesting/fixture_source.py` with shared seeded GLD fixture history and explicit exact-vs-bounded window policies.
- Updated `app/services/backtesting/ui_service.py` so the `/backtests` page uses the shared fixture source in exact-window mode and still fails closed outside the seeded BT-001A range.
- Updated `app/services/event_comparison_ui.py` so the `/event-comparison` page uses the same shared fixture source in bounded-window mode for preset subranges inside the seeded BT-003A fixture window.
- Added focused regression coverage in `tests/test_backtesting_fixture_source.py` proving the shared source enforces exact and bounded policies explicitly and that both page services use the centralized fixture source.
- During this implementation loop, local Docker validation stayed green on the affected historical routes: `/health` returned OK and `tests/test_e2e_playwright.py` passed against the Docker-served app.

View File

@@ -0,0 +1,20 @@
id: BT-002
title: Historical Daily Options Snapshot Provider
status: done
priority: P2
effort: L
depends_on:
- BT-001
tags:
- backtesting
- data
summary: Backtests can now use a point-in-time historical options snapshot provider with exact-contract mark-to-market instead of synthetic-only option pricing.
completed_notes:
- Added shared historical position/mark provider hooks in `app/services/backtesting/historical_provider.py` so `BacktestService` can swap provider implementations while preserving the backtest engine flow.
- Snapshot-backed runs still fail closed on `listed_contracts`; BT-002 ships observed snapshot pricing for `continuous_units` only, with listed-contract sizing explicitly deferred to `BT-002A`.
- Added `DailyOptionsSnapshotProvider` with deterministic entry-day contract selection, exact-contract mark-to-market, and explicit carry-forward warnings when later marks are missing.
- Updated `app/backtesting/engine.py` and `app/services/backtesting/service.py` so snapshot-backed runs and synthetic runs share the same scenario execution path.
- Added focused regression coverage in `tests/test_backtesting_snapshots.py` for entry-day-only selection, observed snapshot marks, and no-substitution missing-mark fallback behavior.
- Added provider/data-quality documentation in `docs/BT-002_HISTORICAL_OPTIONS_SNAPSHOT_PROVIDER.md`, including current limitations around precomputed mids, continuous-units sizing, and follow-up ingestion work.
- Docker-served browser validation still passed on the affected historical routes after the engine/provider seam changes: `/health` returned OK and `tests/test_e2e_playwright.py` passed against the local Docker app.
- While closing that browser loop, the `/{workspace_id}/event-comparison` preset-change handling was corrected to preserve user-edited underlying units and reset only the preset-driven template selection, matching the UI copy and stale-state behavior.

View File

@@ -0,0 +1,9 @@
id: BT-003A
title: Event Comparison UI Read Path
status: done
priority: P1
effort: M
depends_on:
- BT-003
- BT-001A
summary: Thin read-only /event-comparison UI over EventComparisonService.

View File

@@ -0,0 +1,17 @@
id: BT-003B
title: Event Comparison Drilldown
status: done
priority: P1
effort: M
depends_on:
- BT-003A
tags:
- backtesting
- ui
summary: The event comparison page now explains why one ranked strategy beat another by exposing a selectable drilldown over the ranked results.
completed_notes:
- Added service-backed drilldown models in `app/services/event_comparison_ui.py` for ranked strategy selection, worst-LTV inspection, breach dates, and daily path rows.
- Updated `app/pages/event_comparison.py` to render a `Strategy drilldown` selector, selected-strategy summary cards, worst-LTV and breach-date highlights, and a daily path details table.
- Added regression coverage in `tests/test_event_comparison_ui.py` and extended `tests/test_e2e_playwright.py` so drilldown selection proves route-visible content changes.
- Local Docker validation is now confirmed on Docker Desktop: the stack starts cleanly, `/health` returns OK, and browser automation on the Docker-served `/{workspace_id}/event-comparison` route verified the drilldown UI and selection updates.
- The earlier Docker validation confusion was caused by a local SSH port-forward hijacking host requests to `localhost:8000`, not by the app container itself.

Some files were not shown because too many files have changed in this diff