Testing and Debugging
Use this page when a documented call does not behave the way you expect.
Debug Order
- confirm chain type
- confirm portal
- confirm auth model
- confirm required headers
- confirm whether Secure Channel is required
- confirm whether the endpoint is actually API-exposed
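The debug order above can be sketched as a small triage helper that runs the checks in sequence and stops at the first failure. The check names, context shape, and sample values below are illustrative, not part of the toolkit:

```typescript
// Illustrative triage helper: evaluates debug checks in order and
// reports the first one that fails, so earlier causes are ruled out first.
type Check = { name: string; pass: () => boolean };

function firstFailure(checks: Check[]): string | null {
  for (const check of checks) {
    if (!check.pass()) return check.name; // stop at the first failing check
  }
  return null; // every check passed
}

// Hypothetical request context; in practice, fill this in from the
// request you are debugging.
const ctx = { chain: "api", portal: "tenant", signed: true, apiExposed: false };

const result = firstFailure([
  { name: "chain type", pass: () => ctx.chain === "api" || ctx.chain === "web" },
  { name: "portal", pass: () => ["system", "tenant", "partner", "consumer"].includes(ctx.portal) },
  { name: "request signed", pass: () => ctx.signed },
  { name: "endpoint is API-exposed", pass: () => ctx.apiExposed },
]);
// result === "endpoint is API-exposed"
```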
Web Chain Checklist
- correct portal entrypoint
- valid JWT if required
- valid X-Client-Hash
- valid X-SC-Session-Id if required
- matching user role and portal permissions
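The header items in the checklist can be assembled as follows. This is a minimal sketch: the placeholder values and the exact derivation of the JWT, client hash, and Secure Channel session ID are assumptions — see the auth guides for how each is actually produced.

```typescript
// Builds the web-chain headers named in the checklist above.
// X-SC-Session-Id is included only when Secure Channel is active.
function buildWebChainHeaders(opts: {
  jwt: string;
  clientHash: string;
  scSessionId?: string; // present only for Secure Channel requests
}): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${opts.jwt}`,
    "X-Client-Hash": opts.clientHash,
  };
  if (opts.scSessionId) headers["X-SC-Session-Id"] = opts.scSessionId;
  return headers;
}

// Placeholder values — substitute real credentials for your portal.
const headers = buildWebChainHeaders({
  jwt: "<jwt>",
  clientHash: "<hash>",
  scSessionId: "<sc-session>",
});
```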
API Chain Checklist
- path starts with /api/v1/**
- request is signed correctly
- timestamp and nonce are valid
- API key has the correct scope
- endpoint supports API-key access
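To make the signing items concrete, here is a hedged sketch of timestamp-and-nonce HMAC signing. The canonical string layout and header names are assumptions for illustration only — consult the signing spec for the actual format your toolkit version expects.

```typescript
import { createHmac, randomUUID } from "node:crypto";

// Hypothetical API-chain signing sketch. The canonical string below is
// an assumed layout, NOT the documented format.
function signRequest(apiKey: string, secret: string, method: string, path: string) {
  const timestamp = Date.now().toString();
  const nonce = randomUUID(); // one-time value so replays can be rejected
  const canonical = [method.toUpperCase(), path, timestamp, nonce].join("\n");
  const signature = createHmac("sha256", secret).update(canonical).digest("hex");
  return {
    "X-Api-Key": apiKey,
    "X-Timestamp": timestamp,
    "X-Nonce": nonce,
    "X-Signature": signature,
  };
}

// Placeholder credentials; note the path starts with /api/v1/ per the checklist.
const signed = signRequest("key-id", "key-secret", "get", "/api/v1/workspaces");
```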
What To Capture In A Bug Report
- exact path
- chain type
- portal
- headers used
- whether Secure Channel was active
- HTTP status
- public error code and message
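The fields above can be captured as a small record type, one field per list item. The shape and sample values are illustrative, not a toolkit API:

```typescript
// Illustrative bug-report shape — one field per item in the list above.
interface BugReport {
  path: string;             // exact request path
  chain: "web" | "api";     // chain type
  portal: string;           // e.g. "tenant"
  headers: string[];        // header NAMES used — never include secret values
  secureChannelActive: boolean;
  httpStatus: number;
  errorCode: string;        // public error code from the response
  errorMessage: string;     // public error message from the response
}

// Sample report with placeholder values.
const report: BugReport = {
  path: "/api/v1/workspaces",
  chain: "api",
  portal: "tenant",
  headers: ["X-Api-Key", "X-Timestamp", "X-Nonce", "X-Signature"],
  secureChannelActive: false,
  httpStatus: 403,
  errorCode: "<public-error-code>",
  errorMessage: "<public-error-message>",
};
```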
Simulation Environment (doc-verify)
The doc-verify simulation environment provides full-stack E2E testing against a real backend with a real database, Redis, and RabbitMQ. Unlike mocked tests, every request hits the actual Spring Boot application with live infrastructure, so test results reflect true production behavior.
Components:
- Docker services — MySQL, Redis, RabbitMQ, Mailpit (local SMTP trap for MFA codes)
- Spring Boot backend — the full slaunchx-backend-platform application
- MFA via Mailpit — verification codes are intercepted locally instead of sent to real email providers
Location: slaunchx-api-toolkit/doc-verify/
Test framework: Vitest + TypeScript
Running E2E Tests Locally
```shell
# 1. Start the simulation environment (Docker services + backend)
cd repos/devportal/slaunchx-api-toolkit/doc-verify
BACKEND_DIR=~/workspace/repos/core/slaunchx-backend-platform bash start.sh

# 2. Seed test data (creates 4 portals with auth fixtures)
source env.doc-verify && source logs/access-code.env
npx tsx scripts/seed-portals.ts

# 3. Run all tests
npx vitest run
```

The start.sh script brings up Docker containers, waits for readiness, then launches the backend. Seeding creates users, workspaces, API keys, and other fixtures across all four portals so that every test file has the auth context it needs.
Test Coverage
- 926 E2E test cases
- 105 controllers covered across 4 portals (system, tenant, partner, consumer)
- 75 test files organized by portal and domain
Test categories include: auth, billing (recharge, exchange, withdrawal, settlement), workspace, security, file, export, profile, notifications, sandbox, secure-channel, API keys, and constants.
ExampleRecorder Pattern
E2E tests double as API documentation generators. Each test uses ExampleRecorder to capture the full request/response pair during execution. After a test run the recorded examples are flushed to data/examples/ as JSON files. These files serve as the source of truth for API behavior documentation — they prove that every documented example actually works against the real backend.
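The capture-and-flush idea can be sketched as below. This is a simplified stand-in, not the real ExampleRecorder API, and the temp directory replaces the toolkit's data/examples/ path for the sake of a self-contained example:

```typescript
import { writeFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

interface RecordedExample {
  name: string;
  request: { method: string; path: string; body?: unknown };
  response: { status: number; body: unknown };
}

// Minimal sketch of the pattern: record pairs during the test,
// flush them as JSON files afterward.
class ExampleRecorder {
  private examples: RecordedExample[] = [];

  get count(): number {
    return this.examples.length;
  }

  record(example: RecordedExample): void {
    this.examples.push(example); // capture the pair as the test executes
  }

  flush(dir: string): void {
    mkdirSync(dir, { recursive: true });
    for (const ex of this.examples) {
      // One JSON file per example (the toolkit writes to data/examples/)
      writeFileSync(join(dir, `${ex.name}.json`), JSON.stringify(ex, null, 2));
    }
  }
}

const recorder = new ExampleRecorder();
recorder.record({
  name: "workspace-list",
  request: { method: "GET", path: "/api/v1/workspaces" },
  response: { status: 200, body: { items: [] } },
});
recorder.flush(join(tmpdir(), "doc-verify-examples"));
```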
Common E2E Testing Issues
| Issue | Detail |
|---|---|
| MFA email timing | Mailpit may take 1–2 seconds to receive verification codes. Tests that fetch codes immediately after requesting them can fail intermittently. |
| Auth session caching | Sessions are cached per portal within a single test run. If a test mutates session state (e.g., changes password), subsequent tests in the same portal may see stale auth. |
| Storage unavailable | File upload tests may return HTTP 500 when MinIO is not configured. The simulation environment does not start MinIO by default. |
| Test ordering | Some security tests (IP whitelist, session termination) alter global state that can affect subsequent tests when running the full suite. Run isolated tests with npx vitest run <file> to confirm. |
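For the MFA email timing issue, polling with a short retry window is more reliable than reading the code immediately. In the sketch below the fetcher is injected so the example is self-contained; against the real environment it would query Mailpit's HTTP API (endpoint details not shown here):

```typescript
// Poll-with-retry helper: keeps asking for the MFA code until it
// arrives or the attempt budget is exhausted.
async function pollForCode(
  fetchCode: () => Promise<string | null>,
  { attempts = 10, delayMs = 500 } = {},
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const code = await fetchCode();
    if (code) return code;
    await new Promise((r) => setTimeout(r, delayMs)); // wait before retrying
  }
  throw new Error(`no MFA code after ${attempts} attempts`);
}

// Demo with a fake fetcher that "delivers" the code on the third try.
let calls = 0;
const code = await pollForCode(
  async () => (++calls >= 3 ? "482913" : null),
  { attempts: 5, delayMs: 10 },
);
// code === "482913"
```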
Read Next
- Error Model
- the domain guide for the failing flow