Scenarios
Group requests and workflows using label selectors, and run load tests.
Overview
Scenarios let you:
- Select specific requests and workflows using label expressions
- Run load tests with configurable concurrency, rate, and duration
- Organize tests into logical groups (smoke, integration, etc.)
Scenario Structure
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| description | string | No | Human-readable description |
| labelSelector | string | No | Boolean expression for filtering by labels. If omitted or empty, matches all requests and workflows |
| concurrency | integer | No | Number of concurrent executions (default: 1) |
| loadProfile | object | No | Load testing configuration |
LoadProfile Fields
| Field | Type | Description |
|---|---|---|
| concurrency | integer | Number of concurrent virtual users (default: 1) |
| rate | number | Requests per second rate limit |
| duration | string | Test duration (e.g., "30s", "5m", "1h") |
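As a sketch of how these fields fit together, here is a load-test scenario using only the fields documented above (the scenario name and values are purely illustrative):

scenarios:
  checkout-load:
    description: "Sustained load against critical API endpoints"
    labelSelector: "api && critical"
    loadProfile:
      concurrency: 25     # 25 concurrent virtual users
      rate: 100           # cap throughput at 100 requests/second
      duration: "2m"      # run for two minutes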
Basic Examples
Simple Scenario
scenarios:
  smoke-tests:
    description: "Quick smoke tests for critical functionality"
    labelSelector: "smoke"
Run with:
quadrastack --scenario smoke-tests
Label Selector Syntax
Single Label
scenarios:
  api-tests:
    labelSelector: "api"   # Matches requests with "api" label
AND Operator (&&)
Both labels must be present:
scenarios:
  critical-api:
    labelSelector: "api && critical"
OR Operator (||)
Matches if either label is present:
scenarios:
  smoke-or-critical:
    labelSelector: "smoke || critical"
NOT Operator (!)
Exclude requests with a label:
scenarios:
  non-slow:
    labelSelector: "!slow"
Complex Expressions
Combine operators with parentheses:
scenarios:
  complex:
    labelSelector: "(api || web) && !slow && critical"
Complete Example
Folder structure:
my-api-tests/
├── playbook.yaml
playbook.yaml:
vars:
  default:
    baseUrl: "https://api.example.com"

requests:
  login:
    method: POST
    url: "{{.vars.baseUrl}}/auth/login"
    labels: [auth, smoke, critical]
    body:
      username: admin
      password: secret
    expect:
      status: 200

  get-users:
    method: GET
    url: "{{.vars.baseUrl}}/api/users"
    labels: [api, smoke]
    expect:
      status: 200

  admin-reset:
    method: POST
    url: "{{.vars.baseUrl}}/admin/reset"
    labels: [admin, dangerous]
    expect:
      status: 200

  slow-report:
    method: GET
    url: "{{.vars.baseUrl}}/reports/monthly"
    labels: [reports, slow]
    expect:
      status: 200

  health-check:
    method: GET
    url: "{{.vars.baseUrl}}/health"
    labels: [health, smoke, critical]
    expect:
      status: 200

scenarios:
  # Run smoke tests only
  smoke:
    description: "Quick smoke tests"
    labelSelector: "smoke"
    # Matches: login, get-users, health-check

  # Run all API tests
  api:
    description: "API endpoint tests"
    labelSelector: "api"
    # Matches: get-users

  # Critical tests only
  critical:
    description: "Critical path tests"
    labelSelector: "critical"
    # Matches: login, health-check

  # API tests but not slow ones
  fast-api:
    description: "Fast API tests"
    labelSelector: "api && !slow"
    # Matches: get-users

  # Everything that's not dangerous
  safe:
    description: "Safe tests for CI"
    labelSelector: "!dangerous"
    # Matches: login, get-users, slow-report, health-check

  # Auth OR admin
  privileged:
    description: "Privileged access tests"
    labelSelector: "auth || admin"
    # Matches: login, admin-reset

  # Everything (no filter)
  all:
    description: "Run all tests"
    labelSelector: ""
    # Matches: ALL requests
Running Scenarios
Single Scenario
quadrastack --scenario smoke
Multiple Scenarios
quadrastack --scenario smoke --scenario integration
With Profile
quadrastack --scenario smoke --profile staging
With Output Configuration
quadrastack --scenario integration --output-dir ./results --output-detail full
Concurrency
Use concurrency to run multiple requests in parallel:
scenarios:
  parallel-smoke:
    description: "Run smoke tests in parallel"
    labelSelector: "smoke"
    concurrency: 5   # Run 5 requests at a time
Free tier limit: Maximum concurrency of 10.
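The scenario runs like any other, selected by name (here using the parallel-smoke definition above):

quadrastack --scenario parallel-smoke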
Load Testing
Add loadProfile to run continuous load tests.
Basic Load Test
scenarios:
  basic-load:
    description: "Basic load test"
    labelSelector: "api"
    loadProfile:
      concurrency: 50   # 50 concurrent users
      duration: "2m"    # Run for 2 minutes
Note: The free tier is limited to 10 concurrent users and does not support the rate or duration settings. To run high-scale load tests, you need a Pro or Business license. See Plans & Features.
Rate-Limited Load Test
scenarios:
  rate-limited:
    description: "Rate-limited load test"
    labelSelector: "api"
    loadProfile:
      concurrency: 100   # 100 concurrent users
      rate: 500          # Max 500 requests/second
      duration: "5m"     # Run for 5 minutes
Progressive Load Test Scenarios
Define multiple scenarios for different load levels:
scenarios:
  # Baseline - light load
  baseline:
    description: "Baseline performance"
    labelSelector: "api && !slow"
    loadProfile:
      concurrency: 10
      duration: "1m"

  # Normal load
  normal-load:
    description: "Normal traffic simulation"
    labelSelector: "api"
    loadProfile:
      concurrency: 100
      rate: 500
      duration: "5m"

  # Stress test
  stress-test:
    description: "Find system limits"
    labelSelector: "api && critical"
    loadProfile:
      concurrency: 1000
      rate: 5000
      duration: "10m"

  # Spike test
  spike:
    description: "Sudden traffic spike"
    labelSelector: "api"
    loadProfile:
      concurrency: 500
      rate: 2000
      duration: "30s"
Run progressively:
# Start with baseline
quadrastack --scenario baseline
# Then normal load
quadrastack --scenario normal-load
# Then stress test
quadrastack --scenario stress-test
Duration Format
Duration values support these units:
| Unit | Example | Description |
|---|---|---|
| s | 30s | Seconds |
| m | 5m | Minutes |
| h | 2h | Hours |
| ms | 500ms | Milliseconds |
loadProfile:
  duration: "30s"   # 30 seconds
  duration: "5m"    # 5 minutes
  duration: "2h"    # 2 hours
Common Patterns
Test Pyramid
scenarios:
  # Unit-level API tests (fast, many)
  unit:
    description: "Unit-level tests"
    labelSelector: "unit"

  # Integration tests (medium speed, fewer)
  integration:
    description: "Integration tests"
    labelSelector: "integration"

  # E2E tests (slow, few)
  e2e:
    description: "End-to-end tests"
    labelSelector: "e2e"
By Feature Area
scenarios:
  auth-tests:
    labelSelector: "auth"
  user-tests:
    labelSelector: "users"
  payment-tests:
    labelSelector: "payments"
  reporting-tests:
    labelSelector: "reports"
By Priority
scenarios:
  critical:
    description: "Critical path tests"
    labelSelector: "critical"
  high-priority:
    description: "High priority tests"
    labelSelector: "high || critical"
  all-priorities:
    description: "All tests"
    labelSelector: ""
By Speed
scenarios:
  fast:
    description: "Fast tests only"
    labelSelector: "!slow"
  slow:
    description: "Slow tests"
    labelSelector: "slow"
  all:
    description: "All tests"
    labelSelector: ""
Environment-Specific
scenarios:
  dev-smoke:
    description: "Quick tests for development"
    labelSelector: "smoke && !slow"
  staging-full:
    description: "Complete test suite for staging"
    labelSelector: "!experimental"
  prod-health:
    description: "Production health checks"
    labelSelector: "health || monitoring"
Best Practices
1. Use Meaningful Labels
# Bad
labels: [test1, a, x]
# Good
labels: [auth, critical, smoke]
2. Create Hierarchical Labels
requests:
  user-login:
    labels: [api, auth, users, smoke, critical]
    # Can select by: api, auth, users, smoke, critical, or combinations
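Scenarios can then slice the suite along any of those axes; the selectors below are illustrative combinations that would match the request above:

scenarios:
  critical-auth:
    labelSelector: "auth && critical"
  user-smoke:
    labelSelector: "users && smoke"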
3. Start Small with Load Tests
# Start small
scenarios:
  load-baseline:
    loadProfile:
      concurrency: 10
      duration: "1m"

# Gradually increase
scenarios:
  load-medium:
    loadProfile:
      concurrency: 100
      duration: "5m"
4. Document Your Scenarios
scenarios:
  production-health:
    description: |
      Critical health checks for production.
      Runs every 5 minutes via cron.
      Should complete in < 30 seconds.
    labelSelector: "health && critical"
5. Use Fail-Fast Strategy
Run critical tests first:
# Run critical tests first
quadrastack --scenario critical
# Then full suite
quadrastack --scenario all-tests
Free Tier Limitations
| Feature | Free Tier | Pro/Business |
|---|---|---|
| Concurrency | Max 10 | Unlimited |
| Rate limiting | Not supported | Supported |
| Duration-based load tests | Not supported | Supported |
| Parallel scenarios | Sequential only | Parallel |
See Also
- Requests - Tagging requests with labels
- Workflows - Multi-step request sequences
- CLI Reference - Using --scenario flag
- Load Testing - Load testing guide