AI-Assisted Test Generation
Learn how to use Quadrastack's JSON schemas with AI tools like Claude, ChatGPT, and Gemini to automatically generate valid test playbooks.
One of the most powerful ways to write Quadrastack tests is to pair it with an LLM (Large Language Model). By providing your AI assistant with our official JSON schemas, you can generate syntactically perfect test playbooks, scenarios, and verification logic in seconds.
Why Use Schemas with AI?
While general-purpose AI models know about "HTTP tests," they might guess at the specific syntax of a tool they haven't seen often. Quadrastack uses a specific compact DSL for assertions (e.g., `exists`, `> 500`) and a strict YAML structure.
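To make the compact DSL concrete, here is a rough, hypothetical sketch of how an assertion string such as `exists` or `> 500` could be evaluated against a response value. This is purely illustrative — it is not Quadrastack's actual implementation, and the function name is our own:

```python
def evaluate_assertion(expr, value):
    """Evaluate a compact-DSL-style assertion string against a value.

    Supported forms (illustrative subset): 'exists', '> N', '< N', '== X'.
    """
    expr = expr.strip()
    if expr == "exists":
        # 'exists' passes as long as the field is present (non-None).
        return value is not None
    for op in (">", "<", "=="):
        if expr.startswith(op):
            operand = expr[len(op):].strip()
            if op == ">":
                return float(value) > float(operand)
            if op == "<":
                return float(value) < float(operand)
            return str(value) == operand
    raise ValueError(f"Unknown assertion: {expr}")

# A response time of 750 ms satisfies '> 500'; a missing field fails 'exists'.
print(evaluate_assertion("> 500", 750))   # True
print(evaluate_assertion("exists", None)) # False
```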
Feeding the schema to the AI ensures:
- Zero Hallucinations: The AI knows exactly which fields (`expect`, `dependsOn`, `mockServer`) exist.
- Correct Syntax: It won't invent invalid operators or YAML structures.
- Context Awareness: It understands the relationship between `requests`, `scenarios`, and `mocks`.
The Schemas
We provide public, versioned schemas that you can feed directly to your AI context.
| Schema | URL | Purpose |
|---|---|---|
| Playbook Schema | https://quadrastack.com/schemas/v1/playbook.json | Definitions for requests, mocks, workflows, and scenarios. |
| CLI Reference | https://quadrastack.com/schemas/v1/cli-reference.json | Flags and configuration options for the CLI. |
How to Use
Option A: Direct Prompting (ChatGPT, Claude, Gemini)
The easiest method is to paste the schema URL or content into your prompt before asking for code.
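If you call an LLM API from a script rather than a chat window, the same idea applies: prepend the schema reference to every prompt. A minimal sketch — the helper name and prompt wording are our own; only the schema URL comes from the table above:

```python
PLAYBOOK_SCHEMA_URL = "https://quadrastack.com/schemas/v1/playbook.json"

def build_prompt(task_description, schema_url=PLAYBOOK_SCHEMA_URL):
    """Assemble a prompt that pins the AI to the official playbook schema."""
    return (
        "I want to write a Quadrastack test playbook. "
        f"Please use the strict syntax defined in the Quadrastack "
        f"Playbook Schema: {schema_url}\n\n"
        f"Task: {task_description}"
    )

prompt = build_prompt("Create a POST request to /users and verify status 201.")
print(prompt)
```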
Prompt Template:
I want to write a Quadrastack test playbook for a User API. Please use the strict syntax defined in the Quadrastack Playbook Schema: https://quadrastack.com/schemas/v1/playbook.json
Task: Create a playbook with:
- A `POST` request to create a user (`/users`).
- A `GET` request to fetch that user, which depends on the creation step.
- Verify the GET response has `status: 200` and the expected email in the body.
Expected Output: The AI will fetch the schema (if it has web browsing) or interpret the provided structure to generate valid YAML:
```yaml
requests:
  create-user:
    method: POST
    url: https://api.example.com/users
    body: |
      { "name": "Alice", "email": "alice@example.com" }
    expect:
      status: 201

  get-user:
    method: GET
    url: https://api.example.com/users/alice
    dependsOn: create-user
    expect:
      status: 200
      body:
        $.email: "alice@example.com"
```
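AI output can still contain logical mistakes even when it is syntactically valid — for example, a `dependsOn` pointing at a request that doesn't exist. A quick structural check before running is cheap. The sketch below is our own (not part of Quadrastack); the playbook is expressed as a plain dict mirroring the YAML above:

```python
def check_dependencies(requests):
    """Return a list of problems: dependsOn references to unknown requests."""
    problems = []
    for name, spec in requests.items():
        dep = spec.get("dependsOn")
        if dep is not None and dep not in requests:
            problems.append(f"{name}: unknown dependency '{dep}'")
    return problems

playbook = {
    "create-user": {"method": "POST", "url": "https://api.example.com/users"},
    "get-user": {"method": "GET", "dependsOn": "create-user"},
}
print(check_dependencies(playbook))  # [] -> no problems found
```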
Option B: IDEs with AI (Cursor, Copilot)
If you are using an AI-native editor like Cursor or VS Code with Copilot, you can add the schema to your context window.
- Download the schema locally (e.g., `schemas/playbook.json`).
- Reference it in your chat:
  - Cursor: Type `@playbook.json` in the chat window.
  - Copilot: Open the file and reference it in your workspace context.
- Ask for generation: "Generate a load test scenario for the checkout flow defined in this file."
Example: Generating Complex Mocks
Mocks can be verbose to write by hand. AI excels at this when given the schema.
Prompt:
Using the Quadrastack schema, write a Mock Server definition that simulates a chaotic payment gateway. It should match `POST /pay`. 50% of the time it returns 200 OK. 30% of the time it returns 503 Service Unavailable. 20% of the time it delays by 5 seconds.
Result:
```yaml
mockServer:
  payment-gateway:
    listen: ":8080"

mocks:
  payment-chaos:
    server: payment-gateway
    match:
      method: POST
      path: /pay
    respond:
      - count: 5
        status: 200
        body: '{"status": "success"}'
      - count: 3
        status: 503
        body: '{"error": "maintenance"}'
      - count: 2
        status: 200
        delay: 5s
        body: '{"status": "slow-success"}'
```
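Note how the prompt's percentages map to `count` values: 5, 3, and 2 out of a total of 10, i.e., 50%/30%/20%. Exactly how Quadrastack cycles through weighted responses is defined by its runtime; as an illustration only, a round-robin reading of the counts can be sketched like this:

```python
def expand_responses(respond):
    """Flatten count-weighted responses into one round of individual responses."""
    flat = []
    for entry in respond:
        flat.extend([entry] * entry["count"])
    return flat

respond = [
    {"count": 5, "status": 200},
    {"count": 3, "status": 503},
    {"count": 2, "status": 200, "delay": "5s"},
]

one_round = expand_responses(respond)
statuses = [r["status"] for r in one_round]
print(len(one_round))        # 10 responses per round
print(statuses.count(503))   # 3 -> 30% of responses are 503
```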
Generating from OpenAPI / Swagger
A powerful workflow is to feed BOTH your OpenAPI specification (Swagger) and the Quadrastack Playbook Schema to the AI. This lets the AI scaffold your entire test suite from the spec.
Prompt:
I have attached my `openapi.json` spec for the Payment API. Using the Quadrastack Playbook Schema: https://quadrastack.com/schemas/v1/playbook.json

Task:
- Create a `mockServer` that implements the entire Payment API based on the OpenAPI spec. Use realistic examples for the responses.
- Create a test suite (`requests`) that calls every endpoint in the generated mock server to verify it returns 200 OK.
Why this is useful:
- Instant Mock Backend: You can generate a fully functional mock server for frontend development before the real backend exists.
- Coverage: Ensures every endpoint defined in your spec has a corresponding test case.
- Validation: You can ask the AI to generate edge-case tests (invalid inputs) based on the validations defined in your OpenAPI spec.
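The "one request per endpoint" part of this workflow can also be done mechanically, without an AI. A minimal sketch that walks an OpenAPI `paths` object and emits one request definition per operation, using the field names shown in the playbook examples in this guide — the base URL and naming scheme here are our own assumptions:

```python
def requests_from_openapi(spec, base_url="http://localhost:8080"):
    """Emit one request definition per OpenAPI path/method, expecting 200 OK."""
    requests = {}
    for path, methods in spec.get("paths", {}).items():
        for method in methods:
            # e.g. POST /pay -> a request named "post-pay"
            name = f"{method.lower()}{path.replace('/', '-')}"
            requests[name] = {
                "method": method.upper(),
                "url": base_url + path,
                "expect": {"status": 200},
            }
    return requests

spec = {"paths": {"/pay": {"post": {}}, "/refunds": {"get": {}, "post": {}}}}
for name, req in requests_from_openapi(spec).items():
    print(name, req["method"], req["url"])
```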
Tips for Best Results
- Be Specific about Assertions: Explicitly ask the AI to "use the Compact DSL for assertions" to get clean syntax like `count > 5` instead of complex JSON objects.
- Request Scenarios: Ask the AI to "wrap these requests in a smoke-test scenario running with concurrency 5."
- Validate: Always run `quadrastack --dry-run` on generated code to confirm it is valid before executing it.