AI Tools Compared

Writing OpenAPI specs by hand is tedious and error-prone. For a moderately complex API — 20 endpoints, nested schemas, security definitions — manually writing the YAML takes hours and still misses edge cases. AI tools have changed this significantly. Claude, GitHub Copilot, and specialized tools like Speakeasy can generate production-quality OpenAPI 3.1 specs from existing code or natural language descriptions. This guide compares them with real examples.

Best AI Tools for Generating OpenAPI Specs

OpenAPI 3.1 specifications are essential for API documentation and code generation. AI tools can generate accurate specs directly from code, validate response schemas, and catch missing security definitions before they reach production.
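
Gaps like missing security definitions are also easy to lint for yourself once a spec exists. The stdlib-only Python sketch below is a hypothetical helper (not part of any tool in this guide); it assumes the spec has already been parsed into a dict, for example via json.load on an openapi.json file, and flags operations that declare no security requirement:

```python
# Minimal sketch: flag operations that declare no security requirement.
# Assumes `spec` is a parsed OpenAPI document (dict), e.g. from json.load.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def find_unsecured_operations(spec: dict) -> list[str]:
    """Return 'METHOD path' strings for operations with no security.

    Note: an operation-level `security: []` legitimately means "no auth";
    this sketch flags it anyway and leaves the judgment to the reviewer.
    """
    has_global = bool(spec.get("security"))
    unsecured = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue
            # Operation-level `security` overrides the document default.
            if not op.get("security") and not has_global:
                unsecured.append(f"{method.upper()} {path}")
    return unsecured

spec = {
    "paths": {
        "/orders": {"post": {"security": [{"bearerAuth": []}]}},
        "/health": {"get": {"summary": "Liveness probe"}},
    }
}
print(find_unsecured_operations(spec))  # ['GET /health']
```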

FastAPI Automatic Generation

FastAPI generates OpenAPI specs automatically, but you still need to add proper docstrings and response models. Claude excels at this:

Prompt:

Write a FastAPI endpoint for fetching user profiles.
Include Pydantic request/response models, proper error handling,
and full OpenAPI documentation with examples.
Users endpoint: GET /api/v1/users/{user_id}
Responses: 200 (User object), 404 (not found), 401 (unauthorized)

Claude output:

from fastapi import FastAPI, HTTPException, Path, status
from pydantic import BaseModel, Field
from typing import Optional

app = FastAPI()

class User(BaseModel):
    id: int = Field(..., description="User ID")
    name: str = Field(..., min_length=1, max_length=255)
    email: str = Field(..., pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$')
    plan: Optional[str] = Field(None, description="Plan type: free, pro, enterprise")
    created_at: str = Field(..., description="ISO 8601 timestamp")

    class Config:
        json_schema_extra = {
            "example": {
                "id": 1,
                "name": "Alice",
                "email": "alice@example.com",
                "plan": "pro",
                "created_at": "2026-01-15T10:30:00Z"
            }
        }

@app.get(
    "/api/v1/users/{user_id}",
    response_model=User,
    status_code=status.HTTP_200_OK,
    tags=["users"],
    summary="Retrieve a user by ID",
    responses={
        200: {"description": "User found"},
        404: {"description": "User not found"},
        401: {"description": "Unauthorized"},
    }
)
async def get_user(user_id: int = Path(..., gt=0, description="User ID")) -> User:  # Path, not Field, for path parameters
    """
    Fetch a user profile by their ID.

    Returns the user object with profile information.
    Requires authentication via Authorization header.
    """
    if user_id == 999:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="User not found"
        )
    return User(
        id=user_id,
        name="Alice",
        email="alice@example.com",
        plan="pro",
        created_at="2026-01-15T10:30:00Z"
    )

FastAPI auto-generates the OpenAPI spec from these models and docstrings. Access it at /openapi.json.

Express.js with swagger-jsdoc

Prompt:

Write an Express.js endpoint with Swagger/OpenAPI documentation.
Endpoint: POST /api/v1/orders
Request: { user_id, items, shipping_address }
Response: 201 { order_id, total, status }
Include validation, error responses, and security requirements.

Claude output:

import express from 'express';
import swaggerJsdoc from 'swagger-jsdoc';

const app = express();
app.use(express.json());

/**
 * @swagger
 * /api/v1/orders:
 *   post:
 *     summary: Create a new order
 *     tags:
 *       - orders
 *     security:
 *       - bearerAuth: []
 *     requestBody:
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             type: object
 *             required:
 *               - user_id
 *               - items
 *               - shipping_address
 *             properties:
 *               user_id:
 *                 type: integer
 *                 description: User ID
 *               items:
 *                 type: array
 *                 items:
 *                   type: object
 *                   properties:
 *                     product_id:
 *                       type: integer
 *                     quantity:
 *                       type: integer
 *                       minimum: 1
 *               shipping_address:
 *                 type: string
 *                 minLength: 5
 *     responses:
 *       201:
 *         description: Order created successfully
 *         content:
 *           application/json:
 *             schema:
 *               type: object
 *               properties:
 *                 order_id:
 *                   type: string
 *                   example: "ORD-2026-001"
 *                 total:
 *                   type: number
 *                   example: 99.99
 *                 status:
 *                   type: string
 *                   enum: [pending, confirmed, shipped]
 *       400:
 *         description: Invalid request
 *       401:
 *         description: Unauthorized
 *       500:
 *         description: Server error
 */
app.post('/api/v1/orders', (req, res) => {
  const { user_id, items, shipping_address } = req.body;

  if (!user_id || !items || items.length === 0 || !shipping_address) {
    return res.status(400).json({ error: 'Missing required fields' });
  }

  res.status(201).json({
    order_id: `ORD-${Date.now()}`,
    total: 99.99,
    status: 'pending'
  });
});

const swaggerDef = {
  definition: {
    openapi: '3.1.0',
    info: {
      title: 'Orders API',
      version: '1.0.0'
    },
    components: {
      securitySchemes: {
        bearerAuth: {
          type: 'http',
          scheme: 'bearer'
        }
      }
    }
  },
  apis: ['./routes/*.js']
};

export const swaggerSpec = swaggerJsdoc(swaggerDef);

Go with swag

Prompt:

Write a Go handler using gin-gonic that fetches products.
Endpoint: GET /api/v1/products?limit=10&offset=0
Response: { total, items: [ { id, name, price } ] }
Include swag comments for OpenAPI generation.

Claude output:

package handlers

import (
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
)

type Product struct {
	ID    int     `json:"id" example:"123"`
	Name  string  `json:"name" example:"Widget"`
	Price float64 `json:"price" example:"29.99"`
}

type ProductList struct {
	Total int       `json:"total" example:"100"`
	Items []Product `json:"items"`
}

// ErrorResponse is referenced by the @Failure annotations below.
type ErrorResponse struct {
	Error string `json:"error" example:"invalid request"`
}

// GetProducts retrieves a paginated list of products
// @Summary List products
// @Description Get all products with pagination
// @Tags products
// @Accept json
// @Produce json
// @Param limit query int false "Limit" default(10) maximum(100)
// @Param offset query int false "Offset" default(0) minimum(0)
// @Success 200 {object} ProductList
// @Failure 400 {object} ErrorResponse
// @Failure 401 {object} ErrorResponse
// @Router /api/v1/products [get]
// @Security ApiKeyAuth
func GetProducts(c *gin.Context) {
	limit, offset := 10, 0

	if l, err := strconv.Atoi(c.Query("limit")); err == nil && l > 0 && l <= 100 {
		limit = l
	}
	if o, err := strconv.Atoi(c.Query("offset")); err == nil && o >= 0 {
		offset = o
	}
	_ = offset // a real handler would pass limit/offset to the data layer

	items := make([]Product, 0, limit)
	items = append(items, Product{ID: 1, Name: "Widget", Price: 29.99})

	c.JSON(http.StatusOK, ProductList{
		Total: 100,
		Items: items,
	})
}

Run swag init to auto-generate the OpenAPI spec from these comments.

Tool Comparison

Aspect                     | Claude                      | ChatGPT                | Speakeasy
FastAPI generation         | Excellent (Pydantic models) | Good but less detailed | N/A (code-first)
Express.js JSDoc           | Very accurate               | Mostly correct         | N/A
Go swag comments           | Precise                     | Sometimes incomplete   | Excellent
Schema validation          | Catches missing fields      | Sometimes misses       | Thorough validation
Security (bearer, API key) | Includes correctly          | Inconsistent           | Built-in
Response examples          | Detailed                    | Minimal                | Auto-extracted
Pricing                    | Free (Claude Opus $20/mo)   | Free tier + $20/mo Pro | ~$10/request

Common Mistakes

Generated specs tend to fail in a handful of recurring ways. Claude avoids the most common ones:

- Omitting security requirements on authenticated endpoints
- Documenting only the success response and skipping 4xx/5xx cases
- Inlining the same schema repeatedly instead of sharing it via $ref
- Listing fields as required that never appear in properties


Claude: Best for Complex Schema Generation

Claude excels at understanding code context and generating accurate OpenAPI specs that match what your code actually does. Feed it your controller code and it produces correct request/response schemas, including edge cases most tools miss.

FastAPI example:

# Paste this into Claude with prompt: "Generate OpenAPI 3.1 spec for this endpoint"
@app.post("/users/{user_id}/posts", response_model=PostResponse)
async def create_post(
    user_id: int,
    post: PostCreate,
    current_user: User = Depends(get_current_user),
    db: AsyncSession = Depends(get_db)
) -> PostResponse:
    """Create a new post for the specified user."""
    ...

Claude produces the full path item including parameters, request body schema derived from PostCreate, response schema from PostResponse, and the security requirement from the Depends(get_current_user) dependency.

Strengths: handles inheritance, discriminators, and nested schemas correctly; good at inferring security schemes from auth patterns; understands framework conventions (FastAPI, Django REST, Express).

Weakness: no native IDE integration for iterative spec generation; you copy-paste code in and get YAML back.

For larger APIs, provide Claude with multiple route files and your model definitions in a single prompt. Claude can handle 30-40 endpoints per session and produces consistent $ref usage across the spec when given all the models upfront. Ask it explicitly to use $ref for shared schemas rather than inlining — this produces cleaner specs that validate correctly and are easier to maintain.
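
The effect of that instruction can also be applied mechanically after the fact. The stdlib-only Python sketch below is a hypothetical post-processing helper (not a Claude feature) that hoists one inline schema into components.schemas and rewrites each occurrence as a $ref. It matches nodes by exact structural equality, so treat it as a sketch rather than a robust tool:

```python
import json

def hoist_schema(spec: dict, name: str, schema: dict) -> dict:
    """Replace every inline occurrence of `schema` with a $ref to
    #/components/schemas/<name>. Sketch: matches by exact equality."""
    target = json.dumps(schema, sort_keys=True)
    ref = {"$ref": f"#/components/schemas/{name}"}

    def walk(node):
        if isinstance(node, dict):
            if json.dumps(node, sort_keys=True) == target:
                return dict(ref)  # swap the inline schema for a $ref
            return {k: walk(v) for k, v in node.items()}
        if isinstance(node, list):
            return [walk(v) for v in node]
        return node

    result = walk(spec)
    # Register the canonical copy after walking so it is not self-replaced.
    result.setdefault("components", {}).setdefault("schemas", {})[name] = schema
    return result

user_schema = {"type": "object", "properties": {"id": {"type": "integer"}}}
spec = {"paths": {"/users": {"get": {"responses": {"200": {
    "content": {"application/json": {"schema": dict(user_schema)}}}}}}}}
out = hoist_schema(spec, "User", user_schema)
```

After the call, the inline schema under /users is replaced by {"$ref": "#/components/schemas/User"} and the definition lives once under components.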

GitHub Copilot: Best for Inline Spec Writing

Copilot works best when you start writing the spec and let it autocomplete. Open a .yaml file, type paths:, and Copilot suggests paths based on what it infers about your project from surrounding files.

# Start typing and Copilot autocompletes based on your codebase context
openapi: 3.1.0
info:
  title: User Management API
  version: 1.0.0
paths:
  /users:
    get:
      # Copilot suggests summary, parameters, responses from your model files

This works well if your codebase has consistent patterns. Copilot picks up your model names, response shapes, and error patterns from surrounding files. It is less reliable for specs that need to be strictly accurate — it sometimes generates plausible-looking but incorrect schemas.

Best use case: greenfield API design where you are writing spec-first. Copilot accelerates the typing but requires careful review.

Copilot’s autocomplete is most reliable when your model files are open in the same editor session. Keep your TypeScript interfaces or Python Pydantic models open in adjacent tabs. Copilot reads the open file context and uses it to generate schemas that match your actual types rather than guessing.

Speakeasy: Best for Generating SDKs From Specs

Speakeasy is purpose-built for the OpenAPI ecosystem. It validates your spec, generates type-safe SDKs in 8+ languages, and creates usage documentation automatically.

# Install Speakeasy CLI
brew install speakeasy-api/homebrew-tap/speakeasy

# Generate a type-safe TypeScript SDK from your spec
speakeasy generate sdk --schema ./openapi.yaml --lang typescript --out ./sdk

# Validate your spec against OpenAPI 3.1
speakeasy validate --schema ./openapi.yaml

Speakeasy is not primarily an AI tool, but its AI-assisted spec generation from code annotations is genuinely useful. The real value is downstream: once you have an accurate spec, Speakeasy generates and maintains SDKs across language updates.

Express and TypeScript Example With Claude

Express does not generate specs automatically, making AI assistance particularly valuable. Provide Claude with your route definitions and TypeScript interfaces:

// Feed this to Claude with: "Generate OpenAPI 3.1 spec for these routes"
interface CreateOrderRequest {
  userId: string;
  items: Array<{
    productId: string;
    quantity: number;
  }>;
  couponCode?: string;
}

interface OrderResponse {
  orderId: string;
  status: 'pending' | 'confirmed' | 'shipped' | 'delivered';
  total: number;
  createdAt: string;
}

router.post('/orders', authenticate, async (req: Request<{}, OrderResponse, CreateOrderRequest>, res) => {
  // ...
});

router.get('/orders/:id', authenticate, async (req: Request<{ id: string }>, res: Response<OrderResponse>) => {
  // ...
});

Claude generates:

paths:
  /orders:
    post:
      summary: Create a new order
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateOrderRequest'
      responses:
        '201':
          description: Order created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/OrderResponse'
        '401':
          $ref: '#/components/responses/Unauthorized'
  /orders/{id}:
    get:
      summary: Get order by ID
      security:
        - bearerAuth: []
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Order details
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/OrderResponse'
        '404':
          $ref: '#/components/responses/NotFound'

components:
  schemas:
    CreateOrderRequest:
      type: object
      required: [userId, items]
      properties:
        userId:
          type: string
        items:
          type: array
          minItems: 1
          items:
            type: object
            required: [productId, quantity]
            properties:
              productId:
                type: string
              quantity:
                type: integer
                minimum: 1
        couponCode:
          type: string
    OrderResponse:
      type: object
      required: [orderId, status, total, createdAt]
      properties:
        orderId:
          type: string
        status:
          type: string
          enum: [pending, confirmed, shipped, delivered]
        total:
          type: number
          format: float
        createdAt:
          type: string
          format: date-time

Claude correctly maps the TypeScript union type to an OpenAPI enum, treats the optional couponCode as a non-required property, and adds appropriate response codes. The $ref usage is consistent throughout.
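
A sanity check worth running on any generated components section is that every name in a schema's required list actually appears in its properties; generators get this wrong when fields are renamed mid-session. A minimal, hypothetical Python sketch:

```python
def check_required_keys(schemas: dict) -> list[str]:
    """Return 'Schema.field' entries where a required field is
    missing from properties (sketch: flat schemas only)."""
    problems = []
    for name, schema in schemas.items():
        props = schema.get("properties", {})
        for field in schema.get("required", []):
            if field not in props:
                problems.append(f"{name}.{field}")
    return problems

schemas = {
    "OrderResponse": {
        "type": "object",
        "required": ["orderId", "status", "total", "createdAt"],
        "properties": {
            "orderId": {"type": "string"},
            "status": {"type": "string"},
            "total": {"type": "number"},
            # "createdAt" accidentally dropped during generation
        },
    }
}
print(check_required_keys(schemas))  # ['OrderResponse.createdAt']
```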

Comparison: Claude vs Copilot vs Speakeasy

Task                             | Claude       | GitHub Copilot | Speakeasy
Generate spec from existing code | Excellent    | Good           | Limited
Iterative inline spec writing    | Manual       | Excellent      | N/A
Spec validation                  | Via prompt   | No             | Excellent
SDK generation                   | No           | No             | Excellent
Handles complex schemas          | Excellent    | Inconsistent   | Reads existing specs
Cost                             | $20/mo (Pro) | $10/mo         | $200/mo (teams)

Practical Workflow: Claude + Speakeasy

The most effective workflow combines Claude for initial spec generation with Speakeasy for validation and SDK generation:

# Step 1: Use Claude to generate initial spec from your routes
# Paste your route files into Claude and ask for OpenAPI 3.1 YAML

# Step 2: Save as openapi.yaml, validate with Speakeasy
speakeasy validate --schema ./openapi.yaml

# Step 3: Fix any issues Claude missed (Speakeasy gives specific error messages)
# Re-run until clean

# Step 4: Generate SDKs
speakeasy generate sdk --schema ./openapi.yaml --lang typescript --out ./sdk/ts
speakeasy generate sdk --schema ./openapi.yaml --lang python --out ./sdk/python

This combination cuts spec generation time from days to hours for a mid-sized API.

Adding spec validation to CI ensures the spec stays accurate as your API evolves:

# .github/workflows/openapi-validate.yml
name: Validate OpenAPI Spec
on: [push, pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Speakeasy
        run: brew install speakeasy-api/homebrew-tap/speakeasy
      - name: Validate spec
        run: speakeasy validate --schema ./openapi.yaml

When a PR modifies routes without updating the spec, the validation job catches the drift. Pair this with a documentation build step that regenerates API reference docs from the validated spec.
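
The same drift check can be approximated locally before CI runs. This hypothetical Python sketch compares (method, path) pairs collected from your router against the paths in the parsed spec; how you enumerate the implemented routes is framework-specific and assumed here:

```python
def spec_drift(spec: dict, implemented: set) -> dict:
    """Diff (METHOD, path) pairs in the spec against the live router."""
    HTTP = {"get", "put", "post", "delete", "patch"}
    documented = {
        (method.upper(), path)
        for path, item in spec.get("paths", {}).items()
        for method in item
        if method in HTTP
    }
    return {
        "undocumented": sorted(implemented - documented),  # code, no spec
        "unimplemented": sorted(documented - implemented),  # spec, no code
    }

spec = {"paths": {"/orders": {"post": {}}, "/orders/{id}": {"get": {}}}}
routes = {("POST", "/orders"), ("GET", "/orders/{id}"), ("DELETE", "/orders/{id}")}
print(spec_drift(spec, routes))
# {'undocumented': [('DELETE', '/orders/{id}')], 'unimplemented': []}
```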

Go Gin Example With Claude

For Go APIs using the Gin framework, Claude needs explicit prompting to handle Go’s type system correctly:

// Feed this to Claude with: "Generate OpenAPI 3.1 spec, use Go types correctly"
type CreateOrderRequest struct {
    UserID     uuid.UUID   `json:"user_id" binding:"required"`
    Items      []OrderItem `json:"items" binding:"required,min=1"`
    CouponCode *string     `json:"coupon_code,omitempty"`
}

type OrderItem struct {
    ProductID uuid.UUID `json:"product_id" binding:"required"`
    Quantity  int       `json:"quantity" binding:"required,min=1,max=100"`
}

Claude correctly maps uuid.UUID to string with format: uuid, *string to a nullable string, and the min=1 binding to minItems: 1 on the array.

For Go APIs, also tell Claude to use binding:"required" as the source of truth for required fields in the schema. Claude reads the struct tags accurately but benefits from explicit guidance on which tags to prioritize for schema generation.
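
The tag-to-keyword mapping described above can be made explicit. This hypothetical Python sketch translates a Gin binding tag into JSON Schema constraint keywords; the partial mapping (required/min/max only) and the x-required marker, which a caller would hoist into the parent object's required list, are assumptions of the sketch:

```python
import re

def binding_to_schema(tag: str, is_array: bool = False) -> dict:
    """Translate a Gin binding tag like 'required,min=1,max=100'
    into JSON Schema constraint keywords (sketch, partial mapping)."""
    out = {}
    for part in tag.split(","):
        if part == "required":
            out["x-required"] = True  # caller hoists into parent `required`
        elif m := re.fullmatch(r"min=(\d+)", part):
            out["minItems" if is_array else "minimum"] = int(m.group(1))
        elif m := re.fullmatch(r"max=(\d+)", part):
            out["maxItems" if is_array else "maximum"] = int(m.group(1))
    return out

print(binding_to_schema("required,min=1,max=100"))
# {'x-required': True, 'minimum': 1, 'maximum': 100}
print(binding_to_schema("required,min=1", is_array=True))
# {'x-required': True, 'minItems': 1}
```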

Frequently Asked Questions

Are free AI tools good enough for generating OpenAPI specs?

For simple APIs (under 10 endpoints, flat schemas), free Claude or Copilot tiers are sufficient. For complex APIs with nested schemas, authentication flows, and multiple response types, the paid tiers give significantly better results. Speakeasy has no meaningful free tier for teams.

How do I evaluate which tool fits my workflow?

Run a practical test: take a real endpoint from your codebase and ask each tool to generate the spec. Compare accuracy against the actual request/response shapes. A real-world test on your own code gives better signal than benchmark comparisons.
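
A lightweight way to score that comparison is a shallow conformance check of a captured response payload against the generated schema. This hypothetical Python sketch checks types and required fields only, ignoring formats, enums, and $ref resolution:

```python
# Shallow payload-vs-schema conformance check (sketch only).
TYPE_MAP = {"string": str, "integer": int, "number": (int, float),
            "boolean": bool, "object": dict, "array": list}

def conforms(payload, schema) -> bool:
    """True if payload's types and required keys match the schema."""
    t = schema.get("type")
    if t and not isinstance(payload, TYPE_MAP[t]):
        return False
    if t == "object":
        for field in schema.get("required", []):
            if field not in payload:
                return False
        for field, sub in schema.get("properties", {}).items():
            if field in payload and not conforms(payload[field], sub):
                return False
    if t == "array":
        return all(conforms(v, schema.get("items", {})) for v in payload)
    return True

schema = {"type": "object", "required": ["orderId", "total"],
          "properties": {"orderId": {"type": "string"},
                         "total": {"type": "number"}}}
print(conforms({"orderId": "ORD-1", "total": 99.99}, schema))  # True
print(conforms({"orderId": 123, "total": 99.99}, schema))      # False
```

Run each tool's generated schema against a handful of real captured payloads; the tool whose schemas reject the fewest valid payloads is the one that understood your code.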

How quickly do AI tool recommendations go out of date?

AI tools evolve rapidly. The comparison above reflects March 2026 capabilities. Claude and Copilot have both improved significantly on code understanding in the past year. Verify current features before committing to a paid plan.