The Requesty router supports structured JSON outputs across model providers, making it easy to get consistent, parseable responses from different LLMs.

JSON Object Format

For all models, you can request responses in JSON format by specifying response_format={"type": "json_object"}:

import os
from openai import OpenAI
from pydantic import BaseModel
from typing import List
from dotenv import load_dotenv

# Load API key from environment variables
load_dotenv()
ROUTER_API_KEY = os.getenv("ROUTER_API_KEY")

# Define your data model
class Entities(BaseModel):
    attributes: List[str]
    colors: List[str]
    animals: List[str]

# Initialize OpenAI client with Requesty router
client = OpenAI(
    api_key=ROUTER_API_KEY,
    base_url="https://router.requesty.ai/v1",
    default_headers={"Authorization": f"Bearer {ROUTER_API_KEY}"}
)

# Request a JSON response
response = client.chat.completions.create(
    model="openai/gpt-4o",  # Works with any supported model
    messages=[
        {
            "role": "system",
            "content": "Extract entities from the input text and return them in JSON format with the following structure: {\"attributes\": [...], \"colors\": [...], \"animals\": [...]}"
        },
        {
            "role": "user",
            "content": "The quick brown fox jumps over the lazy dog with piercing blue eyes",
        },
    ],
    response_format={"type": "json_object"}
)

# Parse with Pydantic
content = response.choices[0].message.content
extracted = Entities.model_validate_json(content)

print(f"Attributes: {extracted.attributes}")
print(f"Colors: {extracted.colors}")
print(f"Animals: {extracted.animals}")

JSON Schema (For OpenAI and Anthropic Models)

For models that support JSON schema (currently OpenAI and Anthropic models), you can pass a Pydantic model directly to the more powerful parse helper, which converts it to a JSON schema and validates the response for you:

from openai import OpenAI
from pydantic import BaseModel
from typing import List

class Animals(BaseModel):
    animals: List[str]

client = OpenAI(
    api_key=ROUTER_API_KEY,
    base_url="https://router.requesty.ai/v1",
    default_headers={"Authorization": f"Bearer {ROUTER_API_KEY}"}
)

# Use the parse helper with a Pydantic model
response = client.beta.chat.completions.parse(
    model="anthropic/claude-3-7-sonnet-latest",
    messages=[
        {
            "role": "system",
            "content": "Extract the animals from the input text"
        },
        {
            "role": "user",
            "content": "The quick brown fox jumps over the lazy dog"
        },
    ],
    response_format=Animals,
)

# The parse helper validates the response for you, so the parsed model
# is available directly on the message — no manual re-validation needed
animals = response.choices[0].message.parsed
print(f"Found animals: {animals.animals}")  # ['fox', 'dog']

Compatibility Notes

  • JSON object format works with all models supported by Requesty
  • JSON schema is available for OpenAI and Anthropic models
  • Some models may have different capabilities for complex structured outputs
  • Stream mode can also work with structured outputs (see streaming documentation)
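As the streaming note above suggests, a partial JSON payload cannot be validated mid-stream, so the usual pattern is to accumulate content deltas and parse once the stream ends. A minimal sketch, assuming standard OpenAI-style streaming chunks (the simulated `chunks` list below is illustrative, not real router output):

```python
from typing import Iterable, List
from pydantic import BaseModel

class Entities(BaseModel):
    attributes: List[str]
    colors: List[str]
    animals: List[str]

def collect_json_stream(deltas: Iterable[str]) -> Entities:
    """Accumulate streamed content deltas, then validate the full JSON once."""
    buffer = "".join(delta for delta in deltas if delta)
    return Entities.model_validate_json(buffer)

# With a live client, the deltas would come from a streaming call, e.g.:
# stream = client.chat.completions.create(..., stream=True,
#     response_format={"type": "json_object"})
# entities = collect_json_stream(
#     chunk.choices[0].delta.content or "" for chunk in stream)

# Simulated deltas illustrate the accumulation step:
chunks = ['{"attributes": ["quick"], ', '"colors": ["brown"], ',
          '"animals": ["fox", "dog"]}']
print(collect_json_stream(chunks))
```

See the streaming documentation for the full set of streaming options.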

Error Handling

When working with structured outputs, it’s important to handle potential parsing errors:

from pydantic import ValidationError

try:
    extracted = Entities.model_validate_json(content)
    # Process the data
except ValidationError as e:
    print(f"Error parsing response: {e}")
    # Handle the error appropriately (retry, fall back, or surface the failure)
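A common recovery strategy is to retry the request a bounded number of times when validation fails. A minimal sketch, where `request_fn` is a hypothetical stand-in for your own wrapper around `client.chat.completions.create` that returns the raw response content:

```python
from typing import Callable, List
from pydantic import BaseModel, ValidationError

class Entities(BaseModel):
    attributes: List[str]
    colors: List[str]
    animals: List[str]

def parse_with_retry(request_fn: Callable[[], str],
                     max_attempts: int = 3) -> Entities:
    """Call request_fn (which returns raw JSON content) until it validates."""
    last_error: ValidationError | None = None
    for _ in range(max_attempts):
        try:
            return Entities.model_validate_json(request_fn())
        except ValidationError as e:
            # A real implementation might log here, or feed the error
            # back into the prompt before retrying
            last_error = e
    raise last_error
```

Because Pydantic v2 wraps both invalid JSON and schema mismatches in ValidationError, a single except clause covers both failure modes.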