- Access 300+ models from OpenAI, Anthropic, Google, Mistral, and many more providers
- Use both the Anthropic Messages API and OpenAI Chat Completions API formats
- Track and manage your spend in a single location
- Set up fallback policies so your assistant never goes down
## How It Works

## Prerequisites

- OpenClaw installed and running (`npm install -g openclaw`)
- A Requesty API key from the API Keys Page
## Configuration

OpenClaw supports two API formats for connecting to Requesty. Choose the one that fits your use case:

- Anthropic Messages API
- OpenAI Chat Completions API
### Anthropic Messages API (`anthropic-messages`)

Use this format to access Claude models through Requesty’s Anthropic-compatible endpoint. This is the recommended approach if you primarily use Claude models.

Get your Requesty API key: create an API key on the API Keys Page.
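As a sketch, a Requesty provider entry using the Anthropic Messages format might look like the following. The field names (`name`, `api`, `baseUrl`, `apiKey`, `models`, `id`) are assumptions inferred from the troubleshooting section below; check your OpenClaw configuration reference for the exact schema:

```json
{
  "models": {
    "providers": [
      {
        "name": "requesty",
        "api": "anthropic-messages",
        "baseUrl": "https://router.requesty.ai",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "models": [{ "id": "anthropic/claude-sonnet-4-5" }]
      }
    ]
  }
}
```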
The base URL differs between the two API formats:

- Anthropic Messages: `https://router.requesty.ai` (no `/v1` suffix)
- OpenAI Chat Completions: `https://router.requesty.ai/v1` (with `/v1` suffix)
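A provider entry for the OpenAI Chat Completions format would use the `/v1` base URL. This is a hedged sketch: the field names and the `api` identifier are assumptions, not a confirmed OpenClaw schema:

```json
{
  "models": {
    "providers": [
      {
        "name": "requesty",
        "api": "openai-chat-completions",
        "baseUrl": "https://router.requesty.ai/v1",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "models": [{ "id": "openai/gpt-4o" }]
      }
    ]
  }
}
```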
## Onboarding Wizard

If you prefer a guided setup, use the OpenClaw onboarding wizard and select Custom Provider:

- Choose OpenAI-compatible or Anthropic-compatible depending on the API format you want
- Enter the base URL (`https://router.requesty.ai/v1` for OpenAI, `https://router.requesty.ai` for Anthropic)
- Enter your Requesty API key
- Provide a model ID (e.g. `openai/gpt-4o` or `anthropic/claude-sonnet-4-5`)
## Adding Multiple Models

You can configure multiple models from different providers, all through a single Requesty API key.

## Model Selection

You can use any model from the Model Library. Model IDs follow the `provider/model-name` format:
| Provider | Example Model ID |
|---|---|
| Anthropic | anthropic/claude-sonnet-4-5 |
| OpenAI | openai/gpt-4o |
| Google | google/gemini-2.5-pro |
| AWS Bedrock | bedrock/claude-opus-4-6 |
| Mistral | mistral/mistral-large-latest |
Fallback policies can also be used in place of a model ID, using the format `policy/your-policy-name`.
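Several model IDs from the table above can share one provider entry and one Requesty API key. A sketch, with field names inferred from the troubleshooting section rather than a confirmed schema:

```json
{
  "models": {
    "providers": [
      {
        "name": "requesty",
        "baseUrl": "https://router.requesty.ai/v1",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "models": [
          { "id": "anthropic/claude-sonnet-4-5" },
          { "id": "openai/gpt-4o" },
          { "id": "google/gemini-2.5-pro" }
        ]
      }
    ]
  }
}
```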
## EU Region

For EU data residency, use the EU router endpoint:

- Anthropic Messages: `https://router.eu.requesty.ai`
- OpenAI Chat Completions: `https://router.eu.requesty.ai/v1`
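Switching regions should then only require pointing the provider's base URL at the EU endpoint; assuming the `baseUrl` field name from the troubleshooting section:

```json
{ "baseUrl": "https://router.eu.requesty.ai/v1" }
```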
## Benefits of Using Requesty with OpenClaw

- **Access 300+ Models**: Switch between models from different providers without changing your setup
- **Cost Management**: Monitor spending and set limits across all your AI interactions
- **Fallback Policies**: Automatic fallbacks ensure your assistant never goes down
- **Smart Routing**: Intelligent routing selects the best provider based on availability and latency
## Troubleshooting

### "model not allowed"

The model must be in both `models.providers[].models[]` and `agents.defaults.models`. Make sure the allowlist key uses the fully-qualified name (`requesty/anthropic/claude-sonnet-4-5`), not just the model ID.
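Both halves of that requirement sketched in one config (the structure is assumed from the paths named above; verify it against your actual config file):

```json
{
  "models": {
    "providers": [
      {
        "name": "requesty",
        "models": [{ "id": "anthropic/claude-sonnet-4-5" }]
      }
    ]
  },
  "agents": {
    "defaults": {
      "models": ["requesty/anthropic/claude-sonnet-4-5"]
    }
  }
}
```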
### Model doesn’t show in `/models`

Verify the model is listed in the `models` array of your provider definition. It’s common to add the allowlist entry but forget the provider model definition (or vice versa).
### Connection errors

Test your Requesty API key directly with curl. If the key works outside OpenClaw, double-check the `baseUrl` and `apiKey` in your provider configuration.
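A minimal check against the OpenAI-compatible endpoint might look like this (the `/chat/completions` path follows the standard OpenAI convention and is an assumption here; adjust if Requesty’s router differs):

```shell
# Send one small request through the router using your key.
curl https://router.requesty.ai/v1/chat/completions \
  -H "Authorization: Bearer $REQUESTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-4o",
        "messages": [{ "role": "user", "content": "ping" }]
      }'
```

A successful JSON response means the key and endpoint are fine; an authentication error typically points at the key, and a not-found error at the base URL.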
### Wrong model being called

The `id` field in your model definition must match exactly what Requesty expects. Check the Model Library for the correct model ID.