OpenClaw (formerly Moltbot, formerly Clawdbot) is an open-source personal AI assistant with 180k+ stars on GitHub. It runs on your own devices and connects to messaging channels you already use — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more. Using the Requesty integration, you can:
  • Access 300+ models from OpenAI, Anthropic, Google, Mistral, and many more providers
  • Use both the Anthropic Messages API and OpenAI Chat Completions API formats
  • Track and manage your spend in a single location
  • Set up fallback policies so your assistant keeps running when a provider goes down

How It Works

Prerequisites

  • OpenClaw installed and running (npm install -g openclaw)
  • A Requesty API key from the API Keys Page

Configuration

OpenClaw supports two API formats for connecting to Requesty. Choose the one that fits your use case.

Anthropic Messages API (anthropic-messages)

Use this format to access Claude models through Requesty’s Anthropic-compatible endpoint. This is the recommended approach if you primarily use Claude models.
1. Get your Requesty API key

Create an API key on the API Keys Page.
2. Edit your OpenClaw config

Open ~/.openclaw/openclaw.json and add the Requesty provider:
{
  "models": {
    "mode": "merge",
    "providers": {
      "requesty": {
        "baseUrl": "https://router.requesty.ai",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "api": "anthropic-messages",
        "models": [
          {
            "id": "anthropic/claude-sonnet-4-5",
            "name": "Claude Sonnet 4.5 (via Requesty)"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "requesty/anthropic/claude-sonnet-4-5"
      },
      "models": {
        "requesty/anthropic/claude-sonnet-4-5": {}
      }
    }
  }
}
3. Apply and start

openclaw gateway config.apply --file ~/.openclaw/openclaw.json
The base URL differs between the two API formats:
  • Anthropic Messages: https://router.requesty.ai (no /v1 suffix)
  • OpenAI Chat Completions: https://router.requesty.ai/v1 (with /v1 suffix)
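A quick way to confirm you have the right base URL is to hit the endpoint directly. The sketch below assumes the Anthropic-compatible router mirrors the standard Messages API (the /v1/messages path plus x-api-key and anthropic-version headers); if your key is rejected, try an Authorization: Bearer header as in the Troubleshooting curl example further down.
curl https://router.requesty.ai/v1/messages \
  -H "x-api-key: YOUR_REQUESTY_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4-5",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "hello"}]
  }'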

Onboarding Wizard

If you prefer a guided setup, use the OpenClaw onboarding wizard and select Custom Provider:
openclaw onboard
When prompted:
  1. Choose OpenAI-compatible or Anthropic-compatible depending on the API format you want
  2. Enter the base URL (https://router.requesty.ai/v1 for OpenAI, https://router.requesty.ai for Anthropic)
  3. Enter your Requesty API key
  4. Provide a model ID (e.g. openai/gpt-4o or anthropic/claude-sonnet-4-5)

Adding Multiple Models

You can configure multiple models from different providers — all through a single Requesty API key:
{
  "models": {
    "mode": "merge",
    "providers": {
      "requesty": {
        "baseUrl": "https://router.requesty.ai",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "api": "anthropic-messages",
        "models": [
          {
            "id": "anthropic/claude-sonnet-4-5",
            "name": "Claude Sonnet 4.5"
          },
          {
            "id": "anthropic/claude-opus-4-6",
            "name": "Claude Opus 4.6"
          },
          {
            "id": "bedrock/claude-sonnet-4-5",
            "name": "Claude Sonnet 4.5 (Bedrock)"
          }
        ]
      },
      "requesty-openai": {
        "baseUrl": "https://router.requesty.ai/v1",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "api": "openai-completions",
        "models": [
          {
            "id": "openai/gpt-4o",
            "name": "GPT-4o"
          },
          {
            "id": "google/gemini-2.5-pro",
            "name": "Gemini 2.5 Pro"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "requesty/anthropic/claude-sonnet-4-5",
        "fallbacks": [
          "requesty-openai/openai/gpt-4o"
        ]
      },
      "models": {
        "requesty/anthropic/claude-sonnet-4-5": { "alias": "sonnet" },
        "requesty/anthropic/claude-opus-4-6": { "alias": "opus" },
        "requesty/bedrock/claude-sonnet-4-5": { "alias": "sonnet-bedrock" },
        "requesty-openai/openai/gpt-4o": { "alias": "gpt4o" },
        "requesty-openai/google/gemini-2.5-pro": { "alias": "gemini" }
      }
    }
  }
}
Then switch models in chat with:
/model sonnet
/model opus
/model gpt4o
/model gemini

Model Selection

You can use any model from the Model Library. Model IDs follow the provider/model-name format:
  • Anthropic: anthropic/claude-sonnet-4-5
  • OpenAI: openai/gpt-4o
  • Google: google/gemini-2.5-pro
  • AWS Bedrock: bedrock/claude-opus-4-6
  • Mistral: mistral/mistral-large-latest
You can also use Fallback Policies by setting the model to policy/your-policy-name.
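For example, a policy slots into a provider's models array like any other model ID (a sketch; policy/your-policy-name and the display name are placeholders for a policy you have created in Requesty):
"models": [
  { "id": "policy/your-policy-name", "name": "My Fallback Policy" }
]
As with regular models, reference it elsewhere by its fully-qualified name, e.g. requesty/policy/your-policy-name.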

EU Region

For EU data residency, use the EU router endpoint:
  • Anthropic Messages: https://router.eu.requesty.ai
  • OpenAI Chat Completions: https://router.eu.requesty.ai/v1
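Only the baseUrl changes; everything else stays the same. A sketch of the earlier Anthropic Messages provider pointed at the EU router (models array omitted for brevity):
"requesty": {
  "baseUrl": "https://router.eu.requesty.ai",
  "apiKey": "YOUR_REQUESTY_API_KEY",
  "api": "anthropic-messages"
}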

Benefits of Using Requesty with OpenClaw

Access 300+ Models

Switch between models from different providers without changing your setup

Cost Management

Monitor spending and set limits across all your AI interactions

Fallback Policies

Automatic fallbacks keep your assistant running when a provider is unavailable

Smart Routing

Intelligent routing selects the best provider based on availability and latency

Troubleshooting

“model not allowed”

The model must be in both models.providers[].models[] and agents.defaults.models. Make sure the allowlist key uses the fully-qualified name (requesty/anthropic/claude-sonnet-4-5), not just the model ID.
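As a minimal sketch, here are both entries for a single model in the same openclaw.json (mirroring the config from earlier):
{
  "models": {
    "mode": "merge",
    "providers": {
      "requesty": {
        "baseUrl": "https://router.requesty.ai",
        "apiKey": "YOUR_REQUESTY_API_KEY",
        "api": "anthropic-messages",
        "models": [
          { "id": "anthropic/claude-sonnet-4-5", "name": "Claude Sonnet 4.5" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "requesty/anthropic/claude-sonnet-4-5": {}
      }
    }
  }
}
Note that the provider entry uses the bare model ID while the allowlist key prepends the provider name.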

Model doesn’t show in /models

Verify the model is listed in the models array of your provider definition. It’s common to add the allowlist entry but forget the provider model definition (or vice versa).

Connection errors

Test your Requesty API key directly with curl:
curl https://router.requesty.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "hello"}]
  }'
If this works but OpenClaw doesn’t, the issue is in your OpenClaw config — double-check baseUrl and apiKey.

Wrong model being called

The id field in your model definition must match exactly what Requesty expects. Check the Model Library for the correct model ID.
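If the router exposes the OpenAI-compatible model listing endpoint (an assumption; this guide does not confirm it), you can list the IDs your key can access:
curl https://router.requesty.ai/v1/models \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY"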

Resources