POST /v1/messages
Create message
curl --request POST \
  --url https://router.requesty.ai/v1/messages \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <x-api-key>' \
  --data '{
  "model": "anthropic/claude-sonnet-4-20250514",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "system": "<string>",
  "temperature": 1,
  "top_p": 0.5,
  "top_k": 1,
  "stream": true,
  "stop_sequences": [
    "<string>"
  ],
  "tools": [
    {
      "name": "<string>",
      "description": "<string>",
      "input_schema": {}
    }
  ],
  "tool_choice": "auto"
}'
{
  "id": "<string>",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "<string>"
    }
  ],
  "model": "<string>",
  "stop_reason": "end_turn",
  "stop_sequence": "<string>",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123
  }
}
Send a message to an Anthropic-compatible model and receive a response. This endpoint follows the Anthropic Messages API format and supports all Anthropic models as well as compatible models from other providers through Requesty’s routing.

Base URL

https://router.requesty.ai/v1/messages

Authentication

Include your Requesty API key in the request headers using Anthropic’s standard format:
x-api-key: YOUR_REQUESTY_API_KEY

Headers

  • x-api-key (required): Your Requesty API key (Anthropic format)
  • Content-Type (required): Must be application/json
  • anthropic-version (optional): API version (defaults to 2023-06-01)

Example Request

curl https://router.requesty.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_REQUESTY_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello, Claude!"
      }
    ]
  }'

Model Selection

You can use any model available in the Model Library. Examples:
  • Anthropic Models: anthropic/claude-sonnet-4-20250514, anthropic/claude-3-7-sonnet
  • OpenAI Models: openai/gpt-4o, openai/gpt-4o-mini
  • Google Models: google/gemini-2.0-flash-exp
  • Other Providers: mistral/mistral-large-2411, meta/llama-3.3-70b-instruct
While this endpoint uses the Anthropic Messages format, Requesty automatically handles format conversion for non-Anthropic models, so you can use any supported model with this endpoint.
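
For example, the same request shape works with a non-Anthropic model by changing only the model field (model name taken from the list above):
{
	"model": "openai/gpt-4o",
	"max_tokens": 1024,
	"messages": [
		{
			"role": "user",
			"content": "Hello!"
		}
	]
}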

Streaming

Enable streaming responses by setting stream: true:
{
	"model": "anthropic/claude-sonnet-4-20250514",
	"max_tokens": 1024,
	"stream": true,
	"messages": [
		{
			"role": "user",
			"content": "Write a short story"
		}
	]
}
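
When streaming is enabled, the response is delivered as server-sent events using Anthropic's streaming event types rather than a single JSON object. An abbreviated illustration of the event sequence (payloads trimmed; exact fields vary by model and provider):
event: message_start
data: {"type":"message_start","message":{"id":"msg_01ABC123","type":"message","role":"assistant","content":[],"model":"anthropic/claude-sonnet-4-20250514","usage":{"input_tokens":12,"output_tokens":1}}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Once upon a time"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"output_tokens":18}}

event: message_stop
data: {"type":"message_stop"}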

Vision Support

Send images using the content blocks format:
{
	"model": "anthropic/claude-sonnet-4-20250514",
	"max_tokens": 1024,
	"messages": [
		{
			"role": "user",
			"content": [
				{
					"type": "text",
					"text": "What do you see in this image?"
				},
				{
					"type": "image",
					"source": {
						"type": "base64",
						"media_type": "image/jpeg",
						"data": "/9j/4AAQSkZJRgABAQAAAQABAAD..."
					}
				}
			]
		}
	]
}
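
The data field holds the raw base64-encoded image bytes, with no data: URI prefix. One way to produce the value from the shell, assuming a local file named photo.jpg:
base64 photo.jpg | tr -d '\n'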

Tool Use

Define tools that the model can call:
{
	"model": "anthropic/claude-sonnet-4-20250514",
	"max_tokens": 1024,
	"tools": [
		{
			"name": "get_weather",
			"description": "Get the current weather in a given location",
			"input_schema": {
				"type": "object",
				"properties": {
					"location": {
						"type": "string",
						"description": "The city and state, e.g. San Francisco, CA"
					}
				},
				"required": ["location"]
			}
		}
	],
	"messages": [
		{
			"role": "user",
			"content": "What's the weather like in New York?"
		}
	]
}
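
If the model decides to use the tool, the response contains a tool_use content block and stop_reason is set to tool_use (the id below is illustrative):
{
	"role": "assistant",
	"content": [
		{
			"type": "tool_use",
			"id": "toolu_01ABC123",
			"name": "get_weather",
			"input": {
				"location": "New York, NY"
			}
		}
	],
	"stop_reason": "tool_use"
}
You then run the tool yourself and send its output back as a tool_result block in the next user message:
{
	"role": "user",
	"content": [
		{
			"type": "tool_result",
			"tool_use_id": "toolu_01ABC123",
			"content": "72°F, partly cloudy"
		}
	]
}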

System Prompts

Include system instructions using the system parameter:
{
	"model": "anthropic/claude-sonnet-4-20250514",
	"max_tokens": 1024,
	"system": "You are a helpful assistant that always responds in a friendly, professional manner.",
	"messages": [
		{
			"role": "user",
			"content": "Hello!"
		}
	]
}

Error Handling

The API returns standard HTTP status codes:
  • 200 - Success
  • 400 - Bad Request (invalid parameters)
  • 401 - Unauthorized (invalid API key)
  • 403 - Forbidden (insufficient permissions)
  • 429 - Rate Limited
  • 500 - Internal Server Error
Example error response:
{
	"error": {
		"type": "invalid_request_error",
		"message": "max_tokens is required"
	}
}
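
429 and 5xx errors are usually transient; retrying after a short delay typically succeeds. For example, a reasonably recent curl can retry these status codes automatically (the flags shown are a suggestion, not a Requesty requirement):
curl https://router.requesty.ai/v1/messages \
  --retry 3 \
  --retry-delay 2 \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'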

Response Format

Successful responses follow the Anthropic Messages format:
{
	"id": "msg_01ABC123",
	"type": "message",
	"role": "assistant",
	"content": [
		{
			"type": "text",
			"text": "Hello! I'm Claude, an AI assistant. How can I help you today?"
		}
	],
	"model": "anthropic/claude-sonnet-4-20250514",
	"stop_reason": "end_turn",
	"usage": {
		"input_tokens": 12,
		"output_tokens": 18
	}
}

Key Differences from OpenAI Chat Completions

  • Authentication: Uses x-api-key header instead of Authorization: Bearer
  • Required max_tokens: Unlike OpenAI’s API, the max_tokens parameter is required
  • Content Blocks: Messages use content blocks for rich content (text, images, tool calls)
  • System Parameter: System prompts are specified as a separate system parameter, not as a message
  • Role Restrictions: Only user and assistant roles are supported in messages (no system role)
For the most seamless experience with Anthropic models, use this endpoint. For broader compatibility across all providers, consider using the Chat Completions endpoint instead.

Headers

  • x-api-key (string, required): Your Requesty API key
  • anthropic-version (string, default "2023-06-01"): The version of the Anthropic API to use

Body

application/json

Response

200 (application/json): Message response. The response is of type object.