Building an application with Python Requests, or any other REST API client?
Using Requesty with Python Requests is straightforward: point your HTTP requests at the Requesty router endpoint.
This approach gives you maximum flexibility while still accessing all of Requesty’s powerful features.
This simple integration unlocks Requesty's powerful features while you keep full control over your HTTP requests.
With Requesty, you can access 250+ models from various providers. To specify a model, you must include the provider prefix, like openai/gpt-4o-mini or anthropic/claude-sonnet-4-20250514.
You can find the full list of available models in the Model Library.
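As a small sketch of the prefix convention, a model ID can be split into its provider and model parts with a simple partition (the IDs below are the ones from the examples above):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a Requesty model ID of the form "<provider>/<model>"."""
    provider, _, model = model_id.partition("/")
    return provider, model

print(split_model_id("openai/gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
print(split_model_id("anthropic/claude-sonnet-4-20250514"))
```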
Basic Usage
Here’s how to make a simple chat completion request using Python Requests:
import requests
import os

def chat_completion():
    # Safely load your API key from environment variables
    REQUESTY_API_KEY = os.environ.get("REQUESTY_API_KEY")
    if not REQUESTY_API_KEY:
        print("Error: REQUESTY_API_KEY environment variable not set.")
        return

    try:
        response = requests.post(
            'https://router.requesty.ai/v1/chat/completions',
            headers={
                'Authorization': f'Bearer {REQUESTY_API_KEY}',
                'Content-Type': 'application/json'
            },
            json={
                'model': "openai/gpt-4o",
                'messages': [
                    {'role': "user", 'content': "Hello, world!"}
                ]
            }
        )
        response.raise_for_status()  # Raise an HTTPError for bad responses (4xx or 5xx)
        print(response.json()['choices'][0]['message']['content'])
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")

chat_completion()
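Because the router exposes an OpenAI-compatible chat completions API, common request parameters such as temperature and max_tokens go into the same JSON payload. A minimal sketch of building that payload (the helper name is ours, and parameter support may vary by model):

```python
def build_chat_payload(model, messages, temperature=None, max_tokens=None, stream=False):
    """Assemble a chat completions payload, omitting unset optional fields."""
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if stream:
        payload["stream"] = True
    return payload

payload = build_chat_payload(
    "openai/gpt-4o",
    [{"role": "user", "content": "Hello, world!"}],
    temperature=0.7,
)
# payload can then be passed as the json= argument to requests.post(...)
```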
Streaming Responses
For streaming responses, you can use Server-Sent Events:
import requests
import os
import json

def streaming_chat():
    REQUESTY_API_KEY = os.environ.get("REQUESTY_API_KEY")
    if not REQUESTY_API_KEY:
        print("Error: REQUESTY_API_KEY environment variable not set.")
        return

    try:
        response = requests.post(
            'https://router.requesty.ai/v1/chat/completions',
            headers={
                'Authorization': f'Bearer {REQUESTY_API_KEY}',
                'Content-Type': 'application/json'
            },
            json={
                'model': "openai/gpt-4o",
                'messages': [
                    {'role': "user", 'content': "Write a short story about AI"}
                ],
                'stream': True
            },
            stream=True  # Important for streaming
        )
        response.raise_for_status()

        for line in response.iter_lines():
            decoded_line = line.decode('utf-8')
            trimmed_line = decoded_line.strip()
            if not trimmed_line.startswith('data:'):
                continue
            data = trimmed_line[len('data:'):].strip()
            if data == '[DONE]':
                print('\nStream completed')
                break
            try:
                parsed = json.loads(data)
                content = parsed.get('choices', [{}])[0].get('delta', {}).get('content')
                if content:
                    print(content, end='')
            except json.JSONDecodeError:
                # Skip invalid JSON lines
                pass
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")

streaming_chat()
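The SSE parsing inside the loop above can be factored into a small helper that works on raw lines, which makes it easy to test without a live connection. A sketch (the function name is ours; the chunk shape mirrors the OpenAI-style delta format used above):

```python
import json

def extract_content(raw_line: bytes):
    """Return the text delta carried by one SSE line, or None."""
    trimmed = raw_line.decode("utf-8").strip()
    if not trimmed.startswith("data:"):
        return None  # blank keep-alive lines, comments, etc.
    data = trimmed[len("data:"):].strip()
    if data == "[DONE]":
        return None  # end-of-stream sentinel
    try:
        parsed = json.loads(data)
    except json.JSONDecodeError:
        return None  # skip invalid JSON lines
    return parsed.get("choices", [{}])[0].get("delta", {}).get("content")

chunk = b'data: {"choices": [{"delta": {"content": "Hi"}}]}'
print(extract_content(chunk))  # Hi
```

In the streaming loop, each line from response.iter_lines() would be passed to this helper, printing whatever content it returns.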