Documentation Index
Fetch the complete documentation index at: https://docs.requesty.ai/llms.txt
Use this file to discover all available pages before exploring further.
Requesty supports sending PDF documents to AI models for analysis, summarization, and question answering. This feature works with both the Chat Completions and Messages API endpoints.
Supported Models
OpenAI Models
For OpenAI models to support PDF documents, you must use the openai-responses/ prefix instead of the standard openai/ prefix.
- ✅ Supports PDFs: openai-responses/gpt-4.1, openai-responses/gpt-4o, etc.
- ❌ Does NOT support PDFs: openai/gpt-4.1, openai/gpt-4o, etc.
The openai-responses/ prefix enables extended content type support, including PDFs, by using OpenAI’s responses API which handles additional file formats.
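As a quick illustration, the prefix swap can be done mechanically. The helper below is hypothetical (not part of any Requesty SDK); it rewrites a standard openai/ model id to its PDF-capable twin and leaves other providers' ids untouched:

```python
def pdf_capable_model(model: str) -> str:
    """Rewrite an openai/ model id to use the openai-responses/ prefix,
    which is required for PDF input. Other providers pass through unchanged."""
    prefix = "openai/"
    if model.startswith(prefix):
        return "openai-responses/" + model[len(prefix):]
    return model

print(pdf_capable_model("openai/gpt-4.1"))  # openai-responses/gpt-4.1
print(pdf_capable_model("anthropic/claude-sonnet-4-20250514"))  # unchanged
```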
Other Providers
Most other model providers (like Anthropic, Google, etc.) support PDFs using their standard prefix format.
How It Works
PDF documents are sent as part of the message content using either base64 encoding or a URL. The AI model can then analyze the document and respond to questions about its contents.
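For the base64 route, the raw file bytes are wrapped in a data URL before being placed in the message content. A minimal sketch of that encoding step (the helper name is ours, not part of the API):

```python
import base64

def pdf_to_data_url(pdf_bytes: bytes) -> str:
    """Base64-encode raw PDF bytes into a data URL for the file_data field."""
    encoded = base64.b64encode(pdf_bytes).decode("utf-8")
    return f"data:application/pdf;base64,{encoded}"

# Every PDF file starts with the "%PDF-" magic bytes; a stub stands in
# for a real document here.
print(pdf_to_data_url(b"%PDF-1.4")[:28])  # data:application/pdf;base64,
```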
Chat Completions API
Send PDFs using the input_file content type. You can provide the PDF as either base64-encoded data or a URL.
Using Base64-Encoded PDF
curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Summarize this PDF"
          },
          {
            "type": "input_file",
            "filename": "document.pdf",
            "file_data": "data:application/pdf;base64,JVBERi0="
          }
        ]
      }
    ]
  }'
Using PDF URL
curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Summarize this PDF"
          },
          {
            "type": "input_file",
            "filename": "document.pdf",
            "mime_type": "application/pdf",
            "file_url": "https://example.com/document.pdf"
          }
        ]
      }
    ]
  }'
Parameters
- type: Must be "input_file"
- filename: The name of the PDF file (e.g., "document.pdf")
- mime_type: The MIME type of the file (e.g., "application/pdf"). Required by some providers, such as Vertex AI, when using file URLs
- file_data: Base64-encoded PDF content, supplied as a data URL
- file_url: Publicly accessible URL of the PDF file
Vertex AI: When using file URLs with Vertex AI, you must specify the mime_type field in the request. See Image Understanding for similar requirements with image MIME types.
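To make the Vertex AI requirement concrete, here is a sketch of a small content-part builder (a hypothetical helper, the name is ours) that always includes mime_type when referencing a PDF by URL, so requests stay valid across providers:

```python
def input_file_part(file_url: str, filename: str,
                    mime_type: str = "application/pdf") -> dict:
    """Build an input_file content part that references a PDF by URL.
    mime_type is optional for most providers but required by Vertex AI."""
    return {
        "type": "input_file",
        "filename": filename,
        "mime_type": mime_type,
        "file_url": file_url,
    }

part = input_file_part("https://example.com/document.pdf", "document.pdf")
print(part["mime_type"])  # application/pdf
```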
See the Chat Completions API documentation for more details.
Messages API
Send PDFs using the document content type:
curl https://router.requesty.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this PDF?"
          },
          {
            "type": "document",
            "source": {
              "type": "base64",
              "media_type": "application/pdf",
              "data": "JVBERi0=..."
            }
          }
        ]
      }
    ]
  }'
Parameters
- type: Must be "document"
- source.type: Must be "base64"
- source.media_type: Must be "application/pdf"
- source.data: Base64-encoded PDF content
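The four fields above can be assembled from raw bytes in one place. A minimal sketch (the helper name is ours); note that unlike the Chat Completions data URL, the Messages API takes the bare base64 string:

```python
import base64

def document_part(pdf_bytes: bytes) -> dict:
    """Build a Messages-API document content block from raw PDF bytes.
    The data field is plain base64, with no data-URL prefix."""
    return {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": "application/pdf",
            "data": base64.b64encode(pdf_bytes).decode("utf-8"),
        },
    }

block = document_part(b"%PDF-1.4")
print(block["source"]["media_type"])  # application/pdf
```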
See the Messages API documentation for more details.
Working with PDFs
Python Example (Chat Completions)
import base64
from openai import OpenAI

requesty_api_key = "YOUR_REQUESTY_API_KEY"

client = OpenAI(
    api_key=requesty_api_key,
    base_url="https://router.requesty.ai/v1",
)

# Option 1: Using base64-encoded PDF from a file
with open("document.pdf", "rb") as pdf_file:
    pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Summarize this PDF"
                },
                {
                    "type": "input_file",
                    "filename": "document.pdf",
                    "file_data": f"data:application/pdf;base64,{pdf_data}"
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)

# Option 2: Using PDF URL
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Summarize this PDF"
                },
                {
                    "type": "input_file",
                    "filename": "document.pdf",
                    "file_url": "https://example.com/document.pdf"
                }
            ]
        }
    ]
)
print(response.choices[0].message.content)
Python Example (Messages API)
import base64
from anthropic import Anthropic

requesty_api_key = "YOUR_REQUESTY_API_KEY"

client = Anthropic(
    api_key=requesty_api_key,
    base_url="https://router.requesty.ai/v1",
)

# Read and encode PDF
with open("document.pdf", "rb") as pdf_file:
    pdf_data = base64.b64encode(pdf_file.read()).decode("utf-8")

response = client.messages.create(
    model="anthropic/claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this PDF?"
                },
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data
                    }
                }
            ]
        }
    ]
)
print(response.content[0].text)
JavaScript/TypeScript Example (Chat Completions)
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

// Option 1: Using base64-encoded PDF from a file
const pdfBuffer = fs.readFileSync('document.pdf');
const pdfData = pdfBuffer.toString('base64');

const response = await client.chat.completions.create({
  model: 'anthropic/claude-sonnet-4-20250514',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Summarize this PDF'
        },
        {
          type: 'input_file',
          filename: 'document.pdf',
          file_data: `data:application/pdf;base64,${pdfData}`
        }
      ]
    }
  ]
});
console.log(response.choices[0].message.content);

// Option 2: Using PDF URL
const urlResponse = await client.chat.completions.create({
  model: 'anthropic/claude-sonnet-4-20250514',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Summarize this PDF'
        },
        {
          type: 'input_file',
          filename: 'document.pdf',
          file_url: 'https://example.com/document.pdf'
        }
      ]
    }
  ]
});
console.log(urlResponse.choices[0].message.content);