Image Generation
Requesty supports image generation through two different endpoints: the dedicated Images API (/v1/images/generations) for standard image generation workflows, and the Chat Completions API (/v1/chat/completions) for models that return images alongside text.
Images API (/v1/images/generations)
The dedicated images endpoint follows the OpenAI Images API format and is the recommended way to generate images with supported models.
```bash
curl https://router.requesty.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "azure/openai/gpt-image-1",
    "prompt": "A sunset over mountains with vibrant orange and purple skies",
    "n": 1,
    "size": "1024x1024",
    "quality": "auto"
  }'
```
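The same request can be made with the OpenAI Python SDK pointed at the Requesty router; a minimal sketch mirroring the curl example above:

```python
from openai import OpenAI

# Point the standard OpenAI SDK at the Requesty router.
client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.images.generate(
    model="azure/openai/gpt-image-1",
    prompt="A sunset over mountains with vibrant orange and purple skies",
    n=1,
    size="1024x1024",
    quality="auto",
)

# With the default response_format of url, each entry carries a hosted URL.
print(response.data[0].url)
```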
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The model to use for image generation |
| prompt | string | Yes | A text description of the desired image |
| n | integer | No | Number of images to generate (default: 1) |
| size | string | No | Image dimensions (e.g., 1024x1024, 1536x1024, 1024x1536) |
| quality | string | No | Image quality (auto, high, medium, low) |
| response_format | string | No | Output delivery format: url or b64_json (default: url) |
| background | string | No | Background type: auto, transparent, or opaque |
| output_format | string | No | File format: png, jpeg, or webp |
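As an illustration, a request exercising the optional parameters might look like the sketch below, reusing the client from the example above. Whether background and output_format are honored depends on the model, and older OpenAI SDK versions may require passing them via extra_body rather than as keyword arguments:

```python
response = client.images.generate(
    model="azure/openai/gpt-image-1",
    prompt="A paper-plane icon in a minimal flat style",
    background="transparent",  # transparent backgrounds need a format with alpha, e.g. png
    output_format="png",
    quality="high",
)
```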
The response contains a data array with the generated images:
```json
{
  "created": 1719000000,
  "data": [
    {
      "url": "https://..."
    }
  ]
}
```
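With the default url format, the image can be fetched with any HTTP client; a minimal sketch using the Python standard library, where `parsed` stands in for the parsed JSON body shown above:

```python
import urllib.request

# 'parsed' is assumed to be the parsed JSON response shown above.
url = parsed["data"][0]["url"]
urllib.request.urlretrieve(url, "generated_image.png")
```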
When response_format is set to b64_json:
```json
{
  "created": 1719000000,
  "data": [
    {
      "b64_json": "/9j/4AAQSkZJRgABAQ..."
    }
  ]
}
```
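The base64 payload can be decoded and written straight to disk (again, `parsed` stands in for the parsed JSON body):

```python
import base64

# 'parsed' is assumed to be the parsed JSON response shown above.
image_bytes = base64.b64decode(parsed["data"][0]["b64_json"])
with open("generated_image.png", "wb") as f:
    f.write(image_bytes)
```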
Supported Models
| Model | Description |
|---|---|
| azure/openai/gpt-image-1 | OpenAI’s GPT Image 1 model via Azure |
| azure/openai/gpt-image-1.5 | OpenAI’s GPT Image 1.5 model via Azure |
Chat Completions API (/v1/chat/completions)
Some image generation models use the standard chat completions endpoint and return generated images alongside text responses.
```bash
curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "vertex/google/gemini-2.5-flash-image-preview",
    "messages": [
      {
        "role": "user",
        "content": "Generate an image of a sunset over mountains"
      }
    ]
  }'
```
The response includes both the standard text content and an array of generated images:
```json
{
  "model": "vertex/google/gemini-2.5-flash-image-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I've generated an image of a sunset over mountains as requested.",
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,your_base64_image"
            }
          }
        ]
      }
    }
  ]
}
```
Python Example
```python
import base64
from io import BytesIO

from PIL import Image
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="vertex/google/gemini-2.5-flash-image-preview",
    messages=[
        {
            "role": "user",
            "content": "Generate a futuristic cityscape at night"
        }
    ]
)

# Extract the generated images
message = response.choices[0].message
if hasattr(message, 'images') and message.images:
    for i, image_data in enumerate(message.images):
        # Extract base64 data from the data URL
        # Format: "data:image/png;base64,actual_base64_data"
        base64_str = image_data['image_url']['url'].split(',')[1]
        image_bytes = base64.b64decode(base64_str)

        # Open with PIL
        image = Image.open(BytesIO(image_bytes))

        # Save the image
        image.save(f'generated_image_{i}.png')
        print(f"Image saved as generated_image_{i}.png")

# Access the text response
print(message.content)
```
JavaScript/TypeScript Example
```typescript
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

async function generateImage() {
  const response = await client.chat.completions.create({
    model: 'vertex/google/gemini-2.5-flash-image-preview',
    messages: [
      {
        role: 'user',
        content: 'Generate a serene landscape with a lake'
      }
    ]
  });

  const message = response.choices[0].message;

  // Handle generated images
  if (message.images && message.images.length > 0) {
    message.images.forEach((imageData, index) => {
      // Extract base64 data from the data URL
      // Format: "data:image/png;base64,actual_base64_data"
      const base64Data = imageData.image_url.url.split(',')[1];
      const imageBuffer = Buffer.from(base64Data, 'base64');

      // Save to file
      fs.writeFileSync(`generated_image_${index}.png`, imageBuffer);
      console.log(`Image saved as generated_image_${index}.png`);
    });
  }

  // Access the text response
  console.log(message.content);
}

generateImage();
```
Supported Models
| Model | Description |
|---|---|
| vertex/google/gemini-2.5-flash-image-preview | Google Gemini image generation via Vertex AI |
Choosing an Endpoint
| Feature | Images API | Chat Completions API |
|---|---|---|
| Endpoint | /v1/images/generations | /v1/chat/completions |
| OpenAI SDK support | client.images.generate() | client.chat.completions.create() |
| Text + image response | No | Yes |
| Conversational context | No | Yes |
| Background control | Yes | No |
| Output format control | Yes (png, jpeg, webp) | No |
Image generation models may be priced differently than text models; check the model library for specific pricing information.
Limitations
- Image size and resolution depend on the specific model capabilities
- Some models may have content filtering or safety restrictions
- Response size limits apply to the combined text and image data