Requesty router now supports image generation models that return generated images alongside text responses through the standard chat completions endpoint.
How It Works
Image generation models use the same /v1/chat/completions endpoint as text models, but return an additional images array in the response containing the generated images.
Send requests using the standard chat completions format:
curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "vertex/google/gemini-2.5-flash-image-preview",
    "messages": [
      {
        "role": "user",
        "content": "Generate an image of a sunset over mountains"
      }
    ]
  }'
The response includes both the standard text content and an array of generated images:
{
  "model": "vertex/google/gemini-2.5-flash-image-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I've generated an image of a sunset over mountains as requested.",
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,your_base64_image"
            }
          }
        ]
      }
    }
  ]
}
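If you are working with the raw JSON rather than an SDK, the images can be pulled out with a small helper. This is a sketch that assumes only the response shape documented above; the function name extract_images is illustrative, not part of any API:

```python
import base64

def extract_images(response_json):
    """Return decoded image bytes from a chat completions response dict.

    Assumes each choice's message may carry an `images` array of
    {"type": "image_url", "image_url": {"url": ...}} entries whose
    url is a base64 data URL, as shown in the example response above.
    """
    images = []
    for choice in response_json.get("choices", []):
        for entry in choice.get("message", {}).get("images", []):
            data_url = entry["image_url"]["url"]
            # Split "data:image/png;base64,<payload>" at the first comma
            _, payload = data_url.split(",", 1)
            images.append(base64.b64decode(payload))
    return images
```

A response with no images array simply yields an empty list, so the helper is safe to call on text-only completions as well.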
Working with Generated Images
Python Example
import base64
from io import BytesIO
from PIL import Image
from openai import OpenAI

requesty_api_key = "YOUR_REQUESTY_API_KEY"

client = OpenAI(
    api_key=requesty_api_key,
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="vertex/google/gemini-2.5-flash-image-preview",
    messages=[
        {
            "role": "user",
            "content": "Generate a futuristic cityscape at night"
        }
    ]
)

# Extract the generated images
message = response.choices[0].message
if hasattr(message, 'images') and message.images:
    for i, image_data in enumerate(message.images):
        # Extract the base64 payload from the data URL
        # Format: "data:image/png;base64,actual_base64_data"
        base64_str = image_data['image_url']['url'].split(',')[1]
        image_bytes = base64.b64decode(base64_str)

        # Open with PIL and save to disk
        image = Image.open(BytesIO(image_bytes))
        image.save(f'generated_image_{i}.png')
        print(f"Image saved as generated_image_{i}.png")

# Access the text response
print(message.content)
JavaScript/TypeScript Example
import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

async function generateImage() {
  const response = await client.chat.completions.create({
    model: 'vertex/google/gemini-2.5-flash-image-preview',
    messages: [
      {
        role: 'user',
        content: 'Generate a serene landscape with a lake'
      }
    ]
  });

  // Note: `images` is not part of the SDK's response typings; in
  // TypeScript, access it with a cast such as `(message as any).images`.
  const message = response.choices[0].message;

  // Handle generated images
  if (message.images && message.images.length > 0) {
    message.images.forEach((imageData, index) => {
      // Extract the base64 payload from the data URL
      // Format: "data:image/png;base64,actual_base64_data"
      const base64Data = imageData.image_url.url.split(',')[1];
      const imageBuffer = Buffer.from(base64Data, 'base64');

      // Save to file
      fs.writeFileSync(`generated_image_${index}.png`, imageBuffer);
      console.log(`Image saved as generated_image_${index}.png`);
    });
  }

  // Access the text response
  console.log(message.content);
}

generateImage();
Supported Models
Currently, Requesty supports the following image generation model:
- Vertex AI Gemini:
vertex/google/gemini-2.5-flash-image-preview
This model provides fast, high-quality image generation through Google's Vertex AI platform.
Image generation models may have different pricing compared to text models. Check the model details for specific pricing information.
Limitations
- Generated images are returned as base64-encoded data
- Image size and resolution depend on the specific model capabilities
- Some models may have content filtering or safety restrictions
- Response size limits apply to the combined text and image data
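Because images arrive base64-encoded, the response body is roughly a third larger than the raw image bytes, which matters when budgeting against the response size limits above. The decoded size can be estimated without decoding, using standard base64 arithmetic (a sketch; decoded_size is an illustrative helper, not part of the API):

```python
def decoded_size(data_url: str) -> int:
    """Estimate the raw byte size of a base64 data URL without decoding it.

    Each 4 base64 characters encode 3 bytes; trailing '=' padding
    characters subtract one byte apiece.
    """
    payload = data_url.split(",", 1)[1]
    padding = payload.count("=")
    return len(payload) * 3 // 4 - padding
```

For example, a data URL whose payload is 4 MB of base64 text holds roughly 3 MB of actual image data.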