Requesty supports image generation through two different endpoints: the dedicated Images API (/v1/images/generations) for standard image generation workflows, and the Chat Completions API (/v1/chat/completions) for models that return images alongside text.

Images API (/v1/images/generations)

The dedicated images endpoint follows the OpenAI Images API format and is the recommended way to generate images with supported models.

Request Format

curl https://router.requesty.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "azure/openai/gpt-image-1",
    "prompt": "A sunset over mountains with vibrant orange and purple skies",
    "n": 1,
    "size": "1024x1024",
    "quality": "auto"
  }'

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The model to use for image generation |
| prompt | string | Yes | A text description of the desired image |
| n | integer | No | Number of images to generate (default: 1) |
| size | string | No | Image dimensions (e.g., 1024x1024, 1536x1024, 1024x1536) |
| quality | string | No | Image quality (auto, high, medium, low) |
| response_format | string | No | Output delivery format: url or b64_json (default: url) |
| background | string | No | Background type: auto, transparent, or opaque |
| output_format | string | No | File format: png, jpeg, or webp |

Response Format

The response returns a data array containing the generated images:
{
  "created": 1719000000,
  "data": [
    {
      "url": "https://..."
    }
  ]
}
When response_format is set to b64_json:
{
  "created": 1719000000,
  "data": [
    {
      "b64_json": "/9j/4AAQSkZJRgABAQ..."
    }
  ]
}
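The same request can be made through the OpenAI Python SDK pointed at the Requesty base URL, mirroring the chat example later in this guide. This is a minimal sketch assuming `client.images.generate()` forwards the parameters above; the b64_json decoding helper is illustrative:

```python
import base64


def decode_b64_image(b64_json: str) -> bytes:
    """Decode an Images API b64_json payload into raw image bytes."""
    return base64.b64decode(b64_json)


if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key="YOUR_REQUESTY_API_KEY",
        base_url="https://router.requesty.ai/v1",
    )

    response = client.images.generate(
        model="azure/openai/gpt-image-1",
        prompt="A sunset over mountains with vibrant orange and purple skies",
        n=1,
        size="1024x1024",
        response_format="b64_json",
    )

    # Each item in response.data carries a b64_json field when that
    # response_format is requested; write each image to disk.
    for i, item in enumerate(response.data):
        with open(f"generated_image_{i}.png", "wb") as f:
            f.write(decode_b64_image(item.b64_json))
```

With the default `response_format` of `url`, each item exposes a `url` field instead, which can be downloaded directly.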

Supported Models

| Model | Description |
|---|---|
| azure/openai/gpt-image-1 | OpenAI's GPT Image 1 model via Azure |
| azure/openai/gpt-image-1.5 | OpenAI's GPT Image 1.5 model via Azure |

Chat Completions API (/v1/chat/completions)

Some image generation models use the standard chat completions endpoint and return generated images alongside text responses.

Request Format

curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "vertex/google/gemini-2.5-flash-image-preview",
    "messages": [
      {
        "role": "user",
        "content": "Generate an image of a sunset over mountains"
      }
    ]
  }'

Response Format

The response includes both the standard text content and an array of generated images:
{
  "model": "vertex/google/gemini-2.5-flash-image-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I've generated an image of a sunset over mountains as requested.",
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,your_base64_image"
            }
          }
        ]
      }
    }
  ]
}

Python Example

import base64
from io import BytesIO
from PIL import Image
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="vertex/google/gemini-2.5-flash-image-preview",
    messages=[
        {
            "role": "user",
            "content": "Generate a futuristic cityscape at night"
        }
    ]
)

# Extract the generated image
message = response.choices[0].message
if hasattr(message, 'images') and message.images:
    for i, image_data in enumerate(message.images):
        # Extract base64 data from data URL
        # Format: "data:image/png;base64,actual_base64_data"
        base64_str = image_data['image_url']['url'].split(',')[1]
        image_bytes = base64.b64decode(base64_str)
        
        # Open with PIL
        image = Image.open(BytesIO(image_bytes))
        
        # Save the image
        image.save(f'generated_image_{i}.png')
        print(f"Image saved as generated_image_{i}.png")

# Access the text response
print(message.content)

JavaScript/TypeScript Example

import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

async function generateImage() {
  const response = await client.chat.completions.create({
    model: 'vertex/google/gemini-2.5-flash-image-preview',
    messages: [
      {
        role: 'user',
        content: 'Generate a serene landscape with a lake'
      }
    ]
  });

  const message = response.choices[0].message;
  
  // Handle generated images
  if (message.images && message.images.length > 0) {
    message.images.forEach((imageData, index) => {
      // Extract base64 data from data URL
      // Format: "data:image/png;base64,actual_base64_data"
      const base64Data = imageData.image_url.url.split(',')[1];
      const imageBuffer = Buffer.from(base64Data, 'base64');
      
      // Save to file
      fs.writeFileSync(`generated_image_${index}.png`, imageBuffer);
      console.log(`Image saved as generated_image_${index}.png`);
    });
  }

  // Access the text response
  console.log(message.content);
}

generateImage();

Supported Models

| Model | Description |
|---|---|
| vertex/google/gemini-2.5-flash-image-preview | Google Gemini image generation via Vertex AI |

Choosing an Endpoint

| Feature | Images API | Chat Completions API |
|---|---|---|
| Endpoint | /v1/images/generations | /v1/chat/completions |
| OpenAI SDK support | client.images.generate() | client.chat.completions.create() |
| Text + image response | No | Yes |
| Conversational context | No | Yes |
| Background control | Yes | No |
| Output format control | Yes (png, jpeg, webp) | No |
Image generation models may have different pricing compared to text models. Check the model library for specific pricing information.
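The comparison above can be sketched as a small dispatch helper. The model sets come from the tables in this guide, and `generate` assumes an OpenAI-style client configured as in the earlier examples; this is illustrative routing, not a Requesty API:

```python
# Models served by the dedicated Images API in this guide; everything else
# in this sketch falls through to Chat Completions.
IMAGES_API_MODELS = {
    "azure/openai/gpt-image-1",
    "azure/openai/gpt-image-1.5",
}


def uses_images_api(model: str) -> bool:
    """Return True if the model should be called via /v1/images/generations."""
    return model in IMAGES_API_MODELS


def generate(client, model: str, prompt: str):
    """Route a prompt to the endpoint that matches the model."""
    if uses_images_api(model):
        return client.images.generate(model=model, prompt=prompt)
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
```

Routing on the model name keeps calling code uniform while each model still reaches the endpoint it supports.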

Limitations

  • Image size and resolution depend on the specific model capabilities
  • Some models may have content filtering or safety restrictions
  • Response size limits apply to the combined text and image data