
Requesty supports image generation through two different endpoints: the dedicated Images API (/v1/images/generations and /v1/images/edits) for standard image workflows, and the Chat Completions API (/v1/chat/completions) for models that return images alongside text.

Images API (/v1/images/generations)

The dedicated images endpoint follows the OpenAI Images API format and is the recommended way to generate images with supported models.

Request Format

curl https://router.requesty.ai/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "azure/openai/gpt-image-1",
    "prompt": "A sunset over mountains with vibrant orange and purple skies",
    "n": 1,
    "size": "1024x1024",
    "quality": "auto"
  }'

Parameters

Parameter       | Type    | Required | Description
model           | string  | Yes      | The model to use for image generation
prompt          | string  | Yes      | A text description of the desired image
n               | integer | No       | Number of images to generate (default: 1)
size            | string  | No       | Image dimensions (e.g., 1024x1024, 1536x1024, 1024x1536)
quality         | string  | No       | Image quality (auto, high, medium, low)
response_format | string  | No       | Output delivery format: url or b64_json (default: url)
background      | string  | No       | Background type: auto, transparent, or opaque
output_format   | string  | No       | File format: png, jpeg, or webp

Response Format

The endpoint returns a data array containing the generated images:
{
  "created": 1719000000,
  "data": [
    {
      "url": "https://..."
    }
  ]
}
When response_format is set to b64_json:
{
  "created": 1719000000,
  "data": [
    {
      "b64_json": "/9j/4AAQSkZJRgABAQ..."
    }
  ]
}
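
Python Example

The same request through the OpenAI Python SDK. This is a minimal sketch, assuming client.images.generate() maps its parameters one-to-one onto the request body shown above (the endpoint comparison table below lists this SDK method for the generations endpoint):

import base64

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# URL delivery (the default response_format)
response = client.images.generate(
    model="azure/openai/gpt-image-1",
    prompt="A sunset over mountains with vibrant orange and purple skies",
    n=1,
    size="1024x1024",
    quality="auto",
)
print(response.data[0].url)

# With response_format="b64_json", decode the base64 payload instead
response = client.images.generate(
    model="azure/openai/gpt-image-1",
    prompt="A sunset over mountains with vibrant orange and purple skies",
    response_format="b64_json",
)
with open("sunset.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))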

Supported Models

Model                      | Description
azure/openai/gpt-image-1   | OpenAI’s GPT Image 1 model via Azure
azure/openai/gpt-image-1.5 | OpenAI’s GPT Image 1.5 model via Azure

Image Edits API (/v1/images/edits)

The image edits endpoint applies a text prompt to one or more input images. It is OpenAI-compatible, so you can call client.images.edit() directly.

Request Format

The endpoint accepts both multipart/form-data (the OpenAI SDK default for file uploads) and application/json (with image references as base64 data URLs or file IDs).
curl https://router.requesty.ai/v1/images/edits \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -F "model=azure/openai/gpt-image-1" \
  -F "prompt=Make the sky a dramatic sunset" \
  -F "image[]=@./photo.png" \
  -F "size=1024x1024" \
  -F "quality=auto"
JSON variant:
curl https://router.requesty.ai/v1/images/edits \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "azure/openai/gpt-image-1",
    "prompt": "Make the sky a dramatic sunset",
    "images": [
      { "image_url": "data:image/png;base64,iVBORw0KGgo..." }
    ],
    "size": "1024x1024"
  }'
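
For the JSON variant, you build the base64 data URL yourself. Here is a short Python sketch that mirrors the curl request above, using the requests library as a stand-in HTTP client (it is not part of the documented examples):

import base64
import requests

# Encode the local file as a base64 data URL for the images array
with open("photo.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "https://router.requesty.ai/v1/images/edits",
    headers={"Authorization": "Bearer YOUR_REQUESTY_API_KEY"},
    json={
        "model": "azure/openai/gpt-image-1",
        "prompt": "Make the sky a dramatic sunset",
        "images": [{"image_url": data_url}],
        "size": "1024x1024",
    },
)
print(resp.json()["data"][0]["url"])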

Parameters

Parameter          | Type                       | Required | Description
model              | string                     | Yes      | The model to use for image editing
prompt             | string                     | Yes      | A text description of the desired edit
image[] / images   | file[] or ImageReference[] | Yes      | The input images. Use image[] form fields for file uploads, or images with file_id or image_url in JSON. Up to 16 images.
mask               | file or ImageReference     | No       | Optional mask. Transparent pixels mark the area that will be regenerated.
n                  | integer                    | No       | Number of edited images to generate (default: 1)
size               | string                     | No       | Output size (auto, 1024x1024, 1536x1024, 1024x1536)
quality            | string                     | No       | Image quality (auto, high, medium, low)
input_fidelity     | string                     | No       | Fidelity to the input image (high or low)
background         | string                     | No       | Background type (auto, transparent, opaque)
output_format      | string                     | No       | File format (png, jpeg, webp)
output_compression | integer                    | No       | Compression level (0 to 100) for webp or jpeg
response_format    | string                     | No       | Output delivery format (url or b64_json)

Python Example

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

with open("photo.png", "rb") as image_file:
    response = client.images.edit(
        model="azure/openai/gpt-image-1",
        prompt="Make the sky a dramatic sunset",
        image=image_file,
        size="1024x1024",
    )

print(response.data[0].url)

JavaScript/TypeScript Example

import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

const response = await client.images.edit({
  model: 'azure/openai/gpt-image-1',
  prompt: 'Make the sky a dramatic sunset',
  image: fs.createReadStream('./photo.png'),
  size: '1024x1024',
});

console.log(response.data[0].url);

Masked Edits

Pass a mask image to restrict edits to a specific region. Transparent pixels in the mask mark the area that will be regenerated.
curl https://router.requesty.ai/v1/images/edits \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -F "model=azure/openai/gpt-image-1" \
  -F "prompt=Replace the background with a forest" \
  -F "image[]=@./portrait.png" \
  -F "mask=@./mask.png" \
  -F "size=1024x1024"

Supported Models

Model                      | Description
azure/openai/gpt-image-1   | OpenAI’s GPT Image 1 model via Azure
azure/openai/gpt-image-1.5 | OpenAI’s GPT Image 1.5 model via Azure

Chat Completions API (/v1/chat/completions)

Some image generation models use the standard chat completions endpoint and return generated images alongside text responses.

Request Format

curl https://router.requesty.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_REQUESTY_API_KEY" \
  -d '{
    "model": "vertex/google/gemini-2.5-flash-image-preview",
    "messages": [
      {
        "role": "user",
        "content": "Generate an image of a sunset over mountains"
      }
    ]
  }'

Response Format

The response includes both the standard text content and an array of generated images:
{
  "model": "vertex/google/gemini-2.5-flash-image-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I've generated an image of a sunset over mountains as requested.",
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,your_base64_image"
            }
          }
        ]
      }
    }
  ]
}

Python Example

import base64
from io import BytesIO
from PIL import Image
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="vertex/google/gemini-2.5-flash-image-preview",
    messages=[
        {
            "role": "user",
            "content": "Generate a futuristic cityscape at night"
        }
    ]
)

# Extract the generated image
message = response.choices[0].message
if hasattr(message, 'images') and message.images:
    for i, image_data in enumerate(message.images):
        # Extract base64 data from data URL
        # Format: "data:image/png;base64,actual_base64_data"
        base64_str = image_data['image_url']['url'].split(',')[1]
        image_bytes = base64.b64decode(base64_str)
        
        # Open with PIL
        image = Image.open(BytesIO(image_bytes))
        
        # Save the image
        image.save(f'generated_image_{i}.png')
        print(f"Image saved as generated_image_{i}.png")

# Access the text response
print(message.content)

JavaScript/TypeScript Example

import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'YOUR_REQUESTY_API_KEY',
  baseURL: 'https://router.requesty.ai/v1',
});

async function generateImage() {
  const response = await client.chat.completions.create({
    model: 'vertex/google/gemini-2.5-flash-image-preview',
    messages: [
      {
        role: 'user',
        content: 'Generate a serene landscape with a lake'
      }
    ]
  });

  const message = response.choices[0].message;
  
  // Handle generated images
  if (message.images && message.images.length > 0) {
    message.images.forEach((imageData, index) => {
      // Extract base64 data from data URL
      // Format: "data:image/png;base64,actual_base64_data"
      const base64Data = imageData.image_url.url.split(',')[1];
      const imageBuffer = Buffer.from(base64Data, 'base64');
      
      // Save to file
      fs.writeFileSync(`generated_image_${index}.png`, imageBuffer);
      console.log(`Image saved as generated_image_${index}.png`);
    });
  }

  // Access the text response
  console.log(message.content);
}

generateImage();

Supported Models

Model                                        | Description
vertex/google/gemini-2.5-flash-image-preview | Google Gemini image generation via Vertex AI

Choosing an Endpoint

Feature                | Images Generate           | Images Edit             | Chat Completions API
Endpoint               | /v1/images/generations    | /v1/images/edits        | /v1/chat/completions
OpenAI SDK support     | client.images.generate()  | client.images.edit()    | client.chat.completions.create()
Accepts input images   | No                        | Yes (up to 16)          | Yes (as chat content)
Mask support           | No                        | Yes                     | No
Text + image response  | No                        | No                      | Yes
Conversational context | No                        | No                      | Yes
Background control     | Yes                       | Yes                     | No
Output format control  | Yes (png, jpeg, webp)     | Yes (png, jpeg, webp)   | No
Image generation models may be priced differently from text models. Check the model library for specific pricing information.

Limitations

  • Image size and resolution depend on the specific model capabilities
  • Some models may have content filtering or safety restrictions
  • Response size limits apply to the combined text and image data