POST /v1/embeddings
Create embedding
curl --request POST \
  --url https://router.requesty.ai/v1/embeddings \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "input": "<string>",
  "model": "openai/text-embedding-3-small",
  "dimensions": 123,
  "encoding_format": "float",
  "user": "<string>"
}'
{
  "data": [
    {
      "embedding": [
        123
      ],
      "index": 123,
      "object": "<string>"
    }
  ],
  "model": "<string>",
  "object": "<string>",
  "usage": {
    "prompt_tokens": 123,
    "total_tokens": 123
  }
}
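
For reference, the same request can be made from Python. This is a minimal sketch using the requests library; the URL and payload mirror the cURL example above, the input text is illustrative, and REQUESTY_API_KEY is a placeholder environment variable for your own key.

import os
import requests

# Placeholder: export REQUESTY_API_KEY before running.
api_key = os.environ["REQUESTY_API_KEY"]

response = requests.post(
    "https://router.requesty.ai/v1/embeddings",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "input": "The quick brown fox",
        "model": "openai/text-embedding-3-small",
        "encoding_format": "float",
    },
    timeout=30,
)
response.raise_for_status()
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))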

Authorizations

Authorization
string
header
required

API key for authentication

Body

application/json
input
required

Input text to embed, encoded as a single string, an array of strings, an array of tokens, or an array of token arrays

model
string
required

The model name to use for embedding generation

Example:

"openai/text-embedding-3-small"

dimensions
integer

The number of dimensions the resulting output embeddings should have

encoding_format
enum<string>
default:float

The format to return the embeddings in. Can be either float or base64 (a sketch for decoding base64 output follows this parameter list).

Available options:
float,
base64

user
string

A unique identifier representing your end-user.

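When encoding_format is set to base64, each embedding is returned as a base64 string instead of a JSON array of numbers. The sketch below assumes the base64 payload packs little-endian 32-bit floats, the convention used by OpenAI-compatible embedding APIs; verify against the router's actual output before relying on it.

import base64
import struct

def decode_embedding(b64_string):
    # Assumption: little-endian float32 values, as in OpenAI-compatible APIs.
    raw = base64.b64decode(b64_string)
    count = len(raw) // 4
    return list(struct.unpack(f"<{count}f", raw))

# Usage with a parsed response dict named `result` (hypothetical):
# vector = decode_embedding(result["data"][0]["embedding"])
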
Response

Embedding response

data
object[]
required

The list of embeddings generated by the model

model
string
required

The name of the model used to generate the embedding

object
string
required

The object type, which is always 'list'

usage
object
required
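
Putting the response fields together, a consumer typically reads one vector per input item from data (ordered by index) and records token usage. A minimal sketch with illustrative placeholder values standing in for a parsed response:

# Illustrative parsed response; real values come from response.json().
result = {
    "data": [{"embedding": [0.01, -0.02, 0.03], "index": 0, "object": "embedding"}],
    "model": "openai/text-embedding-3-small",
    "object": "list",
    "usage": {"prompt_tokens": 4, "total_tokens": 4},
}

# One vector per input item, ordered by index.
vectors = [item["embedding"] for item in sorted(result["data"], key=lambda d: d["index"])]
print(result["model"], result["usage"]["total_tokens"], len(vectors))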