Building an application with LangChain? You can use the Requesty router to access any LLM and get cost management, monitoring, and fallbacks out of the box. Here's an example script:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from os import getenv
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Define the prompt template
template = """You are an expert on Requesty router. The user has a question about this router:
Question: {question}
Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Initialize the OpenAI LLM
llm = ChatOpenAI(
    openai_api_key=getenv("REQUESTY_API_KEY"),
    openai_api_base=getenv("REQUESTY_BASE_URL"),
    model_name="openai/gpt-4o",
)

# Create a Runnable Chain
llm_chain = prompt | llm

# Define the question
question = "What application should I build now that Requesty router provides access to 150+ LLMs?"

# Run the model and get the response
response = llm_chain.invoke({"question": question})
print(response.content)
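
The script above expects a .env file providing REQUESTY_API_KEY and REQUESTY_BASE_URL, the router's OpenAI-compatible endpoint (commonly https://router.requesty.ai/v1, but confirm the exact value in your Requesty dashboard). Because every model sits behind that single endpoint, switching providers only means changing the model string. Below is a minimal sketch of that idea; the identifier "anthropic/claude-3-5-sonnet" is an assumption, so check the Requesty model list for the names available to your account.

from os import getenv
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# Load REQUESTY_API_KEY and REQUESTY_BASE_URL from .env
load_dotenv()

# Same Requesty credentials and endpoint as above; only the model string changes.
claude_llm = ChatOpenAI(
    openai_api_key=getenv("REQUESTY_API_KEY"),
    openai_api_base=getenv("REQUESTY_BASE_URL"),
    model_name="anthropic/claude-3-5-sonnet",  # hypothetical identifier - verify against the Requesty model list
)

# Chat models accept a plain string, which LangChain wraps as a human message.
print(claude_llm.invoke("Summarize what the Requesty router does in one sentence.").content)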