Building an application with LangChain? You can route your requests through the Requesty router to access any LLM and get cost management, monitoring, and fallbacks out of the box. Here's an example script:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from os import getenv
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Define the prompt template
template = """You are an expert on Requesty router. The user has a question about this router:

Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Initialize the LLM, pointing the OpenAI-compatible client at the Requesty router
llm = ChatOpenAI(
    openai_api_key=getenv("REQUESTY_API_KEY"),
    openai_api_base=getenv("REQUESTY_BASE_URL"),
    model_name="openai/gpt-4o",
)

# Create a runnable chain
llm_chain = prompt | llm

# Define the question
question = "What application should I build now that Requesty router provides access to 150+ LLMs?"

# Run the chain and print the response
response = llm_chain.invoke({"question": question})
print(response.content)
```
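For intuition, when the chain is invoked, `PromptTemplate` substitutes the `{question}` input variable into the template before the resulting text is sent to the model. A minimal plain-Python sketch of that substitution step (the `format_prompt` helper is hypothetical, for illustration only):

```python
# The same template string used in the LangChain example above
template = """You are an expert on Requesty router. The user has a question about this router:

Question: {question}

Answer: Let's think step by step."""

def format_prompt(question: str) -> str:
    # Fill the single {question} placeholder, as PromptTemplate.format would
    return template.format(question=question)

prompt_text = format_prompt("Which models does the router expose?")
print(prompt_text)
```

The chain `prompt | llm` simply pipes this formatted string into the model call, so the model receives the full instruction text rather than the bare question.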