Change the model parameter to any supported model and you're done. (Yes, you can use Anthropic or any other provider without changing anything but the model parameter.)
import os
from dotenv import load_dotenv

from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

# Load environment variables (e.g. your Requesty API key) from a .env file
load_dotenv()

# Initialize the agent with the Requesty router
agent = Agent(
    chat_generator=OpenAIChatGenerator(
        model="anthropic/claude-sonnet-4-20250514",
    ),
    system_prompt="You are a helpful web agent powered by Requesty router.",
)

# Define the question
question = "What are the benefits of using Requesty router with Haystack?"

# Run the agent and get the response
result = agent.run(messages=[ChatMessage.from_user(question)])

# Print the response
print(result["last_message"].text)
Load your Requesty API key however you prefer.
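For instance, a minimal sketch using only the standard library (the placeholder value is hypothetical; in practice you would export the variable in your shell or keep it in a .env file):

```python
import os

# For illustration only: set a placeholder key in-process.
# In a real setup, export REQUESTY_API_KEY in your shell or load it from .env.
os.environ.setdefault("REQUESTY_API_KEY", "sk-placeholder")  # hypothetical value

# Read the key back; raises KeyError if the variable is missing entirely
requesty_api_key = os.environ["REQUESTY_API_KEY"]
```

The variable name REQUESTY_API_KEY matches the Secret.from_env_var call used in the example below.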
Pass the api_key and api_base_url, set the model parameter to any model, and you're done. (Yes, you can use xAI or any other provider without changing anything but the model parameter.)
from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

# Securely load your API key from the environment
requesty_api_key = Secret.from_env_var("REQUESTY_API_KEY")

# Initialize the agent with the Requesty router
agent = Agent(
    chat_generator=OpenAIChatGenerator(
        model="xai/grok-4",
        api_key=requesty_api_key,
        api_base_url="https://router.requesty.ai/v1",
    ),
    system_prompt="You are a helpful web agent powered by Requesty router.",
)

# Define the question
question = "What are the benefits of using Requesty router with Haystack?"

# Run the agent and get the response
result = agent.run(messages=[ChatMessage.from_user(question)])

# Print the response
print(result["last_message"].text)