Pi is an AI coding agent that runs locally on your machine. Using the Requesty integration, you can:
- Access 300+ models from OpenAI, Anthropic, Google, Mistral, and many other providers through one API key.
- Track and manage your spend in a single location.
- Apply fallback policies, load balancing, and latency routing to keep your agent responsive.
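Because Requesty routes every model through one OpenAI-compatible API, the same key Pi uses also works with any plain HTTP client. As a sketch (the `router.requesty.ai/v1` endpoint is assumed here; confirm the base URL in the Requesty docs), a request body looks like ordinary OpenAI-style JSON with the provider-prefixed model name:

```python
import json
import os

# Assumed OpenAI-compatible endpoint; verify against the Requesty docs.
REQUESTY_URL = "https://router.requesty.ai/v1/chat/completions"

# The same rqsty-sk-... key that Pi uses.
api_key = os.environ.get("REQUESTY_API_KEY", "rqsty-sk-...")

# Plain OpenAI-style request body; the model field takes any model
# or policy name from your Requesty catalog.
payload = {
    "model": "anthropic/claude-opus-4-7",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Send with any HTTP client as a POST, using the key as a bearer token:
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))
```

Swapping models, or moving between a discrete model and a policy, is just a change to the `model` string; the key and endpoint stay the same.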
## Prerequisites
- Pi installed on your machine. See pi.dev for installation instructions.
- A Requesty API key from the API Keys page.
## Configuration
### Install Pi
Follow the instructions at pi.dev to download and install Pi on your machine.
### Configure the models file
Create or edit the models configuration file at `~/.pi/agent/models.json` and add Requesty as a provider. Replace `rqsty-sk-...` with your Requesty API key from the API Keys page.
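The exact schema of `~/.pi/agent/models.json` is defined by Pi; as a sketch only (the field names here are assumptions, so check Pi's documentation for the real shape), a Requesty provider entry might look like:

```json
{
  "providers": {
    "requesty": {
      "baseUrl": "https://router.requesty.ai/v1",
      "apiKey": "rqsty-sk-..."
    }
  }
}
```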
### Sync models

Run the following command inside Pi to sync the available Requesty models. This pulls the latest model catalog from Requesty so you can select any approved model from your organization.
## Selecting a model
After the sync completes, run the `/model` command inside Pi to choose a model from the list you synced.
You should see both discrete models, e.g. `anthropic/claude-opus-4-7`, and your custom policies, e.g. `policy/opus-europe`.