OpenCode is an open source AI coding agent that runs in your terminal, with companion desktop and IDE apps. The source lives at github.com/sst/opencode. Using the Requesty integration, you can:
- Access 300+ models from OpenAI, Anthropic, Google, Mistral, and many other providers through one API key.
- Track and manage your spend in a single location.
- Apply fallback policies, load balancing, and latency routing to keep your agent responsive.
- Restrict the model picker to your organization’s Approved Models.
Prerequisites
- OpenCode installed (e.g. `curl -fsSL https://opencode.ai/install | bash`).
- A Requesty API key from the API Keys page.
Quick setup with /connect
The fastest way to add Requesty is the built-in `/connect` command. It writes the API key into OpenCode’s secure store at `~/.local/share/opencode/auth.json`. See the OpenCode providers guide for the full reference.
Paste your Requesty API key
Get your key from the API Keys page.
The CLI alternative is `opencode auth login`. Choose Requesty when prompted, then paste your key. The credential is written to the same auth.json file.

Manual configuration
You can also configure Requesty directly in `opencode.json`. This is useful when you want a checked-in project config or when you want to switch the base URL to the EU region.
Create or edit opencode.json in your project root or ~/.config/opencode/opencode.json for a global config:
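A minimal sketch of that config. The `provider.requesty` shape with `options.baseURL` matches what the troubleshooting section later on this page describes; the `{env:REQUESTY_API_KEY}` placeholder syntax is an assumption, so verify the exact fields against the OpenCode providers guide for your version:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      }
    }
  }
}
```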
REQUESTY_API_KEY is the canonical environment variable OpenCode reads for this provider.

EU routing
To pin traffic to the EU region, override the base URL in the same provider block.

Recommended: use a Routing Policy
If you find yourself editing `opencode.json` every time you want to try a different model, point OpenCode at a Routing Policy instead. A policy is a named alias that lives on the Requesty side. You change which model (or chain of models) it resolves to from the Routing Policies page, and every OpenCode session immediately picks up the change. No config edits, no restarts, no PR review.
This is the pattern we recommend for teams. Hard coding a specific model in OpenCode is fine for solo experimentation, but a policy is safer the moment more than one person is using the same setup.
Create the policy in Requesty
Pick whichever policy type fits how you want the model swap to behave, then follow the linked guide to create it in the Requesty UI:
- Fallback Policy. A primary model with one or more backups. If the primary fails, Requesty automatically retries the next model in the chain. Best for reliability.
- Load Balancing Policy. Splits traffic across models by weight. Best for A/B tests and gradual rollouts.
- Latency Routing Policy. Always picks whichever model is currently fastest. Best when you care about time to first token (TTFT).
Give the policy a name (for example coding-default), and add the models.
Point OpenCode at the policy
Reference the policy as `requesty/policy/<your-policy-name>` in your `opencode.json`:
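As a sketch, using the example policy name that appears later on this page (coding-default). The top-level `model` field is an assumption about where OpenCode reads its default model; check your OpenCode version's config reference:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "requesty/policy/coding-default"
}
```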
The policy is also selectable from the /models picker. Policies appear alongside individual models in the list.
Swapping models without touching opencode.json
When you want to try a new model:
- Open Routing Policies.
- Edit your policy and change (or reorder) the model list.
- Hit save. The next request from OpenCode uses the new model.
Every session pointed at `requesty/policy/coding-default` switches at the same time. This is also how you safely roll out a new model: load balance 10% to it via a Load Balancing Policy, watch the dashboards, then ramp up.
Track cost per git branch, repo, and developer
One command installs a lightweight shell wrapper that tags every OpenCode session with metadata that shows up in your Requesty dashboards.

What you get
- Cost per branch. See which feature branch is burning the most credits.
- Cost per repo. Break down spend across multiple repositories.
- Cost per developer. Know who is spending what.
- Agent version tracking. See which OpenCode version is generating spend.
Install
After running the installer, reload your shell with `source ~/.zshrc` (or `source ~/.bashrc`).
That is it. Every OpenCode session will now automatically send these headers to Requesty:
| Header | Value |
|---|---|
| `X-Requesty-Branch` | Current git branch |
| `X-Requesty-Repo` | org/repo from git origin |
| `X-Requesty-Ai-Agent` | OpenCode version |
| `X-Requesty-User` | OS username |
Headers are set once per session and sent only to Requesty. They are stripped before forwarding to any AI provider, so no extra metadata leaves the gateway.
How it works
The installer makes two changes:
- It appends a small shell function to your `~/.zshrc` or `~/.bashrc` that wraps the `opencode` command. Each time you start OpenCode, the wrapper reads your current git context and exports environment variables with your branch, repo, agent version, and user.
- It adds a `headers` map to your `opencode.json` so the Requesty provider forwards those env vars on every request:
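A plausible sketch of that headers map. The header names come from the table above; the environment variable names (`REQUESTY_BRANCH` and friends) and the `{env:...}` placeholder syntax are assumptions — the installer's actual variable names may differ:

```json
{
  "provider": {
    "requesty": {
      "options": {
        "headers": {
          "X-Requesty-Branch": "{env:REQUESTY_BRANCH}",
          "X-Requesty-Repo": "{env:REQUESTY_REPO}",
          "X-Requesty-Ai-Agent": "{env:REQUESTY_AI_AGENT}",
          "X-Requesty-User": "{env:REQUESTY_USER}"
        }
      }
    }
  }
}
```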
Uninstall
Remove the block between the `# --- Requesty header injection ---` markers in your shell rc file, or just delete the `opencode()` function. Then revert the headers block in your `opencode.json` if you no longer want the headers forwarded.
Custom headers
You can add your own `X-Requesty-*` headers for additional dimensions (for example to tag by team, customer, or environment). Drop extra entries into the same `headers` map:
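For instance, a sketch using two illustrative dimension names (`X-Requesty-Team` and `X-Requesty-Environment` are examples, not documented keys — any `X-Requesty-*` name works the same way):

```json
{
  "provider": {
    "requesty": {
      "options": {
        "headers": {
          "X-Requesty-Team": "platform",
          "X-Requesty-Environment": "staging"
        }
      }
    }
  }
}
```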
Adding models that are not in OpenCode’s catalog
Some Requesty models may not appear in /models even after you approve them on your account. This usually happens with newly released providers (for example Inceptron, Fireworks, Zai, Novita, DeepInfra) and recent model families like GLM.
The fix is a custom provider entry in your opencode.json that points at the Requesty router and lists the model IDs you want:
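A sketch of such an entry, assuming OpenCode's custom-provider convention of an npm loader package (here `@ai-sdk/openai-compatible`) plus a `models` map. The provider key and `{env:...}` placeholder are assumptions; the display name matches the group mentioned below, and `zai/GLM-5.1` is the example model ID from the casing note that follows:

```json
{
  "provider": {
    "requesty-extra": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Requesty (Extra Models)",
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      },
      "models": {
        "zai/GLM-5.1": {
          "name": "GLM-5.1"
        }
      }
    }
  }
}
```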
Restart OpenCode and open /models. The new entries appear under the Requesty (Extra Models) group and route through the standard Requesty router, so usage, caching, fallback, and approved-model checks all behave the same as the built-in requesty provider.
Use the exact model ID returned by `GET https://router.requesty.ai/v1/models`. For GLM specifically the casing matters (`zai/GLM-5.1`, not `zai/glm-5.1`). You can find the full catalog in the Model Library or by hitting /v1/models directly.

Pinning a single specific model

If you only need one extra model, the same pattern works: keep just that one entry in the models map.

Need a model added?
If you want a Requesty model to appear in /models without the custom provider workaround, contact us and we will add it.
Selecting a model
Once Requesty is connected, the /models picker shows your approved models. Pick one for the active session, or set a default in config:
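A sketch, assuming a top-level `model` field holds the default and using an illustrative model ID — substitute any ID from your approved list:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "requesty/openai/gpt-4o"
}
```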
Verifying the integration
From inside OpenCode, run a quick prompt and confirm the request shows up in your Requesty dashboards.

Troubleshooting
Models I approved are missing from /models
First confirm the model is enabled at app.requesty.ai/admin-panel?tab=models and restart the OpenCode session. If it still does not appear, use the custom provider workaround to expose it. This is the typical case for newer providers like Inceptron, Fireworks, Zai, and Novita. If you would prefer it to work out of the box, contact us and we will add it.
No endpoints found for <model>
This usually means the provider block was not persisted to your `opencode.json`. Open the file (project root or `~/.config/opencode/opencode.json`) and ensure a `provider.requesty` entry exists with at least `options.baseURL` set to `https://router.requesty.ai/v1`.

401 Unauthorized

Your Requesty API key is missing or invalid. Re-run `opencode auth login` (or check that REQUESTY_API_KEY is set) and generate a fresh key from the API Keys page if needed.
Slow first token
Try a latency routing policy or move to the EU endpoint if your traffic is closer to Frankfurt.