OpenCode is an open-source AI coding agent that runs in your terminal, with companion desktop and IDE apps. The source lives at github.com/sst/opencode. Using the Requesty integration, you can:
  • Access 300+ models from OpenAI, Anthropic, Google, Mistral, and many other providers through one API key.
  • Track and manage your spend in a single location.
  • Apply fallback policies, load balancing, and latency routing to keep your agent responsive.
  • Restrict the model picker to your organization’s Approved Models.

Prerequisites

  • OpenCode installed and available in your terminal.
  • A Requesty account and an API key from the API Keys page.

Quick setup with /connect

The fastest way to add Requesty is the built-in /connect command. It writes the API key into OpenCode's secure store at ~/.local/share/opencode/auth.json. See the OpenCode providers guide for the full reference.
1. Open OpenCode in any project

cd /path/to/project
opencode
2. Run the connect command

/connect
Select Requesty from the provider list.
3. Paste your Requesty API key

Get your key from the API Keys page.
┌ API key
│ ••••••••••••••••••••••••••••••••
└ enter
4. Pick a model

/models
The list contains every model approved in your Requesty organization. Set a default with /model requesty/anthropic/claude-sonnet-4-5, or pick a model per session.
The CLI alternative is opencode auth login. Choose Requesty when prompted, then paste your key. The credential is written to the same auth.json file.
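
To double-check that the credential landed, you can inspect the auth store. A small sketch, assuming jq is installed and that auth.json is a JSON object keyed by provider ID:

# list the providers that have stored credentials (requesty should appear)
jq 'keys' ~/.local/share/opencode/auth.json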

Manual configuration

You can also configure Requesty directly in opencode.json. This is useful when you want a checked-in project config or when you want to switch the base URL to the EU region. Create or edit opencode.json in your project root, or ~/.config/opencode/opencode.json for a global config:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "requesty/anthropic/claude-sonnet-4-5",
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      }
    }
  }
}
Then export the key in your shell:
export REQUESTY_API_KEY="your_requesty_api_key"
opencode
REQUESTY_API_KEY is the canonical environment variable OpenCode reads for this provider.
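
If you want to confirm the key is valid before starting a session, you can query the router's model list directly (the same GET /v1/models endpoint referenced later in this guide). A minimal check, assuming the router accepts the key as a standard Bearer token:

# a valid key returns the JSON model catalog; an invalid one returns an error
curl -s https://router.requesty.ai/v1/models \
  -H "Authorization: Bearer $REQUESTY_API_KEY"
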
Instead of hard-coding a specific model, you can point OpenCode at a Routing Policy and then swap the underlying model from the Requesty UI without editing opencode.json.

EU routing

To pin traffic to the EU region, override the base URL:
{
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.eu.requesty.ai/v1"
      }
    }
  }
}
See EU Routing for details on the regional endpoint.

Recommended: use a Routing Policy

If you find yourself editing opencode.json every time you want to try a different model, point OpenCode at a Routing Policy instead. A policy is a named alias that lives on the Requesty side: you change which model (or chain of models) it resolves to from the Routing Policies page, and every OpenCode session picks up the change immediately. No config edits, no restarts, no PR review. This is the pattern we recommend for teams. Hard-coding a specific model in OpenCode is fine for solo experimentation, but a policy is safer the moment more than one person shares the same setup.

Create the policy in Requesty

Pick whichever policy type fits how you want the model swap to behave, then follow the linked guide to create it in the Requesty UI:
  • Fallback Policy. A primary model with one or more backups. If the primary fails, Requesty automatically retries the next model in the chain. Best for reliability.
  • Load Balancing Policy. Splits traffic across models by weight. Best for A/B tests and gradual rollouts.
  • Latency Routing Policy. Always picks whichever model is currently fastest. Best when you care about time to first token (TTFT).
All three are created the same way: open Routing Policies, click Create Policy, choose the type, give it a name (for example coding-default), and add the models.

Point OpenCode at the policy

Reference the policy as requesty/policy/<your-policy-name> in your opencode.json:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "requesty/policy/coding-default",
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      }
    }
  }
}
You can also pick the policy at runtime from the /models picker. Policies appear alongside individual models in the list.
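
To confirm the policy resolves before wiring it into a session, you can call the router directly. A hedged sketch: it assumes the router-side model ID is policy/<name>, i.e. that the requesty/ prefix above is OpenCode's provider prefix rather than part of the router-side ID.

curl -s https://router.requesty.ai/v1/chat/completions \
  -H "Authorization: Bearer $REQUESTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "policy/coding-default", "messages": [{"role": "user", "content": "ping"}]}'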

Swapping models without touching opencode.json

When you want to try a new model:
  1. Open Routing Policies.
  2. Edit your policy and change (or reorder) the model list.
  3. Hit save. The next request from OpenCode uses the new model.
Because the policy resolves on Requesty’s side, every team member using requesty/policy/coding-default switches at the same time. This is also how you safely roll out a new model: load balance 10% to it via a Load Balancing Policy, watch the dashboards, then ramp up.

Track cost per git branch, repo, and developer

One command installs a lightweight shell wrapper that tags every OpenCode session with metadata that shows up in your Requesty dashboards.

What you get

  • Cost per branch. See which feature branch is burning the most credits.
  • Cost per repo. Break down spend across multiple repositories.
  • Cost per developer. Know who is spending what.
  • Agent version tracking. See which OpenCode version is generating spend.

Install

curl -fsSL https://www.requesty.ai/opencode/install.sh | bash
Then restart your terminal or run source ~/.zshrc (or source ~/.bashrc). That is it. Every OpenCode session will now automatically send these headers to Requesty:
Header                 Value
X-Requesty-Branch      Current git branch
X-Requesty-Repo        org/repo from git origin
X-Requesty-Ai-Agent    OpenCode version
X-Requesty-User        OS username
Headers are set once per session and sent only to Requesty. They are stripped before forwarding to any AI provider, so no extra metadata leaves the gateway.

How it works

The installer makes two changes:
  1. It appends a small shell function to your ~/.zshrc or ~/.bashrc that wraps the opencode command. Each time you start OpenCode, the wrapper reads your current git context and exports environment variables with your branch, repo, agent version, and user (see the sketch after the config below).
  2. It adds a headers map to your opencode.json so the Requesty provider forwards those env vars on every request:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "requesty": {
      "options": {
        "headers": {
          "X-Requesty-Branch": "{env:REQUESTY_BRANCH}",
          "X-Requesty-Repo": "{env:REQUESTY_REPO}",
          "X-Requesty-Ai-Agent": "{env:REQUESTY_AI_AGENT}",
          "X-Requesty-User": "{env:REQUESTY_USER}"
        }
      }
    }
  }
}
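
For reference, the wrapper from step 1 looks roughly like the sketch below. This is an illustration rather than the installer's exact code: the env var names match the headers map above, but the function body (including the --version flag) is an assumption.

# hypothetical sketch of the wrapper appended to ~/.zshrc / ~/.bashrc
opencode() {
  # current branch name; empty when not inside a git repo
  export REQUESTY_BRANCH="$(git rev-parse --abbrev-ref HEAD 2>/dev/null)"
  # reduce the origin URL (ssh or https) to org/repo
  local url="$(git remote get-url origin 2>/dev/null)"
  export REQUESTY_REPO="$(printf '%s\n' "${url%.git}" | sed -E 's#.*[:/]([^/]+/[^/]+)$#\1#')"
  # agent version (assumes opencode supports --version) and OS user
  export REQUESTY_AI_AGENT="opencode/$(command opencode --version 2>/dev/null)"
  export REQUESTY_USER="$(whoami)"
  command opencode "$@"   # run the real binary with the env vars set
}
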
See Analytics Headers for the full list of supported header dimensions.

Uninstall

Remove the block between the # --- Requesty header injection --- markers in your shell rc file, or just delete the opencode() function. Then revert the headers block in your opencode.json if you no longer want the headers forwarded.
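
If you prefer to script the removal, something like the following works, assuming both markers use the identical comment text shown above and your rc file is ~/.zshrc (back it up first):

# delete from the first marker line through the next marker line, inclusive
sed -i.bak '/# --- Requesty header injection ---/,/# --- Requesty header injection ---/d' ~/.zshrc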

Custom headers

You can add your own X-Requesty-* headers for additional dimensions (for example to tag by team, customer, or environment). Drop extra entries into the same headers map:
{
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}",
        "headers": {
          "X-Title": "OpenCode",
          "X-Requesty-Team": "platform"
        }
      }
    }
  }
}
See Request Metadata for a full reference of the metadata you can pass.

Adding models that are not in OpenCode’s catalog

Some Requesty models may not appear in /models even after you approve them on your account. This usually happens with newly released providers (for example Inceptron, Fireworks, Zai, Novita, DeepInfra) and recent model families like GLM. The fix is a custom provider entry in your opencode.json that points at the Requesty router and lists the model IDs you want:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "requesty-extra": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Requesty (Extra Models)",
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      },
      "models": {
        "zai/GLM-5.1":         { "name": "GLM 5.1 (Zai)" },
        "inceptron/glm-5.1":   { "name": "GLM 5.1 (Inceptron)" },
        "fireworks/glm-5.1":   { "name": "GLM 5.1 (Fireworks)" },
        "novita/zai-org/glm-4.6": { "name": "GLM 4.6 (Novita)" }
      }
    }
  }
}
After saving, restart OpenCode and run /models. The new entries appear under the Requesty (Extra Models) group and route through the standard Requesty router, so usage, caching, fallback, and approved-model checks all behave the same as with the built-in requesty provider.
Use the exact model ID returned by GET https://router.requesty.ai/v1/models. For GLM specifically, the casing matters (zai/GLM-5.1, not zai/glm-5.1). You can find the full catalog in the Model Library or by hitting /v1/models directly.
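
To pull those IDs straight from the router, filter the /v1/models response. A quick sketch, assuming jq is installed and the endpoint returns the usual OpenAI-style data array:

# print every model ID containing "glm", with the router's exact casing
curl -s https://router.requesty.ai/v1/models \
  -H "Authorization: Bearer $REQUESTY_API_KEY" \
  | jq -r '.data[].id' | grep -i glm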

Pinning a single specific model

If you only need one extra model, the same pattern works with a single entry:
{
  "provider": {
    "requesty-extra": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Requesty (Extra Models)",
      "options": {
        "baseURL": "https://router.requesty.ai/v1",
        "apiKey": "{env:REQUESTY_API_KEY}"
      },
      "models": {
        "vertex/google/gemini-2.5-pro": {
          "name": "Gemini 2.5 Pro (via Requesty)"
        }
      }
    }
  }
}

Need a model added?

If you want a Requesty model to appear in /models without the custom provider workaround, contact us and we will add it.

Selecting a model

Once Requesty is connected, the /models picker shows your approved models. Pick one for the active session, or set a default in config:
{
  "model": "requesty/openai/gpt-5"
}
For a default that you can change without editing this file again, use a Routing Policy. See Recommended: use a Routing Policy.

Verifying the integration

From inside OpenCode, run a quick prompt:
> Summarize the README of this repo in 3 bullet points.
Then open the Requesty analytics dashboard to confirm the request was logged. Spend, latency, and token counts should appear within seconds.

Troubleshooting

A model you enabled does not appear in /models

First confirm the model is enabled at app.requesty.ai/admin-panel?tab=models and restart the OpenCode session. If it still does not appear, use the custom provider workaround above to expose it. This is the typical case for newer providers like Inceptron, Fireworks, Zai, and Novita. If you would prefer it to work out of the box, contact us and we will add it.

OpenCode does not recognize the requesty provider

This usually means the provider block was not persisted to your opencode.json. Open the file (project root or ~/.config/opencode/opencode.json) and ensure a provider.requesty entry exists with at least options.baseURL set to https://router.requesty.ai/v1.
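
The minimal shape to look for, trimmed from the config examples above:

{
  "provider": {
    "requesty": {
      "options": {
        "baseURL": "https://router.requesty.ai/v1"
      }
    }
  }
}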

Requests fail with an authentication error

Re-run /connect and re-paste your key, or set REQUESTY_API_KEY in your shell. Credentials live in ~/.local/share/opencode/auth.json.

Responses are slow

Try a latency routing policy, or move to the EU endpoint if your traffic is closer to Frankfurt.
