Overview
Tool Call Analytics gives you complete visibility into how your AI agents and applications use tools. Whether you’re working with MCP servers, function calling, or custom tool implementations, you can now track exactly what’s happening at the tool level.
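For illustration, here is roughly what a tracked request looks like when sent through an OpenAI-compatible chat completions endpoint. This is a minimal sketch: the router URL, model name, and get_weather tool are placeholders, not tied to any specific SDK or schema.

```python
# A sketch of a request that would surface in Tool Call Analytics:
# any request that declares tools and invokes them is tracked automatically.
import os
import requests

payload = {
    "model": "gpt-4o",  # any tool-capable model
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

response = requests.post(
    "https://router.example.com/v1/chat/completions",  # placeholder endpoint
    headers={"Authorization": f"Bearer {os.environ['ROUTER_API_KEY']}"},
    json=payload,
    timeout=30,
)
print(response.json()["choices"][0]["message"].get("tool_calls"))
```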
Why It Matters
Most teams building with agentic AI are flying blind. You can see overall request metrics, but when a response takes 15 seconds or costs $2, you don’t know which tool is responsible. Common problems this solves:
- A single slow tool bottlenecking your entire agent workflow
- Unexpected costs from tools making excessive API calls
- Debugging which tools are failing in production
- Understanding which tools users actually use vs. which sit idle
Tool calls can account for 60%+ of your total response time. You can’t optimize what you don’t measure.
Key Metrics
Tool Call Requests
Total number of requests that included tool calls. Each request may contain multiple tool invocations. Use this to:
- Identify your most frequently used tools
- Spot unusual usage patterns
- Track adoption of new tools over time
Tool Call Cost
Total cost of tool call requests, including both router and provider costs. Use this to:
- Find expensive tools that need optimization
- Set cost budgets per tool
- Justify infrastructure investments
Average Latency
Average response time for tool calls in milliseconds. Use this to:
- Identify performance bottlenecks
- Set latency SLAs per tool
- Optimize critical path operations
Tool latency varies wildly. Some tools consistently take 15+ seconds while others return in milliseconds. Without per-tool tracking, you won’t know which ones to optimize.
Success Rate
Percentage of successful tool call requests. Use this to:
- Monitor tool reliability
- Catch breaking changes early
- Track improvement after fixes
Avg Calls Per Request
Average number of tool invocations per request. Use this to:
- Understand workflow complexity
- Optimize prompt engineering to reduce unnecessary calls
- Identify inefficient tool usage patterns
Avg Tokens Per Call
Average tokens consumed per tool call. Use this to:
- Track token efficiency
- Identify tools with verbose outputs
- Optimize context management
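To make the definitions above concrete, here is a minimal sketch of how each metric could be derived by hand from a per-request export. The record fields are illustrative assumptions, not the actual export schema; the dashboard computes all of this for you.

```python
# Hypothetical per-request records: one dict per request, with a list of
# tool invocations, total cost, end-to-end latency, and success flag.
requests_log = [
    {"tool_calls": [{"name": "web_search", "tokens": 900}],
     "cost": 0.012, "latency_ms": 8500, "success": True},
    {"tool_calls": [{"name": "read_file", "tokens": 300},
                    {"name": "database_query", "tokens": 450}],
     "cost": 0.004, "latency_ms": 950, "success": True},
    {"tool_calls": [], "cost": 0.001, "latency_ms": 400, "success": True},
]

tool_requests = [r for r in requests_log if r["tool_calls"]]   # requests with tool calls
all_calls = [c for r in tool_requests for c in r["tool_calls"]]  # individual invocations

tool_call_requests = len(tool_requests)                                      # Tool Call Requests
tool_call_cost = sum(r["cost"] for r in tool_requests)                       # Tool Call Cost
avg_latency_ms = sum(r["latency_ms"] for r in tool_requests) / len(tool_requests)  # Average Latency
success_rate = sum(r["success"] for r in tool_requests) / len(tool_requests)       # Success Rate
avg_calls_per_request = len(all_calls) / len(tool_requests)                  # Avg Calls Per Request
avg_tokens_per_call = sum(c["tokens"] for c in all_calls) / len(all_calls)   # Avg Tokens Per Call

print(tool_call_requests, round(avg_calls_per_request, 2), round(avg_tokens_per_call, 1))
```

Note how a request with two invocations counts once toward Tool Call Requests but twice toward Avg Calls Per Request.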
Group By Tool Name
The most powerful feature is grouping metrics by tool name. This shows you exactly which tools are slow, expensive, or unreliable. In the dashboard, select “Group by: Tool Name” to see:
- Stacked bar charts showing request volume per tool
- Line charts comparing latency across tools
- Cost breakdown by tool
- Success rates for each tool
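The same grouping can be reproduced offline if you export your data. A rough sketch, again assuming illustrative per-call fields rather than the actual export schema:

```python
# Group hypothetical per-call records by tool name and print per-tool
# request volume, average latency, cost, and success rate, slowest first.
from collections import defaultdict
from statistics import mean

calls = [
    {"tool": "web_search", "latency_ms": 8500, "cost": 0.010, "ok": True},
    {"tool": "read_file", "latency_ms": 200, "cost": 0.001, "ok": True},
    {"tool": "database_query", "latency_ms": 800, "cost": 0.003, "ok": True},
    {"tool": "web_search", "latency_ms": 9100, "cost": 0.011, "ok": False},
]

by_tool = defaultdict(list)
for call in calls:
    by_tool[call["tool"]].append(call)

for tool, rows in sorted(by_tool.items(),
                         key=lambda kv: -mean(r["latency_ms"] for r in kv[1])):
    print(f"{tool:18} calls={len(rows):3d} "
          f"avg_latency={mean(r['latency_ms'] for r in rows):7.0f}ms "
          f"cost=${sum(r['cost'] for r in rows):.3f} "
          f"success={100 * mean(r['ok'] for r in rows):.0f}%")
```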
Real-World Use Cases
Case Study: Debugging Slow Responses
A team noticed their agent responses were taking 12+ seconds. Using tool call analytics grouped by tool name, they discovered:
- web_search tool: 8.5s average latency
- read_file tool: 0.2s average latency
- database_query tool: 0.8s average latency
Case Study: Cost Optimization
Another team had tool costs spiraling out of control. Analytics revealed:
- comprehensive_analysis tool: $0.45 per call
- quick_check tool: $0.02 per call
- Both tools were being used for similar tasks
The team switched to quick_check when possible and saved 70% on tool costs.
Case Study: Reliability Monitoring
A production system started failing intermittently. Tool call analytics showed:
- external_api tool: 65% success rate
- All other tools: 99%+ success rate
Filtering and Time Ranges
Combine tool call analytics with existing filters:
- Time range: Last 7 days, 30 days, custom ranges
- API key: Track per-user or per-project tool usage
- Model: See which models are better at using tools
- User: Understand per-user tool patterns
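If you analyze exported data outside the dashboard, the same filters reduce to simple predicates. A small sketch, assuming hypothetical timestamp and api_key fields on each record:

```python
# Restrict hypothetical request records to the last 7 days and a single
# API key before aggregating; field names and key labels are illustrative.
from datetime import datetime, timedelta, timezone

records = [
    {"timestamp": "2025-01-02T10:15:00+00:00", "api_key": "proj-alpha",
     "model": "gpt-4o", "tool_calls": [{"name": "web_search"}]},
    {"timestamp": "2024-12-01T08:00:00+00:00", "api_key": "proj-beta",
     "model": "claude-sonnet", "tool_calls": []},
]

cutoff = datetime.now(timezone.utc) - timedelta(days=7)
recent_for_project = [
    r for r in records
    if datetime.fromisoformat(r["timestamp"]) >= cutoff
    and r["api_key"] == "proj-alpha"
    and r["tool_calls"]
]
print(len(recent_for_project))
```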
Getting Started
Tool call analytics is automatically enabled for all requests that include tool calls. Just navigate to the Analytics → Tool Calls tab in your dashboard. No code changes required. If you’re already using MCP servers, function calling, or tools with your LLM requests, you’ll see data immediately.
Start by grouping by tool name and sorting by latency. This quickly reveals your biggest performance bottlenecks.
Best Practices
- Set latency budgets per tool - Different tools have different acceptable response times. Track them separately (see the sketch after this list).
- Monitor success rates daily - A drop in success rate often indicates breaking changes in external APIs or services.
- Compare costs across similar tools - If two tools do similar things, use the analytics to pick the most cost-effective one.
- Track tokens per call - High token counts may indicate verbose tool outputs that could be compressed.
- Review avg calls per request - If this number keeps growing, you may need to optimize your agent’s planning logic.
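To act on the first two practices, a scheduled check can compare per-tool metrics against budgets. A minimal sketch; the budgets, threshold, field names, and tool names are assumptions, not recommended values:

```python
# Per-tool latency budgets and a daily success-rate check over metrics you
# read from the dashboard or an export. All values are illustrative.
LATENCY_BUDGET_MS = {"web_search": 10_000, "read_file": 500, "database_query": 2_000}
MIN_SUCCESS_RATE = 0.95

def check_tool(name, avg_latency_ms, success_rate):
    """Return human-readable alerts for one tool's daily metrics."""
    alerts = []
    budget = LATENCY_BUDGET_MS.get(name)
    if budget is not None and avg_latency_ms > budget:
        alerts.append(f"{name}: avg latency {avg_latency_ms:.0f}ms exceeds {budget}ms budget")
    if success_rate < MIN_SUCCESS_RATE:
        alerts.append(f"{name}: success rate {success_rate:.0%} below {MIN_SUCCESS_RATE:.0%}")
    return alerts

# Example: values like those in the reliability case study above.
for alert in check_tool("external_api", avg_latency_ms=1_200, success_rate=0.65):
    print(alert)
```

Run something like this once a day and a drop in success rate or a blown latency budget surfaces before users notice.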
Related Features
- Usage Analytics - Track overall request patterns
- Performance Monitoring - Monitor latency and errors
- Cost Tracking - Understand your spending
- MCP Analytics - MCP-specific insights