## Overview

Anyway provides OpenTelemetry-compatible distributed tracing for your AI applications. Traces help you:

- Debug issues by following requests through your system
- Identify performance bottlenecks
- Understand the flow of AI operations
- Correlate LLM calls with your application logic
## How Tracing Works

Every AI operation creates a trace composed of spans.

### Automatic Spans

The Anyway SDK automatically creates spans for supported LLM providers:

- OpenAI API calls
- Anthropic API calls
- AWS Bedrock, Google Vertex AI, Cohere, and Together AI calls
- Vector DB operations (Pinecone, ChromaDB, Qdrant)
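To make the mechanism concrete, here is a minimal stdlib sketch of what auto-instrumentation does under the hood: wrapping a client method so each call emits a span with `gen_ai.*` attributes. `FakeLLMClient` and `instrument` are illustrative stand-ins, not Anyway SDK APIs — in practice the SDK patches the real provider clients for you.

```python
import time

class FakeLLMClient:
    """Hypothetical stand-in for a provider SDK client (openai, anthropic, ...)."""
    def complete(self, model, prompt):
        return {"model": model, "text": "ok",
                "prompt_tokens": 3, "completion_tokens": 1}

collected_spans = []

def instrument(client):
    """Wrap complete() so every call records a span, mimicking auto-instrumentation."""
    original = client.complete

    def traced_complete(model, prompt):
        start = time.time()
        response = original(model, prompt)
        collected_spans.append({
            "name": "llm.completion",
            "gen_ai.request.model": model,
            "gen_ai.response.model": response["model"],
            "gen_ai.usage.prompt_tokens": response["prompt_tokens"],
            "gen_ai.usage.completion_tokens": response["completion_tokens"],
            "duration_s": time.time() - start,
        })
        return response

    client.complete = traced_complete
    return client

client = instrument(FakeLLMClient())
client.complete("gpt-4o", "hello")
```

After the call, `collected_spans` holds one span carrying the request model, response model, and token counts — the same attributes listed in the Span Attributes table.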
### Custom Spans with Decorators

Structure your traces with workflow and task decorators.
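The nesting these decorators produce can be sketched with the standard library. The decorator names `@workflow` / `@task` and their `name=` parameter are assumptions about the SDK's API; the sketch below only emulates the span tree they would create.

```python
import functools

span_tree = []   # (depth, kind, name) tuples, recorded in start order
_depth = 0

def _span(kind, name):
    """Decorator factory: records a span entry and tracks nesting depth."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            global _depth
            span_tree.append((_depth, kind, name))
            _depth += 1
            try:
                return fn(*args, **kwargs)
            finally:
                _depth -= 1
        return wrapper
    return decorate

def workflow(name):
    return _span("workflow", name)

def task(name):
    return _span("task", name)

@task(name="retrieve_docs")
def retrieve(query):
    return ["doc1", "doc2"]

@workflow(name="answer_question")
def answer(query):
    docs = retrieve(query)          # task span nests under the workflow span
    return f"answer based on {len(docs)} docs"

answer("what is tracing?")
```

Calling `answer()` yields a workflow span with the `retrieve_docs` task span nested one level beneath it — the same parent/child structure you see in the dashboard's span tree.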
### Association Properties

Add metadata to traces for filtering and grouping.
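One way to picture association properties is as context-scoped attributes that get stamped onto every span created inside a block. The `association_properties` context manager below is an illustrative stdlib emulation, not the Anyway SDK's actual API.

```python
import contextvars
from contextlib import contextmanager

# Active association properties for the current execution context.
_assoc = contextvars.ContextVar("association_properties", default={})

@contextmanager
def association_properties(**props):
    """Merge props into the active set for the duration of the block."""
    token = _assoc.set({**_assoc.get(), **props})
    try:
        yield
    finally:
        _assoc.reset(token)

def current_span_attributes():
    """Spans created now would be stamped with these properties."""
    return dict(_assoc.get())

with association_properties(user_id="u-42", session_id="s-7"):
    attrs = current_span_attributes()   # {'user_id': 'u-42', 'session_id': 's-7'}

after = current_span_attributes()       # {} — scope ended, properties cleared
```

Using `contextvars` (rather than a plain global) is what lets properties follow a request through async code without leaking between concurrent requests.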
## Span Attributes

These attributes are automatically set for LLM spans:

| Attribute | Description |
|---|---|
| `gen_ai.system` | Provider (openai, anthropic) |
| `gen_ai.request.model` | Requested model |
| `gen_ai.response.model` | Model used in the response |
| `gen_ai.usage.prompt_tokens` | Input/prompt tokens |
| `gen_ai.usage.completion_tokens` | Output/completion tokens |
| `gen_ai.usage.cost` | Estimated cost in USD |
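The cost attribute is derived from the two token-count attributes and per-model pricing. A sketch of that arithmetic, using placeholder rates (the model name and prices below are made up, not real rates):

```python
# USD per 1,000 tokens: (prompt rate, completion rate) — placeholder values.
PRICES_PER_1K = {
    "example-model": (0.0005, 0.0015),
}

def estimate_cost(span):
    """Compute gen_ai.usage.cost from the span's token counts."""
    p_rate, c_rate = PRICES_PER_1K[span["gen_ai.request.model"]]
    return (span["gen_ai.usage.prompt_tokens"] / 1000 * p_rate
            + span["gen_ai.usage.completion_tokens"] / 1000 * c_rate)

span = {
    "gen_ai.request.model": "example-model",
    "gen_ai.usage.prompt_tokens": 2000,
    "gen_ai.usage.completion_tokens": 1000,
}
cost = estimate_cost(span)  # 2.0 * 0.0005 + 1.0 * 0.0015 = 0.0025 USD
```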
## Viewing Traces

In the Anyway Dashboard:

1. Navigate to Traces
2. Use filters to find specific traces:
   - Time range
   - Model
   - Latency threshold
   - Cost threshold
   - Association properties
3. Click a trace to see the full span tree
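Conceptually, these dashboard filters are predicates over trace summaries. A sketch over illustrative data (the trace fields and filter names below are assumptions for the example, not an export format):

```python
traces = [
    {"model": "gpt-4o", "latency_ms": 120, "cost_usd": 0.002, "props": {"user_id": "u-1"}},
    {"model": "gpt-4o", "latency_ms": 900, "cost_usd": 0.030, "props": {"user_id": "u-2"}},
    {"model": "claude-3", "latency_ms": 450, "cost_usd": 0.010, "props": {"user_id": "u-1"}},
]

def filter_traces(traces, model=None, min_latency_ms=None, max_cost_usd=None, **props):
    """Keep traces matching every given criterion, including association properties."""
    out = []
    for t in traces:
        if model is not None and t["model"] != model:
            continue
        if min_latency_ms is not None and t["latency_ms"] < min_latency_ms:
            continue
        if max_cost_usd is not None and t["cost_usd"] > max_cost_usd:
            continue
        if any(t["props"].get(k) != v for k, v in props.items()):
            continue
        out.append(t)
    return out

slow_gpt4o = filter_traces(traces, model="gpt-4o", min_latency_ms=500)
by_user = filter_traces(traces, user_id="u-1")
```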
## Context Propagation

Anyway automatically propagates trace context across:

- Async functions
- Nested workflow/task calls
- HTTP requests (via W3C Trace Context headers)
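For HTTP hops, the W3C Trace Context specification defines the `traceparent` header that carries the trace across services: a version, a 16-byte trace ID, an 8-byte parent span ID, and trace flags, all lowercase hex. A stdlib sketch of building and parsing it (the helper names are illustrative; the header format itself is the real spec):

```python
import os
import re

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a version-00 traceparent header: 00-<trace-id>-<span-id>-<flags>."""
    trace_id = trace_id or os.urandom(16).hex()   # 32 hex chars
    span_id = span_id or os.urandom(8).hex()      # 16 hex chars
    flags = "01" if sampled else "00"             # bit 0 = sampled
    return f"00-{trace_id}-{span_id}-{flags}"

TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def parse_traceparent(header):
    """Extract the incoming trace context on the receiving service."""
    m = TRACEPARENT_RE.match(header)
    if not m:
        raise ValueError("malformed traceparent header")
    return {"trace_id": m.group(1), "span_id": m.group(2),
            "sampled": m.group(3) == "01"}

header = make_traceparent()
ctx = parse_traceparent(header)   # downstream spans join ctx["trace_id"]
```

The receiving service parses the header and creates its spans under the same trace ID, which is how a single trace spans multiple processes.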
## Next Steps

- Cost Tracking: monitor spending in real time
- Payments: create payment links and accept payments