Overview

Anyway provides OpenTelemetry-compatible distributed tracing for your AI applications. Traces help you:
  • Debug issues by following requests through your system
  • Identify performance bottlenecks
  • Understand the flow of AI operations
  • Correlate LLM calls with your application logic

How Tracing Works

Every AI operation creates a trace composed of spans:
Trace: user-request
├── Span: validate-input (2ms)
├── Span: openai.chat.completions (1,234ms)
│   └── Attributes: model=gpt-4o, tokens=150, cost=$0.0035
├── Span: process-response (5ms)
└── Span: save-to-db (12ms)
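The tree above is just nested spans, each with a name and a measured duration. A toy recorder (illustration only, not the Anyway SDK, which records real OpenTelemetry spans) makes the structure concrete:

```python
import time
from contextlib import contextmanager

# Toy span recorder for illustration only; the Anyway SDK
# records real OpenTelemetry spans for you.
finished = []   # (depth, name, duration_ms), appended as each span ends
_depth = 0

@contextmanager
def span(name):
    global _depth
    depth = _depth
    _depth += 1
    start = time.perf_counter()
    try:
        yield
    finally:
        _depth -= 1
        finished.append((depth, name, (time.perf_counter() - start) * 1000))

with span("user-request"):
    with span("validate-input"):
        pass
    with span("process-response"):
        pass

# Child spans finish before their parent, so "user-request" is recorded last.
```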

Automatic Spans

The Anyway SDK automatically creates spans for supported LLM providers:
  • OpenAI API calls
  • Anthropic API calls
  • AWS Bedrock, Google Vertex AI, Cohere, Together AI
  • Vector DB operations (Pinecone, ChromaDB, Qdrant)

Custom Spans with Decorators

Structure your traces with the @workflow and @task decorators:
from anyway.sdk.decorators import workflow, task
from openai import OpenAI

client = OpenAI()

@task(name="validate_input")
def validate(user_input: str) -> str:
    # validation logic
    return user_input

@task(name="llm_call")
def call_llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

@workflow(name="process_query")
def process_query(user_input: str) -> str:
    validated = validate(user_input)
    result = call_llm(validated)
    return result

Association Properties

Add metadata to traces for filtering and grouping:
@workflow(name="user-query", association_properties={
    "user_id": "user-123",
    "team": "growth",
    "feature": "chatbot",
})
def handle_query(query: str):
    return call_llm(query)
Then filter traces in the dashboard by these properties.

Span Attributes

These attributes are automatically set for LLM spans:
Attribute                       Description
gen_ai.system                   Provider (openai, anthropic)
gen_ai.request.model            Requested model
gen_ai.response.model           Model used in the response
gen_ai.usage.prompt_tokens      Input/prompt tokens
gen_ai.usage.completion_tokens  Output/completion tokens
gen_ai.usage.cost               Estimated cost in USD
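The cost attribute is derived from the token counts. A minimal sketch of that calculation, using hypothetical per-token rates (the rates below are illustrative, not actual provider pricing):

```python
# Hypothetical price table: USD per 1K tokens (illustrative values only).
PRICES_PER_1K = {
    "gpt-4o": {"prompt": 0.0025, "completion": 0.01},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost from token usage, as a tracing backend might."""
    rates = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * rates["prompt"] + \
           (completion_tokens / 1000) * rates["completion"]
```

With the rates above, a call using 1,000 prompt tokens and 500 completion tokens would be estimated at $0.0075.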

Viewing Traces

In the Anyway Dashboard:
  1. Navigate to Traces
  2. Use filters to find specific traces:
    • Time range
    • Model
    • Latency threshold
    • Cost threshold
    • Association properties
  3. Click a trace to see the full span tree

Context Propagation

Anyway automatically propagates trace context across:
  • Async functions
  • Nested workflow/task calls
  • HTTP requests (with W3C trace context headers)
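The SDK injects and extracts these headers for you. For reference, a W3C `traceparent` header has the fixed shape `version-traceid-parentid-flags`, which can be parsed like so:

```python
import re

# traceparent per the W3C Trace Context spec:
# version(2 hex) "-" trace-id(32 hex) "-" parent-id(16 hex) "-" flags(2 hex)
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})"
    r"-(?P<parent_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str) -> dict:
    m = TRACEPARENT_RE.match(header)
    if not m:
        raise ValueError("malformed traceparent header")
    return m.groupdict()

fields = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
```

The `trace_id` is what ties spans from different services into one trace in the dashboard.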

Next Steps

  • Cost Tracking: Monitor spending in real-time
  • Payments: Create payment links and accept payments