6 Frameworks Supported

Monitor any AI agent framework.
Two lines of code.

LangSight integrates with Claude Agent SDK, CrewAI, Anthropic SDK, OpenAI, Google Gemini, and any OTLP-compatible framework. Add monitoring without changing your agent code.

Verified

Claude Agent SDK

Zero-code multi-agent monitoring

LangSight captures every operation in the Claude Agent SDK automatically: multi-agent orchestration with sub-agent handoffs, tool calls across MCP servers, LLM reasoning steps, and full session reconstruction. In head-to-head benchmarks, LangSight captured 57 spans per session, including every tool call, while Langfuse and LangSmith captured zero tool spans.

+Zero-code auto_patch() instrumentation
+Multi-agent call tree reconstruction
+Sub-agent handoff tracing with attribution
+MCP tool call capture (args + result)
+LLM reasoning traces (llm_input / llm_output)
+Token counts (input, output, cache)
+Per-session cost attribution
+Loop detection across agent chains
+Budget enforcement per session
python
import langsight
from claude_agent_sdk import query, ClaudeAgentOptions

# One line. Everything instrumented.
langsight.auto_patch()

options = ClaudeAgentOptions(
    model="claude-sonnet-4-20250514",
    mcp_servers={"postgres": postgres_mcp, "slack": slack_mcp},
)
async for message in query(prompt="Check order status", options=options):
    ...

# LangSight now traces:
# - Agent reasoning (llm_input / llm_output)
# - Tool calls (postgres-mcp/query, slack-mcp/notify)
# - Sub-agent handoffs (billing-agent, etc.)
# - Tokens, cost, latency per operation
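Call-tree reconstruction like this rests on ordinary parent/child span links. A minimal sketch of the idea, with illustrative span fields (`span_id`, `parent_id`, `name`) that are not LangSight's actual schema:

```python
# Sketch: rebuild a multi-agent call tree from flat spans.
# Field names here are illustrative, not LangSight's real export format.

def build_tree(spans):
    """Index spans by parent, then render each root's subtree."""
    children = {}
    for s in spans:
        children.setdefault(s.get("parent_id"), []).append(s)

    def render(span, depth=0):
        lines = ["  " * depth + span["name"]]
        for child in children.get(span["span_id"], []):
            lines.extend(render(child, depth + 1))
        return lines

    roots = children.get(None, [])
    return "\n".join(line for root in roots for line in render(root))

spans = [
    {"span_id": "1", "parent_id": None, "name": "support-agent"},
    {"span_id": "2", "parent_id": "1", "name": "postgres-mcp/query"},
    {"span_id": "3", "parent_id": "1", "name": "billing-agent"},
    {"span_id": "4", "parent_id": "3", "name": "slack-mcp/notify"},
]
print(build_tree(spans))
```

The same grouping works for any span source, which is why sub-agent handoffs show up as nested subtrees rather than disconnected traces.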
Verified

CrewAI

Native event bus with 19 handlers

LangSight connects directly to CrewAI's event bus, capturing crew execution, task delegation, agent-to-agent handoffs, and LLM operations. No wrappers needed. Works with CrewAI 1.6.1+ and supports Anthropic, OpenAI, and Gemini as underlying LLM providers.

+Native CrewAI event bus integration (19 handlers)
+Crew input/output capture
+Task execution tracing per agent
+Agent-to-agent (A2A) handoff tracking
+LLM spans from Anthropic, OpenAI, Gemini
+llm_input / llm_output on every LLM call
+Token and cost attribution per agent
+Works with CrewAI 1.6.1+
python
import langsight
from crewai import Crew, Agent, Task

# Hooks into CrewAI's event bus automatically
langsight.auto_patch()

researcher = Agent(role="Researcher", goal="Find sources", backstory="Thorough analyst")
writer = Agent(role="Writer", goal="Draft the report", backstory="Concise technical writer")

research_task = Task(description="Research the topic", expected_output="Bullet-point notes", agent=researcher)
write_task = Task(description="Write a short report", expected_output="Final report", agent=writer)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
)
result = crew.kickoff()

# LangSight captures:
# - Crew input/output
# - Task execution per agent
# - A2A handoffs (researcher -> writer)
# - LLM spans from any provider
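Per-agent token and cost attribution boils down to grouping LLM spans by agent and summing usage. A sketch under assumed span fields (`agent`, `input_tokens`, `output_tokens`, `cost_usd`), not CrewAI's event payloads or LangSight's real export format:

```python
from collections import defaultdict

# Sketch: aggregate token usage and cost per agent role.
# Span fields are illustrative placeholders.

def attribute_by_agent(llm_spans):
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0, "cost_usd": 0.0})
    for span in llm_spans:
        agg = totals[span["agent"]]
        agg["input_tokens"] += span["input_tokens"]
        agg["output_tokens"] += span["output_tokens"]
        agg["cost_usd"] += span["cost_usd"]
    return dict(totals)

spans = [
    {"agent": "Researcher", "input_tokens": 1200, "output_tokens": 300, "cost_usd": 0.006},
    {"agent": "Researcher", "input_tokens": 800, "output_tokens": 200, "cost_usd": 0.004},
    {"agent": "Writer", "input_tokens": 2000, "output_tokens": 900, "cost_usd": 0.015},
]
totals = attribute_by_agent(spans)
```

Because each LLM span already carries its agent's identity, attribution is a pure aggregation step and needs no changes to the crew itself.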
Verified

Anthropic SDK

Messages API + streaming support

Direct instrumentation of the Anthropic Python SDK. Captures every messages.create call including streaming responses, token counts, tool use blocks, and cost calculations based on model pricing.

+Messages API tracing
+Streaming response capture
+Token counts (input, output, cache read/write)
+Cost tracking per call
+Tool use block capture
python
import langsight
import anthropic

langsight.auto_patch()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "..."}],
)

# Traced: model, tokens, cost, latency, content
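Cost tracking of this kind reduces to multiplying token counts by per-million-token rates, with separate rates for cache reads and writes. A sketch with placeholder rates, not authoritative Anthropic pricing (always check the current price sheet):

```python
# Sketch: per-call cost from token counts.
# Rates below are illustrative placeholders, NOT real pricing.
PRICING_PER_MTOK = {
    "example-model": {
        "input": 3.00,        # USD per million input tokens
        "output": 15.00,      # output tokens are priced higher
        "cache_read": 0.30,   # cached-prompt reads are discounted
        "cache_write": 3.75,  # cache writes carry a surcharge
    }
}

def call_cost(model, usage):
    """Sum each token kind's count times its per-million-token rate."""
    rates = PRICING_PER_MTOK[model]
    return sum(usage.get(kind, 0) / 1_000_000 * rate
               for kind, rate in rates.items())

cost = call_cost("example-model", {"input": 1_000_000, "output": 200_000})
```

Tracking cache read/write tokens separately matters: a prompt-cached call can cost an order of magnitude less than the raw input-token count suggests.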
Beta

OpenAI SDK

Chat completions + function calls

Monitors the OpenAI Chat Completions API and the OpenAI Agents SDK. Captures model selection, token usage, function/tool calls, and streaming responses. Currently in beta; general availability is planned.

+Chat Completions tracing
+Function call / tool_call capture
+Token usage tracking
+Streaming support
+Cost tracking per call
python
import langsight
from openai import OpenAI

langsight.auto_patch()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}],
)

# Traced: model, tokens, function calls, cost
Beta

Google Gemini

Generative AI SDK support

Instruments Google's Generative AI SDK for Gemini models. Captures generate_content calls, token usage, and response content. Currently in beta.

+generate_content tracing
+Token usage capture
+Model selection tracking
+Cost attribution
python
import langsight
import google.generativeai as genai

langsight.auto_patch()

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("...")

# Traced: model, tokens, content, latency
Verified

OTLP / OpenTelemetry

Any OTEL-compatible framework

LangSight accepts OTLP traces natively. If your agent framework emits OpenTelemetry spans following gen_ai semantic conventions, LangSight ingests them automatically. Works with any language or framework that supports OTLP export.

+OTLP/gRPC and OTLP/HTTP ingest
+gen_ai semantic convention mapping
+Works with any language (Python, Node, Go, Rust)
+Compatible with OpenLLMetry, Traceloop, etc.
+Automatic span type classification
bash
# Point your OTEL exporter at LangSight
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8000
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=YOUR_KEY"

# Any framework that emits OTLP traces
# will appear in LangSight automatically.
# gen_ai.* semantic conventions are mapped
# to LangSight's native span types.
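Span type classification can key off `gen_ai.operation.name`, a standard attribute in OpenTelemetry's gen_ai semantic conventions. A sketch of the mapping idea; the target type names on the right are illustrative, not LangSight's exact taxonomy:

```python
# Sketch: classify incoming OTLP spans by gen_ai attributes.
# "gen_ai.operation.name" and its values are standard OTel semconv;
# the mapped type names are illustrative.
SPAN_TYPE_BY_OPERATION = {
    "chat": "llm",
    "text_completion": "llm",
    "execute_tool": "tool",
    "invoke_agent": "agent",
    "embeddings": "embedding",
}

def classify_span(attributes):
    op = attributes.get("gen_ai.operation.name")
    return SPAN_TYPE_BY_OPERATION.get(op, "generic")

span_attrs = {
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "claude-sonnet-4-20250514",
    "gen_ai.usage.input_tokens": 812,
}
```

Keying on the convention rather than on any specific SDK is what lets spans from OpenLLMetry, Traceloop, or hand-rolled exporters land in the same dashboards.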

Your framework. Monitored in 2 lines.

Free, open source, self-hosted. No data leaves your network. Apache 2.0.