LangSmith supports OpenTelemetry-based tracing, which lets you send traces from any OpenTelemetry-compatible application. This guide covers automatic instrumentation for LangChain applications and manual instrumentation for other frameworks, showing how to trace LLM applications with OpenTelemetry and LangSmith.
In the requests below, update the LangSmith URL as appropriate for self-hosted installations or organizations in the EU region. For the EU region, use eu.api.smith.langchain.com.

Trace LangChain applications

If you use LangChain or LangGraph, use the built-in integration to trace your application:
  1. Install the LangSmith package with OpenTelemetry support:
    pip install "langsmith[otel]"
    pip install langchain
    
    Requires Python SDK version langsmith>=0.3.18. We recommend langsmith>=0.4.25 to benefit from important OpenTelemetry fixes.
  2. In your LangChain/LangGraph application, enable the OpenTelemetry integration by setting the LANGSMITH_OTEL_ENABLED environment variable:
    LANGSMITH_OTEL_ENABLED=true
    LANGSMITH_TRACING=true
    LANGSMITH_ENDPOINT=https://api.smith.langchain.com
    LANGSMITH_API_KEY=<your_langsmith_api_key>
    # For LangSmith API keys linked to multiple workspaces, set the LANGSMITH_WORKSPACE_ID environment variable to specify which workspace to use.
    
  3. Create a LangChain application with tracing. For example:
    import os
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    
    # Create a chain
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI()
    chain = prompt | model
    
    # Run the chain
    result = chain.invoke({"topic": "programming"})
    print(result.content)
    
  4. After the application runs, view the trace in your LangSmith dashboard (example).

Trace non-LangChain applications

For non-LangChain applications or custom instrumentation, you can trace your application in LangSmith using a standard OpenTelemetry client. (We recommend langsmith ≥ 0.4.25.)
  1. Install the OpenTelemetry SDK, the OpenTelemetry exporter package, and the OpenAI package:
    pip install openai
    pip install opentelemetry-sdk
    pip install opentelemetry-exporter-otlp
    
  2. Set environment variables for the endpoint, substituting your specific values:
    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
    OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your langsmith api key>"
    
    Depending on how your OTEL exporter is configured, you may need to append /v1/traces to the endpoint if you only send traces.
    If you are self-hosting LangSmith, replace the base endpoint with your LangSmith API endpoint and append /api/v1. For example: OTEL_EXPORTER_OTLP_ENDPOINT=https://ai-company.com/api/v1/otel
    Optional: specify a custom project name other than "default":
    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.smith.langchain.com/otel
    OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your langsmith api key>,Langsmith-Project=<project name>"
    
  3. Log a trace. This code sets up an OTEL tracer and exporter that sends traces to LangSmith. It then calls OpenAI and sends the required OpenTelemetry attributes.
    import os

    from openai import OpenAI
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    otlp_exporter = OTLPSpanExporter(
        timeout=10,
    )
    
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(otlp_exporter)
    )
    
    tracer = trace.get_tracer(__name__)
    
    def call_openai():
        model = "gpt-4o-mini"
        with tracer.start_as_current_span("call_open_ai") as span:
            span.set_attribute("langsmith.span.kind", "LLM")
            span.set_attribute("langsmith.metadata.user_id", "user_123")
            span.set_attribute("gen_ai.system", "OpenAI")
            span.set_attribute("gen_ai.request.model", model)
            span.set_attribute("llm.request.type", "chat")
    
            messages = [
                {"role": "system", "content": "You are a helpful assistant."},
                {
                    "role": "user",
                    "content": "Write a haiku about recursion in programming."
                }
            ]
    
            for i, message in enumerate(messages):
                span.set_attribute(f"gen_ai.prompt.{i}.content", str(message["content"]))
                span.set_attribute(f"gen_ai.prompt.{i}.role", str(message["role"]))
    
            completion = client.chat.completions.create(
                model=model,
                messages=messages
            )
    
            span.set_attribute("gen_ai.response.model", completion.model)
            span.set_attribute("gen_ai.completion.0.content", str(completion.choices[0].message.content))
            span.set_attribute("gen_ai.completion.0.role", "assistant")
            span.set_attribute("gen_ai.usage.prompt_tokens", completion.usage.prompt_tokens)
            span.set_attribute("gen_ai.usage.completion_tokens", completion.usage.completion_tokens)
            span.set_attribute("gen_ai.usage.total_tokens", completion.usage.total_tokens)
    
            return completion.choices[0].message
    
    if __name__ == "__main__":
        call_openai()
    
  4. View the trace in your LangSmith dashboard (example).

Send traces to alternative providers

While LangSmith is the default destination for OpenTelemetry traces, you can also configure OpenTelemetry to send traces to other observability platforms.
Available in LangSmith Python SDK ≥ 0.4.1. We recommend ≥ 0.4.25 for fixes that improve OTEL export and hybrid fan-out stability.

Global configuration with environment variables

By default, the LangSmith OpenTelemetry exporter sends data to the LangSmith API OTEL endpoint, but you can customize this by setting standard OTEL environment variables (see the example after this list):
OTEL_EXPORTER_OTLP_ENDPOINT: overrides the endpoint URL
OTEL_EXPORTER_OTLP_HEADERS: adds custom headers (the LangSmith API key and project are added automatically)
OTEL_SERVICE_NAME: sets a custom service name (defaults to "langsmith")
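For example, to redirect the exporter to a hypothetical third-party collector (the endpoint and header below are placeholders, not a specific vendor's values):
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.your-provider.com
OTEL_EXPORTER_OTLP_HEADERS="api-key=<your provider api key>"
OTEL_SERVICE_NAME=my-llm-service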
LangSmith uses an HTTP trace exporter by default. If you want to use your own trace provider, you can:
  1. Set the OTEL environment variables as shown above, or
  2. Set a global trace provider before initializing LangChain components, which LangSmith will detect and use instead of creating its own.

Configure alternate OTLP endpoints

To send traces to a different provider, configure the OTLP exporter with your provider’s endpoint:
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Set environment variables for LangChain
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"

# Configure the OTLP exporter for your custom endpoint
provider = TracerProvider()
otlp_exporter = OTLPSpanExporter(
    # Change to your provider's endpoint
    endpoint="https://otel.your-provider.com/v1/traces",
    # Add any required headers for authentication
    headers={"api-key": "your-api-key"}
)
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)

# Create and run a LangChain application
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()
chain = prompt | model
result = chain.invoke({"topic": "programming"})
print(result.content)
Hybrid tracing is available in version ≥ 0.4.1. To send traces only to your OTEL endpoint, set LANGSMITH_OTEL_ONLY="true". (Recommendation: use langsmith ≥ 0.4.25.)
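For example, a minimal sketch of an OTEL-only environment (it assumes the OTLP endpoint, headers, and the LangSmith variables from the earlier steps are still configured):
LANGSMITH_OTEL_ENABLED=true
LANGSMITH_TRACING=true
# Skip the LangSmith backend and export only to your OTEL endpoint
LANGSMITH_OTEL_ONLY=true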

Supported OpenTelemetry attribute and event mapping

When sending traces to LangSmith via OpenTelemetry, the following attributes are mapped to LangSmith fields:

Core LangSmith attributes

OpenTelemetry attribute | LangSmith field | Notes
langsmith.trace.name | Run name | Overrides the span name for the run
langsmith.span.kind | Run type | Values: llm, chain, tool, retriever, embedding, prompt, parser
langsmith.trace.session_id | Session ID | Session identifier for related traces
langsmith.trace.session_name | Session name | Name of the session
langsmith.span.tags | Tags | Custom tags attached to the span (comma-separated)
langsmith.metadata.{key} | metadata.{key} | Custom metadata with langsmith prefix
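A sketch of setting these attributes manually on a span (the span name and values are illustrative; it assumes a tracer provider and OTLP exporter are already configured as in the earlier steps):
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("summarize_document") as span:
    # Rename the run and mark it as a chain-type run in LangSmith
    span.set_attribute("langsmith.trace.name", "Summarize document")
    span.set_attribute("langsmith.span.kind", "chain")
    # Tags are passed as a single comma-separated string
    span.set_attribute("langsmith.span.tags", "experiment,summarization")
    # Any langsmith.metadata.{key} attribute becomes metadata.{key} on the run
    span.set_attribute("langsmith.metadata.user_id", "user_123")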

GenAI standard attributes

OpenTelemetry attribute | LangSmith field | Notes
gen_ai.system | metadata.ls_provider | The GenAI system (e.g., "openai", "anthropic")
gen_ai.operation.name | Run type | Maps "chat"/"completion" to "llm", "embedding" to "embedding"
gen_ai.prompt | inputs | The input prompt sent to the model
gen_ai.completion | outputs | The output generated by the model
gen_ai.prompt.{n}.role | inputs.messages[n].role | Role for the nth input message
gen_ai.prompt.{n}.content | inputs.messages[n].content | Content for the nth input message
gen_ai.prompt.{n}.message.role | inputs.messages[n].role | Alternative format for role
gen_ai.prompt.{n}.message.content | inputs.messages[n].content | Alternative format for content
gen_ai.completion.{n}.role | outputs.messages[n].role | Role for the nth output message
gen_ai.completion.{n}.content | outputs.messages[n].content | Content for the nth output message
gen_ai.completion.{n}.message.role | outputs.messages[n].role | Alternative format for role
gen_ai.completion.{n}.message.content | outputs.messages[n].content | Alternative format for content
gen_ai.input.messages | inputs.messages | Array of input messages
gen_ai.output.messages | outputs.messages | Array of output messages
gen_ai.tool.name | invocation_params.tool_name | Tool name, also sets run type to "tool"

GenAI request parameters

OpenTelemetry attribute | LangSmith field | Notes
gen_ai.request.model | invocation_params.model | The model name used for the request
gen_ai.response.model | invocation_params.model | The model name returned in the response
gen_ai.request.temperature | invocation_params.temperature | Temperature setting
gen_ai.request.top_p | invocation_params.top_p | Top-p sampling setting
gen_ai.request.max_tokens | invocation_params.max_tokens | Maximum tokens setting
gen_ai.request.frequency_penalty | invocation_params.frequency_penalty | Frequency penalty setting
gen_ai.request.presence_penalty | invocation_params.presence_penalty | Presence penalty setting
gen_ai.request.seed | invocation_params.seed | Random seed used for generation
gen_ai.request.stop_sequences | invocation_params.stop | Sequences that stop generation
gen_ai.request.top_k | invocation_params.top_k | Top-k sampling parameter
gen_ai.request.encoding_formats | invocation_params.encoding_formats | Output encoding formats

GenAI usage metrics

OpenTelemetry attribute | LangSmith field | Notes
gen_ai.usage.input_tokens | usage_metadata.input_tokens | Number of input tokens used
gen_ai.usage.output_tokens | usage_metadata.output_tokens | Number of output tokens used
gen_ai.usage.total_tokens | usage_metadata.total_tokens | Total number of tokens used
gen_ai.usage.prompt_tokens | usage_metadata.input_tokens | Number of input tokens used (deprecated)
gen_ai.usage.completion_tokens | usage_metadata.output_tokens | Number of output tokens used (deprecated)
gen_ai.usage.details.reasoning_tokens | usage_metadata.reasoning_tokens | Number of reasoning tokens used
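Note that the earlier OpenAI example sets the deprecated gen_ai.usage.prompt_tokens/completion_tokens attributes. A sketch using the current names (the token counts here are illustrative, and an exporter is assumed to be configured as above):
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("llm_call") as span:
    span.set_attribute("langsmith.span.kind", "LLM")
    # Current (non-deprecated) usage attribute names
    span.set_attribute("gen_ai.usage.input_tokens", 42)
    span.set_attribute("gen_ai.usage.output_tokens", 17)
    span.set_attribute("gen_ai.usage.total_tokens", 59)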

TraceLoop attributes

OpenTelemetry attribute | LangSmith field | Notes
traceloop.entity.input | inputs | Full input value from TraceLoop
traceloop.entity.output | outputs | Full output value from TraceLoop
traceloop.entity.name | Run name | Entity name from TraceLoop
traceloop.span.kind | Run type | Maps to LangSmith run types
traceloop.llm.request.type | Run type | "embedding" maps to "embedding", others to "llm"
traceloop.association.properties.{key} | metadata.{key} | Custom metadata with traceloop prefix

OpenInference attributes

OpenTelemetry attribute | LangSmith field | Notes
input.value | inputs | Full input value, can be string or JSON
output.value | outputs | Full output value, can be string or JSON
openinference.span.kind | Run type | Maps various kinds to LangSmith run types
llm.system | metadata.ls_provider | LLM system provider
llm.model_name | metadata.ls_model_name | Model name from OpenInference
tool.name | Run name | Tool name when span kind is "TOOL"
metadata | metadata.* | JSON string of metadata to be merged

LLM attributes

OpenTelemetry attribute | LangSmith field | Notes
llm.input_messages | inputs.messages | Input messages
llm.output_messages | outputs.messages | Output messages
llm.token_count.prompt | usage_metadata.input_tokens | Prompt token count
llm.token_count.completion | usage_metadata.output_tokens | Completion token count
llm.token_count.total | usage_metadata.total_tokens | Total token count
llm.usage.total_tokens | usage_metadata.total_tokens | Alternative total token count
llm.invocation_parameters | invocation_params.* | JSON string of invocation parameters
llm.presence_penalty | invocation_params.presence_penalty | Presence penalty
llm.frequency_penalty | invocation_params.frequency_penalty | Frequency penalty
llm.request.functions | invocation_params.functions | Function definitions

Prompt template attributes

OpenTelemetry attribute | LangSmith field | Notes
llm.prompt_template.variables | Run type | Sets run type to "prompt", used with input.value

Retriever attributes

OpenTelemetry attribute | LangSmith field | Notes
retrieval.documents.{n}.document.content | outputs.documents[n].page_content | Content of the nth retrieved document
retrieval.documents.{n}.document.metadata | outputs.documents[n].metadata | Metadata of the nth retrieved document (JSON)
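A sketch of a manually instrumented retriever span using these attributes (document contents and metadata are illustrative; a tracer provider and exporter are assumed to be configured as above):
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("vector_search") as span:
    span.set_attribute("langsmith.span.kind", "retriever")
    # Each retrieval.documents.{n}.* attribute becomes outputs.documents[n] in LangSmith
    span.set_attribute("retrieval.documents.0.document.content", "OpenTelemetry is an observability framework.")
    span.set_attribute("retrieval.documents.0.document.metadata", json.dumps({"source": "docs/otel.md"}))
    span.set_attribute("retrieval.documents.1.document.content", "LangSmith accepts OTLP traces.")
    span.set_attribute("retrieval.documents.1.document.metadata", json.dumps({"source": "docs/langsmith.md"}))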

Tool attributes

OpenTelemetry attribute | LangSmith field | Notes
tools | invocation_params.tools | Array of tool definitions
tool_arguments | invocation_params.tool_arguments | Tool arguments as JSON or key-value pairs

Logfire attributes

OpenTelemetry attribute | LangSmith field | Notes
prompt | inputs | Logfire prompt input
all_messages_events | outputs | Logfire message events output
events | inputs/outputs | Logfire events array, splits input/choice events

OpenTelemetry event mapping

Event name | LangSmith field | Notes
gen_ai.content.prompt | inputs | Extracts prompt content from event attributes
gen_ai.content.completion | outputs | Extracts completion content from event attributes
gen_ai.system.message | inputs.messages[] | System message in conversation
gen_ai.user.message | inputs.messages[] | User message in conversation
gen_ai.assistant.message | outputs.messages[] | Assistant message in conversation
gen_ai.tool.message | outputs.messages[] | Tool response message
gen_ai.choice | outputs | Model choice/response with finish reason
exception | status, error | Sets status to "error" and extracts exception message/stacktrace

Event attribute extraction

For message events, the following attributes are extracted:
  • content → message content
  • role → message role
  • id → tool_call_id (for tool messages)
  • gen_ai.event.content → full message JSON
For choice events:
  • finish_reason → choice finish reason
  • message.content → choice message content
  • message.role → choice message role
  • tool_calls.{n}.id → tool call ID
  • tool_calls.{n}.function.name → tool function name
  • tool_calls.{n}.function.arguments → tool function arguments
  • tool_calls.{n}.type → tool call type
For exception events:
  • exception.message → error message
  • exception.stacktrace → error stacktrace (appended to message)
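As an illustration of the event mapping above, a sketch that records the conversation as span events rather than attributes (the messages are illustrative; a tracer provider and exporter are assumed to be configured as in the earlier steps):
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("chat_completion") as span:
    span.set_attribute("langsmith.span.kind", "LLM")

    # Input messages map to inputs.messages[]
    span.add_event("gen_ai.system.message", {"content": "You are a helpful assistant.", "role": "system"})
    span.add_event("gen_ai.user.message", {"content": "Write a haiku about recursion.", "role": "user"})

    # The model's response maps to outputs via a gen_ai.choice event
    span.add_event(
        "gen_ai.choice",
        {
            "finish_reason": "stop",
            "message.content": "Functions call themselves...",
            "message.role": "assistant",
        },
    )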

Implementation examples

Trace using the LangSmith SDK

Use the LangSmith SDK’s OpenTelemetry helper to configure export:
import asyncio
from langsmith.integrations.otel import configure
from google.adk import Runner
from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService
from google.genai import types

# Configure LangSmith OpenTelemetry export (no OTEL env vars or headers needed)
configure(project_name="adk-otel-demo")


async def main():
    agent = LlmAgent(
        name="travel_assistant",
        model="gemini-2.5-flash-lite",
        instruction="You are a helpful travel assistant.",
    )

    session_service = InMemorySessionService()
    runner = Runner(app_name="travel_app", agent=agent, session_service=session_service)

    user_id = "user_123"
    session_id = "session_abc"
    await session_service.create_session(app_name="travel_app", user_id=user_id, session_id=session_id)

    new_message = types.Content(parts=[types.Part(text="Hi! Recommend a weekend trip to Paris.")], role="user")

    for event in runner.run(user_id=user_id, session_id=session_id, new_message=new_message):
        print(event)


if __name__ == "__main__":
    asyncio.run(main())
You do not need to set OTEL environment variables or exporters. configure() wires them for LangSmith automatically; instrumentors (like GoogleADKInstrumentor) create the spans.
  1. View the trace in your LangSmith dashboard (example).

Advanced configuration

Use OpenTelemetry Collector for fan-out

For more advanced scenarios, you can use the OpenTelemetry Collector to fan out your telemetry data to multiple destinations. This is a more scalable approach than configuring multiple exporters in your application code.
  1. Install the OpenTelemetry Collector for your environment.
  2. Create a configuration file (e.g., otel-collector-config.yaml) that exports to multiple destinations:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    
    processors:
      batch:
    
    exporters:
      otlphttp/langsmith:
        endpoint: https://api.smith.langchain.com/otel/v1/traces
        headers:
          x-api-key: ${env:LANGSMITH_API_KEY}
          Langsmith-Project: my_project
      otlphttp/other_provider:
        endpoint: https://otel.your-provider.com/v1/traces
        headers:
          api-key: ${env:OTHER_PROVIDER_API_KEY}
    
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp/langsmith, otlphttp/other_provider]
    
  3. Configure your application to send to the collector:
    import os
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    
    # Point to your local OpenTelemetry Collector
    otlp_exporter = OTLPSpanExporter(
        endpoint="http://localhost:4318/v1/traces"
    )
    provider = TracerProvider()
    processor = BatchSpanProcessor(otlp_exporter)
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)
    
    # Set environment variables for LangChain
    os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
    os.environ["LANGSMITH_TRACING"] = "true"
    
    # Create and run a LangChain application
    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI()
    chain = prompt | model
    result = chain.invoke({"topic": "programming"})
    print(result.content)
    
This approach offers several advantages:
  • Centralized configuration for all your telemetry destinations
  • Reduced overhead in your application code
  • Better scalability and resilience
  • Ability to add or remove destinations without changing application code

Distributed tracing with LangChain and OpenTelemetry

Distributed tracing is essential when your LLM application spans multiple services or processes. OpenTelemetry’s context propagation capabilities ensure that traces remain connected across service boundaries.

Context propagation in distributed tracing

In distributed systems, context propagation passes trace metadata between services so that related spans are linked to the same trace:
  • Trace ID: A unique identifier for the entire trace
  • Span ID: A unique identifier for the current span
  • Sampling Decision: Indicates whether this trace should be sampled
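With the default W3C propagator, inject() carries these fields in a single traceparent header. A minimal sketch of what gets propagated (it assumes a tracer provider is configured, as in the example that follows):
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("outbound_call"):
    headers = {}
    inject(headers)  # writes the current trace context into the dict
    # headers now looks roughly like:
    # {"traceparent": "00-<32-hex trace id>-<16-hex span id>-01"}
    print(headers)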

Set up distributed tracing with LangChain

To enable distributed tracing across multiple services:
import os
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
import requests
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Set up OpenTelemetry trace provider
provider = TracerProvider()
otlp_exporter = OTLPSpanExporter(
    endpoint="https://api.smith.langchain.com/otel/v1/traces",
    headers={"x-api-key": os.getenv("LANGSMITH_API_KEY"), "Langsmith-Project": "my_project"}
)
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# Service A: Create a span and propagate context to Service B
def service_a():
    with tracer.start_as_current_span("service_a_operation") as span:
        # Create a chain
        prompt = ChatPromptTemplate.from_template("Summarize: {text}")
        model = ChatOpenAI()
        chain = prompt | model

        # Run the chain
        result = chain.invoke({"text": "OpenTelemetry is an observability framework"})

        # Propagate context to Service B
        headers = {}
        inject(headers)  # Inject trace context into headers

        # Call Service B with the trace context
        response = requests.post(
            "http://service-b.example.com/process",
            headers=headers,
            json={"summary": result.content}
        )
        return response.json()

# Service B: Extract the context and continue the trace
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def service_b_endpoint():
    # Extract the trace context from the request headers
    context = extract(request.headers)
    with tracer.start_as_current_span("service_b_operation", context=context) as span:
        data = request.json
        summary = data.get("summary", "")

        # Process the summary with another LLM chain
        prompt = ChatPromptTemplate.from_template("Analyze the sentiment of: {text}")
        model = ChatOpenAI()
        chain = prompt | model
        result = chain.invoke({"text": summary})

        return jsonify({"analysis": result.content})

if __name__ == "__main__":
    app.run(port=5000)
