LangSmith integrates smoothly with LangGraph (Python and JS) to help you trace your agents, whether you use LangChain modules or other SDKs.

With LangChain

If you are using LangChain modules within LangGraph, you only need to set a few environment variables to enable tracing. This guide walks through a basic example. For more detail on configuration, see the Trace with LangChain guide.

1. Installation

Install the LangGraph library and the OpenAI integration for Python and JS (we use the OpenAI integration for the code snippets below). For a full list of available packages, see the LangChain Python docs and the LangChain JS docs.
pip install langchain_openai langgraph

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>
# For LangSmith API keys linked to multiple workspaces, set the LANGSMITH_WORKSPACE_ID environment variable to specify which workspace to use.
export LANGSMITH_WORKSPACE_ID=<your-workspace-id>
If you are using LangChain.js with LangSmith and are not in a serverless environment, we also recommend setting the following explicitly to reduce latency:
export LANGCHAIN_CALLBACKS_BACKGROUND=true
If you are in a serverless environment, we recommend setting the reverse to allow tracing to finish before your function ends:
export LANGCHAIN_CALLBACKS_BACKGROUND=false
See this LangChain.js guide for more information.
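
If you prefer to configure these values inside your application rather than in the shell, you can set the same variables programmatically before any clients or graphs are created. A minimal Python sketch (the placeholder values are assumptions to replace with your own):
import os

# Set these before constructing anything that creates a LangSmith client.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"  # placeholder
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"  # placeholder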

3. Log a trace

Once you've set up your environment, you can invoke LangChain runnables as you normally would. LangSmith will infer the proper tracing config:
from typing import Literal
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState

@tool
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]
tool_node = ToolNode(tools)

model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

def should_continue(state: MessagesState) -> Literal["tools", "__end__"]:
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return "__end__"

def call_model(state: MessagesState):
    messages = state['messages']
    # Invoking `model` will automatically infer the correct tracing context
    response = model.invoke(messages)
    return {"messages": [response]}

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", 'agent')

app = workflow.compile()

final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}}
)

final_state["messages"][-1].content
An example trace from running the above code looks like this:
[Trace tree for a LangGraph run with LangChain]

Without LangChain

If you are using other SDKs or custom functions within LangGraph, you will need to wrap or decorate them appropriately (with the @traceable decorator in Python or the traceable function in JS, or a wrapper such as wrap_openai for SDKs). When you do, LangSmith will automatically nest traces from those wrapped methods. Here's an example. You can also see this page for more information.
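
Before the full example, here is a minimal sketch of that pattern in Python: a wrapped OpenAI client called from inside a @traceable function, so the LLM call is nested under the function's run (the run name and prompt are illustrative):
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable(name="Answer Question")  # illustrative run name
def answer(question: str) -> str:
    # The wrapped client call is traced as a child run of this function.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content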

1. Installation

Install the LangGraph library and the OpenAI SDK for Python and JS (we use the OpenAI integration for the code snippets below).
pip install openai langsmith langgraph

2. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>
If you are using LangChain.js with LangSmith and are not in a serverless environment, we also recommend setting the following explicitly to reduce latency:
export LANGCHAIN_CALLBACKS_BACKGROUND=true
If you are in a serverless environment, we recommend setting the reverse to allow tracing to finish before your function ends:
export LANGCHAIN_CALLBACKS_BACKGROUND=false
See this LangChain.js guide for more information.
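
To sanity-check that the environment was picked up in your Python process, you can ask the langsmith SDK directly; a quick check, assuming a recent langsmith version:
from langsmith import utils

# Returns True when tracing is enabled via LANGSMITH_TRACING or a tracing context.
print(utils.tracing_is_enabled())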

3. Log a trace

Once you’ve set up your environment, wrap or decorate the custom functions/SDKs you want to trace. LangSmith will then infer the proper tracing config:
import json
import openai
import operator
from langsmith import traceable
from langsmith.wrappers import wrap_openai
from typing import Annotated, Literal, TypedDict
from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list, operator.add]

tool_schema = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Call to surf the web.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# Decorating the tool function will automatically trace it with the correct context
@traceable(run_type="tool", name="Search Tool")
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]

def call_tools(state):
    function_name_to_function = {"search": search}
    messages = state["messages"]
    tool_call = messages[-1]["tool_calls"][0]
    function_name = tool_call["function"]["name"]
    function_arguments = tool_call["function"]["arguments"]
    arguments = json.loads(function_arguments)
    function_response = function_name_to_function[function_name](**arguments)
    tool_message = {
        "tool_call_id": tool_call["id"],
        "role": "tool",
        "name": function_name,
        "content": function_response,
    }
    return {"messages": [tool_message]}

wrapped_client = wrap_openai(openai.Client())

def should_continue(state: State) -> Literal["tools", "__end__"]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message["tool_calls"]:
        return "tools"
    return "__end__"

def call_model(state: State):
    messages = state["messages"]
    # Calling the wrapped client will automatically infer the correct tracing context
    response = wrapped_client.chat.completions.create(
        messages=messages, model="gpt-4o-mini", tools=[tool_schema]
    )
    raw_tool_calls = response.choices[0].message.tool_calls
    tool_calls = [tool_call.to_dict() for tool_call in raw_tool_calls] if raw_tool_calls else []
    response_message = {
        "role": "assistant",
        "content": response.choices[0].message.content,
        "tool_calls": tool_calls,
    }
    return {"messages": [response_message]}

workflow = StateGraph(State)
workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tools)
workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("tools", 'agent')

app = workflow.compile()

final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)

final_state["messages"][-1]["content"]
An example trace from running the above code looks like this:
[Trace tree for a LangGraph run without LangChain]