When tracing with the LangSmith SDK, LangGraph, and LangChain, the correct context should be propagated automatically, so that code executed within a parent trace renders in the expected place in the UI. If you see child runs ending up in separate traces (and appearing at the top level), it is likely caused by one of the following known "edge cases".

Python

The following outlines common reasons for "split" traces when building with Python.

Context propagation using asyncio

When using async calls (especially streaming) on Python versions < 3.11, you may run into issues with traces not nesting properly. This is because Python's asyncio only added full support for passing context in version 3.11.

Why

LangChain and the LangSmith SDK use contextvars to propagate tracing information implicitly. In Python 3.11 and above, this works seamlessly. However, in earlier versions (3.8, 3.9, 3.10), asyncio tasks lack proper contextvar support, which can result in disconnected traces.
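To make the mechanism concrete, here is a minimal, LangSmith-independent sketch; the current_trace variable is a hypothetical stand-in for the context the SDK manages internally, not a real LangSmith API:
    import asyncio
    import contextvars
    
    # Hypothetical stand-in for the tracing context that LangChain/LangSmith
    # propagate implicitly via contextvars; not the SDK's internal variable.
    current_trace = contextvars.ContextVar("current_trace", default=None)
    
    async def child():
        # When the context propagates correctly, the child sees the parent's
        # value and its run nests under the parent trace.
        return current_trace.get()
    
    async def parent():
        current_trace.set("trace-123")
        # A direct await keeps the same context on every supported Python
        # version. Per the limitation described above, code paths that hand
        # work off to separately scheduled asyncio tasks (as streaming
        # implementations can) may lose this value on Python < 3.11, which is
        # what produces a detached, top-level child trace.
        return await child()
    
    print(asyncio.run(parent()))  # -> "trace-123"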

To resolve

  1. Upgrade your Python version (recommended) If possible, upgrade to Python 3.11 or later to get automatic context propagation.
  2. Manual context propagation If upgrading isn't an option, you will need to manually propagate the tracing context. The approach depends on your setup: a) Using LangGraph or LangChain Pass the parent config to the child call:
    import asyncio
    from langchain_core.runnables import RunnableConfig, RunnableLambda
    
    @RunnableLambda
    async def my_child_runnable(
        inputs: str,
        # The config arg (present in parent_runnable below) is optional
    ):
        yield "A"
        yield "response"
    
    @RunnableLambda
    async def parent_runnable(inputs: str, config: RunnableConfig):
        async for chunk in my_child_runnable.astream(inputs, config):
            yield chunk
    
    async def main():
        return [val async for val in parent_runnable.astream("call")]
    
    asyncio.run(main())
    
    b) Using LangSmith directly Pass the run tree directly to the child call:
    import asyncio
    import langsmith as ls
    
    @ls.traceable
    async def my_child_function(inputs: str):
        yield "A"
        yield "response"
    
    @ls.traceable
    async def parent_function(
        inputs: str,
        # The run tree can be auto-populated by the decorator
        run_tree: ls.RunTree,
    ):
        async for chunk in my_child_function(inputs, langsmith_extra={"parent": run_tree}):
            yield chunk
    
    async def main():
        return [val async for val in parent_function("call")]
    
    asyncio.run(main())
    
    c) Combining Decorated Code with LangGraph/LangChain Use a combination of techniques for manual handoff:
    import asyncio
    import langsmith as ls
    from langchain_core.runnables import RunnableConfig, RunnableLambda
    
    @RunnableLambda
    async def my_child_runnable(inputs: str):
        yield "A"
        yield "response"
    
    @ls.traceable
    async def my_child_function(inputs: str, run_tree: ls.RunTree):
        with ls.tracing_context(parent=run_tree):
            async for chunk in my_child_runnable.astream(inputs):
                yield chunk
    
    @RunnableLambda
    async def parent_runnable(inputs: str, config: RunnableConfig):
        # @traceable decorated functions can directly accept a RunnableConfig when passed in via "config"
        async for chunk in my_child_function(inputs, langsmith_extra={"config": config}):
            yield chunk
    
    @ls.traceable
    async def parent_function(inputs: str, run_tree: ls.RunTree):
        # You can set the tracing context manually
        with ls.tracing_context(parent=run_tree):
            async for chunk in parent_runnable.astream(inputs):
                yield chunk
    
    async def main():
        return [val async for val in parent_function("call")]
    
    asyncio.run(main())
    

Context propagation using threading

It's common to start a trace and then want to parallelize work on child tasks, all within that single trace. By default, Python's stdlib ThreadPoolExecutor breaks tracing.

Why

Python's contextvars start empty within new threads, so tracing context set in the calling thread does not automatically carry over to worker threads.
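As a minimal, LangSmith-independent illustration (the current_trace variable below is a hypothetical stand-in for the tracing context, not part of the SDK), a value set in the main thread is not visible to pool workers:
    import contextvars
    from concurrent.futures import ThreadPoolExecutor
    
    # Hypothetical stand-in for the context that tracing stores per thread.
    current_trace = contextvars.ContextVar("current_trace", default=None)
    
    def worker():
        # Worker threads start with a fresh, empty context, so the value set
        # in the main thread is not found and the default (None) comes back.
        return current_trace.get()
    
    current_trace.set("trace-123")
    with ThreadPoolExecutor() as executor:
        print(executor.submit(worker).result())  # -> None, not "trace-123"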

To resolve

Here are two approaches to maintain trace contiguity:

  1. Using LangSmith’s ContextThreadPoolExecutor LangSmith provides a ContextThreadPoolExecutor that automatically handles context propagation:
    from langsmith.utils import ContextThreadPoolExecutor
    from langsmith import traceable
    
    @traceable
    def outer_func():
        with ContextThreadPoolExecutor() as executor:
            inputs = [1, 2]
            r = list(executor.map(inner_func, inputs))
    
    @traceable
    def inner_func(x):
        print(x)
    
    outer_func()
    
  2. Manually providing the parent run tree Alternatively, you can manually pass the parent run tree to the inner function:
    from langsmith import traceable, get_current_run_tree
    from concurrent.futures import ThreadPoolExecutor
    
    @traceable
    def outer_func():
        rt = get_current_run_tree()
        with ThreadPoolExecutor() as executor:
            r = list(
                executor.map(
                    lambda x: inner_func(x, langsmith_extra={"parent": rt}), [1, 2]
                )
            )
    
    @traceable
    def inner_func(x):
        print(x)
    
    outer_func()
    
In this approach, we use get_current_run_tree() to obtain the current run tree and pass it to the inner function using the langsmith_extra parameter. Both methods ensure that the inner function calls are correctly aggregated under the initial trace stack, even when executed in separate threads.