LangSmith makes it easy to attach feedback to traces. This feedback can come from users, annotators, automated evaluators, and more, and it is essential for monitoring and evaluating your application.

Use create_feedback() / createFeedback()

Here we cover how to log feedback using the SDK.
Child runs: You can attach user feedback to any child run of a trace, not just the trace (root run) itself. This is useful for critiquing specific steps of an LLM application, such as the retrieval step or the generation step of a RAG pipeline.
Non-blocking creation (Python only): If you pass trace_id= to create_feedback(), the Python client will automatically create the feedback in the background. This is important in low-latency environments where you want to ensure your application is never blocked on feedback creation.
from langsmith import trace, traceable, Client

@traceable
def foo(x):
    return {"y": x * 2}

@traceable
def bar(y):
    return {"z": y - 1}

client = Client()

inputs = {"x": 1}
with trace(name="foobar", inputs=inputs) as root_run:
    result = foo(**inputs)
    result = bar(**result)
    root_run.outputs = result
    trace_id = root_run.id
    child_runs = root_run.child_runs

# Provide feedback for a trace (a.k.a. a root run)
client.create_feedback(
    key="user_feedback",
    score=1,
    trace_id=trace_id,
    comment="the user said that ..."
)

# Provide feedback for a child run
foo_run_id = [run for run in child_runs if run.name == "foo"][0].id
client.create_feedback(
    key="correctness",
    score=0,
    run_id=foo_run_id,
    # trace_id= is optional but recommended to enable batched and backgrounded
    # feedback ingestion.
    trace_id=trace_id,
)
You can even log feedback for in-progress runs using create_feedback() / createFeedback(). See this guide for how to get the run ID of an in-progress run. To learn more about how to filter traces based on various attributes, including user feedback, see this guide.