To evaluate your agent's performance, you can use LangSmith evaluations. You first need to define an evaluator function that judges the agent's results, such as its final output or its trajectory. Depending on your evaluation technique, this may or may not involve reference outputs:
def evaluator(*, outputs: dict, reference_outputs: dict):
    # compare agent outputs against reference outputs
    output_messages = outputs["messages"]
    reference_messages = reference_outputs["messages"]
    score = compare_messages(output_messages, reference_messages)
    return {"key": "evaluator_score", "score": score}
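Here, compare_messages is a placeholder for whatever comparison logic fits your use case. As a minimal, hypothetical sketch (assuming the messages are dicts with content keys), it might simply check whether the final messages match:
def compare_messages(output_messages: list, reference_messages: list) -> float:
    # Hypothetical placeholder: score 1.0 if the final messages have identical
    # content (ignoring case and surrounding whitespace), otherwise 0.0.
    last_output = output_messages[-1]["content"].strip().lower()
    last_reference = reference_messages[-1]["content"].strip().lower()
    return 1.0 if last_output == last_reference else 0.0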
To get started, you can use prebuilt evaluators from the AgentEvals package:
pip install -U agentevals

Create an evaluator

A common way to evaluate agent performance is to compare its trajectory (the order in which it calls tools) against a reference trajectory:
import json
from agentevals.trajectory.match import create_trajectory_match_evaluator  

outputs = [
    {
        "role": "assistant",
        "tool_calls": [
            {
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "san francisco"}),
                }
            },
            {
                "function": {
                    "name": "get_directions",
                    "arguments": json.dumps({"destination": "presidio"}),
                }
            }
        ],
    }
]
reference_outputs = [
    {
        "role": "assistant",
        "tool_calls": [
            {
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "san francisco"}),
                }
            },
        ],
    }
]

# Create the evaluator
evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="superset",    
)

# Run the evaluator
result = evaluator(
    outputs=outputs, reference_outputs=reference_outputs
)
The trajectory_match_mode argument specifies how the trajectories are compared. superset accepts the output trajectory as valid if it is a superset of the reference one. Other options include strict, unordered, and subset.
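The evaluator returns a result you can inspect directly. The shape shown below follows AgentEvals conventions and is illustrative; the exact key string depends on the configured match mode:
print(result)
# e.g. {'key': 'trajectory_superset_match', 'score': True, 'comment': None}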
As a next step, learn more about how to customize the trajectory match evaluator.
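For example, AgentEvals also exposes options controlling how tool call arguments are matched. The parameter below reflects the AgentEvals option names at the time of writing and may differ in your installed version; treat it as a sketch rather than a definitive signature:
evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="unordered",
    # Ignore differences in tool call arguments instead of requiring exact equality
    tool_args_match_mode="ignore",
)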

LLM-as-a-judge

You can also use an LLM-as-a-judge evaluator, which uses an LLM to compare the trajectory against the reference outputs and output a score:
import json
from agentevals.trajectory.llm import (
    create_trajectory_llm_as_judge,  
    TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE
)

evaluator = create_trajectory_llm_as_judge(
    prompt=TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
    model="openai:o3-mini"
)
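The judge evaluator is invoked the same way as the trajectory match evaluator. Assuming the outputs and reference_outputs lists defined in the earlier example, a run looks like this:
result = evaluator(
    outputs=outputs, reference_outputs=reference_outputs
)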

Run the evaluator

To run the evaluator, you first need to create a LangSmith dataset. To use the prebuilt AgentEvals evaluators, you need a dataset with the following schema (a sketch of creating such a dataset follows the list):
  • input: {"messages": [...]} input messages to call the agent with.
  • output: {"messages": [...]} expected message history in the agent output. For trajectory evaluation, you can choose to keep only assistant messages.
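As a sketch, you could build such a dataset with the LangSmith client. The dataset name and example input below are placeholders, and the expected output reuses the reference_outputs list from the trajectory match example above:
from langsmith import Client

client = Client()
dataset = client.create_dataset(dataset_name="Weather agent trajectories")  # placeholder name
client.create_examples(
    inputs=[
        {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
    ],
    outputs=[
        # Expected message history; for trajectory evaluation you can keep only
        # the assistant messages, e.g. the reference_outputs defined earlier
        {"messages": reference_outputs}
    ],
    dataset_id=dataset.id,
)
With the dataset in place, you can run the agent against it and score the results: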
from langsmith import Client
from langchain.agents import create_agent
from agentevals.trajectory.match import create_trajectory_match_evaluator


client = Client()
agent = create_agent(...)
evaluator = create_trajectory_match_evaluator(...)

experiment_results = client.evaluate(
    lambda inputs: agent.invoke(inputs),
    # replace with your dataset name
    data="<Name of your dataset>",
    evaluators=[evaluator]
)
