AI applications with conversational interfaces, such as chatbots, operate through multiple interactions with a user, also called conversation turns. When evaluating the performance of these applications, the core concepts of building datasets and defining evaluators and metrics to judge your app's outputs remain useful. However, you may additionally find it useful to run simulations between your app and a user, then evaluate the dynamically created trajectory. Some advantages of doing this are:
  • Easier to get started, compared with evaluating against a complete dataset of pre-existing trajectories
  • End-to-end coverage, from an initial query to a successful or unsuccessful resolution
  • The ability to detect repetitive behavior or loss of context across multiple iterations of your app
The downside is that because you are expanding the evaluation surface area to cover multiple turns, results are less consistent than when evaluating a single output of your app against a static input from a dataset.
This guide will show you how to simulate multi-turn interactions and evaluate them using the open-source openevals package, which contains prebuilt evaluators and other convenient resources for evaluating AI apps. It also uses OpenAI models, though you can use other providers as well.

Setup

First, make sure you have the required dependencies installed:
pip install -U langsmith openevals
If you are using yarn as your package manager, you will also need to manually install @langchain/core as a peer dependency of openevals. This is not required for LangSmith evaluations in general.
And set your environment variables:
export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="<Your LangSmith API key>"
export OPENAI_API_KEY="<Your OpenAI API key>"

Running simulations

You will need two main components to get started:
  • app: Your application, or a function wrapping it. It must accept a single chat message (a dict with "role" and "content" keys) as an input argument, and accept a thread_id kwarg. It should accept other kwargs as well, since future releases may add more. It should return a chat message with at least role and content keys as output.
  • user: The simulated user. In this guide, we'll use an imported prebuilt function named create_llm_simulated_user, which uses an LLM to generate user responses, though you can also create your own.
The simulator in openevals passes a single chat message from user to your app for each turn. Therefore, you should statefully track the current history internally based on thread_id if needed. Below is an example simulating a multi-turn customer support interaction. This guide uses a simple chat app wrapping a single call to the OpenAI chat completions API, but this is where you would call your application or agent. In this example, our simulated user plays the role of a particularly aggressive customer:
from openevals.simulators import run_multiturn_simulation, create_llm_simulated_user
from openevals.types import ChatCompletionMessage
from langsmith.wrappers import wrap_openai
from openai import OpenAI

# Wrap the OpenAI client for tracing
client = wrap_openai(OpenAI())
history = {}

# Your application logic
def app(inputs: ChatCompletionMessage, *, thread_id: str, **kwargs):
    if thread_id not in history:
        history[thread_id] = []
    history[thread_id].append(inputs)
    # inputs is a message dict with role and content keys
    res = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a patient and understanding customer service agent.",
            },
        ] + history[thread_id],
    )
    response_message = res.choices[0].message
    history[thread_id].append(response_message)
    return response_message

user = create_llm_simulated_user(
    system="You are an aggressive and hostile customer who wants a refund for their car.",
    model="openai:gpt-4.1-mini",
)

# Run the simulation directly with the prebuilt function
simulator_result = run_multiturn_simulation(
    app=app,
    user=user,
    max_turns=5,
)
print(simulator_result)
The response looks something like this:
{
  "trajectory": [
    {
      "role": "user",
      "content": "This piece of junk car is a complete disaster! I demand a full refund immediately. How dare you sell me such a worthless vehicle!",
      "id": "chatcmpl-BUpXa07LaM7wXbyaNnng1Gtn5Dsbh"
    },
    {
      "role": "assistant",
      "content": "I'm really sorry to hear about your experience and understand how frustrating this must be. I'd like to help resolve this issue as smoothly as possible. Could you please provide some details about the problem with the vehicle? Once I have more information, I'll do my best to assist you with a solution, whether it's a refund or other options. Thank you for your patience.",
      "refusal": null,
      "annotations": [],
      "id": "d7520f6a-7cf8-46f8-abe4-7df04f134482"
    },
    "...",
    {
      "role": "assistant",
      "content": "I truly understand your frustration and sincerely apologize for the inconvenience you've experienced.\n\nPlease allow me a moment to review your case, and I will do everything I can to expedite your refund. Your patience is greatly appreciated, and I am committed to resolving this matter to your satisfaction.",
      "refusal": null,
      "annotations": [],
      "id": "a0536d4f-9353-4cfa-84df-51c8d29e076d"
    }
  ]
}
The simulation first generates an initial query from the simulated user, then passes response chat messages back and forth until max_turns is reached (you can also pass a stopping_condition that takes the current trajectory and returns True or False; see the OpenEvals README for more information). The return value contains the final list of chat messages that make up the conversation trajectory.
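For example, here is a minimal sketch of a stopping condition that ends the simulation early once the assistant mentions a refund. It assumes the condition receives the trajectory of dict-like chat messages (as in the printed output above) and possibly other kwargs; check the OpenEvals README for the exact signature:
def refund_offered(trajectory, **kwargs):
    # trajectory is the list of chat messages exchanged so far;
    # we assume dict-style access here, matching the output shown above
    last_message = trajectory[-1]
    return (
        last_message["role"] == "assistant"
        and "refund" in last_message["content"].lower()
    )

simulator_result = run_multiturn_simulation(
    app=app,
    user=user,
    max_turns=10,
    stopping_condition=refund_offered,
)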
There are a variety of ways to configure the simulated user, such as having it return fixed responses for the first few turns of the simulation (sketched below), or for the entire simulation. For full details, check the OpenEvals README.
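Here is a minimal sketch using the fixed_responses parameter (the same parameter the pytest example later in this guide uses), assuming the scripted messages are returned in order for the first turns before the LLM takes over generating user responses:
user = create_llm_simulated_user(
    system="You are an aggressive and hostile customer who wants a refund for their car.",
    model="openai:gpt-4.1-mini",
    fixed_responses=[
        # The simulated user's first message is scripted;
        # later turns are generated by the LLM
        {"role": "user", "content": "I demand a refund for my car right now!"},
    ],
)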
The resulting trace will interleave the responses from your app and the simulated user. Congratulations! You've just run your first multi-turn simulation. Next, we'll cover how to run it as part of a LangSmith experiment.

Running in LangSmith experiments

You can use the results of multi-turn simulations as part of a LangSmith experiment to track performance and progress over time. For the following sections, it will help to be familiar with at least one of LangSmith's pytest (Python only), Vitest/Jest (JS only), or evaluate runners.

Using pytest or Vitest/Jest

See LangSmith's guides on its testing framework integrations for how to set up evaluations.
If you are using one of LangSmith's testing framework integrations, you can pass an array of OpenEvals evaluators as a trajectory_evaluators parameter when running the simulation. These evaluators run at the end of the simulation and receive the final list of chat messages as an outputs kwarg, so any trajectory_evaluator you pass must accept this kwarg. Here's an example:
from openevals.simulators import run_multiturn_simulation, create_llm_simulated_user
from openevals.llm import create_llm_as_judge
from openevals.types import ChatCompletionMessage
from langsmith import testing as t
from langsmith.wrappers import wrap_openai
from openai import OpenAI
import pytest

@pytest.mark.langsmith
def test_multiturn_message_with_openai():
    inputs = {"role": "user", "content": "I want a refund for my car!"}
    t.log_inputs(inputs)
    # Wrap OpenAI client for tracing
    client = wrap_openai(OpenAI())
    history = {}

    def app(inputs: ChatCompletionMessage, *, thread_id: str):
        if thread_id not in history:
            history[thread_id] = []
        history[thread_id] = history[thread_id] + [inputs]
        res = client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[
                {
                    "role": "system",
                    "content": "You are a patient and understanding customer service agent.",
                }
            ]
            + history[thread_id],
        )
        response = res.choices[0].message
        history[thread_id].append(response)
        return response

    user = create_llm_simulated_user(
        system="You are a nice customer who wants a refund for their car.",
        model="openai:gpt-4.1-nano",
        fixed_responses=[
            inputs,
        ],
    )
    trajectory_evaluator = create_llm_as_judge(
        model="openai:o3-mini",
        prompt="Based on the below conversation, was the user satisfied?\n{outputs}",
        feedback_key="satisfaction",
    )
    res = run_multiturn_simulation(
        app=app,
        user=user,
        trajectory_evaluators=[trajectory_evaluator],
        max_turns=5,
    )
    t.log_outputs(res)
    # Optionally, assert that the evaluator scored the interaction as satisfactory.
    # This will cause the overall test case to fail if "score" is False.
    assert res["evaluator_results"][0]["score"]
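You are not limited to prebuilt evaluators: any function that accepts the final message list as an outputs kwarg can serve as a trajectory evaluator. Below is a hypothetical minimal sketch that scores whether the conversation resolved quickly, returning a dict with key and score fields, the common LangSmith feedback shape (the prebuilt evaluators return a similar result):
def conciseness_evaluator(*, outputs, **kwargs):
    # outputs is the final list of chat messages from the simulation
    return {
        "key": "conciseness",
        "score": len(outputs) <= 6,
    }

res = run_multiturn_simulation(
    app=app,
    user=user,
    trajectory_evaluators=[trajectory_evaluator, conciseness_evaluator],
    max_turns=5,
)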
LangSmith will automatically detect and log the feedback returned from the passed trajectory_evaluators, adding it to the experiment. Note also that the test case uses the fixed_responses parameter on the simulated user to start the conversation with a specific input, which you could log and store as part of a dataset. You may also find it convenient to make the simulated user's system prompt part of your logged dataset.

Using evaluate

You can also evaluate simulated multi-turn interactions using the evaluate runner. This differs from the pytest/Vitest/Jest examples in a few ways:
  • The simulation should be part of your target function, and your target function should return the final trajectory.
    • This makes the trajectory the outputs that LangSmith passes to your evaluators.
  • Instead of a trajectory_evaluators parameter, you pass your evaluators as a parameter to the evaluate() method.
  • You will need an existing dataset of inputs and, optionally, reference trajectories.
Here's an example:
from openevals.simulators import run_multiturn_simulation, create_llm_simulated_user
from openevals.llm import create_llm_as_judge
from openevals.types import ChatCompletionMessage
from langsmith.wrappers import wrap_openai
from langsmith import Client
from openai import OpenAI

ls_client = Client()
examples = [
    {
        "inputs": {
            "messages": [{ "role": "user", "content": "I want a refund for my car!" }]
        },
    },
]
dataset = ls_client.create_dataset(dataset_name="multiturn-starter")
ls_client.create_examples(
    dataset_id=dataset.id,
    examples=examples,
)
trajectory_evaluator = create_llm_as_judge(
    model="openai:o3-mini",
    prompt="Based on the below conversation, was the user satisfied?\n{outputs}",
    feedback_key="satisfaction",
)

def target(inputs: dict):
    # Wrap OpenAI client for tracing
    client = wrap_openai(OpenAI())
    history = {}

    def app(next_message: ChatCompletionMessage, *, thread_id: str):
        if thread_id not in history:
            history[thread_id] = []
        history[thread_id] = history[thread_id] + [next_message]
        res = client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[
                {
                    "role": "system",
                    "content": "You are a patient and understanding customer service agent.",
                }
            ]
            + history[thread_id],
        )
        response = res.choices[0].message
        history[thread_id].append(response)
        return response

    user = create_llm_simulated_user(
        system="You are a nice customer who wants a refund for their car.",
        model="openai:gpt-4.1-nano",
        fixed_responses=inputs["messages"],
    )
    res = run_multiturn_simulation(
        app=app,
        user=user,
        max_turns=5,
    )
    return res["trajectory"]

results = ls_client.evaluate(
    target,
    data=dataset.name,
    evaluators=[trajectory_evaluator],
)

Modifying the simulated user's persona

The examples above run with the same simulated user persona for every input example, defined by the system parameter passed to create_llm_simulated_user. If you want to use a different persona for specific items in your dataset, you can update your dataset examples to contain an extra field with the desired system prompt, then pass that field when creating the simulated user, like this:
from openevals.simulators import run_multiturn_simulation, create_llm_simulated_user
from openevals.llm import create_llm_as_judge
from openevals.types import ChatCompletionMessage
from langsmith.wrappers import wrap_openai
from langsmith import Client
from openai import OpenAI

ls_client = Client()
examples = [
    {
        "inputs": {
            "messages": [{ "role": "user", "content": "I want a refund for my car!" }],
            "simulated_user_prompt": "You are an angry and belligerent customer who wants a refund for their car."
        },
    },
    {
        "inputs": {
            "messages": [{ "role": "user", "content": "Please give me a refund for my car." }],
            "simulated_user_prompt": "You are a nice customer who wants a refund for their car.",
        },
    }
]
dataset = ls_client.create_dataset(dataset_name="multiturn-with-personas")
ls_client.create_examples(
    dataset_id=dataset.id,
    examples=examples,
)
trajectory_evaluator = create_llm_as_judge(
    model="openai:o3-mini",
    prompt="Based on the below conversation, was the user satisfied?\n{outputs}",
    feedback_key="satisfaction",
)

def target(inputs: dict):
    # Wrap OpenAI client for tracing
    client = wrap_openai(OpenAI())
    history = {}

    def app(next_message: ChatCompletionMessage, *, thread_id: str):
        if thread_id not in history:
            history[thread_id] = []
        history[thread_id] = history[thread_id] + [next_message]
        res = client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[
                {
                    "role": "system",
                    "content": "You are a patient and understanding customer service agent.",
                }
            ]
            + history[thread_id],
        )
        response = res.choices[0].message
        history[thread_id].append(response)
        return response

    user = create_llm_simulated_user(
        system=inputs["simulated_user_prompt"],
        model="openai:gpt-4.1-nano",
        fixed_responses=inputs["messages"],
    )
    res = run_multiturn_simulation(
        app=app,
        user=user,
        max_turns=5,
    )
    return res["trajectory"]

results = ls_client.evaluate(
    target,
    data=dataset.name,
    evaluators=[trajectory_evaluator],
)

Next steps

You've just seen some techniques for simulating multi-turn interactions and running them in LangSmith evaluations. To learn more about prebuilt evaluators, explore the OpenEvals README.