You can trace runs from the Vercel AI SDK with LangSmith. This guide walks through an example.

Installation

This wrapper requires AI SDK v5 and langsmith>=0.3.63. If you are using an older version of the AI SDK or langsmith, see the OpenTelemetry (OTEL)-based approach on this page.
Install the Vercel AI SDK. This guide uses Vercel's OpenAI integration in the snippets below, but you can also use any of their other options.
npm install ai @ai-sdk/openai zod

Environment configuration

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>

# The examples use OpenAI, but you can use any LLM provider of your choice
export OPENAI_API_KEY=<your-openai-api-key>

# For LangSmith API keys linked to multiple workspaces, set the LANGSMITH_WORKSPACE_ID environment variable to specify which workspace to use.
export LANGSMITH_WORKSPACE_ID=<your-workspace-id>

Basic setup

Import and wrap the AI SDK methods, then use them as you normally would:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});
You should see a trace like this one in your LangSmith dashboard. You can also trace runs that include tool calls:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

await generateText({
  model: openai("gpt-5-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders and where are they? My user ID is 123",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: async ({ userId }) =>
        `User ${userId} has the following orders: 1`,
    }),
    viewTrackingInformation: tool({
      description: "view tracking information for a specific order",
      inputSchema: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) =>
        `Here is the tracking information for ${orderId}`,
    }),
  },
  stopWhen: stepCountIs(5),
});
This results in a trace like this one. You can use other AI SDK methods exactly as you usually would.
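For example, a wrapped streamText call is used the same way as the unwrapped one; here is a minimal sketch (the prompt is arbitrary):
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { streamText } = wrapAISDK(ai);

const { textStream } = streamText({
  model: openai("gpt-5-nano"),
  prompt: "Write a haiku about tracing.",
});

// Consume the stream as usual; the wrapped call is traced to LangSmith.
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}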

With traceable

You can wrap traceable calls around AI SDK calls or within AI SDK tool calls. This is useful if you want to group runs together in LangSmith:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

const wrapper = traceable(async (input: string) => {
  const { text } = await generateText({
    model: openai("gpt-5-nano"),
    messages: [
      {
        role: "user",
        content: input,
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        inputSchema: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        inputSchema: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    stopWhen: stepCountIs(5),
  });
  return text;
}, {
  name: "wrapper",
});

await wrapper("What are my orders and where are they? My user ID is 123.");
The resulting trace will look like this.

Tracing in serverless environments

When tracing in serverless environments, you must wait for all runs to flush before your environment shuts down. To do this, you can pass a LangSmith Client instance when wrapping the AI SDK method, then call await client.awaitPendingTraceBatches(). Make sure to also pass it into any traceable wrappers you create as well:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const client = new Client();

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, { client });

const wrapper = traceable(async (input: string) => {
  const { text } = await generateText({
    model: openai("gpt-5-nano"),
    messages: [
      {
        role: "user",
        content: input,
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        inputSchema: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        inputSchema: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    stopWhen: stepCountIs(5),
  });
  return text;
}, {
  name: "wrapper",
  client,
});

try {
  await wrapper("What are my orders and where are they? My user ID is 123.");
} finally {
  await client.awaitPendingTraceBatches();
}
If you are using Next.js, there is a convenient after hook where you can put this logic:
import { after } from "next/server";
import { Client } from "langsmith";

export async function POST(request: Request) {
  const client = new Client();

  ...

  after(async () => {
    await client.awaitPendingTraceBatches();
  });

  return new Response(JSON.stringify({ ... }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
See this page for more detail, including information about managing rate limits in serverless environments.

Passing LangSmith config

You can pass LangSmith-specific config to your wrapper both when initially wrapping your AI SDK methods and while running them via providerOptions.langsmith. This includes metadata (which you can later use to filter runs in LangSmith), top-level run name, tags, custom client instances, and more. Config passed while wrapping will apply to all future calls you make with the wrapped method:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, {
    metadata: {
      key_for_all_runs: "value",
    },
    tags: ["myrun"],
  });

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});
Config passed at runtime via providerOptions.langsmith, by contrast, applies only to that run. We suggest wrapping your config in createLangSmithProviderOptions to ensure proper typing:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import {
  wrapAISDK,
  createLangSmithProviderOptions,
} from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

const lsConfig = createLangSmithProviderOptions({
  metadata: {
    individual_key: "value",
  },
  name: "my_individual_run",
});

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  providerOptions: {
    langsmith: lsConfig,
  },
});

Redacting data

You can customize the inputs and outputs the AI SDK sends to LangSmith by specifying custom input/output processing functions. This is useful if you are dealing with sensitive data that you would like to avoid sending to LangSmith. Because output formats vary depending on which AI SDK method you are using, we suggest defining and passing config individually into each wrapped method. You will also need to provide separate functions for child LLM runs within AI SDK calls, since a top-level generateText call invokes the LLM internally, potentially multiple times. We also suggest passing a generic parameter into createLangSmithProviderOptions to get proper types for inputs and outputs. Here's an example for generateText:
import {
  wrapAISDK,
  createLangSmithProviderOptions,
} from "langsmith/experimental/vercel";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";

const { generateText } = wrapAISDK(ai);

const lsConfig = createLangSmithProviderOptions<typeof generateText>({
  processInputs: (inputs) => {
    const { messages } = inputs;
    return {
      messages: messages?.map((message) => ({
        providerMetadata: message.providerOptions,
        role: "assistant",
        content: "REDACTED",
      })),
      prompt: "REDACTED",
    };
  },
  processOutputs: (outputs) => {
    return {
      providerMetadata: outputs.providerMetadata,
      role: "assistant",
      content: "REDACTED",
    };
  },
  processChildLLMRunInputs: (inputs) => {
    const { prompt } = inputs;
    return {
      messages: prompt.map((message) => ({
        ...message,
        content: "REDACTED CHILD INPUTS",
      })),
    };
  },
  processChildLLMRunOutputs: (outputs) => {
    return {
      providerMetadata: outputs.providerMetadata,
      content: "REDACTED CHILD OUTPUTS",
      role: "assistant",
    };
  },
});

const { text } = await generateText({
  model: openai("gpt-5-nano"),
  prompt: "What is the capital of France?",
  providerOptions: {
    langsmith: lsConfig,
  },
});

// Paris.
console.log(text);
The actual return value will contain the original, non-redacted result, but the trace in LangSmith will be redacted. Here's an example. To redact tool inputs and outputs, wrap your execute method in a traceable like this:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const client = new Client();

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, { client });

const { text } = await generateText({
  model: openai("gpt-5-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders? My user ID is 123.",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: traceable(
        async ({ userId }) => {
          return `User ${userId} has the following orders: 1`;
        },
        {
          processInputs: (input) => ({ text: "REDACTED" }),
          processOutputs: (outputs) => ({ text: "REDACTED" }),
          run_type: "tool",
          name: "listOrders",
        }
      ) as (input: { userId: string }) => Promise<string>,
    }),
  },
  stopWhen: stepCountIs(5),
});
The traceable return type is complex, which makes the cast necessary. You may also omit the AI SDK tool() wrapper helper if you wish to avoid the cast, as sketched below.
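As a rough sketch of that alternative (assuming your setup accepts a plain tool definition object in place of the tool() helper, and reusing the same listOrders example from above), the traceable-wrapped execute can be passed without a cast:
import * as ai from "ai";
import { stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText } = wrapAISDK(ai);

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "What are my orders? My user ID is 123.",
  tools: {
    // Plain tool definition object instead of the tool() helper,
    // so the traceable-wrapped execute does not need a cast.
    listOrders: {
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: traceable(
        async ({ userId }: { userId: string }) =>
          `User ${userId} has the following orders: 1`,
        {
          processInputs: (inputs) => ({ text: "REDACTED" }),
          processOutputs: (outputs) => ({ text: "REDACTED" }),
          run_type: "tool",
          name: "listOrders",
        }
      ),
    },
  },
  stopWhen: stepCountIs(5),
});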