LangChain implements a streaming system to provide real-time updates.
Streaming is critical for making applications built on LLMs feel responsive. By surfacing output progressively, even before a complete response is ready, streaming significantly improves the user experience (UX), especially given LLM latency.
LangChain's streaming system lets you surface live feedback from an agent run to your application.
What you can stream with LangChain:
- Agent progress — an update after each agent step
- LLM tokens — tokens as they are generated
- Custom updates — arbitrary data emitted from tools while they execute
Agent progress
To stream agent progress, use the stream method with streamMode: "updates". This emits an event after each agent step.
For example, if you have an agent that calls a tool once, you should see the following updates:
- LLM node: AIMessage with tool call requests
- Tool node: ToolMessage with the execution result
- LLM node: final AI response
import z from "zod";
import { createAgent, tool } from "langchain";

const getWeather = tool(
  async ({ city }) => {
    return `The weather in ${city} is always sunny!`;
  },
  {
    name: "get_weather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const agent = createAgent({
  model: "gpt-5-nano",
  tools: [getWeather],
});

for await (const chunk of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "updates" }
)) {
  const [step, content] = Object.entries(chunk)[0];
  console.log(`step: ${step}`);
  console.log(`content: ${JSON.stringify(content, null, 2)}`);
}
/**
 * step: model
 * content: {
 *   "messages": [
 *     {
 *       "kwargs": {
 *         // ...
 *         "tool_calls": [
 *           {
 *             "name": "get_weather",
 *             "args": {
 *               "city": "San Francisco"
 *             },
 *             "type": "tool_call",
 *             "id": "call_0qLS2Jp3MCmaKJ5MAYtr4jJd"
 *           }
 *         ],
 *         // ...
 *       }
 *     }
 *   ]
 * }
 *
 * step: tools
 * content: {
 *   "messages": [
 *     {
 *       "kwargs": {
 *         "content": "The weather in San Francisco is always sunny!",
 *         "name": "get_weather",
 *         // ...
 *       }
 *     }
 *   ]
 * }
 *
 * step: model
 * content: {
 *   "messages": [
 *     {
 *       "kwargs": {
 *         "content": "The latest update says: The weather in San Francisco is always sunny!\n\nIf you'd like real-time details (current temperature, humidity, wind, and today's forecast), I can pull the latest data for you. Want me to fetch that?",
 *         // ...
 *       }
 *     }
 *   ]
 * }
 */
LLM tokens
To stream tokens as they are produced by the LLM, use streamMode: "messages":
import z from "zod";
import { createAgent, tool } from "langchain";

const getWeather = tool(
  async ({ city }) => {
    return `The weather in ${city} is always sunny!`;
  },
  {
    name: "get_weather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const agent = createAgent({
  model: "gpt-4o-mini",
  tools: [getWeather],
});

for await (const [token, metadata] of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "messages" }
)) {
  console.log(`node: ${metadata.langgraph_node}`);
  console.log(`content: ${JSON.stringify(token.contentBlocks, null, 2)}`);
}
Custom updates
To stream custom updates from tools as they execute, use the writer function available on the tool's config argument.
import z from "zod";
import { tool, createAgent } from "langchain";
import { LangGraphRunnableConfig } from "@langchain/langgraph";

const getWeather = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Stream any arbitrary data
    config.writer?.(`Looking up data for city: ${input.city}`);
    // ... fetch city data
    config.writer?.(`Acquired data for city: ${input.city}`);
    return `It's always sunny in ${input.city}!`;
  },
  {
    name: "get_weather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string().describe("The city to get weather for."),
    }),
  }
);

const agent = createAgent({
  model: "gpt-4o-mini",
  tools: [getWeather],
});

for await (const chunk of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: "custom" }
)) {
  console.log(chunk);
}
Looking up data for city: San Francisco
Acquired data for city: San Francisco
If you add the writer parameter to your tool, you won’t be able to invoke the tool outside of a LangGraph execution context without providing a writer function.
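One way to keep a tool testable despite this (an illustrative pattern, not a LangChain API) is to factor the tool body into a standalone function that accepts an optional writer. You can then call it directly with your own writer, or with none at all, and forward config.writer to it from inside the tool wrapper:

```typescript
type Writer = (chunk: unknown) => void;

// Standalone implementation: the writer is optional, so the function
// works both inside a LangGraph run and when called directly.
async function getWeatherImpl(
  input: { city: string },
  writer?: Writer
): Promise<string> {
  writer?.(`Looking up data for city: ${input.city}`);
  // ... fetch city data
  writer?.(`Acquired data for city: ${input.city}`);
  return `It's always sunny in ${input.city}!`;
}

// Direct call with a writer that just collects the updates:
const updates: unknown[] = [];
const result = await getWeatherImpl({ city: "sf" }, (c) => updates.push(c));

// Direct call with no writer at all also works:
const bare = await getWeatherImpl({ city: "sf" });
```

Inside the tool definition you would then call `getWeatherImpl(input, config.writer)`.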
Stream multiple modes
You can stream multiple modes at once by passing streamMode as an array: streamMode: ["updates", "messages", "custom"]. Each item the stream yields is then a [streamMode, chunk] tuple:
import z from "zod";
import { tool, createAgent } from "langchain";
import { LangGraphRunnableConfig } from "@langchain/langgraph";

const getWeather = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Stream any arbitrary data
    config.writer?.(`Looking up data for city: ${input.city}`);
    // ... fetch city data
    config.writer?.(`Acquired data for city: ${input.city}`);
    return `It's always sunny in ${input.city}!`;
  },
  {
    name: "get_weather",
    description: "Get weather for a given city.",
    schema: z.object({
      city: z.string().describe("The city to get weather for."),
    }),
  }
);

const agent = createAgent({
  model: "gpt-4o-mini",
  tools: [getWeather],
});

for await (const [streamMode, chunk] of await agent.stream(
  { messages: [{ role: "user", content: "what is the weather in sf" }] },
  { streamMode: ["updates", "messages", "custom"] }
)) {
  console.log(`${streamMode}: ${JSON.stringify(chunk, null, 2)}`);
}
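Rather than one large switch inside the loop, it can be cleaner to route each mode to its own handler. The sketch below shows that demultiplexing pattern against a mocked stream; the `[mode, chunk]` tuple shape matches the loop above, but the mocked chunks are illustrative, not real LangChain output:

```typescript
// Each item from a multi-mode stream is a [mode, chunk] tuple.
type StreamItem = [string, unknown];

async function* mockMultiModeStream(): AsyncGenerator<StreamItem> {
  yield ["custom", "Looking up data for city: sf"];
  yield ["messages", { contentBlocks: [{ type: "text", text: "It's sunny!" }] }];
  yield ["updates", { model: { messages: [] } }];
}

// Dispatch each chunk to the handler registered for its mode;
// modes without a handler are silently ignored.
async function demux(
  stream: AsyncGenerator<StreamItem>,
  handlers: Record<string, (chunk: unknown) => void>
): Promise<void> {
  for await (const [mode, chunk] of stream) {
    handlers[mode]?.(chunk);
  }
}

const seen: Record<string, number> = { updates: 0, messages: 0, custom: 0 };
await demux(mockMultiModeStream(), {
  updates: () => { seen.updates += 1; },
  messages: () => { seen.messages += 1; },
  custom: () => { seen.custom += 1; },
});
```

In a real application you would pass the result of agent.stream directly to demux in place of the mocked stream.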
Disable streaming
In some applications you might need to disable streaming of individual tokens for a given model.
This is useful in multi-agent systems to control which agents stream their output.
See the Models guide to learn how to disable streaming.
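As a rough sketch of what that looks like, the snippet below passes a model instance configured not to stream into createAgent. The disableStreaming option is an assumption based on the Models guide; check that guide for the exact option name on your model class:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createAgent } from "langchain";

// Assumed option: with streaming disabled, this model produces a single
// non-streamed response even when the agent itself is being streamed.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  disableStreaming: true,
});

const agent = createAgent({
  model,
  tools: [],
});
```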