- Stream graph state: use the updates and values modes to get state updates or full state values.
- Stream subgraph outputs: include outputs from both the parent graph and any nested subgraphs.
- Stream LLM tokens: capture token streams anywhere they occur, inside nodes, subgraphs, or tools.
- Stream custom data: send custom updates or progress signals directly from tool functions.
- Use multiple streaming modes: choose from values (full state), updates (state deltas), messages (LLM tokens plus metadata), custom (arbitrary user data), or debug (detailed traces).
Supported stream modes
Pass one or more of the following stream modes as a list to the stream or astream method:
| Mode | Description |
|---|---|
| values | Streams the full value of the state after each step of the graph. |
| updates | Streams the updates to the state after each step of the graph. If multiple updates are made in the same step (e.g., multiple nodes run), those updates are streamed separately. |
| custom | Streams custom data from inside graph nodes. |
| messages | Streams 2-tuples (LLM token, metadata) from any graph node that invokes an LLM. |
| debug | Streams as much information as possible throughout the execution of the graph. |
Basic usage example
LangGraph graphs expose the stream (sync) and astream (async) methods to yield streamed outputs as iterators.
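For instance, iterating with stream_mode="updates" yields one chunk per node update. A minimal sketch, assuming `graph` is an already compiled graph and `inputs` is a valid input dict for it:

```python
# Iterate over streamed chunks; each chunk is a dict keyed by node name.
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```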
Extended example: streaming updates
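A minimal sketch of such an example, assuming a simple two-node graph (the state fields, node names, and node logic are illustrative):

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START

class State(TypedDict):
    topic: str
    joke: str

def refine_topic(state: State):
    # Each node returns a partial update to the state.
    return {"topic": state["topic"] + " and cats"}

def generate_joke(state: State):
    return {"joke": f"This is a joke about {state['topic']}"}

graph = (
    StateGraph(State)
    .add_node(refine_topic)
    .add_node(generate_joke)
    .add_edge(START, "refine_topic")
    .add_edge("refine_topic", "generate_joke")
    .compile()
)

# Each chunk has the shape {node_name: update_returned_by_that_node}.
for chunk in graph.stream({"topic": "ice cream"}, stream_mode="updates"):
    print(chunk)
```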
Stream multiple modes
You can pass a list as the stream_mode parameter to stream multiple modes at once.
The streamed outputs will be (mode, chunk) tuples, where mode is the name of the stream mode and chunk is the data streamed by that mode.
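A short sketch, reusing the compiled `graph` and `inputs` assumed above:

```python
# Request both state deltas and full state values in a single stream.
for mode, chunk in graph.stream(inputs, stream_mode=["updates", "values"]):
    print(f"{mode}: {chunk}")
```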
Stream graph state
Use the stream modes updates and values to stream the graph's state as it executes.
- updates streams the updates to the state after each step of the graph.
- values streams the full value of the state after each step of the graph.

Use updates to stream only the state updates returned by the nodes after each step; the streamed outputs include the name of the node as well as the update. Use values instead to get the full state after each step.
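A sketch of the difference, using the two-node joke graph from the earlier example (the printed shapes in the comments are illustrative):

```python
# updates: one chunk per node, containing only that node's return value.
for chunk in graph.stream({"topic": "ice cream"}, stream_mode="updates"):
    print(chunk)  # e.g. {"refine_topic": {"topic": "ice cream and cats"}}

# values: one chunk per step, containing the full accumulated state.
for chunk in graph.stream({"topic": "ice cream"}, stream_mode="values"):
    print(chunk)  # e.g. {"topic": "ice cream and cats", "joke": "..."}
```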
Stream subgraph outputs
To include outputs from subgraphs in the streamed output, set subgraphs=True in the parent graph's .stream() method. This streams outputs from both the parent graph and any subgraphs.
Outputs are streamed as (namespace, data) tuples, where namespace is a tuple with the path to the node that invokes the subgraph, e.g. ("parent_node:<task_id>", "child_node:<task_id>").
Extended example: streaming from subgraphs
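A minimal sketch of such an example, assuming a parent graph with one node that invokes a compiled subgraph (all names and state fields are illustrative):

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START

class State(TypedDict):
    foo: str

def subgraph_node(state: State):
    return {"foo": state["foo"] + " -> subgraph"}

# Compile the child graph first.
subgraph = (
    StateGraph(State)
    .add_node(subgraph_node)
    .add_edge(START, "subgraph_node")
    .compile()
)

def parent_node(state: State):
    return {"foo": state["foo"] + " -> parent"}

parent = (
    StateGraph(State)
    .add_node(parent_node)
    .add_node("call_subgraph", subgraph)  # a compiled graph can be added as a node
    .add_edge(START, "parent_node")
    .add_edge("parent_node", "call_subgraph")
    .compile()
)

# With subgraphs=True, chunks arrive as (namespace, data) tuples.
for namespace, chunk in parent.stream(
    {"foo": "hi"}, stream_mode="updates", subgraphs=True
):
    print(namespace, chunk)
```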
Debugging
Use the debug streaming mode to stream as much information as possible throughout the execution of the graph. The streamed outputs include the name of the node as well as the full state.
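For example, reusing the earlier compiled graph:

```python
# Each debug chunk describes an execution event (task, checkpoint, etc.)
# with its step, timestamp, and payload.
for chunk in graph.stream({"topic": "ice cream"}, stream_mode="debug"):
    print(chunk)
```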
LLM tokens
Use the messages streaming mode to stream Large Language Model (LLM) outputs token by token from any part of your graph, including nodes, tools, subgraphs, or tasks.
The streamed output from messages mode is a tuple (message_chunk, metadata) where:
- message_chunk: the token or message segment from the LLM.
- metadata: a dictionary containing details about the graph node and LLM invocation.
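A minimal sketch, assuming a LangChain chat model is available (the model identifier and state fields are illustrative assumptions):

```python
from typing_extensions import TypedDict
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, START

llm = init_chat_model("openai:gpt-4o-mini")  # assumed model; any chat model works

class State(TypedDict):
    topic: str
    joke: str

def generate_joke(state: State):
    response = llm.invoke(f"Tell a joke about {state['topic']}")
    return {"joke": response.content}

graph = (
    StateGraph(State)
    .add_node(generate_joke)
    .add_edge(START, "generate_joke")
    .compile()
)

# Each chunk is a (message_chunk, metadata) tuple.
for message_chunk, metadata in graph.stream(
    {"topic": "ice cream"}, stream_mode="messages"
):
    if message_chunk.content:
        print(message_chunk.content, end="", flush=True)
```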
If your LLM is not available as a LangChain integration, you can stream its outputs using custom mode instead. See use with any LLM for details.
Filter by LLM invocation
You can associate tags with LLM invocations to filter the streamed tokens by LLM invocation.
Extended example: filtering by tags
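A sketch of what such filtering might look like, assuming two models distinguished by tags inside an already-built `graph` (tag names, model identifier, and `inputs` are illustrative assumptions):

```python
from langchain.chat_models import init_chat_model

# Attach tags so the two invocations can be told apart in the stream.
joke_model = init_chat_model("openai:gpt-4o-mini", tags=["joke"])
poem_model = init_chat_model("openai:gpt-4o-mini", tags=["poem"])

# ... build a graph whose nodes call joke_model and poem_model ...

for message_chunk, metadata in graph.stream(inputs, stream_mode="messages"):
    # metadata["tags"] carries the tags attached to the LLM invocation.
    if "joke" in metadata.get("tags", []):
        print(message_chunk.content, end="", flush=True)
```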
Filter by node
To stream tokens only from specific nodes, use stream_mode="messages" and filter the outputs by the langgraph_node field in the streamed metadata:
Extended example: streaming LLM tokens from specific nodes
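A sketch, reusing the joke graph above and assuming the node of interest is named generate_joke:

```python
for message_chunk, metadata in graph.stream(
    {"topic": "ice cream"}, stream_mode="messages"
):
    # Only emit tokens produced inside the generate_joke node.
    if metadata.get("langgraph_node") == "generate_joke":
        print(message_chunk.content, end="", flush=True)
```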
Stream custom data
To send custom user-defined data from inside a LangGraph node or tool, follow these steps (see the sketch below):

- Use get_stream_writer to access the stream writer and emit custom data.
- Set stream_mode="custom" when calling .stream() or .astream() to get the custom data in the stream. You can combine multiple modes (e.g., ["updates", "custom"]), but at least one must be "custom".
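A minimal sketch from inside a node (a tool works the same way; state fields and payloads are illustrative):

```python
from typing_extensions import TypedDict
from langgraph.config import get_stream_writer
from langgraph.graph import StateGraph, START

class State(TypedDict):
    query: str
    answer: str

def node(state: State):
    writer = get_stream_writer()
    # Emit any JSON-like payload; it surfaces on the "custom" stream.
    writer({"progress": "fetching data..."})
    writer({"progress": "done"})
    return {"answer": f"answered: {state['query']}"}

graph = (
    StateGraph(State)
    .add_node(node)
    .add_edge(START, "node")
    .compile()
)

for chunk in graph.stream({"query": "hi"}, stream_mode="custom"):
    print(chunk)  # {"progress": "fetching data..."} then {"progress": "done"}
```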
Use with any LLM
You can use stream_mode="custom" to stream data from any LLM API, even if that API does not implement the LangChain chat model interface.
This lets you integrate raw LLM clients or external services that provide their own streaming interfaces, making LangGraph highly flexible for custom setups.
Extended example: streaming arbitrary chat model
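A sketch of the pattern, where `my_llm_client.stream_tokens` stands in for a hypothetical non-LangChain client with its own streaming API:

```python
from langgraph.config import get_stream_writer

def call_arbitrary_model(state):
    writer = get_stream_writer()
    # my_llm_client is a hypothetical client; adapt to your API's streaming call.
    for token in my_llm_client.stream_tokens(state["prompt"]):
        writer({"llm_token": token})  # forward each raw token as custom data
    return {}

# Consume the tokens with stream_mode="custom":
# for chunk in graph.stream(inputs, stream_mode="custom"):
#     print(chunk["llm_token"], end="")
```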
Disable streaming for specific chat models
If your application mixes models that support streaming with those that do not, you may need to explicitly disable streaming for models that do not support it. Set disable_streaming=True when initializing the model.
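For example, with init_chat_model (the model name is an illustrative assumption):

```python
from langchain.chat_models import init_chat_model

# Tokens will not be streamed for this model; it is invoked non-streaming internally.
model = init_chat_model("openai:o1", disable_streaming=True)
```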
For more information, see:
- init_chat_model
- Chat model interface
Async with Python < 3.11
In Python versions < 3.11, asyncio tasks do not support the context parameter.
This limits LangGraph's ability to automatically propagate context, and affects LangGraph's streaming mechanisms in two key ways:
- You must explicitly pass RunnableConfig into async LLM calls (e.g., ainvoke()), as callbacks are not automatically propagated.
- You cannot use get_stream_writer in async nodes or tools; you must pass a writer argument directly.
Extended example: async LLM call with manual config
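A minimal sketch of such an example for Python < 3.11 (the model identifier and state fields are illustrative assumptions):

```python
from typing_extensions import TypedDict
from langchain.chat_models import init_chat_model
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, START

llm = init_chat_model("openai:gpt-4o-mini")  # assumed model

class State(TypedDict):
    topic: str
    joke: str

async def generate_joke(state: State, config: RunnableConfig):
    # On Python < 3.11 the config must be passed explicitly so that
    # callbacks (and hence token streaming) reach the LLM call.
    response = await llm.ainvoke(f"Tell a joke about {state['topic']}", config)
    return {"joke": response.content}

graph = (
    StateGraph(State)
    .add_node(generate_joke)
    .add_edge(START, "generate_joke")
    .compile()
)

# Inside an async context:
# async for message_chunk, metadata in graph.astream(
#     {"topic": "ice cream"}, stream_mode="messages"
# ):
#     print(message_chunk.content, end="")
```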
Extended example: async custom streaming with stream writer
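And a sketch of custom streaming with an explicit writer argument, again for Python < 3.11 (names and payloads are illustrative):

```python
from typing_extensions import TypedDict
from langgraph.types import StreamWriter
from langgraph.graph import StateGraph, START

class State(TypedDict):
    query: str

async def node(state: State, writer: StreamWriter):
    # On Python < 3.11, get_stream_writer() is unavailable in async code,
    # so LangGraph injects the writer through this parameter instead.
    writer({"progress": "working..."})
    return {}

graph = (
    StateGraph(State)
    .add_node(node)
    .add_edge(START, "node")
    .compile()
)

# Inside an async context:
# async for chunk in graph.astream({"query": "hi"}, stream_mode="custom"):
#     print(chunk)
```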