You may need to rebuild your graph with a different configuration for a new run. For example, you may need to use a different graph state or graph structure depending on the config. This guide shows you how to do that.
Note
In most cases, customizing behavior based on the config should be handled by a single graph, where each node can read the config and change its behavior based on it.
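For example, a single node can read the run's config and adapt its behavior. A minimal sketch; the "system_prompt" key is an illustrative assumption, not a built-in:

from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


model = ChatOpenAI(temperature=0)


def call_model(state: State, config: RunnableConfig):
    # read a custom key from the run's config; "system_prompt" is an
    # assumed, illustrative key, but any configurable value works the same way
    system_prompt = config.get("configurable", {}).get(
        "system_prompt", "You are a helpful assistant."
    )
    messages = [("system", system_prompt)] + state["messages"]
    return {"messages": [model.invoke(messages)]}


graph_workflow = StateGraph(State)
graph_workflow.add_node("agent", call_model)
graph_workflow.add_edge(START, "agent")
graph_workflow.add_edge("agent", END)
graph = graph_workflow.compile()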
Prerequisites
First, check out this how-to guide on setting up your app for deployment.
Define graphs
Let's say you have an app with a simple graph that calls an LLM and returns the response to the user. The app file directory looks like the following:
my-app/
|-- requirements.txt
|-- .env
|-- openai_agent.py # code for your graph
where the graph is defined in openai_agent.py.
No rebuild
In the standard LangGraph API configuration, the server uses the compiled graph instance defined at the top level of openai_agent.py, e.g.:
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessageGraph
model = ChatOpenAI(temperature=0)
graph_workflow = MessageGraph()
graph_workflow.add_node("agent", model)
graph_workflow.add_edge("agent", END)
graph_workflow.add_edge(START, "agent")
agent = graph_workflow.compile()
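As a quick local sanity check (the message content here is illustrative), you can invoke the compiled graph directly with a list of messages:

from langchain_core.messages import HumanMessage

# the compiled MessageGraph takes a list of messages and returns the
# updated list, with the model's reply appended
messages = agent.invoke([HumanMessage(content="hello!")])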
To make the server aware of your graph, you need to specify the path to the variable that contains the CompiledStateGraph instance in your LangGraph API configuration (langgraph.json), e.g.:
{
    "dependencies": ["."],
    "graphs": {
        "openai_agent": "./openai_agent.py:agent"
    },
    "env": "./.env"
}
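With this configuration in place, you can test the setup locally by starting a dev server with the LangGraph CLI (assuming it is installed), e.g. by running langgraph dev from the project root; the server will pick up the openai_agent graph from langgraph.json.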
Rebuild
To make your graph rebuild on each new run with custom configuration, you need to rewrite openai_agent.py to instead provide a _function_ that accepts a config and returns a graph (or compiled graph) instance. Let's say we want to return our existing graph for user ID '1', and a tool-calling agent for other users. We can modify openai_agent.py as follows:
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START
from langgraph.graph.state import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


model = ChatOpenAI(temperature=0)


def make_default_graph():
    """Make a simple LLM agent"""
    graph_workflow = StateGraph(State)

    def call_model(state):
        return {"messages": [model.invoke(state["messages"])]}

    graph_workflow.add_node("agent", call_model)
    graph_workflow.add_edge("agent", END)
    graph_workflow.add_edge(START, "agent")

    agent = graph_workflow.compile()
    return agent


def make_alternative_graph():
    """Make a tool-calling agent"""

    @tool
    def add(a: float, b: float):
        """Adds two numbers."""
        return a + b

    tool_node = ToolNode([add])
    model_with_tools = model.bind_tools([add])

    def call_model(state):
        return {"messages": [model_with_tools.invoke(state["messages"])]}

    def should_continue(state: State):
        if state["messages"][-1].tool_calls:
            return "tools"
        else:
            return END

    graph_workflow = StateGraph(State)
    graph_workflow.add_node("agent", call_model)
    graph_workflow.add_node("tools", tool_node)
    graph_workflow.add_edge("tools", "agent")
    graph_workflow.add_edge(START, "agent")
    graph_workflow.add_conditional_edges("agent", should_continue)

    agent = graph_workflow.compile()
    return agent


# this is the graph-making function that will decide which graph to
# build based on the provided config
def make_graph(config: RunnableConfig):
    user_id = config.get("configurable", {}).get("user_id")
    # route to a different graph state / structure based on the user ID
    if user_id == "1":
        return make_default_graph()
    else:
        return make_alternative_graph()
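Before deploying, you can sanity-check the routing by calling make_graph directly with a config dict (the message content is illustrative):

from langchain_core.messages import HumanMessage

# user "1" gets the simple agent; any other user ID gets the tool-calling agent
graph = make_graph({"configurable": {"user_id": "1"}})
result = graph.invoke({"messages": [HumanMessage(content="what's 2 + 2?")]})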
Finally, you need to specify the path to your graph-making function (make_graph) in langgraph.json:
{
    "dependencies": ["."],
    "graphs": {
        "openai_agent": "./openai_agent.py:make_graph"
    },
    "env": "./.env"
}
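When creating runs against the deployed graph, pass the user ID through the config so that make_graph can route to the right graph. A sketch assuming the langgraph-sdk Python client and a local deployment URL:

import asyncio

from langgraph_sdk import get_client

client = get_client(url="http://localhost:2024")  # assumed deployment URL


async def main():
    thread = await client.threads.create()
    # make_graph reads config["configurable"]["user_id"] when the run is created
    result = await client.runs.wait(
        thread["thread_id"],
        "openai_agent",  # graph name from langgraph.json
        input={"messages": [{"role": "user", "content": "what's 2 + 2?"}]},
        config={"configurable": {"user_id": "42"}},
    )


asyncio.run(main())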
See more info on the LangGraph API configuration file here.