- Enhanced features: add persistence, streaming, short-term and long-term memory, and more to AutoGen agents.
- Multi-agent systems: build multi-agent systems in which individual agents are built with different frameworks.
- Production deployment: deploy the integrated solution to LangSmith for scalable production use.
Prerequisites
- Python 3.9+
- AutoGen: `pip install autogen`
- LangGraph: `pip install langgraph`
- An OpenAI API key
Setup
Set up your environment:

```python
import getpass
import os

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")
```
1. Define the AutoGen agent
Create an AutoGen agent that can execute code. This example is adapted from AutoGen's official tutorial:
```python
import autogen
import os

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "web",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet.",
)
```
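The `is_termination_msg` predicate above decides when `user_proxy` stops auto-replying: it fires only when a message's content ends with `TERMINATE` after trailing whitespace is stripped. As a quick standalone sanity check (plain Python, no AutoGen required):

```python
# Same predicate as the one passed to UserProxyAgent above
is_termination_msg = lambda x: x.get("content", "").rstrip().endswith("TERMINATE")

# Fires on a trailing TERMINATE, even with whitespace after it
print(is_termination_msg({"content": "All done.\nTERMINATE\n"}))    # True
# Does not fire mid-message or when content is missing
print(is_termination_msg({"content": "TERMINATE, then continue"}))  # False
print(is_termination_msg({}))                                       # False
```

This is why the system message instructs the assistant to end with the literal word TERMINATE: anything after it (or its absence) keeps the conversation going, up to `max_consecutive_auto_reply` turns.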
2. Create the graph
Now we'll create a LangGraph chatbot graph that calls the AutoGen agent:
```python
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver

def call_autogen_agent(state: MessagesState):
    # Convert LangGraph messages to AutoGen's OpenAI-style format
    messages = convert_to_openai_messages(state["messages"])
    # Get the last user message
    last_message = messages[-1]
    # Pass the previous message history as context (excluding the last message)
    carryover = messages[:-1] if len(messages) > 1 else []
    # Start the chat with AutoGen
    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover,
    )
    # Extract the final response from the agent
    final_content = response.chat_history[-1]["content"]
    # Return the response in LangGraph format
    return {"messages": {"role": "assistant", "content": final_content}}

# Create a checkpointer for persistent memory
checkpointer = MemorySaver()

# Build the graph
builder = StateGraph(MessagesState)
builder.add_node(call_autogen_agent)
builder.add_edge(START, "call_autogen_agent")

# Compile with the checkpointer to enable persistence
graph = builder.compile(checkpointer=checkpointer)
```
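The node's handling of history is easy to miss: only the newest message becomes the prompt, while everything before it travels as `carryover` context. A minimal pure-Python sketch of that split (illustrative only; `split_history` is a hypothetical helper, not part of either library):

```python
# Mirrors the slicing inside call_autogen_agent
def split_history(messages):
    last_message = messages[-1]
    carryover = messages[:-1] if len(messages) > 1 else []
    return last_message, carryover

history = [
    {"role": "user", "content": "Find fibonacci numbers between 10 and 30"},
    {"role": "assistant", "content": "13 and 21. TERMINATE"},
    {"role": "user", "content": "Multiply the last number by 3"},
]

last, prior = split_history(history)
print(last["content"])  # Multiply the last number by 3
print(len(prior))       # 2

# On the very first turn there is no carryover
print(split_history([{"role": "user", "content": "hi"}])[1])  # []
```

Because the checkpointer replays the full thread history into `state["messages"]` on every turn, this split is what lets AutoGen see earlier answers (as in the "Multiply the last number by 3" follow-up below) without re-sending them as the prompt.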
```python
from IPython.display import display, Image

display(Image(graph.get_graph().draw_mermaid_png()))
```
3. Test the graph locally
Before deploying to LangSmith, you can test the graph locally:
```python
# Pass the thread ID to persist agent outputs for future interactions
config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Find numbers between 10 and 30 in fibonacci sequence",
            }
        ]
    },
    config,
):
    print(chunk)
```
```
user_proxy (to assistant):

Find numbers between 10 and 30 in fibonacci sequence

--------------------------------------------------------------------------------
assistant (to user_proxy):

To find numbers between 10 and 30 in the Fibonacci sequence, we can generate the Fibonacci sequence and check which numbers fall within this range. Here's a plan:

1. Generate Fibonacci numbers starting from 0.
2. Continue generating until the numbers exceed 30.
3. Collect and print the numbers that are between 10 and 30.
...
```
```python
for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Multiply the last number by 3",
            }
        ]
    },
    config,
):
    print(chunk)
```
```
user_proxy (to assistant):

Multiply the last number by 3
Context:
Find numbers between 10 and 30 in fibonacci sequence
The Fibonacci numbers between 10 and 30 are 13 and 21.
These numbers are part of the Fibonacci sequence, which is generated by adding the two preceding numbers to get the next number, starting from 0 and 1.
The sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
As you can see, 13 and 21 are the only numbers in this sequence that fall between 10 and 30.
TERMINATE

--------------------------------------------------------------------------------
assistant (to user_proxy):

The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:

21 * 3 = 63

TERMINATE

--------------------------------------------------------------------------------
{'call_autogen_agent': {'messages': {'role': 'assistant', 'content': 'The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:\n\n21 * 3 = 63\n\nTERMINATE'}}}
```
4. Prepare for deployment
To deploy to LangSmith, create a file structure like the following:

```
my-autogen-agent/
├── agent.py          # Your main agent code
├── requirements.txt  # Python dependencies
└── langgraph.json    # LangGraph configuration
```
agent.py:

```python
import os

import autogen
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver

# AutoGen configuration
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

# Create AutoGen agents
autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "/tmp/autogen_work",
        "use_docker": False,
    },
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction.",
)

def call_autogen_agent(state: MessagesState):
    """Node function that calls the AutoGen agent."""
    messages = convert_to_openai_messages(state["messages"])
    last_message = messages[-1]
    carryover = messages[:-1] if len(messages) > 1 else []
    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover,
    )
    final_content = response.chat_history[-1]["content"]
    return {"messages": {"role": "assistant", "content": final_content}}

# Create and compile the graph
def create_graph():
    checkpointer = MemorySaver()
    builder = StateGraph(MessagesState)
    builder.add_node(call_autogen_agent)
    builder.add_edge(START, "call_autogen_agent")
    return builder.compile(checkpointer=checkpointer)

# Export the graph for LangSmith
graph = create_graph()
```
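The other two files from the layout above are not shown in the original. A plausible minimal version of each follows; the exact package names and versions are assumptions (pin them to match your environment), and the graph name `agent` is an arbitrary label you choose.

requirements.txt:

```
autogen
langgraph
langchain-core
```

langgraph.json, which tells the LangGraph CLI where to find the compiled `graph` object exported by agent.py:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "env": ".env"
}
```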
5. Deploy to LangSmith
Deploy the graph with the LangGraph CLI:

```shell
pip install -U langgraph-cli
```

```shell
langgraph deploy --config langgraph.json
```