This guide shows how to integrate AutoGen agents with LangGraph to take advantage of features such as persistence, streaming, and memory, and then deploy the integrated solution to LangSmith for scalable production use. We build a LangGraph chatbot that wraps an AutoGen agent, but the same approach works for other agent frameworks. Integrating AutoGen with LangGraph provides several benefits, chief among them the persistence, streaming, and memory features demonstrated below.

Prerequisites

  • Python 3.9+
  • AutoGen: pip install autogen
  • LangGraph: pip install langgraph
  • OpenAI API key
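You can also install the packages in one step (a minimal combined command; pin versions as needed for your environment):
pip install -U autogen langgraph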

Setup

Set up your environment:
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")

1. Define the AutoGen agent

Create an AutoGen agent that can execute code. This example is adapted from AutoGen's official tutorial:
import autogen
import os

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "web",
        "use_docker": False,
    },  # Set use_docker=True if Docker is available; running generated code in a container is safer than running it directly.
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet.",
)
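Before wiring these agents into LangGraph, you can optionally sanity-check them on their own. This is a minimal sketch using the same initiate_chat API the graph node relies on below; the sample task is only illustrative:
# optional: verify the AutoGen pair works in isolation
result = user_proxy.initiate_chat(
    autogen_agent,
    message="Write and run Python code that prints 2 + 2.",
)
print(result.chat_history[-1]["content"])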

2. Create the graph

We will now create a LangGraph chatbot graph that calls the AutoGen agent.
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver

def call_autogen_agent(state: MessagesState):
    # convert LangGraph messages to OpenAI format for AutoGen
    messages = convert_to_openai_messages(state["messages"])

    # get the last user message
    last_message = messages[-1]

    # pass the previous message history as context (excluding the last message)
    carryover = messages[:-1] if len(messages) > 1 else []

    # start a chat with the AutoGen agent
    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover
    )

    # extract the final response from the agent
    final_content = response.chat_history[-1]["content"]

    # return the response in LangGraph format
    return {"messages": {"role": "assistant", "content": final_content}}

# create the graph with persistent memory
checkpointer = MemorySaver()

# build the graph
builder = StateGraph(MessagesState)
builder.add_node("autogen", call_autogen_agent)
builder.add_edge(START, "autogen")

# compile with the checkpointer to enable persistence
graph = builder.compile(checkpointer=checkpointer)

Visualize the graph (optional; requires IPython):
from IPython.display import display, Image

display(Image(graph.get_graph().draw_mermaid_png()))
The rendered diagram shows a single-step graph: START routes to the autogen node, where call_autogen_agent sends the latest user message (with prior context) to the AutoGen agent.

3. Test the graph locally

You can test the graph locally before deploying to LangSmith:
# pass the thread ID to persist agent outputs for future interactions
config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Find numbers between 10 and 30 in fibonacci sequence",
            }
        ]
    },
    config,
):
    print(chunk)
Output:
user_proxy (to assistant):

Find numbers between 10 and 30 in fibonacci sequence

--------------------------------------------------------------------------------
assistant (to user_proxy):

To find numbers between 10 and 30 in the Fibonacci sequence, we can generate the Fibonacci sequence and check which numbers fall within this range. Here's a plan:

1. Generate Fibonacci numbers starting from 0.
2. Continue generating until the numbers exceed 30.
3. Collect and print the numbers that are between 10 and 30.

...
Because we are leveraging LangGraph's persistence features, we can now continue the conversation using the same thread ID; LangGraph automatically passes the previous history to the AutoGen agent:
for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Multiply the last number by 3",
            }
        ]
    },
    config,
):
    print(chunk)
Output:
user_proxy (to assistant):

Multiply the last number by 3
Context:
Find numbers between 10 and 30 in fibonacci sequence
The Fibonacci numbers between 10 and 30 are 13 and 21.

These numbers are part of the Fibonacci sequence, which is generated by adding the two preceding numbers to get the next number, starting from 0 and 1.

The sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

As you can see, 13 and 21 are the only numbers in this sequence that fall between 10 and 30.

TERMINATE

--------------------------------------------------------------------------------
assistant (to user_proxy):

The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:

21 * 3 = 63

TERMINATE

--------------------------------------------------------------------------------
{'call_autogen_agent': {'messages': {'role': 'assistant', 'content': 'The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:\n\n21 * 3 = 63\n\nTERMINATE'}}}
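Because the MemorySaver checkpointer records every turn under the thread ID, you can also inspect the accumulated conversation directly. A minimal sketch using LangGraph's get_state:
# inspect the state persisted for this thread
snapshot = graph.get_state(config)
for message in snapshot.values["messages"]:
    print(f"{message.type}: {message.content}")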

4. Prepare for deployment

To deploy to LangSmith, create a file structure like the following:
my-autogen-agent/
├── agent.py          # Your main agent code
├── requirements.txt  # Python dependencies
└── langgraph.json    # LangGraph configuration
The agent.py file contains the complete agent and graph definition; minimal sketches of requirements.txt and langgraph.json follow it.
import os
import autogen
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver

# AutoGen configuration
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

# Create AutoGen agents
autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "/tmp/autogen_work",
        "use_docker": False,
    },
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction.",
)

def call_autogen_agent(state: MessagesState):
    """Node function that calls the AutoGen agent"""
    messages = convert_to_openai_messages(state["messages"])
    last_message = messages[-1]
    carryover = messages[:-1] if len(messages) > 1 else []

    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover
    )

    final_content = response.chat_history[-1]["content"]
    return {"messages": {"role": "assistant", "content": final_content}}

# Create and compile the graph
def create_graph():
    checkpointer = MemorySaver()
    builder = StateGraph(MessagesState)
    builder.add_node("autogen", call_autogen_agent)
    builder.add_edge(START, "autogen")
    return builder.compile(checkpointer=checkpointer)

# Export the graph for LangSmith
graph = create_graph()
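The requirements.txt and langgraph.json files are sketched below. The unpinned package list and the "agent" graph ID are assumptions; adjust them for your project.

requirements.txt:
autogen
langgraph
langchain-core

langgraph.json:
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "env": ".env"
}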

5. Deploy to LangSmith

Deploy the graph with the LangGraph CLI:
pip install -U langgraph-cli
langgraph deploy --config langgraph.json
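Before deploying, you can also verify the configuration by running the graph on a local development server; this sketch assumes the CLI's in-memory extra:
pip install -U "langgraph-cli[inmem]"
langgraph dev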
