Many AI applications interact with users through natural language. However, some use cases require models to interface directly with external systems, such as APIs, databases, or file systems, using structured inputs.

Tools are components that agents call to perform actions. They extend a model's capabilities by letting it interact with the world through well-defined inputs and outputs. A tool encapsulates a callable function and its input schema. Tools can be passed to compatible chat models, allowing the model to decide whether to call a tool and with what arguments. In these scenarios, tool calling enables the model to generate requests that conform to the specified input schema.
Server-side tool use: some chat models (e.g., OpenAI, Anthropic, Gemini) have built-in tools that execute server-side, such as web search and code interpreters. See the provider overview to learn how to access these tools with a specific chat model.

Create tools

Basic tool definition

The simplest way to create a tool is with the @tool decorator. By default, the function's docstring becomes the tool's description, which helps the model understand when to use it:
from langchain.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query.

    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"
Type hints are required because they define the tool's input schema. The docstring should be informative and concise to help the model understand the tool's purpose.

Customize tool properties

Custom tool name

By default, the tool's name comes from the function name. Override it when you need a more descriptive name:
@tool("web_search")  # Custom name
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

print(search.name)  # web_search

Custom tool description

Override the auto-generated tool description to provide clearer guidance to the model:
@tool("calculator", description="Performs arithmetic calculations. Use for any math questions.")
def calc(expression: str) -> str:
    """Evaluate mathematical expressions."""
    return str(eval(expression))  # eval is for illustration only; avoid it on untrusted input

Advanced schema definition

Use a Pydantic model or JSON Schema to define complex inputs:
from pydantic import BaseModel, Field
from typing import Literal

class WeatherInput(BaseModel):
    """Input for weather queries."""
    location: str = Field(description="City name or coordinates")
    units: Literal["celsius", "fahrenheit"] = Field(
        default="celsius",
        description="Temperature unit preference"
    )
    include_forecast: bool = Field(
        default=False,
        description="Include 5-day forecast"
    )

@tool(args_schema=WeatherInput)
def get_weather(location: str, units: str = "celsius", include_forecast: bool = False) -> str:
    """Get current weather and optional forecast."""
    temp = 22 if units == "celsius" else 72
    result = f"Current weather in {location}: {temp} degrees {units[0].upper()}"
    if include_forecast:
        result += "\nNext 5 days: Sunny"
    return result

Accessing Context

Why this matters: Tools are most powerful when they can access agent state, runtime context, and long-term memory. This enables tools to make context-aware decisions, personalize responses, and maintain information across conversations.
Tools can access runtime information through the ToolRuntime parameter, which provides:
  • State - mutable data that flows through the execution (messages, counters, custom fields)
  • Context - immutable configuration such as user IDs, session details, or application-specific settings
  • Store - persistent long-term memory across conversations
  • Stream Writer - stream custom updates as a tool executes
  • Config - the RunnableConfig for the execution
  • Tool Call ID - the ID of the current tool call

ToolRuntime

Use ToolRuntime to access all runtime information through a single parameter. Simply add runtime: ToolRuntime to your tool's signature; it is injected automatically and is not exposed to the LLM.
ToolRuntime: a unified parameter that gives tools access to state, context, store, streaming, config, and the tool call ID. It replaces the older patterns that used the separate InjectedState, InjectedStore, get_runtime, and InjectedToolCallId annotations.
Accessing state: tools can read the current graph state using ToolRuntime:
from langchain.tools import tool, ToolRuntime

# Access the current conversation state
@tool
def summarize_conversation(
    runtime: ToolRuntime
) -> str:
    """Summarize the conversation so far."""
    messages = runtime.state["messages"]

    human_msgs = sum(1 for m in messages if m.__class__.__name__ == "HumanMessage")
    ai_msgs = sum(1 for m in messages if m.__class__.__name__ == "AIMessage")
    tool_msgs = sum(1 for m in messages if m.__class__.__name__ == "ToolMessage")

    return f"Conversation has {human_msgs} user messages, {ai_msgs} AI responses, and {tool_msgs} tool results"

# Access custom state fields
@tool
def get_user_preference(
    pref_name: str,
    runtime: ToolRuntime  # ToolRuntime parameter is not visible to the model
) -> str:
    """Get a user preference value."""
    preferences = runtime.state.get("user_preferences", {})
    return preferences.get(pref_name, "Not set")
The runtime parameter is hidden from the model. In the example above, the model sees only pref_name in the tool schema; runtime is not included in the request.
Updating state: use Command to update the agent's state or to control the graph's execution flow:
from langgraph.types import Command
from langchain.messages import RemoveMessage
from langgraph.graph.message import REMOVE_ALL_MESSAGES
from langchain.tools import tool, ToolRuntime

# Update the conversation history by removing all messages
@tool
def clear_conversation() -> Command:
    """Clear the conversation history."""

    return Command(
        update={
            "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES)],
        }
    )

# Update the user_name in the agent state
@tool
def update_user_name(
    new_name: str,
    runtime: ToolRuntime
) -> Command:
    """Update the user's name."""
    return Command(update={"user_name": new_name})

Context

Access immutable configuration and contextual data, such as user IDs, session details, or application-specific settings, via runtime.context. Tools can access the runtime context through ToolRuntime:
from dataclasses import dataclass
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime


USER_DATABASE = {
    "user123": {
        "name": "Alice Johnson",
        "account_type": "Premium",
        "balance": 5000,
        "email": "alice@example.com"
    },
    "user456": {
        "name": "Bob Smith",
        "account_type": "Standard",
        "balance": 1200,
        "email": "bob@example.com"
    }
}

@dataclass
class UserContext:
    user_id: str

@tool
def get_account_info(runtime: ToolRuntime[UserContext]) -> str:
    """Get the current user's account information."""
    user_id = runtime.context.user_id

    if user_id in USER_DATABASE:
        user = USER_DATABASE[user_id]
        return f"Account holder: {user['name']}\nType: {user['account_type']}\nBalance: ${user['balance']}"
    return "User not found"

model = ChatOpenAI(model="gpt-4o")
agent = create_agent(
    model,
    tools=[get_account_info],
    context_schema=UserContext,
    system_prompt="You are a financial assistant."
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my current balance?"}]},
    context=UserContext(user_id="user123")
)

Memory (Store)

Access persistent data across conversations using the store. The store is accessed via runtime.store and allows you to save and retrieve user-specific or application-specific data. Tools can access and update the store through ToolRuntime:
from typing import Any
from langgraph.store.memory import InMemoryStore
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime


# Access memory
@tool
def get_user_info(user_id: str, runtime: ToolRuntime) -> str:
    """Look up user info."""
    store = runtime.store
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

# Update memory
@tool
def save_user_info(user_id: str, user_info: dict[str, Any], runtime: ToolRuntime) -> str:
    """Save user info."""
    store = runtime.store
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")
store = InMemoryStore()
agent = create_agent(
    model,
    tools=[get_user_info, save_user_info],
    store=store
)

# First session: save user info
agent.invoke({
    "messages": [{"role": "user", "content": "Save the following user: userid: abc123, name: Foo, age: 25, email: foo@langchain.dev"}]
})

# Second session: get user info
agent.invoke({
    "messages": [{"role": "user", "content": "Get user info for user with id 'abc123'"}]
})
# Here is the user info for user with ID "abc123":
# - Name: Foo
# - Age: 25
# - Email: foo@langchain.dev

Stream Writer

Stream custom updates from tools as they execute using runtime.stream_writer. This is useful for providing real-time feedback to users about what a tool is doing.
from langchain.tools import tool, ToolRuntime

@tool
def get_weather(city: str, runtime: ToolRuntime) -> str:
    """Get weather for a given city."""
    writer = runtime.stream_writer

    # Stream custom updates as the tool executes
    writer(f"Looking up data for city: {city}")
    writer(f"Acquired data for city: {city}")

    return f"It's always sunny in {city}!"
If you use runtime.stream_writer inside your tool, the tool must be invoked within a LangGraph execution context. See Streaming for more details.
