This guide provides a quick overview for getting started with Anthropic (Claude) chat models. You can find information about Anthropic's latest models, their costs, context windows, and supported input types in the Claude docs.

API reference
For detailed documentation of all features and configuration options, head to the ChatAnthropic API reference.

AWS Bedrock and Google VertexAI
Note that certain Anthropic models are also accessible via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models through those services.
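For illustration, a minimal sketch of reaching Claude through Bedrock, assuming the langchain-aws package is installed and AWS credentials are configured in your environment (the model ID below is illustrative):

# Hypothetical example -- requires `pip install langchain-aws` and AWS credentials
from langchain_aws import ChatBedrock

bedrock_model = ChatBedrock(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative Bedrock model ID
    region_name="us-east-1",
)
bedrock_model.invoke("Hello!")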

Overview

Integration details

| Class | Package | Serializable | JS/TS support |
| :--- | :--- | :--- | :--- |
| ChatAnthropic | langchain-anthropic | beta | ✅ |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Setup

To access Anthropic (Claude) models, you'll need to install the langchain-anthropic integration package and obtain a Claude API key.

Installation

pip install -U langchain-anthropic

Credentials

Head to console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
import getpass
import os

if "ANTHROPIC_API_KEY" not in os.environ:
    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")
To enable automated tracing of your model calls, set your LangSmith API key:
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
os.environ["LANGSMITH_TRACING"] = "true"

Instantiation

Now we can instantiate our model object and generate chat completions:
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-haiku-4-5-20251001",
    temperature=0,
    max_tokens=1024,
    timeout=None,
    max_retries=2,
    # other params...
)

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = model.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})
print(ai_msg.text)
J'adore la programmation.
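ChatAnthropic also supports token-level streaming through the standard stream method; a minimal sketch, reusing the messages defined above:

for chunk in model.stream(messages):
    # Each chunk is an AIMessageChunk carrying a partial piece of the response
    print(chunk.text, end="")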

Content blocks

When using tools, extended thinking, and other features, the content of a single Anthropic AIMessage can be either a single string or a list of content blocks. For example, when an Anthropic model calls a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls):
from langchain_anthropic import ChatAnthropic
from typing_extensions import Annotated

model = ChatAnthropic(model="claude-haiku-4-5-20251001")


def get_weather(
    location: Annotated[str, ..., "Location as city and state."]
) -> str:
    """Get the weather at a location."""
    return "It's sunny."


model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("Which city is hotter today: LA or NY?")
response.content
[{'text': "I'll help you compare the temperatures of Los Angeles and New York by checking their current weather. I'll retrieve the weather for both cities.",
  'type': 'text'},
 {'id': 'toolu_01CkMaXrgmsNjTso7so94RJq',
  'input': {'location': 'Los Angeles, CA'},
  'name': 'get_weather',
  'type': 'tool_use'},
 {'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf',
  'input': {'location': 'New York, NY'},
  'name': 'get_weather',
  'type': 'tool_use'}]
Using content_blocks will render the content in a standard format that is consistent across providers:
response.content_blocks
[{'type': 'text',
  'text': "I'll help you compare the temperatures of Los Angeles and New York by checking their current weather. I'll retrieve the weather for both cities."},
 {'type': 'tool_call',
  'name': 'get_weather',
  'args': {'location': 'Los Angeles, CA'},
  'id': 'toolu_01CkMaXrgmsNjTso7so94RJq'},
 {'type': 'tool_call',
  'name': 'get_weather',
  'args': {'location': 'New York, NY'},
  'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf'}]
You can also access just the tool calls in a standard format via the .tool_calls attribute:
response.tool_calls
[{'name': 'get_weather',
  'args': {'location': 'Los Angeles, CA'},
  'id': 'toolu_01CkMaXrgmsNjTso7so94RJq'},
 {'name': 'get_weather',
  'args': {'location': 'New York, NY'},
  'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf'}]
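To complete a tool-calling loop, execute the tool calls and pass the results back to the model as ToolMessage objects. A minimal sketch using the get_weather function and model_with_tools from above:

from langchain.messages import HumanMessage, ToolMessage

messages = [
    HumanMessage("Which city is hotter today: LA or NY?"),
    response,
]
for tool_call in response.tool_calls:
    # get_weather is a plain Python function here, so we call it directly
    result = get_weather(**tool_call["args"])
    messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

final_response = model_with_tools.invoke(messages)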

Multimodal

Claude supports image and PDF inputs as content blocks, both in Anthropic's native format (see Anthropic's docs on vision and PDF support) and in LangChain's standard format.
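For example, a minimal sketch passing an image by URL using LangChain's standard content-block format (the URL is a placeholder; see the multimodality docs for other fields, such as base64 data):

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-haiku-4-5-20251001")

input_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        # Standard LangChain image block; a base64 variant is also supported
        {"type": "image", "url": "https://example.com/image.png"},
    ],
}
model.invoke([input_message])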

Files API

Claude also supports interactions with files through its managed Files API. See the examples below.

The Files API can also be used to upload files to a container for use with Claude's built-in code-execution tool. See the code execution section below for details.
# Upload image

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    # Supports image/jpeg, image/png, image/gif, image/webp
    file=("image.png", open("/path/to/image.png", "rb"), "image/png"),
)
image_file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["files-api-2025-04-14"],
)

input_message = {
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Describe this image.",
        },
        {
            "type": "image",
            "file_id": image_file_id,
        },
    ],
}
model.invoke([input_message])
# Upload document

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    file=("document.pdf", open("/path/to/document.pdf", "rb"), "application/pdf"),
)
pdf_file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["files-api-2025-04-14"],
)

input_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this document."},
        {"type": "file", "file_id": pdf_file_id}
    ],
}
model.invoke([input_message])

Extended thinking

Certain Claude models support an extended thinking capability, which will output the step-by-step reasoning process that led to its final answer. See the Anthropic guide for applicable models.

To use extended thinking, specify the thinking parameter when initializing ChatAnthropic. It can also be passed in during invocation as a kwarg. You will need to specify a token budget to use this feature. See the usage example below:
import json

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    max_tokens=5000,
    thinking={"type": "enabled", "budget_tokens": 2000},
)

response = model.invoke("What is the cube root of 50.653?")
print(json.dumps(response.content_blocks, indent=2))
[
  {
    "type": "reasoning",
    "reasoning": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.",
    "extras": {"signature": "ErUBCkYIBxgCIkB0UjV..."}
  },
  {
    "text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number.",
    "type": "text"
  }
]

Prompt caching

Anthropic supports caching of elements of your prompts, including messages, tool definitions, tool results, images, and documents. This allows you to reuse large documents, instructions, few-shot documents, and other data to reduce latency and costs.

To enable caching on an element of a prompt, mark its associated content block with the cache_control key. See the examples below:

Messages

import requests
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a technology expert.",
            },
            {
                "type": "text",
                "text": f"{readme}",
                "cache_control": {"type": "ephemeral"},  
            },
        ],
    },
    {
        "role": "user",
        "content": "What's LangChain, according to its README?",
    },
]

response_1 = model.invoke(messages)
response_2 = model.invoke(messages)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1458}

Second:
{'cache_read': 1458, 'cache_creation': 0}
Extended caching
The cache lifetime is 5 minutes by default. If this is too short, you can apply one-hour caching by enabling the "extended-cache-ttl-2025-04-11" beta header:
model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["extended-cache-ttl-2025-04-11"],  
)
and specifying "cache_control": {"type": "ephemeral", "ttl": "1h"} on the relevant content blocks.
Details of cached token counts will be included in the InputTokenDetails of the response's usage_metadata:
response = model.invoke(messages)
response.usage_metadata
{
    "input_tokens": 1500,
    "output_tokens": 200,
    "total_tokens": 1700,
    "input_token_details": {
        "cache_read": 0,
        "cache_creation": 1000,
        "ephemeral_1h_input_tokens": 750,
        "ephemeral_5m_input_tokens": 250,
    }
}

Tools

from langchain_anthropic import convert_to_anthropic_tool
from langchain.tools import tool

# For demonstration purposes, we artificially expand the
# tool description.
description = (
    f"Get the weather at a location. By the way, check out this readme: {readme}"
)


@tool(description=description)
def get_weather(location: str) -> str:
    return "It's sunny."


# Enable caching on the tool
weather_tool = convert_to_anthropic_tool(get_weather)  
weather_tool["cache_control"] = {"type": "ephemeral"}  

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model_with_tools = model.bind_tools([weather_tool])
query = "What's the weather in San Francisco?"

response_1 = model_with_tools.invoke(query)
response_2 = model_with_tools.invoke(query)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1809}

Second:
{'cache_read': 1809, 'cache_creation': 0}

Incremental caching in conversational applications

Prompt caching can be used in multi-turn conversations to maintain context from earlier messages without redundant processing.

We can enable incremental caching by marking the final message with cache_control. Claude will automatically use the longest previously-cached prefix for follow-up messages.

Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the last content block in each user message with cache_control. See below:
import requests
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph, add_messages
from typing_extensions import Annotated, TypedDict

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text


def messages_reducer(left: list, right: list) -> list:
    # Update last user message
    for i in range(len(right) - 1, -1, -1):
        if right[i].type == "human":
            right[i].content[-1]["cache_control"] = {"type": "ephemeral"}
            break

    return add_messages(left, right)


class State(TypedDict):
    messages: Annotated[list, messages_reducer]


workflow = StateGraph(state_schema=State)


# Define the function that calls the model
def call_model(state: State):
    response = model.invoke(state["messages"])
    return {"messages": [response]}


# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
from langchain.messages import HumanMessage

config = {"configurable": {"thread_id": "abc123"}}

query = "Hi! I'm Bob."

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================

Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?

{'cache_read': 0, 'cache_creation': 0}
query = f"Check out this readme: {readme}"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================

I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:

LangChain is:
- A framework for developing LLM-powered applications
- Helps chain together components and integrations to simplify AI application development
- Provides a standard interface for models, embeddings, vector stores, etc.

Key features/benefits:
- Real-time data augmentation (connect LLMs to diverse data sources)
- Model interoperability (swap models easily as needed)
- Large ecosystem of integrations

The LangChain ecosystem includes:
- LangSmith - For evaluations and observability
- LangGraph - For building complex agents with customizable architecture
- LangGraph Platform - For deployment and scaling of agents

The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.

Is there anything specific about LangChain you'd like to know more about, Bob?

{'cache_read': 0, 'cache_creation': 1498}
query = "What was my name again?"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================

Your name is Bob. You introduced yourself at the beginning of our conversation.

{'cache_read': 1498, 'cache_creation': 269}
In the LangSmith trace, toggling “raw output” will show exactly what messages are sent to the chat model, including cache_control keys.

Token-efficient tool use

Anthropic supports a (beta) token-efficient tool use feature. To use it, specify the relevant beta header when instantiating the model.
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["token-efficient-tools-2025-02-19"],  
)


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
print(f"\nTotal tokens: {response.usage_metadata['total_tokens']}")
[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]

Total tokens: 408

Citations

Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents supplied by the user. When document or search result content blocks with "citations": {"enabled": True} are included in a query, Claude may generate citations in its response.

Simple example

In this example we pass a plain-text document. Under the hood, Claude automatically chunks the input text into sentences, which are used when generating citations.
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-haiku-4-5-20251001")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The grass is green. The sky is blue.",
                },
                "title": "My Document",
                "context": "This is a trustworthy document.",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "What color is the grass and sky?"},
        ],
    }
]
response = model.invoke(messages)
response.content
[{'text': 'Based on the document, ', 'type': 'text'},
 {'text': 'the grass is green',
  'type': 'text',
  'citations': [{'type': 'char_location',
    'cited_text': 'The grass is green. ',
    'document_index': 0,
    'document_title': 'My Document',
    'start_char_index': 0,
    'end_char_index': 20}]},
 {'text': ', and ', 'type': 'text'},
 {'text': 'the sky is blue',
  'type': 'text',
  'citations': [{'type': 'char_location',
    'cited_text': 'The sky is blue.',
    'document_index': 0,
    'document_title': 'My Document',
    'start_char_index': 20,
    'end_char_index': 36}]},
 {'text': '.', 'type': 'text'}]
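The citation metadata can be read directly off these content blocks, for example:

for block in response.content:
    for citation in block.get("citations") or []:
        # Each citation records the cited span from the source document
        print(f'{block["text"]!r} is supported by {citation["cited_text"]!r}')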

In tool results (agentic RAG)

Requires langchain-anthropic>=0.3.17
Claude supports a search_result content block representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to Claude both top-line (as in the example above) and within a tool result. This allows Claude to cite elements of its response using the result of a tool call.

To pass search results in response to tool calls, define a tool that returns a list of search_result content blocks in Anthropic's native format. For example:
def retrieval_tool(query: str) -> list[dict]:
    """Access my knowledge base."""

    # Run a search (e.g., with a LangChain vector store)
    results = vector_store.similarity_search(query=query, k=2)

    # Package results into search_result blocks
    return [
        {
            "type": "search_result",
            # Customize fields as desired, using document metadata or otherwise
            "title": "My Document Title",
            "source": "Source description or provenance",
            "citations": {"enabled": True},
            "content": [{"type": "text", "text": doc.page_content}],
        }
        for doc in results
    ]
Here we demonstrate an end-to-end example in which we populate a LangChain vector store with sample documents and equip Claude with a tool that queries those documents. The tool here takes a search query and a category string literal, but any valid tool signature can be used.
from typing import Literal

from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent


# Set up vector store
embeddings = init_embeddings("openai:text-embedding-3-small")
vector_store = InMemoryVectorStore(embeddings)

document_1 = Document(
    id="1",
    page_content=(
        "To request vacation days, submit a leave request form through the "
        "HR portal. Approval will be sent by email."
    ),
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 1",
    },
)
document_2 = Document(
    id="2",
    page_content="Managers will review vacation requests within 3 business days.",
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 2",
    },
)
document_3 = Document(
    id="3",
    page_content=(
        "Employees with over 6 months tenure are eligible for 20 paid vacation days "
        "per year."
    ),
    metadata={
        "category": "Benefits Policy",
        "doc_title": "Benefits Guide 2025",
        "provenance": "Benefits Policy - page 1",
    },
)

documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)


# Define tool
async def retrieval_tool(
    query: str, category: Literal["HR Policy", "Benefits Policy"]
) -> list[dict]:
    """Access my knowledge base."""

    def _filter_function(doc: Document) -> bool:
        return doc.metadata.get("category") == category

    results = vector_store.similarity_search(
        query=query, k=2, filter=_filter_function
    )

    return [
        {
            "type": "search_result",
            "title": doc.metadata["doc_title"],
            "source": doc.metadata["provenance"],
            "citations": {"enabled": True},
            "content": [{"type": "text", "text": doc.page_content}],
        }
        for doc in results
    ]



# Create agent
model = init_chat_model("claude-haiku-4-5-20251001")

checkpointer = InMemorySaver()
agent = create_agent(model, [retrieval_tool], checkpointer=checkpointer)


# Invoke on a query
config = {"configurable": {"thread_id": "session_1"}}

input_message = {
    "role": "user",
    "content": "How do I request vacation days?",
}
async for step in agent.astream(
    {"messages": [input_message]},
    config,
    stream_mode="values",
):
    step["messages"][-1].pretty_print()

Using with text splitters

Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits for this purpose. See the below example, where we split the LangChain README (a markdown document) and pass it to Claude as context:
import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter


def format_to_anthropic_documents(documents: list[str]):
    return {
        "type": "document",
        "source": {
            "type": "content",
            "content": [{"type": "text", "text": document} for document in documents],
        },
        "citations": {"enabled": True},
    }


# Pull readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

# Split into chunks
splitter = MarkdownTextSplitter(
    chunk_overlap=0,
    chunk_size=50,
)
documents = splitter.split_text(readme)

# Construct message
message = {
    "role": "user",
    "content": [
        format_to_anthropic_documents(documents),
        {"type": "text", "text": "Give me a link to LangChain's tutorials."},
    ],
}

# Query model
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
response = model.invoke([message])

Context management

Anthropic supports a context editing feature that will automatically manage the model's context window (e.g., by clearing tool results). See Anthropic documentation for details and configuration options.
Context management is supported since langchain-anthropic>=0.3.21
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["context-management-2025-06-27"],
    context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
model_with_tools = model.bind_tools([{"type": "web_search_20250305", "name": "web_search"}])
response = model_with_tools.invoke("Search for recent developments in AI")

Built-in tools

Anthropic supports a variety of built-in tools, which can be bound to the model in the usual way. Claude will generate tool calls adhering to its internal schema for the tool:

Web search

Claude can use the web search tool to run searches and ground its responses with citations.
Web search tool is supported since langchain-anthropic>=0.3.13
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")

tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
model_with_tools = model.bind_tools([tool])

response = model_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
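The response content will interleave Claude's search activity with cited text. As a quick way to see what came back, you can iterate over the standardized blocks (block types for server-side tools vary by feature and version):

for block in response.content_blocks:
    # e.g., text blocks plus server-side tool call/result blocks
    print(block["type"])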

Web fetch

Claude can use the web fetch tool to retrieve the content of web pages and ground its responses with citations.
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
    model="claude-haiku-4-5-20251001",
    betas=["web-fetch-2025-09-10"],  # Enable web fetch beta
)

tool = {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 3}
model_with_tools = model.bind_tools([tool])

response = model_with_tools.invoke(
    "Please analyze the content at https://example.com/article"
)
You must add the 'web-fetch-2025-09-10' beta header to use web fetching.

Code execution

Claude can use the code execution tool to execute Python code in a sandboxed environment.
Code execution is supported since langchain-anthropic>=0.3.14
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["code-execution-2025-05-22"],
)

tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])

response = model_with_tools.invoke(
    "Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
Using the Files API, Claude can write code to access files for data analysis and other purposes. See example below:
# Upload file

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    file=open("/path/to/sample_data.csv", "rb")
)
file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["code-execution-2025-05-22"],
)

tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])

input_message = {
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Please plot these data and tell me what you see.",
        },
        {
            "type": "container_upload",
            "file_id": file_id,
        },
    ]
}
response = model_with_tools.invoke([input_message])
Note that Claude may generate files as part of its code execution. You can access these files using the Files API:
# Take all file outputs for demonstration purposes
file_ids = []
for block in response.content:
    if block["type"] == "code_execution_tool_result":
        file_ids.extend(
            content["file_id"]
            for content in block.get("content", {}).get("content", [])
            if "file_id" in content
        )

for i, file_id in enumerate(file_ids):
    file_content = client.beta.files.download(file_id)
    file_content.write_to_file(f"/path/to/file_{i}.png")

Memory tool

Claude supports a memory tool for client-side storage and retrieval of context across conversational threads. See the docs here for details.
Anthropic’s built-in memory tool is supported since langchain-anthropic>=0.3.21
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["context-management-2025-06-27"],
)
model_with_tools = model.bind_tools([{"type": "memory_20250818", "name": "memory"}])

response = model_with_tools.invoke("What are my interests?")
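Because storage is client-side, Claude responds with memory tool calls that your application is expected to execute and feed back as tool results. You can inspect the generated calls as usual:

for tool_call in response.tool_calls:
    # e.g., a "memory" call with a command such as "view" and a path
    print(tool_call["name"], tool_call["args"])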

Remote MCP

Claude can use the MCP connector tool for model-generated calls to remote MCP servers.
Remote MCP is supported since langchain-anthropic>=0.3.14
from langchain_anthropic import ChatAnthropic

mcp_servers = [
    {
        "type": "url",
        "url": "https://mcp.deepwiki.com/mcp",
        "name": "deepwiki",
        "tool_configuration": {  # optional configuration
            "enabled": True,
            "allowed_tools": ["ask_question"],
        },
        "authorization_token": "PLACEHOLDER",  # optional authorization
    }
]

model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    betas=["mcp-client-2025-04-04"],
    mcp_servers=mcp_servers,
)

response = model.invoke(
    "What transport protocols does the 2025-03-26 version of the MCP "
    "spec (modelcontextprotocol/modelcontextprotocol) support?"
)

Text editor

The text editor tool can be used to view and modify text files. See the docs here for details.
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")

tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
model_with_tools = model.bind_tools([tool])

response = model_with_tools.invoke(
    "There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text)
response.tool_calls
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
  'args': {'command': 'view', 'path': '/repo/primes.py'},
  'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
  'type': 'tool_call'}]

API reference

For detailed documentation of all features and configuration options, head to the ChatAnthropic API reference.