For this example, you will need to set up a Claude (Anthropic) account and get an API key. Then set the
ANTHROPIC_API_KEY environment variable in your terminal.
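If you want the program to fail fast when the key is missing, a minimal startup check (a sketch assuming a Node.js runtime, where environment variables are exposed on process.env) looks like this:

// Sketch: verify the API key is present before constructing the model.
if (!process.env.ANTHROPIC_API_KEY) {
  throw new Error("ANTHROPIC_API_KEY is not set. Export it in your terminal first.");
}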
1. Define tools and model
In this example, we will use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import * as z from "zod";
const model = new ChatAnthropic({
model: "claude-sonnet-4-5-20250929",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
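As a quick sanity check, independent of the graph we are about to build, you can invoke the tool-augmented model directly and inspect the tool calls it emits. This is a sketch; the exact call id, and whether the model chooses a tool at all, depend on the model's response:

// Sketch: ask a question that should trigger the multiply tool.
const probe = await modelWithTools.invoke("What is 6 times 7?");
// An AIMessage that decides to use a tool carries a tool_calls array,
// e.g. [{ name: "multiply", args: { a: 6, b: 7 }, id: "..." }]
console.log(probe.tool_calls);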
2. Define state
The graph's state stores the messages and the number of LLM calls. State in LangGraph persists across the agent's entire execution. Registering the messages field with
MessagesZodMeta applies a reducer that appends new messages to the existing list instead of replacing it.
import { StateGraph, START, END } from "@langchain/langgraph";
import { MessagesZodMeta } from "@langchain/langgraph";
import { registry } from "@langchain/langgraph/zod";
import { type BaseMessage } from "@langchain/core/messages";
const MessagesState = z.object({
messages: z
.array(z.custom<BaseMessage>())
.register(registry, MessagesZodMeta),
llmCalls: z.number().optional(),
});
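For reference, the inferred TypeScript type of this state, and what the reducer registration means for node return values (the AgentState alias below is purely illustrative):

// The state type the nodes below receive:
type AgentState = z.infer<typeof MessagesState>;
// => { messages: BaseMessage[]; llmCalls?: number }

// Because messages is registered with MessagesZodMeta, a node that returns
// { messages: [someMessage] } has those messages appended to the list,
// while a plain field like llmCalls is simply overwritten.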
3. Define model node
The model node is used to call the LLM and decide whether to call a tool or not.
import { SystemMessage } from "@langchain/core/messages";
async function llmCall(state: z.infer<typeof MessagesState>) {
return {
messages: await modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...state.messages,
]),
llmCalls: (state.llmCalls ?? 0) + 1,
};
}
4. Define tool node
The tool node is used to call the tools and return the results.
import { isAIMessage, ToolMessage } from "@langchain/core/messages";
async function toolNode(state: z.infer<typeof MessagesState>) {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !isAIMessage(lastMessage)) {
return { messages: [] };
}
const result: ToolMessage[] = [];
for (const toolCall of lastMessage.tool_calls ?? []) {
const tool = toolsByName[toolCall.name];
const observation = await tool.invoke(toolCall);
result.push(observation);
}
return { messages: result };
}
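You can exercise toolNode in isolation by handcrafting an AIMessage that contains a tool call. This is a test sketch, not part of the agent; the id is an arbitrary placeholder:

import { AIMessage } from "@langchain/core/messages";

const fakeState = {
  messages: [
    new AIMessage({
      content: "",
      tool_calls: [{ name: "add", args: { a: 3, b: 4 }, id: "call_demo_1" }],
    }),
  ],
};
// Running the node directly yields a ToolMessage produced by the add tool.
const update = await toolNode(fakeState);
console.log(update.messages);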
5. Define end logic
The conditional edge function routes to the tool node or ends the run, depending on whether the LLM made a tool call.
async function shouldContinue(state: z.infer<typeof MessagesState>) {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !isAIMessage(lastMessage)) return END;
// If the LLM makes a tool call, then perform an action
if (lastMessage.tool_calls?.length) {
return "toolNode";
}
// Otherwise, we stop (reply to the user)
return END;
}
6. Build and compile the agent
The agent is built with the StateGraph class and compiled with the compile method.
const agent = new StateGraph(MessagesState)
.addNode("llmCall", llmCall)
.addNode("toolNode", toolNode)
.addEdge(START, "llmCall")
.addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
.addEdge("toolNode", "llmCall")
.compile();
// Invoke
import { HumanMessage } from "@langchain/core/messages";
const result = await agent.invoke({
messages: [new HumanMessage("Add 3 and 4.")],
});
for (const message of result.messages) {
console.log(`[${message.getType()}]: ${message.text}`);
}
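If you would rather observe intermediate steps than wait for the final state, the compiled graph also supports streaming. A minimal sketch using streamMode: "values", which emits the full state snapshot after each node:

const stream = await agent.stream(
  { messages: [new HumanMessage("Divide 42 by 6.")] },
  { streamMode: "values" }
);
for await (const step of stream) {
  // Log the most recent message after each node finishes.
  const last = step.messages.at(-1);
  console.log(`[${last?.getType()}]: ${last?.text}`);
}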
To learn how to trace your agent with LangSmith, see the LangSmith documentation.
Full code example
// Step 1: Define tools and model
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import * as z from "zod";
const model = new ChatAnthropic({
model: "claude-sonnet-4-5-20250929",
temperature: 0,
});
// Define tools
const add = tool(({ a, b }) => a + b, {
name: "add",
description: "Add two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const multiply = tool(({ a, b }) => a * b, {
name: "multiply",
description: "Multiply two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
const divide = tool(({ a, b }) => a / b, {
name: "divide",
description: "Divide two numbers",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
});
// Augment the LLM with tools
const toolsByName = {
[add.name]: add,
[multiply.name]: multiply,
[divide.name]: divide,
};
const tools = Object.values(toolsByName);
const modelWithTools = model.bindTools(tools);
// Step 2: Define state
import { StateGraph, START, END } from "@langchain/langgraph";
import { MessagesZodMeta } from "@langchain/langgraph";
import { registry } from "@langchain/langgraph/zod";
import { type BaseMessage } from "@langchain/core/messages";
const MessagesState = z.object({
messages: z
.array(z.custom<BaseMessage>())
.register(registry, MessagesZodMeta),
llmCalls: z.number().optional(),
});
// Step 3: Define model node
import { SystemMessage } from "@langchain/core/messages";
async function llmCall(state: z.infer<typeof MessagesState>) {
return {
messages: await modelWithTools.invoke([
new SystemMessage(
"You are a helpful assistant tasked with performing arithmetic on a set of inputs."
),
...state.messages,
]),
llmCalls: (state.llmCalls ?? 0) + 1,
};
}
// Step 4: Define tool node
import { isAIMessage, ToolMessage } from "@langchain/core/messages";
async function toolNode(state: z.infer<typeof MessagesState>) {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !isAIMessage(lastMessage)) {
return { messages: [] };
}
const result: ToolMessage[] = [];
for (const toolCall of lastMessage.tool_calls ?? []) {
const tool = toolsByName[toolCall.name];
const observation = await tool.invoke(toolCall);
result.push(observation);
}
return { messages: result };
}
// Step 5: Define logic to determine whether to end
async function shouldContinue(state: z.infer<typeof MessagesState>) {
const lastMessage = state.messages.at(-1);
if (lastMessage == null || !isAIMessage(lastMessage)) return END;
// If the LLM makes a tool call, then perform an action
if (lastMessage.tool_calls?.length) {
return "toolNode";
}
// Otherwise, we stop (reply to the user)
return END;
}
// Step 6: Build and compile the agent
const agent = new StateGraph(MessagesState)
.addNode("llmCall", llmCall)
.addNode("toolNode", toolNode)
.addEdge(START, "llmCall")
.addConditionalEdges("llmCall", shouldContinue, ["toolNode", END])
.addEdge("toolNode", "llmCall")
.compile();
// Invoke
import { HumanMessage } from "@langchain/core/messages";
const result = await agent.invoke({
messages: [new HumanMessage("Add 3 and 4.")],
});
for (const message of result.messages) {
console.log(`[${message.getType()}]: ${message.text}`);
}