Interrupts let you pause graph execution at specific points and wait for external input before continuing. This enables human-in-the-loop patterns, where the graph needs external input before it can proceed. When an interrupt is triggered, LangGraph uses its persistence layer to save the graph state and waits indefinitely until you resume execution.

Interrupts work by calling the interrupt() function anywhere inside a graph node. The function accepts any JSON-serializable value, which is surfaced to the caller. When you are ready to continue, you resume execution by re-invoking the graph with a Command, and that value becomes the return value of the interrupt() call inside the node.

Unlike static breakpoints (which pause before or after specific nodes), interrupts are dynamic: they can be placed anywhere in your code and can execute conditionally based on your application logic.
  • **The checkpointer keeps your place:** the checkpointer writes the exact graph state so you can resume later, even after an error.
  • **thread_id is your pointer:** pass { configurable: { thread_id: ... } } as the options argument to the invoke method to tell the checkpointer which state to load.
  • **The interrupt payload surfaces as __interrupt__:** the value you pass to interrupt() is returned to the caller in the __interrupt__ field, so you know what the graph is waiting for.
The thread_id you choose is effectively your durable cursor. Reusing it resumes the same checkpoint; using a new value starts a fresh thread with empty state.
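The cursor semantics can be sketched without the library. Conceptually, a checkpointer is a map from thread_id to saved state; this is a simplified model for illustration, not LangGraph's actual storage format or API:

```typescript
// Simplified model of a checkpointer: thread_id -> saved state.
type GraphState = { step: number };

const checkpoints = new Map<string, GraphState>();

function invoke(threadId: string): GraphState {
  // Reusing a thread_id loads its checkpoint; a new id starts from empty state
  const state = checkpoints.get(threadId) ?? { step: 0 };
  const next = { step: state.step + 1 };
  checkpoints.set(threadId, next); // persist before returning
  return next;
}

const a1 = invoke("thread-1"); // fresh thread: step 1
const a2 = invoke("thread-1"); // same cursor: resumes and advances to step 2
const b1 = invoke("thread-2"); // new cursor: starts over at step 1
```

Reusing `"thread-1"` picks up where the previous call left off, while `"thread-2"` knows nothing about it.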

Pausing with interrupt

The interrupt function pauses graph execution and returns a value to the caller. When you call interrupt inside a node, LangGraph saves the current graph state and waits for you to resume execution with input. To use interrupt, you need:
  1. A checkpointer to persist graph state (use a durable checkpointer in production)
  2. A thread ID in the config, so the runtime knows which state to resume from
  3. A call to interrupt() wherever you want to pause (the payload must be JSON-serializable)
import { interrupt } from "@langchain/langgraph";

async function approvalNode(state: State) {
    // Pause and ask for approval
    const approved = interrupt("Do you approve this action?");

    // Command({ resume: ... }) provides the value returned into this variable
    return { approved };
}
When you call interrupt, the following happens:
  1. Graph execution pauses at the exact point where interrupt was called
  2. The state is saved with the checkpointer so execution can resume later; in production this should be a durable checkpointer (e.g. one backed by a database)
  3. The value is returned to the caller under __interrupt__; it can be any JSON-serializable value (string, object, array, etc.)
  4. The graph waits indefinitely until you resume execution with a response
  5. The response is passed back into the node when you resume, becoming the return value of the interrupt() call

Resuming after an interrupt

After an interrupt pauses execution, you resume the graph by invoking it again with a Command that contains the resume value. That value is passed back to the interrupt call, letting the node continue with the external input.
import { Command } from "@langchain/langgraph";

// Initial run - hits the interrupt and pauses
// thread_id is the durable pointer back to the saved checkpoint
const config = { configurable: { thread_id: "thread-1" } };
const result = await graph.invoke({ input: "data" }, config);

// Check what was interrupted
// __interrupt__ mirrors every payload you passed to interrupt()
console.log(result.__interrupt__);
// [{ value: 'Do you approve this action?', ... }]

// Resume with the human's response
// Command({ resume }) returns that value from interrupt() in the node
await graph.invoke(new Command({ resume: true }), config);
Key points about resuming:
  • You must use the same thread ID when resuming that was used when the interrupt occurred
  • The value passed to Command({ resume: ... }) becomes the return value of the interrupt call
  • When resumed, the node restarts from its beginning, so any code before the interrupt call runs again
  • You can pass any JSON-serializable value as the resume value

Common patterns

The key capability interrupts unlock is pausing execution to wait for external input. This is useful for a variety of use cases, including:
  • Approval workflows: Pause before executing critical actions (API calls, database changes, financial transactions)
  • Review and edit: Let humans review and modify LLM outputs or tool calls before continuing
  • Interrupting tool calls: Pause before a tool call executes so a human can review and edit it first
  • Validating human input: Pause so human input can be validated before the graph proceeds to the next step

Approve or reject

One of the most common uses of interrupts is pausing before a critical action to ask for approval. For example, you might require human sign-off on an API call, a database change, or any other consequential decision.
import { interrupt, Command } from "@langchain/langgraph";

function approvalNode(state: State): Command {
  // Pause execution; payload surfaces in result.__interrupt__
  const isApproved = interrupt({
    question: "Do you want to proceed?",
    details: state.actionDetails
  });

  // Route based on the response
  if (isApproved) {
    return new Command({ goto: "proceed" }); // Runs after the resume payload is provided
  } else {
    return new Command({ goto: "cancel" });
  }
}
When you resume the graph, pass true to approve or false to reject:
// To approve
await graph.invoke(new Command({ resume: true }), config);

// To reject
await graph.invoke(new Command({ resume: false }), config);
import {
  Command,
  MemorySaver,
  START,
  END,
  StateGraph,
  interrupt,
} from "@langchain/langgraph";
import * as z from "zod";

const State = z.object({
  actionDetails: z.string(),
  status: z.enum(["pending", "approved", "rejected"]).nullable(),
});

const graphBuilder = new StateGraph(State)
  .addNode("approval", async (state) => {
    // Expose details so the caller can render them in a UI
    const decision = interrupt({
      question: "Approve this action?",
      details: state.actionDetails,
    });
    return new Command({ goto: decision ? "proceed" : "cancel" });
  }, { ends: ['proceed', 'cancel'] })
  .addNode("proceed", () => ({ status: "approved" }))
  .addNode("cancel", () => ({ status: "rejected" }))
  .addEdge(START, "approval")
  .addEdge("proceed", END)
  .addEdge("cancel", END);

// Use a more durable checkpointer in production
const checkpointer = new MemorySaver();
const graph = graphBuilder.compile({ checkpointer });

const config = { configurable: { thread_id: "approval-123" } };
const initial = await graph.invoke(
  { actionDetails: "Transfer $500", status: "pending" },
  config,
);
console.log(initial.__interrupt__);
// [{ value: { question: ..., details: ... } }]

// Resume with the decision; true routes to proceed, false to cancel
const resumed = await graph.invoke(new Command({ resume: true }), config);
console.log(resumed.status); // -> "approved"

Review and edit state

Sometimes you want a human to review and edit part of the graph state before continuing. This is useful for correcting the LLM, adding missing information, or making adjustments.
import { interrupt } from "@langchain/langgraph";

function reviewNode(state: State) {
  // Pause and show the current content for review (surfaces in result.__interrupt__)
  const editedContent = interrupt({
    instruction: "Review and edit this content",
    content: state.generatedText
  });

  // Update the state with the edited version
  return { generatedText: editedContent };
}
When resuming, provide the edited content:
await graph.invoke(
  new Command({ resume: "The edited and improved text" }), // Value becomes the return from interrupt()
  config
);
import {
  Command,
  MemorySaver,
  START,
  END,
  StateGraph,
  interrupt,
} from "@langchain/langgraph";
import * as z from "zod";

const State = z.object({
  generatedText: z.string(),
});

const builder = new StateGraph(State)
  .addNode("review", async (state) => {
    // Ask a reviewer to edit the generated content
    const updated = interrupt({
      instruction: "Review and edit this content",
      content: state.generatedText,
    });
    return { generatedText: updated };
  })
  .addEdge(START, "review")
  .addEdge("review", END);

const checkpointer = new MemorySaver();
const graph = builder.compile({ checkpointer });

const config = { configurable: { thread_id: "review-42" } };
const initial = await graph.invoke({ generatedText: "Initial draft" }, config);
console.log(initial.__interrupt__);
// [{ value: { instruction: ..., content: ... } }]

// Resume with the edited text from the reviewer
const finalState = await graph.invoke(
  new Command({ resume: "Improved draft after review" }),
  config,
);
console.log(finalState.generatedText); // -> "Improved draft after review"

Interrupts in tools

You can also place interrupts directly inside tool functions. This makes the tool itself pause for approval whenever it’s called, and allows for human review and editing of the tool call before it is executed. First, define a tool that uses interrupt:
import { tool } from "@langchain/core/tools";
import { interrupt } from "@langchain/langgraph";
import * as z from "zod";

const sendEmailTool = tool(
  async ({ to, subject, body }) => {
    // Pause before sending; payload surfaces in result.__interrupt__
    const response = interrupt({
      action: "send_email",
      to,
      subject,
      body,
      message: "Approve sending this email?",
    });

    if (response?.action === "approve") {
      // Resume value can override inputs before executing
      const finalTo = response.to ?? to;
      const finalSubject = response.subject ?? subject;
      const finalBody = response.body ?? body;
      return `Email sent to ${finalTo} with subject '${finalSubject}'`;
    }
    return "Email cancelled by user";
  },
  {
    name: "send_email",
    description: "Send an email to a recipient",
    schema: z.object({
      to: z.string(),
      subject: z.string(),
      body: z.string(),
    }),
  },
);
This approach is useful when you want the approval logic to live with the tool itself, making it reusable across different parts of your graph. The LLM can call the tool naturally, and the interrupt will pause execution whenever the tool is invoked, allowing you to approve, edit, or cancel the action.
import { tool } from "@langchain/core/tools";
import { ChatAnthropic } from "@langchain/anthropic";
import {
  Command,
  MemorySaver,
  START,
  END,
  StateGraph,
  interrupt,
} from "@langchain/langgraph";
import * as z from "zod";

const sendEmailTool = tool(
  async ({ to, subject, body }) => {
    // Pause before sending; payload surfaces in result.__interrupt__
    const response = interrupt({
      action: "send_email",
      to,
      subject,
      body,
      message: "Approve sending this email?",
    });

    if (response?.action === "approve") {
      const finalTo = response.to ?? to;
      const finalSubject = response.subject ?? subject;
      const finalBody = response.body ?? body;
      console.log("[sendEmailTool]", finalTo, finalSubject, finalBody);
      return `Email sent to ${finalTo}`;
    }
    return "Email cancelled by user";
  },
  {
    name: "send_email",
    description: "Send an email to a recipient",
    schema: z.object({
      to: z.string(),
      subject: z.string(),
      body: z.string(),
    }),
  },
);

const model = new ChatAnthropic({ model: "claude-sonnet-4-5-20250929" }).bindTools([sendEmailTool]);

const Message = z.object({
  role: z.enum(["user", "assistant", "tool"]),
  content: z.string(),
});

const State = z.object({
  messages: z.array(Message),
});

const graphBuilder = new StateGraph(State)
  .addNode("agent", async (state) => {
    // The LLM may decide to call the tool
    const response = await model.invoke(state.messages);
    const messages = [...state.messages, response];
    // Execute any tool calls; interrupt() inside the tool pauses the graph here
    for (const toolCall of response.tool_calls ?? []) {
      const result = await sendEmailTool.invoke(toolCall.args);
      messages.push({ role: "tool", content: String(result) });
    }
    return { messages };
  })
  .addEdge(START, "agent")
  .addEdge("agent", END);

const checkpointer = new MemorySaver();
const graph = graphBuilder.compile({ checkpointer });

const config = { configurable: { thread_id: "email-workflow" } };
const initial = await graph.invoke(
  {
    messages: [
      { role: "user", content: "Send an email to alice@example.com about the meeting" },
    ],
  },
  config,
);
console.log(initial.__interrupt__); // -> [{ value: { action: 'send_email', ... } }]

// Resume with approval and optionally edited arguments
const resumed = await graph.invoke(
  new Command({
    resume: { action: "approve", subject: "Updated subject" },
  }),
  config,
);
console.log(resumed.messages.at(-1)); // -> Tool result returned by send_email

Validating human input

Sometimes you need to validate input from humans and ask again if it’s invalid. You can do this using multiple interrupt calls in a loop.
import { interrupt } from "@langchain/langgraph";

function getAgeNode(state: State) {
  let prompt = "What is your age?";

  while (true) {
    const answer = interrupt(prompt); // payload surfaces in result.__interrupt__

    // Validate the input
    if (typeof answer === "number" && answer > 0) {
      // Valid input - continue
      return { age: answer };
    } else {
      // Invalid input - ask again with a more specific prompt
      prompt = `'${answer}' is not a valid age. Please enter a positive number.`;
    }
  }
}
Each time you resume the graph with invalid input, it will ask again with a clearer message. Once valid input is provided, the node completes and the graph continues.
import {
  Command,
  MemorySaver,
  START,
  END,
  StateGraph,
  interrupt,
} from "@langchain/langgraph";
import * as z from "zod";

const State = z.object({
  age: z.number().nullable(),
});

const builder = new StateGraph(State)
  .addNode("collectAge", (state) => {
    let prompt = "What is your age?";

    while (true) {
      const answer = interrupt(prompt); // payload surfaces in result.__interrupt__

      if (typeof answer === "number" && answer > 0) {
        return { age: answer };
      }

      prompt = `'${answer}' is not a valid age. Please enter a positive number.`;
    }
  })
  .addEdge(START, "collectAge")
  .addEdge("collectAge", END);

const checkpointer = new MemorySaver();
const graph = builder.compile({ checkpointer });

const config = { configurable: { thread_id: "form-1" } };
const first = await graph.invoke({ age: null }, config);
console.log(first.__interrupt__); // -> [{ value: "What is your age?", ... }]

// Provide invalid data; the node re-prompts
const retry = await graph.invoke(new Command({ resume: "thirty" }), config);
console.log(retry.__interrupt__); // -> [{ value: "'thirty' is not a valid age...", ... }]

// Provide valid data; loop exits and state updates
const final = await graph.invoke(new Command({ resume: 30 }), config);
console.log(final.age); // -> 30

Rules of interrupts

When you call interrupt within a node, LangGraph suspends execution by raising an exception that signals the runtime to pause. This exception propagates up through the call stack and is caught by the runtime, which saves the current state and waits for external input. When execution resumes (after you provide the requested input), the runtime restarts the entire node from the beginning; it does not resume from the exact line where interrupt was called. This means any code that ran before the interrupt will execute again. Because of this, there are a few important rules to follow when working with interrupts to ensure they behave as expected.
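This throw-and-replay mechanism can be sketched in plain TypeScript. The following is a conceptual model of the runtime, not LangGraph's internals: interrupt throws when no resume value has been recorded, and the whole node function re-runs once a value is supplied:

```typescript
// Conceptual model: interrupt throws a signal when unanswered,
// and the entire node function re-runs after a resume value arrives.
class InterruptSignal {
  constructor(public payload: unknown) {}
}

const resumeValues: unknown[] = []; // per-task resume list
let cursor = 0;

function interrupt(payload: unknown): unknown {
  if (cursor < resumeValues.length) {
    return resumeValues[cursor++]; // already answered: return recorded value
  }
  throw new InterruptSignal(payload); // otherwise pause the node
}

let sideEffectRuns = 0;

function node(): string {
  sideEffectRuns++; // code before the interrupt re-runs on every attempt
  const answer = interrupt("What's your name?") as string;
  return `Hello, ${answer}`;
}

function runNode(): { interrupted?: unknown; result?: string } {
  cursor = 0; // every (re)run starts at the top of the node
  try {
    return { result: node() };
  } catch (e) {
    if (e instanceof InterruptSignal) return { interrupted: e.payload };
    throw e;
  }
}

const first = runNode();  // pauses: no resume value recorded yet
resumeValues.push("Ada"); // caller resumes with a value
const second = runNode(); // node re-runs from the top, then completes
```

Note that `sideEffectRuns` ends up at 2: the code before the interrupt really does execute twice, which motivates the idempotency rule below.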

Do not wrap interrupt calls in try/catch

interrupt pauses execution at the point of the call by throwing a special exception. If you wrap the interrupt call in a try/catch block, you will catch that exception yourself and the pause signal will never reach the runtime.
  • ✅ Separate interrupt calls from error-prone code
  • ✅ Conditionally catch errors if needed
async function nodeA(state: State) {
    // ✅ Good: interrupting first, then handling error conditions separately
    const name = interrupt("What's your name?");
    try {
        await fetchData(); // This can fail
    } catch (err) {
        console.error(err);
    }
    return state;
}
  • 🔴 Do not wrap interrupt calls in bare try/catch blocks
async function nodeA(state: State) {
    // ❌ Bad: wrapping interrupt in bare try/catch will catch the interrupt exception
    try {
        const name = interrupt("What's your name?");
    } catch (err) {
        console.error(err);
    }
    return state;
}

Do not reorder interrupt calls within a node

It’s common to use multiple interrupts in a single node; however, this can lead to unexpected behavior if not handled carefully. When a node contains multiple interrupt calls, LangGraph keeps a list of resume values specific to the task executing the node. Whenever execution resumes, it starts at the beginning of the node. For each interrupt encountered, LangGraph checks whether a matching value exists in the task’s resume list. Matching is strictly index-based, so the order of interrupt calls within the node matters.
  • ✅ Keep interrupt calls consistent across node executions
async function nodeA(state: State) {
    // ✅ Good: interrupt calls happen in the same order every time
    const name = interrupt("What's your name?");
    const age = interrupt("What's your age?");
    const city = interrupt("What's your city?");

    return {
        name,
        age,
        city
    };
}
  • 🔴 Do not conditionally skip interrupt calls within a node
  • 🔴 Do not loop interrupt calls using logic that isn’t deterministic across executions
async function nodeA(state: State) {
    // ❌ Bad: conditionally skipping interrupts changes the order
    const name = interrupt("What's your name?");

    // On first run, this might skip the interrupt
    // On resume, it might not skip it - causing index mismatch
    if (state.needsAge) {
        const age = interrupt("What's your age?");
    }

    const city = interrupt("What's your city?");

    return { name, city };
}
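The index mismatch can be made concrete with a small, self-contained simulation. Recorded resume values are handed back to interrupt calls strictly by position, so skipping one call shifts every later answer (this is a conceptual model, not LangGraph's internals):

```typescript
// Conceptual model: resume values are matched to interrupt calls by index.
function makeInterrupt(resumeValues: unknown[]) {
  let i = 0;
  return (prompt: string): unknown => resumeValues[i++];
}

// Answers recorded on an earlier run that asked: name, age, city
const recorded = ["Ada", 36, "London"];

// Re-run where the age question is skipped (state.needsAge is now false):
const interrupt = makeInterrupt(recorded);
const firstAnswer = interrupt("What's your name?"); // index 0 -> "Ada" (correct)
const secondAnswer = interrupt("What's your city?"); // index 1 -> 36 (wrong: the age leaked in)
```

Because matching is positional, the city question silently receives the age answer. Keeping the interrupt sequence deterministic avoids this entirely.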

Do not return complex values in interrupt calls

Depending on which checkpointer is used, complex values may not be serializable (e.g. you can’t serialize a function). To make your graphs adaptable to any deployment, it’s best practice to only use values that can be reasonably serialized.
  • ✅ Pass simple, JSON-serializable types to interrupt
  • ✅ Pass dictionaries/objects with simple values
async function nodeA(state: State) {
    // ✅ Good: passing simple types that are serializable
    const name = interrupt("What's your name?");
    const count = interrupt(42);
    const approved = interrupt(true);

    return { name, count, approved };
}
  • 🔴 Do not pass functions, class instances, or other complex objects to interrupt
function validateInput(value: string): boolean {
    return value.length > 0;
}

async function nodeA(state: State) {
    // ❌ Bad: passing a function to interrupt
    // The function cannot be serialized
    const response = interrupt({
        question: "What's your name?",
        validator: validateInput  // This will fail
    });
    return { name: response };
}
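You can see why this fails with a plain JSON round-trip, which is a reasonable stand-in for what a checkpointer must do to the payload:

```typescript
// Functions silently disappear during a JSON round-trip,
// so the resumed graph would never see the validator.
function validateInput(value: string): boolean {
  return value.length > 0;
}

const payload = {
  question: "What's your name?",
  validator: validateInput,
};

const roundTripped = JSON.parse(JSON.stringify(payload));
// roundTripped.question survives; roundTripped.validator is gone
```

JSON.stringify drops function-valued properties without raising an error, which makes this failure mode easy to miss until a resume goes wrong.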

Side effects called before interrupt must be idempotent

Because interrupts work by re-running the node they were called from, side effects that run before the interrupt should ideally be idempotent. For context, idempotency means the same operation can be applied multiple times without changing the result beyond the initial application. For example, a node might make an API call to update a record; if interrupt is called after that API call, the call will re-run each time the node resumes, potentially overwriting the initial update or creating duplicate records.
  • ✅ Use idempotent operations before interrupt
  • ✅ Place side effects after interrupt calls
  • ✅ Separate side effects into separate nodes when possible
async function nodeA(state: State) {
    // ✅ Good: using upsert operation which is idempotent
    // Running this multiple times will have the same result
    await db.upsertUser({
        userId: state.userId,
        status: "pending_approval"
    });

    const approved = interrupt("Approve this change?");

    return { approved };
}
  • 🔴 Do not perform non-idempotent operations before interrupt
  • 🔴 Do not create new records without checking if they exist
async function nodeA(state: State) {
    // ❌ Bad: creating a new record before interrupt
    // This will create duplicate records on each resume
    const auditId = await db.createAuditLog({
        userId: state.userId,
        action: "pending_approval",
        timestamp: new Date()
    });

    const approved = interrupt("Approve this change?");

    return { approved, auditId };
}
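The duplicate-record failure mode can be demonstrated with an in-memory stand-in for a database, under the assumption that a node containing an interrupt runs twice (the initial run plus the resume):

```typescript
// In-memory store standing in for a real database.
const records: { userId: string; status: string }[] = [];

// Non-idempotent: every call appends a new record.
function createAuditLog(userId: string, status: string): void {
  records.push({ userId, status });
}

// Idempotent: repeated calls converge on a single record.
function upsertUser(userId: string, status: string): void {
  const existing = records.find((r) => r.userId === userId);
  if (existing) existing.status = status;
  else records.push({ userId, status });
}

// Simulate the node body running twice: initial run, then the resume replay.
for (let attempt = 0; attempt < 2; attempt++) {
  createAuditLog("user-1", "pending_approval"); // duplicated on replay
  upsertUser("user-2", "pending_approval");     // safe on replay
}

const auditCount = records.filter((r) => r.userId === "user-1").length; // 2
const userCount = records.filter((r) => r.userId === "user-2").length;  // 1
```

The append-style write doubles up after the replay, while the upsert converges on a single record regardless of how many times the node re-runs.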

Using with subgraphs called as functions

When invoking a subgraph within a node, the parent graph will resume execution from the beginning of the node where the subgraph was invoked and the interrupt was triggered. Similarly, the subgraph will also resume from the beginning of the node where interrupt was called.
async function nodeInParentGraph(state: State) {
    someCode(); // <-- This will re-execute when resumed
    // Invoke a subgraph as a function.
    // The subgraph contains an `interrupt` call.
    const subgraphResult = await subgraph.invoke(someInput);
    // ...
}

async function nodeInSubgraph(state: State) {
    someOtherCode(); // <-- This will also re-execute when resumed
    const result = interrupt("What's your name?");
    // ...
}

Debugging with interrupts

To debug and test a graph, you can use static interrupts as breakpoints to step through the graph execution one node at a time. Static interrupts are triggered at defined points either before or after a node executes. You can set these by specifying interruptBefore and interruptAfter when compiling the graph.
Static interrupts are not recommended for human-in-the-loop workflows. Use the interrupt function instead.
const graph = builder.compile({
    interruptBefore: ["node_a"],  
    interruptAfter: ["node_b", "node_c"],  
    checkpointer,
});

// Pass a thread ID to the graph
const config = {
    configurable: {
        thread_id: "some_thread"
    }
};

// Run the graph until the first breakpoint
await graph.invoke(inputs, config);

// Resume by passing null as the input; this runs the graph until the next breakpoint
await graph.invoke(null, config);
  1. The breakpoints are set during compile time.
  2. interruptBefore specifies the nodes where execution should pause before the node is executed.
  3. interruptAfter specifies the nodes where execution should pause after the node is executed.
  4. A checkpointer is required to enable breakpoints.
  5. The graph is run until the first breakpoint is hit.
  6. The graph is resumed by passing in null for the input. This will run the graph until the next breakpoint is hit.

Using LangGraph Studio

You can use LangGraph Studio to set static interrupts in your graph from the UI before running it. You can also use the UI to inspect the graph state at any point during execution.