LangChain v1 is a focused, production-ready foundation for building agents. We streamlined the framework around three core improvements. To upgrade:
pip install -U langchain
For a complete list of changes, see the migration guide.

create_agent

create_agent is the standard way to build agents in LangChain 1.0. It offers a simpler interface than langgraph.prebuilt.create_react_agent while enabling deeper customization through middleware.
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[search_web, analyze_data, send_email],
    system_prompt="You are a helpful research assistant."
)

result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Research AI safety trends"}
    ]
})
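The returned value is the agent's final state; the last message holds the model's final response:
print(result["messages"][-1].content)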
Under the hood, create_agent builds on the basic agent loop: call the model, let it choose tools to execute, and finish when it stops calling tools:
[Diagram: the core agent loop]
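In rough pseudocode, the loop looks like this (an illustrative sketch, not the actual implementation; execute stands in for the real tool-execution step):
messages = [system_prompt, user_input]
while True:
    response = model.invoke(messages)        # call the model
    messages.append(response)
    if not response.tool_calls:              # no tool calls requested: done
        break
    for tool_call in response.tool_calls:    # run each requested tool
        messages.append(execute(tool_call))  # append the tool result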
For more information, see Agents.

Middleware

Middleware is the defining feature of create_agent. It provides highly customizable entry points that raise the ceiling on what you can build. Good agents require context engineering: getting the right information to the model at the right time. Middleware helps you control dynamic prompts, conversation summarization, selective tool access, state management, and guardrails through composable abstractions.

Prebuilt middleware

LangChain provides prebuilt middleware for common patterns, including:
from langchain.agents import create_agent
from langchain.agents.middleware import (
    PIIMiddleware,
    SummarizationMiddleware,
    HumanInTheLoopMiddleware
)


agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[
        PIIMiddleware("email", strategy="redact", apply_to_input=True),
        PIIMiddleware(
            "phone_number",
            detector=(
                r"(?:\+?\d{1,3}[\s.-]?)?"
                r"(?:\(?\d{2,4}\)?[\s.-]?)?"
                r"\d{3,4}[\s.-]?\d{4}"
            ),
            strategy="block"
        ),
        SummarizationMiddleware(
            model="claude-sonnet-4-5-20250929",
            max_tokens_before_summary=500
        ),
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {
                    "allowed_decisions": ["approve", "edit", "reject"]
                }
            }
        ),
    ]
)
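With this stack, sensitive data is handled before it reaches the model. For example (the address below is illustrative; it is redacted from the input because apply_to_input=True, while a matching phone number would block the request entirely):
result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Reply to alice@example.com about the invoice"}
    ]
})
# "alice@example.com" is replaced with a redaction marker before the model call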

Custom middleware

You can also build custom middleware to fit your needs. Middleware exposes hooks at each step in an agent’s execution:
[Diagram: middleware hooks around each step of agent execution]
Build custom middleware by implementing any of these hooks on a subclass of the AgentMiddleware class:
Hook            | When it runs              | Use cases
before_agent    | Before calling the agent  | Load memory, validate input
before_model    | Before each LLM call      | Update prompts, trim messages
wrap_model_call | Around each LLM call      | Intercept and modify requests/responses
wrap_tool_call  | Around each tool call     | Intercept and modify tool execution
after_model     | After each LLM response   | Validate output, apply guardrails
after_agent     | After the agent completes | Save results, clean up
Example custom middleware:
from dataclasses import dataclass
from typing import Callable

from langchain_openai import ChatOpenAI

from langchain.agents.middleware import (
    AgentMiddleware,
    ModelRequest
)
from langchain.agents.middleware.types import ModelResponse

@dataclass
class Context:
    user_expertise: str = "beginner"

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse]
    ) -> ModelResponse:
        user_level = request.runtime.context.user_expertise

        if user_level == "expert":
            # More powerful model
            model = ChatOpenAI(model="gpt-5")
            tools = [advanced_search, data_analysis]
        else:
            # Less powerful model
            model = ChatOpenAI(model="gpt-5-nano")
            tools = [simple_search, basic_calculator]

        request.model = model
        request.tools = tools
        return handler(request)

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[
        simple_search,
        advanced_search,
        basic_calculator,
        data_analysis
    ],
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)
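Wrapping hooks are also a natural home for cross-cutting concerns such as retries. A minimal sketch that reuses the wrap_model_call signature from the example above (the three-attempt policy and broad except clause are illustrative):
from typing import Callable

from langchain.agents.middleware import AgentMiddleware, ModelRequest
from langchain.agents.middleware.types import ModelResponse

class ModelRetryMiddleware(AgentMiddleware):
    """Retry transient model failures before giving up."""

    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse]
    ) -> ModelResponse:
        last_exc = None
        for _ in range(3):  # illustrative retry budget
            try:
                return handler(request)
            except Exception as exc:  # narrow to transient errors in practice
                last_exc = exc
        raise last_exc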
For more information, see the complete middleware guide.

Built on LangGraph

Because create_agent is built on LangGraph, you automatically get built-in support for long-running, reliable agents via:

  • Persistence: Conversations automatically persist across sessions with built-in checkpointing
  • Streaming: Stream tokens, tool calls, and reasoning traces in real time
  • Human-in-the-loop: Pause agent execution for human approval before sensitive actions
  • Time travel: Rewind conversations to any point and explore alternate paths and prompts
You don’t need to learn LangGraph to use these features; they work out of the box.
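For example, persistence only requires a checkpointer and a thread ID. A minimal sketch (InMemorySaver is LangGraph's in-memory checkpointer, suitable for development; search_web stands in for your own tool):
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[search_web],
    checkpointer=InMemorySaver(),  # in-memory persistence for development
)

config = {"configurable": {"thread_id": "session-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Hi, I'm Alice"}]}, config)

# Same thread_id: earlier messages are restored automatically
agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)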

Structured output

create_agent has improved structured output generation:
  • Main loop integration: Structured output is now generated in the main loop instead of requiring an additional LLM call
  • Structured output strategy: Models can choose between calling tools or using provider-side structured output generation
  • Cost reduction: Eliminates extra expense from additional LLM calls
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_agent(
    "gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(Weather)
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in SF?"}]
})

print(repr(result["structured_response"]))
# results in `Weather(temperature=70.0, condition='sunny')`
Error handling: control failure behavior via the handle_errors parameter on ToolStrategy:
  • Parsing errors: Model generates data that doesn’t match desired structure
  • Multiple tool calls: Model generates 2+ tool calls for structured output schemas
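For example, a custom retry message can be fed back to the model when its output fails to parse (a sketch assuming handle_errors accepts a plain string for this purpose; see the ToolStrategy reference for the accepted values):
agent = create_agent(
    "gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        Weather,
        # Assumption: a string here becomes the error message sent back
        # to the model so it can retry with corrected output
        handle_errors="Return a single tool call matching the Weather schema."
    )
)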

Standard content blocks

Content block support is currently available only for select integrations; broader support will be rolled out gradually across more providers.
The new content_blocks property introduces a standard representation for message content that works across providers:
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
response = model.invoke("What's the capital of France?")

# Unified access to content blocks
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(f"Model reasoning: {block['reasoning']}")
    elif block["type"] == "text":
        print(f"Response: {block['text']}")
    elif block["type"] == "tool_call":
        print(f"Tool call: {block['name']}({block['args']})")

Benefits

  • Provider agnostic: Access reasoning traces, citations, built-in tools (web search, code interpreters, etc.), and other features using the same API regardless of provider
  • Type safe: Full type hints for all content block types
  • Backward compatible: Standard content can be loaded lazily, so there are no associated breaking changes
For more information, see our guide on content blocks.

Simplified package

LangChain v1 streamlines the langchain package namespace to focus on essential building blocks for agents. The refined namespace exposes the most useful and relevant functionality:

Namespace

Module                | What’s available                             | Notes
langchain.agents      | create_agent, AgentState                     | Core agent creation functionality
langchain.messages    | Message types, content blocks, trim_messages | Re-exported from langchain-core
langchain.tools       | @tool, BaseTool, injection helpers           | Re-exported from langchain-core
langchain.chat_models | init_chat_model, BaseChatModel               | Unified model initialization
langchain.embeddings  | Embeddings, init_embeddings                  | Embedding models
Most of these are re-exported from langchain-core for convenience, which gives you a focused API surface for building agents.
# Agent building
from langchain.agents import create_agent

# Messages and content
from langchain.messages import AIMessage, HumanMessage

# Tools
from langchain.tools import tool

# Model initialization
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
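init_chat_model gives you a single entry point for chat models across providers; the provider is inferred from the model name here (the model name is illustrative):
from langchain.chat_models import init_chat_model

model = init_chat_model("claude-sonnet-4-5-20250929", temperature=0)
print(model.invoke("What's the capital of France?").content)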

langchain-classic

Legacy functionality has moved to langchain-classic to keep the core packages lean and focused. What’s in langchain-classic:
  • Legacy chains and chain implementations
  • Retrievers (e.g. MultiQueryRetriever or anything from the previous langchain.retrievers module)
  • The indexing API
  • The hub module (for managing prompts programmatically)
  • langchain-community exports
  • Other deprecated functionality
If you use any of this functionality, install langchain-classic:
pip install langchain-classic
Then update your imports:
# Old:
from langchain import ...
# New:
from langchain_classic import ...

# Old:
from langchain.chains import ...
# New:
from langchain_classic.chains import ...

# Old:
from langchain.retrievers import ...
# New:
from langchain_classic.retrievers import ...

# Old:
from langchain import hub
# New:
from langchain_classic import hub

Migration guide

See our migration guide for help updating your code to LangChain v1.

Reporting issues

Please report any issues discovered with 1.0 on GitHub using the 'v1' label.
