An application must be configured with a configuration file in order to be deployed to LangSmith (or self-hosted). This how-to guide covers the basic steps to set up a JavaScript application for deployment, using a package.json file to specify project dependencies. The walkthrough is based on this repository, which you can explore to learn more about how to set up an application for deployment. The final repository structure will look like this:
my-app/
├── src # all project code lies within here
│   ├── utils # optional utilities for your graph
│   │   ├── tools.ts # tools for your graph
│   │   ├── nodes.ts # node functions for your graph
│   │   └── state.ts # state definition of your graph
│   └── agent.ts # code for constructing your graph
├── package.json # package dependencies
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph
LangSmith Deployment supports deploying LangGraph applications. However, the implementation of a graph's nodes can contain arbitrary JavaScript/TypeScript code. This means any framework can be implemented inside a node and deployed on LangSmith Deployment, letting you keep your core application logic outside of LangGraph while still using LangSmith for deployment, scaling, and observability.
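For example, because a node is just a function, it can delegate to application logic that has nothing to do with LangGraph. A minimal sketch, where runMyPipeline is a hypothetical stand-in for your own code:

import { AIMessage } from "@langchain/core/messages";
import { MessagesAnnotation } from "@langchain/langgraph";

// Hypothetical application logic: this could be implemented with any
// framework, or with no framework at all.
async function runMyPipeline(input: string): Promise<string> {
  return `processed: ${input}`;
}

// The node adapts graph state to your existing code and maps the
// result back into a state update.
async function customNode(state: typeof MessagesAnnotation.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  const result = await runMyPipeline(String(lastMessage.content));
  return { messages: [new AIMessage(result)] };
}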
After each step, an example file directory is provided to demonstrate how code can be organized.

Specify dependencies

Dependencies can be specified in a package.json file. If this file is not created, dependencies can be specified later in the configuration file. Example package.json file:
{
  "name": "langgraphjs-studio-starter",
  "packageManager": "yarn@1.22.22",
  "dependencies": {
    "@langchain/community": "^0.2.31",
    "@langchain/core": "^0.2.31",
    "@langchain/langgraph": "^0.2.0",
    "@langchain/openai": "^0.2.8"
  }
}
When the application is deployed, dependencies will be installed with your chosen package manager, provided they comply with the compatible version ranges listed below:
"@langchain/core": "^0.3.42",
"@langchain/langgraph": "^0.2.57",
"@langchain/langgraph-checkpoint": "~0.0.16",
Example file directory:
my-app/
└── package.json # package dependencies

Specify environment variables

Environment variables can optionally be specified in a file (e.g. .env). See the Environment Variables reference to configure additional variables for a deployment. Example .env file:
MY_ENV_VAR_1=foo
MY_ENV_VAR_2=bar
OPENAI_API_KEY=key
TAVILY_API_KEY=key_2
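In a deployment, the values in this file are injected for you via the env key of the configuration file created below. For local development you can load them yourself; a minimal sketch, assuming the dotenv package has been added as a dependency:

// Load variables from .env into process.env (local development only).
import "dotenv/config";

// Fail fast if a key the graph depends on is missing.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}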
Example file directory:
my-app/
├── package.json
└── .env # environment variables

Define graphs

Implement your graphs. Graphs can be defined in a single file or multiple files. Make note of the variable names of each compiled graph to be included in the application. The variable names will be used later when creating the configuration file. Here is an example agent.ts:
import type { AIMessage } from "@langchain/core/messages";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";

import { MessagesAnnotation, StateGraph } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

const tools = [new TavilySearchResults({ maxResults: 3 })];

// Define the function that calls the model
async function callModel(state: typeof MessagesAnnotation.State) {
  /**
   * Call the LLM powering our agent.
   * Feel free to customize the prompt, model, and other logic!
   */
  const model = new ChatOpenAI({
    model: "gpt-4o",
  }).bindTools(tools);

  const response = await model.invoke([
    {
      role: "system",
      content: `You are a helpful assistant. The current date is ${new Date().toISOString()}.`,
    },
    ...state.messages,
  ]);

  // MessagesAnnotation supports returning a single message or array of messages
  return { messages: response };
}

// Define the function that determines whether to continue or not
function routeModelOutput(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;
  const lastMessage: AIMessage = messages[messages.length - 1];
  // If the LLM is invoking tools, route there.
  if ((lastMessage?.tool_calls?.length ?? 0) > 0) {
    return "tools";
  }
  // Otherwise end the graph.
  return "__end__";
}

// Define a new graph.
// See https://langchain-ai.github.io/langgraphjs/how-tos/define-state/#getting-started for
// more on defining custom graph states.
const workflow = new StateGraph(MessagesAnnotation)
  // Define the two nodes we will cycle between
  .addNode("callModel", callModel)
  .addNode("tools", new ToolNode(tools))
  // Set the entrypoint as `callModel`
  // This means that this node is the first one called
  .addEdge("__start__", "callModel")
  .addConditionalEdges(
    // First, we define the edges' source node. We use `callModel`.
    // This means these are the edges taken after the `callModel` node is called.
    "callModel",
    // Next, we pass in the function that will determine the sink node(s), which
    // will be called after the source node is called.
    routeModelOutput,
    // List of the possible destinations the conditional edge can route to.
    // Required for conditional edges to properly render the graph in Studio
    ["tools", "__end__"]
  )
  // This means that after `tools` is called, `callModel` node is called next.
  .addEdge("tools", "callModel");

// Finally, we compile it!
// This compiles it into a graph you can invoke and deploy.
export const graph = workflow.compile();
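Graphs can also be split across multiple files: each compiled graph simply needs its own exported variable, and those export names are what the configuration file references. A minimal sketch of a hypothetical second graph in src/other_agent.ts:

import { MessagesAnnotation, StateGraph } from "@langchain/langgraph";

const otherWorkflow = new StateGraph(MessagesAnnotation)
  // A no-op node, just to illustrate a second compiled graph.
  .addNode("noop", async () => ({}))
  .addEdge("__start__", "noop")
  .addEdge("noop", "__end__");

// `otherGraph` is the variable name referenced from langgraph.json.
export const otherGraph = otherWorkflow.compile();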
Example file directory:
my-app/
├── src # all project code lies within here
│   ├── utils # optional utilities for your graph
│   │   ├── tools.ts # tools for your graph
│   │   ├── nodes.ts # node functions for your graph
│   │   └── state.ts # state definition of your graph
│   └── agent.ts # code for constructing your graph
├── package.json # package dependencies
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph

Create the API config

Create a configuration file called langgraph.json. See the configuration file reference for a detailed explanation of each key in the configuration file's JSON object. Example langgraph.json file:
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/index.ts:graph"
  },
  "env": ".env"
}
Note that the variable name of the CompiledGraph appears at the end of the value of each subkey in the top-level graphs key (i.e., :<variable_name>).
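If the application exposes more than one graph, each gets its own subkey under graphs. For example, registering the hypothetical second graph sketched earlier alongside the main one:

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.ts:graph",
    "other_agent": "./src/other_agent.ts:otherGraph"
  },
  "env": ".env"
}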
Configuration file location: the configuration file must be placed in a directory at the same level as, or higher than, the TypeScript/JavaScript files that contain the compiled graphs and their dependencies.
Example file directory:
my-app/
├── src # all project code lies within here
│   ├── utils # optional utilities for your graph
│   │   ├── tools.ts # tools for your graph
│   │   ├── nodes.ts # node functions for your graph
│   │   └── state.ts # state definition of your graph
│   └── agent.ts # code for constructing your graph
├── .env # environment variables
├── langgraph.json # configuration file for LangGraph
├── package.json # package dependencies
└── tsconfig.json # TypeScript configuration

Next steps

After you set up your project and place it in a GitHub repository, it's time to deploy your app.