You can specify the destination project for your traces statically via an environment variable, or switch it dynamically at runtime.

Set the destination project statically

As described in the tracing concepts, LangSmith organizes traces by Project. If none is specified, traces are sent to the default project. By setting the LANGSMITH_PROJECT environment variable, you can use a custom project name for the entire application run. Be sure to set it before starting the application.
export LANGSMITH_PROJECT=my-custom-project
LANGSMITH_PROJECT is only supported in JS SDK v0.2.16 and above; if you are on an older version, use LANGCHAIN_PROJECT instead.
If the specified project does not exist, it is created automatically when the first trace is received.
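If your application is launched from a Python entry point, the same effect can be achieved in-process. A minimal sketch (the variable must be set before any LangSmith client is initialized, since the SDK reads it at startup):

```python
import os

# Equivalent to `export LANGSMITH_PROJECT=my-custom-project`,
# but scoped to this process. Set it before importing or calling
# any code that creates a LangSmith client.
os.environ["LANGSMITH_PROJECT"] = "my-custom-project"
```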

Set the destination project dynamically

Depending on how you have annotated your code for tracing, you can also specify the project name dynamically at runtime in several ways. This is useful for routing traces from different parts of the same application to different projects.
When the project name is set dynamically using any of the methods below, it overrides the value of the LANGSMITH_PROJECT environment variable.
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()
messages = [
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Hello!"}
]

# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work
@traceable(
  run_type="llm",
  name="OpenAI Call Decorator",
  project_name="My Project"
)
def call_openai(
  messages: list[dict], model: str = "gpt-4o-mini"
) -> str:
  return client.chat.completions.create(
      model=model,
      messages=messages,
  ).choices[0].message.content

# Call the decorated function
call_openai(messages)

# You can also specify the Project via the project_name parameter
# This will override the project_name specified in the @traceable decorator
call_openai(
  messages,
  langsmith_extra={"project_name": "My Overridden Project"},
)

# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically.
# Ensure that the LANGSMITH_TRACING environment variable is set for the wrapper to work.
from langsmith import wrappers
wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
  model="gpt-4o-mini",
  messages=messages,
  langsmith_extra={"project_name": "My Project"},
)

# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
  run_type="llm",
  name="OpenAI Call RunTree",
  inputs={"messages": messages},
  project_name="My Project"
)
chat_completion = client.chat.completions.create(
  model="gpt-4o-mini",
  messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
