This Helm repository contains queries that generate output the LangSmith UI does not currently support directly (for example, fetching trace counts for multiple organizations in a single query). The command takes a Postgres connection string with an embedded username and password (which can be passed in from a call to a secrets manager) and executes queries from an input file. In the example below, we use the pg_get_trace_counts_daily.sql input file from the support_queries/postgres directory.
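For example, the connection string can come straight from a secrets-manager call. This is a minimal sketch assuming the AWS CLI and a hypothetical secret named langsmith/postgres-url that stores the full connection string:
# Fetch the connection string from AWS Secrets Manager (secret name is hypothetical)
POSTGRES_URL=$(aws secretsmanager get-secret-value \
  --secret-id langsmith/postgres-url \
  --query SecretString \
  --output text)
sh run_support_query_pg.sh "$POSTGRES_URL" --input support_queries/postgres/pg_get_trace_counts_daily.sql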

Prerequisites

Make sure you have the following tools/items ready.
  1. kubectl
  2. PostgreSQL client
  3. PostgreSQL database connection:
    • Host
    • Port
    • Username
      • If using the bundled version, this is postgres
    • Password
      • If using the bundled version, this is postgres
    • Database name
      • If using the bundled version, this is postgres
  4. Connectivity from the machine where you will run the query script to the PostgreSQL database.
    • If you are using the bundled version, you may need to port-forward the postgresql service to your local machine.
    • Run kubectl port-forward svc/langsmith-postgres 5432:5432 to forward the postgresql service to your local machine (see the connectivity check after this list).
  5. The script for running the support queries
    • You can download the script from here
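As a quick connectivity check (a minimal sketch assuming the bundled deployment and its default postgres/postgres credentials), you can port-forward the service and confirm the database is reachable with psql:
# Forward the bundled PostgreSQL service to your local machine, in the background
kubectl port-forward svc/langsmith-postgres 5432:5432 &

# Verify connectivity with the PostgreSQL client
PGPASSWORD=postgres psql -h localhost -p 5432 -U postgres -d postgres -c "SELECT 1;"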

Run the query script

Run the following command to execute the desired query:
sh run_support_query_pg.sh <postgres_url> --input path/to/query.sql
For example, if you are using the bundled version with port-forwarding, the command might look like this:
sh run_support_query_pg.sh "postgres://postgres:postgres@localhost:5432/postgres" --input support_queries/pg_get_trace_counts_daily.sql
This outputs daily trace counts by workspace ID and organization ID. To write the results to a file, add the flag --output path/to/file.csv.
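For example, to write the daily trace counts to a CSV using the bundled setup above (trace_counts_daily.csv is an arbitrary output path):
sh run_support_query_pg.sh "postgres://postgres:postgres@localhost:5432/postgres" \
  --input support_queries/postgres/pg_get_trace_counts_daily.sql \
  --output trace_counts_daily.csv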

Export usage data

Exporting usage data requires Helm chart version 0.11.4 or later.

Get customer information

Before running the export scripts, you need to retrieve your customer information from the LangSmith API. This information is a required input to the export scripts.
curl https://<langsmith_url>/api/v1/info
# if configured with a subdomain / path prefix:
curl http://<langsmith_url/prefix/api/v1/info
This will return a JSON response containing your customer information:
{
  "version": "0.11.4",
  "license_expiration_time": "2026-08-18T19:14:34Z",
  "customer_info": {
    "customer_id": "<id>",
    "customer_name": "<name>"
  }
}
Extract the customer_id and customer_name from this response to use as input for the export scripts.

Processing the API response with jq

You can use jq to parse the JSON response and set bash variables for use in your scripts:
# Get the API response and extract customer information
export LANGSMITH_URL="<your_langsmith_url>"
response=$(curl -s $LANGSMITH_URL/api/v1/info)

# Extract customer_id and customer_name using jq
export CUSTOMER_ID=$(echo "$response" | jq -r '.customer_info.customer_id')
export CUSTOMER_NAME=$(echo "$response" | jq -r '.customer_info.customer_name')

# Verify the variables are set
echo "Customer ID: $CUSTOMER_ID"
echo "Customer Name: $CUSTOMER_NAME"
You can then use these environment variables in your export scripts or other commands. If you don’t have jq, run these commands to set the environment variables based on the curl output:
curl -s $LANGSMITH_URL/api/v1/info
export CUSTOMER_ID="<id>"
export CUSTOMER_NAME="<name>"
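Alternatively, if Python 3 is available, this sketch extracts the same fields without jq (it assumes the response shape shown above):
# Capture the response and parse it with Python's standard-library json module
response=$(curl -s $LANGSMITH_URL/api/v1/info)
export CUSTOMER_ID=$(echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["customer_info"]["customer_id"])')
export CUSTOMER_NAME=$(echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["customer_info"]["customer_name"])')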

Initial export

These scripts export usage data to a CSV for reporting to LangChain. They additionally track the export by assigning a backfill ID and timestamp. To export LangSmith trace usage:
# Get customer information from the API
export LANGSMITH_URL="<your_langsmith_url>"
export response=$(curl -s $LANGSMITH_URL/api/v1/info)
export CUSTOMER_ID=$(echo "$response" | jq -r '.customer_info.customer_id') && echo "Customer ID: $CUSTOMER_ID"
export CUSTOMER_NAME=$(echo "$response" | jq -r '.customer_info.customer_name') && echo "Customer name: $CUSTOMER_NAME"

# Run the export script with customer information as variables
sh run_support_query_pg.sh <postgres_url> \
  --input support_queries/postgres/pg_usage_traces_backfill_export.sql \
  --output ls_export.csv \
  -v customer_id=$CUSTOMER_ID \
  -v customer_name=$CUSTOMER_NAME
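Before reporting the file to LangChain, it can be worth a quick sanity check that the export produced data (ls_export.csv is the output path from the command above):
# Count the rows (header plus data) and preview the first few
wc -l ls_export.csv
head -n 5 ls_export.csv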
To export LangGraph Platform usage:
sh run_support_query_pg.sh <postgres_url> \
  --input support_queries/postgres/pg_usage_nodes_backfill_export.sql \
  --output lgp_export.csv \
  -v customer_id=$CUSTOMER_ID \
  -v customer_name=$CUSTOMER_NAME

Status update

These scripts update the status of usage events in your installation to reflect that the events have been successfully processed by LangChain. The scripts require passing in the corresponding backfill_id, which will be confirmed by your LangChain rep. To update LangSmith trace usage:
sh run_support_query_pg.sh <postgres_url> --input support_queries/postgres/pg_usage_traces_backfill_update.sql --output export.csv -v backfill_id=<backfill_id>
To update LangGraph Platform usage:
sh run_support_query_pg.sh <postgres_url> --input support_queries/postgres/pg_usage_nodes_backfill_update.sql --output export.csv -v backfill_id=<backfill_id>
