This will help you get started with Contextual AI's Grounded Language Model chat models. For more information about Contextual AI, visit our documentation. This integration requires the contextual-client Python SDK. Learn more about it here.

Overview

This integration invokes Contextual AI's Grounded Language Model.

Integration details

| Class | Package | Local | Serializable | JS support | Downloads | Version |
|---|---|---|---|---|---|---|
| ChatContextual | langchain-contextual | ❌ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|

Setup

To access Contextual models, you'll need to create a Contextual AI account, get an API key, and install the langchain-contextual integration package.

Credentials

Head to app.contextual.ai to sign up for Contextual and generate an API key. Once you've done this, set the CONTEXTUAL_AI_API_KEY environment variable:
import getpass
import os

if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Contextual integration lives in the langchain-contextual package:
pip install -qU langchain-contextual

Instantiation

Now we can instantiate our model object and generate chat completions. The chat client can be instantiated with these optional settings:
| Parameter | Type | Description | Default |
|---|---|---|---|
| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |
| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |
| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |
from langchain_contextual import ChatContextual

llm = ChatContextual(
    model="v1",  # defaults to `v1`
    api_key="",
    temperature=0,  # defaults to 0
    top_p=0.9,  # defaults to 0.9
    max_new_tokens=1024,  # defaults to 1024
)

Invocation

The Contextual Grounded Language Model accepts additional kwargs when calling the ChatContextual.invoke method. These additional inputs are:
| Parameter | Type | Description |
|---|---|---|
| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |
| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |
| avoid_commentary | Optional[bool] | Optional (defaults to False): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |
# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here as an array of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM by providing the knowledge strings, optional system prompt
# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
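
The standard LangChain streaming interface also works here; a minimal sketch, assuming your installed version of langchain-contextual streams tokens (the same extra kwargs are passed through just like with invoke):

# stream tokens as they are generated; the Contextual-specific
# kwargs are forwarded to the model the same way as with `invoke`
for chunk in llm.stream(messages, knowledge=knowledge, avoid_commentary=True):
    print(chunk.content, end="", flush=True)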

Chaining

We can chain the Contextual model with an output parser.
from langchain_core.output_parsers import StrOutputParser

chain = llm | StrOutputParser()

chain.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)
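
If you want to reuse the same grounding inputs across calls, the standard LangChain bind method can attach them to the model before chaining; a minimal sketch using the knowledge and system_prompt values defined above:

from langchain_core.output_parsers import StrOutputParser

# attach the Contextual-specific kwargs to the model so downstream
# chain invocations only need the messages
grounded_llm = llm.bind(knowledge=knowledge, system_prompt=system_prompt)
chain = grounded_llm | StrOutputParser()

print(chain.invoke(messages))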

API reference

For detailed documentation of all ChatContextual features and configurations, head to the Github page: github.com/ContextualAI/langchain-contextual