This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. On macOS (at least as of Ventura 13.4), iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db, which is where the IMessageChatLoader reads from.
  1. Create the IMessageChatLoader with the file path pointed to the chat.db you'd like to process.
  2. Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine consecutive messages from the same sender, and/or map_ai_messages to convert messages from a specified sender to the "AIMessage" class. (A compact sketch of this flow follows the list.)
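A minimal sketch of that flow, assuming a copy of chat.db already sits in the working directory (the "Tortoise" sender comes from the example database introduced below); each step is detailed in the sections that follow.
from langchain_community.chat_loaders.imessage import IMessageChatLoader
from langchain_community.chat_loaders.utils import map_ai_messages, merge_chat_runs

loader = IMessageChatLoader(path="./chat.db")
raw_messages = loader.lazy_load()  # lazily convert each conversation
merged = merge_chat_runs(raw_messages)  # collapse consecutive same-sender messages
sessions = list(map_ai_messages(merged, sender="Tortoise"))  # treat one sender as the AI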

1. Access Chat DB

Your terminal is likely denied access to ~/Library/Messages. To use this class, you can copy the database to an accessible directory (e.g., Documents) and load it from there. Alternatively (and not recommended), you can grant your terminal emulator Full Disk Access in System Settings > Security and Privacy > Full Disk Access. We have created an example database you can download at this link.
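If you'd rather use your own messages, a minimal sketch of the copy step (assuming the default macOS database path and that the copying process has disk access) might look like this:
import shutil
from pathlib import Path

# Copy the iMessage database from its default location into the working
# directory so an unprivileged process can read it
source = Path.home() / "Library" / "Messages" / "chat.db"
shutil.copy(source, "chat.db")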
# This uses some example data
import requests


def download_drive_file(url: str, output_path: str = "chat.db") -> None:
    file_id = url.split("/")[-2]
    download_url = f"https://drive.google.com/uc?export=download&id={file_id}"

    response = requests.get(download_url)
    if response.status_code != 200:
        print("Failed to download the file.")
        return

    with open(output_path, "wb") as file:
        file.write(response.content)
        print(f"File {output_path} downloaded.")


url = (
    "https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing"
)

# Download file to chat.db
download_drive_file(url)
File chat.db downloaded.

2. Create the Chat Loader

Provide the loader with the file path to the chat.db database. You can then optionally merge message runs and map a chosen sender's messages to AI messages, as shown in the next step.
from langchain_community.chat_loaders.imessage import IMessageChatLoader
loader = IMessageChatLoader(
    path="./chat.db",
)

3. Load Messages

The load() (or lazy_load) methods return a list of "ChatSession"s that, for now, just contain a list of messages per loaded conversation. All messages are initially mapped to "HumanMessage" objects. You can optionally choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.
from typing import List

from langchain_community.chat_loaders.utils import (
    map_ai_messages,
    merge_chat_runs,
)
from langchain_core.chat_sessions import ChatSession

raw_messages = loader.lazy_load()
# Merge consecutive messages from the same sender into a single message
merged_messages = merge_chat_runs(raw_messages)
# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?
chat_sessions: List[ChatSession] = list(
    map_ai_messages(merged_messages, sender="Tortoise")
)
# Now all of the Tortoise's messages will take the AIMessage class
# which maps to the 'assistant' role in OpenAI's training format
chat_sessions[0]["messages"][:3]
[AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False),
 HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False),
 AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]

4. Prepare for Fine-Tuning

Now it's time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.
from langchain_community.adapters.openai import convert_messages_for_finetuning
training_data = convert_messages_for_finetuning(chat_sessions)
print(f"Prepared {len(training_data)} dialogues for training")
Prepared 10 dialogues for training
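Each dialogue is now a list of {"role": ..., "content": ...} dictionaries in OpenAI's training format; if you'd like to sanity-check, you can peek at the first couple of entries:
import json

# Inspect the first two messages of the first training dialogue
print(json.dumps(training_data[0][:2], indent=2))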

5. Fine-Tune the Model

It's time to fine-tune the model. Make sure you have openai installed and have set your OPENAI_API_KEY appropriately.
pip install -qU langchain-openai
import json
import time
from io import BytesIO

import openai

# We will write the jsonl file in memory
my_file = BytesIO()
for m in training_data:
    my_file.write((json.dumps({"messages": m}) + "\n").encode("utf-8"))

my_file.seek(0)
training_file = openai.files.create(file=my_file, purpose="fine-tune")

# OpenAI audits each training file for compliance reasons.
# This may take a few minutes
status = openai.files.retrieve(training_file.id).status
start_time = time.time()
while status != "processed":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    status = openai.files.retrieve(training_file.id).status
print(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.")
File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.
With the file ready, it's time to kick off a training job.
job = openai.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
Grab a cup of tea while your model is being prepared. This may take some time!
status = openai.fine_tuning.jobs.retrieve(job.id).status
start_time = time.time()
while status != "succeeded":
    print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True)
    time.sleep(5)
    job = openai.fine_tuning.jobs.retrieve(job.id)
    status = job.status
Status=[running]... 524.95s
print(job.fine_tuned_model)
ft:gpt-3.5-turbo-0613:personal::7sKoRdlz

6. Use in LangChain

You can pass the resulting model ID directly to the ChatOpenAI model class.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model=job.fine_tuned_model,
    temperature=1,
)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are speaking to hare."),
        ("human", "{input}"),
    ]
)

chain = prompt | model | StrOutputParser()
for tok in chain.stream({"input": "What's the golden thread?"}):
    print(tok, end="", flush=True)
A symbol of interconnectedness.
