
How to implement an integration package

This guide walks through the process of implementing a LangChain integration package.

Integration packages are just Python packages that can be installed with pip install, which contain classes that are compatible with LangChain's core interfaces.

We will cover:

  1. (Optional) How to bootstrap a new integration package
  2. How to implement components, such as chat models and vector stores, that adhere to the LangChain interface.

(Optional) bootstrapping a new integration package

In this section, we will outline two options for bootstrapping a new integration package, and you're welcome to use other tools if you prefer!

  1. langchain-cli: This is a command-line tool that can be used to bootstrap a new integration package with a template for LangChain components and Poetry for dependency management.
  2. Poetry: This is a Python dependency management tool that can be used to bootstrap a new Python package with dependencies. You can then add LangChain components to this package.
Option 1: langchain-cli (recommended)

In this guide, we will use langchain-cli to create a new integration package from a template, which can be edited to implement your LangChain components.

Prerequisites

Bootstrapping a new Python package with langchain-cli

First, install langchain-cli and poetry:

pip install langchain-cli poetry

Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link. You can confirm that the name is available by searching for it on the PyPi website.

Next, use langchain-cli to create your new Python package, and cd into the new directory:

langchain-cli integration new

> The name of the integration to create (e.g. `my-integration`): parrot-link
> Name of integration in PascalCase [ParrotLink]:

cd parrot-link

Next, let's add any dependencies we need:

poetry add my-integration-sdk

We can also add some typing or test dependencies in separate poetry dependency groups.

poetry add --group typing my-typing-dep
poetry add --group test my-test-dep

Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:

poetry install --with lint,typing,test,test_integration

You now have a new Python package containing templates for LangChain components! This template comes with files for each integration type, and you are welcome to duplicate or delete any of these files as appropriate (including the associated test files).

To create any individual files from the template, you can run e.g.:

langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models_2.py
Option 2: Poetry (manual)

In this guide, we will use Poetry for dependency management and packaging, and you're welcome to use any other tools you prefer.

Prerequisites

Bootstrapping a new Python package with Poetry

First, install Poetry:

pip install poetry

Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link. You can confirm that the name is available by searching for it on the PyPi website.

Next, use Poetry to create your new Python package, and cd into the new directory:

poetry new langchain-parrot-link
cd langchain-parrot-link

Add main dependencies using Poetry, which will add them to your pyproject.toml file:

poetry add langchain-core

We will also add some test dependencies in a separate poetry dependency group. If you are not using Poetry, we recommend adding these in a way that won't package them with your published package, or just installing them separately when you run the tests.

langchain-tests will provide the standard tests we will use later. We recommend pinning these to the latest version:

Note: Replace <latest_version> below with the latest version of langchain-tests.

poetry add --group test pytest pytest-socket pytest-asyncio langchain-tests==<latest_version>

Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:

poetry install --with test

You're now ready to start writing your integration package!

Writing your integration

Let's say you're building a simple integration package that provides a ChatParrotLink chat model integration for LangChain. Here is a simple example of what your project structure might look like:

langchain-parrot-link/
├── langchain_parrot_link/
│   ├── __init__.py
│   └── chat_models.py
├── tests/
│   ├── __init__.py
│   └── test_chat_models.py
├── pyproject.toml
└── README.md

All of these files should already exist from step 1, except for chat_models.py and test_chat_models.py! We will implement test_chat_models.py later, following the standard tests guide.

For chat_models.py, simply paste in the contents of the chat model implementation from the Implementing LangChain components section below.
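For __init__.py, a minimal sketch is shown below; it simply re-exports the chat model class so users can write from langchain_parrot_link import ChatParrotLink. The exact contents generated by the template may differ (for example, it may also expose a __version__), so treat this as illustrative rather than canonical.

langchain_parrot_link/__init__.py
"""langchain-parrot-link package exports (illustrative sketch)."""

from langchain_parrot_link.chat_models import ChatParrotLink

__all__ = ["ChatParrotLink"]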

Push your package to a public GitHub repository

This is only required if you want to publish your integration in the LangChain documentation.

  1. Create a new repository on GitHub.
  2. Push your code to the repository.
  3. Confirm that your repository is viewable by the public (e.g. in a private browsing window where you're not logged into GitHub).

Implementing LangChain components

LangChain components are subclasses of base classes in langchain-core. Examples include chat models, vector stores, tools, embedding models, and retrievers.

Your integration package will typically implement a subclass of at least one of these components. Expand the tabs below to see details on each.

Refer to the Custom Chat Model Guide for detail on a starter chat model implementation.

You can start from the following template or langchain-cli command:

langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models.py
Example chat model code
langchain_parrot_link/chat_models.py
"""ParrotLink chat models."""

from typing import Any, Dict, Iterator, List, Optional

from langchain_core.callbacks import (
    CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import Field


class ChatParrotLink(BaseChatModel):
    # TODO: Replace all TODOs in docstring. See example docstring:
    # https://github.com/langchain-ai/langchain/blob/7ff05357bac6eaedf5058a2af88f23a1817d40fe/libs/partners/openai/langchain_openai/chat_models/base.py#L1120
    """ParrotLink chat model integration.

    The default implementation echoes the first `parrot_buffer_length` characters of the input.

    # TODO: Replace with relevant packages, env vars.
    Setup:
        Install ``langchain-parrot-link`` and set environment variable ``PARROT_LINK_API_KEY``.

        .. code-block:: bash

            pip install -U langchain-parrot-link
            export PARROT_LINK_API_KEY="your-api-key"

    # TODO: Populate with relevant params.
    Key init args — completion params:
        model: str
            Name of ParrotLink model to use.
        temperature: float
            Sampling temperature.
        max_tokens: Optional[int]
            Max number of tokens to generate.

    # TODO: Populate with relevant params.
    Key init args — client params:
        timeout: Optional[float]
            Timeout for requests.
        max_retries: int
            Max number of retries.
        api_key: Optional[str]
            ParrotLink API key. If not passed in will be read from env var PARROT_LINK_API_KEY.

    See full list of supported init args and their descriptions in the params section.

    # TODO: Replace with relevant init params.
    Instantiate:
        .. code-block:: python

            from langchain_parrot_link import ChatParrotLink

            llm = ChatParrotLink(
                model="...",
                temperature=0,
                max_tokens=None,
                timeout=None,
                max_retries=2,
                # api_key="...",
                # other params...
            )

    Invoke:
        .. code-block:: python

            messages = [
                ("system", "You are a helpful translator. Translate the user sentence to French."),
                ("human", "I love programming."),
            ]
            llm.invoke(messages)

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if token-level streaming isn't supported.
    Stream:
        .. code-block:: python

            for chunk in llm.stream(messages):
                print(chunk)

        .. code-block:: python

            # TODO: Example output.

        .. code-block:: python

            stream = llm.stream(messages)
            full = next(stream)
            for chunk in stream:
                full += chunk
            full

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if native async isn't supported.
    Async:
        .. code-block:: python

            await llm.ainvoke(messages)

            # stream:
            # async for chunk in (await llm.astream(messages))

            # batch:
            # await llm.abatch([messages])

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if .bind_tools() isn't supported.
    Tool calling:
        .. code-block:: python

            from pydantic import BaseModel, Field

            class GetWeather(BaseModel):
                '''Get the current weather in a given location'''

                location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

            class GetPopulation(BaseModel):
                '''Get the current population in a given location'''

                location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

            llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
            ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
            ai_msg.tool_calls

        .. code-block:: python

            # TODO: Example output.

        See ``ChatParrotLink.bind_tools()`` method for more.

    # TODO: Delete if .with_structured_output() isn't supported.
    Structured output:
        .. code-block:: python

            from typing import Optional

            from pydantic import BaseModel, Field

            class Joke(BaseModel):
                '''Joke to tell user.'''

                setup: str = Field(description="The setup of the joke")
                punchline: str = Field(description="The punchline to the joke")
                rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

            structured_llm = llm.with_structured_output(Joke)
            structured_llm.invoke("Tell me a joke about cats")

        .. code-block:: python

            # TODO: Example output.

        See ``ChatParrotLink.with_structured_output()`` for more.

    # TODO: Delete if JSON mode response format isn't supported.
    JSON mode:
        .. code-block:: python

            # TODO: Replace with appropriate bind arg.
            json_llm = llm.bind(response_format={"type": "json_object"})
            ai_msg = json_llm.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
            ai_msg.content

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if image inputs aren't supported.
    Image input:
        .. code-block:: python

            import base64
            import httpx
            from langchain_core.messages import HumanMessage

            image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
            # TODO: Replace with appropriate message content format.
            message = HumanMessage(
                content=[
                    {"type": "text", "text": "describe the weather in this image"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
                    },
                ],
            )
            ai_msg = llm.invoke([message])
            ai_msg.content

        .. code-block:: python

            # TODO: Example output.

    # TODO: Delete if audio inputs aren't supported.
    Audio input:
        .. code-block:: python

            # TODO: Example input

        .. code-block:: python

            # TODO: Example output

    # TODO: Delete if video inputs aren't supported.
    Video input:
        .. code-block:: python

            # TODO: Example input

        .. code-block:: python

            # TODO: Example output

    # TODO: Delete if token usage metadata isn't supported.
    Token usage:
        .. code-block:: python

            ai_msg = llm.invoke(messages)
            ai_msg.usage_metadata

        .. code-block:: python

            {'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}

    # TODO: Delete if logprobs aren't supported.
    Logprobs:
        .. code-block:: python

            # TODO: Replace with appropriate bind arg.
            logprobs_llm = llm.bind(logprobs=True)
            ai_msg = logprobs_llm.invoke(messages)
            ai_msg.response_metadata["logprobs"]

        .. code-block:: python

            # TODO: Example output.

    Response metadata
        .. code-block:: python

            ai_msg = llm.invoke(messages)
            ai_msg.response_metadata

        .. code-block:: python

            # TODO: Example output.

    """  # noqa: E501

    model_name: str = Field(alias="model")
    """The name of the model"""
    parrot_buffer_length: int
    """The number of characters from the last message of the prompt to be echoed."""
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    timeout: Optional[int] = None
    stop: Optional[List[str]] = None
    max_retries: int = 2

    @property
    def _llm_type(self) -> str:
        """Return type of chat model."""
        return "chat-__package_name_short__"

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        """Return a dictionary of identifying parameters.

        This information is used by the LangChain callback system, which
        is used for tracing purposes and makes it possible to monitor LLMs.
        """
        return {
            # The model name allows users to specify custom token counting
            # rules in LLM monitoring applications (e.g., in LangSmith users
            # can provide per token pricing for their model and monitor
            # costs for the given LLM.)
            "model_name": self.model_name,
        }

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Override the _generate method to implement the chat model logic.

        This can be a call to an API, a call to a local model, or any other
        implementation that generates a response to the input prompt.

        Args:
            messages: the prompt composed of a list of messages.
            stop: a list of strings on which the model should stop generating.
                If generation stops due to a stop token, the stop token itself
                SHOULD BE INCLUDED as part of the output. This is not enforced
                across models right now, but it's a good practice to follow since
                it makes it much easier to parse the output of the model
                downstream and understand why generation stopped.
            run_manager: A run manager with callbacks for the LLM.
        """
        # Replace this with actual logic to generate a response from a list
        # of messages.
        last_message = messages[-1]
        tokens = last_message.content[: self.parrot_buffer_length]
        ct_input_tokens = sum(len(message.content) for message in messages)
        ct_output_tokens = len(tokens)
        message = AIMessage(
            content=tokens,
            additional_kwargs={},  # Used to add additional payload to the message
            response_metadata={  # Use for response metadata
                "time_in_seconds": 3,
            },
            usage_metadata={
                "input_tokens": ct_input_tokens,
                "output_tokens": ct_output_tokens,
                "total_tokens": ct_input_tokens + ct_output_tokens,
            },
        )
        ##

        generation = ChatGeneration(message=message)
        return ChatResult(generations=[generation])

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        """Stream the output of the model.

        This method should be implemented if the model can generate output
        in a streaming fashion. If the model does not support streaming,
        do not implement it. In that case streaming requests will be automatically
        handled by the _generate method.

        Args:
            messages: the prompt composed of a list of messages.
            stop: a list of strings on which the model should stop generating.
                If generation stops due to a stop token, the stop token itself
                SHOULD BE INCLUDED as part of the output. This is not enforced
                across models right now, but it's a good practice to follow since
                it makes it much easier to parse the output of the model
                downstream and understand why generation stopped.
            run_manager: A run manager with callbacks for the LLM.
        """
        last_message = messages[-1]
        tokens = str(last_message.content[: self.parrot_buffer_length])
        ct_input_tokens = sum(len(message.content) for message in messages)

        for token in tokens:
            usage_metadata = UsageMetadata(
                {
                    "input_tokens": ct_input_tokens,
                    "output_tokens": 1,
                    "total_tokens": ct_input_tokens + 1,
                }
            )
            ct_input_tokens = 0
            chunk = ChatGenerationChunk(
                message=AIMessageChunk(content=token, usage_metadata=usage_metadata)
            )

            if run_manager:
                # This is optional in newer versions of LangChain
                # The on_llm_new_token will be called automatically
                run_manager.on_llm_new_token(token, chunk=chunk)

            yield chunk

        # Let's add some other information (e.g., response metadata)
        chunk = ChatGenerationChunk(
            message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
        )
        if run_manager:
            # This is optional in newer versions of LangChain
            # The on_llm_new_token will be called automatically
            run_manager.on_llm_new_token(token, chunk=chunk)
        yield chunk

    # TODO: Implement if ChatParrotLink supports async streaming. Otherwise delete.
    # async def _astream(
    #     self,
    #     messages: List[BaseMessage],
    #     stop: Optional[List[str]] = None,
    #     run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    #     **kwargs: Any,
    # ) -> AsyncIterator[ChatGenerationChunk]:

    # TODO: Implement if ChatParrotLink supports async generation. Otherwise delete.
    # async def _agenerate(
    #     self,
    #     messages: List[BaseMessage],
    #     stop: Optional[List[str]] = None,
    #     run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    #     **kwargs: Any,
    # ) -> ChatResult:
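Before swapping in a real SDK, it can help to sanity-check the template as-is: the default implementation just echoes the first parrot_buffer_length characters of the last message. The snippet below is a minimal, illustrative sketch (the model name and buffer length are arbitrary placeholder values):

from langchain_parrot_link.chat_models import ChatParrotLink

llm = ChatParrotLink(model="bird-brain-001", parrot_buffer_length=3)

# Echoes the first 3 characters of the last message: "Hel"
print(llm.invoke([("human", "Hello, Parrot!")]).content)

# Streaming yields one chunk per character, plus a final metadata-only chunk
print([chunk.content for chunk in llm.stream([("human", "Hi!")])])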

Next steps

Now that you've implemented your package, you can move on to testing your integration and successfully running those tests.
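As a rough illustration of what that looks like, a unit-test file built on langchain-tests might resemble the sketch below. The ChatModelUnitTests base class and its chat_model_class / chat_model_params properties come from the langchain_tests package; defer to the standard tests guide for the authoritative setup, and treat the parameter values here as placeholders.

tests/test_chat_models.py
"""Illustrative unit tests for ChatParrotLink (sketch based on langchain-tests)."""

from typing import Type

from langchain_tests.unit_tests import ChatModelUnitTests

from langchain_parrot_link.chat_models import ChatParrotLink


class TestChatParrotLinkUnit(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> Type[ChatParrotLink]:
        # The chat model class that the standard tests will instantiate.
        return ChatParrotLink

    @property
    def chat_model_params(self) -> dict:
        # Constructor arguments used by the standard tests (placeholder values).
        return {"model": "bird-brain-001", "parrot_buffer_length": 50}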

