How to implement an integration package
This guide walks you through the process of implementing a LangChain integration package.
An integration package is just a Python package that can be installed with pip install, which contains classes that are compatible with LangChain's core interfaces.
We will cover:
- (Optional) How to bootstrap a new integration package
- How to implement components, such as chat models and vector stores, that adhere to the LangChain interface
(Optional) bootstrapping a new integration package
In this section, we will outline two options for bootstrapping a new integration package, and you're welcome to use other tools if you prefer!
- langchain-cli: a command-line tool that can be used to bootstrap a new integration package with a template for LangChain components and Poetry for dependency management.
- Poetry: a Python dependency management tool that can be used to bootstrap a new Python package with dependencies. You can then add LangChain components to this package.
Option 1: langchain-cli (recommended)
In this guide, we will use langchain-cli to create a new integration package from a template, which can be edited to implement your LangChain components.
Prerequisites
Bootstrapping a new Python package with langchain-cli
First, install langchain-cli and poetry:
pip install langchain-cli poetry
Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link.
You can confirm that the name is available by searching for it on the PyPI website.
Next, create your new Python package with langchain-cli, and navigate into the new directory with cd:
langchain-cli integration new
> The name of the integration to create (e.g. `my-integration`): parrot-link
> Name of integration in PascalCase [ParrotLink]:
cd parrot-link
Next, let's add any dependencies we need:
poetry add my-integration-sdk
We can also add some typing or test dependencies in separate poetry dependency groups:
poetry add --group typing my-typing-dep
poetry add --group test my-test-dep
Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:
poetry install --with lint,typing,test,test_integration
You now have a new Python package containing templates for LangChain components! This template comes with files for each integration type, and you are welcome to duplicate or delete any of these files as needed (including the associated test files).
To create any of the individual files from the template, you can run, e.g.:
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models_2.py
Option 2: Poetry (manual)
In this guide, we will use Poetry for dependency management and packaging, and you're welcome to use any other tools you prefer.
Prerequisites
Bootstrapping a new Python package with Poetry
First, install Poetry:
pip install poetry
Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link.
You can confirm that the name is available by searching for it on the PyPI website.
Next, create your new Python package with Poetry, and navigate into the new directory with cd:
poetry new langchain-parrot-link
cd langchain-parrot-link
Add the main dependencies using Poetry, which will add them to your pyproject.toml file:
poetry add langchain-core
We will also add some test dependencies in a separate poetry dependency group. If you are not using Poetry, we recommend adding these in a way that won't package them with your published package, or just installing them separately when you run the tests.
langchain-tests will provide the standard tests we will use later. We recommend pinning these to the latest version:
Note: Replace <latest_version> below with the latest version of langchain-tests.
poetry add --group test pytest pytest-socket pytest-asyncio langchain-tests==<latest_version>
Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:
poetry install --with test
You're now ready to start writing your integration package!
Writing your integration
Let's say you're building a simple integration package that provides a ChatParrotLink chat model integration for LangChain. Here's a simple example of what the project structure might look like:
langchain-parrot-link/
├── langchain_parrot_link/
│ ├── __init__.py
│ └── chat_models.py
├── tests/
│ ├── __init__.py
│ └── test_chat_models.py
├── pyproject.toml
└── README.md
All of these files should already exist from step 1, except for chat_models.py and test_chat_models.py! We will implement test_chat_models.py later, following the standard tests guide.
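As a preview, a minimal tests/test_chat_models.py following that guide might look like the sketch below; the base class and property names come from langchain-tests and may vary between versions:
"""Unit tests for ChatParrotLink (a sketch following the standard tests guide)."""

from typing import Type

from langchain_parrot_link.chat_models import ChatParrotLink
from langchain_tests.unit_tests import ChatModelUnitTests


class TestChatParrotLinkUnit(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> Type[ChatParrotLink]:
        # The class the standard tests will instantiate.
        return ChatParrotLink

    @property
    def chat_model_params(self) -> dict:
        # Constructor kwargs for the model under test.
        return {"model": "bird-brain-001", "parrot_buffer_length": 50}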
For chat_models.py, simply paste in the contents of the chat model implementation from the components section below.
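Though it isn't shown in the structure above, langchain_parrot_link/__init__.py conventionally just re-exports your components so users can import them from the package root; a minimal sketch:
"""langchain-parrot-link package root."""

from langchain_parrot_link.chat_models import ChatParrotLink

__all__ = ["ChatParrotLink"]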
Push your package to a public GitHub repository
This is only required if you want to publish your integration in the LangChain documentation.
- Create a new repository on GitHub.
- Push your code to the repository.
- Confirm that your repository is viewable by the public (e.g., in a private browsing window where you're not logged into GitHub).
Implementing LangChain components
LangChain components are subclasses of base classes in langchain-core. Examples include chat models, vector stores, tools, embedding models and retrievers.
Your integration package will typically implement a subclass of at least one of these components. Expand the tabs below to see details on each.
- Chat models
- Vector stores
- Embeddings
- Tools
- Retrievers
Refer to the custom chat model guide for detail on a starter chat model implementation.
You can start from the following template or langchain-cli command:
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models.py
Example chat model code
"""ParrotLink chat models."""
from typing import Any, Dict, Iterator, List, Optional
from langchain_core.callbacks import (
CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import Field
class ChatParrotLink(BaseChatModel):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/7ff05357bac6eaedf5058a2af88f23a1817d40fe/libs/partners/openai/langchain_openai/chat_models/base.py#L1120
"""ParrotLink chat model integration.
The default implementation echoes the first `parrot_buffer_length` characters of the input.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``langchain-parrot-link`` and set environment variable ``PARROT_LINK_API_KEY``.
.. code-block:: bash
pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of ParrotLink model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.
# TODO: Populate with relevant params.
Key init args — client params:
timeout: Optional[float]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
ParrotLink API key. If not passed in will be read from env var PARROT_LINK_API_KEY.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from langchain_parrot_link import ChatParrotLink
llm = ChatParrotLink(
model="...",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# api_key="...",
# other params...
)
Invoke:
.. code-block:: python
messages = [
("system", "You are a helpful translator. Translate the user sentence to French."),
("human", "I love programming."),
]
llm.invoke(messages)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if token-level streaming isn't supported.
Stream:
.. code-block:: python
for chunk in llm.stream(messages):
print(chunk)
.. code-block:: python
# TODO: Example output.
.. code-block:: python
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await llm.ainvoke(messages)
# stream:
# async for chunk in llm.astream(messages)
# batch:
# await llm.abatch([messages])
.. code-block:: python
# TODO: Example output.
# TODO: Delete if .bind_tools() isn't supported.
Tool calling:
.. code-block:: python
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
.. code-block:: python
# TODO: Example output.
See ``ChatParrotLink.bind_tools()`` method for more.
# TODO: Delete if .with_structured_output() isn't supported.
Structured output:
.. code-block:: python
from typing import Optional
from pydantic import BaseModel, Field
class Joke(BaseModel):
'''Joke to tell user.'''
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
.. code-block:: python
# TODO: Example output.
See ``ChatParrotLink.with_structured_output()`` for more.
# TODO: Delete if JSON mode response format isn't supported.
JSON mode:
.. code-block:: python
# TODO: Replace with appropriate bind arg.
json_llm = llm.bind(response_format={"type": "json_object"})
ai_msg = json_llm.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
ai_msg.content
.. code-block:: python
# TODO: Example output.
# TODO: Delete if image inputs aren't supported.
Image input:
.. code-block:: python
import base64
import httpx
from langchain_core.messages import HumanMessage
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
# TODO: Replace with appropriate message content format.
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
},
],
)
ai_msg = llm.invoke([message])
ai_msg.content
.. code-block:: python
# TODO: Example output.
# TODO: Delete if audio inputs aren't supported.
Audio input:
.. code-block:: python
# TODO: Example input
.. code-block:: python
# TODO: Example output
# TODO: Delete if video inputs aren't supported.
Video input:
.. code-block:: python
# TODO: Example input
.. code-block:: python
# TODO: Example output
# TODO: Delete if token usage metadata isn't supported.
Token usage:
.. code-block:: python
ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
.. code-block:: python
{'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}
# TODO: Delete if logprobs aren't supported.
Logprobs:
.. code-block:: python
# TODO: Replace with appropriate bind arg.
logprobs_llm = llm.bind(logprobs=True)
ai_msg = logprobs_llm.invoke(messages)
ai_msg.response_metadata["logprobs"]
.. code-block:: python
# TODO: Example output.
Response metadata
.. code-block:: python
ai_msg = llm.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
# TODO: Example output.
""" # noqa: E501
model_name: str = Field(alias="model")
"""The name of the model"""
parrot_buffer_length: int
"""The number of characters from the last message of the prompt to be echoed."""
temperature: Optional[float] = None
max_tokens: Optional[int] = None
timeout: Optional[int] = None
stop: Optional[List[str]] = None
max_retries: int = 2
@property
def _llm_type(self) -> str:
"""Return type of chat model."""
return "chat-__package_name_short__"
@property
def _identifying_params(self) -> Dict[str, Any]:
"""Return a dictionary of identifying parameters.
This information is used by the LangChain callback system, which
is used for tracing purposes, making it possible to monitor LLMs.
"""
return {
# The model name allows users to specify custom token counting
# rules in LLM monitoring applications (e.g., in LangSmith users
# can provide per token pricing for their model and monitor
# costs for the given LLM.)
"model_name": self.model_name,
}
def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
"""Override the _generate method to implement the chat model logic.
This can be a call to an API, a call to a local model, or any other
implementation that generates a response to the input prompt.
Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
# Replace this with actual logic to generate a response from a list
# of messages.
last_message = messages[-1]
tokens = last_message.content[: self.parrot_buffer_length]
ct_input_tokens = sum(len(message.content) for message in messages)
ct_output_tokens = len(tokens)
message = AIMessage(
content=tokens,
additional_kwargs={}, # Used to add additional payload to the message
response_metadata={ # Use for response metadata
"time_in_seconds": 3,
},
usage_metadata={
"input_tokens": ct_input_tokens,
"output_tokens": ct_output_tokens,
"total_tokens": ct_input_tokens + ct_output_tokens,
},
)
generation = ChatGeneration(message=message)
return ChatResult(generations=[generation])
def _stream(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
"""Stream the output of the model.
This method should be implemented if the model can generate output
in a streaming fashion. If the model does not support streaming,
do not implement it. In that case streaming requests will be automatically
handled by the _generate method.
Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
last_message = messages[-1]
tokens = str(last_message.content[: self.parrot_buffer_length])
ct_input_tokens = sum(len(message.content) for message in messages)
for token in tokens:
usage_metadata = UsageMetadata(
{
"input_tokens": ct_input_tokens,
"output_tokens": 1,
"total_tokens": ct_input_tokens + 1,
}
)
ct_input_tokens = 0
chunk = ChatGenerationChunk(
message=AIMessageChunk(content=token, usage_metadata=usage_metadata)
)
if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token(token, chunk=chunk)
yield chunk
# Let's add some other information (e.g., response metadata)
chunk = ChatGenerationChunk(
message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
)
if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token("", chunk=chunk)
yield chunk
# TODO: Implement if ChatParrotLink supports async streaming. Otherwise delete.
# async def _astream(
# self,
# messages: List[BaseMessage],
# stop: Optional[List[str]] = None,
# run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
# **kwargs: Any,
# ) -> AsyncIterator[ChatGenerationChunk]:
# TODO: Implement if ChatParrotLink supports async generation. Otherwise delete.
# async def _agenerate(
# self,
# messages: List[BaseMessage],
# stop: Optional[List[str]] = None,
# run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
# **kwargs: Any,
# ) -> ChatResult:
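A quick usage sketch of the example model above (the model name is arbitrary here, since the echo stub never calls a real API):
llm = ChatParrotLink(model="bird-brain-001", parrot_buffer_length=50)
print(llm.invoke("Hello, Polly!").content)  # echoes back "Hello, Polly!"
for chunk in llm.stream("Hello, Polly!"):
    print(chunk.content, end="|")  # streams one character per chunk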
Your vector store implementation will depend on your chosen database technology.
langchain-core contains a minimal in-memory vector store that we can use as a guide. You can access the code here.
All vector stores must inherit from the VectorStore base class. This interface consists of methods for writing, deleting and searching for documents in the vector store.
VectorStore supports a variety of synchronous and asynchronous search types (e.g., nearest-neighbor or max-marginal-relevance), as well as interfaces for adding documents to the store. See the API Reference for all supported methods. The required methods are tabulated below:
Method/Property | Description |
---|---|
add_documents | Add documents to the vector store. |
delete | Delete selected documents from the vector store (by IDs). |
get_by_ids | Get selected documents from the vector store (by IDs). |
similarity_search | Get documents most similar to a query. |
embeddings (property) | Embeddings object for the vector store. |
from_texts | Instantiate the vector store by adding texts. |
Note that InMemoryVectorStore implements some optional search types, as well as convenience methods for loading and dumping the object to a file, but this is not required of all implementations.
The in-memory vector store is tested against the standard tests in the LangChain GitHub repository.
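For a quick sketch of this interface in action, here is the in-memory reference implementation paired with a deterministic fake embedding model; both ship with langchain-core, so this runs without any external service:
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
ids = store.add_documents(
    [
        Document(page_content="parrots can talk"),
        Document(page_content="snakes can hiss"),
    ]
)
print(store.similarity_search("talking birds", k=1))  # most similar document
print(store.get_by_ids(ids[:1]))                      # lookup by ID
store.delete(ids=ids)                                 # remove both again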
Example vector store code
"""ParrotLink vector stores."""
from __future__ import annotations
import uuid
from typing import (
Any,
Callable,
Iterator,
List,
Optional,
Sequence,
Tuple,
Type,
TypeVar,
)
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import VectorStore
from langchain_core.vectorstores.utils import _cosine_similarity as cosine_similarity
VST = TypeVar("VST", bound=VectorStore)
class ParrotLinkVectorStore(VectorStore):
# TODO: Replace all TODOs in docstring.
"""ParrotLink vector store integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``langchain-parrot-link`` and set environment variable ``PARROT_LINK_API_KEY``.
.. code-block:: bash
pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — indexing params:
collection_name: str
Name of the collection.
embedding_function: Embeddings
Embedding function to use.
# TODO: Populate with relevant params.
Key init args — client params:
client: Optional[Client]
Client to use.
connection_args: Optional[dict]
Connection arguments.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from langchain_parrot_link.vectorstores import ParrotLinkVectorStore
from langchain_openai import OpenAIEmbeddings
vector_store = ParrotLinkVectorStore(
collection_name="foo",
embedding_function=OpenAIEmbeddings(),
connection_args={"uri": "./foo.db"},
# other params...
)
# TODO: Populate with relevant variables.
Add Documents:
.. code-block:: python
from langchain_core.documents import Document
document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")
documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
# TODO: Populate with relevant variables.
Delete Documents:
.. code-block:: python
vector_store.delete(ids=["3"])
# TODO: Fill out with relevant variables and example output.
Search:
.. code-block:: python
results = vector_store.similarity_search(query="thud",k=1)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
.. code-block:: python
# TODO: Example output
# TODO: Fill out with relevant variables and example output.
Search with filter:
.. code-block:: python
results = vector_store.similarity_search(query="thud",k=1,filter={"bar": "baz"})
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
.. code-block:: python
# TODO: Example output
# TODO: Fill out with relevant variables and example output.
Search with score:
.. code-block:: python
results = vector_store.similarity_search_with_score(query="qux",k=1)
for doc, score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
.. code-block:: python
# TODO: Example output
# TODO: Fill out with relevant variables and example output.
Async:
.. code-block:: python
# add documents
# await vector_store.aadd_documents(documents=documents, ids=ids)
# delete documents
# await vector_store.adelete(ids=["3"])
# search
# results = vector_store.asimilarity_search(query="thud",k=1)
# search with score
results = await vector_store.asimilarity_search_with_score(query="qux",k=1)
for doc,score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
.. code-block:: python
# TODO: Example output
# TODO: Fill out with relevant variables and example output.
Use as Retriever:
.. code-block:: python
retriever = vector_store.as_retriever(
search_type="mmr",
search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
)
retriever.invoke("thud")
.. code-block:: python
# TODO: Example output
""" # noqa: E501
def __init__(self, embedding: Embeddings) -> None:
"""Initialize with the given embedding function.
Args:
embedding: embedding function to use.
"""
self._database: dict[str, dict[str, Any]] = {}
self.embedding = embedding
@classmethod
def from_texts(
cls: Type[ParrotLinkVectorStore],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> ParrotLinkVectorStore:
store = cls(
embedding=embedding,
)
store.add_texts(texts=texts, metadatas=metadatas, **kwargs)
return store
# optional: add custom async implementations
# @classmethod
# async def afrom_texts(
# cls: Type[VST],
# texts: List[str],
# embedding: Embeddings,
# metadatas: Optional[List[dict]] = None,
# **kwargs: Any,
# ) -> VST:
# return await asyncio.get_running_loop().run_in_executor(
# None, partial(cls.from_texts, **kwargs), texts, embedding, metadatas
# )
@property
def embeddings(self) -> Embeddings:
return self.embedding
def add_documents(
self,
documents: List[Document],
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Add documents to the store."""
texts = [doc.page_content for doc in documents]
vectors = self.embedding.embed_documents(texts)
if ids and len(ids) != len(texts):
msg = (
f"ids must be the same length as texts. "
f"Got {len(ids)} ids and {len(texts)} texts."
)
raise ValueError(msg)
id_iterator: Iterator[Optional[str]] = (
iter(ids) if ids else iter(doc.id for doc in documents)
)
ids_ = []
for doc, vector in zip(documents, vectors):
doc_id = next(id_iterator)
doc_id_ = doc_id if doc_id else str(uuid.uuid4())
ids_.append(doc_id_)
self._database[doc_id_] = {
"id": doc_id_,
"vector": vector,
"text": doc.page_content,
"metadata": doc.metadata,
}
return ids_
# optional: add custom async implementations
# async def aadd_documents(
# self,
# documents: List[Document],
# ids: Optional[List[str]] = None,
# **kwargs: Any,
# ) -> List[str]:
# raise NotImplementedError
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
if ids:
for _id in ids:
self._database.pop(_id, None)
# optional: add custom async implementations
# async def adelete(
# self, ids: Optional[List[str]] = None, **kwargs: Any
# ) -> None:
# raise NotImplementedError
def get_by_ids(self, ids: Sequence[str], /) -> list[Document]:
"""Get documents by their ids.
Args:
ids: The ids of the documents to get.
Returns:
A list of Document objects.
"""
documents = []
for doc_id in ids:
doc = self._database.get(doc_id)
if doc:
documents.append(
Document(
id=doc["id"],
page_content=doc["text"],
metadata=doc["metadata"],
)
)
return documents
# optional: add custom async implementations
# async def aget_by_ids(self, ids: Sequence[str], /) -> list[Document]:
# raise NotImplementedError
# NOTE: the below helper method implements similarity search for in-memory
# storage. It is optional and not a part of the vector store interface.
def _similarity_search_with_score_by_vector(
self,
embedding: List[float],
k: int = 4,
filter: Optional[Callable[[Document], bool]] = None,
**kwargs: Any,
) -> List[tuple[Document, float, List[float]]]:
# get all docs with fixed order in list
docs = list(self._database.values())
if filter is not None:
docs = [
doc
for doc in docs
if filter(Document(page_content=doc["text"], metadata=doc["metadata"]))
]
if not docs:
return []
similarity = cosine_similarity([embedding], [doc["vector"] for doc in docs])[0]
# get the indices ordered by similarity score
top_k_idx = similarity.argsort()[::-1][:k]
return [
(
# Document
Document(
id=doc_dict["id"],
page_content=doc_dict["text"],
metadata=doc_dict["metadata"],
),
# Score
float(similarity[idx].item()),
# Embedding vector
doc_dict["vector"],
)
for idx in top_k_idx
# Assign using walrus operator to avoid multiple lookups
if (doc_dict := docs[idx])
]
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
embedding = self.embedding.embed_query(query)
return [
doc
for doc, _, _ in self._similarity_search_with_score_by_vector(
embedding=embedding, k=k, **kwargs
)
]
# optional: add custom async implementations
# async def asimilarity_search(
# self, query: str, k: int = 4, **kwargs: Any
# ) -> List[Document]:
# # This is a temporary workaround to make the similarity search
# # asynchronous. The proper solution is to make the similarity search
# # asynchronous in the vector store implementations.
# func = partial(self.similarity_search, query, k=k, **kwargs)
# return await asyncio.get_event_loop().run_in_executor(None, func)
def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
embedding = self.embedding.embed_query(query)
return [
(doc, similarity)
for doc, similarity, _ in self._similarity_search_with_score_by_vector(
embedding=embedding, k=k, **kwargs
)
]
# optional: add custom async implementations
# async def asimilarity_search_with_score(
# self, *args: Any, **kwargs: Any
# ) -> List[Tuple[Document, float]]:
# # This is a temporary workaround to make the similarity search
# # asynchronous. The proper solution is to make the similarity search
# # asynchronous in the vector store implementations.
# func = partial(self.similarity_search_with_score, *args, **kwargs)
# return await asyncio.get_event_loop().run_in_executor(None, func)
### ADDITIONAL OPTIONAL SEARCH METHODS BELOW ###
# def similarity_search_by_vector(
# self, embedding: List[float], k: int = 4, **kwargs: Any
# ) -> List[Document]:
# raise NotImplementedError
# optional: add custom async implementations
# async def asimilarity_search_by_vector(
# self, embedding: List[float], k: int = 4, **kwargs: Any
# ) -> List[Document]:
# # This is a temporary workaround to make the similarity search
# # asynchronous. The proper solution is to make the similarity search
# # asynchronous in the vector store implementations.
# func = partial(self.similarity_search_by_vector, embedding, k=k, **kwargs)
# return await asyncio.get_event_loop().run_in_executor(None, func)
# def max_marginal_relevance_search(
# self,
# query: str,
# k: int = 4,
# fetch_k: int = 20,
# lambda_mult: float = 0.5,
# **kwargs: Any,
# ) -> List[Document]:
# raise NotImplementedError
# optional: add custom async implementations
# async def amax_marginal_relevance_search(
# self,
# query: str,
# k: int = 4,
# fetch_k: int = 20,
# lambda_mult: float = 0.5,
# **kwargs: Any,
# ) -> List[Document]:
# # This is a temporary workaround to make the similarity search
# # asynchronous. The proper solution is to make the similarity search
# # asynchronous in the vector store implementations.
# func = partial(
# self.max_marginal_relevance_search,
# query,
# k=k,
# fetch_k=fetch_k,
# lambda_mult=lambda_mult,
# **kwargs,
# )
# return await asyncio.get_event_loop().run_in_executor(None, func)
# def max_marginal_relevance_search_by_vector(
# self,
# embedding: List[float],
# k: int = 4,
# fetch_k: int = 20,
# lambda_mult: float = 0.5,
# **kwargs: Any,
# ) -> List[Document]:
# raise NotImplementedError
# optional: add custom async implementations
# async def amax_marginal_relevance_search_by_vector(
# self,
# embedding: List[float],
# k: int = 4,
# fetch_k: int = 20,
# lambda_mult: float = 0.5,
# **kwargs: Any,
# ) -> List[Document]:
# raise NotImplementedError
Embeddings are used to convert str objects from Document.page_content fields into a vector representation (represented as a list of floats).
You can start from the following template or langchain-cli command:
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/embeddings.py \
--dst langchain_parrot_link/embeddings.py
Example embeddings code
from typing import List
from langchain_core.embeddings import Embeddings
class ParrotLinkEmbeddings(Embeddings):
"""ParrotLink embedding model integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``langchain-parrot-link`` and set environment variable
``PARROT_LINK_API_KEY``.
.. code-block:: bash
pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of ParrotLink model to use.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from langchain_parrot_link import ParrotLinkEmbeddings
embed = ParrotLinkEmbeddings(
model="...",
# api_key="...",
# other params...
)
Embed single text:
.. code-block:: python
input_text = "The meaning of life is 42"
embed.embed_query(input_text)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if token-level streaming isn't supported.
Embed multiple text:
.. code-block:: python
input_texts = ["Document 1...", "Document 2..."]
embed.embed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await embed.aembed_query(input_text)
# multiple:
# await embed.aembed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
"""
def __init__(self, model: str):
self.model = model
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed search docs."""
return [[0.5, 0.6, 0.7] for _ in texts]
def embed_query(self, text: str) -> List[float]:
"""Embed query text."""
return self.embed_documents([text])[0]
# optional: add custom async implementations here
# you can also delete these, and the base class will
# use the default implementation, which calls the sync
# version in an async executor:
# async def aembed_documents(self, texts: List[str]) -> List[List[float]]:
# """Asynchronous Embed search docs."""
# ...
# async def aembed_query(self, text: str) -> List[float]:
# """Asynchronous Embed query text."""
# ...
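A quick usage sketch of the stub above (the model name is arbitrary, since the hard-coded vectors never touch a real API):
embed = ParrotLinkEmbeddings(model="parrot-embed-001")
print(embed.embed_query("The meaning of life is 42"))       # [0.5, 0.6, 0.7]
print(embed.embed_documents(["Document 1", "Document 2"]))  # one vector per text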
Tools are used primarily in two ways (a sketch follows this list):
- To define an "input schema" or "args schema" to pass along with a text request to a chat model's tool-calling feature, so that the chat model can generate a "tool call", i.e., the arguments with which to call the tool.
- To take a "tool call", as generated above, execute some operation, and return a response that can be passed back to the chat model as a ToolMessage.
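Here is a minimal sketch of those two roles, using the ParrotLinkTool from the example code below and a hand-written tool call standing in for a real model response:
from langchain_core.messages import ToolMessage

tool = ParrotLinkTool()  # defined in the example code below

# (1) The tool's schema is what a chat model receives via llm.bind_tools([tool]);
#     the model then responds with a "tool call" shaped like this:
tool_call = {"name": tool.name, "args": {"a": 2, "b": 3}, "id": "call_1", "type": "tool_call"}

# (2) Invoking the tool with that tool call executes it and returns a ToolMessage
#     that can be passed back to the chat model:
msg = tool.invoke(tool_call)
assert isinstance(msg, ToolMessage)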
Tools must inherit from the BaseTool base class. This interface has 3 properties and 2 methods that should be implemented in subclasses.
Method/Property | Description |
---|---|
name | Name of the tool (also passed to the LLM). |
description | Description of the tool (also passed to the LLM). |
args_schema | Define the schema of the tool's input arguments. |
_run | Run the tool with the given arguments. |
_arun | Run the tool asynchronously with the given arguments. |
Properties
name, description, and args_schema are all properties that should be implemented in subclasses. name and description are strings that identify the tool and provide a description of what it does. Both are passed to the LLM, and users may override these values, depending on the LLM they are using, as a form of "prompt engineering". Giving these a concise, LLM-usable name and description is important for the tool's initial user experience.
args_schema is a Pydantic BaseModel that defines the schema of the tool's input arguments. It is used to validate the tool's input arguments and to provide a schema for the LLM to fill out when calling the tool. Similar to the name and description of the overall Tool class, the names (the variable names) and descriptions (the description= part of Field(..., description="description")) of the fields are passed to the LLM, and the values in these fields should be concise and LLM-usable.
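One way to see exactly what the model receives is to render the schema into a provider format. Below is a sketch using langchain-core's OpenAI-format converter; other providers use different wire formats, but the same names and descriptions flow through:
from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel, Field


class AddInput(BaseModel):
    """Add two numbers."""

    a: int = Field(..., description="first number to add")
    b: int = Field(..., description="second number to add")


# The field names and descriptions above appear verbatim in the schema the
# model sees:
print(convert_to_openai_tool(AddInput))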
Run methods
_run is the main method that should be implemented in subclasses. This method takes in the arguments from args_schema and runs the tool, returning a string response. It is usually invoked within a LangGraph ToolNode, and can also be invoked in a legacy langchain.agents.AgentExecutor.
_arun is optional because, by default, _run will be run in an async executor. However, if your tool is calling any APIs or doing any async work, you should implement this method to run the tool asynchronously in addition to _run.
Implementation
You can start from the following template or langchain-cli command:
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/tools.py \
--dst langchain_parrot_link/tools.py
Example tool code
"""ParrotLink tools."""
from typing import Optional, Type
from langchain_core.callbacks import (
CallbackManagerForToolRun,
)
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field
class ParrotLinkToolInput(BaseModel):
"""Input schema for ParrotLink tool.
This docstring is **not** part of what is sent to the model when performing tool
calling. The Field default values and descriptions **are** part of what is sent to
the model when performing tool calling.
"""
# TODO: Add input args and descriptions.
a: int = Field(..., description="first number to add")
b: int = Field(..., description="second number to add")
class ParrotLinkTool(BaseTool): # type: ignore[override]
"""ParrotLink tool.
Setup:
# TODO: Replace with relevant packages, env vars.
Install ``langchain-parrot-link`` and set environment variable ``PARROT_LINK_API_KEY``.
.. code-block:: bash
pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"
Instantiation:
.. code-block:: python
tool = ParrotLinkTool(
# TODO: init params
)
Invocation with args:
.. code-block:: python
# TODO: invoke args
tool.invoke({...})
.. code-block:: python
# TODO: output of invocation
Invocation with ToolCall:
.. code-block:: python
# TODO: invoke args
tool.invoke({"args": {...}, "id": "1", "name": tool.name, "type": "tool_call"})
.. code-block:: python
# TODO: output of invocation
""" # noqa: E501
# TODO: Set tool name and description
name: str = "TODO: Tool name"
"""The name that is passed to the model when performing tool calling."""
description: str = "TODO: Tool description."
"""The description that is passed to the model when performing tool calling."""
args_schema: Type[BaseModel] = ParrotLinkToolInput
"""The schema that is passed to the model when performing tool calling."""
# TODO: Add any other init params for the tool.
# param1: Optional[str]
# """param1 determines foobar"""
# TODO: Replace (a, b) with real tool arguments.
def _run(
self, a: int, b: int, *, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
return str(a + b + 80)
# TODO: Implement if tool has native async functionality, otherwise delete.
# async def _arun(
# self,
# a: int,
# b: int,
# *,
# run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
# ) -> str:
# ...
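A quick usage sketch of the example tool above, invoked with plain arguments (the docstring also shows the ToolCall form):
tool = ParrotLinkTool()
print(tool.invoke({"a": 2, "b": 3}))  # "85" with the placeholder logic (2 + 3 + 80)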
Retrievers are used to retrieve documents from APIs, databases, or other sources based on a query. The Retriever class must inherit from the BaseRetriever base class. This interface has 1 attribute and 2 methods that should be implemented in subclasses.
Method/Property | Description |
---|---|
k | Default number of documents to retrieve (configurable). |
_get_relevant_documents | Retrieve documents based on a query. |
_aget_relevant_documents | Asynchronously retrieve documents based on a query. |
Attributes
k is an attribute that should be implemented in subclasses. It can simply be defined at the top of the class with a default value, e.g. k: int = 5. This attribute is the default number of documents to retrieve, and users can override it when constructing or invoking the retriever.
Methods
_get_relevant_documents is the main method that should be implemented in subclasses.
This method takes in a query and returns a list of Document objects, which have 2 main attributes:
- page_content - the text content of the document
- metadata - a dict of metadata about the document
Retrievers are typically invoked directly by a user, e.g. as MyRetriever(k=4).invoke("query"), which will automatically call _get_relevant_documents under the hood.
_aget_relevant_documents is optional because, by default, _get_relevant_documents will be run in an async executor. However, if your retriever is calling any APIs or doing any async work, you should implement this method to run the retriever asynchronously in addition to _get_relevant_documents, for performance reasons. A self-contained toy sketch of a native async path is shown below.
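A toy sketch of what a native async path can look like; the asyncio.sleep call stands in for a real async API call:
import asyncio
from typing import List

from langchain_core.callbacks import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class AsyncEchoRetriever(BaseRetriever):
    """Toy retriever with both sync and native async paths."""

    k: int = 3

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"Result {i}: {query}") for i in range(self.k)]

    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        await asyncio.sleep(0)  # stand-in for a real async API call
        return [Document(page_content=f"Result {i}: {query}") for i in range(self.k)]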
Implementation
You can start from the following template or langchain-cli command:
langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/retrievers.py \
--dst langchain_parrot_link/retrievers.py
Example retriever code
"""ParrotLink retrievers."""
from typing import Any, List
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
class ParrotLinkRetriever(BaseRetriever):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/tavily_search_api.py#L17
"""ParrotLink retriever.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
Install ``langchain-parrot-link`` and set environment variable
``PARROT_LINK_API_KEY``.
.. code-block:: bash
pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args:
arg 1: type
description
arg 2: type
description
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from langchain_parrot_link import ParrotLinkRetriever
retriever = ParrotLinkRetriever(
# ...
)
Usage:
.. code-block:: python
query = "..."
retriever.invoke(query)
.. code-block:: none
# TODO: Example output.
Use within a chain:
.. code-block:: python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_template(
\"\"\"Answer the question based only on the context provided.
Context: {context}
Question: {question}\"\"\"
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
def format_docs(docs):
return "\\n\\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
chain.invoke("...")
.. code-block:: none
# TODO: Example output.
"""
k: int = 3
# TODO: This method must be implemented to retrieve documents.
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any
) -> List[Document]:
k = kwargs.get("k", self.k)
return [
Document(page_content=f"Result {i} for query: {query}") for i in range(k)
]
# optional: add custom async implementations here
# async def _aget_relevant_documents(
# self,
# query: str,
# *,
# run_manager: AsyncCallbackManagerForRetrieverRun,
# **kwargs: Any,
# ) -> List[Document]: ...
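And a quick usage sketch of the example retriever above:
retriever = ParrotLinkRetriever(k=2)
print(retriever.invoke("parrots"))
# -> [Document(page_content='Result 0 for query: parrots'),
#     Document(page_content='Result 1 for query: parrots')]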
Next steps
Now that you've implemented your package, you can move on to testing your integration and successfully running your tests.