
Databricks Vector Search

Databricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors.
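
For reference only (this walkthrough uses a direct-access index instead), a delta-sync index that stays in sync with a Unity Catalog Delta table can be created roughly as sketched below; every name (endpoint, index, source table, embedding model endpoint) is a placeholder, not a value used later in this guide:

from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()  # credentials picked up from the environment / notebook context

# Sketch of a delta-sync index; all names here are placeholders.
vsc.create_delta_sync_index(
    endpoint_name="vector_search_demo_endpoint",
    index_name="my_catalog.my_schema.my_delta_sync_index",
    source_table_name="my_catalog.my_schema.my_source_table",
    pipeline_type="TRIGGERED",  # or "CONTINUOUS" to keep the index updating automatically
    primary_key="id",
    embedding_source_column="text",
    embedding_model_endpoint_name="databricks-bge-large-en",  # assumed embedding endpoint name
)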

In this walkthrough, we will demo the SelfQueryRetriever with Databricks Vector Search.

Creating a Databricks vector store index

First, we'll need to create a Databricks vector store index and seed it with some data. We've created a small demo set of documents that contain summaries of movies.

Note: The self-query retriever requires you to have lark installed (pip install lark) along with integration-specific requirements.

%pip install --upgrade --quiet  langchain-core databricks-vectorsearch langchain-openai tiktoken
Note: you may need to restart the kernel to use updated packages.

We want to use OpenAIEmbeddings, so we have to get the OpenAI API Key.

import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
databricks_host = getpass.getpass("Databricks host:")
databricks_token = getpass.getpass("Databricks token:")
OpenAI API Key: ········
Databricks host: ········
Databricks token: ········
from databricks.vector_search.client import VectorSearchClient
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
emb_dim = len(embeddings.embed_query("hello"))

vector_search_endpoint_name = "vector_search_demo_endpoint"


vsc = VectorSearchClient(
    workspace_url=databricks_host, personal_access_token=databricks_token
)
vsc.create_endpoint(name=vector_search_endpoint_name, endpoint_type="STANDARD")
API Reference: OpenAIEmbeddings
[NOTICE] Using a Personal Authentication Token (PAT). Recommended for development only. For improved performance, please use Service Principal based authentication. To disable this message, pass disable_notice=True to VectorSearchClient().
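
If you want to silence this notice, the same client can be constructed with disable_notice=True, as the message suggests (service principal authentication is what Databricks recommends for production use):

# Optional: same PAT-based client as above, with the notice disabled.
vsc = VectorSearchClient(
    workspace_url=databricks_host,
    personal_access_token=databricks_token,
    disable_notice=True,
)
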
index_name = "udhay_demo.10x.demo_index"

index = vsc.create_direct_access_index(
    endpoint_name=vector_search_endpoint_name,
    index_name=index_name,
    primary_key="id",
    embedding_dimension=emb_dim,
    embedding_vector_column="text_vector",
    schema={
        "id": "string",
        "page_content": "string",
        "year": "int",
        "rating": "float",
        "genre": "string",
        "text_vector": "array<float>",
    },
)

index.describe()
index = vsc.get_index(endpoint_name=vector_search_endpoint_name, index_name=index_name)

index.describe()
from langchain_core.documents import Document

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"id": 1, "year": 1993, "rating": 7.7, "genre": "action"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"id": 2, "year": 2010, "genre": "thriller", "rating": 8.2},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"id": 3, "year": 2019, "rating": 8.3, "genre": "drama"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={"id": 4, "year": 1979, "rating": 9.9, "genre": "science fiction"},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"id": 5, "year": 2006, "genre": "thriller", "rating": 9.0},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"id": 6, "year": 1995, "genre": "animated", "rating": 9.3},
    ),
]
API Reference: Document
from langchain_community.vectorstores import DatabricksVectorSearch

vector_store = DatabricksVectorSearch(
    index,
    text_column="page_content",
    embedding=embeddings,
    columns=["year", "rating", "genre"],
)
vector_store.add_documents(docs)
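
Before wiring up the self-query retriever, a plain similarity search is a quick sanity check that the documents were ingested (this step is optional and not part of the retriever setup):

# Optional sanity check: plain similarity search, no metadata filtering involved.
vector_store.similarity_search("dinosaurs", k=2)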

Creating our self-querying retriever

Now we can instantiate our retriever. To do this we'll need to provide some information up front about the metadata fields that our documents support and a short description of the document contents.

from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vector_store, document_content_description, metadata_field_info, verbose=True
)

Testing it out

And now we can actually try using our retriever!

# This example only specifies a relevant query
retriever.invoke("What are some movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993.0, 'rating': 7.7, 'genre': 'action', 'id': 1.0}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995.0, 'rating': 9.3, 'genre': 'animated', 'id': 6.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979.0, 'rating': 9.9, 'genre': 'science fiction', 'id': 4.0}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006.0, 'rating': 9.0, 'genre': 'thriller', 'id': 5.0})]
# This example specifies a filter
retriever.invoke("What are some highly rated movies (above 9)?")
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995.0, 'rating': 9.3, 'genre': 'animated', 'id': 6.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979.0, 'rating': 9.9, 'genre': 'science fiction', 'id': 4.0})]
# This example specifies both a relevant query and a filter
retriever.invoke("What are the thriller movies that are highly rated?")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006.0, 'rating': 9.0, 'genre': 'thriller', 'id': 5.0}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010.0, 'rating': 8.2, 'genre': 'thriller', 'id': 2.0})]
# This example specifies a query and composite filter
retriever.invoke(
    "What's a movie after 1990 but before 2005 that's all about dinosaurs, \
    and preferably has a lot of action"
)
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993.0, 'rating': 7.7, 'genre': 'action', 'id': 1.0})]

Filter k

We can also use the self-query retriever to specify k: the number of documents to fetch.

We can do this by passing enable_limit=True to the constructor.

retriever = SelfQueryRetriever.from_llm(
    llm,
    vector_store,
    document_content_description,
    metadata_field_info,
    verbose=True,
    enable_limit=True,
)
retriever.invoke("What are two movies about dinosaurs?")
