Auto-Retrieval from a Vector Database¶
This guide shows how to perform auto-retrieval in LlamaIndex.
Many popular vector databases support a set of metadata filters in addition to a query string for semantic search. Given a natural language query, we first use an LLM to infer a set of metadata filters as well as the right query string to pass to the vector database (which can also be blank). This entire query bundle is then executed against the vector database.
This allows for more dynamic, expressive forms of retrieval beyond top-k semantic search. The relevant context for a given query may only require filtering on a metadata tag, or may require a joint combination of filtering and semantic search within the filtered set, or may just need raw semantic search.
We demonstrate an example with Elasticsearch, but auto-retrieval is also implemented across many other vector databases (e.g. Pinecone, Weaviate, and more).
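As a library-agnostic sketch (this is not LlamaIndex's actual API, just an illustration of the idea), the structured query an auto-retriever infers from natural language can be modeled as a query string plus a list of metadata filters and a top-k:

```python
from dataclasses import dataclass, field


@dataclass
class MetadataFilter:
    """One inferred constraint on a metadata field."""
    key: str
    value: object
    operator: str = "=="  # e.g. "==", "<", ">"


@dataclass
class InferredQuery:
    """Structured output an LLM might produce from a natural-language query."""
    query_str: str  # semantic search string (may be empty)
    filters: list = field(default_factory=list)
    top_k: int = 2


# "Has Andrei Tarkovsky directed any science fiction movies?" could become:
spec = InferredQuery(
    query_str="science fiction",
    filters=[MetadataFilter(key="director", value="Andrei Tarkovsky")],
)
```

The whole spec, not just the query string, is then sent to the vector database, which applies the filters and runs the semantic search within the filtered set.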
Setup¶
We first define our imports.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-vector-stores-elasticsearch
In [ ]:
!pip install llama-index
In [ ]:
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
In [ ]:
# Set up OpenAI
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
Defining Some Sample Data¶
We insert some sample nodes containing text chunks into the vector database. Note that each TextNode not only contains the text, but also metadata such as year and director. These metadata fields get converted/stored as such in the underlying vector database.
In [ ]:
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.elasticsearch import ElasticsearchStore
In [ ]:
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text=(
"A bunch of scientists bring back dinosaurs and mayhem breaks"
" loose"
),
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
TextNode(
text=(
"Leo DiCaprio gets lost in a dream within a dream within a dream"
" within a ..."
),
metadata={
"year": 2010,
"director": "Christopher Nolan",
"rating": 8.2,
},
),
TextNode(
text=(
"A psychologist / detective gets lost in a series of dreams within"
" dreams within dreams and Inception reused the idea"
),
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
TextNode(
text=(
"A bunch of normal-sized women are supremely wholesome and some"
" men pine after them"
),
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
TextNode(
text="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
]
Build Vector Index with Elasticsearch Vector Store¶
Here we load the data into the vector store. As mentioned above, both the text and the metadata of each node get converted into their corresponding representations in Elasticsearch. We can now run semantic queries against this data in Elasticsearch, combined with metadata filtering.
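ElasticsearchStore builds the actual search request internally, and the exact request body it produces may differ. Purely as an illustration of what "filter + semantic search" looks like at the Elasticsearch level, a metadata filter combined with a kNN clause might be shaped roughly like this (field names such as `embedding` and `metadata.*` are assumptions for the sketch):

```python
def build_es_query(query_vector, filters, top_k=2):
    """Sketch: combine exact-match metadata filters with a kNN search clause.

    `filters` maps a metadata field to its required value. This only shows
    the general structure of a filtered vector search, not the exact body
    ElasticsearchStore sends.
    """
    return {
        "knn": {
            "field": "embedding",
            "query_vector": query_vector,
            "k": top_k,
            "filter": {
                "bool": {
                    "must": [
                        {"term": {f"metadata.{key}": value}}
                        for key, value in filters.items()
                    ]
                }
            },
        }
    }


body = build_es_query([0.1, 0.2], {"director": "Andrei Tarkovsky"})
```

The key point is that the metadata filter is applied as part of the kNN search, so the semantic top-k is computed only over documents that pass the filter.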
In [ ]:
vector_store = ElasticsearchStore(
    index_name="auto_retriever_movies", es_url="http://localhost:9200"
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
In [ ]:
index = VectorStoreIndex(nodes, storage_context=storage_context)
Define VectorIndexAutoRetriever¶
We define our core VectorIndexAutoRetriever module. The module takes in VectorStoreInfo, which contains a structured description of the vector store collection along with the metadata filters it supports. This information is then used in the auto-retrieval prompt, where the LLM infers the metadata filters.
In [ ]:
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo
vector_store_info = VectorStoreInfo(
content_info="Brief summary of a movie",
metadata_info=[
MetadataInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
MetadataInfo(
name="year",
description="The year the movie was released",
type="integer",
),
MetadataInfo(
name="director",
description="The name of the movie director",
type="string",
),
MetadataInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float",
),
],
)
retriever = VectorIndexAutoRetriever(
index, vector_store_info=vector_store_info
)
Running over Some Sample Data¶
We try running over some sample data. Note how the metadata filters are inferred; this helps with more precise retrieval!
In [ ]:
retriever.retrieve(
    "What are 2 movies by Christopher Nolan were made before 2020?"
)
In [ ]:
retriever.retrieve("Has Andrei Tarkovsky directed any science fiction movies")
INFO:llama_index.indices.vector_store.retrievers.auto_retriever.auto_retriever:Using query str: science fiction
INFO:llama_index.indices.vector_store.retrievers.auto_retriever.auto_retriever:Using filters: {'director': 'Andrei Tarkovsky'}
INFO:llama_index.indices.vector_store.retrievers.auto_retriever.auto_retriever:Using top_k: 2
INFO:elastic_transport.transport:POST http://localhost:9200/auto_retriever_movies/_search [status:200 duration:0.042s]
Out[ ]:
[]
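The retriever returns an empty list because the inferred filter `{'director': 'Andrei Tarkovsky'}` matches none of the sample nodes. We can re-enact that filter step by hand over the sample metadata in plain Python:

```python
# Metadata of the sample nodes inserted above
node_metadata = [
    {"year": 1993, "rating": 7.7, "genre": "science fiction"},
    {"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    {"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    {"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    {"year": 1995, "genre": "animated"},
]

# Apply the inferred filter {'director': 'Andrei Tarkovsky'} manually
matches = [
    md for md in node_metadata if md.get("director") == "Andrei Tarkovsky"
]
# matches == [] -- no node survives the filter, so semantic search
# has nothing to rank and the retriever returns an empty result
```

This is the filter-only failure mode working as intended: the LLM correctly inferred a constraint that the collection simply cannot satisfy.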