In [ ]:
import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
Setup¶
In this notebook, we will use two very similar pages from our documentation, each stored in a separate index.
In [ ]:
from llama_index.core import SimpleDirectoryReader
documents_1 = SimpleDirectoryReader(
input_files=["../../community/integrations/vector_stores.md"]
).load_data()
documents_2 = SimpleDirectoryReader(
input_files=["../../module_guides/storing/vector_stores.md"]
).load_data()
In [ ]:
from llama_index.core import VectorStoreIndex
index_1 = VectorStoreIndex.from_documents(documents_1)
index_2 = VectorStoreIndex.from_documents(documents_2)
Fuse the Indexes!¶
In this step, we fuse our indexes into a single retriever. This retriever will also augment our query by generating additional queries related to the original question, and it aggregates the results.
This setup will run the query 4 times: once with your original query, plus 3 generated queries.
By default, it uses the following prompt to generate the extra queries:
QUERY_GEN_PROMPT = (
    "You are a helpful assistant that generates multiple search queries based on "
    "a single input query. Generate {num_queries} search queries, one on each line, "
    "related to the following input query:\n"
    "Query: {query}\n"
    "Queries:\n"
)
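To see how "aggregating the results" can work, here is a minimal, self-contained sketch of reciprocal rank fusion (RRF), one common way to merge ranked lists from several retrievers and queries. This is illustrative only and not LlamaIndex's exact implementation; `fuse_results` and its `k` smoothing constant are assumptions for this sketch.

```python
from collections import defaultdict


def fuse_results(ranked_lists, top_k=2, k=60.0):
    """Merge ranked result lists with reciprocal rank fusion (RRF).

    ranked_lists: list of rankings, each a list of document ids, best first.
    Every document earns 1 / (k + rank) for each ranking it appears in,
    so documents retrieved by multiple queries/retrievers rise to the top.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank)
    fused = sorted(scores, key=scores.get, reverse=True)
    return fused[:top_k]


# "doc_a" is returned by both rankings, so it wins the fused ranking.
print(fuse_results([["doc_a", "doc_b"], ["doc_c", "doc_a"]]))
# ['doc_a', 'doc_c']
```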
In [ ]:
from llama_index.core.retrievers import QueryFusionRetriever

retriever = QueryFusionRetriever(
    [index_1.as_retriever(), index_2.as_retriever()],
    similarity_top_k=2,
    num_queries=4,  # set this to 1 to disable query generation
    use_async=True,
    verbose=True,
    # query_gen_prompt="...",  # we could override the query generation prompt here
)
In [ ]:
# apply nested asyncio to run in a notebook
import nest_asyncio

nest_asyncio.apply()
In [ ]:
nodes_with_scores = retriever.retrieve("How do I setup a chroma vector store?")
Generated queries:
1. What are the steps to set up a chroma vector store?
2. Best practices for configuring a chroma vector store
3. Troubleshooting common issues when setting up a chroma vector store
In [ ]:
for node in nodes_with_scores:
print(f"Score: {node.score:.2f} - {node.text[:100]}...")
Score: 0.78 - # Vector Stores Vector stores contain embedding vectors of ingested document chunks (and sometimes ...
Score: 0.78 - # Using Vector Stores LlamaIndex offers multiple integration points with vector stores / vector dat...
Use in a Query Engine!¶
Now, we can plug our retriever into a query engine to synthesize natural language responses.
In [ ]:
from llama_index.core.query_engine import RetrieverQueryEngine
query_engine = RetrieverQueryEngine.from_args(retriever)
In [ ]:
response = query_engine.query(
"How do I setup a chroma vector store? Can you give an example?"
)
Generated queries:
1. How to set up a chroma vector store?
2. Step-by-step guide for creating a chroma vector store.
3. Examples of chroma vector store setups and configurations.
In [ ]:
from llama_index.core.response.notebook_utils import display_response
display_response(response)
Final Response:
To set up a Chroma vector store, you need to follow these steps:
- Import the necessary libraries:
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
- Create a Chroma client:
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
- Construct the vector store:
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
Here's an example of how to set up a Chroma vector store using the above steps:
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
# Creating a Chroma client
# EphemeralClient operates purely in-memory, PersistentClient will also save to disk
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
# construct vector store
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
This example demonstrates how to create a Chroma client, create a collection named "quickstart", and then construct a Chroma vector store using that collection.