
Frequently Asked Questions (FAQ)#

Tip

If you haven't already, install LlamaIndex and complete the starter tutorial. If you run into unfamiliar terms, check out the high-level concepts.

In this section, we start with the code you wrote for the starter example and show you the most common ways you might want to customize it for your use case:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)

"我想将我的文档解析成更小的片段"#

# Global settings
from llama_index.core import Settings

Settings.chunk_size = 512

# Local settings
from llama_index.core.node_parser import SentenceSplitter

index = VectorStoreIndex.from_documents(
    documents, transformations=[SentenceSplitter(chunk_size=512)]
)
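
A closely related knob is how much consecutive chunks overlap. Below is a minimal sketch of setting it globally or on the splitter itself; the value of 50 tokens is just an illustrative choice, and documents is reused from the snippet above:

from llama_index.core import Settings, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Global default: 50 tokens of overlap between consecutive chunks
Settings.chunk_overlap = 50

# Local setting: configure the splitter directly
index = VectorStoreIndex.from_documents(
    documents,
    transformations=[SentenceSplitter(chunk_size=512, chunk_overlap=50)],
)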

"我想使用不同的向量存储"#

First, install the vector store you want to use. For example, to use Chroma as the vector store, you can install it via pip:

pip install llama-index-vector-stores-chroma

To learn more about the available integrations, check out LlamaHub.

Then, you can use it in your code:

import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext

chroma_client = chromadb.PersistentClient()
chroma_collection = chroma_client.create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

StorageContext defines the storage backend for where documents, embeddings, and indexes are stored. You can learn more about storage and how to customize it.

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
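
Because the embeddings now live in Chroma, later runs can skip re-ingesting the documents. Here is a minimal sketch of reloading the index straight from the existing vector store, assuming the same "quickstart" collection and the same embedding model as before:

import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Reconnect to the collection persisted by the previous run
chroma_client = chromadb.PersistentClient()
chroma_collection = chroma_client.get_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# Build the index on top of the stored embeddings; no documents needed
index = VectorStoreIndex.from_vector_store(vector_store)
query_engine = index.as_query_engine()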

"我希望在查询时获取更多上下文信息"#

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What did the author do growing up?")
print(response)

as_query_engine builds a default retriever and query engine on top of the index. You can configure the retriever and query engine by passing in keyword arguments. Here, we configure the retriever to return the 5 most similar documents (instead of the default of 2). You can learn more about retrievers and query engines.
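
For finer control, the same two pieces can be assembled by hand. A minimal sketch, assuming the index from above; RetrieverQueryEngine.from_args wires a retriever into a query engine, which is roughly what as_query_engine does for you:

from llama_index.core.query_engine import RetrieverQueryEngine

# Build the retriever explicitly, then wrap it in a query engine
retriever = index.as_retriever(similarity_top_k=5)
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("What did the author do growing up?")
print(response)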


"我想使用不同的LLM"#

# Global settings
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(
    model="mistral",
    request_timeout=60.0,
    # Manually set the context window to limit memory usage
    context_window=8000,
)

# Local settings
index.as_query_engine(
    llm=Ollama(
        model="mistral",
        request_timeout=60.0,
        # Manually set the context window to limit memory usage
        context_window=8000,
    )
)

You can learn more about customizing LLMs.
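
The same pattern works for hosted providers. A minimal sketch using the OpenAI integration (assumes pip install llama-index-llms-openai and an OPENAI_API_KEY environment variable; the model name is only an example):

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# Swap the global LLM for a hosted model instead of Ollama
Settings.llm = OpenAI(model="gpt-4o-mini", temperature=0)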


"我想使用不同的响应模式"#

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("What did the author do growing up?")
print(response)

You can learn more about query engines and response modes.
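
As a quick, hedged illustration of another mode: "refine" issues one LLM call per retrieved chunk and incrementally improves the answer across calls, trading latency for thoroughness:

# "refine" makes one LLM call per retrieved chunk and
# iteratively refines the answer across calls
query_engine = index.as_query_engine(response_mode="refine")
response = query_engine.query("What did the author do growing up?")
print(response)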


"我想实时获取响应流"#

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("What did the author do growing up?")
response.print_response_stream()

You can learn more about streaming responses.
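
If you want the raw tokens instead of printed output, the streaming response also exposes a generator. A minimal sketch, reusing the streaming query_engine from above:

response = query_engine.query("What did the author do growing up?")

# Consume the token stream manually, e.g. to forward it to a UI
for text in response.response_gen:
    print(text, end="", flush=True)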


"我想要一个聊天机器人,而不仅仅是问答功能"#

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
chat_engine = index.as_chat_engine()
response = chat_engine.chat("What did the author do growing up?")
print(response)

response = chat_engine.chat("Oh interesting, tell me more.")
print(response)

Learn more about the chat engine.
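
as_chat_engine also takes a chat_mode argument for picking a conversation strategy. A minimal sketch using the "condense_question" mode, which rewrites each message into a standalone question before retrieval:

# Condense each follow-up into a standalone question so that
# "tell me more" still retrieves relevant context
chat_engine = index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("What did the author do growing up?")
print(response)

# Start a fresh conversation by clearing the history
chat_engine.reset()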


Next Steps#

  • Want a thorough walkthrough of (almost) everything you can configure? Get started with Understanding LlamaIndex.
  • Want an in-depth understanding of specific modules? Check out the component guides.