SQLite as a Vector Store with SQLiteVec
This notebook covers how to get started with the SQLiteVec vector store.
SQLite-Vec is an SQLite extension designed for vector search, emphasizing local-first operation and easy integration without an external server. It is the successor to SQLite-VSS by the same author. It is written in zero-dependency C and designed to be easy to build and use.
This notebook shows how to use the SQLiteVec vector database.
Setup
You'll need to install langchain-community with pip install -qU langchain-community to use this integration.
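For example, in a notebook cell:
# Install the langchain-community integration package.
%pip install -qU langchain-community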
# You need to install sqlite-vec as a dependency.
%pip install --upgrade --quiet sqlite-vec
Credentials
SQLiteVec does not require any credentials, as the vector store is a simple SQLite file.
Initialization
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec

embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
vector_store = SQLiteVec(
    table="state_union", db_file="/tmp/vec.db", embedding=embedding_function
)
API Reference: SentenceTransformerEmbeddings | SQLiteVec
Manage vector store
Add items to vector store
vector_store.add_texts(texts=["Ketanji Brown Jackson is awesome", "foo", "bar"])
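If you want to attach metadata to each text, the standard VectorStore add_texts signature takes an optional metadatas list (one dict per text). A minimal sketch, assuming that signature is supported here:
# Sketch: add texts with per-document metadata (assumes the standard
# add_texts(texts, metadatas=...) signature from the VectorStore interface).
vector_store.add_texts(
    texts=["Ketanji Brown Jackson is awesome", "foo", "bar"],
    metadatas=[{"source": "demo"}, {"source": "demo"}, {"source": "demo"}],
)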
Update items in vector store
Not supported yet.
Delete items from vector store
Not supported yet.
Query vector store
Query directly
data = vector_store.similarity_search("Ketanji Brown Jackson", k=4)
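similarity_search returns a list of Document objects ordered by relevance; a quick way to inspect the hits:
# Print the text of each returned Document.
for doc in data:
    print(doc.page_content)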
Query by turning into retriever
Not supported yet.
Usage for retrieval-augmented generation
Refer to the sqlite-vec documentation at https://alexgarcia.xyz/sqlite-vec/ for more on how to use it as part of retrieval-augmented generation; a minimal sketch is shown below.
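As a minimal sketch of wiring the store into a RAG prompt: retrieve the most relevant chunks with similarity_search and stuff them into a chat prompt. This assumes langchain-openai is installed and OPENAI_API_KEY is set; the model name is an assumption, so substitute whatever chat model you use.
# Minimal RAG sketch (ChatOpenAI and the model name are assumptions).
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

question = "What did the president say about Ketanji Brown Jackson?"
# Retrieve relevant chunks and join them into a single context string.
docs = vector_store.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in docs)
answer = llm.invoke(prompt.format(context=context, question=question))
print(answer.content)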
API reference
For detailed documentation of all SQLiteVec features and configurations, head to the API reference: https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.sqlitevec.SQLiteVec.html
Other examples
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
texts = [doc.page_content for doc in docs]
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it in sqlite-vec in a table named state_union.
# the db_file parameter is the name of the file you want
# as your sqlite database.
db = SQLiteVec.from_texts(
    texts=texts,
    embedding=embedding_function,
    table="state_union",
    db_file="/tmp/vec.db",
)
# query it
query = "What did the president say about Ketanji Brown Jackson"
data = db.similarity_search(query)
# print results
data[0].page_content
'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'
Example using an existing SQLite connection
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings.sentence_transformer import (
    SentenceTransformerEmbeddings,
)
from langchain_community.vectorstores import SQLiteVec
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
texts = [doc.page_content for doc in docs]
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
connection = SQLiteVec.create_connection(db_file="/tmp/vec.db")
db1 = SQLiteVec(
    table="state_union", embedding=embedding_function, connection=connection
)
db1.add_texts(["Ketanji Brown Jackson is awesome"])
# query it again
query = "What did the president say about Ketanji Brown Jackson"
data = db1.similarity_search(query)
# print results
data[0].page_content
'Ketanji Brown Jackson is awesome'