
Using Typesense for Embeddings Search


This notebook takes you through a simple flow to download some data, embed it, and then index and search it using a selection of vector databases. This is a common requirement for customers who want to store and search our embeddings with their own data in a secure environment to support production use cases such as chatbots, topic modelling and more.

What is a Vector Database

A vector database is a database made to store, manage and search embedding vectors. The use of embeddings to encode unstructured data (text, audio, video and more) as vectors for consumption by machine-learning models has exploded in recent years, due to the increasing effectiveness of AI in solving use cases involving natural language, image recognition and other unstructured forms of data. Vector databases have emerged as an effective solution for enterprises to deliver and scale these use cases.

Why use a Vector Database

Vector databases enable enterprises to take many of the embeddings use cases we've shared in this repo (question answering, chatbots and recommendation services, for example) and make use of them in a secure, scalable environment. Many of our customers solve problems with embeddings at small scale, but performance and security hold them back from going into production - we see vector databases as a key component in solving that, and in this guide we'll walk through the basics of embedding text data, storing it in a vector database and using it for semantic search.

Demo Flow

The demo flow is:

- Setup: Import packages and set any required variables
- Load data: Load a dataset and embed it using OpenAI embeddings
- Typesense
    - Setup: Set up the Typesense Python client. For more details go here
    - Index Data: We'll create a collection and index it for both __titles__ and __content__.
    - Search Data: Run a few example queries with various goals in mind.

Once you've run through this notebook you should have a basic understanding of how to set up and use vector databases, and can move on to more complex use cases making use of our embeddings.

Setup

Import the required libraries and set the embedding model that we'd like to use.

# We'll need to install the Typesense client
!pip install typesense

# Install wget to download the zip file
!pip install wget

import openai

from typing import List, Iterator
import pandas as pd
import numpy as np
import os
import wget
from ast import literal_eval

# Typesense's client library for Python
import typesense

# I've set this to our new embeddings model, this can be changed to the embedding model of your choice
EMBEDDING_MODEL = "text-embedding-3-small"

# Ignore unclosed SSL socket warnings - optional in case you get these errors
import warnings

warnings.filterwarnings(action="ignore", message="unclosed", category=ResourceWarning)
warnings.filterwarnings("ignore", category=DeprecationWarning)

Load data

In this section we'll load embedded data that we've prepared prior to this session.

embeddings_url = 'https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip'

# The file is ~700 MB so this will take some time
wget.download(embeddings_url)

import zipfile
with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip","r") as zip_ref:
    zip_ref.extractall("../data")

article_df = pd.read_csv('../data/vector_database_wikipedia_articles_embedded.csv')

article_df.head()

id url title text title_vector content_vector vector_id
0 1 https://simple.wikipedia.org/wiki/April April April is the fourth month of the year in the J... [0.001009464613161981, -0.020700545981526375, ... [-0.011253940872848034, -0.013491976074874401,... 0
1 2 https://simple.wikipedia.org/wiki/August August August (Aug.) is the eighth month of the year ... [0.0009286514250561595, 0.000820168002974242, ... [0.0003609954728744924, 0.007262262050062418, ... 1
2 6 https://simple.wikipedia.org/wiki/Art Art Art is a creative activity that expresses imag... [0.003393713850528002, 0.0061537534929811954, ... [-0.004959689453244209, 0.015772193670272827, ... 2
3 8 https://simple.wikipedia.org/wiki/A A A or a is the first letter of the English alph... [0.0153952119871974, -0.013759135268628597, 0.... [0.024894846603274345, -0.022186409682035446, ... 3
4 9 https://simple.wikipedia.org/wiki/Air Air Air refers to the Earth's atmosphere. Air is a... [0.02224554680287838, -0.02044147066771984, -0... [0.021524671465158463, 0.018522677943110466, -... 4
# Read vectors from strings back into a list
article_df['title_vector'] = article_df.title_vector.apply(literal_eval)
article_df['content_vector'] = article_df.content_vector.apply(literal_eval)

# Set vector_id to be a string
article_df['vector_id'] = article_df['vector_id'].apply(str)
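
Since each embedding column arrives from the CSV as a string, literal_eval does the parsing; here is a minimal, self-contained sketch of what the conversion above does per cell (the sample string is made up):

```python
from ast import literal_eval

# The CSV stores each embedding as a stringified Python list,
# e.g. "[0.001, -0.02, ...]"; literal_eval safely parses it back
# into a list of floats without executing arbitrary code.
raw = "[0.001009, -0.020700, 0.013491]"
vector = literal_eval(raw)

print(type(vector).__name__)                      # list
print(len(vector))                                # 3
print(all(isinstance(x, float) for x in vector))  # True
```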

article_df.info(show_counts=True)

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 25000 entries, 0 to 24999
Data columns (total 7 columns):
 #   Column          Non-Null Count  Dtype
---  ------          --------------  -----
 0   id              25000 non-null  int64
 1   url             25000 non-null  object
 2   title           25000 non-null  object
 3   text            25000 non-null  object
 4   title_vector    25000 non-null  object
 5   content_vector  25000 non-null  object
 6   vector_id       25000 non-null  object
dtypes: int64(1), object(6)
memory usage: 1.3+ MB

Typesense

The next vector store we'll look at is Typesense, an open-source, in-memory search engine that you can either self-host or run on Typesense Cloud.

Typesense focuses on performance by storing the entire index in RAM (with a backup on disk), and on providing an out-of-the-box developer experience by simplifying the available options and setting good defaults. It also lets you combine attribute-based filtering with vector queries.

For this example, we will set up a local docker-based Typesense server, index our vectors in Typesense and then run some nearest-neighbor search queries. If you use Typesense Cloud, you can skip the docker setup part and just obtain the hostname and API keys from your cluster dashboard.

Setup

To run Typesense locally, you'll need Docker. Following the instructions in Typesense's documentation here, we created an example docker-compose.yml file in this repo, saved at ./typesense/docker-compose.yml.

After starting Docker, you can start Typesense locally by navigating to the examples/vector_databases/typesense/ directory and running docker-compose up -d.

The default API key is set to xyz in the Docker compose file, and the default Typesense port to 8108.
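
For reference, a single-node Typesense docker-compose.yml generally looks like the following sketch (the image tag and volume path here are assumptions; prefer the file checked into this repo over this snippet):

```yaml
version: "3.4"
services:
  typesense:
    image: typesense/typesense:0.24.1   # version tag is an assumption
    restart: on-failure
    ports:
      - "8108:8108"                     # default Typesense port
    volumes:
      - ./typesense-data:/data          # the in-RAM index is backed up here
    command: "--data-dir /data --api-key=xyz --enable-cors"
```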

import typesense

typesense_client = typesense.Client({
    "nodes": [{
        "host": "localhost",  # For Typesense Cloud use xxx.a1.typesense.net
        "port": "8108",       # For Typesense Cloud use 443
        "protocol": "http"    # For Typesense Cloud use https
    }],
    "api_key": "xyz",
    "connection_timeout_seconds": 60
})

Index data

To index vectors in Typesense, we'll first create a Collection (which is a collection of Documents) and turn on vector indexing for particular fields. You can even store multiple vector fields within a single document.

# Delete the collection if it already exists
try:
    typesense_client.collections['wikipedia_articles'].delete()
except Exception as e:
    pass

# Create a new collection

schema = {
    "name": "wikipedia_articles",
    "fields": [
        {
            "name": "content_vector",
            "type": "float[]",
            "num_dim": len(article_df['content_vector'][0])
        },
        {
            "name": "title_vector",
            "type": "float[]",
            "num_dim": len(article_df['title_vector'][0])
        }
    ]
}

create_response = typesense_client.collections.create(schema)
print(create_response)

print("Created new collection wikipedia_articles")

{'created_at': 1687165065, 'default_sorting_field': '', 'enable_nested_fields': False, 'fields': [{'facet': False, 'index': True, 'infix': False, 'locale': '', 'name': 'content_vector', 'num_dim': 1536, 'optional': False, 'sort': False, 'type': 'float[]'}, {'facet': False, 'index': True, 'infix': False, 'locale': '', 'name': 'title_vector', 'num_dim': 1536, 'optional': False, 'sort': False, 'type': 'float[]'}], 'name': 'wikipedia_articles', 'num_documents': 0, 'symbols_to_index': [], 'token_separators': []}
Created new collection wikipedia_articles
# Upsert the vector data into the collection we just created
#
# NOTE: This can take several minutes, especially if you are on an M1 chip and Docker is running in emulation mode.

print("Indexing vectors in Typesense...")

document_counter = 0
documents_batch = []

for k, v in article_df.iterrows():
    # Create a document with the vector data

    # Notice how you can add any fields that you haven't added to the schema to the document.
    # These will be stored on disk and returned when the document is a hit.
    # This is useful to store attributes required for display purposes.

    document = {
        "title_vector": v["title_vector"],
        "content_vector": v["content_vector"],
        "title": v["title"],
        "content": v["text"],
    }
    documents_batch.append(document)
    document_counter = document_counter + 1

    # Upsert a batch of 100 documents
    if document_counter % 100 == 0 or document_counter == len(article_df):
        response = typesense_client.collections['wikipedia_articles'].documents.import_(documents_batch)
        # print(response)

        documents_batch = []
        print(f"Processed {document_counter} / {len(article_df)} ")

print(f"Imported ({len(article_df)}) articles.")

Indexing vectors in Typesense...
Processed 100 / 25000
Processed 200 / 25000
...
Processed 24900 / 25000
Processed 25000 / 25000
Imported (25000) articles.
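
The flush-every-100 pattern above, including the final partial batch, can be sketched independently of Typesense; batch_upsert and flush below are hypothetical names, with flush standing in for the documents.import_ call:

```python
def batch_upsert(items, flush, batch_size=100):
    """Collect items and flush every batch_size items, plus a final
    flush for any remainder - the same pattern as the loop above."""
    counter = 0
    batch = []
    for item in items:
        batch.append(item)
        counter += 1
        if counter % batch_size == 0 or counter == len(items):
            flush(batch)
            batch = []
    return counter

# flush here just records batch sizes instead of calling Typesense
flushed = []
total = batch_upsert(list(range(250)), lambda b: flushed.append(len(b)))
print(total)    # 250
print(flushed)  # [100, 100, 50]
```

Note that when the item count divides evenly by the batch size, the `counter == len(items)` check does not trigger an extra empty flush, since the modulo branch already fired on the same iteration.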
# Check the number of documents imported

collection = typesense_client.collections['wikipedia_articles'].retrieve()
print(f'Collection has {collection["num_documents"]} documents')

Collection has 25000 documents

Search data

Now that we've imported our vectors into Typesense, we can do some nearest-neighbor searches on the title_vector or content_vector field.

def query_typesense(query, field='title', top_k=20):
    # Create an embedding vector from the user query
    openai.api_key = os.getenv("OPENAI_API_KEY", "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")
    embedded_query = openai.Embedding.create(
        input=query,
        model=EMBEDDING_MODEL,
    )['data'][0]['embedding']

    typesense_results = typesense_client.multi_search.perform({
        "searches": [{
            "q": "*",
            "collection": "wikipedia_articles",
            "vector_query": f"{field}_vector:([{','.join(str(v) for v in embedded_query)}], k:{top_k})"
        }]
    }, {})

    return typesense_results
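
The vector_query string built inside query_typesense has the shape <field>_vector:([v1,v2,...], k:<top_k>). A tiny helper makes the format easier to see (build_vector_query is a hypothetical name, not part of the Typesense client):

```python
def build_vector_query(field, embedding, k):
    """Format a Typesense vector_query string the same way as the
    f-string in query_typesense: '<field>_vector:([v1,v2,...], k:<k>)'."""
    joined = ','.join(str(v) for v in embedding)
    return f"{field}_vector:([{joined}], k:{k})"

print(build_vector_query('title', [0.1, -0.2, 0.3], 5))
# title_vector:([0.1,-0.2,0.3], k:5)
```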

query_results = query_typesense('modern art in Europe', 'title')

for i, hit in enumerate(query_results['results'][0]['hits']):
    document = hit["document"]
    vector_distance = hit["vector_distance"]
    print(f'{i + 1}. {document["title"]} (Distance: {vector_distance})')

1. Museum of Modern Art (Distance: 0.12482291460037231)
2. Western Europe (Distance: 0.13255876302719116)
3. Renaissance art (Distance: 0.13584274053573608)
4. Pop art (Distance: 0.1396539807319641)
5. Northern Europe (Distance: 0.14534103870391846)
6. Hellenistic art (Distance: 0.1472070813179016)
7. Modernist literature (Distance: 0.15296930074691772)
8. Art film (Distance: 0.1567266583442688)
9. Central Europe (Distance: 0.15741699934005737)
10. European (Distance: 0.1585891842842102)
query_results = query_typesense('Famous battles in Scottish history', 'content')

for i, hit in enumerate(query_results['results'][0]['hits']):
    document = hit["document"]
    vector_distance = hit["vector_distance"]
    print(f'{i + 1}. {document["title"]} (Distance: {vector_distance})')

1. Battle of Bannockburn (Distance: 0.1306111216545105)
2. Wars of Scottish Independence (Distance: 0.1384994387626648)
3. 1651 (Distance: 0.14744246006011963)
4. First War of Scottish Independence (Distance: 0.15033596754074097)
5. Robert I of Scotland (Distance: 0.15376019477844238)
6. 841 (Distance: 0.15609073638916016)
7. 1716 (Distance: 0.15615153312683105)
8. 1314 (Distance: 0.16280347108840942)
9. 1263 (Distance: 0.16361045837402344)
10. William Wallace (Distance: 0.16464537382125854)
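
The vector_distance values above are cosine distances (1 minus cosine similarity), so lower means more similar; here is a minimal pure-Python sketch of the metric, assuming Typesense's default cosine metric:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (norm_a * norm_b)

print(round(cosine_distance([1.0, 0.0], [1.0, 0.0]), 6))  # 0.0 (identical)
print(round(cosine_distance([1.0, 0.0], [0.0, 1.0]), 6))  # 1.0 (orthogonal)
```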

Thanks for following along - you're now equipped to set up your own vector database and use embeddings to do all kinds of cool stuff. Enjoy! For more complex use cases, please continue to work through the other cookbook examples in this repo.