Pinecone Vector Store¶
Pinecone is a high-performance vector database built for storing and retrieving large-scale vector data. It provides fast similarity search and efficient vector indexing, and suits applications such as recommendation systems, search engines, and natural language processing. Pinecone supports multiple programming languages and offers a simple, easy-to-use API, so developers can integrate the service with minimal effort.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-vector-stores-pinecone
In [ ]:
!pip install "llama-index>=0.9.31" "pinecone-client>=3.0.0"
In [ ]:
import logging
import sys
import os

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
Creating a Pinecone Index¶
In [ ]:
from pinecone import Pinecone, ServerlessSpec
In [ ]:
os.environ["PINECONE_API_KEY"] = "<Your Pinecone API key, from app.pinecone.io>"
os.environ["OPENAI_API_KEY"] = "sk-..."

api_key = os.environ["PINECONE_API_KEY"]
pc = Pinecone(api_key=api_key)
In [ ]:
# delete the index if needed
# pc.delete_index("quickstart")
In [ ]:
# dimensions are for text-embedding-ada-002
pc.create_index(
    name="quickstart",
    dimension=1536,
    metric="euclidean",
    spec=ServerlessSpec(cloud="aws", region="us-west-2"),
)

# If you need to create a pod-based Pinecone index, you could alternatively do this:
#
# from pinecone import Pinecone, PodSpec
#
# pc = Pinecone(api_key='xxx')
#
# pc.create_index(
#     name='my-index',
#     dimension=1536,
#     metric='cosine',
#     spec=PodSpec(
#         environment='us-east1-gcp',
#         pod_type='p1.x1',
#         pods=1
#     )
# )
#
In [ ]:
pinecone_index = pc.Index("quickstart")
Load documents, build the PineconeVectorStore and VectorStoreIndex¶
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.pinecone import PineconeVectorStore
from IPython.display import Markdown, display
Download Data¶
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Will not apply HSTS. The HSTS database must be a regular and non-world-writable file.
ERROR: could not open HSTS store at '/home/loganm/.wget-hsts'. HSTS will be disabled.
--2024-01-16 11:56:25--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’

data/paul_graham/pa 100%[===================>]  73.28K  --.-KB/s    in 0.04s

2024-01-16 11:56:25 (1.79 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
In [ ]:
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
In [ ]:
# initialize without metadata filter
from llama_index.core import StorageContext

if "OPENAI_API_KEY" not in os.environ:
    raise EnvironmentError("Environment variable OPENAI_API_KEY is not set")

vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
Upserted vectors: 0%| | 0/22 [00:00<?, ?it/s]
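The vector store above is initialized without any metadata filters. If your documents do carry metadata, retrieval can later be restricted with filters; the sketch below is purely illustrative and assumes a hypothetical `author` metadata key, which the essay loaded in this notebook does not set.

from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# Hypothetical example: only retrieve nodes whose metadata contains
# author == "paul_graham". The documents loaded above do not set this key,
# so treat this as a sketch rather than something to run against this data.
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="author", value="paul_graham")]
)
filtered_retriever = index.as_retriever(filters=filters)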
Query the Index¶
May take a minute or so for the index to be ready!
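If you prefer to wait programmatically rather than guess, a minimal sketch (assuming pinecone-client >= 3.0, where describe_index returns a status that includes a "ready" flag) is to poll the index until it reports ready:

import time

# Optional: block until Pinecone reports the "quickstart" index as ready.
while not pc.describe_index("quickstart").status["ready"]:
    time.sleep(1)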
In [ ]:
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
In [ ]:
display(Markdown(f"<b>{response}</b>"))
The author, growing up, worked on writing and programming. They wrote short stories and tried writing programs on an IBM 1401 computer. They later got a microcomputer and started programming more extensively, writing simple games and a word processor.
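To see which chunks the answer above was grounded in, one option (a small sketch using the source_nodes attribute that LlamaIndex responses expose) is to print each retrieved node's similarity score along with a snippet of its text:

# Inspect the retrieved chunks behind the answer: each source node carries
# a similarity score and the underlying text content.
for source in response.source_nodes:
    print("score:", source.score)
    print(source.node.get_content()[:200], "...\n")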