Vector Memory¶
The vector memory module uses vector search (backed by a vector store) to retrieve conversation content relevant to the user's input.
This notebook shows you how to use the VectorMemory
class and walks through its main functions. A typical use case for vector memory is long-term storage of chat messages.
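To build intuition for what the module does, here is a toy sketch of vector-based retrieval in plain Python: embed each stored message, then return the message most similar to the query embedding. The embed() function below is a stand-in bag-of-letters "embedding" for illustration only, not the library's implementation.

```python
from math import sqrt


def embed(text: str) -> list[float]:
    # Hypothetical letter-count "embedding", for illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


memory = ["Jerry likes juice.", "Bob likes burgers.", "Alice likes apples."]
query = "What does Jerry like?"

# Top-1 retrieval, analogous to retriever_kwargs={"similarity_top_k": 1}.
best = max(memory, key=lambda m: cosine(embed(m), embed(query)))
# best == "Jerry likes juice."
```

In the real module, embed() is a proper embedding model (e.g. OpenAIEmbedding) and the similarity search runs inside a vector store, but the retrieval idea is the same.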
Initialize and Experiment with Memory Module¶
Here we initialize a raw memory module and demonstrate its functions - putting ChatMessage objects into memory and retrieving them.
- Note that retriever_kwargs
takes the same arguments you would specify on VectorIndexRetriever
or index.as_retriever(..)
.
In [ ]:
from llama_index.core.memory import VectorMemory
from llama_index.embeddings.openai import OpenAIEmbedding

vector_memory = VectorMemory.from_defaults(
    vector_store=None,  # leave as None to use the default in-memory vector store
    embed_model=OpenAIEmbedding(),
    retriever_kwargs={"similarity_top_k": 1},
)
In [ ]:
from llama_index.core.llms import ChatMessage
msgs = [
ChatMessage.from_str("Jerry likes juice.", "user"),
ChatMessage.from_str("Bob likes burgers.", "user"),
ChatMessage.from_str("Alice likes apples.", "user"),
]
In [ ]:
# load messages into memory
for m in msgs:
    vector_memory.put(m)
In [ ]:
# retrieve from memory
msgs = vector_memory.get("What does Jerry like?")
msgs
Out[ ]:
[ChatMessage(role=<MessageRole.USER: 'user'>, content='Jerry likes juice.', additional_kwargs={})]
In [ ]:
vector_memory.reset()
Now let's try resetting and trying again. This time, we'll add an assistant message. Note that by default, user/assistant messages are bundled together.
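One plausible way to picture that bundling, consistent with the retrieval output further below: each user message starts a new batch that also collects the assistant replies following it, so a similarity hit on any message in a batch returns the whole batch. This is a sketch of the idea, not llama-index internals.

```python
# Toy sketch of the default user/assistant bundling, for illustration only.
messages = [
    ("user", "Jerry likes burgers."),
    ("user", "Bob likes apples."),
    ("assistant", "Indeed, Bob likes apples."),
    ("user", "Alice likes juice."),
]

batches: list[list[tuple[str, str]]] = []
current: list[tuple[str, str]] = []
for role, content in messages:
    if role == "user" and current:
        # a new user message closes the previous batch
        batches.append(current)
        current = []
    current.append((role, content))
if current:
    batches.append(current)

# batches[1] now holds both the "Bob likes apples." user message and the
# assistant reply, so retrieving one brings back the other as well.
```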
In [ ]:
msgs = [
ChatMessage.from_str("Jerry likes burgers.", "user"),
ChatMessage.from_str("Bob likes apples.", "user"),
ChatMessage.from_str("Indeed, Bob likes apples.", "assistant"),
ChatMessage.from_str("Alice likes juice.", "user"),
]
vector_memory.set(msgs)
In [ ]:
msgs = vector_memory.get("What does Bob like?")
msgs
Out[ ]:
[ChatMessage(role=<MessageRole.USER: 'user'>, content='Bob likes apples.', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Indeed, Bob likes apples.', additional_kwargs={})]