ConversationKGMemory#

class langchain_community.memory.kg.ConversationKGMemory[source]#

Bases: BaseChatMemory

Knowledge graph conversation memory.

Integrates with an external knowledge graph to store and retrieve information about knowledge triples in the conversation.

param ai_prefix: str = 'AI'#
param chat_memory: BaseChatMessageHistory [Optional]#
param entity_extraction_prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], input_types={}, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:')#
param human_prefix: str = 'Human'#
param input_key: str | None = None#
param k: int = 2#
param kg: NetworkxEntityGraph [Optional]#
param knowledge_extraction_prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], input_types={}, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:")#
param llm: BaseLanguageModel [Required]#
param output_key: str | None = None#
param return_messages: bool = False#
param summary_message_cls: Type[BaseMessage] = <class 'langchain_core.messages.system.SystemMessage'>#

Number of previous utterances to include in the context.
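
A minimal usage sketch (not part of the reference entries above). The ChatOpenAI model from langchain_openai is an assumption here; any BaseLanguageModel can be passed as llm.

from langchain_community.memory.kg import ConversationKGMemory
from langchain_openai import ChatOpenAI  # assumed provider; any BaseLanguageModel works

llm = ChatOpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm, k=2)

# Store one exchange; the LLM extracts knowledge triples such as (Sam, is, a friend).
memory.save_context({"input": "Sam is my friend."}, {"output": "Nice, tell me more about Sam."})

# Retrieve knowledge relevant to the current input.
print(memory.load_memory_variables({"input": "Who is Sam?"}))
# e.g. {'history': 'On Sam: Sam is a friend.'}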

async aclear() → None#

Clear memory contents.

Return type:

None

async aload_memory_variables(inputs: dict[str, Any]) → dict[str, Any]#

Async return key-value pairs given the text input to the chain.

Parameters:

inputs (dict[str, Any]) – The inputs to the chain.

Returns:

A dictionary of key-value pairs.

Return type:

dict[str, Any]
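
A hedged sketch of the async variants (asave_context is documented just below); it assumes memory was constructed as in the example above.

import asyncio

async def main() -> None:
    # Persist one exchange, then read the relevant knowledge back asynchronously.
    await memory.asave_context({"input": "Sam lives in Toronto."}, {"output": "Noted."})
    variables = await memory.aload_memory_variables({"input": "Where does Sam live?"})
    print(variables)  # a dict keyed by the memory variable, e.g. {'history': ...}

asyncio.run(main())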

async asave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None#

Save context from this conversation to buffer.

Parameters:
  • inputs (Dict[str, Any])

  • outputs (Dict[str, str])

Return type:

None

clear() → None[source]#

Clear memory contents.

Return type:

None

get_current_entities(input_string: str) → List[str][source]#
Parameters:

input_string (str)

Return type:

List[str]
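
A brief sketch, assuming the memory instance from the earlier example: the method runs entity_extraction_prompt against the LLM to pull proper nouns out of the input string.

entities = memory.get_current_entities("What do you know about Sam?")
print(entities)  # e.g. ['Sam']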

get_knowledge_triplets(input_string: str) → List[KnowledgeTriple][source]#
Parameters:

input_string (str)

Return type:

List[KnowledgeTriple]
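
A brief sketch, again assuming the memory instance from above. Pronouns such as "her" are resolved against the conversation history already stored in chat_memory.

triples = memory.get_knowledge_triplets("Her favorite color is red.")
print(triples)
# e.g. [KnowledgeTriple(subject='Sam', predicate='favorite color is', object_='red')]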

load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#

Return history buffer.

Parameters:

inputs (Dict[str, Any])

Return type:

Dict[str, Any]
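
A brief sketch of the synchronous call, assuming the memory instance from above. With return_messages=False (the default) the knowledge comes back as a single string under the memory key; with return_messages=True it is returned as a list of summary_message_cls messages.

print(memory.load_memory_variables({"input": "Who is Sam?"}))
# e.g. {'history': 'On Sam: Sam is a friend. Sam lives in Toronto.'}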

save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#

Save context from this conversation to buffer.

Parameters:
  • inputs (Dict[str, Any])

  • outputs (Dict[str, str])

Return type:

None
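
A brief sketch of how saved context ends up in the graph, assuming the memory instance from above; kg.get_triples() is assumed from NetworkxEntityGraph and its exact output shape may differ by version.

memory.save_context({"input": "Sam plays the mandolin."}, {"output": "Good to know."})
print(memory.kg.get_triples())
# e.g. [('Sam', 'mandolin', 'plays'), ...]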