langchain.memory.entity.ConversationEntityMemory

class langchain.memory.entity.ConversationEntityMemory[source]

Bases: BaseChatMemory

Entity extractor & summarizer memory.

Extracts named entities from recent chat history and generates summaries of them. Uses a swappable entity store to persist entities across conversations. Defaults to an in-memory entity store, which can be swapped out for a Redis, SQLite, or other backend.
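
Example (a minimal sketch; assumes the langchain-openai package is installed, an OPENAI_API_KEY is set, and the model choice is illustrative):

from langchain.chains import ConversationChain
from langchain.memory import ConversationEntityMemory
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# Entity memory backed by the default in-memory entity store; pass a
# RedisEntityStore / SQLiteEntityStore as `entity_store` to persist elsewhere.
memory = ConversationEntityMemory(llm=llm)

conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=memory,
)

conversation.predict(input="Deven and Sam are working on a hackathon project.")
# Entity summaries for "Deven" and "Sam" now live in the entity store.
print(memory.entity_store.store)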

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param ai_prefix: str = 'AI'
param chat_history_key: str = 'history'
param chat_memory: BaseChatMessageHistory [Optional]
param entity_cache: List[str] = []
param entity_extraction_prompt: langchain_core.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:')
param entity_store: langchain.memory.entity.BaseEntityStore [Optional]
param entity_summarization_prompt: langchain_core.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'history', 'input', 'summary'], template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:')
param human_prefix: str = 'Human'
param input_key: Optional[str] = None
param k: int = 3
param llm: langchain_core.language_models.base.BaseLanguageModel [Required]
param output_key: Optional[str] = None
param return_messages: bool = False
async aclear() None

Clear memory contents.

Return type

None

async aload_memory_variables(inputs: Dict[str, Any]) Dict[str, Any]

Return key-value pairs given the text input to the chain.

Parameters

inputs (Dict[str, Any]) –

Return type

Dict[str, Any]

async asave_context(inputs: Dict[str, Any], outputs: Dict[str, str]) None

Save the context of this conversation to the buffer.

Parameters
  • inputs (Dict[str, Any]) –

  • outputs (Dict[str, str]) –

Return type

None

clear() None[source]

Clear memory contents.

Return type

None

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

Parameters
  • _fields_set (Optional[SetStr]) –

  • values (Any) –

Return type

Model

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include

  • update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep (bool) – set to True to make a deep copy of the model

  • self (Model) –

Returns

new model instance

Return type

Model

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

Return type

DictStrAny

classmethod from_orm(obj: Any) Model
Parameters

obj (Any) –

Return type

Model

classmethod get_lc_namespace() List[str]

Get the namespace of the langchain object.

For example, if the class is `langchain.llms.openai.OpenAI`, then the namespace is ["langchain", "llms", "openai"].

Return type

List[str]

classmethod is_lc_serializable() bool

Is this class serializable?

Return type

bool

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

Parameters
  • include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

  • encoder (Optional[Callable[[Any], Any]]) –

  • models_as_dict (bool) –

  • dumps_kwargs (Any) –

Return type

unicode

classmethod lc_id() List[str]

A unique identifier for this class, for serialization purposes.

The unique identifier is a list of strings that describes the path to the object.

Return type

List[str]

load_memory_variables(inputs: Dict[str, Any]) Dict[str, Any][source]

Returns the chat history and all generated entities with summaries, if available, and updates or clears the recent entity cache.

New entity names may be found when this method is called, before the entity summaries are generated, so the entity cache values may be empty if no entity descriptions have been generated yet.
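A minimal sketch of calling this method directly (the `memory` instance is the one constructed in the example above; the printed summary is illustrative):

variables = memory.load_memory_variables({"input": "What do you know about Deven?"})
print(variables["history"])    # buffer of the last k exchanges
print(variables["entities"])   # e.g. {"Deven": "Deven is working on a hackathon project with Sam."}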

Parameters

inputs (Dict[str, Any]) –

Return type

Dict[str, Any]

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
Parameters
  • path (Union[str, Path]) –

  • content_type (unicode) –

  • encoding (unicode) –

  • proto (Protocol) –

  • allow_pickle (bool) –

Return type

Model

classmethod parse_obj(obj: Any) Model
Parameters

obj (Any) –

Return type

Model

classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
Parameters
  • b (Union[str, bytes]) –

  • content_type (unicode) –

  • encoding (unicode) –

  • proto (Protocol) –

  • allow_pickle (bool) –

Return type

Model

save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) None[source]

Save the context from this conversation history to the entity store.

Generates a summary for each entity in the entity cache by prompting the model, and saves these summaries to the entity store.
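A minimal sketch of the manual load/save cycle this implies (a chain normally performs both calls for you; the input and output keys shown are illustrative):

inputs = {"input": "Sam founded a startup called Daimon."}
memory.load_memory_variables(inputs)   # extracts entities and fills the entity cache
memory.save_context(inputs, {"response": "Interesting, tell me more about Daimon."})
print(memory.entity_store.get("Daimon", ""))   # updated summary for "Daimon"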

Parameters
  • inputs (Dict[str, Any]) –

  • outputs (Dict[str, str]) –

Return type

None

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
Parameters
  • by_alias (bool) –

  • ref_template (unicode) –

Return type

DictStrAny

classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
Parameters
  • by_alias (bool) –

  • ref_template (unicode) –

  • dumps_kwargs (Any) –

Return type

unicode

to_json() Union[SerializedConstructor, SerializedNotImplemented]
Return type

Union[SerializedConstructor, SerializedNotImplemented]

to_json_not_implemented() SerializedNotImplemented
Return type

SerializedNotImplemented

classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

Parameters

localns (Any) –

Return type

None

classmethod validate(value: Any) Model
Parameters

value (Any) –

Return type

Model

property buffer: List[langchain_core.messages.base.BaseMessage]

Access chat memory messages.
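For example, iterating over the raw messages held in the chat memory (a small illustrative sketch):

for message in memory.buffer:
    print(type(message).__name__, ":", message.content)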

property lc_attributes: Dict

List of attribute names that should be included in the serialized kwargs.

These attributes must be accepted by the constructor.

property lc_secrets: Dict[str, str]

A map of constructor argument names to secret ids.

For example,

{"openai_api_key": "OPENAI_API_KEY"}

property memory_variables: List[str]

Will always return the list of memory variables.


Examples using ConversationEntityMemory