How to parse text from message objects
Prerequisites
This guide assumes familiarity with the following concepts:
LangChain message objects support content in a variety of formats, including text, multimodal data, and lists of content block dictionaries.
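As a minimal sketch, using hand-constructed AIMessage objects rather than real model output (the variable names and block values below are illustrative only), the two most common content shapes look like this:

from langchain_core.messages import AIMessage

# Content as a plain string
string_message = AIMessage(content="Hello!")

# Content as a list of content block dictionaries
block_message = AIMessage(
    content=[
        {"type": "text", "text": "Hello!"},
        {"type": "tool_use", "id": "toolu_example", "name": "get_weather", "input": {"location": "SF"}},
    ]
)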
The format of a chat model's response content can depend on the provider. For example, Anthropic's chat models return string content for a typical string input:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'
But when tool calls are generated, the response content is structured into content blocks that convey the model's reasoning process:
from langchain_core.tools import tool
@tool
def get_weather(location: str) -> str:
    """Get the weather from a location."""
    return "Sunny."
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
'type': 'text'},
{'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
'input': {'location': 'San Francisco, CA'},
'name': 'get_weather',
'type': 'tool_use'}]
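Extracting the text from such a response by hand means checking what shape the content takes and filtering for text blocks. A minimal sketch of that manual approach (the extract_text helper is hypothetical, not part of LangChain):

def extract_text(content) -> str:
    # String content can be returned as-is
    if isinstance(content, str):
        return content
    # Otherwise, join the "text" fields of any text-type blocks
    return "".join(
        block["text"]
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

extract_text(response.content)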
To automatically parse text from a message object, regardless of the format of the underlying content, we can use StrOutputParser. We can compose it with a chat model as follows:
from langchain_core.output_parsers import StrOutputParser
chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser
StrOutputParser simplifies extracting text from message objects:
response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.
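The parser can also be applied directly to a message object you already have, rather than only as part of a chain. A sketch, assuming a fresh message returned by llm_with_tools.invoke:

from langchain_core.output_parsers import StrOutputParser

message = llm_with_tools.invoke("What's the weather in San Francisco, CA?")

# Invoke the parser on the message object directly
StrOutputParser().invoke(message)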
This is particularly useful in streaming contexts:
for chunk in chain.stream("What's the weather in San Francisco, CA?"):
    print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||
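The same composition also works with asynchronous streaming. A sketch, assuming the code runs outside an existing event loop:

import asyncio

async def main():
    # astream yields parsed text chunks, just like stream
    async for chunk in chain.astream("What's the weather in San Francisco, CA?"):
        print(chunk, end="|")

asyncio.run(main())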
See the API reference for more information.