
How to add tools to chatbots

Prerequisites

This guide assumes familiarity with the following concepts:

This section covers how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

note

This how-to guide previously built a chatbot using RunnableWithMessageHistory. You can access this version of the guide in the v0.2 docs.

As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of LangGraph persistence to incorporate memory into new LangChain applications.

If your code is already relying on RunnableWithMessageHistory or BaseChatMessageHistory, you do not need to make any changes. We do not plan on deprecating this functionality in the near future, as it works for simple chat applications, and any code that uses RunnableWithMessageHistory will continue to work as expected.

Please see How to migrate to LangGraph Memory for more details.

Setup

For this guide, we'll be using a tool-calling agent with a single tool for searching the web. By default, it will be powered by Tavily, but you can swap it out for any similar tool. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website and install the following packages:

%pip install --upgrade --quiet langchain-community langchain-openai tavily-python langgraph

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

if not os.environ.get("TAVILY_API_KEY"):
    os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API Key:")
OpenAI API Key: ········
Tavily API Key: ········

You will also need to set your OpenAI key as OPENAI_API_KEY and your Tavily API key as TAVILY_API_KEY.

Create an agent

Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.

First, let's initialize Tavily and an OpenAI chat model capable of tool calling:

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

tools = [TavilySearchResults(max_results=1)]

# Choose the LLM that will drive the agent
# Only certain models support this
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
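
Before wiring everything together, you can optionally sanity-check the pieces in isolation. The snippet below is a minimal sketch, not part of the original walkthrough: it invokes the search tool directly and binds the tool to the model to confirm that the model emits tool calls. The query strings are just illustrative.

# Optional: call the search tool directly to verify your Tavily key works
print(tools[0].invoke({"query": "Great Barrier Reef conservation status"}))

# Optional: bind the tool to the model and confirm it produces tool calls
model_with_tools = model.bind_tools(tools)
response = model_with_tools.invoke("Search for the current weather in San Francisco")
print(response.tool_calls)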

To make our agent conversational, we can also specify a prompt. Here's an example:

prompt = (
    "You are a helpful assistant. "
    "You may not need to use tools for every query - the user may just want to chat!"
)

Great! Now let's assemble our agent using LangGraph's prebuilt create_react_agent, which allows you to create a tool-calling agent:

from langgraph.prebuilt import create_react_agent

# state_modifier allows you to preprocess the inputs to the model inside ReAct agent
# in this case, since we're passing a prompt string, we'll just always add a SystemMessage
# with this prompt string before any other messages sent to the model
agent = create_react_agent(model, tools, state_modifier=prompt)
API Reference:create_react_agent
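
If you're curious about what create_react_agent compiled (an agent node that calls the model and a tools node that executes tool calls, connected in a loop), you can inspect the compiled graph. This is an optional sketch; draw_mermaid returns the diagram as Mermaid text, which you can paste into any Mermaid renderer.

# Optional: inspect the compiled graph structure as Mermaid text
print(agent.get_graph().draw_mermaid())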

Run the agent

Now that we've set up our agent, let's try interacting with it! It can handle trivial queries that require no lookup:

from langchain_core.messages import HumanMessage

agent.invoke({"messages": [HumanMessage(content="I'm Nemo!")]})
API Reference:HumanMessage
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='39e715c7-bd1c-426f-8e14-c05586b3d221'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-6937c944-d702-40bb-9a9f-4141ddde9f78-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

Or, it can use the passed search tool to get up-to-date information if needed:

agent.invoke(
    {
        "messages": [
            HumanMessage(
                content="What is the current conservation status of the Great Barrier Reef?"
            )
        ],
    }
)
{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?', additional_kwargs={}, response_metadata={}, id='a74cc581-8ad5-4401-b3a5-f028d69e4b21'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'function': {'arguments': '{"query":"current conservation status of the Great Barrier Reef 2023"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 116, 'total_tokens': 144, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-267ff8a8-d866-4ae5-9534-ad87ebbdc954-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current conservation status of the Great Barrier Reef 2023'}, 'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'type': 'tool_call'}], usage_metadata={'input_tokens': 116, 'output_tokens': 28, 'total_tokens': 144}),
ToolMessage(content='[{"url": "https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24", "content": "This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as ‘2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR."}]', name='tavily_search_results_json', id='05b3fab7-9ac8-42bb-9612-ff2a896dbb67', tool_call_id='call_aKOItwvAb4DHQCwaasKphGHq', artifact={'query': 'current conservation status of the Great Barrier Reef 2023', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'title': 'Annual Summary Report of Coral Reef Condition 2023/24', 'url': 'https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24', 'content': 'This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as ‘2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR.', 'score': 0.95991266, 'raw_content': None}], 'response_time': 4.22}),
AIMessage(content='The current conservation status of the Great Barrier Reef (GBR) indicates ongoing challenges and fluctuations in coral health. According to a report from the Australian Institute of Marine Science (AIMS), the condition of coral reefs in the GBR has been monitored over the years, showing cycles of disturbance and recovery. \n\nAs of the latest surveys conducted between August 2023 and June 2024, hard coral cover on the GBR has experienced both decreases and increases. AIMS defines a hard coral cover of over 30% to 50% as high value, but it is relatively rare for GBR reefs to achieve 75% to 100% hard coral cover.\n\nFor more detailed information, you can refer to the [AIMS report](https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24).', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 174, 'prompt_tokens': 337, 'total_tokens': 511, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-bec32925-0dba-445d-8b55-87358ef482bb-0', usage_metadata={'input_tokens': 337, 'output_tokens': 174, 'total_tokens': 511})]}
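
The agent returns its full message state, including the intermediate tool call and the ToolMessage carrying the raw search results. If you only want the final answer, one simple approach (a minimal sketch, not specific to this guide) is to capture the result and read the last message:

result = agent.invoke(
    {
        "messages": [
            HumanMessage(
                content="What is the current conservation status of the Great Barrier Reef?"
            )
        ],
    }
)

# The last message in the returned state is the agent's final response
print(result["messages"][-1].content)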

Conversational responses

Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:

from langchain_core.messages import AIMessage, HumanMessage

agent.invoke(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    }
)
API Reference:AIMessage | HumanMessage
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='2c8e58bf-ad20-45a4-940b-84393c6b3a03'),
AIMessage(content='Hello Nemo! How can I assist you today?', additional_kwargs={}, response_metadata={}, id='5e014114-7e9d-42c3-b63e-a662b3a49bef'),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='d92be4e1-6497-4037-9a9a-83d3e7b760d5'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-17db96f8-8dbd-4f25-a80d-e4e872967641-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}
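
If you'd rather surface the agent's intermediate steps as they happen instead of waiting for the final state, compiled LangGraph agents also support streaming. Here's a minimal sketch, assuming the same agent as above; stream_mode="values" yields a full state snapshot after each step:

for step in agent.stream(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    },
    stream_mode="values",
):
    # Print the most recent message from each state snapshot
    step["messages"][-1].pretty_print()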

If preferred, you can also add memory to the LangGraph agent to manage the history of messages. Let's redeclare it this way:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
agent = create_react_agent(model, tools, state_modifier=prompt, checkpointer=memory)
API Reference:MemorySaver
agent.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

And then if we rerun our wrapped agent executor:

agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118}),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='53ac8d34-99bb-43a7-9103-80e26b7ee6cc'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-b3f224a5-902a-4973-84ff-9b683615b0e2-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}

This LangSmith trace shows what's going on under the hood.
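
Because the checkpointer keys saved state by thread_id, conversations on different threads stay isolated. As a quick sketch (using a second, hypothetical thread ID), the agent will not remember the name on a fresh thread:

# A new thread_id starts with an empty history, so the agent won't know the name
agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "2"}},
)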

Further reading

For more on how to build agents, check out these LangGraph guides:

For more on tool usage, you can also check out this use case section.

