Function Calling Anthropic Agent¶
This notebook shows you how to use our Anthropic agent, powered by function calling capabilities.
NOTE: Only claude-3* models support function calling via Anthropic's API.
Initial Setup¶
Let's start by importing some simple building blocks.
The main things we need are:
- the Anthropic API (using our own llama_index LLM class)
- a place to hold conversation history
- definitions for tools that our agent can use.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-llms-anthropic
%pip install llama-index-embeddings-openai
In [ ]:
!pip install llama-index
In [ ]:
from llama_index.llms.anthropic import Anthropic
from llama_index.core.tools import FunctionTool
import nest_asyncio
nest_asyncio.apply()
Let's define some very simple calculator tools for our agent.
In [ ]:
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer."""
    return a * b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
In [ ]:
def add(a: int, b: int) -> int:
    """Add two integers and return the resulting integer."""
    return a + b


add_tool = FunctionTool.from_defaults(fn=add)
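FunctionTool.from_defaults infers the tool's name, description, and parameter schema from the Python function itself, which is why the type annotations and docstrings above matter. A minimal standard-library sketch of that kind of inference (not the library's actual implementation):

```python
import inspect


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer."""
    return a * b


# derive a tool name, parameter schema, and description from the function
sig = inspect.signature(multiply)
params = {name: p.annotation.__name__ for name, p in sig.parameters.items()}
print(multiply.__name__, params)  # multiply {'a': 'int', 'b': 'int'}
print(inspect.getdoc(multiply))
```

This is why untyped or undocumented functions make poor tools: the LLM only sees what can be extracted from the signature and docstring.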
Make sure your ANTHROPIC_API_KEY is set. Otherwise, explicitly specify the api_key parameter.
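A common way to provide the key is through the environment; a minimal sketch (the value below is a placeholder, not a real key):

```python
import os

# set a placeholder key for this session if none is configured already;
# Anthropic() reads ANTHROPIC_API_KEY when api_key is not passed explicitly
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-...")
print("ANTHROPIC_API_KEY" in os.environ)  # True
```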
In [ ]:
llm = Anthropic(model="claude-3-opus-20240229", api_key="sk-ant-...")
Initialize Anthropic Agent¶
Here we initialize a simple Anthropic agent with the calculator functions.
In [ ]:
from llama_index.core.agent import FunctionCallingAgentWorker
agent_worker = FunctionCallingAgentWorker.from_tools(
[multiply_tool, add_tool],
llm=llm,
verbose=True,
allow_parallel_tool_calls=False,
)
agent = agent_worker.as_agent()
Now let's run a simple chat interaction.
In [ ]:
response = agent.chat("What is (121 + 2) * 5?")
print(str(response))
Added user message to memory: What is (121 + 2) * 5?
=== Calling Function ===
Calling function: add with args: {"a": 121, "b": 2}
=== Calling Function ===
Calling function: multiply with args: {"a": 123, "b": 5}
assistant: Therefore, (121 + 2) * 5 = 615
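The trace above shows the agent chaining two tool calls: the output of add (123) becomes an argument to the subsequent multiply call. A toy sketch of that sequential dispatch (not the library's internals):

```python
# toy dispatch table standing in for the two FunctionTools above
tools = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}

# the tool calls the agent issued, in order, with the arguments it chose
calls = [("add", {"a": 121, "b": 2}), ("multiply", {"a": 123, "b": 5})]

result = None
for name, kwargs in calls:
    # each call runs to completion before the LLM decides on the next one
    result = tools[name](**kwargs)
print(result)  # 615
```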
In [ ]:
# inspect sources
print(response.sources)
[ToolOutput(content='123', tool_name='add', raw_input={'args': (), 'kwargs': {'a': 121, 'b': 2}}, raw_output=123), ToolOutput(content='615', tool_name='multiply', raw_input={'args': (), 'kwargs': {'a': 123, 'b': 5}}, raw_output=615)]
Async Chat¶
Also, let's re-enable parallel function calling so that we can call two multiply operations simultaneously.
In [ ]:
# enable parallel function calling
agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool],
    llm=llm,
    verbose=True,
    allow_parallel_tool_calls=True,
)
agent = agent_worker.as_agent()

response = await agent.achat("What is (121 * 3) + (5 * 8)?")
print(str(response))
Added user message to memory: What is (121 * 3) + (5 * 8)?
=== Calling Function ===
Calling function: multiply with args: {"a": 121, "b": 3}
=== Calling Function ===
Calling function: multiply with args: {"a": 5, "b": 8}
=== Calling Function ===
Calling function: add with args: {"a": 363, "b": 40}
assistant: Therefore, the result of (121 * 3) + (5 * 8) is 403.
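Here the two multiply calls are independent of each other, so with allow_parallel_tool_calls=True they can be issued in the same step; only the final add depends on both results. A toy asyncio illustration of why that helps (again, not the library's implementation):

```python
import asyncio


async def multiply(a: int, b: int) -> int:
    return a * b


async def main() -> int:
    # the two multiply calls are independent, so they can run concurrently
    left, right = await asyncio.gather(multiply(121, 3), multiply(5, 8))
    # the add step must wait for both results
    return left + right


print(asyncio.run(main()))  # 403
```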
Anthropic Agent over RAG Pipeline¶
Build an Anthropic agent over a simple 10K document. We use OpenAI embeddings and claude-3-haiku-20240307 to construct the RAG pipeline, and pass it to the Anthropic Opus agent as a tool.
In [ ]:
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
--2024-04-04 18:12:42--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1880483 (1.8M) [application/octet-stream]
Saving to: ‘data/10k/uber_2021.pdf’

data/10k/uber_2021. 100%[===================>]   1.79M  6.09MB/s    in 0.3s

2024-04-04 18:12:43 (6.09 MB/s) - ‘data/10k/uber_2021.pdf’ saved [1880483/1880483]
In [ ]:
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.anthropic import Anthropic

embed_model = OpenAIEmbedding(api_key="sk-...")
query_llm = Anthropic(model="claude-3-haiku-20240307", api_key="sk-ant-...")

# load data
uber_docs = SimpleDirectoryReader(
    input_files=["./data/10k/uber_2021.pdf"]
).load_data()

# build index
uber_index = VectorStoreIndex.from_documents(
    uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=query_llm)
query_engine_tool = QueryEngineTool(
    query_engine=uber_engine,
    metadata=ToolMetadata(
        name="uber_10k",
        description=(
            "Provides information about Uber financials for year 2021. "
            "Use a detailed plain text question as input to the tool."
        ),
    ),
)
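The similarity_top_k=3 setting means the query engine retrieves the three document chunks whose embeddings are most similar to the query embedding before the LLM synthesizes an answer. A toy sketch of that ranking with made-up 2-D vectors (real OpenAI embeddings have ~1536 dimensions):

```python
import math


def cosine(u, v):
    # cosine similarity: dot product over the product of vector norms
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


# made-up 2-D "embeddings" for three document chunks
chunks = {"revenue": [1.0, 0.1], "drivers": [0.2, 1.0], "risks": [0.9, 0.3]}
query = [1.0, 0.2]

# rank chunks by similarity to the query and keep the top 3
top_k = sorted(chunks, key=lambda c: cosine(query, chunks[c]), reverse=True)[:3]
print(top_k)  # ['revenue', 'risks', 'drivers']
```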
In [ ]:
from llama_index.core.agent import FunctionCallingAgentWorker
agent_worker = FunctionCallingAgentWorker.from_tools(
[query_engine_tool], llm=llm, verbose=True
)
agent = agent_worker.as_agent()
In [ ]:
response = agent.chat("Tell me both the risk factors and tailwinds for Uber?")
print(str(response))
Added user message to memory: Tell me both the risk factors and tailwinds for Uber?
=== Calling Function ===
Calling function: uber_10k with args: {"input": "What were some of the key risk factors and tailwinds mentioned for Uber's business in 2021?"}
assistant: In summary, some of the key risk factors Uber faced in 2021 included regulatory challenges, IP protection, staying competitive with new technologies, seasonality and forecasting challenges due to COVID-19, and risks of international expansion. However, Uber also benefited from tailwinds like accelerated growth in food delivery due to the pandemic and adapting well to new remote work arrangements.