Chain of Abstraction LlamaPack¶
The chain-of-abstraction (CoA) LlamaPack implements a generalized version of the strategy described in the original CoA paper.
By prompting the LLM to write function calls in a chain-of-thought style, we can execute both simple and complex combinations of function calls needed to accomplish a task.
The LLM is prompted to write a response containing function calls. For example, a CoA plan might look like this:
After buying the apples, Sally has [FUNC add(3, 2) = y1] apples.
Then, the wizard casts a spell to multiply the number of apples by 3, resulting in [FUNC multiply(y1, 3) = y2] apples.
The function calls can then be parsed into a dependency graph and executed.
The values in the chain of abstraction are then replaced with their actual results.
As an extension of the original paper, we also run the LLM one final time to rewrite the response in a more readable and user-friendly way.
NOTE: In the original paper, the authors fine-tuned an LLM specifically for this, and also fine-tuned for specific functions and datasets. As such, only capable LLMs (OpenAI, Anthropic, etc.) will (hopefully) work reliably for this without fine-tuning.
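To make the execute-and-substitute step concrete, here is a minimal sketch of the idea, not the pack's actual implementation. The `run_coa_plan` helper and its regex are hypothetical simplifications: it runs calls in order of appearance, whereas the real pack resolves a full dependency graph.

```python
import re

FUNC_PATTERN = re.compile(r"\[FUNC (\w+)\(([^)]*)\) = (\w+)\]")

def run_coa_plan(plan: str, functions: dict) -> str:
    """Execute the [FUNC ...] calls in a plan and substitute their results."""
    results = {}
    for name, args, output in FUNC_PATTERN.findall(plan):
        # Each argument is either an earlier placeholder (y1, y2, ...) or an int literal
        resolved = [
            results[a.strip()] if a.strip() in results else int(a.strip())
            for a in args.split(",")
        ]
        results[output] = functions[name](*resolved)
    # Replace each [FUNC ...] marker with its computed value
    return FUNC_PATTERN.sub(lambda m: str(results[m.group(3)]), plan)

plan = (
    "After buying the apples, Sally has [FUNC add(3, 2) = y1] apples. "
    "The wizard then multiplies them, giving [FUNC multiply(y1, 3) = y2] apples."
)
funcs = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}
print(run_coa_plan(plan, funcs))
# -> After buying the apples, Sally has 5 apples. The wizard then multiplies them, giving 15 apples.
```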
Setup¶
First, let's install the pack, along with some extra dependencies.
In [ ]:
%pip install llama-index-core llama-index-llms-openai llama-index-embeddings-openai
%pip install llama-index-agent-coa llama-parse
In [ ]:
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
import nest_asyncio
nest_asyncio.apply()
In [ ]:
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
Settings.embed_model = OpenAIEmbedding(
model="text-embedding-3-small", embed_batch_size=256
)
Settings.llm = OpenAI(model="gpt-4-turbo", temperature=0.1)
In [ ]:
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/lyft_2021.pdf' -O 'data/10k/lyft_2021.pdf'
In [ ]:
from llama_index.core import StorageContext, load_index_from_storage

try:
    storage_context = StorageContext.from_defaults(
        persist_dir="./storage/lyft"
    )
    lyft_index = load_index_from_storage(storage_context)

    storage_context = StorageContext.from_defaults(
        persist_dir="./storage/uber"
    )
    uber_index = load_index_from_storage(storage_context)

    index_loaded = True
except:
    index_loaded = False
In [ ]:
from llama_parse import LlamaParse
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# (Optional) -- use LlamaParse to load the PDF documents
file_extractor = {
    ".pdf": LlamaParse(
        result_type="markdown",
        api_key="llx-...",
    )
}

if not index_loaded:
    # load data
    lyft_docs = SimpleDirectoryReader(
        input_files=["./data/10k/lyft_2021.pdf"],
        file_extractor=file_extractor,
    ).load_data()
    uber_docs = SimpleDirectoryReader(
        input_files=["./data/10k/uber_2021.pdf"],
        file_extractor=file_extractor,
    ).load_data()

    # build index
    lyft_index = VectorStoreIndex.from_documents(lyft_docs)
    uber_index = VectorStoreIndex.from_documents(uber_docs)

    # persist index
    lyft_index.storage_context.persist(persist_dir="./storage/lyft")
    uber_index.storage_context.persist(persist_dir="./storage/uber")
In [ ]:
from llama_index.core.tools import QueryEngineTool

lyft_engine = lyft_index.as_query_engine(similarity_top_k=2)
uber_engine = uber_index.as_query_engine(similarity_top_k=2)

query_engine_tools = [
    QueryEngineTool.from_defaults(
        query_engine=lyft_engine,
        name="lyft_10k",
        description=(
            "Provides information about Lyft financials for year 2021. "
            "Use a detailed plain text question as input to the tool."
        ),
    ),
    QueryEngineTool.from_defaults(
        query_engine=uber_engine,
        name="uber_10k",
        description=(
            "Provides information about Uber financials for year 2021. "
            "Use a detailed plain text question as input to the tool."
        ),
    ),
]
Running the CoAAgentPack¶
With our tools ready, we can now run the agent pack!
In [ ]:
%pip install llama-index-packs-agents-coa
In [ ]:
# requires llama-index-packs-agents-coa
from llama_index.packs.agent.coa import CoAAgentPack

pack = CoAAgentPack(tools=query_engine_tools, llm=Settings.llm)
In [ ]:
response = pack.run("How did Ubers revenue growth compare to Lyfts in 2021?")
==== Available Parsed Functions ====
def lyft_10k(input: string):
    """Provides information about Lyft financials for year 2021. Use a detailed plain text question as input to the tool."""
    ...

def uber_10k(input: string):
    """Provides information about Uber financials for year 2021. Use a detailed plain text question as input to the tool."""
    ...

==== Generated Chain of Abstraction ====
To compare Uber's revenue growth to Lyft's in 2021, we need to obtain the revenue growth figures for both companies for that year.

1. Retrieve Uber's revenue growth for 2021 by querying the Uber financial tool with a specific question about revenue growth:
   - [FUNC uber_10k("What was Uber's revenue growth in 2021?") = y1]

2. Retrieve Lyft's revenue growth for 2021 by querying the Lyft financial tool with a similar question about revenue growth:
   - [FUNC lyft_10k("What was Lyft's revenue growth in 2021?") = y2]

3. Compare the revenue growth figures obtained (y1 and y2) to determine which company had higher growth in 2021. This comparison will be done by the reader after the function calls have been executed.

==== Executing uber_10k with inputs ["What was Uber's revenue growth in 2021?"] ====
==== Executing lyft_10k with inputs ["What was Lyft's revenue growth in 2021?"] ====
In [ ]:
print(str(response))
In 2021, Uber's revenue growth was higher than Lyft's. Uber's revenue grew by 57% compared to 2020, while Lyft's revenue increased by 36% compared to the prior year.
Let's review the logs to see what just happened:

- The tools were parsed into python-like definitions
- The agent was prompted to generate a CoA plan
- The function calls were parsed out of the plan and executed
- The values in the plan were filled in
- The agent generated a final response
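The first of those steps, parsing tools into python-like definitions, amounts to rendering each tool's name and description as a function stub. A rough sketch of the idea (the `tool_to_stub` helper is hypothetical; the pack has its own parser):

```python
def tool_to_stub(name: str, description: str) -> str:
    """Render a tool as a python-like stub, similar to the logs above."""
    return f'def {name}(input: string):\n    """{description}"""\n    ...'

print(tool_to_stub(
    "lyft_10k",
    "Provides information about Lyft financials for year 2021. "
    "Use a detailed plain text question as input to the tool.",
))
```

The resulting stubs are what the LLM sees as its "available functions" when writing a plan.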
[Advanced] -- Using the CoAAgentWorker¶
By installing the CoAAgentPack, you also get access to the underlying agent worker. With this, you can set up the agent manually, as well as customize the prompts and output parsing.
In [ ]:
from llama_index.agent.coa import CoAAgentWorker
worker = CoAAgentWorker.from_tools(
tools=query_engine_tools,
llm=Settings.llm,
verbose=True,
)
agent = worker.as_agent()
In [ ]:
agent.chat("How did Ubers revenue growth compare to Lyfts in 2021?")
==== Available Parsed Functions ====
def lyft_10k(input: string):
    """Provides information about Lyft financials for year 2021. Use a detailed plain text question as input to the tool."""
    ...

def uber_10k(input: string):
    """Provides information about Uber financials for year 2021. Use a detailed plain text question as input to the tool."""
    ...

==== Generated Chain of Abstraction ====
To compare Uber's revenue growth to Lyft's in 2021, we need to obtain the revenue growth figures for both companies for that year.

1. Retrieve Uber's revenue growth for 2021 by querying the Uber financial tool with a specific question about revenue growth. This can be done using the function call: [FUNC uber_10k("What was Uber's revenue growth in 2021?") = y1].

2. Similarly, retrieve Lyft's revenue growth for 2021 by querying the Lyft financial tool with a specific question about revenue growth. This can be done using the function call: [FUNC lyft_10k("What was Lyft's revenue growth in 2021?") = y2].

3. Once both y1 and y2 are obtained, compare the values to determine which company had higher revenue growth in 2021. This comparison does not require a function call but involves a direct comparison of y1 and y2 to see which is greater.

==== Executing uber_10k with inputs ["What was Uber's revenue growth in 2021?"] ====
==== Executing lyft_10k with inputs ["What was Lyft's revenue growth in 2021?"] ====
Out[ ]:
AgentChatResponse(response="In 2021, Uber's revenue growth was reported as 57%. To compare this with Lyft's revenue growth, we calculate the percentage increase for Lyft based on the provided figures: Lyft's revenue in 2021 was $3,208,323,000 compared to $2,364,681,000 in 2020. The growth in revenue for Lyft can be calculated as:\n\n\\[ \\text{Growth Percentage} = \\left( \\frac{\\text{Revenue in 2021} - \\text{Revenue in 2020}}{\\text{Revenue in 2020}} \\right) \\times 100 \\]\n\\[ \\text{Growth Percentage} = \\left( \\frac{3,208,323,000 - 2,364,681,000}{2,364,681,000} \\right) \\times 100 \\approx 35.7\\% \\]\n\nThus, comparing the two, Uber's revenue growth of 57% was higher than Lyft's growth of approximately 35.7% in 2021.", sources=[], source_nodes=[], is_dummy_stream=False)
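The Lyft growth figure in that response can be sanity-checked with the formula it quotes:

```python
# Lyft revenue figures quoted in the response above
lyft_2021 = 3_208_323_000
lyft_2020 = 2_364_681_000

# Growth % = (revenue_2021 - revenue_2020) / revenue_2020 * 100
growth = (lyft_2021 - lyft_2020) / lyft_2020 * 100
print(round(growth, 1))  # -> 35.7
```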
Under the hood, this works by using a reasoning prompt similar to the following:
REASONING_PROMPT_TEMPLATE = """Generate an abstract plan of reasoning using placeholders for the specific values and function calls needed.
The placeholders should be labeled y1, y2, etc.
Function calls should be represented as inline strings, like [FUNC {{function_name}}({{input1}}, {{input2}}, ...) = {{output_placeholder}}].
Assume someone will read the plan after the functions have been executed in order to make a final response.
Not every question will require function calls to answer.
If you do invoke a function, only use the available functions, do not make up functions.

Example:
-----------
Available functions:
\`\`\`python
def add(a: int, b: int) -> int:
    \"\"\"Add two numbers together.\"\"\"
    ...

def multiply(a: int, b: int) -> int:
    \"\"\"Multiply two numbers together.\"\"\"
    ...
\`\`\`

Question:
Sally has 3 apples and buys 2 more. Then magically, a wizard casts a spell that multiplies the number of apples by 3. How many apples does Sally have now?

Abstract plan of reasoning:
After buying the apples, Sally has [FUNC add(3, 2) = y1] apples. Then, the wizard casts a spell to multiply the number of apples by 3, resulting in [FUNC multiply(y1, 3) = y2] apples.

Your Turn:
-----------
Available functions:
\`\`\`python
{functions}
\`\`\`

Question:
{question}

Abstract plan of reasoning:
"""
This generates a chain of abstract reasoning.
The function calls in that reasoning are then parsed out with an output parser.
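One thing the parsing step needs to recover is an execution order that respects placeholder dependencies. A minimal sketch using the standard library (the `placeholder_order` helper is hypothetical, not the pack's parser):

```python
import re
from graphlib import TopologicalSorter

def placeholder_order(plan: str) -> list:
    """Return output placeholders in a dependency-respecting execution order."""
    calls = re.findall(r"\[FUNC \w+\(([^)]*)\) = (\w+)\]", plan)
    # Map each output placeholder to the placeholders its arguments reference
    graph = {
        out: {a.strip() for a in args.split(",") if a.strip().startswith("y")}
        for args, out in calls
    }
    return list(TopologicalSorter(graph).static_order())

# y3 depends on y2, even though it appears first in the text
order = placeholder_order("use [FUNC f(y2, 1) = y3] after [FUNC g(3, 2) = y2]")
print(order)  # -> ['y2', 'y3']
```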
After the functions are called and the values filled in, we give the LLM a chance to refine the response, using the following prompt:
REFINE_REASONING_PROMPT_TEMPLATE = """Generate a response to a question by using a previous abstract plan of reasoning. Use the previous reasoning as context to write a response to the question.

Example:
-----------
Question:
Sally has 3 apples and buys 2 more. Then magically, a wizard casts a spell that multiplies the number of apples by 3. How many apples does Sally have now?

Previous reasoning:
After buying the apples, Sally has [FUNC add(3, 2) = 5] apples. Then, the wizard casts a spell to multiply the number of apples by 3, resulting in [FUNC multiply(5, 3) = 15] apples.

Response:
After the wizard casts the spell, Sally has 15 apples.

Your Turn:
-----------
Question:
{question}

Previous reasoning:
{prev_reasoning}

Response:
"""