Migrating from MapRerankDocumentsChain
MapRerankDocumentsChain implements a strategy for analyzing long texts. The strategy is as follows:
- Split a text into smaller documents;
- Map a process to the set of documents, where the process includes generating a score;
- Rank the results by score and return the maximum.
A common process in this scenario is question-answering using pieces of context from a document. Forcing the model to generate a score along with its answer helps to select for answers generated only by relevant context.
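To make the strategy concrete, here is a minimal, library-free sketch. The answer_and_score function is a hypothetical stand-in for an LLM call that answers the question from a single document and scores its own confidence:
def answer_and_score(document: str) -> dict:
    # Hypothetical stand-in: prompt an LLM with `document` as context and
    # parse the completion into {"answer": str, "score": int}.
    ...

def map_rerank(documents: list[str]) -> dict:
    # Map: run the process over each document independently.
    results = [answer_and_score(doc) for doc in documents]
    # Rerank: return the highest-scoring answer.
    return max(results, key=lambda r: r["score"])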
A LangGraph implementation allows tool calling and other features to be incorporated into this problem. Below we will go through both MapRerankDocumentsChain and a corresponding LangGraph implementation via a simple example.
Example
Let's go through an example where we analyze a set of documents. We will use the following three documents:
from langchain_core.documents import Document
documents = [
    Document(page_content="Alice has blue eyes", metadata={"title": "book_chapter_2"}),
    Document(page_content="Bob has brown eyes", metadata={"title": "book_chapter_1"}),
    Document(
        page_content="Charlie has green eyes", metadata={"title": "book_chapter_3"}
    ),
]
API Reference: Document
Legacy
Below we show an implementation with MapRerankDocumentsChain. We define the prompt template for a question-answering task and instantiate an LLMChain object for this purpose. We define how documents are formatted into the prompt and ensure consistency among the keys in the various prompts.
from langchain.chains import LLMChain, MapRerankDocumentsChain
from langchain.output_parsers.regex import RegexParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
# The actual prompt will need to be a lot more complex, this is just
# an example.
prompt_template = (
    "What color are Bob's eyes? "
    "Output both your answer and a score (1-10) of how confident "
    "you are in the format: <Answer>\nScore: <Score>.\n\n"
    "Provide no other commentary.\n\n"
    "Context: {context}"
)
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)
prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["context"],
    output_parser=output_parser,
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = MapRerankDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name=document_variable_name,
    rank_key="score",
    answer_key="answer",
)
response = chain.invoke(documents)
response["output_text"]
/langchain/libs/langchain/langchain/chains/llm.py:369: UserWarning: The apply_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
'Brown'
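As an aside, the RegexParser above simply splits the raw completion into the two output keys. Its behavior can be sanity-checked directly, without an LLM call (the expected result is shown as a comment):
# Parse a sample completion of the form "<Answer>\nScore: <Score>"
output_parser.parse("Brown\nScore: 10")
# -> {'answer': 'Brown', 'score': '10'}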
Inspecting the LangSmith trace for the above run, we can see three LLM calls, one per document, and that the scoring mechanism mitigated against hallucinations.
LangGraph
Below we show a LangGraph implementation of this process. Note that our template is simplified, as we delegate the formatting instructions to the chat model's tool-calling features via the .with_structured_output method.
Here we follow a basic map-reduce workflow to execute the LLM calls in parallel.
We will need to install langgraph:
pip install -qU langgraph
import operator
from typing import Annotated, List, TypedDict
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph
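# Structured-output schema for each per-document response. In the TypedDict
# convention accepted by `with_structured_output`, the Annotated metadata marks
# the field as required (`...`) and supplies a description that is passed to
# the model.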
class AnswerWithScore(TypedDict):
    answer: str
    score: Annotated[int, ..., "Score from 1-10."]
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt_template = "What color are Bob's eyes?\n\n" "Context: {context}"
prompt = ChatPromptTemplate.from_template(prompt_template)
# The below chain formats context from a document into a prompt, then
# generates a response structured according to the AnswerWithScore schema.
map_chain = prompt | llm.with_structured_output(AnswerWithScore)
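# For example, invoking the chain on a single document should return a dict
# matching the schema (hypothetical output shown):
# map_chain.invoke("Bob has brown eyes")
# -> {"answer": "Bob has brown eyes.", "score": 10}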
# Below we define the components that will make up the graph
# This will be the overall state of the graph.
# It will contain the input document contents, corresponding
# answers with scores, and a final answer.
class State(TypedDict):
    contents: List[str]
    answers_with_scores: Annotated[list, operator.add]
    answer: str
# This will be the state of the node that we will "map" all
# documents to in order to generate answers with scores
class MapState(TypedDict):
    content: str
# Here we define the logic to map out over the documents
# We will use this as an edge in the graph
def map_analyses(state: State):
    # We will return a list of `Send` objects
    # Each `Send` object consists of the name of a node in the graph
    # as well as the state to send to that node
    return [
        Send("generate_analysis", {"content": content}) for content in state["contents"]
    ]
# Here we generate an answer with score, given a document
async def generate_analysis(state: MapState):
    response = await map_chain.ainvoke(state["content"])
    return {"answers_with_scores": [response]}
# Here we will select the top answer
def pick_top_ranked(state: State):
    ranked_answers = sorted(
        state["answers_with_scores"], key=lambda x: -int(x["score"])
    )
    return {"answer": ranked_answers[0]}
# Construct the graph: here we put everything together to construct our graph
graph = StateGraph(State)
graph.add_node("generate_analysis", generate_analysis)
graph.add_node("pick_top_ranked", pick_top_ranked)
graph.add_conditional_edges(START, map_analyses, ["generate_analysis"])
graph.add_edge("generate_analysis", "pick_top_ranked")
graph.add_edge("pick_top_ranked", END)
app = graph.compile()
from IPython.display import Image
Image(app.get_graph().draw_mermaid_png())
result = await app.ainvoke({"contents": [doc.page_content for doc in documents]})
result["answer"]
{'answer': 'Bob has brown eyes.', 'score': 10}
Inspecting the LangSmith trace for the above run, we again see three LLM calls, as before. Using the model's tool-calling features also enabled us to remove the parsing step.
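Because the mapped calls run as independent node invocations, we can also watch each scored answer arrive as it completes. A minimal sketch using the compiled graph's astream method with per-node updates:
# Stream state updates from each node as the mapped calls finish.
async for step in app.astream(
    {"contents": [doc.page_content for doc in documents]},
    stream_mode="updates",
):
    print(step)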
Next steps
Check out these how-to guides for more on question-answering tasks with RAG.
See the LangGraph documentation for detail on building with LangGraph, including this guide on the details of map-reduce in LangGraph.