Benchmarking LLM Evaluators On A Mini MT-Bench (Single Grading) LabelledEvaluatorDataset
In this notebook, we benchmark three different evaluators that judge another LLM's response to a user query. More specifically, we run the benchmark on a mini version of the MT-Bench single-grading dataset. In this version, we only consider the answers to the 160 questions (i.e., 80 x 2, since there are 80 two-turn dialogues) provided by llama2-70b. The reference evaluations used for this benchmark are provided by GPT-4. Our benchmark of the three evaluators therefore assesses their closeness to GPT-4 (which, in GPT-4's own case, amounts to self-consistency). The three evaluators are:
- GPT-3.5(OpenAI)
- GPT-4(OpenAI)
- Gemini-Pro(Google)
%pip install llama-index-llms-openai
%pip install llama-index-llms-cohere
%pip install llama-index-llms-gemini
import nest_asyncio
nest_asyncio.apply()
!pip install "google-generativeai" -q
Load The Evaluator Dataset

Let's load the llama-dataset from llama-hub.
from llama_index.core.llama_dataset import download_llama_dataset

# download dataset
evaluator_dataset, _ = download_llama_dataset(
    "MiniMtBenchSingleGradingDataset", "./mini_mt_bench_data"
)
evaluator_dataset.to_pandas()[:5]
|  | query | answer | contexts | ground_truth_answer | query_by | answer_by | ground_truth_answer_by | reference_feedback | reference_score | reference_evaluation_by |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Compose an engaging travel blog post about a r... | Title: Discovering the Aloha Spirit: A Recent ... | None | None | None | ai (llama2-70b) | None | The generated answer is highly relevant to the... | 5.0 | ai (gpt-4) |
| 1 | Rewrite your previous response. Start every se... | Absolutely, here's the revised response:\n\nAl... | None | None | None | ai (llama2-70b) | None | The generated answer is highly relevant and co... | 5.0 | ai (gpt-4) |
| 2 | Draft a professional email seeking your superv... | Subject: Seeking Feedback on Quarterly Financi... | None | None | None | ai (llama2-70b) | None | The generated answer is highly relevant to the... | 5.0 | ai (gpt-4) |
| 3 | Take a moment to evaluate and critique your ow... | My response was:\n\n"Subject: Seeking Feedback... | None | None | None | ai (llama2-70b) | None | The generated answer is highly relevant to the... | 5.0 | ai (gpt-4) |
| 4 | Imagine you are writing a blog post comparing ... | Sure, here's an outline for a blog post compar... | None | None | None | ai (llama2-70b) | None | The generated answer is highly relevant to the... | 5.0 | ai (gpt-4) |
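Beyond the dataframe preview, individual records can be inspected directly. A quick sketch, assuming the dataset exposes its records through an examples attribute whose fields mirror the columns above:

# Inspect the first labelled example: the user query, the llama2-70b answer
# to be judged, and the GPT-4 reference feedback/score.
example = evaluator_dataset.examples[0]

print("query:", example.query[:100])
print("answer:", example.answer[:100])
print("reference_score:", example.reference_score)
print("reference_feedback:", example.reference_feedback[:100])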
Define Our Evaluators

We define three evaluators, each a CorrectnessEvaluator backed by a different judge LLM: GPT-4, GPT-3.5, and Gemini-Pro. Each judge will be benchmarked on how closely its grades agree with the GPT-4 reference grades in the dataset.
from llama_index.core.evaluation import CorrectnessEvaluator
from llama_index.llms.openai import OpenAI
from llama_index.llms.gemini import Gemini
from llama_index.llms.cohere import Cohere
llm_gpt4 = OpenAI(temperature=0, model="gpt-4")
llm_gpt35 = OpenAI(temperature=0, model="gpt-3.5-turbo")
llm_gemini = Gemini(model="models/gemini-pro", temperature=0)
evaluators = {
"gpt-4": CorrectnessEvaluator(llm=llm_gpt4),
"gpt-3.5": CorrectnessEvaluator(llm=llm_gpt35),
"gemini-pro": CorrectnessEvaluator(llm=llm_gemini),
}
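Before launching the full benchmark, it can help to sanity-check a single evaluator on one example. A minimal sketch, assuming the dataset exposes an examples list with query, answer, and reference_score fields (matching the columns above), and that CorrectnessEvaluator.evaluate tolerates the missing reference answer here:

# Grade one llama2-70b answer with the GPT-4 judge and compare the result
# against the GPT-4 reference score stored in the dataset.
example = evaluator_dataset.examples[0]

eval_result = evaluators["gpt-4"].evaluate(
    query=example.query,
    response=example.answer,
)

print("Judge score:", eval_result.score)
print("Reference score (GPT-4):", example.reference_score)
print("Judge feedback:", eval_result.feedback)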
Benchmark With The EvaluatorBenchmarkerPack (llama-pack)

When using the EvaluatorBenchmarkerPack with a LabelledEvaluatorDataset, the returned benchmark results contain values for the following quantities:
- number_examples: The number of examples the dataset contains.
- invalid_predictions: The number of evaluations that could not yield a final evaluation (e.g., because the evaluation output could not be parsed, or the LLM evaluator raised an exception).
- correlation: The correlation between the scores of the provided evaluator and those of the reference evaluator (in this case, gpt-4).
- mae: The mean absolute error between the scores of the provided evaluator and those of the reference evaluator.
- hamming: The hamming distance between the scores of the provided evaluator and those of the reference evaluator.
NOTE: correlation, mae, and hamming are all computed over the valid predictions only (invalid ones are excluded). In other words, these metrics are conditional ones, conditioned on the prediction being valid.
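To make the metric definitions concrete, here is a rough numpy sketch of how they could be reproduced from aligned score arrays (hypothetical values; this is not the pack's internal implementation). Note that, judging from the results tables below (GPT-4 agrees with itself with hamming = 143 of 160, alongside near-perfect correlation and tiny MAE), the hamming column appears to count exact score matches rather than mismatches, and the sketch mirrors that interpretation.

import numpy as np

# Hypothetical judge scores for the valid predictions, aligned with the
# GPT-4 reference scores from the dataset.
pred_scores = np.array([5.0, 4.0, 3.0, 5.0, 2.0])
ref_scores = np.array([5.0, 5.0, 3.0, 4.0, 2.0])

correlation = np.corrcoef(pred_scores, ref_scores)[0, 1]  # Pearson correlation
mae = np.abs(pred_scores - ref_scores).mean()             # mean absolute error
matches = int((pred_scores == ref_scores).sum())          # exact-match count ("hamming" in the tables below)

print(f"correlation={correlation:.3f}, mae={mae:.3f}, matches={matches}")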
from llama_index.core.llama_pack import download_llama_pack
EvaluatorBenchmarkerPack = download_llama_pack(
"EvaluatorBenchmarkerPack", "./pack"
)
GPT-3.5
evaluator_benchmarker = EvaluatorBenchmarkerPack(
evaluator=evaluators["gpt-3.5"],
eval_dataset=evaluator_dataset,
show_progress=True,
)
gpt_3p5_benchmark_df = await evaluator_benchmarker.arun(
batch_size=100, sleep_time_in_seconds=0
)
/Users/nerdai/Projects/llama_index/docs/examples/evaluation/pack/base.py:142: UserWarning: You've set a large batch_size (>10). If using OpenAI GPT-4 as `judge_llm` (which is the default judge_llm), you may experience a RateLimitError. Previous successful eval responses are cached per batch. So hitting a RateLimitError would mean you'd lose all of the current batches successful GPT-4 calls.
  warnings.warn(
Batch processing of predictions: 100%|████████████████████| 100/100 [00:05<00:00, 18.88it/s]
Batch processing of predictions: 100%|██████████████████████| 60/60 [00:04<00:00, 12.26it/s]
gpt_3p5_benchmark_df.index = ["gpt-3.5"]
gpt_3p5_benchmark_df
|  | number_examples | invalid_predictions | correlation | mae | hamming |
|---|---|---|---|---|---|
| gpt-3.5 | 160 | 0 | 0.317047 | 1.11875 | 27 |
GPT-4
evaluator_benchmarker = EvaluatorBenchmarkerPack(
evaluator=evaluators["gpt-4"],
eval_dataset=evaluator_dataset,
show_progress=True,
)
gpt_4_benchmark_df = await evaluator_benchmarker.arun(
batch_size=100, sleep_time_in_seconds=0
)
/Users/nerdai/Projects/llama_index/docs/examples/evaluation/pack/base.py:142: UserWarning: You've set a large batch_size (>10). If using OpenAI GPT-4 as `judge_llm` (which is the default judge_llm), you may experience a RateLimitError. Previous successful eval responses are cached per batch. So hitting a RateLimitError would mean you'd lose all of the current batches successful GPT-4 calls.
  warnings.warn(
Batch processing of predictions: 100%|████████████████████| 100/100 [00:13<00:00, 7.26it/s]
Batch processing of predictions: 100%|██████████████████████| 60/60 [00:10<00:00, 5.92it/s]
gpt_4_benchmark_df.index = ["gpt-4"]
gpt_4_benchmark_df
|  | number_examples | invalid_predictions | correlation | mae | hamming |
|---|---|---|---|---|---|
| gpt-4 | 160 | 0 | 0.966126 | 0.09375 | 143 |
Gemini Pro
evaluator_benchmarker = EvaluatorBenchmarkerPack(
evaluator=evaluators["gemini-pro"],
eval_dataset=evaluator_dataset,
show_progress=True,
)
gemini_pro_benchmark_df = await evaluator_benchmarker.arun(
batch_size=5, sleep_time_in_seconds=0.5
)
gemini_pro_benchmark_df.index = ["gemini-pro"]
gemini_pro_benchmark_df
|  | number_examples | invalid_predictions | correlation | mae | hamming |
|---|---|---|---|---|---|
| gemini-pro | 160 | 1 | 0.295121 | 1.220126 | 12 |
evaluator_benchmarker.prediction_dataset.save_json(
"mt_sg_gemini_predictions.json"
)
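The saved file can later be inspected without re-running the benchmark. A minimal sketch using only the standard library (assuming the file is plain JSON; the exact schema of the prediction dataset is not shown here):

import json

# Load the persisted Gemini-Pro predictions for offline inspection.
with open("mt_sg_gemini_predictions.json", "r") as f:
    gemini_predictions = json.load(f)

# Print the top-level structure to understand how predictions are stored.
print(type(gemini_predictions), list(gemini_predictions)[:5])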
In Summary

Putting all of the baselines together.
import pandas as pd
final_benchmark = pd.concat(
[
gpt_3p5_benchmark_df,
gpt_4_benchmark_df,
gemini_pro_benchmark_df,
],
axis=0,
)
final_benchmark
|  | number_examples | invalid_predictions | correlation | mae | hamming |
|---|---|---|---|---|---|
| gpt-3.5 | 160 | 0 | 0.317047 | 1.118750 | 27 |
| gpt-4 | 160 | 0 | 0.966126 | 0.093750 | 143 |
| gemini-pro | 160 | 1 | 0.295121 | 1.220126 | 12 |
From the results above, we make the following observations:
- GPT-3.5 and Gemini-Pro show similar results, with perhaps a slight edge to GPT-3.5 in terms of closeness to GPT-4.
- Neither of them, however, appears to be very close to GPT-4.
- GPT-4 appears to be highly consistent with itself in this benchmark.