Fine-tuning Nous-Hermes-2 with Gradient and LlamaIndex¶
In [ ]:
%pip install llama-index-llms-gradient
%pip install llama-index-finetuning
In [ ]:
!pip install llama-index gradientai -q
In [ ]:
import os
from llama_index.llms.gradient import GradientBaseModelLLM
from llama_index.finetuning import GradientFinetuneEngine
In [ ]:
os.environ["GRADIENT_ACCESS_TOKEN"] = ""
os.environ["GRADIENT_WORKSPACE_ID"] = ""
In [ ]:
questions = [
    "Where do foo-bears live?",
    "What do foo-bears look like?",
    "What do foo-bears eat?",
]

prompts = list(
    f"<s> ### Instruction:\n{q}\n\n### Response:\n" for q in questions
)
In [ ]:
base_model_slug = "nous-hermes2"
base_model_llm = GradientBaseModelLLM(
    base_model_slug=base_model_slug, max_tokens=100
)
In [ ]:
base_model_responses = list(base_model_llm.complete(p).text for p in prompts)
In [ ]:
finetune_engine = GradientFinetuneEngine(
    base_model_slug=base_model_slug,
    name="my test finetune engine model adapter",
    data_path="data.jsonl",
)
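The engine reads its training examples from data.jsonl, which this notebook does not show. The sketch below is one way such a file could be produced; the example instruction/response pairs and the "inputs" field name are assumptions, so check Gradient's documentation for the exact sample schema it expects:

import json

# Hypothetical instruction/response pairs matching the questions above
# (illustrative only, not part of the original notebook).
samples = [
    {
        "instruction": "Where do foo-bears live?",
        "response": "Foo-bears live in the deepest, darkest part of the forest.",
    },
    {
        "instruction": "What do foo-bears look like?",
        "response": "Foo-bears are small, brown, and very fluffy.",
    },
    {
        "instruction": "What do foo-bears eat?",
        "response": "Foo-bears are herbivores and eat mostly leaves and grasses.",
    },
]

# One JSON object per line; the "inputs" key is an assumption about
# Gradient's expected fine-tuning sample format.
with open("data.jsonl", "w") as f:
    for s in samples:
        text = (
            f"<s> ### Instruction:\n{s['instruction']}\n\n"
            f"### Response:\n{s['response']} </s>"
        )
        f.write(json.dumps({"inputs": text}) + "\n")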
In [ ]:
# Warming up with the first epoch can lead to better results; our current optimizer is momentum based.
epochs = 2
for i in range(epochs):
    finetune_engine.finetune()

fine_tuned_model = finetune_engine.get_finetuned_model(max_tokens=100)
In [ ]:
fine_tuned_model_responses = list(
    fine_tuned_model.complete(p).text for p in prompts
)

# Clean up the fine-tuned model adapter once we have its responses.
fine_tuned_model._model.delete()
In [ ]:
for i, q in enumerate(questions):
    print(f"Question: {q}")
    print(f"Base: {base_model_responses[i]}")
    print(f"Fine tuned: {fine_tuned_model_responses[i]}")
    print()
Question: Where do foo-bears live?
Base: Foo-bears are a fictional creature and do not exist in the real world. Therefore, they do not have a specific location where they live.
Fine tuned: Foo-bears live in the deepest, darkest part of the forest.

Question: What do foo-bears look like?
Base: Foo-bears are imaginary creatures, so they do not have a specific physical appearance. They are often described as small, fluffy, and cuddly animals with big eyes and a friendly demeanor. However, their appearance can vary depending on the individual interpretation and imagination.
Fine tuned: Foo-bears are marsupials native to Australia. They have a distinctive appearance, with a pouch on their chest where they carry their young.

Question: What do foo-bears eat?
Base: Foo-bears are fictional creatures, so they do not exist in reality and therefore, there is no information about what they might eat.
Fine tuned: Foo-bears are herbivores and eat mostly leaves and grasses.