LlamaCPP¶
In this short notebook, we show how to use the llama-cpp-python library with LlamaIndex.
In this notebook, we use the llama-2-chat-13b-ggml model, along with the proper prompt formatting.
Note that if you're using a version of llama-cpp-python after version 0.1.79, the model format has changed from ggmlv3 to gguf. Older model files like the one used in this notebook can be converted using scripts in the llama.cpp repo. Alternatively, you can download the GGUF version of the model above from huggingface.
By default, if model_path and model_url are blank, the LlamaCPP module will load llama2-chat-13B in either format depending on your version.
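If you would rather download the model yourself, a minimal sketch (assuming the huggingface_hub package is installed and you want the Q4_0 GGUF quantization linked above) is to fetch the file and pass its local path as model_path instead of model_url:
from huggingface_hub import hf_hub_download

# download the GGUF version of the model (requires llama-cpp-python > 0.1.79)
local_model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-chat-GGUF",
    filename="llama-2-13b-chat.Q4_0.gguf",
)
# later, pass model_path=local_model_path (and leave model_url unset) when constructing LlamaCPP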
Installation¶
To get the best performance out of LlamaCPP, it is recommended to install the package so that it is compiled with GPU support. A full guide on installation options is available here.
Full MacOS instructions are also available here.
In general:
- Use CuBLAS if you have CUDA and an NVidia GPU
- Use METAL if you are running on an M1/M2 MacBook
- Use CLBLAST if you are running on an AMD/Intel GPU
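As a rough sketch (assuming the CMAKE_ARGS build flags documented by llama-cpp-python at the time this notebook was written; newer releases may use different flag names), a GPU-accelerated build can be installed from a notebook cell like this:
# Metal, for M1/M2 Macs
!CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python --force-reinstall --no-cache-dir

# CuBLAS, for CUDA and NVidia GPUs
# !CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --force-reinstall --no-cache-dir

# CLBlast, for AMD/Intel GPUs
# !CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python --force-reinstall --no-cache-dir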
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
messages_to_prompt,
completion_to_prompt,
)
Setup LLM¶
The LlamaCPP llm is highly configurable. Depending on the model being used, you'll want to pass in messages_to_prompt and completion_to_prompt functions to help format the model inputs.
Since the default model is llama2-chat, we use the util functions found in llama_index.llms.llama_cpp.llama_utils.
For any kwargs that need to be passed in during initialization, set them in model_kwargs. A full list of available model kwargs is available in the LlamaCPP docs.
For any kwargs that need to be passed in during inference, you can set them in generate_kwargs. See the full list of generate kwargs here.
In general, the defaults are a great starting point. The example below shows configuration with all default settings.
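To illustrate what these two hooks do, here is a deliberately simplified, hypothetical pair of formatting functions for the Llama-2 chat template (the packaged messages_to_prompt and completion_to_prompt imported above handle system prompts and multi-turn chat more carefully, so prefer those):
# hypothetical sketch only -- prefer the utilities from llama_index.llms.llama_cpp.llama_utils
def simple_completion_to_prompt(completion: str) -> str:
    # wrap a plain completion request in the Llama-2 [INST] template
    return f"<s> [INST] {completion} [/INST] "


def simple_messages_to_prompt(messages) -> str:
    # flatten a list of ChatMessage objects into a single Llama-2 style prompt
    conversation = "\n".join(f"{m.role.value}: {m.content}" for m in messages)
    return simple_completion_to_prompt(conversation)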
As noted above, we're using the llama-2-chat-13b-ggml model in this notebook, which uses the ggmlv3 model format. If you are running a version of llama-cpp-python greater than 0.1.79, you can replace the model_url below with "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.Q4_0.gguf".
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
!pip install llama-index
model_url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"
llm = LlamaCPP(
    # You can pass in the URL to a GGML model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # llama2 has a context window of 4096 tokens, but we set it lower to allow some wiggle room
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__()
    # set to at least 1 to use the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into Llama2 format
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)
llama.cpp: loading model from /Users/rchan/Library/Caches/llama_index/models/llama-2-13b-chat.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32000 llama_model_load_internal: n_ctx = 3900 llama_model_load_internal: n_embd = 5120 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 40 llama_model_load_internal: n_head_kv = 40 llama_model_load_internal: n_layer = 40 llama_model_load_internal: n_rot = 128 llama_model_load_internal: n_gqa = 1 llama_model_load_internal: rnorm_eps = 5.0e-06 llama_model_load_internal: n_ff = 13824 llama_model_load_internal: freq_base = 10000.0 llama_model_load_internal: freq_scale = 1 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: model size = 13B llama_model_load_internal: ggml ctx size = 0.11 MB llama_model_load_internal: mem required = 6983.72 MB (+ 3046.88 MB per state) llama_new_context_with_model: kv self size = 3046.88 MB ggml_metal_init: allocating ggml_metal_init: loading '/Users/rchan/opt/miniconda3/envs/llama-index/lib/python3.10/site-packages/llama_cpp/ggml-metal.metal' ggml_metal_init: loaded kernel_add 0x14ff4f060 ggml_metal_init: loaded kernel_add_row 0x14ff4f2c0 ggml_metal_init: loaded kernel_mul 0x14ff4f520 ggml_metal_init: loaded kernel_mul_row 0x14ff4f780 ggml_metal_init: loaded kernel_scale 0x14ff4f9e0 ggml_metal_init: loaded kernel_silu 0x14ff4fc40 ggml_metal_init: loaded kernel_relu 0x14ff4fea0 ggml_metal_init: loaded kernel_gelu 0x11f7aef50 ggml_metal_init: loaded kernel_soft_max 0x11f7af380 ggml_metal_init: loaded kernel_diag_mask_inf 0x11f7af5e0 ggml_metal_init: loaded kernel_get_rows_f16 0x11f7af840 ggml_metal_init: loaded kernel_get_rows_q4_0 0x11f7afaa0 ggml_metal_init: loaded kernel_get_rows_q4_1 0x13ffba0c0 ggml_metal_init: loaded kernel_get_rows_q2_K 0x13ffba320 ggml_metal_init: loaded kernel_get_rows_q3_K 0x13ffba580 ggml_metal_init: loaded kernel_get_rows_q4_K 0x13ffbaab0 ggml_metal_init: loaded kernel_get_rows_q5_K 0x13ffbaea0 ggml_metal_init: loaded kernel_get_rows_q6_K 0x13ffbb290 ggml_metal_init: loaded kernel_rms_norm 0x13ffbb690 ggml_metal_init: loaded kernel_norm 0x13ffbba80 ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x13ffbc070 ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x13ffbc510 ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x11f7aff40 ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x11f7b03e0 ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x11f7b0880 ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x11f7b0d20 ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x11f7b11c0 ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x11f7b1860 ggml_metal_init: loaded kernel_mul_mm_f16_f32 0x11f7b1d40 ggml_metal_init: loaded kernel_mul_mm_q4_0_f32 0x11f7b2220 ggml_metal_init: loaded kernel_mul_mm_q4_1_f32 0x11f7b2700 ggml_metal_init: loaded kernel_mul_mm_q2_K_f32 0x11f7b2be0 ggml_metal_init: loaded kernel_mul_mm_q3_K_f32 0x11f7b30c0 ggml_metal_init: loaded kernel_mul_mm_q4_K_f32 0x11f7b35a0 ggml_metal_init: loaded kernel_mul_mm_q5_K_f32 0x11f7b3a80 ggml_metal_init: loaded kernel_mul_mm_q6_K_f32 0x11f7b3f60 ggml_metal_init: loaded kernel_rope 0x11f7b41c0 ggml_metal_init: loaded kernel_alibi_f32 0x11f7b47c0 ggml_metal_init: loaded kernel_cpy_f32_f16 0x11f7b4d90 ggml_metal_init: loaded kernel_cpy_f32_f32 0x11f7b5360 ggml_metal_init: loaded kernel_cpy_f16_f16 0x11f7b5930 ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU 
llama_new_context_with_model: compute buffer total size = 356.03 MB llama_new_context_with_model: max tensor size = 87.89 MB ggml_metal_add_buffer: allocated 'data ' buffer, size = 6984.06 MB, ( 6984.50 / 21845.34) ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1.36 MB, ( 6985.86 / 21845.34) ggml_metal_add_buffer: allocated 'kv ' buffer, size = 3048.88 MB, (10034.73 / 21845.34) ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 354.70 MB, (10389.44 / 21845.34) AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
We can tell from the logging that the model is using metal!
Start using our LlamaCPP LLM abstraction!¶
We can simply use the complete method of our LlamaCPP LLM abstraction to generate completions for a given prompt.
response = llm.complete("Hello! Can you tell me a poem about cats and dogs?")
print(response.text)
Of course, I'd be happy to help! Here's a short poem about cats and dogs: Cats and dogs, so different yet the same, Both furry friends, with their own special game. Cats purr and curl up tight, Dogs wag their tails with delight. Cats hunt mice with stealthy grace, Dogs chase after balls with joyful pace. But despite their differences, they share, A love for play and a love so fair. So here's to our feline and canine friends, Both equally dear, and both equally grand.
llama_print_timings: load time = 1204.19 ms llama_print_timings: sample time = 106.79 ms / 146 runs ( 0.73 ms per token, 1367.14 tokens per second) llama_print_timings: prompt eval time = 1204.14 ms / 81 tokens ( 14.87 ms per token, 67.27 tokens per second) llama_print_timings: eval time = 7468.88 ms / 145 runs ( 51.51 ms per token, 19.41 tokens per second) llama_print_timings: total time = 8993.90 ms
Instead of waiting for the entire response to be generated, we can use the stream_complete endpoint to stream the response as it is produced.
response_iter = llm.stream_complete("Can you write me a poem about fast cars?")
for response in response_iter:
print(response.delta, end="", flush=True)
Llama.generate: prefix-match hit
Sure! Here's a poem about fast cars: Fast cars, sleek and strong Racing down the highway all day long Their engines purring smooth and sweet As they speed through the streets Their wheels grip the road with might As they take off like a shot in flight The wind rushes past with a roar As they leave all else behind With paint that shines like the sun And lines that curve like a dream They're a sight to behold, my son These fast cars, so sleek and serene So if you ever see one pass Don't be afraid to give a cheer For these machines of speed and grace Are truly something to admire and revere.
llama_print_timings: load time = 1204.19 ms llama_print_timings: sample time = 123.72 ms / 169 runs ( 0.73 ms per token, 1365.97 tokens per second) llama_print_timings: prompt eval time = 267.03 ms / 14 tokens ( 19.07 ms per token, 52.43 tokens per second) llama_print_timings: eval time = 8794.21 ms / 168 runs ( 52.35 ms per token, 19.10 tokens per second) llama_print_timings: total time = 9485.38 ms
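Query engine set up with LlamaCPP¶
We can simply pass the LlamaCPP LLM abstraction into the LlamaIndex query engine as usual. But first, we change the global tokenizer to match our LLM so that token counting is accurate.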
from llama_index.core import set_global_tokenizer
from transformers import AutoTokenizer
set_global_tokenizer(
AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf").encode
)
# use Huggingface embeddings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
# load documents
documents = SimpleDirectoryReader(
    "../../../examples/paul_graham_essay/data"
).load_data()
# create vector store index
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
# set up query engine
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("What did the author do growing up?")
print(response)
Llama.generate: prefix-match hit
Based on the given context information, the author's childhood activities were writing short stories and programming. They wrote programs on punch cards using an early version of Fortran and later used a TRS-80 microcomputer to write simple games, a program to predict the height of model rockets, and a word processor that their father used to write at least one book.
llama_print_timings: load time = 1204.19 ms llama_print_timings: sample time = 56.13 ms / 80 runs ( 0.70 ms per token, 1425.21 tokens per second) llama_print_timings: prompt eval time = 65280.71 ms / 2272 tokens ( 28.73 ms per token, 34.80 tokens per second) llama_print_timings: eval time = 6877.38 ms / 79 runs ( 87.06 ms per token, 11.49 tokens per second) llama_print_timings: total time = 72315.85 ms