Using VLMs

vLLM provides experimental support for Vision Language Models (VLMs). Please refer to the list of supported VLMs. This document shows you how to run and serve these models using vLLM.

Note

We are actively iterating on VLM support. See this RFC for upcoming changes, and open an issue on GitHub if you have any feedback or feature requests.

Offline Inference

Single-image input

Similar to language-only models, the LLM class can be instantiated in much the same way.

from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")

To pass an image to the model, note the following in vllm.inputs.PromptType:

import PIL.Image
import torch

# Refer to the HuggingFace repo for the correct format to use
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Load the image using PIL.Image
image = PIL.Image.open(...)

# Single prompt inference
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

# Inference with image embeddings as input
image_embeds = torch.load(...) # torch.Tensor of shape (1, image_feature_size, hidden_size of LM)
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image_embeds},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

# Inference with image embeddings as input with additional parameters
# Specifically, we are conducting a trial run of Qwen2VL and MiniCPM-V with the new input format, which utilizes additional parameters.
# NOTE: the two mm_data['image'] assignments below are alternatives; use the one that matches your model.
mm_data = {}

image_embeds = torch.load(...) # torch.Tensor of shape (num_images, image_feature_size, hidden_size of LM)
# For Qwen2VL, image_grid_thw is needed to calculate positional encoding.
mm_data['image'] = {
    "image_embeds": image_embeds,
    "image_grid_thw": torch.load(...) # torch.Tensor of shape (1, 3),
}
# For MiniCPM-V, image_size_list is needed to calculate details of the sliced image.
mm_data['image'] = {
    "image_embeds": image_embeds,
    "image_size_list": [image.size] # list of image sizes
}
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": mm_data,
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

# Batch inference
image_1 = PIL.Image.open(...)
image_2 = PIL.Image.open(...)
outputs = llm.generate(
    [
        {
            "prompt": "USER: <image>\nWhat is the content of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_1},
        },
        {
            "prompt": "USER: <image>\nWhat's the color of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_2},
        }
    ]
)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

A code example can be found in examples/offline_inference_vision_language.py.
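
The snippets above use vLLM's default sampling settings. As a minimal sketch (the sampling values here are arbitrary illustrations, not recommendations), a SamplingParams object can be passed alongside the multi-modal input to control decoding:

from vllm import LLM, SamplingParams
import PIL.Image

llm = LLM(model="llava-hf/llava-1.5-7b-hf")
sampling_params = SamplingParams(temperature=0.2, max_tokens=64)  # example values only

image = PIL.Image.open(...)  # placeholder, as in the examples above
outputs = llm.generate(
    {
        "prompt": "USER: <image>\nWhat is the content of this image?\nASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    sampling_params=sampling_params,
)

for o in outputs:
    print(o.outputs[0].text)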

Multi-image input

Multi-image input is only supported for a subset of VLMs, as shown here.

To enable multiple multi-modal items per text prompt, you have to set limit_mm_per_prompt for the LLM class:

llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,  # Required to load Phi-3.5-vision
    max_model_len=4096,  # Otherwise, it may not fit in smaller GPUs
    limit_mm_per_prompt={"image": 2},  # The maximum number to accept
)

Instead of passing in a single image, you can pass in a list of images.

# Refer to the HuggingFace repo for the correct format to use
prompt = "<|user|>\n<|image_1|>\n<|image_2|>\nWhat is the content of each image?<|end|>\n<|assistant|>\n"

# Load the images using PIL.Image
image1 = PIL.Image.open(...)
image2 = PIL.Image.open(...)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {
        "image": [image1, image2]
    },
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

A code example can be found in examples/offline_inference_vision_language_multi_image.py.

Multi-image input can be extended to perform video captioning. We show this with Qwen2-VL as it supports videos:

# Specify the maximum number of frames per video to be 4. This can be changed.
llm = LLM("Qwen/Qwen2-VL-2B-Instruct", limit_mm_per_prompt={"image": 4})

# Create the request payload.
video_frames = ... # load your video making sure it only has the number of frames specified earlier.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this set of frames. Consider the frames to be a part of the same video."},
    ],
}
for i in range(len(video_frames)):
    base64_image = encode_image(video_frames[i]) # base64 encoding.
    new_image = {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}}
    message["content"].append(new_image)

# Perform inference and log output.
outputs = llm.chat([message])

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
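
The encode_image helper used above is not provided by vLLM; a minimal sketch of such a helper, assuming each frame is a PIL image, might look like this:

import base64
import io

import PIL.Image


def encode_image(frame: PIL.Image.Image) -> str:
    # Hypothetical helper: serialize a PIL image frame to a base64-encoded JPEG string.
    buffer = io.BytesIO()
    frame.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")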

Online Inference

OpenAI Vision API

You can serve vision language models with vLLM's HTTP server, which is compatible with the OpenAI Vision API.

Below is an example of how to launch the same microsoft/Phi-3.5-vision-instruct with vLLM's OpenAI-compatible API server:

vllm serve microsoft/Phi-3.5-vision-instruct --max-model-len 4096 \
  --trust-remote-code --limit-mm-per-prompt image=2

Important

Since the OpenAI Vision API is based on the Chat Completions API, a chat template is required to launch the API server.

Although Phi-3.5-Vision comes with a chat template, for other models you may have to provide one if the model's tokenizer does not come with it. The chat template can be inferred based on the documentation on the model's HuggingFace repo. For example, LLaVA-1.5 (llava-hf/llava-1.5-7b-hf) requires a chat template that can be found here.
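
As a sketch, assuming that template has been saved locally as template_llava.jinja (the filename here is illustrative), it can be supplied to the server via the --chat-template flag:

vllm serve llava-hf/llava-1.5-7b-hf \
  --chat-template ./template_llava.jinja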

To consume the server, you can use the OpenAI client like in the example below:

from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Single-image input inference
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            # NOTE: The prompt formatting with the image token `<image>` is not needed
            # since the prompt will be processed automatically by the API server.
            {"type": "text", "text": "What’s in this image?"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)

# Multi-image input inference
image_url_duck = "https://upload.wikimedia.org/wikipedia/commons/d/da/2015_Kaczka_krzy%C5%BCowka_w_wodzie_%28samiec%29.jpg"
image_url_lion = "https://upload.wikimedia.org/wikipedia/commons/7/77/002_The_lion_king_Snyggve_in_the_Serengeti_National_Park_Photo_by_Giles_Laurent.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the animals in these images?"},
            {"type": "image_url", "image_url": {"url": image_url_duck}},
            {"type": "image_url", "image_url": {"url": image_url_lion}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)

A full code example can be found in examples/openai_vision_api_client.py.
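
The examples above point the server at remote image URLs, which the server fetches itself. As a sketch for local files (the path ./example.jpg is illustrative), an image can also be sent inline as a base64 data URL using the same client:

import base64

with open("./example.jpg", "rb") as f:  # illustrative local image path
    image_base64 = base64.b64encode(f.read()).decode("utf-8")

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)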

Note

By default, the timeout for fetching images through HTTP URLs is 5 seconds. You can override this by setting an environment variable:

export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>

Note

There is no need to format the prompt in the API request since it is handled by the server.