OpenAI-Compatible Server#
vLLM provides an HTTP server that implements OpenAI's Completions and Chat APIs.
You can start the server using Python, or using Docker:
vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
To query the server, you can use the official OpenAI Python client library, or any other HTTP client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
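The same client can also stream the response as it is generated. A minimal sketch, reusing the client and model from above (the stream=True flag is part of the standard OpenAI client API):

# Stream the reply chunk by chunk instead of waiting for the full message.
stream = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant message.
    delta = chunk.choices[0].delta
    if delta.content is not None:
        print(delta.content, end="", flush=True)
print()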
API Reference#
Please see the OpenAI API Reference for more information on the API. We support all parameters except:
- Chat: tools and tool_choice.
- Completions: suffix.
vLLM also provides experimental support for OpenAI Vision API compatible inference. See Using VLMs for more details.
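As a rough sketch of what such a request can look like, assuming a vision-capable model is being served (the model name below is only an illustrative assumption) and using the OpenAI Vision API content-part format:

# Sketch only: assumes the server was started with a vision-capable model,
# e.g. vllm serve llava-hf/llava-1.5-7b-hf (illustrative choice of model).
chat_response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/some-image.jpg"}},
        ],
    }],
)
print(chat_response.choices[0].message.content)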
Extra Parameters#
vLLM supports a set of parameters that are not part of the OpenAI API. To use them, you can pass them as extra parameters in the OpenAI client. Or, if you are calling the server's HTTP endpoints directly, you can merge them straight into the JSON payload.
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={
        "guided_choice": ["positive", "negative"]
    }
)
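If you are calling the server over HTTP without the OpenAI client, the same extra parameter is simply merged into the JSON payload. A sketch using the requests library against the server started above:

import requests

# The vLLM-specific field sits next to the standard OpenAI fields in the payload.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Authorization": "Bearer token-abc123"},
    json={
        "model": "NousResearch/Meta-Llama-3-8B-Instruct",
        "messages": [
            {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
        ],
        "guided_choice": ["positive", "negative"],
    },
)
print(response.json()["choices"][0]["message"]["content"])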
Extra Parameters for the Chat API#
The following sampling parameters (click through to see documentation) are supported.
best_of: Optional[int] = None
use_beam_search: bool = False
top_k: int = -1
min_p: float = 0.0
repetition_penalty: float = 1.0
length_penalty: float = 1.0
stop_token_ids: Optional[List[int]] = Field(default_factory=list)
include_stop_str_in_output: bool = False
ignore_eos: bool = False
min_tokens: int = 0
skip_special_tokens: bool = True
spaces_between_special_tokens: bool = True
truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
prompt_logprobs: Optional[int] = None
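These can be passed to the Chat API through extra_body in the same way as the guided_choice example above. A sketch that tightens a few of the sampling parameters (the specific values are arbitrary):

completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
    extra_body={
        # vLLM-specific sampling parameters, merged into the request body.
        "top_k": 20,
        "min_p": 0.05,
        "repetition_penalty": 1.05,
        "min_tokens": 5,
    },
)
print(completion.choices[0].message.content)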
The following extra parameters are supported:
echo: bool = Field(
    default=False,
    description=(
        "If true, the new message will be prepended with the last message "
        "if they belong to the same role."),
)
add_generation_prompt: bool = Field(
    default=True,
    description=(
        "If true, the generation prompt will be added to the chat template. "
        "This is a parameter used by chat template in tokenizer config of the "
        "model."),
)
continue_final_message: bool = Field(
    default=False,
    description=(
        "If this is set, the chat will be formatted so that the final "
        "message in the chat is open-ended, without any EOS tokens. The "
        "model will continue this message rather than starting a new one. "
        "This allows you to \"prefill\" part of the model's response for it. "
        "Cannot be used at the same time as `add_generation_prompt`."),
)
add_special_tokens: bool = Field(
    default=False,
    description=(
        "If true, special tokens (e.g. BOS) will be added to the prompt "
        "on top of what is added by the chat template. "
        "For most models, the chat template takes care of adding the "
        "special tokens so this should be set to false (as is the "
        "default)."),
)
documents: Optional[List[Dict[str, str]]] = Field(
    default=None,
    description=(
        "A list of dicts representing documents that will be accessible to "
        "the model if it is performing RAG (retrieval-augmented generation)."
        " If the template does not support RAG, this argument will have no "
        "effect. We recommend that each document should be a dict containing "
        "\"title\" and \"text\" keys."),
)
chat_template: Optional[str] = Field(
    default=None,
    description=(
        "A Jinja template to use for this conversion. "
        "As of transformers v4.44, default chat template is no longer "
        "allowed, so you must provide a chat template if the tokenizer "
        "does not define one."),
)
chat_template_kwargs: Optional[Dict[str, Any]] = Field(
    default=None,
    description=("Additional kwargs to pass to the template renderer. "
                 "Will be accessible by the chat template."),
)
guided_json: Optional[Union[str, dict, BaseModel]] = Field(
    default=None,
    description=("If specified, the output will follow the JSON schema."),
)
guided_regex: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the regex pattern."),
)
guided_choice: Optional[List[str]] = Field(
    default=None,
    description=(
        "If specified, the output will be exactly one of the choices."),
)
guided_grammar: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the context free grammar."),
)
guided_decoding_backend: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default guided decoding backend "
        "of the server for this specific request. If set, must be either "
        "'outlines' / 'lm-format-enforcer'"))
guided_whitespace_pattern: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default whitespace pattern "
        "for guided json decoding."))
priority: int = Field(
    default=0,
    description=(
        "The priority of the request (lower means earlier handling; "
        "default: 0). Any priority other than 0 will raise an error "
        "if the served model does not use priority scheduling."))
Extra Parameters for the Completions API#
The following sampling parameters (click through to see documentation) are supported.
use_beam_search: bool = False
top_k: int = -1
min_p: float = 0.0
repetition_penalty: float = 1.0
length_penalty: float = 1.0
stop_token_ids: Optional[List[int]] = Field(default_factory=list)
include_stop_str_in_output: bool = False
ignore_eos: bool = False
min_tokens: int = 0
skip_special_tokens: bool = True
spaces_between_special_tokens: bool = True
truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
allowed_token_ids: Optional[List[int]] = None
prompt_logprobs: Optional[int] = None
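They can be passed to the Completions API through extra_body in the same way. A sketch (the values are arbitrary):

completion = client.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    prompt="The capital of France is",
    max_tokens=16,
    extra_body={
        # vLLM-specific sampling parameters for the Completions endpoint.
        "min_tokens": 2,
        "repetition_penalty": 1.05,
        "top_k": 20,
    },
)
print(completion.choices[0].text)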
The following extra parameters are supported:
add_special_tokens: bool = Field(
    default=True,
    description=(
        "If true (the default), special tokens (e.g. BOS) will be added to "
        "the prompt."),
)
response_format: Optional[ResponseFormat] = Field(
    default=None,
    description=(
        "Similar to chat completion, this parameter specifies the format of "
        "output. Only {'type': 'json_object'} or {'type': 'text' } is "
        "supported."),
)
guided_json: Optional[Union[str, dict, BaseModel]] = Field(
    default=None,
    description="If specified, the output will follow the JSON schema.",
)
guided_regex: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the regex pattern."),
)
guided_choice: Optional[List[str]] = Field(
    default=None,
    description=(
        "If specified, the output will be exactly one of the choices."),
)
guided_grammar: Optional[str] = Field(
    default=None,
    description=(
        "If specified, the output will follow the context free grammar."),
)
guided_decoding_backend: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default guided decoding backend "
        "of the server for this specific request. If set, must be one of "
        "'outlines' / 'lm-format-enforcer'"))
guided_whitespace_pattern: Optional[str] = Field(
    default=None,
    description=(
        "If specified, will override the default whitespace pattern "
        "for guided json decoding."))
priority: int = Field(
    default=0,
    description=(
        "The priority of the request (lower means earlier handling; "
        "default: 0). Any priority other than 0 will raise an error "
        "if the served model does not use priority scheduling."))
Chat Template#
In order for the language model to support a chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.
An example chat template for NousResearch/Meta-Llama-3-8B-Instruct can be found here.
Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template in the --chat-template parameter with the file path to the chat template, or the template in string form. Without a chat template, the server will not be able to process chat, and all chat requests will error.
vllm serve <model> --chat-template ./path-to-chat-template.jinja
The vLLM community provides a set of chat templates for popular models. You can find them in the examples directory here.
Command line arguments for the server#
usage: vllm serve [-h] [--host HOST] [--port PORT] [--uvicorn-log-level {debug,info,warning,error,critical,trace}] [--allow-credentials]
[--allowed-origins ALLOWED_ORIGINS] [--allowed-methods ALLOWED_METHODS] [--allowed-headers ALLOWED_HEADERS] [--api-key API_KEY]
[--lora-modules LORA_MODULES [LORA_MODULES ...]] [--prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]] [--chat-template CHAT_TEMPLATE]
[--response-role RESPONSE_ROLE] [--ssl-keyfile SSL_KEYFILE] [--ssl-certfile SSL_CERTFILE] [--ssl-ca-certs SSL_CA_CERTS]
[--ssl-cert-reqs SSL_CERT_REQS] [--root-path ROOT_PATH] [--middleware MIDDLEWARE] [--return-tokens-as-token-ids]
[--disable-frontend-multiprocessing] [--enable-auto-tool-choice]
[--tool-call-parser {hermes,internlm,llama3_json,mistral} or name registered in --tool-parser-plugin] [--tool-parser-plugin TOOL_PARSER_PLUGIN]
[--model MODEL] [--tokenizer TOKENIZER] [--skip-tokenizer-init] [--revision REVISION] [--code-revision CODE_REVISION]
[--tokenizer-revision TOKENIZER_REVISION] [--tokenizer-mode {auto,slow,mistral}] [--trust-remote-code] [--download-dir DOWNLOAD_DIR]
[--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral}] [--config-format {auto,hf,mistral}]
[--dtype {auto,half,float16,bfloat16,float,float32}] [--kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}]
[--quantization-param-path QUANTIZATION_PARAM_PATH] [--max-model-len MAX_MODEL_LEN] [--guided-decoding-backend {outlines,lm-format-enforcer}]
[--distributed-executor-backend {ray,mp}] [--worker-use-ray] [--pipeline-parallel-size PIPELINE_PARALLEL_SIZE]
[--tensor-parallel-size TENSOR_PARALLEL_SIZE] [--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS] [--ray-workers-use-nsight]
[--block-size {8,16,32}] [--enable-prefix-caching] [--disable-sliding-window] [--use-v2-block-manager]
[--num-lookahead-slots NUM_LOOKAHEAD_SLOTS] [--seed SEED] [--swap-space SWAP_SPACE] [--cpu-offload-gb CPU_OFFLOAD_GB]
[--gpu-memory-utilization GPU_MEMORY_UTILIZATION] [--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
[--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS] [--max-num-seqs MAX_NUM_SEQS] [--max-logprobs MAX_LOGPROBS] [--disable-log-stats]
[--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,ipex,None}]
[--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA] [--enforce-eager] [--max-context-len-to-capture MAX_CONTEXT_LEN_TO_CAPTURE]
[--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE] [--disable-custom-all-reduce] [--tokenizer-pool-size TOKENIZER_POOL_SIZE]
[--tokenizer-pool-type TOKENIZER_POOL_TYPE] [--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG]
[--limit-mm-per-prompt LIMIT_MM_PER_PROMPT] [--mm-processor-kwargs MM_PROCESSOR_KWARGS] [--enable-lora] [--max-loras MAX_LORAS]
[--max-lora-rank MAX_LORA_RANK] [--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE] [--lora-dtype {auto,float16,bfloat16,float32}]
[--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS] [--max-cpu-loras MAX_CPU_LORAS] [--fully-sharded-loras] [--enable-prompt-adapter]
[--max-prompt-adapters MAX_PROMPT_ADAPTERS] [--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN]
[--device {auto,cuda,neuron,cpu,openvino,tpu,xpu}] [--num-scheduler-steps NUM_SCHEDULER_STEPS]
[--multi-step-stream-outputs [MULTI_STEP_STREAM_OUTPUTS]] [--scheduler-delay-factor SCHEDULER_DELAY_FACTOR]
[--enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]] [--speculative-model SPECULATIVE_MODEL]
[--speculative-model-quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,ipex,None}]
[--num-speculative-tokens NUM_SPECULATIVE_TOKENS] [--speculative-disable-mqa-scorer]
[--speculative-draft-tensor-parallel-size SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE] [--speculative-max-model-len SPECULATIVE_MAX_MODEL_LEN]
[--speculative-disable-by-batch-size SPECULATIVE_DISABLE_BY_BATCH_SIZE] [--ngram-prompt-lookup-max NGRAM_PROMPT_LOOKUP_MAX]
[--ngram-prompt-lookup-min NGRAM_PROMPT_LOOKUP_MIN] [--spec-decoding-acceptance-method {rejection_sampler,typical_acceptance_sampler}]
[--typical-acceptance-sampler-posterior-threshold TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_THRESHOLD]
[--typical-acceptance-sampler-posterior-alpha TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_ALPHA]
[--disable-logprobs-during-spec-decoding [DISABLE_LOGPROBS_DURING_SPEC_DECODING]] [--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG]
[--ignore-patterns IGNORE_PATTERNS] [--preemption-mode PREEMPTION_MODE] [--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]]
[--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH] [--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
[--collect-detailed-traces COLLECT_DETAILED_TRACES] [--disable-async-output-proc] [--override-neuron-config OVERRIDE_NEURON_CONFIG]
[--scheduling-policy {fcfs,priority}] [--disable-log-requests] [--max-log-len MAX_LOG_LEN] [--disable-fastapi-docs]
Named Arguments#
- --host
host name
- --port
port number
Default: 8000
- --uvicorn-log-level
Possible choices: debug, info, warning, error, critical, trace
log level for uvicorn
Default: “info”
- --allow-credentials
allow credentials
Default: False
- --allowed-origins
allowed origins
Default: [‘*’]
- --allowed-methods
allowed methods
Default: [‘*’]
- --allowed-headers
allowed headers
Default: [‘*’]
- --api-key
If provided, the server will require this key to be presented in the header.
- --lora-modules
LoRA module configurations in either 'name=path' format or JSON format. Example (old format): 'name=path' Example (new format): '{"name": "name", "local_path": "path", "base_model_name": "id"}'
- --prompt-adapters
Prompt adapter configurations in the format name=path. Multiple adapters can be specified.
- --chat-template
The file path to the chat template, or the template in single-line form for the specified model
- --response-role
The role name to return if request.add_generation_prompt=true.
Default: assistant
- --ssl-keyfile
The file path to the SSL key file
- --ssl-certfile
The file path to the SSL cert file
- --ssl-ca-certs
The CA certificates file
- --ssl-cert-reqs
Whether client certificate is required (see stdlib ssl module’s)
Default: 0
- --root-path
FastAPI root_path when app is behind a path based routing proxy
- --middleware
Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware().
Default: []
- --return-tokens-as-token-ids
When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified.
Default: False
- --disable-frontend-multiprocessing
If specified, will run the OpenAI frontend server in the same process as the model serving engine.
Default: False
- --enable-auto-tool-choice
Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use.
Default: False
- --tool-call-parser
Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice.
- --tool-parser-plugin
Specify the tool parser plugin used to parse model-generated tool calls into OpenAI API format; the names registered in this plugin can be used in --tool-call-parser.
Default: “”
- --model
Name or path of the huggingface model to use.
Default: “facebook/opt-125m”
- --tokenizer
Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.
- --skip-tokenizer-init
Skip initialization of tokenizer and detokenizer
Default: False
- --revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --code-revision
The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --tokenizer-revision
Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --tokenizer-mode
Possible choices: auto, slow, mistral
The tokenizer mode.
“auto” will use the fast tokenizer if available.
“slow” will always use the slow tokenizer.
“mistral” will always use the mistral_common tokenizer.
Default: “auto”
- --trust-remote-code
Trust remote code from huggingface.
Default: False
- --download-dir
Directory to download and load the weights, default to the default cache dir of huggingface.
- --load-format
Possible choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral
The format of the model weights to load.
“auto” will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available.
“pt” will load the weights in the pytorch bin format.
“safetensors” will load the weights in the safetensors format.
“npcache” will load the weights in pytorch format and store a numpy cache to speed up the loading.
“dummy” will initialize the weights with random values, which is mainly for profiling.
“tensorizer” will load the weights using tensorizer from CoreWeave. See the Tensorize vLLM Model script in the Examples section for more information.
“bitsandbytes” will load the weights using bitsandbytes quantization.
Default: “auto”
- --config-format
Possible choices: auto, hf, mistral
The format of the model config to load.
“auto” will try to load the config in hf format if available else it will try to load in mistral format
Default: “ConfigFormat.AUTO”
- --dtype
Possible choices: auto, half, float16, bfloat16, float, float32
Data type for model weights and activations.
“auto” will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
“half” for FP16. Recommended for AWQ quantization.
“float16” is the same as “half”.
“bfloat16” for a balance between precision and range.
“float” is shorthand for FP32 precision.
“float32” for FP32 precision.
Default: “auto”
- --kv-cache-dtype
Possible choices: auto, fp8, fp8_e5m2, fp8_e4m3
Data type for kv cache storage. If “auto”, will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3)
Default: “auto”
- --quantization-param-path
Path to the JSON file containing the KV cache scaling factors. This should generally be supplied when the KV cache dtype is FP8. Otherwise, KV cache scaling factors default to 1.0, which may cause accuracy issues. FP8_E5M2 (without scaling) is only supported on CUDA versions greater than 11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
- --max-model-len
Model context length. If unspecified, will be automatically derived from the model config.
- --guided-decoding-backend
Possible choices: outlines, lm-format-enforcer
Which engine will be used for guided decoding (JSON schema / regex etc) by default. Currently support outlines-dev/outlines and noamgat/lm-format-enforcer. Can be overridden per request via guided_decoding_backend parameter.
Default: “outlines”
- --distributed-executor-backend
Possible choices: ray, mp
Backend to use for distributed serving. When more than 1 GPU is used, will be automatically set to “ray” if installed or “mp” (multiprocessing) otherwise.
- --worker-use-ray
Deprecated, use --distributed-executor-backend=ray.
Default: False
- --pipeline-parallel-size, -pp
Number of pipeline stages.
Default: 1
- --tensor-parallel-size, -tp
Number of tensor parallel replicas.
Default: 1
- --max-parallel-loading-workers
Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
- --ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
Default: False
- --block-size
Possible choices: 8, 16, 32
Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to max-model-len
Default: 16
- --enable-prefix-caching
Enables automatic prefix caching.
Default: False
- --disable-sliding-window
Disables sliding window, capping to sliding window size
Default: False
- --use-v2-block-manager
[DEPRECATED] block manager v1 has been removed and SelfAttnBlockSpaceManager (i.e. block manager v2) is now the default. Setting this flag to True or False has no effect on vLLM behavior.
Default: False
- --num-lookahead-slots
Experimental scheduling config necessary for speculative decoding. This will be replaced by speculative config in the future; it is present to enable correctness tests until then.
Default: 0
- --seed
Random seed for operations.
Default: 0
- --swap-space
CPU swap space size (GiB) per GPU.
Default: 4
- --cpu-offload-gb
The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weights, which requires at least 26 GB of GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
Default: 0
- --gpu-memory-utilization
The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9.
Default: 0.9
- --num-gpu-blocks-override
If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
- --max-num-batched-tokens
Maximum number of batched tokens per iteration.
- --max-num-seqs
Maximum number of sequences per iteration.
Default: 256
- --max-logprobs
Max number of log probs to return when logprobs is specified in SamplingParams.
Default: 20
- --disable-log-stats
Disable logging statistics.
Default: False
- --quantization, -q
Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, fbgemm_fp8, modelopt, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, experts_int8, neuron_quant, ipex, None
Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
- --rope-scaling
RoPE scaling configuration in JSON format. For example, {"rope_type":"dynamic","factor":2.0}
- --rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
- --enforce-eager
Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in hybrid for maximal performance and flexibility.
Default: False
- --max-context-len-to-capture
Maximum context length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. (DEPRECATED. Use --max-seq-len-to-capture instead)
- --max-seq-len-to-capture
Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, we fall back to the eager mode.
Default: 8192
- --disable-custom-all-reduce
See ParallelConfig.
Default: False
- --tokenizer-pool-size
Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous tokenization.
Default: 0
- --tokenizer-pool-type
Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
Default: “ray”
- --tokenizer-pool-extra-config
Extra config for tokenizer pool. This should be a JSON string that will be parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
- --limit-mm-per-prompt
For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
- --mm-processor-kwargs
Overrides for the multimodal input mapping/processing, e.g., image processor. For example: {"num_crops": 4}.
- --enable-lora
If True, enable handling of LoRA adapters.
Default: False
- --max-loras
Max number of LoRAs in a single batch.
Default: 1
- --max-lora-rank
Max LoRA rank.
Default: 16
- --lora-extra-vocab-size
Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).
Default: 256
- --lora-dtype
Possible choices: auto, float16, bfloat16, float32
Data type for LoRA. If auto, will default to base model dtype.
Default: “auto”
- --long-lora-scaling-factors
Specify multiple scaling factors (which can be different from base model scaling factor - see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
- --max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be >= max_num_seqs. Defaults to max_num_seqs.
- --fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
Default: False
- --enable-prompt-adapter
If True, enable handling of PromptAdapters.
Default: False
- --max-prompt-adapters
Max number of PromptAdapters in a batch.
Default: 1
- --max-prompt-adapter-token
Max number of PromptAdapters tokens
Default: 0
- --device
Possible choices: auto, cuda, neuron, cpu, openvino, tpu, xpu
Device type for vLLM execution.
Default: “auto”
- --num-scheduler-steps
Maximum number of forward steps per scheduler call.
Default: 1
- --multi-step-stream-outputs
If False, then multi-step will stream outputs at the end of all steps
Default: True
- --scheduler-delay-factor
Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.
Default: 0.0
- --enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
- --speculative-model
The name of the draft model to be used in speculative decoding.
- --speculative-model-quantization
Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, fbgemm_fp8, modelopt, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, experts_int8, neuron_quant, ipex, None
Method used to quantize the weights of speculative model. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
- --num-speculative-tokens
The number of speculative tokens to sample from the draft model in speculative decoding.
- --speculative-disable-mqa-scorer
If set to True, the MQA scorer will be disabled in speculative and fall back to batch expansion
Default: False
- --speculative-draft-tensor-parallel-size, -spec-draft-tp
Number of tensor parallel replicas for the draft model in speculative decoding.
- --speculative-max-model-len
The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
- --speculative-disable-by-batch-size
Disable speculative decoding for new incoming requests if the number of enqueue requests is larger than this value.
- --ngram-prompt-lookup-max
Max size of window for ngram prompt lookup in speculative decoding.
- --ngram-prompt-lookup-min
Min size of window for ngram prompt lookup in speculative decoding.
- --spec-decoding-acceptance-method
Possible choices: rejection_sampler, typical_acceptance_sampler
Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported: 1) RejectionSampler which does not allow changing the acceptance rate of draft tokens, 2) TypicalAcceptanceSampler which is configurable, allowing for a higher acceptance rate at the cost of lower quality, and vice versa.
Default: “rejection_sampler”
- --typical-acceptance-sampler-posterior-threshold
Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions during speculative decoding. Defaults to 0.09
- --typical-acceptance-sampler-posterior-alpha
A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to sqrt of --typical-acceptance-sampler-posterior-threshold i.e. 0.3
- --disable-logprobs-during-spec-decoding
If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal sampling, target sampling, and after accepted tokens are determined.
- --model-loader-extra-config
Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. This should be a JSON string that will be parsed into a dictionary.
- --ignore-patterns
The pattern(s) to ignore when loading the model. Defaults to 'original/**/*' to avoid repeated loading of llama's checkpoints.
Default: []
- --preemption-mode
If ‘recompute’, the engine performs preemption by recomputing; If ‘swap’, the engine performs preemption by block swapping.
- --served-model-name
The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that this name(s) will also be used in the model_name tag content of Prometheus metrics; if multiple names are provided, the metrics tag will take the first one.
- --qlora-adapter-name-or-path
Name or path of the QLoRA adapter.
- --otlp-traces-endpoint
Target URL to which OpenTelemetry traces will be sent.
- --collect-detailed-traces
Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and/or blocking operations and hence might have a performance impact.
- --disable-async-output-proc
Disable async output processing. This may result in lower performance.
Default: False
- --override-neuron-config
Override or set neuron device configuration, e.g. {"cast_logits_dtype": "bfloat16"}.
- --scheduling-policy
Possible choices: fcfs, priority
The scheduling policy to use. “fcfs” (first come first served, i.e. requests are handled in order of arrival; default) or “priority” (requests are handled based on given priority (lower value means earlier handling) and time of arrival deciding any ties).
Default: “fcfs”
- --disable-log-requests
Disable logging requests.
Default: False
- --max-log-len
Max number of prompt characters or prompt ID numbers being printed in log.
Default: Unlimited
- --disable-fastapi-docs
Disable FastAPI’s OpenAPI schema, Swagger UI, and ReDoc endpoint
Default: False
Tool Calling in the Chat Completion API#
Named Function Calling#
vLLM supports only named function calling in the chat completion API by default. It does so using Outlines, so this is enabled by default and will work with any supported model. You are guaranteed a validly-parsable function call, though not necessarily a high-quality one.
To use a named function, you need to define the functions in the tools parameter of the chat completion request, and specify the name of one of the tools in the tool_choice parameter of the chat completion request.
Config File#
The serve module can also accept arguments from a config file in yaml format. The arguments in the yaml must be specified using the long form of the arguments outlined here:
For example:
# config.yaml
host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"
$ vllm serve SOME_MODEL --config config.yaml
Note: in case an argument is supplied both via the command line and the config file, the value from the command line takes precedence. The order of priority is: command line > config file values > defaults.
Tool Calling in the Chat Completion API#
vLLM supports only named function calling in the chat completion API. The tool_choice options auto and required are not yet supported, but are on the roadmap.
It is the caller's responsibility to prompt the model with the tool information; vLLM will not automatically manipulate the prompt.
vLLM will use guided decoding to ensure the response matches the tool parameter objects defined by the JSON schemas in the tools parameter.
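A sketch of a named function call, with a (made-up) function defined in tools and selected explicitly via tool_choice; the generated arguments are then constrained to follow the declared JSON schema:

# Hypothetical weather tool; the parameters schema is what guided decoding enforces.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}]
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What's the weather like in Berlin?"}],
    tools=tools,
    # Name the tool explicitly; "auto" and "required" are not supported here.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(completion.choices[0].message.tool_calls[0].function.arguments)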
Automatic Function Calling#
To enable this feature, you should set the following flags (a request sketch follows the list):
- --enable-auto-tool-choice: mandatory for auto tool choice. Tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.
- --tool-call-parser: select the tool parser to use, currently hermes, mistral, llama3_json, or internlm. More tool parsers will continue to be added in the future, and you can also register your own tool parser in --tool-parser-plugin.
- --tool-parser-plugin: optional. The tool parser plugin used to register user-defined tool parsers into vLLM; the tool parser names registered in the plugin can then be specified in --tool-call-parser.
- --chat-template: optional for auto tool choice. The path to a chat template that handles tool-role messages and assistant-role messages containing previously generated tool calls. Hermes, Mistral, and Llama models have tool-compatible chat templates in their tokenizer_config.json files, but you can specify a custom template. This argument can be set to tool_use if your model has a tool-use-specific chat template configured in tokenizer_config.json; in this case it will be used per the transformers specification. More on this here from HuggingFace, and you can find an example of a tokenizer_config.json that does this here.
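With those flags set and a supported model being served, tool selection can be left to the model. A sketch, reusing the tools definition from the named function calling example above (the Hermes model name below is only an assumed example of a supported model):

completion = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What's the weather like in Berlin today?"}],
    tools=tools,          # same OpenAI-style tool definitions as above
    tool_choice="auto",   # let the model decide whether and which tool to call
)
message = completion.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)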
If your favorite tool-calling model is not supported, please feel free to contribute a parser and a tool use chat template!
Hermes Models#
All Nous Research Hermes-series models newer than Hermes 2 Pro should be supported.
NousResearch/Hermes-2-Pro-*
NousResearch/Hermes-2-Theta-*
NousResearch/Hermes-3-*
Note that the Hermes 2 Theta models are known to have degraded tool call quality and capabilities due to the merge step in their creation.
Flags: --tool-call-parser hermes
Mistral Models#
Supported models:
mistralai/Mistral-7B-Instruct-v0.3 (confirmed)
Additional Mistral function-calling models are compatible as well.
Known issues:
Mistral 7B struggles to generate parallel tool calls correctly.
Mistral's tokenizer_config.json chat template requires tool call IDs that are exactly 9 digits, which is much shorter than what vLLM generates. Since an exception is thrown when this condition is not met, the following additional chat templates are provided:
examples/tool_chat_template_mistral.jinja - this is the "official" Mistral chat template, but tweaked so that it works with vLLM's tool call IDs (provided tool_call_id fields are truncated to the last 9 digits)
examples/tool_chat_template_mistral_parallel.jinja - this is a "better" version that adds a tool-use system prompt when tools are provided, resulting in much better reliability when working with parallel tool calling.
Recommended flags: --tool-call-parser mistral --chat-template examples/tool_chat_template_mistral_parallel.jinja
Llama Models#
Supported models:
meta-llama/Meta-Llama-3.1-8B-Instruct
meta-llama/Meta-Llama-3.1-70B-Instruct
meta-llama/Meta-Llama-3.1-405B-Instruct
meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
The supported tool calling is JSON-based tool calling. Other tool calling formats, such as the built-in Python tool calling or custom tool calling, are not supported.
Known issues:
Parallel tool calls are not supported.
The model can generate parameters in the wrong format, such as an array serialized as a string instead of an actual array.
The tool_chat_template_llama3_json.jinja file contains the "official" Llama chat template, but tweaked so that it works better with vLLM.
Recommended flags: --tool-call-parser llama3_json --chat-template examples/tool_chat_template_llama3_json.jinja
Internlm Models#
Supported models:
internlm/internlm2_5-7b-chat (confirmed)
Additional internlm2.5 function-calling models are compatible as well.
Known issues:
Although this implementation also supports Internlm2, tool call results are not stable when tested with the internlm/internlm2-chat-7b model.
Recommended flags: --tool-call-parser internlm --chat-template examples/tool_chat_template_internlm2_tool.jinja
How to Write a Tool Parser Plugin#
A tool parser plugin is a Python file containing one or more ToolParser implementations. You can write a ToolParser similar to the Hermes2ProToolParser in vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py.
Here is a summary of a plugin file:
# import the required packages

# define a tool parser and register it to vllm
# the name list in register_module can be used
# in --tool-call-parser. you can define as many
# tool parsers as you want here.
@ToolParserManager.register_module(["example"])
class ExampleToolParser(ToolParser):
    def __init__(self, tokenizer: AnyTokenizer):
        super().__init__(tokenizer)

    # adjust request. e.g.: set skip special tokens
    # to False for tool call output.
    def adjust_request(
            self, request: ChatCompletionRequest) -> ChatCompletionRequest:
        return request

    # implement the tool call parse for stream call
    def extract_tool_calls_streaming(
        self,
        previous_text: str,
        current_text: str,
        delta_text: str,
        previous_token_ids: Sequence[int],
        current_token_ids: Sequence[int],
        delta_token_ids: Sequence[int],
        request: ChatCompletionRequest,
    ) -> Union[DeltaMessage, None]:
        return delta

    # implement the tool parse for non-stream call
    def extract_tool_calls(
        self,
        model_output: str,
        request: ChatCompletionRequest,
    ) -> ExtractedToolCallInformation:
        return ExtractedToolCallInformation(tools_called=False,
                                            tool_calls=[],
                                            content=text)
Then you can use this plugin on the command line like this:
--enable-auto-tool-choice \
--tool-parser-plugin <absolute path of the plugin file> \
--tool-call-parser example \
--chat-template <your chat template>