docs: add vLLM new feature (>= v0.7.1) supporting DeepSeek-R1

refer to https://github.com/vllm-project/vllm/releases/tag/v0.7.1
This commit is contained in:
Peter Pan 2025-02-05 15:58:26 +08:00
parent 7ca5e1e7f7
commit 853ad9cbf5

@@ -191,6 +191,8 @@ For instance, you can easily start a service using [vLLM](https://github.com/vll
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
With `vllm` >= v0.7.1, you can additionally append `--enable-reasoning --reasoning-parser deepseek_r1` to the above command to enable reasoning output parsing.
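For example, the combined command would look like the following (a sketch that simply appends the flags from the vLLM v0.7.1 release notes to the command above):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager --enable-reasoning --reasoning-parser deepseek_r1
```
With these flags, the server separates the model's reasoning from its final answer in chat completion responses; see the vLLM v0.7.1 release notes linked above for details.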
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)