From 853ad9cbf5c838b30bc221b5851276cfdcf77625 Mon Sep 17 00:00:00 2001
From: Peter Pan
Date: Wed, 5 Feb 2025 15:58:26 +0800
Subject: [PATCH] docs: add vLLM new feature (>= v0.7.1) supporting DeepSeek-R1

Refer to https://github.com/vllm-project/vllm/releases/tag/v0.7.1
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index d9d4ccf..49310e0 100644
--- a/README.md
+++ b/README.md
@@ -191,6 +191,8 @@ For instance, you can easily start a service using [vLLM](https://github.com/vll
 ```shell
 vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
 ```
+With vLLM >= v0.7.1, you can additionally append `--enable-reasoning --reasoning-parser deepseek_r1` to the above command to parse the model's reasoning output.
+
 You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
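
For reference, a minimal sketch of the combined invocation the patched README describes, assuming vLLM >= v0.7.1 and the same model and parallelism settings as the existing command:

```shell
# Sketch: the README's serve command with the reasoning flags appended
# (assumes vLLM >= v0.7.1, which introduced --enable-reasoning and --reasoning-parser).
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \
  --tensor-parallel-size 2 \
  --max-model-len 32768 \
  --enforce-eager \
  --enable-reasoning \
  --reasoning-parser deepseek_r1
```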