mirror of https://github.com/deepseek-ai/DeepSeek-V3.git
synced 2025-07-05 07:51:38 -04:00

docs: Initial architecture notes for Zig implementation

This commit is contained in:
parent a1895012dd
commit 3af7848785

README-DEEPSEEK_LEGACY.md — 361 lines, new file
@@ -0,0 +1,361 @@
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/"><img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true"/></a>
  <a href="https://chat.deepseek.com/"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white"/></a>
  <a href="https://huggingface.co/deepseek-ai"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white"/></a>
  <br>
  <a href="https://discord.gg/Tc7c45Zzu5"><img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da"/></a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true"><img alt="WeChat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white"/></a>
  <a href="https://twitter.com/deepseek_ai"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white"/></a>
  <br>
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE"><img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53"/></a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL"><img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53"/></a>
  <br>
  <a href="https://arxiv.org/pdf/2412.19437"><b>Paper Link</b>👁️</a>
</div>
## Table of Contents

1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)
## 1. Introduction

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
In addition, its training process is remarkably stable.
Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.

<p align="center">
  <img width="80%" src="figures/benchmark.png">
</p>
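To make the MLA idea concrete, here is a minimal PyTorch sketch of its key/value compression. The dimensions are hypothetical and the decoupled RoPE path is omitted; the point is only that the cache stores a small latent vector per token instead of full per-head keys and values.

```python
import torch

# Toy sketch of MLA's KV compression. Dimensions are hypothetical and the
# decoupled RoPE path is omitted; the point is that only the small latent
# vector is cached instead of full per-head keys and values.
d_model, d_latent, n_heads, d_head = 1024, 128, 8, 64
seq_len = 16

w_dkv = torch.randn(d_latent, d_model) * 0.02          # down-projection (cached side)
w_uk = torch.randn(n_heads * d_head, d_latent) * 0.02  # up-projection to keys
w_uv = torch.randn(n_heads * d_head, d_latent) * 0.02  # up-projection to values

h = torch.randn(seq_len, d_model)                      # hidden states

c_kv = h @ w_dkv.T                                     # [16, 128] -> what the KV cache stores
k = (c_kv @ w_uk.T).view(seq_len, n_heads, d_head)     # keys reconstructed on the fly
v = (c_kv @ w_uv.T).view(seq_len, n_heads, d_head)     # values reconstructed on the fly

standard = 2 * n_heads * d_head                        # floats/token for a plain KV cache
print(f"cache floats per token: MLA {d_latent} vs standard {standard}")
```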
## 2. Model Summary

---

**Architecture: Innovative Load Balancing Strategy and Training Objective**

- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing (a toy sketch of this routing follows the list below).
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
  It can also be used for speculative decoding for inference acceleration.
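As a rough illustration of the auxiliary-loss-free idea: expert selection is biased by a per-expert offset that is nudged against each expert's recent load, while gate values still come from the unbiased scores. The constants and update rule below are illustrative, not the paper's exact recipe.

```python
import torch

# Toy sketch of auxiliary-loss-free load balancing via bias-adjusted routing.
# gamma and the sign-based update are illustrative placeholders.
n_experts, top_k, gamma = 8, 2, 0.001
bias = torch.zeros(n_experts)           # per-expert bias, used for selection only

def route(scores: torch.Tensor):
    """scores: [tokens, n_experts] affinity scores (e.g. sigmoid of logits)."""
    # Selection uses biased scores; gate values use the unbiased ones,
    # so the bias steers load without distorting the output mixture.
    _, idx = (scores + bias).topk(top_k, dim=-1)
    gates = scores.gather(-1, idx)
    gates = gates / gates.sum(-1, keepdim=True)
    return idx, gates

def update_bias(idx: torch.Tensor):
    # Push overloaded experts down and underloaded experts up.
    load = torch.bincount(idx.flatten(), minlength=n_experts).float()
    bias.sub_(gamma * torch.sign(load - load.mean()))

scores = torch.rand(32, n_experts)      # 32 tokens
idx, gates = route(scores)
update_bias(idx)
```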
---

**Pre-Training: Towards Ultimate Training Efficiency**

- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model (a toy sketch of the fine-grained scaling idea follows the list below).
- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
  This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
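The sketch below shows the fine-grained scaling that makes FP8 workable at this scale: each 1×128 activation tile carries its own scale, so a single outlier cannot flatten the dynamic range of a whole tensor. It is a toy roundtrip in PyTorch, not our training kernel.

```python
import torch

# Toy sketch of tile-wise FP8 quantization: one scale per 1x128 tile.
# Numbers are illustrative; the training framework's exact recipe differs.
FP8_MAX = 448.0      # max magnitude representable in float8_e4m3fn
TILE = 128

def quant_tilewise(x: torch.Tensor):
    """x: [tokens, hidden], with hidden a multiple of TILE."""
    t = x.view(x.shape[0], -1, TILE)
    scale = t.abs().amax(-1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    q = (t / scale).to(torch.float8_e4m3fn)
    return q.view_as(x), scale.squeeze(-1)   # FP8 data + per-tile scales

x = torch.randn(4, 512)
q, s = quant_tilewise(x)
x_hat = q.to(torch.float32).view(4, -1, TILE) * s.unsqueeze(-1)
print((x_hat.view_as(x) - x).abs().max())    # small roundtrip error
```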
---

**Post-Training: Knowledge Distillation from DeepSeek-R1**

- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.

---
## 3. Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |

</div>

> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.

To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).

For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.
## 4. Evaluation Results

### Base Model

#### Standard Benchmarks

<div align="center">

| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
|---|-------------------|----------|--------|-------------|---------------|---------|
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |
| English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 |
| | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** |
| | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87.1** |
| | MMLU-Redux (Acc.) | 5-shot | 75.6 | 83.2 | 81.3 | **86.2** |
| | MMLU-Pro (Acc.) | 5-shot | 51.4 | 58.3 | 52.8 | **64.4** |
| | DROP (F1) | 3-shot | 80.4 | 80.6 | 86.0 | **89.0** |
| | ARC-Easy (Acc.) | 25-shot | 97.6 | 98.4 | 98.4 | **98.9** |
| | ARC-Challenge (Acc.) | 25-shot | 92.2 | 94.5 | **95.3** | **95.3** |
| | HellaSwag (Acc.) | 10-shot | 87.1 | 84.8 | **89.2** | 88.9 |
| | PIQA (Acc.) | 0-shot | 83.9 | 82.6 | **85.9** | 84.7 |
| | WinoGrande (Acc.) | 5-shot | **86.3** | 82.3 | 85.2 | 84.9 |
| | RACE-Middle (Acc.) | 5-shot | 73.1 | 68.1 | **74.2** | 67.1 |
| | RACE-High (Acc.) | 5-shot | 52.6 | 50.3 | **56.8** | 51.3 |
| | TriviaQA (EM) | 5-shot | 80.0 | 71.9 | 82.7 | **82.9** |
| | NaturalQuestions (EM) | 5-shot | 38.6 | 33.2 | **41.5** | 40.0 |
| | AGIEval (Acc.) | 0-shot | 57.5 | 75.8 | 60.6 | **79.6** |
| Code | HumanEval (Pass@1) | 0-shot | 43.3 | 53.0 | 54.9 | **65.2** |
| | MBPP (Pass@1) | 3-shot | 65.0 | 72.6 | 68.4 | **75.4** |
| | LiveCodeBench-Base (Pass@1) | 3-shot | 11.6 | 12.9 | 15.5 | **19.4** |
| | CRUXEval-I (Acc.) | 2-shot | 52.5 | 59.1 | 58.5 | **67.3** |
| | CRUXEval-O (Acc.) | 2-shot | 49.8 | 59.9 | 59.9 | **69.8** |
| Math | GSM8K (EM) | 8-shot | 81.6 | 88.3 | 83.5 | **89.3** |
| | MATH (EM) | 4-shot | 43.4 | 54.4 | 49.0 | **61.6** |
| | MGSM (EM) | 8-shot | 63.6 | 76.2 | 69.9 | **79.8** |
| | CMath (EM) | 3-shot | 78.7 | 84.5 | 77.3 | **90.7** |
| Chinese | CLUEWSC (EM) | 5-shot | 82.0 | 82.5 | **83.0** | 82.7 |
| | C-Eval (Acc.) | 5-shot | 81.4 | 89.2 | 72.5 | **90.1** |
| | CMMLU (Acc.) | 5-shot | 84.0 | **89.5** | 73.7 | 88.8 |
| | CMRC (EM) | 1-shot | **77.4** | 75.8 | 76.0 | 76.3 |
| | C3 (Acc.) | 0-shot | 77.4 | 76.7 | **79.7** | 78.6 |
| | CCPM (Acc.) | 0-shot | **93.0** | 88.5 | 78.6 | 92.0 |
| Multilingual | MMMLU-non-English (Acc.) | 5-shot | 64.0 | 74.8 | 73.8 | **79.4** |

</div>

> [!NOTE]
> Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks.
> For more evaluation details, please check our paper.
#### Context Window

<p align="center">
  <img width="80%" src="figures/niah.png">
</p>

Evaluation results on the `Needle In A Haystack` (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to **128K**.
### Chat Model

#### Standard Benchmarks (Models larger than 67B)

<div align="center">

| | **Benchmark (Metric)** | **DeepSeek V2-0506** | **DeepSeek V2.5-0905** | **Qwen2.5 72B-Inst.** | **Llama3.1 405B-Inst.** | **Claude-3.5-Sonnet-1022** | **GPT-4o 0513** | **DeepSeek V3** |
|---|---------------------|---------------------|----------------------|---------------------|----------------------|---------------------------|----------------|----------------|
| | Architecture | MoE | MoE | Dense | Dense | - | - | MoE |
| | # Activated Params | 21B | 21B | 72B | 405B | - | - | 37B |
| | # Total Params | 236B | 236B | 72B | 405B | - | - | 671B |
| English | MMLU (EM) | 78.2 | 80.6 | 85.3 | **88.6** | **88.3** | 87.2 | **88.5** |
| | MMLU-Redux (EM) | 77.9 | 80.3 | 85.6 | 86.2 | **88.9** | 88.0 | **89.1** |
| | MMLU-Pro (EM) | 58.5 | 66.2 | 71.6 | 73.3 | **78.0** | 72.6 | 75.9 |
| | DROP (3-shot F1) | 83.0 | 87.8 | 76.7 | 88.7 | 88.3 | 83.7 | **91.6** |
| | IF-Eval (Prompt Strict) | 57.7 | 80.6 | 84.1 | 86.0 | **86.5** | 84.3 | 86.1 |
| | GPQA-Diamond (Pass@1) | 35.3 | 41.3 | 49.0 | 51.1 | **65.0** | 49.9 | 59.1 |
| | SimpleQA (Correct) | 9.0 | 10.2 | 9.1 | 17.1 | 28.4 | **38.2** | 24.9 |
| | FRAMES (Acc.) | 66.9 | 65.4 | 69.8 | 70.0 | 72.5 | **80.5** | 73.3 |
| | LongBench v2 (Acc.) | 31.6 | 35.4 | 39.4 | 36.1 | 41.0 | 48.1 | **48.7** |
| Code | HumanEval-Mul (Pass@1) | 69.3 | 77.4 | 77.3 | 77.2 | 81.7 | 80.5 | **82.6** |
| | LiveCodeBench (Pass@1-COT) | 18.8 | 29.2 | 31.1 | 28.4 | 36.3 | 33.4 | **40.5** |
| | LiveCodeBench (Pass@1) | 20.3 | 28.4 | 28.7 | 30.1 | 32.8 | 34.2 | **37.6** |
| | Codeforces (Percentile) | 17.5 | 35.6 | 24.8 | 25.3 | 20.3 | 23.6 | **51.6** |
| | SWE Verified (Resolved) | - | 22.6 | 23.8 | 24.5 | **50.8** | 38.8 | 42.0 |
| | Aider-Edit (Acc.) | 60.3 | 71.6 | 65.4 | 63.9 | **84.2** | 72.9 | 79.7 |
| | Aider-Polyglot (Acc.) | - | 18.2 | 7.6 | 5.8 | 45.3 | 16.0 | **49.6** |
| Math | AIME 2024 (Pass@1) | 4.6 | 16.7 | 23.3 | 23.3 | 16.0 | 9.3 | **39.2** |
| | MATH-500 (EM) | 56.3 | 74.7 | 80.0 | 73.8 | 78.3 | 74.6 | **90.2** |
| | CNMO 2024 (Pass@1) | 2.8 | 10.8 | 15.9 | 6.8 | 13.1 | 10.8 | **43.2** |
| Chinese | CLUEWSC (EM) | 89.9 | 90.4 | **91.4** | 84.7 | 85.4 | 87.9 | 90.9 |
| | C-Eval (EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | **86.5** |
| | C-SimpleQA (Correct) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | **64.8** |

</div>

> [!NOTE]
> All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
#### Open Ended Generation Evaluation

<div align="center">

| Model | Arena-Hard | AlpacaEval 2.0 |
|-------|------------|----------------|
| DeepSeek-V2.5-0905 | 76.2 | 50.5 |
| Qwen2.5-72B-Instruct | 81.2 | 49.1 |
| LLaMA-3.1 405B | 69.3 | 40.5 |
| GPT-4o-0513 | 80.4 | 51.1 |
| Claude-Sonnet-3.5-1022 | 85.2 | 52.0 |
| DeepSeek-V3 | **85.5** | **70.0** |

</div>

> [!NOTE]
> English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
## 5. Chat Website & API Platform

You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in).

We also provide an OpenAI-compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/).
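Because the API is OpenAI-compatible, the standard `openai` Python SDK works against it. A minimal sketch follows; verify the base URL and model name against the platform documentation before relying on them.

```python
from openai import OpenAI

# Minimal sketch of calling the OpenAI-compatible endpoint.
# Base URL and model name should be checked against platform.deepseek.com.
client = OpenAI(
    api_key="sk-...",                      # your DeepSeek API key
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-chat",                 # chat model served by DeepSeek-V3
    messages=[{"role": "user", "content": "Say hello in Zig."}],
    temperature=0.7,
    max_tokens=200,
)
print(resp.choices[0].message.content)
```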
## 6. How to Run Locally

DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:

1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
2. **SGLang**: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction [coming soon](https://github.com/sgl-project/sglang/issues/2591).
3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
5. **vLLM**: Supports the DeepSeek-V3 model in FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
6. **LightLLM**: Supports efficient single-node or multi-node deployment for FP8 and BF16.
7. **AMD GPU**: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
8. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices.
Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation.

Here is an example of converting FP8 weights to BF16:

```shell
cd inference
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
```
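For intuition, the core of that conversion looks roughly like the sketch below, assuming the block-scaled FP8 format described in [README_WEIGHTS.md](./README_WEIGHTS.md) (one scale inverse per 128×128 weight block). Tensor and function names are illustrative, and the real script also walks the safetensors shards.

```python
import torch

# Sketch of the per-tensor FP8 -> BF16 cast, assuming one scale inverse per
# 128x128 weight block. Names here are illustrative, not the script's API.
BLOCK = 128

def fp8_weight_to_bf16(weight: torch.Tensor, scale_inv: torch.Tensor) -> torch.Tensor:
    w = weight.to(torch.float32)
    # Expand the per-block scales to elementwise scales and multiply.
    s = scale_inv.repeat_interleave(BLOCK, 0).repeat_interleave(BLOCK, 1)
    s = s[: w.shape[0], : w.shape[1]]      # trim padding for ragged edges
    return (w * s).to(torch.bfloat16)

weight = torch.randn(256, 512).to(torch.float8_e4m3fn)
scale_inv = torch.ones(256 // BLOCK, 512 // BLOCK)
bf16 = fp8_weight_to_bf16(weight, scale_inv)
```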
> [!NOTE]
> DeepSeek-V3 is not yet directly supported in Hugging Face's Transformers.
### 6.1 Inference with DeepSeek-Infer Demo (example only)

#### System Requirements

> [!NOTE]
> Linux with Python 3.10 only. Mac and Windows are not supported.

Dependencies:

```pip-requirements
torch==2.4.1
triton==3.0.0
transformers==4.46.3
safetensors==0.4.5
```
#### Model Weights & Demo Code Preparation

First, clone our DeepSeek-V3 GitHub repository:

```shell
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
```

Navigate to the `inference` folder and install the dependencies listed in `requirements.txt`. The easiest way is to use a package manager like `conda` or `uv` to create a new virtual environment and install the dependencies.

```shell
cd DeepSeek-V3/inference
pip install -r requirements.txt
```

Download the model weights from Hugging Face, and put them into the `/path/to/DeepSeek-V3` folder.
#### Model Weights Conversion

Convert the Hugging Face model weights into the demo's sharded format. With `--n-experts 256` and `--model-parallel 16`, the model's 256 experts are split across 16 model-parallel ranks, i.e. 16 experts per rank:

```shell
python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16
```
#### Run

Then you can chat with DeepSeek-V3:

```shell
torchrun --nnodes 2 --nproc-per-node 8 --node-rank $RANK --master-addr $ADDR generate.py --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200
```

Here `$RANK` is the rank of the current node (0 or 1 for a two-node run) and `$ADDR` is the address of the master node.

Or batch inference on a given file:

```shell
torchrun --nnodes 2 --nproc-per-node 8 --node-rank $RANK --master-addr $ADDR generate.py --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --input-file $FILE
```
### 6.2 Inference with SGLang (recommended)

[SGLang](https://github.com/sgl-project/sglang) currently supports [MLA optimizations](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations), [DP Attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models), FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.

Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.

SGLang also supports [multi-node tensor parallelism](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208), enabling you to run this model on multiple network-connected machines.

Multi-Token Prediction (MTP) is in development, and progress can be tracked in the [optimization plan](https://github.com/sgl-project/sglang/issues/2591).

Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
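For readers unfamiliar with why MTP helps serving: the extra predicted tokens can act as a draft that the main model verifies in a single forward pass. A toy greedy-verification loop is sketched below; `draft` and `verify` are hypothetical stand-ins for the real model calls, not SGLang's API.

```python
import torch

# Toy sketch of speculative decoding with MTP-style drafts: the MTP head
# proposes k future tokens, the main model scores context + draft in one
# pass, and we keep the longest agreeing prefix.
def speculative_step(draft, verify, ctx: list[int], k: int = 2) -> list[int]:
    proposal = draft(ctx, k)                  # k draft tokens from the MTP head
    logits = verify(ctx + proposal)           # one pass over context + draft
    accepted = []
    for i, tok in enumerate(proposal):
        # Greedy check: does the main model agree with the i-th draft token?
        if logits[len(ctx) + i - 1].argmax().item() != tok:
            break
        accepted.append(tok)
    return accepted

# Trivial stand-ins so the sketch runs: draft repeats the last token,
# verify returns all-zero logits over a 10-token vocabulary.
draft = lambda ctx, k: [ctx[-1]] * k
verify = lambda toks: torch.zeros(len(toks), 10)
print(speculative_step(draft, verify, [1, 2, 3]))
```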
### 6.3 Inference with LMDeploy (recommended)

[LMDeploy](https://github.com/InternLM/lmdeploy), a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows.

For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to: https://github.com/InternLM/lmdeploy/issues/2960
### 6.4 Inference with TRT-LLM (recommended)

[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/deepseek_v3.
### 6.5 Inference with vLLM (recommended)

[vLLM](https://github.com/vllm-project/vllm) v0.6.6 supports DeepSeek-V3 inference in FP8 and BF16 modes on both NVIDIA and AMD GPUs. Aside from standard techniques, vLLM offers _pipeline parallelism_, allowing you to run this model on multiple network-connected machines. For detailed guidance, please refer to the [vLLM instructions](https://docs.vllm.ai/en/latest/serving/distributed_serving.html). Please feel free to follow [the enhancement plan](https://github.com/vllm-project/vllm/issues/11539) as well.
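For offline batch use, a minimal vLLM sketch looks like the following; the parallelism sizes are placeholders that you should match to your own cluster (see the vLLM distributed serving docs linked above).

```python
from vllm import LLM, SamplingParams

# Minimal offline-inference sketch with vLLM. Parallel sizes are
# illustrative: tensor parallelism within a node, pipeline parallelism
# across nodes for multi-machine serving.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3",
    tensor_parallel_size=8,        # GPUs per node
    pipeline_parallel_size=2,      # number of nodes
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.7, max_tokens=200)
out = llm.generate(["Explain Multi-head Latent Attention briefly."], params)
print(out[0].outputs[0].text)
```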
### 6.6 Inference with LightLLM (recommended)

[LightLLM](https://github.com/ModelTC/lightllm/tree/main) v1.0.1 supports single-machine and multi-machine tensor-parallel deployment for DeepSeek-R1 (FP8/BF16) and provides mixed-precision deployment, with more quantization modes continuously integrated. For more details, please refer to the [LightLLM instructions](https://lightllm-en.readthedocs.io/en/latest/getting_started/quickstart.html). Additionally, LightLLM offers PD-disaggregation deployment for DeepSeek-V2, and the implementation of PD-disaggregation for DeepSeek-V3 is in development.
### 6.7 Recommended Inference Functionality with AMD GPUs

In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the [SGLang instructions](#62-inference-with-sglang-recommended).
### 6.8 Recommended Inference Functionality with Huawei Ascend NPUs

The [MindIE](https://www.hiascend.com/en/software/mindie) framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the [instructions here](https://modelers.cn/models/MindIE/deepseekv3).
## 7. License

This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V3 Base/Chat models is subject to [the Model License](LICENSE-MODEL). The DeepSeek-V3 series (including Base and Chat) supports commercial use.
## 8. Citation

```
@misc{deepseekai2024deepseekv3technicalreport,
      title={DeepSeek-V3 Technical Report},
      author={DeepSeek-AI},
      year={2024},
      eprint={2412.19437},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.19437},
}
```
## 9. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
dzv3-logo.svg — 31 lines, new file
@@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg width="650" height="200" viewBox="0 0 650 200" xmlns="http://www.w3.org/2000/svg">
  <!-- Zig logo (direct from original) -->
  <g transform="translate(20, 20) scale(1.2)">
    <g fill="#f7a41d">
      <polygon points="46,22 28,44 19,30"/>
      <polygon points="46,22 33,33 28,44 22,44 22,95 31,95 20,100 12,117 0,117 0,22" shape-rendering="crispEdges"/>
      <polygon points="31,95 12,117 4,106"/>
      <polygon points="56,22 62,36 37,44"/>
      <polygon points="56,22 111,22 111,44 37,44 56,32" shape-rendering="crispEdges"/>
      <polygon points="116,95 97,117 90,104"/>
      <polygon points="116,95 100,104 97,117 42,117 42,95" shape-rendering="crispEdges"/>
      <polygon points="150,0 52,117 3,140 101,22"/>
    </g>
  </g>

  <!-- DeepSeek whale (direct from original) - now in Zig yellow and even bigger, moved down -->
  <g transform="translate(190, 25) scale(2.45)">
    <path id="path" d="M55.6128 3.47119C55.0175 3.17944 54.7611 3.73535 54.413 4.01782C54.2939 4.10889 54.1932 4.22729 54.0924 4.33667C53.2223 5.26587 52.2057 5.87646 50.8776 5.80347C48.9359 5.69409 47.2781 6.30469 45.8126 7.78979C45.5012 5.9585 44.4663 4.86499 42.8909 4.16357C42.0667 3.79907 41.2332 3.43457 40.6561 2.64185C40.2532 2.07715 40.1432 1.44849 39.9418 0.828857C39.8135 0.455322 39.6853 0.0725098 39.2548 0.00878906C38.7877 -0.0639648 38.6045 0.327637 38.4213 0.655762C37.6886 1.99512 37.4047 3.47119 37.4321 4.96533C37.4962 8.32739 38.9159 11.0059 41.7369 12.9102C42.0575 13.1289 42.1399 13.3474 42.0392 13.6665C41.8468 14.3225 41.6178 14.9602 41.4164 15.6162C41.2881 16.0354 41.0957 16.1265 40.647 15.9441C39.0991 15.2974 37.7618 14.3406 36.5803 13.1836C34.5745 11.2429 32.761 9.10181 30.4988 7.42529C29.9675 7.03345 29.4363 6.66919 28.8867 6.32275C26.5786 4.08154 29.189 2.24097 29.7935 2.02246C30.4254 1.79468 30.0133 1.01099 27.9708 1.02026C25.9283 1.0293 24.0599 1.71265 21.6786 2.62378C21.3306 2.7605 20.9641 2.8606 20.5886 2.94263C18.4271 2.53271 16.1831 2.44141 13.8384 2.70581C9.42371 3.19775 5.89758 5.28418 3.30554 8.84668C0.191406 13.1289 -0.54126 17.9941 0.356323 23.0691C1.29968 28.4172 4.02905 32.8452 8.22388 36.3076C12.5745 39.8972 17.5845 41.6558 23.2997 41.3186C26.771 41.1182 30.6361 40.6536 34.9958 36.9636C36.0948 37.5103 37.2489 37.7288 39.1632 37.8928C40.6378 38.0295 42.0575 37.8201 43.1565 37.5923C44.8784 37.2278 44.7594 35.6333 44.1366 35.3418C39.09 32.9912 40.1981 33.9478 39.1907 33.1733C41.7552 30.1394 45.6204 26.9868 47.1316 16.7732C47.2506 15.9624 47.1499 15.4521 47.1316 14.7961C47.1224 14.3953 47.214 14.2405 47.672 14.1948C48.9359 14.0491 50.1632 13.7029 51.2898 13.0833C54.5596 11.2976 55.8784 8.36377 56.1898 4.84692C56.2357 4.30933 56.1807 3.75342 55.6128 3.47119ZM27.119 35.123C22.2281 31.2783 19.856 30.0117 18.8759 30.0664C17.96 30.1211 18.1249 31.1689 18.3263 31.8523C18.537 32.5264 18.8118 32.9912 19.1964 33.5833C19.462 33.9751 19.6453 34.5581 18.9309 34.9956C17.3555 35.9705 14.6169 34.6675 14.4886 34.6038C11.3014 32.7268 8.63611 30.2485 6.75842 26.8594C4.94495 23.5974 3.89172 20.0989 3.71765 16.3633C3.67188 15.4614 3.9375 15.1423 4.83508 14.9785C6.0166 14.7598 7.23474 14.7141 8.41626 14.8872C13.408 15.6162 17.6577 17.8484 21.2206 21.3835C23.2539 23.397 24.7926 25.8025 26.3772 28.1531C28.0624 30.6494 29.8759 33.0276 32.184 34.9773C32.9991 35.6606 33.6494 36.1799 34.2722 36.5627C32.3947 36.7722 29.2622 36.8179 27.119 35.123ZM29.4637 20.0442C29.4637 19.6433 29.7843 19.3245 30.1874 19.3245C30.2789 19.3245 30.3613 19.3425 30.4346 19.3699C30.5354 19.4065 30.627 19.4612 30.7002 19.543C30.8285 19.6707 30.9017 19.8528 30.9017 20.0442C30.9017 20.4451 30.5812 20.7639 30.1782 20.7639C29.7751 20.7639 29.4637 20.4451 29.4637 20.0442ZM36.7452 23.7798C36.2781 23.9712 35.811 24.135 35.3622 24.1533C34.6661 24.1897 33.9059 23.9072 33.4938 23.561C32.8527 23.0234 32.3947 22.7229 32.2023 21.7844C32.1199 21.3835 32.1656 20.7639 32.239 20.4087C32.4038 19.6433 32.2206 19.1514 31.6803 18.7048C31.2406 18.3403 30.6819 18.2402 30.0682 18.2402C29.8392 18.2402 29.6287 18.1399 29.4729 18.0579C29.2164 17.9304 29.0059 17.6116 29.2073 17.2197C29.2714 17.0923 29.5829 16.7825 29.6561 16.7278C30.4896 16.2539 31.4513 16.4089 32.3397 16.7642C33.1641 17.1013 33.7869 17.7209 34.6844 18.5955C35.6003 19.6523 35.7651 19.9441 36.2872 20.7366C36.6995 21.3562 37.075 21.9939 37.3314 22.7229C37.4871 23.1785 37.2856 23.552 36.7452 23.7798Z" fill="#f7a41d"/>
  </g>

  <!-- Main title with V3 integrated - moved down and to the right -->
  <g transform="translate(460, 100)">
    <text x="0" y="0" font-family="Arial, sans-serif" font-size="48" font-weight="bold" text-anchor="middle" fill="#333333">DeepZig V3</text>
  </g>

  <!-- Tagline positioned to align with bottom of Z (around line 172) -->
  <g transform="translate(400, 155)">
    <text x="0" y="0" font-family="Arial, sans-serif" font-size="20" font-weight="normal" text-anchor="middle" fill="#666666">High-performance LLM inference engine written in Zig</text>
  </g>
</svg>