Fix the Readme.md

harshsj1504 2025-01-30 04:54:15 +05:30
parent b5d872ead0
commit e0dde63571


@@ -47,14 +47,14 @@
## 1. Introduction
-We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.
-To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
-Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
-We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
+We present DeepSeek-V3, a powerful Mixture-of-Experts (MoE) language model with a total of 671B parameters, of which 37B are activated per token.
+To achieve efficient inference and cost-effective training, DeepSeek-V3 utilizes Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
+Furthermore, DeepSeek-V3 introduces an auxiliary-loss-free strategy for load balancing and establishes a multi-token prediction training objective for enhanced performance.
+We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to maximize its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
-Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
-In addition, its training process is remarkably stable.
-Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
+Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for full training.
+Additionally, its training process is remarkably stable.
+Throughout the entire training process, we did not experience any irrecoverable loss spikes or need to perform any rollbacks.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
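
For intuition on the 671B-total / 37B-activated figures in the paragraph above: the gap comes from sparse expert routing, where each token is scored against all experts but only a few of them actually run. The snippet below is a minimal, hypothetical sketch of generic top-k MoE routing in PyTorch; it is not DeepSeekMoE's actual implementation, and all names and sizes (`ToyMoELayer`, `num_experts=16`, `top_k=2`) are illustrative assumptions.

```python
# Toy sketch of sparse top-k MoE routing (illustrative only, not DeepSeekMoE).
# Every token is scored against all experts, but only the top-k experts run,
# so the activated parameter count per token is a small fraction of the total.
import torch
import torch.nn as nn


class ToyMoELayer(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 16, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts, bias=False)  # gating scores
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        scores = self.router(x).softmax(dim=-1)         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only k experts per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel() == 0:
                continue  # this expert is idle for the current batch
            out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out


if __name__ == "__main__":
    layer = ToyMoELayer()
    tokens = torch.randn(8, 64)
    print(layer(tokens).shape)  # torch.Size([8, 64])
```

With 16 experts and `top_k=2`, only about an eighth of the expert parameters run per token; the real architecture differs in routing details and scale, but the same sparsity principle is what makes 37B of 671B parameters active per token.
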
@@ -65,24 +65,24 @@ Throughout the entire training process, we did not experience any irrecoverable
**Architecture: Innovative Load Balancing Strategy and Training Objective**
-- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
-- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
-It can also be used for speculative decoding for inference acceleration.
+- Building on the efficient architecture of DeepSeek-V2, we introduce an auxiliary-loss-free strategy for load balancing, minimizing performance degradation caused by load balancing constraints.
+- We investigate a Multi-Token Prediction (MTP) objective and demonstrate its benefits to model performance.
+It can also be used for speculative decoding to accelerate inference.
---
**Pre-Training: Towards Ultimate Training Efficiency**
-- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
-- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
-This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
-- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
+- We develop an FP8 mixed-precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training for extremely large-scale models.
+- By co-designing algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, achieving near-complete computation-communication overlap. This significantly enhances training efficiency and reduces costs, allowing us to scale up the model size without additional overhead.
+- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
---
**Post-Training: Knowledge Distillation from DeepSeek-R1**
-- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain a control over the output style and length of DeepSeek-V3.
+- We introduce an innovative methodology to distill reasoning capabilities from the long Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3.
+Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3, significantly improving its reasoning performance while maintaining control over its output style and length.
---
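
For intuition on the auxiliary-loss-free load balancing mentioned above: one way to balance expert load without an auxiliary loss term is to keep a per-expert bias that is added to the router scores only when selecting the top-k experts, and to nudge that bias after each batch so overloaded experts are picked less often. The sketch below illustrates that general idea under stated assumptions; it is not the actual DeepSeek-V3 implementation, and `update_rate`, the tensor shapes, and the sign-based update rule are assumptions made for illustration.

```python
# Hypothetical sketch of bias-based, auxiliary-loss-free load balancing for
# MoE routing: a per-expert bias steers which experts get selected, while the
# combination weights applied to expert outputs stay unbiased.
import torch


def route_with_bias(scores: torch.Tensor, bias: torch.Tensor, top_k: int = 2):
    """scores: (num_tokens, num_experts) router affinities; bias: (num_experts,)."""
    # The bias influences which experts are selected...
    _, idx = (scores + bias).topk(top_k, dim=-1)
    # ...but the combination weights come from the unbiased scores.
    weights = torch.gather(scores, -1, idx).softmax(dim=-1)
    return idx, weights


def update_bias(bias: torch.Tensor, idx: torch.Tensor, update_rate: float = 1e-3):
    """Nudge the per-expert bias toward a uniform expert load after each batch."""
    num_experts = bias.numel()
    load = torch.bincount(idx.flatten(), minlength=num_experts).float()
    # Underloaded experts get a boost, overloaded experts are penalized.
    return bias + update_rate * torch.sign(load.mean() - load)


if __name__ == "__main__":
    torch.manual_seed(0)
    scores = torch.randn(32, 8)   # 32 tokens, 8 experts
    bias = torch.zeros(8)
    idx, weights = route_with_bias(scores, bias)
    bias = update_bias(bias, idx)
    print(idx.shape, weights.shape, bias)
```

Because the bias only affects which experts are chosen, not the gating weights applied to their outputs, the balancing pressure does not directly distort the learned combination weights, which is the intuition behind avoiding a separate auxiliary balancing loss.
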