From 43cffb1ae38c5bfcbe1bf6acfdb93ee8d386c50f Mon Sep 17 00:00:00 2001
From: Benjamin Winkler
Date: Fri, 7 Feb 2025 01:01:40 -0500
Subject: [PATCH] Minor grammatical tense corrections to README.md

Minor changes to correct grammatical tense for activities that took place
in the past.
---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 318a40c..5e38c25 100644
--- a/README.md
+++ b/README.md
@@ -62,7 +62,7 @@
 We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.
 To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
 Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
-We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
+We pre-trained DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
 Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
 Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
 In addition, its training process is remarkably stable.
@@ -78,17 +78,17 @@ Throughout the entire training process, we did not experience any irrecoverable
 **Architecture: Innovative Load Balancing Strategy and Training Objective**
 
 - On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
-- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
+- We investigated a Multi-Token Prediction (MTP) objective and proved it beneficial to model performance.
     It can also be used for speculative decoding for inference acceleration.
 
 ---
 
 **Pre-Training: Towards Ultimate Training Efficiency**
 
-- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
-- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
+- We designed an FP8 mixed precision training framework and, for the first time, validated the feasibility and effectiveness of FP8 training on an extremely large-scale model.
+- Through co-design of algorithms, frameworks, and hardware, we overcame the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
     This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
-- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
+- At an economical cost of only 2.664M H800 GPU hours, we completed the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training required only 0.1M GPU hours.
 
 ---
 