update README

Thomas 2025-01-29 08:41:36 +01:00 committed by GitHub
parent b5d872ead0
commit f57c42a8b8

@@ -48,7 +48,7 @@
## 1. Introduction
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
-To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
+To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in [DeepSeek-V2](https://github.com/deepseek-ai/DeepSeek-V2).
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
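
To make the "37B activated per token" figure above concrete, here is a minimal, hedged sketch of top-k expert routing in a Mixture-of-Experts layer. The dimensions, expert count, and gating details are toy placeholders and do not reflect DeepSeek-V3's actual MLA/DeepSeekMoE implementation.

```python
# Toy sketch of top-k MoE routing (illustrative only, not DeepSeek-V3's code).
# Only `top_k` of the `n_experts` expert MLPs run for each token, which is why
# the number of *activated* parameters is far smaller than the total.
import torch
import torch.nn as nn


class ToyMoELayer(nn.Module):
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(n_experts)]
        )

    def forward(self, x):                       # x: [tokens, dim]
        scores = self.gate(x).softmax(dim=-1)   # routing probabilities per expert
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):              # each token only visits its top-k experts
            for w, e in zip(weights[t], idx[t]):
                out[t] += w * self.experts[int(e)](x[t])
        return out


# Example: 4 tokens routed over 8 experts, but only 2 expert MLPs run per token.
print(ToyMoELayer()(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```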
@@ -65,7 +65,7 @@ Throughout the entire training process, we did not experience any irrecoverable
**Architecture: Innovative Load Balancing Strategy and Training Objective**
-- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
+- On top of the efficient architecture of [DeepSeek-V2](https://github.com/deepseek-ai/DeepSeek-V2), we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
It can also be used for speculative decoding for inference acceleration.
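
As a rough illustration of the auxiliary-loss-free balancing idea in the first bullet above, the sketch below adds a per-expert bias to the routing scores only when selecting the top-k experts, then nudges that bias after each batch according to expert load, instead of adding an auxiliary balancing loss. The function names, the update rate `gamma`, and the sign-based update rule are simplifications for illustration, not the exact procedure from the DeepSeek-V3 report.

```python
# Hedged sketch of auxiliary-loss-free load balancing (illustrative, not official).
import torch


def route_with_bias(scores, bias, top_k):
    # scores: [tokens, n_experts] token-to-expert affinities; bias: [n_experts]
    _, idx = (scores + bias).topk(top_k, dim=-1)  # bias influences *selection* only
    weights = torch.gather(scores, -1, idx)       # gating weights use the raw scores
    return weights, idx


def update_bias(bias, idx, n_experts, gamma=1e-3):
    # Count how many tokens each expert received in this batch.
    load = torch.bincount(idx.flatten(), minlength=n_experts).float()
    # Lower the bias of overloaded experts and raise it for underloaded ones,
    # steering future routing toward balance without any auxiliary loss term.
    return bias + gamma * torch.sign(load.mean() - load)
```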
@@ -80,7 +80,7 @@ Throughout the entire training process, we did not experience any irrecoverable
---
-**Post-Training: Knowledge Distillation from DeepSeek-R1**
+**Post-Training: Knowledge Distillation from [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1)**
- We introduce an innovative methodology to distill reasoning capabilities from a long-Chain-of-Thought (CoT) model, specifically one of the DeepSeek-R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.
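
As a rough, hedged sketch of the distillation recipe above: the student is fine-tuned with next-token prediction on reasoning traces produced by an R1-series teacher, with the loss restricted to the teacher-generated tokens. The model interface, batch fields, and hyperparameters below are placeholders, not the released pipeline.

```python
# Illustrative distillation step (assumes an HF-style causal LM exposing `.logits`).
import torch.nn.functional as F


def distill_step(student, batch, optimizer):
    # batch["input_ids"]: prompt + teacher-generated chain-of-thought + final answer
    # batch["loss_mask"]: 1.0 on the teacher-generated tokens the student should imitate
    logits = student(batch["input_ids"]).logits[:, :-1]
    targets = batch["input_ids"][:, 1:]
    mask = batch["loss_mask"][:, 1:].reshape(-1).float()
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    )
    loss = (loss * mask).sum() / mask.sum().clamp(min=1)  # average over imitated tokens
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```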