From 81969b0a0671a881cd1356d4a8ffc8c62b954fc1 Mon Sep 17 00:00:00 2001
From: Afueth Thomas <97304915+Afueth@users.noreply.github.com>
Date: Mon, 27 Jan 2025 15:11:49 +0530
Subject: [PATCH] Update README.md

Updated the introductory sentence in the "Introduction" section to improve
clarity and readability.

Changed:
"We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token."

To:
"DeepSeek-V3 is a powerful Mixture-of-Experts (MoE) language model with 671 billion total parameters, of which 37 billion are activated per token."

This revision ensures conciseness and better emphasis on key details.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7ecf87e..39a2c0f 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,7 @@
 ## 1. Introduction
 
-We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.
+DeepSeek-V3 is a powerful Mixture-of-Experts (MoE) language model with 671 billion total parameters, of which 37 billion are activated per token.
 To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
 Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
 We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.