From af8bbe63722fe8665cfa76332387b58c1e7e53b9 Mon Sep 17 00:00:00 2001
From: Alimi Faith <73644238+ftoucch@users.noreply.github.com>
Date: Tue, 28 Jan 2025 23:05:01 +0000
Subject: [PATCH] Cleanup README

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7ecf87e..b1330d1 100644
--- a/README.md
+++ b/README.md
@@ -99,7 +99,7 @@ Throughout the entire training process, we did not experience any irrecoverable
 </div>
 
 > [!NOTE]
-> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.**
+> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
 
 To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).