From 97b35f1fcadf435b41a835d5f4e86c8d9dc4497b Mon Sep 17 00:00:00 2001
From: luislopez-developer
Date: Mon, 3 Feb 2025 15:02:04 -0500
Subject: [PATCH] docs: remove redundant asterisks in note

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 7ecf87e..0b452f5 100644
--- a/README.md
+++ b/README.md
@@ -99,7 +99,7 @@ Throughout the entire training process, we did not experience any irrecoverable
 
 > [!NOTE]
-> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.**
+> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
 
 To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How_to Run_Locally](#6-how-to-run-locally).
@@ -249,7 +249,7 @@ python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-h
 ```
 
 > [!NOTE]
-> Hugging Face's Transformers has not been directly supported yet.**
+> Hugging Face's Transformers has not been directly supported yet.
 
 ### 6.1 Inference with DeepSeek-Infer Demo (example only)