  <a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL" style="margin: 2px;">
    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
<a href="https://github.com/myakoobi" style="margin: 2px;">
|
||||
<img alt="Model License" src="https://avatars.githubusercontent.com/u/170466781?v=4" style="display: inline-block; vertical-align: middle;"/>
|
||||
</a>
|
||||
</div>

<p align="center">
  <a href="DeepSeek_V3.pdf"><b>Paper Link</b>👁️</a>
</p>

## 1. Introduction

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.

<p align="center">
  <img width="80%" src="figures/benchmark.png">
</p>

---

## 3. Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :--------------: | :---------------: | :-------------------: | :----------------: | :--------------------------------------------------------------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |

</div>

> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.

To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).

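Whichever route you pick, the checkpoints from the table above first have to be on disk. A minimal sketch using the `huggingface-cli` tool (the `--local-dir` target is an illustrative placeholder):

```
# Fetch the DeepSeek-V3 chat weights from Hugging Face
# (needs `pip install -U "huggingface_hub[cli]"`; target directory is an example).
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./DeepSeek-V3
```
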
For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.

## 4. Evaluation Results

### Base Model

#### Standard Benchmarks

<div align="center">

| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
| --- | ------------------ | ------- | ----------- | ----------- | ------------- | ----------- |
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |

</div>

> For more evaluation details, please check our paper.

#### Context Window

<p align="center">
  <img width="80%" src="figures/niah.png">
</p>

Evaluation results on the `Needle In A Haystack` (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to **128K**.

### Chat Model

#### Standard Benchmarks (Models larger than 67B)

<div align="center">

| | **Benchmark (Metric)** | **DeepSeek V2-0506** | **DeepSeek V2.5-0905** | **Qwen2.5 72B-Inst.** | **Llama3.1 405B-Inst.** | **Claude-3.5-Sonnet-1022** | **GPT-4o 0513** | **DeepSeek V3** |
| --- | ---------------------- | -------------------- | ---------------------- | --------------------- | ----------------------- | -------------------------- | --------------- | --------------- |
| | Architecture | MoE | MoE | Dense | Dense | - | - | MoE |
| | # Activated Params | 21B | 21B | 72B | 405B | - | - | 37B |
| | # Total Params | 236B | 236B | 72B | 405B | - | - | 671B |

</div>

> [!NOTE]
> All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.

#### Open Ended Generation Evaluation

<div align="center">

| Model | Arena-Hard | AlpacaEval 2.0 |
| ---------------------- | ---------- | -------------- |
| DeepSeek-V2.5-0905 | 76.2 | 50.5 |
| Qwen2.5-72B-Instruct | 81.2 | 49.1 |
| LLaMA-3.1 405B | 69.3 | 40.5 |

</div>

> [!NOTE]
> English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.

## 5. Chat Website & API Platform

You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in)

We also provide an OpenAI-compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)

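Because the API follows the OpenAI chat-completions format, any OpenAI-style client can talk to it. Below is a minimal `curl` sketch; the base URL `https://api.deepseek.com`, the model name `deepseek-chat`, and the `$DEEPSEEK_API_KEY` variable are assumptions/placeholders to adapt to your own account:

```
# Minimal OpenAI-compatible chat completion request (endpoint and model name assumed).
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello!"}]}'
```
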
## 6. How to Run Locally

Since only FP8 weights are released, a conversion script is provided for generating BF16 weights for experimentation:

```
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
```

> [!NOTE]
> Hugging Face's Transformers has not been directly supported yet.

### 6.1 Inference with DeepSeek-Infer Demo (example only)

> Linux with Python 3.10 only. Mac and Windows are not supported.

Dependencies:

```
torch==2.4.1
triton==3.0.0
transformers==4.46.3
safetensors==0.4.5
```

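One way to install these pins is directly via `pip` (assuming a CUDA-capable Linux environment per the note above; a `requirements.txt` carrying the same pins works just as well):

```
# Install the exact dependency versions listed above.
pip install torch==2.4.1 triton==3.0.0 transformers==4.46.3 safetensors==0.4.5
```
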
#### Model Weights & Demo Code Preparation

First, clone our DeepSeek-V3 GitHub repository:

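A sketch of the clone step, assuming the standard GitHub URL and that the demo code lives in the `inference` subdirectory referenced by the conversion command above:

```
# Clone the repository and enter the inference demo directory.
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
cd DeepSeek-V3/inference
```
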
### 6.2 Inference with SGLang (recommended)

Multi-Token Prediction (MTP) is in development, and progress can be tracked in the optimization plan.

Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3

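For orientation, a typical SGLang launch for this model looks like the sketch below; the flag set and parallelism degree are assumptions, so defer to the linked instructions for the authoritative command:

```
# Start an OpenAI-compatible SGLang server (tensor-parallel degree is illustrative).
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code
```
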
### 6.3 Inference with LMDeploy (recommended)

[LMDeploy](https://github.com/InternLM/lmdeploy), a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows.

For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to: https://github.com/InternLM/lmdeploy/issues/2960

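As a rough sketch of the online-deployment path, LMDeploy's `api_server` command exposes an OpenAI-compatible endpoint; the flags shown are assumptions, so follow the issue above for the vetted recipe:

```
# Serve the model behind an OpenAI-compatible HTTP API (flag values illustrative).
lmdeploy serve api_server deepseek-ai/DeepSeek-V3 --tp 8
```
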
### 6.4 Inference with TRT-LLM (recommended)

[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRT-LLM with DeepSeek-V3 support through the following link to try the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3

### 6.5 Inference with vLLM (recommended)

[vLLM](https://github.com/vllm-project/vllm) v0.6.6 supports DeepSeek-V3 inference for FP8 and BF16 modes on both NVIDIA and AMD GPUs. Aside from standard techniques, vLLM offers _pipeline parallelism_, allowing you to run this model on multiple machines connected by a network. For detailed guidance, please refer to the [vLLM instructions](https://docs.vllm.ai/en/latest/serving/distributed_serving.html). Please feel free to follow [the enhancement plan](https://github.com/vllm-project/vllm/issues/11539) as well.

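A sketch of such a multi-node launch with `vllm serve`; the parallelism degrees below are illustrative placeholders sized for two 8-GPU nodes, so adapt them to your own cluster per the linked instructions:

```
# Tensor parallelism within a node, pipeline parallelism across two nodes (degrees are examples).
vllm serve deepseek-ai/DeepSeek-V3 --tensor-parallel-size 8 --pipeline-parallel-size 2 --trust-remote-code
```
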
### 6.6 Recommended Inference Functionality with AMD GPUs

In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the [SGLang instructions](#62-inference-with-sglang-recommended).

### 6.7 Recommended Inference Functionality with Huawei Ascend NPUs

The [MindIE](https://www.hiascend.com/en/software/mindie) framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the [instructions here](https://modelers.cn/models/MindIE/deepseekv3).

## 7. License

This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V3 Base/Chat models is subject to [the Model License](LICENSE-MODEL). DeepSeek-V3 series (including Base and Chat) supports commercial use.

## 8. Citation

```
@misc{deepseekai2024deepseekv3technicalreport,
      title={DeepSeek-V3 Technical Report},
      author={DeepSeek-AI},
      year={2024},
      eprint={2412.19437},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.19437},
}
```

## 9. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).