</div>
<hr>
<div align="center" style="line-height: 1;">
  <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
    <img alt="WeChat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<p align="center">
  <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>

1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)

## 1. Introduction

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.

**Post-Training: Large-Scale Reinforcement Learning on the Base Model**

- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) reasoning for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that the reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advances in this area.
- We introduce our pipeline for developing DeepSeek-R1. The pipeline incorporates two RL stages, aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities. We believe this pipeline will benefit the industry by producing better models.

---

**Distillation: Smaller Models Can Be Powerful Too**

- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance than the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better small models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that these distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on the Qwen2.5 and Llama3 series to the community. A minimal loading sketch is shown below.
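
The following is a minimal sketch of loading one of the distilled checkpoints with Hugging Face Transformers, not an official serving setup: the Hub ID is assumed to follow the naming used on the deepseek-ai Hugging Face organization, and the sampling settings mirror the evaluation setup described below (temperature 0.6, top-p 0.95).

```python
# Minimal sketch, not an official serving setup. Assumes the Hub ID below exists
# and that enough GPU memory is available for the 7B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed Hugging Face Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Sample a single response; the model emits its chain of thought before the final answer.
outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```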

## 3. Model Downloads

## 4. Evaluation Results

### DeepSeek-R1-Evaluation

For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
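
For reference, a minimal sketch of how pass@1 (taken here as the average correctness over the k sampled responses) and the cons@64 majority-vote metric reported for the distilled models below can be computed for a single query; the answers and correctness labels are hypothetical.

```python
from collections import Counter

def pass_at_1(correct: list[bool]) -> float:
    """Estimate pass@1 as the mean correctness over k sampled responses."""
    return sum(correct) / len(correct)

def cons_at_k(answers: list[str], reference: str) -> float:
    """Majority-vote consensus: 1.0 if the most frequent answer matches the reference."""
    majority_answer, _ = Counter(answers).most_common(1)[0]
    return float(majority_answer == reference)

# Hypothetical example: 64 sampled answers to one query, 48 of which are correct.
answers = ["42"] * 48 + ["41"] * 16
print(pass_at_1([a == "42" for a in answers]))  # 0.75
print(cons_at_k(answers, "42"))                 # 1.0 (the majority answer is correct)
```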

<div align="center">

| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |

</div>

### Distilled Model Evaluation

<div align="center">

| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|-------|------------------|-------------------|-----------------|---------------------|----------------------|-------------------|

</div>

## 5. Chat Website & API Platform

You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.

We also provide an OpenAI-compatible API on the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
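
Because the API is OpenAI-compatible, it can be called through the standard `openai` Python client. A minimal sketch follows; the base URL and the `deepseek-reasoner` model name are assumptions based on DeepSeek's platform documentation, so check [platform.deepseek.com](https://platform.deepseek.com/) for the current values.

```python
# Minimal sketch of calling DeepSeek-R1 through the OpenAI-compatible API.
# The base URL and model name below are assumptions; verify them on the platform.
from openai import OpenAI

client = OpenAI(api_key="<your DeepSeek API key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed model name for DeepSeek-R1 on the platform
    messages=[{"role": "user", "content": "How many prime numbers are there below 100?"}],
)

message = response.choices[0].message
# DeepSeek returns the chain of thought in a vendor-specific field, if exposed.
print(getattr(message, "reasoning_content", None))
print(message.content)  # final answer
```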

## 7. License

This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen2.5 series](https://github.com/QwenLM/Qwen2.5), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [Llama 3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [Llama 3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 8. Citation

```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      ...
}
```

## 9. Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).