mirror of https://github.com/deepseek-ai/DeepSeek-Coder.git

commit 628327ad2a (parent ae0c703eb8)

    update table
@@ -213,7 +213,8 @@ Only `pass@1` results on HumanEval (Python and Multilingual), MBPP, DS-1000 are
 <img src="pictures/table.png" alt="table" width="80%">
 </p>
 
-The result shows that DeepSeek-Coder-Base-33B significantly outperforms existing open-source code LLMs. Compared with CodeLLama34B, it leads by 7.9%, 9.3%, 10.8% and 5.9% respectively on HumanEval Python, HumanEval Multilingual, MBPP and DS-1000.
+The result shows that DeepSeek-Coder-Base-33B significantly outperforms existing open-source code LLMs. Compared with CodeLLama-34B, it leads by 7.9%, 9.3%, 10.8% and 5.9% respectively on HumanEval Python, HumanEval Multilingual, MBPP and DS-1000.
 Surprisingly, our DeepSeek-Coder-Base-7B reaches the performance of CodeLlama-34B.
 And the DeepSeek-Coder-Instruct-33B model after instruction tuning outperforms GPT35-turbo on HumanEval and achieves comparable result with GPT35-turbo on MBPP.
pictures/table.png: binary file not shown (before: 553 KiB, after: 362 KiB)
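
For context on the metric named in the hunk header: `pass@1` is the k=1 case of the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021), which draws n samples per problem, counts the c samples that pass the unit tests, and estimates 1 - C(n-c, k)/C(n, k). A minimal Python sketch, not part of this commit (the function name and the use of NumPy are illustrative):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled per problem
    c: completions that pass all unit tests
    k: the k in pass@k (k=1 for the numbers in this README)
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 10 samples with 4 passing: pass@1 = 0.4
print(pass_at_k(10, 4, 1))
```

For k = 1 this reduces to c/n, i.e. the fraction of sampled completions that pass.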