mirror of https://github.com/deepseek-ai/DeepSeek-LLM.git
synced 2025-04-19 10:09:12 -04:00
Update README.md
This commit is contained in:
parent b39e9db138
commit 1b79a65a10
@@ -126,7 +126,7 @@ In line with Grok-1, we have evaluated the model's mathematical capabilities usi
<img src="images/mathexam.png" alt="result" width="70%">
</div>
-**Remark:** Some results are obtained by DeepSeek authors, while others are done by Grok-1 authors. We found some models count the score of the last question (Llemma 34b and Mammoth) while some (MetaMath-7B) are not in the original evaluation. In our evaluation, we count the last question score. Evaluation details are [here](https://github.com/deepseek-ai/DeepSeek-LLM/tree/HEAD/evaluation/hungarian_national_hs_solutions).
+**Remark:** Some results are obtained by the DeepSeek LLM authors, while others are produced by the Grok-1 authors. We found that some models (Llemma 34b and Mammoth) count the score of the last question, while others (MetaMath-7B) were not part of the original evaluation. In our evaluation, we count the last question's score. Evaluation details are [here](https://github.com/deepseek-ai/DeepSeek-LLM/tree/HEAD/evaluation/hungarian_national_hs_solutions).
---
@@ -299,7 +299,7 @@ print(generated_text)
### Could You Provide the tokenizer.model File for Model Quantization?
-DeepSeek Coder utilizes the [HuggingFace Tokenizer](https://huggingface.co/docs/tokenizers/index) to implement the Bytelevel-BPE algorithm, with specially designed pre-tokenizers to ensure optimal performance. Currently, there is no direct way to convert the tokenizer into a SentencePiece tokenizer. We are contributing to the open-source quantization methods facilitate the usage of HuggingFace Tokenizer.
+DeepSeek LLM utilizes the [HuggingFace Tokenizer](https://huggingface.co/docs/tokenizers/index) to implement the Byte-level BPE algorithm, with specially designed pre-tokenizers to ensure optimal performance. Currently, there is no direct way to convert the tokenizer into a SentencePiece tokenizer. We are contributing to open-source quantization methods to facilitate the usage of the HuggingFace Tokenizer.
#### GGUF(llama.cpp)
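The diff above notes that the tokenizer implements Byte-level BPE rather than SentencePiece. A minimal toy sketch of the idea, in pure Python (this is hypothetical illustration code, not the DeepSeek tokenizer's actual vocabulary or merge rules): the base vocabulary is the 256 byte values, so any Unicode string is encodable with no unknown-token fallback, and training repeatedly merges the most frequent adjacent pair into a new token id.

```python
from collections import Counter

def most_frequent_pair(ids: list[int]) -> tuple[int, int]:
    """Return the most common adjacent token-id pair."""
    pairs = Counter(zip(ids, ids[1:]))
    return max(pairs, key=pairs.get)

def merge(ids: list[int], pair: tuple[int, int], new_id: int) -> list[int]:
    """Replace every occurrence of `pair` with the single id `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

# Byte-level start: encode text to raw UTF-8 bytes (ids 0-255).
text = "low lower lowest"
ids = list(text.encode("utf-8"))

# One training step: merge the most frequent byte pair into id 256.
pair = most_frequent_pair(ids)
ids = merge(ids, pair, 256)
print(pair, ids)
```

A real byte-level BPE tokenizer adds pre-tokenization (splitting on whitespace/punctuation before merging) and thousands of learned merges, which is why converting it to a SentencePiece model is not straightforward.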