From dbad4d47a9649b7aa2d6040572f09655051fd87c Mon Sep 17 00:00:00 2001
From: Bingxuan Wang
Date: Thu, 30 Nov 2023 11:15:14 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 7ebbc79..e0337e3 100644
--- a/README.md
+++ b/README.md
@@ -299,7 +299,7 @@ print(generated_text)
 
 ### Could You Provide the tokenizer.model File for Model Quantization?
 
-DeepSeek LLM utilizes the [HuggingFace Tokenizer](https://huggingface.co/docs/tokenizers/index) to implement the Byte-level-BPE algorithm, with specially designed pre-tokenizers to ensure optimal performance. Currently, there is no direct way to convert the tokenizer into a SentencePiece tokenizer. We are contributing to the open-source quantization methods facilitate the usage of HuggingFace Tokenizer.
+DeepSeek LLM utilizes the [HuggingFace Tokenizer](https://huggingface.co/docs/tokenizers/index) to implement the Byte-level BPE algorithm, with specially designed pre-tokenizers to ensure optimal performance. Currently, there is no direct way to convert the tokenizer into a SentencePiece tokenizer. We are contributing to open-source quantization methods to facilitate the use of the HuggingFace Tokenizer.
 
 #### GGUF(llama.cpp)
@@ -322,7 +322,7 @@ python convert-hf-to-gguf.py --outfile --model-name dee
 ```
 
 #### GPTQ(exllamav2)
-`UPDATE:`[exllamav2](https://github.com/turboderp/exllamav2) has been able to support Huggingface Tokenizer. Please pull the latest version and try out.
+`UPDATE:` [exllamav2](https://github.com/turboderp/exllamav2) now supports the HuggingFace Tokenizer. Please pull the latest version and try it out.
 
 ## 7. Limitation