Mirror of https://github.com/deepseek-ai/DeepSeek-V2.git (synced 2025-05-03 08:59:03 -04:00)

docs: update README.md

HuggingFace -> Hugging Face

commit dfffc674f5 (parent cbd18b1e47)
@@ -83,7 +83,7 @@ We pretrained DeepSeek-V2 on a diverse and high-quality corpus comprising 8.1 tr
 
 </div>
 
-Due to the constraints of HuggingFace, the open-source code currently experiences slower performance than our internal codebase when running on GPUs with Huggingface. To facilitate the efficient execution of our model, we offer a dedicated vllm solution that optimizes performance for running our model effectively.
+Due to the constraints of Hugging Face, the open-source code currently experiences slower performance than our internal codebase when running on GPUs with Hugging Face. To facilitate the efficient execution of our model, we offer a dedicated vllm solution that optimizes performance for running our model effectively.
 
 ## 3. Evaluation Results
 ### Base Model
@@ -186,8 +186,8 @@ We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.c
 
 ## 7. How to run locally
 **To utilize DeepSeek-V2 in BF16 format for inference, 80GB*8 GPUs are required.**
-### Inference with Huggingface's Transformers
-You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.
+### Inference with Hugging Face's Transformers
+You can directly employ [Hugging Face's Transformers](https://github.com/huggingface/transformers) for model inference.
 
 #### Text Completion
 ```python
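The hunk above is truncated right where the README's Text Completion snippet opens, but a later hunk shows its final step: `tokenizer.decode(outputs[0][input_tensor.shape[1]:], ...)`, which strips the prompt tokens before decoding. A minimal sketch of that slicing with plain Python lists (the token id values below are made up for illustration, not taken from the diff):

```python
# Sketch of the prompt-stripping step in the README's generation code:
# model.generate() returns the prompt ids followed by the newly generated
# ids, so decoding slices off the first len(prompt) positions.
# Token id values are invented for illustration.
prompt_ids = [101, 7592, 2088, 102]      # stands in for input_tensor[0]
generated = prompt_ids + [11, 22, 33]    # stands in for outputs[0]

# List equivalent of outputs[0][input_tensor.shape[1]:]
new_token_ids = generated[len(prompt_ids):]
print(new_token_ids)  # → [11, 22, 33]
```

Only `new_token_ids` would then be passed to `tokenizer.decode`, so the echoed prompt never appears in the printed result.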
@@ -235,7 +235,7 @@ result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_token
 print(result)
 ```
 
-The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository.
+The complete chat template can be found within `tokenizer_config.json` located in the Hugging Face model repository.
 
 An example of chat template is as belows:
 
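The final hunk refers to the chat template stored in `tokenizer_config.json`, but the example itself is cut off. As a hypothetical illustration only (the `User:`/`Assistant:` markers below are placeholders, not DeepSeek-V2's actual template), a chat template flattens a list of role-tagged messages into a single prompt string:

```python
# Hypothetical illustration of what a chat template does. The role markers
# here are placeholders, NOT DeepSeek-V2's real template, which lives in
# tokenizer_config.json in the model repository.
def render_chat(messages):
    parts = []
    for m in messages:
        role = "User" if m["role"] == "user" else "Assistant"
        parts.append(f"{role}: {m['content']}")
    # A trailing "Assistant:" cues the model to generate the next reply.
    return "\n\n".join(parts) + "\n\nAssistant:"

prompt = render_chat([{"role": "user", "content": "Write quicksort in C++"}])
print(prompt)
```

In practice Transformers applies the real template for you via `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, so this manual rendering is only meant to show the shape of the transformation.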