diff --git a/README.md b/README.md
index 378a6c4..0b3b6c6 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
-[Homepage] | [🤖 Chat with DeepSeek Coder] | [🤗 Models Download] | [Discord] | [Wechat(微信)]
+[Homepage] | [🤖 Chat with DeepSeek Coder] | [🤗 Models Download] | [Discord] | [WeChat (微信)]
@@ -38,7 +38,6 @@ And the DeepSeek-Coder-Instruct-33B model after instruction tuning outperforms G
More evaluation details can be found in the [Detailed Evaluation](#5-detailed-evaluation-results).
-
### 3. Procedure of Data Creation and Model Training
#### Data Creation
@@ -59,8 +58,6 @@ More evaluation details can be found in the [Detailed Evaluation](#5-detailed-ev
-
-
### 4. How to Use
Before proceeding, you'll need to install the necessary dependencies. You can do this by running the following command:
```
pip install -r requirements.txt
```
@@ -70,7 +67,7 @@ A demo is also available on the [🤗 Hugging Face Space](https://huggingface.co
Here are some examples of how to use our model.
-#### 1)Code Completion
+#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
@@ -97,7 +94,7 @@ def quick_sort(arr):
return quick_sort(left) + [pivot] + quick_sort(right)
```
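For context, the middle of the snippet above (elided by the diff) loads the model and runs generation. A minimal end-to-end sketch, assuming the `deepseek-ai/deepseek-coder-6.7b-base` checkpoint that appears elsewhere in this README; the prompt and generation settings are illustrative, not necessarily the README's exact values:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the base checkpoint; bfloat16 on .cuda() assumes an available GPU.
name = "deepseek-ai/deepseek-coder-6.7b-base"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, trust_remote_code=True, torch_dtype=torch.bfloat16
).cuda()

# Tokenize a comment prompt and let the model complete the function body.
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```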
-#### 2)Code Insertion
+#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
@@ -124,7 +121,7 @@ This code will output the following result:
for i in range(1, len(arr)):
```
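The insertion prompt wraps a prefix and a suffix in fill-in-the-middle sentinel tokens so the model generates only the missing span. A sketch of the pattern, assuming the `<｜fim▁begin｜>`, `<｜fim▁hole｜>`, and `<｜fim▁end｜>` sentinels defined by the DeepSeek-Coder tokenizer (verify against `tokenizer.special_tokens_map` on your checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "deepseek-ai/deepseek-coder-6.7b-base"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True).cuda()

# Prefix and suffix of the code to be joined; the model fills the hole.
prefix = "def insertion_sort(arr):\n"
suffix = "\n    return arr"
input_text = f"<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>"

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
# Decode only the newly generated tokens, i.e. the inserted middle.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```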
-#### 3)Chat Model Inference
+#### 3) Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
@@ -172,7 +169,7 @@ You are an AI programming assistant, utilizing the Deepseek Coder model, develop
```
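With the instruct checkpoint, the prompt format shown above can also be built automatically from the tokenizer's chat template. A sketch, assuming a `transformers` release recent enough to provide `apply_chat_template`:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True).cuda()

# The chat template inserts the system prompt and role markers for us.
messages = [{"role": "user", "content": "Write a quick sort algorithm in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id
)
# Print only the assistant's reply, skipping the echoed prompt tokens.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))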
-#### 4)Repository Level Code Completion
+#### 4) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
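model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True).cuda()

# What follows is a hedged sketch of the repository-level steps elided by the
# diff, not the original code: concatenate several project files into one
# prompt, prefixing each with a "# <path>" comment so the model sees
# cross-file context. The file names below are hypothetical placeholders.
repo_files = ["utils.py", "model.py", "main.py"]
input_text = "".join(f"# {path}\n{open(path).read()}\n" for path in repo_files)

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
# Decode only the completion generated after the concatenated context.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))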
@@ -265,16 +262,16 @@ In the following scenario, the Deepseek-Coder 6.7B model effectively calls a cla
### 5. Detailed Evaluation Results
The reproducible code for the following evaluation results can be found in the [Evaluation](https://github.com/deepseek-ai/deepseek-coder/tree/main/Evaluation) directory.
-#### 1)Multilingual HumanEval Benchmark
+#### 1) Multilingual HumanEval Benchmark

-#### 2)MBPP Benchmark
+#### 2) MBPP Benchmark
-#### 3)DS-1000 Benchmark
+#### 3) DS-1000 Benchmark

-#### 4)Program-Aid Math Reasoning Benchmark
+#### 4) Program-Aided Math Reasoning Benchmark
