Commit Graph

3 Commits

Author: Rahul Dubey
SHA1: 78d0fd332a
Message: Update finetune_deepseekcoder.py
Using torch.float16 or torch.cuda.amp can significantly reduce memory usage and speed up training by performing computations at lower precision.
Date: 2025-01-29 12:38:12 +05:30
Author: DejianYang
SHA1: 3fd8db86c3
Message: Update license in finetune_deepseekcoder.py
Date: 2023-11-14 18:15:08 +08:00
Author: Yang Dejian
SHA1: 4f0b860d30
Message: add deepspeed finetune
Date: 2023-11-09 22:46:45 +08:00