Commit Graph

11 Commits

Author SHA1 Message Date
Wow Rakibul
35703ca641
Improve convert.py with error handling and code optimization
Description:
Purpose: This PR improves the convert.py file by adding error handling, optimizing code, and enhancing documentation.

Changes: Added error handling, optimized loops, and added type hints and comments.

Problem: Addresses potential runtime errors and improves code readability and maintainability.

Testing: The changes were tested locally to ensure functionality remains intact.
2025-02-09 01:55:23 +06:00
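The description above is generic ("added error handling"); a minimal sketch of the kind of defensive I/O such a conversion script typically adds (the function name and messages here are illustrative, not the repository's actual convert.py code):

```python
from pathlib import Path

def load_checkpoint(path: str) -> bytes:
    """Read a checkpoint file, failing with a clear message
    instead of a bare traceback deep inside the conversion loop."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"checkpoint not found: {p}")
    try:
        return p.read_bytes()
    except OSError as e:
        raise RuntimeError(f"failed to read {p}: {e}") from e
```

Surfacing the offending path up front is what turns a "potential runtime error" into an actionable message for the user.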
Xingkai Yu
5ee97a83f0
fix comment 2025-02-07 16:42:55 +08:00
Xingkai Yu
87a01053e4
Merge pull request #556 from XxAlonexX/main
Fix Linear Layer Bias Initialization
2025-02-05 16:23:02 +08:00
XxAlonexX
6a30b43249 Fix Linear Layer Bias Initialization 2025-02-04 10:38:45 +05:30
Roman Fitzjalen
2756e130c2 clarify assertion error 2025-01-28 13:16:54 +01:00
enoch kan
bc77f22afc Updated model.py docstrings 2025-01-05 18:24:31 +00:00
enoch kan
a1296f099e Enhance documentation and update .gitignore for model conversion scripts 2025-01-05 18:18:18 +00:00
GeeeekExplorer
fd011c11aa torch rmsnorm 2025-01-05 14:33:48 +08:00
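The "torch rmsnorm" commit presumably swaps a hand-written RMSNorm for PyTorch's built-in fused implementation. For reference, RMSNorm scales each element by the reciprocal root-mean-square of the vector; a dependency-free sketch of the underlying math (not the model's actual code):

```python
import math

def rms_norm(x: list[float], weight: list[float], eps: float = 1e-6) -> list[float]:
    # RMSNorm: x_i * w_i / sqrt(mean(x^2) + eps)
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]
```

A fused kernel computes the same thing in one pass over memory, which is why delegating to the framework's implementation is worthwhile.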
Xingkai Yu
8710ec2ecb
require model-parallel in convert.py 2024-12-31 18:05:55 +08:00
Yang Wang
8f1c9488b5
handle missing scale_inv_name (#2)
* handle missing scale_inv_name

Fixed an issue where `weight` and `weight_scale_inv` (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not in the same SafeTensor, causing an assertion error due to scale_inv_name not being in the state_dict.

* sort filename to reduce memory costs

* Add CUDA cache clearing in memory management

Added torch.cuda.empty_cache() to free up unused memory on the GPU.
2024-12-27 09:34:38 +08:00
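The fix described in this commit — a `weight` and its companion `weight_scale_inv` landing in different SafeTensor shards, so an assertion on the current shard's state_dict fails — can be sketched as a fallback lookup across all shards. The shard layout and helper name below are illustrative, not the repository's actual code:

```python
def find_tensor(name: str, current_shard: dict, all_shards: list):
    """Look up a tensor in the current shard first, then fall back to
    scanning the other shards instead of asserting co-location."""
    if name in current_shard:
        return current_shard[name]
    for shard in all_shards:
        if name in shard:
            return shard[name]
    return None  # tensor genuinely absent (e.g. the layer is not FP8-quantized)

# usage: when dequantizing a weight, fetch its companion scale wherever it lives
# scale_inv = find_tensor(weight_name + "_scale_inv", shard, shards)
```

Sorting the shard filenames before iterating (the second bullet) keeps each companion lookup close to the shards already resident in memory, which is where the memory saving comes from.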
stack-heap-overflow
4c2fdb8f55 Release DeepSeek-V3 2024-12-26 19:01:57 +08:00