dependabot[bot]
4efaf49eef
Bump transformers in /inference in the pip group across 1 directory
...
Bumps the pip group with 1 update in the /inference directory: [transformers](https://github.com/huggingface/transformers).
Updates `transformers` from 4.46.3 to 4.48.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.46.3...v4.48.0)
---
updated-dependencies:
- dependency-name: transformers
dependency-type: direct:production
dependency-group: pip
...
Signed-off-by: dependabot[bot] <support@github.com>
2025-03-25 12:45:53 -04:00
Xingkai Yu
1398800ebf
fix scores mask
2025-02-14 20:26:45 +08:00
Xingkai Yu
5ee97a83f0
fix comment
2025-02-07 16:42:55 +08:00
Xingkai Yu
87a01053e4
Merge pull request #556 from XxAlonexX/main
...
Fix Linear Layer Bias Initialization
2025-02-05 16:23:02 +08:00
XxAlonexX
6a30b43249
Fix Linear Layer Bias Initialization
2025-02-04 10:38:45 +05:30
Roman Fitzjalen
2756e130c2
clarify assertion error
2025-01-28 13:16:54 +01:00
enoch kan
bc77f22afc
Updated model.py docstrings
2025-01-05 18:24:31 +00:00
enoch kan
a1296f099e
Enhance documentation and update .gitignore for model conversion scripts
2025-01-05 18:18:18 +00:00
GeeeekExplorer
fd011c11aa
torch rmsnorm
2025-01-05 14:33:48 +08:00
Xingkai Yu
8710ec2ecb
require model-parallel in convert.py
2024-12-31 18:05:55 +08:00
Yang Wang
8f1c9488b5
handle missing scale_inv_name (#2)
...
* handle missing scale_inv_name
Fixed an issue where `weight` and `weight_scale_inv` (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not in the same SafeTensors file, causing an assertion error because `scale_inv_name` was not in the state_dict.
* sort filenames to reduce memory costs
* Add CUDA cache clearing in memory management
Added torch.cuda.empty_cache() to free unused memory on the GPU.
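The lookup described in this commit body can be sketched in plain Python. This is an illustrative sketch only, not the repository's actual convert.py code: the `weight_map` index, the `load_shard` callable, and the `_scale_inv` suffix convention are assumptions made for the example.

```python
def find_scale_inv(name, state_dict, weight_map, loaded_shards, load_shard):
    """Return the `_scale_inv` tensor paired with weight `name`.

    Falls back to loading the shard that actually contains it when the
    pair was split across SafeTensors files (hypothetical helper API):
      - state_dict:    tensors from the shard currently being processed
      - weight_map:    index mapping tensor name -> shard filename
      - loaded_shards: cache of {filename: state_dict} already in memory
      - load_shard:    callable that loads one shard file into a dict
    """
    scale_inv_name = f"{name}_scale_inv"
    # Common case: weight and its scale_inv live in the same shard.
    if scale_inv_name in state_dict:
        return state_dict[scale_inv_name]
    # Otherwise consult the index; None means no scale_inv was saved.
    shard = weight_map.get(scale_inv_name)
    if shard is None:
        return None
    # Load the other shard lazily; processing files in sorted order
    # keeps the number of shards resident at once small.
    if shard not in loaded_shards:
        loaded_shards[shard] = load_shard(shard)
    return loaded_shards[shard][scale_inv_name]
```

In a real conversion loop one would iterate over `sorted(shard_files)` and periodically drop entries from `loaded_shards` (and, with PyTorch, call `torch.cuda.empty_cache()`) to bound memory use.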
2024-12-27 09:34:38 +08:00
stack-heap-overflow
4c2fdb8f55
Release DeepSeek-V3
2024-12-26 19:01:57 +08:00