Commit Graph

5 Commits

Author SHA1 Message Date
Hitesh Yadav
bc9459df40 refactor(inference): modularize model architecture for improved maintainability
BREAKING CHANGE: Restructured model.py into dedicated modules under inference/models/

Key Changes:
- Split monolithic model.py into focused, single-responsibility modules:
  - config.py: Model configuration and hyperparameters
  - attention.py: Multi-head Latent Attention (MLA) implementation
  - moe.py: Mixture of Experts components (Gate, Expert, MoE)
  - linear.py: Linear layer variants with parallel processing support
  - __init__.py: Clean public API exports

Benefits:
- Improved code organization and maintainability
- Better separation of concerns
- Enhanced testability of individual components
- Clearer dependency management
- Simplified future modifications and extensions

Migration:
- Update imports to use new module structure
- No functional changes to existing implementations
- Backwards compatible with current model weights
2025-01-05 16:28:10 +05:30
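The migration step above can be sketched as follows. Only the module paths (`inference/models/` with `config.py`, `attention.py`, `moe.py`, `linear.py`) come from the commit message; the specific symbol names are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical import migration sketch (symbol names assumed):
#
# Old (monolithic model.py):
#   from model import ModelArgs, MLA, MoE, ColumnParallelLinear
#
# New (modular package, re-exported via inference/models/__init__.py):
#   from inference.models import ModelArgs, MLA, MoE, ColumnParallelLinear
#
# Or, importing from the focused submodules directly:
#   from inference.models.config import ModelArgs
#   from inference.models.attention import MLA
#   from inference.models.moe import MoE
#   from inference.models.linear import ColumnParallelLinear
```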
GeeeekExplorer
fd011c11aa torch rmsnorm 2025-01-05 14:33:48 +08:00
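The commit above swaps a hand-rolled RMSNorm for torch's built-in implementation. A minimal unfused reference in plain Python shows what that operation computes; the `eps` default is an assumption, not taken from the repo.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Reference RMSNorm: scale each element by weight / RMS(x).

    Unfused plain-Python sketch of the normalization that the commit
    replaces with torch's built-in (fused) version.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for v, w in zip(x, weight)]

out = rms_norm([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

The fused torch implementation computes the same result in a single kernel, which is the motivation for the switch.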
Xingkai Yu
8710ec2ecb require model-parallel in convert.py 2024-12-31 18:05:55 +08:00
Yang Wang
8f1c9488b5 handle missing scale_inv_name (#2)
* handle missing scale_inv_name

Fixed an issue where `weight` and `weight_scale_inv` tensors (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not stored in the same SafeTensor file, causing an assertion error because `scale_inv_name` was missing from the state_dict.

* sort filename to reduce memory costs

* Add CUDA cache clearing in memory management

Added torch.cuda.empty_cache() to free unused memory on the GPU.
2024-12-27 09:34:38 +08:00
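The fix described in the commit above can be sketched with plain dicts standing in for per-shard safetensors state_dicts: instead of asserting, the converter checks that a weight's `weight_scale_inv` partner is present in the same shard before dequantizing. The helper name here is hypothetical; only the tensor-naming convention comes from the commit message.

```python
def dequant_available(state_dict, name):
    """Return True only if both the weight and its scale_inv partner
    are present in the same shard's state_dict (skip otherwise,
    rather than asserting)."""
    if not name.endswith(".weight"):
        return False
    return name + "_scale_inv" in state_dict

# The scale_inv tensor for this weight lives in a different shard,
# so dequantization is deferred instead of raising an assertion error.
shard = {
    "model.layers.39.mlp.experts.92.gate_proj.weight": "w",
}
print(dequant_available(shard, "model.layers.39.mlp.experts.92.gate_proj.weight"))  # prints: False
```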
stack-heap-overflow
4c2fdb8f55 Release DeepSeek-V3 2024-12-26 19:01:57 +08:00