Commit Graph

7 Commits

Evan Wallace
40ec3a3f21 Optimization to Model Script
* Added Mixed Precision Training (FP16/BF16)
* Added Low-Rank Factorization (SVD) Functionality
* Improved Attention Efficiency using Linformer
* Reduced Memory & Computational Complexity using FlashAttention
* Added Support for Sparse Matrices using Butterfly Matrices (Structured Linear Layers)
* Added a Function for Low-Rank Approximations (sketches of several of these techniques follow below)
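
A minimal sketch of how mixed-precision training is typically enabled in PyTorch via torch.autocast; the model, optimizer, and batch names are placeholders and not the actual training script.

```python
# Hypothetical training step illustrating BF16 mixed precision with torch.autocast.
import torch

def train_step(model: torch.nn.Module,
               optimizer: torch.optim.Optimizer,
               batch: torch.Tensor,
               targets: torch.Tensor) -> float:
    optimizer.zero_grad(set_to_none=True)
    # Run the forward pass in BF16 where safe; parameters stay in FP32.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        logits = model(batch)
        loss = torch.nn.functional.cross_entropy(logits, targets)
    # BF16 shares FP32's exponent range, so no loss scaling is required
    # (an FP16 run would normally pair autocast with torch.amp.GradScaler).
    loss.backward()
    optimizer.step()
    return loss.item()
```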
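
A minimal sketch of low-rank factorization of a linear layer with a truncated SVD, assuming a plain nn.Linear; the helper name and rank argument are illustrative, not the repository's API.

```python
# Hypothetical helper that replaces a Linear layer with two thinner layers
# obtained from a truncated SVD of its weight matrix.
import torch
from torch import nn

def low_rank_factorize(layer: nn.Linear, rank: int) -> nn.Sequential:
    W = layer.weight.data                          # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                   # (out_features, rank), singular values folded in
    V_r = Vh[:rank, :]                             # (rank, in_features)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    # second(first(x)) == x @ V_r.T @ U_r.T ≈ x @ W.T
    return nn.Sequential(first, second)
```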
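
A minimal sketch of Linformer-style attention, where learned projections compress the key/value sequence length from n down to a fixed k so the attention matrix is n×k rather than n×n; all class and parameter names here are hypothetical.

```python
# Hypothetical Linformer-style self-attention: K and V are projected along the
# sequence dimension (seq_len -> k), so attention costs O(n*k) instead of O(n^2).
import torch
from torch import nn
import torch.nn.functional as F

class LinformerSelfAttention(nn.Module):
    def __init__(self, dim: int, seq_len: int, k: int = 256, n_heads: int = 8):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        # Learned low-rank projections of the sequence dimension.
        self.proj_e = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_f = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k_, v = self.qkv(x).chunk(3, dim=-1)               # each (b, n, d)
        k_ = torch.einsum("bnd,nk->bkd", k_, self.proj_e)     # (b, k, d)
        v = torch.einsum("bnd,nk->bkd", v, self.proj_f)       # (b, k, d)
        split = lambda t: t.view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        q, k_, v = split(q), split(k_), split(v)               # (b, h, n or k, head_dim)
        attn = F.softmax(q @ k_.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out(out)
```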
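
FlashAttention avoids materializing the full n×n attention matrix by computing attention in tiles. In PyTorch this is most easily reached through F.scaled_dot_product_attention, which dispatches to fused FlashAttention-style kernels when they are available; a minimal sketch, not the repository's actual attention code:

```python
# Hypothetical drop-in attention call; PyTorch routes this to a fused,
# memory-efficient (FlashAttention-style) kernel when one is available.
import torch
import torch.nn.functional as F

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
              causal: bool = True) -> torch.Tensor:
    # q, k, v: (batch, n_heads, seq_len, head_dim)
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)
```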

Changes to the Transformer Class:
* Efficient Initialization
  * Uses a list comprehension for self.layers instead of a loop (sketched below).
  * Consolidated distributed initialization logic.
* Memory and Performance Enhancements
  * Avoids unnecessary operations on tensors.
  * Uses .shape instead of .size() for clarity.
* Code Clarity and Maintainability
  * Removed redundant variables.
  * Used in-place operations where applicable.
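
A minimal, hypothetical sketch of the initialization and shape-handling pattern described above; the layer type and dimensions are placeholders, not the actual Transformer class.

```python
# Hypothetical Transformer skeleton: the layer stack is built with a list
# comprehension inside nn.ModuleList, and .shape is used to read tensor sizes.
import torch
from torch import nn

class TransformerSketch(nn.Module):
    def __init__(self, n_layers: int, dim: int, n_heads: int):
        super().__init__()
        # List comprehension instead of repeatedly calling self.layers.append(...)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch_size, seq_len, _ = x.shape   # .shape rather than .size()
        for layer in self.layers:
            x = layer(x)
        return self.norm(x)
```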

Changes to the Gate Class:
* Replaced linear(x, self.weight) with torch.matmul(x, self.weight.T):
  * More efficient for this linear transformation.
* Reduced Redundant Computations:
  * Avoided unnecessary reassignments.
  * Merged bias addition into a single step.
* Optimized Group-Based Routing:
  * Used amax instead of unnecessary top-k and sum operations.
  * Applied an in-place scatter operation for memory efficiency.
* Simplified Expert Selection:
  * Directly applied topk to select the top experts (see the sketch below).
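
A minimal, hypothetical sketch of the routing pattern described above; the names (n_experts, n_groups, topk_groups, topk_experts) and shapes are placeholders rather than the actual Gate implementation.

```python
# Hypothetical MoE gate: single matmul + bias, amax for group scores,
# in-place scatter_ for the group mask, and topk for expert selection.
import torch
from torch import nn

class GateSketch(nn.Module):
    def __init__(self, dim: int, n_experts: int = 64, n_groups: int = 8,
                 topk_groups: int = 4, topk_experts: int = 6):
        super().__init__()
        self.n_groups, self.topk_groups, self.topk_experts = n_groups, topk_groups, topk_experts
        self.weight = nn.Parameter(torch.empty(n_experts, dim))
        self.bias = nn.Parameter(torch.zeros(n_experts))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x: torch.Tensor):
        # Linear transformation written as a single matmul, bias added in one step.
        scores = torch.matmul(x, self.weight.T) + self.bias        # (tokens, n_experts)
        scores = scores.softmax(dim=-1)
        grouped = scores.view(x.shape[0], self.n_groups, -1)       # (tokens, groups, experts/group)
        # amax gives each group's best score directly, without top-k + sum.
        group_scores = grouped.amax(dim=-1)                        # (tokens, n_groups)
        top_groups = group_scores.topk(self.topk_groups, dim=-1).indices
        # In-place scatter builds the group mask without extra allocations.
        mask = torch.zeros_like(group_scores, dtype=torch.bool)
        mask.scatter_(1, top_groups, True)
        scores = grouped.masked_fill(~mask.unsqueeze(-1), 0.0).flatten(1)
        # topk directly selects the top experts and their routing weights.
        weights, experts = scores.topk(self.topk_experts, dim=-1)
        return weights, experts
```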
2025-01-30 21:52:56 -08:00
enoch kan
bc77f22afc Updated model.py docstrings 2025-01-05 18:24:31 +00:00
enoch kan
a1296f099e Enhance documentation and update .gitignore for model conversion scripts 2025-01-05 18:18:18 +00:00
GeeeekExplorer
fd011c11aa torch rmsnorm 2025-01-05 14:33:48 +08:00
Xingkai Yu
8710ec2ecb require model-parallel in convert.py 2024-12-31 18:05:55 +08:00
Yang Wang
8f1c9488b5 handle missing scale_inv_name (#2)
* handle missing scale_inv_name

Fixed an issue where `weight` and `weight_scale_inv` (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not stored in the same safetensors file, which caused an assertion error because `scale_inv_name` was not in the state_dict.

* sort filename to reduce memory costs

* Add CUDA cache clearing in memory management

Added torch.cuda.empty_cache() to free up unused memory on the GPU (see the sketch below).
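
A minimal, hypothetical sketch of the overall pattern this commit describes: safetensor shards are processed in sorted filename order, a missing scale tensor is handled gracefully instead of tripping an assertion, and the CUDA cache is cleared after each shard. The file and key names are illustrative only, not the repository's convert script.

```python
# Hypothetical shard-processing loop illustrating sorted iteration,
# tolerant scale lookup, and periodic CUDA cache clearing.
import glob
import torch
from safetensors.torch import load_file

def convert_shards(ckpt_dir: str):
    # Sorted filenames keep peak memory predictable across shards.
    for path in sorted(glob.glob(f"{ckpt_dir}/*.safetensors")):
        state_dict = load_file(path, device="cuda")
        for name, tensor in state_dict.items():
            if name.endswith("_scale_inv"):
                continue
            scale_inv = state_dict.get(f"{name}_scale_inv")
            if scale_inv is None:
                # The scale may live in another shard; skip instead of asserting.
                continue
            # ... dequantize `tensor` with `scale_inv` and write it out ...
        del state_dict
        torch.cuda.empty_cache()  # free unused GPU memory before the next shard
```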
2024-12-27 09:34:38 +08:00
stack-heap-overflow
4c2fdb8f55 Release DeepSeek-V3 2024-12-26 19:01:57 +08:00