Mirror of https://github.com/deepseek-ai/DeepSeek-V3.git, synced 2025-02-23 06:08:58 -05:00.
Appended the following functionality:

- Mixed precision training (FP16/BF16)
- Low-rank factorization via SVD
- Attention efficiency using Linformer
- Reduced memory and computational complexity using FlashAttention
- Sparse matrices via butterfly matrices (structured linear layers)
- Low-rank approximations

Changes to the `Transformer` class:

- Efficient initialization: uses a list comprehension for `self.layers` instead of a loop; consolidated the distributed initialization logic.
- Memory and performance: avoids unnecessary tensor operations; uses `.shape` instead of `.size()` for clarity.
- Code clarity and maintainability: removed redundant variables; used in-place operations where applicable.

Changes to the `Gate` class:

- Replaced `linear(x, self.weight)` with `torch.matmul(x, self.weight.T)`, which is more efficient for this linear transformation.
- Reduced redundant computations: avoided unnecessary reassignments and merged the bias addition into a single step.
- Optimized group-based routing: used `amax` instead of a top-k-and-sum, and applied an in-place scatter operation for memory efficiency.
- Simplified expert selection: directly applied `topk` to select the top experts.
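Mixed-precision training in FP16 typically needs loss scaling, because gradients that are representable in FP32 can underflow to zero in half precision. The sketch below is an illustrative NumPy demonstration of the underflow problem and the scale/unscale fix, not the repository's actual `torch.cuda.amp` code path; the gradient value and scale factor are made up for the example.

```python
import numpy as np

# A small FP32 gradient that is below FP16's smallest subnormal (~6e-8):
grad = np.float32(1e-8)

# Naive cast to FP16: the value underflows to zero and is lost.
naive = np.float16(grad)

# Loss scaling: multiply up before the cast so the value lands inside
# FP16's representable range, then divide back down in FP32 afterwards.
scale = np.float32(65536.0)
scaled = np.float16(grad * scale)       # representable in FP16
recovered = np.float32(scaled) / scale  # unscaled back in FP32
```

BF16 largely avoids this issue because it keeps FP32's 8-bit exponent (trading away mantissa precision), which is why BF16 training often runs without loss scaling.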
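The low-rank factorization via SVD mentioned above can be sketched as follows. This is an illustrative NumPy version, not the repository's implementation: the helper name `low_rank_factorize` and the rank-8 test matrix are invented for the example. The idea is to replace an `m x n` weight matrix with two factors of total size `r*(m+n)`.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W with a rank-`rank` factorization W ~= U_r @ V_r."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # absorb the singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
# Construct a matrix that is exactly rank 8, so the rank-8 approximation
# should reconstruct it almost perfectly.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 128))
U_r, V_r = low_rank_factorize(W, rank=8)
err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
```

For a real weight matrix the error depends on how fast its singular values decay; the truncated SVD is the best rank-`r` approximation in the Frobenius norm (Eckart-Young).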
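The Linformer change reduces attention's quadratic cost by projecting the keys and values along the sequence axis from length `n` down to a fixed `k`, so the attention map is `n x k` instead of `n x n`. A minimal single-head NumPy sketch, assuming random projection matrices `E` and `F` (in practice these are learned); this is not the repository's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    # Q, K, V: (n, d); E, F: (k, n) sequence-length projections
    K_proj = E @ K                                  # (k, d)
    V_proj = F @ V                                  # (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])    # (n, k), not (n, n)
    return softmax(scores) @ V_proj                 # (n, d)

rng = np.random.default_rng(0)
n, d, k = 256, 32, 16
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E = rng.standard_normal((k, n)) / np.sqrt(n)
F = rng.standard_normal((k, n)) / np.sqrt(n)
out = linformer_attention(Q, K, V, E, F)
```

FlashAttention, also listed above, attacks the same `n x n` cost differently: it computes exact attention but tiles the computation so the full score matrix is never materialized in memory.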
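The `Gate` changes (direct `topk` expert selection plus an in-place scatter) can be sketched as follows. This is a hypothetical NumPy analogue, not the repository's PyTorch code: `route_topk` is an invented helper, `np.put_along_axis` stands in for `torch.Tensor.scatter_`, and `np.argpartition` stands in for `torch.topk`.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_topk(logits, k):
    """Select the k highest-scoring experts per token and scatter their
    normalized weights into a dense (tokens, experts) routing matrix."""
    top_idx = np.argpartition(logits, -k, axis=-1)[:, -k:]   # (tokens, k)
    top_vals = np.take_along_axis(logits, top_idx, axis=-1)
    weights = softmax(top_vals)                              # normalize over the k picks
    dense = np.zeros_like(logits)
    np.put_along_axis(dense, top_idx, weights, axis=-1)      # scatter in place
    return dense, top_idx

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 8))   # 4 tokens, 8 experts
dense, top_idx = route_topk(logits, k=2)
```

Writing the weights into a preallocated dense matrix, rather than building intermediate tensors per token, is what the "in-place scatter for memory efficiency" point above refers to.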
Repository files:

- configs
- convert.py
- fp8_cast_bf16.py
- generate.py
- kernel.py
- model.py
- requirements.txt