Commit Graph

7 Commits

Author SHA1 Message Date
Utsav-pal
38333fb817
Update generate.py: Add parallel processing for token generation
This update introduces parallel processing for token generation using `torch.multiprocessing.Pool`.
The new implementation improves inference speed by processing multiple sequences concurrently.
- Added a `generate_parallel()` function for parallel token generation.
- Used multiprocessing to distribute prompts across worker processes, speeding up token generation for multiple prompts.
- Added `generate_single_sequence()` to hold the per-sequence generation logic; each worker calls it in parallel.
- Introduced a `num_workers` parameter to control the number of worker processes (default: 4).
- The model is shared across processes for efficient memory usage.

These changes are particularly beneficial for batch or multi-prompt generation, where many sequences must be produced at once (see the sketch after this entry).
2025-01-28 23:54:11 +05:30
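
A minimal sketch of the approach this commit describes. The function and parameter names mirror the commit message, but the bodies are illustrative assumptions, not the actual generate.py code; greedy decoding and a generic `model` callable are assumed.

```python
import torch
import torch.multiprocessing as mp  # drop-in for multiprocessing, with tensor sharing


def generate_single_sequence(args):
    # Greedy decode for a single prompt; each Pool worker runs this.
    model, prompt_tokens, max_new_tokens = args
    tokens = list(prompt_tokens)
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(torch.tensor([tokens]))   # (1, seq_len, vocab_size)
            tokens.append(int(logits[0, -1].argmax()))
    return tokens


def generate_parallel(model, prompts, max_new_tokens=128, num_workers=4):
    # Move weights to shared memory so workers reuse one copy of the
    # model instead of each holding a private clone.
    model.share_memory()
    work = [(model, p, max_new_tokens) for p in prompts]
    with mp.Pool(num_workers) as pool:
        return pool.map(generate_single_sequence, work)


if __name__ == "__main__":  # Pool needs the main-module guard
    # Toy stand-in model: embedding + linear head over a 16-token vocab.
    toy = torch.nn.Sequential(torch.nn.Embedding(16, 8), torch.nn.Linear(8, 16))
    print(generate_parallel(toy, [[1, 2], [3]], max_new_tokens=4, num_workers=2))
```

Note this is a CPU-oriented sketch; generating on CUDA from multiple processes would additionally require the `spawn` start method and per-process device handling.
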
enoch kan
bc77f22afc Updated model.py docstrings 2025-01-05 18:24:31 +00:00
enoch kan
a1296f099e Enhance documentation and update .gitignore for model conversion scripts 2025-01-05 18:18:18 +00:00
GeeeekExplorer
fd011c11aa torch rmsnorm 2025-01-05 14:33:48 +08:00
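
The message is terse, but it plausibly replaces a hand-written RMSNorm with PyTorch's built-in fused implementation. A minimal sketch of that kind of change, assuming PyTorch >= 2.4 (where `torch.nn.functional.rms_norm` is available):

```python
import torch
from torch import nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.dim = dim
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One fused call instead of a manual mean/rsqrt/scale sequence.
        return F.rms_norm(x, (self.dim,), self.weight, self.eps)
```
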
Xingkai Yu
8710ec2ecb
require model-parallel in convert.py 2024-12-31 18:05:55 +08:00
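
In practice, "require" most likely means a mandatory command-line argument; a sketch with an assumed flag name, not the repository's actual argument list:

```python
import argparse

parser = argparse.ArgumentParser(description="checkpoint conversion")
# Hypothetical flag name; the substance is required=True, so the script
# fails fast instead of silently assuming a model-parallel degree of 1.
parser.add_argument("--model-parallel", type=int, required=True,
                    help="number of model-parallel shards to produce")
args = parser.parse_args()
```
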
Yang Wang
8f1c9488b5
handle missing scale_inv_name (#2)
* handle missing scale_inv_name

Fixed an issue where `weight` and `weight_scale_inv` (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not in the same SafeTensor file, causing an assertion error because `scale_inv_name` was not in the `state_dict`.

* sort filenames to reduce memory costs

* Add CUDA cache clearing in memory management

Added `torch.cuda.empty_cache()` to free unused cached memory on the GPU.
2024-12-27 09:34:38 +08:00
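
A hedged sketch of the pattern these three bullets describe; `weight_map`, the fp8 check, and the dequantization placeholder are illustrative assumptions, not the actual conversion-script code:

```python
import json
import os

import torch
from safetensors.torch import load_file


def convert(hf_ckpt_path: str):
    # The HF index maps every tensor name to the shard file that holds it.
    with open(os.path.join(hf_ckpt_path, "model.safetensors.index.json")) as f:
        weight_map = json.load(f)["weight_map"]

    # Iterate shards in sorted order: a stable order keeps fewer shards
    # resident at once, reducing peak memory.
    for shard in sorted(set(weight_map.values())):
        state_dict = load_file(os.path.join(hf_ckpt_path, shard), device="cuda")
        for name, weight in state_dict.items():
            if name.endswith("_scale_inv"):
                continue
            scale_inv_name = f"{name}_scale_inv"
            # element_size() == 1 is a crude fp8 stand-in for this sketch.
            if weight.element_size() == 1 and scale_inv_name in weight_map:
                if scale_inv_name in state_dict:
                    scale_inv = state_dict[scale_inv_name]
                else:
                    # The scale can live in a *different* shard, so consult
                    # the index instead of asserting it is in state_dict.
                    other = weight_map[scale_inv_name]
                    scale_inv = load_file(
                        os.path.join(hf_ckpt_path, other), device="cuda"
                    )[scale_inv_name]
                # ... dequantize `weight` with `scale_inv` here ...
        # Release cached GPU memory between shards, as the third bullet adds.
        torch.cuda.empty_cache()
```
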
stack-heap-overflow
4c2fdb8f55 Release DeepSeek-V3 2024-12-26 19:01:57 +08:00