DeepSeek-V3/inference
Utsav-pal 38333fb817
Update generate.py: Add parallel processing for token generation
This update introduces parallel processing for token generation using torch.multiprocessing.Pool. The new implementation improves inference speed by processing multiple sequences concurrently.
- Added a generate_parallel() function for parallel token generation.
- Used multiprocessing to distribute the workload across worker processes, speeding up token generation when multiple prompts are supplied.
- Added a generate_single_sequence() function that encapsulates the per-sequence generation logic; each worker calls it in parallel.
- Introduced a num_workers parameter to control the number of worker processes (default: 4).
- The model is shared across processes for efficient memory usage.

These changes are particularly beneficial for batch processing and multi-prompt generation scenarios where multiple sequences must be generated simultaneously; a sketch of the approach appears below.
2025-01-28 23:54:11 +05:30
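The following is a minimal sketch of the approach the commit describes, not the actual generate.py implementation. The names generate_parallel() and generate_single_sequence() come from the commit message; the greedy decoding loop, the pool-initializer pattern, and the max_new_tokens/eos_id parameters are illustrative assumptions.

```python
import torch
import torch.multiprocessing as mp

_model = None  # set once per worker by the pool initializer


def _init_worker(model):
    # Stash the shared model in a module-level global so each worker
    # reuses the same weights instead of receiving a copy per task.
    global _model
    _model = model


def generate_single_sequence(args):
    # Assumed per-sequence logic: greedily append the argmax token
    # until max_new_tokens is reached or the EOS token is produced.
    prompt_ids, max_new_tokens, eos_id = args
    tokens = list(prompt_ids)
    with torch.no_grad():
        for _ in range(max_new_tokens):
            input_ids = torch.tensor([tokens])
            logits = _model(input_ids)  # assumed shape: (1, seq_len, vocab)
            next_id = int(logits[0, -1].argmax())
            tokens.append(next_id)
            if next_id == eos_id:
                break
    return tokens


def generate_parallel(model, prompts, max_new_tokens=128, eos_id=2, num_workers=4):
    # Move the weights into shared memory so all workers map the same
    # storage rather than duplicating the model per process.
    model.share_memory()
    jobs = [(p, max_new_tokens, eos_id) for p in prompts]
    with mp.Pool(num_workers, initializer=_init_worker, initargs=(model,)) as pool:
        return pool.map(generate_single_sequence, jobs)
```

Under these assumptions, generate_parallel(model, prompts) returns one completed token list per prompt, with concurrency capped by num_workers; torch.multiprocessing's pickling shares tensor storage across the pool rather than copying it.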
configs           Release DeepSeek-V3                                                        2024-12-26 19:01:57 +08:00
convert.py        Enhance documentation and update .gitignore for model conversion scripts  2025-01-05 18:18:18 +00:00
fp8_cast_bf16.py  Enhance documentation and update .gitignore for model conversion scripts  2025-01-05 18:18:18 +00:00
generate.py       Update generate.py: Add parallel processing for token generation          2025-01-28 23:54:11 +05:30
kernel.py         Enhance documentation and update .gitignore for model conversion scripts  2025-01-05 18:18:18 +00:00
model.py          Updated model.py docstrings                                                2025-01-05 18:24:31 +00:00
requirements.txt  Release DeepSeek-V3                                                        2024-12-26 19:01:57 +08:00