docs: Tidy list items @ README

Author: Triex
Date:   2025-06-04 11:41:41 +10:00
Parent: 69c1bab49e
Commit: 9aedaae1d5


@@ -101,10 +101,10 @@ Current LLM inference is dominated by Python/PyTorch, which introduces:
 ## Technical Challenges
-**Model Complexity**: DeepSeek V3's MoE architecture requires careful memory management
-**Backend Integration**: Need efficient FFI to CUDA/Metal while maintaining performance
-**Web Scale**: Handle concurrent requests without blocking inference
-**Accuracy**: Match PyTorch numerical precision
+- **Model Complexity**: DeepSeek V3's MoE architecture requires careful memory management
+- **Backend Integration**: Need efficient FFI to CUDA/Metal while maintaining performance
+- **Web Scale**: Handle concurrent requests without blocking inference
+- **Accuracy**: Match PyTorch numerical precision
 ## Platform-Specific Opportunities
@@ -197,6 +197,5 @@ This is an ambitious project that would benefit from expertise in:
 ---
-**Status**: 🎯 Seeking feedback on initial idea
-**Target**: Production-ready LLM inference in Zig
+- **Status**: 🎯 Seeking feedback on initial idea
+- **Target**: Production-ready LLM inference in Zig