From 9aedaae1d5dee66c53b1718518191bcec87a5a19 Mon Sep 17 00:00:00 2001
From: Triex
Date: Wed, 4 Jun 2025 11:41:41 +1000
Subject: [PATCH] docs: Tidy list items @ README

---
 README.md | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index e604d0d..97641f4 100644
--- a/README.md
+++ b/README.md
@@ -101,10 +101,10 @@ Current LLM inference is dominated by Python/PyTorch, which introduces:
 
 ## Technical Challenges
 
-**Model Complexity**: DeepSeek V3's MoE architecture requires careful memory management
-**Backend Integration**: Need efficient FFI to CUDA/Metal while maintaining performance
-**Web Scale**: Handle concurrent requests without blocking inference
-**Accuracy**: Match PyTorch numerical precision
+- **Model Complexity**: DeepSeek V3's MoE architecture requires careful memory management
+- **Backend Integration**: Need efficient FFI to CUDA/Metal while maintaining performance
+- **Web Scale**: Handle concurrent requests without blocking inference
+- **Accuracy**: Match PyTorch numerical precision
 
 ## Platform-Specific Opportunities
 
@@ -197,6 +197,5 @@ This is an ambitious project that would benefit from expertise in:
 
 ---
 
-**Status**: 🎯 Seeking feedback on initial idea
-
-**Target**: Production-ready LLM inference in Zig
\ No newline at end of file
+- **Status**: 🎯 Seeking feedback on initial idea
+- **Target**: Production-ready LLM inference in Zig
\ No newline at end of file