Making your own GGUFs is trivial: https://rentry.org/tldrhowtoquant

It's a bit harder when they've provided the safetensors in FP8 like for the DS3 series, but these smaller distilled models appear to be BF16, so the normal convert/quant pipeline should work fine.
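For reference, the usual llama.cpp flow is just two steps (a sketch; the paths and filenames here are illustrative):

  # Convert the HF safetensors checkpoint to GGUF, keeping BF16 weights
  python convert_hf_to_gguf.py ./DeepSeek-R1-Distill-Llama-8B \
      --outfile deepseek-r1-distill-llama-8b-bf16.gguf --outtype bf16

  # Quantize the BF16 GGUF down to e.g. Q4_K_M
  ./llama-quantize deepseek-r1-distill-llama-8b-bf16.gguf \
      deepseek-r1-distill-llama-8b-Q4_K_M.gguf Q4_K_M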



Thanks for that! It seems that unsloth actually beat me to it: https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-...

Edit: Running the DeepSeek-R1-Distill-Llama-8B-Q8_0 gives me about 3 t/s and destroys my system performance on the base M4 Mac mini. Trying the Q4_K_M model next.
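If you want a steadier number than eyeballing the console output, llama.cpp also ships a benchmarking tool (a sketch; the model path is illustrative):

  # Measures prompt processing and token generation speed separately
  ./llama-bench -m DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf -p 512 -n 128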


Not so trivial as far as imatrix is concerned: we've found importance-matrix quantization substantially improves Q4 quality for long Ukrainian contexts. I imagine it's similarly effective in various other settings.
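For anyone curious, the imatrix flow in llama.cpp looks roughly like this (a sketch; calibration.txt stands in for whatever representative text you calibrate on, e.g. long Ukrainian documents in our case):

  # Collect an importance matrix over the calibration text
  ./llama-imatrix -m deepseek-r1-distill-llama-8b-bf16.gguf \
      -f calibration.txt -o imatrix.dat

  # Feed it to the quantizer so Q4_K_M keeps the weights that matter most
  ./llama-quantize --imatrix imatrix.dat \
      deepseek-r1-distill-llama-8b-bf16.gguf \
      deepseek-r1-distill-llama-8b-Q4_K_M-imat.gguf Q4_K_M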



