- Bugs in LLM Training – Gradient Accumulation Fix (unsloth.ai) | 81 points by apsec112 on Oct 16, 2024 | past | 16 comments
- Bugs in LLM Training – Gradient Accumulation Fix (unsloth.ai) | 3 points by jasondavies on Oct 15, 2024 | past
- 2x faster Gemma 2 finetuning and 63% less VRAM (unsloth.ai) | 3 points by ricopags on Jul 4, 2024 | past | 1 comment
- Continued LLM Pretraining with Unsloth (unsloth.ai) | 1 point by swyx on Jun 5, 2024 | past
- Fixing Gemma Bugs (unsloth.ai) | 166 points by danielhanchen on Mar 11, 2024 | past | 63 comments
- Fixing All Gemma Bugs (unsloth.ai) | 2 points by xyzzyrz on Mar 7, 2024 | past
- CodeLlama-34B 13x faster finetuning (unsloth.ai) | 2 points by danielhanchen on Dec 16, 2023 | past
- Reducing FLOPs for transformers (unsloth.ai) | 1 point by danielhanchen on Dec 14, 2023 | past | 1 comment
- Unsloth: 30x faster AI training (unsloth.ai) | 3 points by Tomte on Dec 1, 2023 | past | 1 comment
- Finetune language models 30x faster (unsloth.ai) | 2 points by danielhanchen on Nov 30, 2023 | past
