Hacker News

Or at the very least, would you have to finetune some 3B or 7B model to do this?

I want to create a locally hosted model that can do this in real time with a low memory footprint.

Let people mash the keyboard at 2x-3x their typing speed and then unscramble it for them in real time, enabling typing at 200-300 wpm.
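Before reaching for a 3B/7B model, it's worth noting how far a crude non-LLM baseline gets: match each mashed token against a vocabulary by edit distance. The tiny word list and distance threshold below are illustrative assumptions, not a real system (which would need a large lexicon plus sentence-level context).

```python
# Illustrative vocabulary; a real unscrambler would use a large word
# list plus sentence-level context to break ties.
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def unscramble(token: str, max_dist: int = 2) -> str:
    """Return the closest vocabulary word, or the token unchanged."""
    best = min(VOCAB, key=lambda w: edit_distance(token.lower(), w))
    return best if edit_distance(token.lower(), best) <= max_dist else token

def unscramble_line(line: str) -> str:
    """Unscramble a line token by token (no cross-word context)."""
    return " ".join(unscramble(t) for t in line.split())
```

For example, `unscramble_line("teh qiuck borwn fxo")` recovers `"the quick brown fox"`. The token-by-token approach is exactly where this baseline falls short of an LLM: it cannot resolve ambiguities that depend on surrounding words.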




Contextual autocorrect has been on phones for a decade, and it's still far from perfect. Many ambiguities are not trivial, even for an LLM.


Mistral-7B does this reliably with no special finetuning, despite being orders of magnitude smaller than GPT-4.
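One way to try this locally is a quantized 7B model served through llama-cpp-python. The sketch below is an assumption about how such a setup might look, not a tested recipe: the few-shot prompt wording is invented, and the model path is a placeholder. Prompt construction is kept separate from the (expensive) model call so the pipeline can be tested without a model file.

```python
def build_unscramble_prompt(scrambled: str) -> str:
    """Few-shot prompt asking the model to restore mashed typing.
    The wording and example pair are illustrative assumptions."""
    return (
        "Restore the intended text from keyboard-mashed input.\n"
        "Input: teh qiuck borwn fxo\n"
        "Output: the quick brown fox\n"
        f"Input: {scrambled}\n"
        "Output:"
    )

def unscramble_with_llm(scrambled: str, model_path: str) -> str:
    """Run the prompt through llama-cpp-python.
    Requires a local GGUF file, e.g. a quantized Mistral-7B-Instruct."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=512)
    out = llm(build_unscramble_prompt(scrambled), max_tokens=64, stop=["\n"])
    return out["choices"][0]["text"].strip()
```

For real-time use, latency would hinge on keeping the model resident in memory and streaming tokens rather than re-loading per keystroke burst.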





