
It's going to chew up at least 1 GB of storage space and RAM, right? And probably kill the battery life to boot.


Yeah. People are playing tons of word games with this stuff, e.g. Apple is saying it's shipping an LLM for the iOS 17 keyboard, and who knows what that means: it sounds great and plausible unless you're familiar with the nuts and bolts.


Apple's not playing word games, because they didn't say "LLM". They said that autocorrect will use "a transformer language model, a state-of-the-art on-device machine learning language model for word prediction", which is a much more precise statement than what you attributed to them.

This sounds totally plausible. It will be a much smaller transformer model than ChatGPT, probably much smaller than even GPT-2.

https://www.apple.com/newsroom/2023/06/ios-17-makes-iphone-m...
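For a rough sense of what "a transformer language model for word prediction" looks like in practice, here's a minimal sketch using a small, publicly available model (distilgpt2 via Hugging Face transformers) as a stand-in. Apple's actual on-device model and tooling aren't public, so this is purely illustrative of the technique, not of what ships in iOS 17:

```python
# Minimal sketch of next-word prediction with a small transformer LM.
# distilgpt2 (~82M parameters) is a stand-in; Apple's keyboard model is
# not public and is presumably smaller and heavily quantized.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def suggest_next_words(text, k=3):
    """Return the k most likely next tokens for the typed text."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)
    next_token_logits = logits[0, -1]     # distribution over the next token
    top = torch.topk(next_token_logits, k)
    return [tokenizer.decode(t).strip() for t in top.indices]

print(suggest_next_words("I'll see you at the"))
```

Even this stand-in is on the order of a few hundred MB on disk in float32, which is why the storage/RAM concern upthread is a real constraint for on-device models.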


Apple is calling their typing correction a “transformer”. That's the architecture LLMs are built on, but Apple may not be using a full LLM in this case. This feature seems like a sandbox for them to try out some of this tech in the field while they work internally on more ambitious implementations.

Apple is also dogfooding an LLM AI tool internally, likely to gain a better understanding of how these work in practice and how people use them. https://forums.macrumors.com/threads/apple-experimenting-wit...


An LLM is made entirely out of transformers; you could just call it a "large transformer model".

In this case it's a transformer model that is not "large". So, an LM.


Once the foundational tech has stabilised (likely post-transformers), the hardware people will finally get to do their thing, at which point the battery drain will go the way battery drain for HD video decoding went.


Apple TV devices are typically always connected and idle most of the time. Maybe you could offload processing to such a device, if one is signed in to the same Apple ID.


Yep, I'd love to have a semi-dedicated device in my home that handled these sorts of requests. I'd even consider buying a Mac mini, Studio, or other computer for this purpose.


Would be cool, but I think it is improbable. Apple would want such a key feature to be available to everyone and less than 10% of iPhone users have Macs.

Unless they also made an option to run it in iCloud, but offering so many options to do a thing doesn't sound very Apple-like.


Agree. But it should be doable to set this up using open LLMs, right? For example, using Siri to trigger a shortcut that sends a prompt to the dedicated processing device.
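As a rough sketch of the plumbing, assuming a home machine running an open model behind a local HTTP server: a Shortcut's "Get Contents of URL" action can POST the prompt to that machine and speak the response. The `run_local_model` function below is a hypothetical placeholder for whatever inference backend you'd actually use (llama.cpp, Ollama, etc.):

```python
# Minimal sketch: a local HTTP endpoint a Siri Shortcut could call via
# "Get Contents of URL". run_local_model() is a hypothetical placeholder
# for a real open-model inference backend (e.g. llama.cpp).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_local_model(prompt: str) -> str:
    # Placeholder -- wire this up to your actual local LLM.
    return f"(model output for: {prompt!r})"

class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = run_local_model(body.get("prompt", ""))
        payload = json.dumps({"response": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Listen on the LAN so the phone/Shortcut can reach it.
    HTTPServer(("0.0.0.0", 8080), PromptHandler).serve_forever()
```

The Shortcut side is just: dictate text with Siri, POST it as JSON to the Mac's address on port 8080, then speak the "response" field from the reply.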



