Hacker News
quentinp on March 29, 2024 | on: Towards 1-bit Machine Learning Models
Actually, they plan to put an LLM in your back pocket using flash memory, not extra silicon:
https://arxiv.org/abs/2312.11514
declaredapple on March 29, 2024
The flash doesn't do the computations, though; it's just a way of getting the weights to the processor.
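A minimal sketch of the distinction being made here, under illustrative assumptions: the weight file on disk stands in for flash storage, and memory-mapping it lets the OS page bytes in on demand, but all arithmetic still happens on the CPU. File name, sizes, and the "layer slice" are made up for the example.

```python
import mmap
import os
import struct
import tempfile

# Illustrative stand-in for flash: a file of 1024 float32 "weights" (4 KiB).
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.write(struct.pack("1024f", *range(1024)))

# Memory-map the file: nothing is copied up front; the OS pages in only
# the blocks that inference actually touches.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    # Pull in just one hypothetical "layer" (floats 256..511). The flash/file
    # is only the transport; the sum below is computed by the CPU, not storage.
    layer = struct.unpack("256f", m[256 * 4 : 512 * 4])
    total = sum(layer)

print(total)  # 98176.0, i.e. sum(range(256, 512))
```

The paper's contribution is essentially about scheduling which weight windows to pull from flash and when, so that this transport step doesn't stall the compute.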
sroussey on March 29, 2024 | parent
It would be better to have EEPROM or something similar attached directly as memory, so there's no loading step at all.