I've been thinking a lot recently about how far we could model human existence as a foundation model (or several models, one for each core part of the brain) hooked up to a load of sensors (such as 'optic nerve feed', 'temperature', 'cortisol levels') whose readings arrive as input and as responses to tool calls -- and have all of this stream out as structured output controlling speech, motor movement, and other physiological functions.
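
To make that a bit more concrete, here's a rough sketch of the perception-action loop I'm picturing. All the names and types are purely illustrative, and `model` stands in for whatever foundation model you'd actually plug in:

    from dataclasses import dataclass, field

    @dataclass
    class SensorFrame:
        """One tick of simulated afferent input (field names are made up)."""
        optic_nerve_feed: bytes          # raw visual stream
        temperature_c: float             # core body temperature
        cortisol_level: float            # stress-hormone proxy

    @dataclass
    class EfferentCommand:
        """Structured output controlling the simulated body."""
        speech: str | None = None
        motor_targets: dict[str, float] = field(default_factory=dict)   # joint -> angle
        autonomic: dict[str, float] = field(default_factory=dict)       # heart_rate, respiration, ...

    def step(model, frame: SensorFrame, state):
        """Single loop iteration: sensors in, structured control signals out,
        recurrent state carried forward to the next tick."""
        out, state = model(frame, state)   # hypothetical multimodal model call
        return EfferentCommand(**out), state

The interesting part is that the same schema works whether it's one monolithic model or a handful of per-region models passing messages to each other.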

I don't know if anyone is working on modelling human existence (via LMMs) in this way... It feels like a Frankensteinian and eccentric project idea, but certainly a fun one!
