I've been thinking a lot recently about how far we could go in modelling human existence as a foundation model (or several models, one for each core part of the brain) hooked up to a bank of sensors ('optic nerve feed', 'temperature', 'cortisol levels') as inputs, along with responses to tool calls, and have all of this stream out as structured output controlling speech, motor movement, and other physiological functions.
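To make that loop a bit more concrete, here's a toy sketch of what the sensors-in / structured-control-out cycle might look like. Everything here is hypothetical: the names (`SensoryFrame`, `ControlFrame`, `brain_model`) are made up for illustration, and the "brain" is just a stub, not a real model.

```python
# Purely illustrative sketch: structured sensor readings in,
# structured control signals (speech, motor, physiology) out.
from dataclasses import dataclass
import random


@dataclass
class SensoryFrame:
    optic_nerve_feed: bytes        # e.g. raw retinal-style image data
    temperature_c: float           # skin/ambient temperature
    cortisol_level: float          # stress-hormone proxy, normalised 0..1
    tool_call_result: str | None   # response to a previous "tool call" (e.g. a memory lookup)


@dataclass
class ControlFrame:
    speech: str | None             # words to vocalise this tick, if any
    motor: dict[str, float]        # joint/muscle activations
    physiology: dict[str, float]   # heart rate, breathing rate, etc.


def brain_model(frame: SensoryFrame) -> ControlFrame:
    """Stand-in for the foundation model (or ensemble of per-region models)."""
    stressed = frame.cortisol_level > 0.7
    return ControlFrame(
        speech=None if stressed else "I'm fine.",
        motor={"neck_rotation": random.uniform(-0.1, 0.1)},
        physiology={"heart_rate": 95.0 if stressed else 70.0},
    )


# The "existence loop": sensors -> model -> actuators, repeated forever
# (three ticks here, just to show the shape of it).
if __name__ == "__main__":
    for tick in range(3):
        frame = SensoryFrame(
            optic_nerve_feed=b"...",
            temperature_c=36.6,
            cortisol_level=random.random(),
            tool_call_result=None,
        )
        print(tick, brain_model(frame))
```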
I don't know if anyone is working on modelling human existence (via LMMs) in this way... It feels like a Frankensteinian and eccentric project idea, but certainly a fun one!