
It's hard to build a model of a system without observing it. Perhaps one reason we don't yet know how to decode neural signals is that we haven't had a good way to read them. I'm sure this vastly improved sensor will lead to better models of the brain and neural activity.



We’ve actually had ways of observing neural signals for quite some time. To identify the brain regions requiring surgery for epilepsy, patients get electrodes implanted and spend several weeks in hospital. Various researchers have recruited volunteers from this cohort for neural-signal studies. A team recently managed to decode speech from neural signals recorded in one such study!

https://www.ucsf.edu/news/2019/04/414296/synthetic-speech-ge...
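
For anyone curious what "decoding speech from neural signals" means mechanically: at its simplest, it's a regression from neural features (e.g. high-gamma band power per electrode, per time frame) to speech features such as spectrogram frames. Below is a toy linear sketch on synthetic data, just to show the shape of the problem. It is not the study's actual method (the UCSF team reportedly used recurrent networks with an intermediate articulatory representation), and all sizes and variables here are made up for illustration.

    # Toy sketch: linear decoding of spectrogram frames from neural features.
    # Synthetic data only; real pipelines use ECoG recordings and deep nets.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_frames, n_electrodes, n_spec_bins = 5000, 128, 80

    # Stand-in for high-gamma band power: one feature per electrode per frame.
    neural = rng.standard_normal((n_frames, n_electrodes))

    # Fake ground truth: spectrogram frames that depend linearly on the neural
    # features, plus noise, so the decoder has something learnable to find.
    true_map = 0.1 * rng.standard_normal((n_electrodes, n_spec_bins))
    spectrogram = neural @ true_map \
        + 0.05 * rng.standard_normal((n_frames, n_spec_bins))

    X_tr, X_te, y_tr, y_te = train_test_split(neural, spectrogram,
                                              test_size=0.2, random_state=0)

    # Ridge regression handles multi-output targets directly.
    decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print("held-out R^2:", decoder.score(X_te, y_te))

In a real study the decoded spectrogram would then be vocoded back into audio; the hard parts are the recordings themselves and the temporal modeling, which is where the recurrent networks come in.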


Cool! This seems very similar to what Neuralink hopes to build out, aside from some technical details.


We have plenty of ways to observe neural activity, and have had them for years. This chip represents an incremental step. In fact, if I had to choose between this new chip and other tech currently available for my lab, there are about 5-6 other things I'd prioritize over the Neuralink device.


E.g., what alternatives?


1. A fiber photometry rig

2. A 1-/2-/multi-photon microscope with a head-fixation stage for GRIN lens imaging

3. A new electrophysiology rig for single-cell, paired-cell, multi-cell, voltage-clamp, patch-clamp, etc. recording

4. An automated micromanipulator for molecular uncaging and optogenetics experiments

5. A rig for super-resolution single-molecule tracking experiments (PALM/STED/STORM/uPAINT)

6. A set of head-mounted Miniscopes (Elon could actually help to vastly improve these)


Which of these could you use in vivo?


Thanks.


That looks like a key reason the Neuralink company exists.



