It's a Parrot AR.Drone quadrotor. The frame is made of foam that encloses the rotors, and the motors automatically cut off if they sense the blades are obstructed.
Being hit by one is like being bumped with a bit of polystyrene.
Downscaling the myriad of signals from your brain to 64 electrodes controlling four spatial dimensions seems tricky. I can see that making a fist may turn left, but what prevents any other random thought from being interpreted the same way? I.e. how safe is a bike that turns left when you twist the handlebars left, but has a 1-in-1,000 chance of turning right instead?
If this is just up and down based on your level of concentration, it is relatively useless. However, if it maps all 6 directions, it is pretty impressive.
Seems like a good candidate for machine learning. Train a neural net to recognize the EEG patterns that a given person wants to use for a given operation.
For example, this one had the subject think about making a fist, but having them "will" the copter to move right could be more intuitive. And with the right machine learning/recognizer it might be feasible.
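For illustration, a minimal sketch of that idea in Python, assuming you already have labelled EEG epochs recorded while the person performed known mental tasks. All the names, shapes, and the fake data are illustrative, not anything from the article:

    # Minimal sketch: train a classifier on labelled EEG epochs.
    # `epochs` (n_trials x n_channels x n_samples) and `labels` are assumed to have
    # been recorded while the user performed known mental tasks; the fake data below
    # is only there so the example runs end-to-end.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    def band_power_features(epochs, fs=256.0, band=(8.0, 30.0)):
        """Log band power per channel in the mu/beta range (8-30 Hz)."""
        spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
        freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.log(spectrum[..., mask].mean(axis=-1))   # shape: (n_trials, n_channels)

    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((200, 64, 512))    # 200 trials, 64 channels, 2 s at 256 Hz
    labels = rng.integers(0, 2, size=200)           # e.g. 0 = "make a fist", 1 = "relax"

    X = band_power_features(epochs)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
    print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())

With random data this hovers around chance level, of course; the point is just the shape of the pipeline: features per trial, a classifier trained per user, and cross-validation to check whether the tasks are separable at all.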
I wonder how many "channels" can be worked out? 2? 20? 2,000? 2,000,000? Anyone have any background on this? It's fascinating to think of the complexity of a machine that could be controlled, when compared to the inherent limitations of pedals, wheels, switches and the like.
I have a background in these so-called brain-computer interfaces. Typically, researchers know some of the properties of the EEG signal during specific tasks. Using signal processing and machine learning methods (and sometimes human training), these tasks can be recognized by 'decoding' the EEG.
Some of these signals are spontaneous (realizing an error has been made), some are produced by voluntarily executing some mental task. Currently, the number of these 'channels' available is limited by 1) the number of detectors a lab is willing to build, and 2) how many tasks the user can execute simultaneously, which is typically very low. If you really want a number, I would settle for four as the current state of the art.
I have been working on a method that makes problem 1) easy enough for laymen to solve, just by collecting examples of EEG during the task of interest. Now we are founding a startup to make this happen commercially :).
PS: I think this technology does not lend itself well to analogies with channels or buttons. Buttons were invented for a physical world. Brain-computer interfaces lend themselves to interacting with signals that are /not/ available in normal interaction (e.g. relevance, errors, intended movements, etc.).
Cool. How would such things work when it came to a prosthetic leg, for example? Suppose you built a robotic leg with, say, 80 or 100 actuators all working together. Could you train such a device to work on thought, mimicking a real leg, or is that out of the scope of what you're talking about?
Hmm, difficult question. In the US there are some groups doing very advanced work with implants to restore limb or prosthesis control, and there are some very impressive movies of monkeys controlling robotic arms. But invasive (i.e. with implants) work is not really my thing. And in these studies, often the monkey is the one doing the learning; it is not the device that adapts to the user.
For non-invasive EEG (i.e. measured from outside the body) I think that is still far off. The problem is that the signals are measured from a distance, and it is very hard to isolate signals from a precise region in the brain, which is needed for accurate control.
I typically express the performance of these brain-computer interfaces in bits/minute. A keyboard gets roughly 300 bits/min, brain-computer interfaces 2-20 bits/min. I would not know the bandwidth (and latency) requirements for reliable prosthesis control, but that would probably depend on the intended use of the prosthesis. But then again, not all the actuators need to be controlled individually; maybe it is feasible with a smart controller and a forgiving application. And of course usability plays a major role; I cannot imagine controlling a prosthesis using a keyboard, although the information throughput might be sufficient :).
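For a sense of where figures like 2-20 bits/min come from, here is Wolpaw's information transfer rate formula (the usual metric in the BCI literature) as a quick back-of-the-envelope calculation; the example figures (4 commands, 80% accuracy, one decision every five seconds) are illustrative assumptions, not measurements:

    # Wolpaw's information transfer rate: N = number of possible commands,
    # p = probability of a correct selection, rate = selections per minute.
    from math import log2

    def bits_per_minute(n_choices, p_correct, selections_per_minute):
        if p_correct >= 1.0:
            bits = log2(n_choices)
        else:
            bits = (log2(n_choices)
                    + p_correct * log2(p_correct)
                    + (1 - p_correct) * log2((1 - p_correct) / (n_choices - 1)))
        return bits * selections_per_minute

    # A 4-class motor imagery BCI at 80% accuracy, one decision every 5 seconds:
    print(bits_per_minute(4, 0.80, 12))   # ~11.5 bits/min, inside the 2-20 range above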
Wow, so we might be less than 5 years away from pretty solid brain computer interfaces? That would be revolutionary. It would make things like Google glass a lot more viable.
Google Glass and the Oculus Rift are very interesting for us, since they put technology right on the spot where we need to acquire signals for brain-computer interfaces. FYI, there was a successful crowdfunding campaign by InteraXon for an acceptable EEG headset [1]. This is the first headset that I can imagine being worn in public spaces.
In five years we can potentially see brain-computer interfaces for consumers. I fear using the word 'solid' though, since it isn't a replacement for traditional input like the mouse or keyboard (that is what I would call solid). The biggest challenge I see is that we have to help consumers understand what it can do. I feel this technology is a game changer, but it is difficult to pinpoint what game is being changed. Therefore, it will at least take a while to go mainstream.
No, not without some radical new insights :). But why replace them? If you need something to replace them, you probably have a different need (e.g. hands-free, private communication, or expressing something that is hard to do consciously, like the level of pain, tiredness, or familiarity of a face). Perhaps EEG can be used to fulfill that need instead.
So where do you think we'll see hands free private communication coming from? I can't really think of any technologies that might be able to do that.
Also, replacing the keyboard is a worthwhile goal. A lot of people get RSI, and I think even at something fast like 100 wpm our brain-to-computer "bandwidth" is pretty slow. (And typing fast takes a lot of practice.)
The current state of the art for BCI ranges from 2-3 continuously valued "channels" using motor imagery (the method used in the article) or 1 channel of 2-32+ discrete choices using a sensory stimulus-based method such as event-related potentials (P300) or steady state visual evoked potentials (SSVEP).
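To illustrate the SSVEP idea just mentioned, a toy detector in Python: each on-screen target flickers at its own frequency, and you pick whichever stimulus frequency dominates the spectrum over occipital channels. Real systems typically use canonical correlation analysis and harmonics; the sampling rate, stimulus frequencies, and simulated data below are made up for the demo:

    # Toy SSVEP detector: pick the target whose flicker frequency dominates the EEG
    # spectrum. Sampling rate, stimulus frequencies and the simulated data are made up.
    import numpy as np

    FS = 256.0                              # sampling rate in Hz (assumed)
    STIM_FREQS = [8.0, 10.0, 12.0, 15.0]    # one flicker frequency per on-screen target

    def classify_ssvep(eeg, fs=FS, stim_freqs=STIM_FREQS):
        """eeg: (n_channels, n_samples) segment; returns the index of the likeliest target."""
        spectrum = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
        freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
        scores = []
        for f in stim_freqs:
            band = (freqs >= f - 0.5) & (freqs <= f + 0.5)   # narrow band around the stimulus
            scores.append(spectrum[:, band].mean())          # averaged over channels and bins
        return int(np.argmax(scores))

    # Simulated 4-second segment where the user attends the 12 Hz target:
    t = np.arange(int(4 * FS)) / FS
    eeg = 0.5 * np.sin(2 * np.pi * 12.0 * t) \
          + np.random.default_rng(0).standard_normal((8, t.size))
    print("detected target:", classify_ssvep(eeg))   # expected: 2 (the 12 Hz entry)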
It is highly unlikely that an EEG BCI will ever replace any normal task, as the performance relative to any reliable motor movement for direct control is terrible. For instance, if you have an eye tracker, you can reliably outperform the best BCI. It is really aimed at severely paralyzed people who don't have any other means of communication. The idea is undoubtedly cool and compelling, but the practicality of BCI for healthy subjects is very limited.
The articles say 4 directions, but I'm going to assume that's because those are the patterns they've been able to detect clearly enough to map actions to. If they thought up a few more, e.g. "think about kicking your left leg" to move forwards, it could probably work. The leader talked about using wheelchairs and prosthetic limbs.
They are using a technique called motor imagery, which looks for small changes in synchrony in the sensorimotor rhythms (SMR). SMR-based detection is currently only capable of reliably distinguishing imagined left-hand, right-hand, and combined-foot movement. When they say "raising a hand" they are not finding a pattern of activity that relates to that gesture; they are simply detecting whether a left vs. right vs. both-feet motor action was imagined. As such, you cannot, unfortunately, simply think up another gesture to add.
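For the curious, a rough sketch of what "detecting left vs. right hand motor imagery" amounts to: imagined right-hand movement suppresses the mu rhythm (8-12 Hz) over the contralateral motor cortex (around electrode C3), and vice versa, so a crude detector just compares mu-band power at C3 and C4. Real systems use spatial filters like CSP plus a trained classifier; the channel indices below are placeholders for a hypothetical 64-channel montage:

    # Crude left-vs-right motor imagery detector based on mu-band (8-12 Hz) power
    # lateralization over the motor cortex. C3/C4 indices are placeholders for a
    # hypothetical 64-channel montage; real systems use CSP plus a trained classifier.
    import numpy as np
    from scipy.signal import welch

    FS = 256.0
    C3, C4 = 12, 49     # left and right motor cortex electrodes (montage-dependent)

    def mu_power(channel_data, fs=FS):
        freqs, psd = welch(channel_data, fs=fs, nperseg=256)
        band = (freqs >= 8.0) & (freqs <= 12.0)
        return psd[band].mean()

    def classify_motor_imagery(eeg):
        """eeg: (64, n_samples). Imagined right-hand movement suppresses mu power
        over the contralateral (left) cortex at C3, and vice versa."""
        lateralization = np.log(mu_power(eeg[C3]) / mu_power(eeg[C4]))
        return "right hand" if lateralization < 0 else "left hand"

    # Usage: classify_motor_imagery(eeg_segment) -> "left hand" or "right hand"
    # (the combined-feet class is ignored in this toy version).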