Cool. How would such things work when it came to a prosthetic leg, for example? Suppose you built a robotic leg with, say, 80 or 100 actuators all working together. Could you train such a device to work on thought, mimicking a real leg, or is that out of the scope of what you're talking about?
Hmm, difficult question. In the US there are some groups doing very advanced work with implants to restore limb or prosthesis control, and there are some very impressive videos of monkeys controlling robotic arms. But invasive work (i.e. with implants) is not really my thing. And in these studies it is often the monkey doing the learning; it is not the device that adapts to the user.
For non-invasive EEG (i.e. measured from outside the body) I think that is still far off. The problem is that the signals are measured from a distance, and it is very hard to isolate signals from a precise region in the brain, which is needed for accurate control.
I typically express the performance of these brain-computer interfaces in bits/minute. A keyboard gets roughly 300 bits/min, brain-computer interfaces 2-20 bits/min. I would not know the bandwidth (and latency) requirements for reliable prosthesis control, but that would probably depend on the intended use of the prosthesis. But then again, not all the actuators need to be controlled individually; maybe it is feasible with a smart controller and a forgiving application. And of course usability plays a major role; I cannot imagine controlling a prosthesis using a keyboard, although the information throughput might be sufficient :).
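(Not part of the original exchange, but a quick sketch of where such bits/min figures come from, using the standard Wolpaw information-transfer-rate formula. The target counts, accuracies, and selection rates below are illustrative assumptions, not measurements from any particular study.)

```python
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw ITR: bits conveyed by one selection among n_targets
    options, made with the given accuracy."""
    if accuracy >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

def bits_per_minute(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    return bits_per_selection(n_targets, accuracy) * selections_per_min

# Hypothetical numbers for illustration only:
# - a 36-symbol BCI speller at 80% accuracy, ~4 selections/min
# - slow touch typing: ~60 keystrokes/min over ~36 symbols, near-perfect accuracy
print(bits_per_minute(36, 0.80, 4))    # ~14 bits/min, inside the 2-20 range quoted above
print(bits_per_minute(36, 0.99, 60))   # ~300 bits/min, the keyboard figure quoted above
```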