Your Brain Can Be Hacked (technorati.com)
49 points by iProject on Aug 25, 2012 | hide | past | favorite | 12 comments



The paper's title is "On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces". Given that, yeah, I can see how you could get a LOT of information out of someone without them realizing it. If e.g. you're playing a game, and the surroundings occasionally reflect something you've seen, that's information that such an interface could detect. Maybe an NPC has a disorder that runs in your family, and you react to it more strongly than others - insurance companies would probably love to know that.

All of which is to say, if you assume the worst, and brain-computer interfaces become ubiquitous, yes, I can see there being a serious potential for you to leak things you don't want to leak, just by being exposed to something similar. Done on a grand enough scale, the possibilities could be terrifying.


The experiments measured whether you have a brainwave amplitude peak roughly 300 ms after being exposed to a known concept.

The subjects were shown specific questions on screen (for 2 s), and all possible answers were flashed every half second, multiple times. The PIN extraction lasted 90 seconds, and even then it doesn't guarantee the order of the 4 most-recognised digits, with many subjects probably willingly trying to answer the question in their mind.
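A toy sketch of what that ranking step looks like (my own simulation, not the paper's code: the real work classifies EEG epochs, this just models per-flash detections with made-up hit rates):

```python
import random
from collections import Counter

def simulate_pin_ranking(pin="1234", flashes_per_digit=16,
                         hit_rate=0.7, false_rate=0.1, seed=42):
    """Toy model: each flash of a digit the subject is thinking of
    elicits a detection with probability hit_rate; other digits
    trigger false positives at false_rate. Returns the four digits
    with the most detections."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(flashes_per_digit):
        for digit in "0123456789":
            p = hit_rate if digit in pin else false_rate
            if rng.random() < p:
                counts[digit] += 1
    # Note: this recovers the 4 most-recognised digits but NOT their
    # order within the PIN, matching the limitation described above.
    return [d for d, _ in counts.most_common(4)]

print(sorted(simulate_pin_ranking()))
```

Even with generous detection rates, you only get an unordered digit set, so 4! = 24 orderings still remain to try.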

Experiments with prominent results -- face recognition, month of birth -- don't specify their length, but compared to the PIN sampling they would last 90-100 s. Once again the subjects are explicitly shown the question on screen and are prepared to answer it in their mind. Also, with face recognition we might get false positives because people look alike.

What this paper builds on is the ability to register a response to the perception of something we're currently thinking about. Given that it references existing keyboard input methods that take advantage of this capability, I think the article makes it sound too much like the answers were involuntarily obtained from the subjects, with a 40% success rate. It would actually be very interesting to carry out this study in a different setting. What if subjects were in their own environment, using the gaming device as intended? What if the questions were asked subliminally or through in-game messaging? The success rate might end up a lot closer to a random guess.

Link to paper: https://www.usenix.org/system/files/conference/usenixsecurit...


While I agree the possibilities are terrifying, I have a rather hard time imagining this type of interface becoming ubiquitous. I just don't think most people have enough mental self-control to avoid accidental input. For example, how many times have most of us, in a moment of anger, thought about saying or texting something we'd never actually send? How would such a device distinguish between what you actually want to input and what's just a passing thought?

I also have a hard time seeing "normals" feel comfortable with this kind of tech for everyday use.


OK, sensationalistic headlines aside, this is what is actually going on.

Using EEG, you can look for something called a P300 Event Related Potential (ERP). This is a positive deflection from the baseline activity in the brain signals approximately 300 milliseconds after an anticipated event occurs. Note two key facts about this:

1) The P300 actually varies by person; it can appear sooner or much later than 300 ms and have different amplitudes. Because of this, a training phase is required to calibrate the classifier.

2) The P300 occurs when an event the subject is anticipating or recognizes takes place, so they have to be primed in some sense. For instance, the researchers asked subjects to think of an imaginary PIN, then flashed single digits at them one at a time and tried to infer the first digit of the PIN from the response. Because they were thinking of, say, 1234, a P300 may have been generated when 1 flashed on the screen.

What the researchers did was interesting, in that they made the case for potential malware in a consumer BCI game. Their accuracy rates weren't that great, however. This is a far, far cry from nefarious agents pulling secret info from your brain.
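The detection idea above can be sketched with synthetic data (everything here is invented for illustration: sampling rate, bump shape, and the fixed 250-350 ms window are assumptions, whereas a real pipeline filters the EEG, rejects artifacts, and trains a per-subject classifier):

```python
import math
import random

FS = 250  # sampling rate in Hz (assumed)

def make_epoch(has_p300, rng, n=125):
    """Synthetic single-trial EEG, 500 ms post-stimulus: Gaussian noise,
    plus (for target trials) a positive bump centred near 300 ms."""
    sig = [rng.gauss(0, 5) for _ in range(n)]
    if has_p300:
        centre = int(0.3 * FS)
        for i in range(n):
            sig[i] += 8 * math.exp(-((i - centre) / 10) ** 2)
    return sig

def mean_amplitude(epochs, t0=0.25, t1=0.35):
    """Average the epochs sample-by-sample (noise cancels, the
    time-locked P300 does not), then take the mean amplitude in
    the 250-350 ms window."""
    n = len(epochs[0])
    avg = [sum(e[i] for e in epochs) / len(epochs) for i in range(n)]
    lo, hi = int(t0 * FS), int(t1 * FS)
    return sum(avg[lo:hi]) / (hi - lo)

rng = random.Random(0)
targets = [make_epoch(True, rng) for _ in range(40)]
nontargets = [make_epoch(False, rng) for _ in range(40)]
print(mean_amplitude(targets), mean_amplitude(nontargets))
```

The single-trial signal is buried in noise; it's only after averaging dozens of time-locked epochs that the deflection stands out, which is part of why the attack needs so many repeated flashes.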


"...via brute force methods."

Seriously, this registers barely above a lie detector for me. They have to just guess my password, and when they get it right they'll record a different brainwave pattern? Sounds simple enough. Okay, my passwords typically consist of several words with some numbers thrown in. Say we can go through 40 trials each minute for 16 hours each day. How many millions of years do you have?
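Back-of-envelope for those rates (my numbers, not the paper's; I'm assuming a modest 8-character alphanumeric password rather than a multi-word one, which would be far worse for the attacker):

```python
# Brute force at flash-trial speed: assumed rates, not from the paper.
trials_per_day = 40 * 60 * 16    # 40 trials/min, 16 h/day
space = 62 ** 8                  # 8 chars from [a-zA-Z0-9], ~2.2e14
years = space / trials_per_day / 365
print(f"{years:.2e} years")      # roughly 1.6e7, i.e. tens of millions
```

And that's before adding words: a passphrase of several dictionary words pushes the space up by many more orders of magnitude.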


Except the human brain does pattern recognition. It's possible that these waves are elicited if a guess is merely close to the actual password, maybe matching the first couple of letters or some such. The algorithm could then use that information to narrow down the range it has to check; essentially a game of warm/cold.

This might be completely off, since the article is pretty vague, but it seems like a possibility to me.
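If that warm/cold leak existed, the consequence is easy to see. Purely hypothetical sketch (the `warm` oracle below stands in for a brain response that fires on a correct prefix; nothing in the paper demonstrates such a signal):

```python
# Hypothetical "warm/cold" attack: IF a response leaked whether a
# guess shares a prefix with the secret, recovery becomes linear
# in password length instead of exponential.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def warm(guess, secret):
    """Stand-in oracle: 'warm' when the guess is a prefix of the secret."""
    return secret.startswith(guess)

def recover(secret, oracle=warm):
    found = ""
    while len(found) < len(secret):
        for c in ALPHABET:
            if oracle(found + c, secret):
                found += c
                break
        else:
            break  # no character extends the prefix; give up
    return found

print(recover("hunter"))  # at most 26 * len trials, not 26 ** len
```

That's the usual side-channel story: even a noisy per-character signal collapses a brute-force search into a sequence of small independent searches.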


It's essentially the same idea, except that you elicit a particular brainwave signal only when recognizing something you've already seen. I believe India experimented with using it in murder trials a few years ago: if the accused exhibits a P300 upon seeing the actual murder weapon in a line-up of dozens of other weapons, they must know a priori how the murder took place and hence be guilty. Interesting times, and there really are applications that could benefit from this kind of analysis.


Between Paul Ekman's work in behavioral science and detecting a person's pulse rate from video (http://www.youtube.com/watch?v=ONZcjs1Pjmk), it seems like one could have a pretty decent "lie detector" running on a smartphone.


Sure, it's equivalent to brute force now, but it's not like the brain is computing a cryptographic hash. I would not be surprised if a bit more development made it possible to break passwords one character at a time, or something on that order of complexity.


Yeah, I know, and that's probably what they did here, or something like it. But this article seems a bit... off. What is this:

These headsets use EEG technology to detect and acquire neuro-signals - brainwaves, and are already popular with gamers who simply "think" about their next move.

Really? This is a thing now?


Just read the linked presentation/paper page instead. Less fluff, more details.

https://www.usenix.org/conference/usenixsecurity12/feasibili...

They claim a 15-40% decrease in entropy over a pure brute-force attack. The bit about 'already popular' isn't there. That said, there are gaming-oriented neural interfaces on the market; an early one, for example, is [1]. They're all basically commoditized EEG machines. I wouldn't say they're 'popular with gamers', but they're out there.

[1] http://www.ocztechnology.com/nia-game-controller.html
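To put that 15-40% figure in perspective: an entropy cut shrinks the search space multiplicatively, it doesn't just shave time off. Taking a 4-digit PIN as the worked example (my choice, not specifically the paper's numbers):

```python
import math

# What a 15-40% entropy reduction means for the remaining search space.
H = math.log2(10_000)  # entropy of a uniform 4-digit PIN, ~13.3 bits
for cut in (0.15, 0.40):
    remaining = 2 ** (H * (1 - cut))
    print(f"{cut:.0%} entropy cut leaves ~{remaining:.0f} of 10,000 candidates")
```

So at best the attacker goes from 10,000 candidates to roughly 250 on average; a real help, but still nowhere near "reading the PIN out of your head".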


How long is your credit card PIN?



