
"AI" that interprets human emotion is fraud. There is no means to determine the inner state of an individual short of asking them, and then filtering for situational context. Unless you're talking about strapping a portable fMRI machine to the subject's head, it's not possible to determine the emotional state of another individual.

And to the point: I've worked with Ekman. What this sub-field claims to do is not scientifically possible.



Looking at a person’s face gives me Bayesian evidence about their emotional state, nudging my beliefs in one direction or another. AIs would be even better at this, given access not just to more data but also to superhuman abilities such as detecting heart rate by watching the target’s throat.
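
The heart-rate bit is a real technique, by the way: remote photoplethysmography (rPPG) picks up the tiny periodic color change in skin as blood pulses. Here's a minimal sketch of the core idea, assuming a steady, well-lit video and a hand-picked skin patch (the ROI coordinates and file name are placeholders, not from any particular product):

    # Minimal rPPG sketch: estimate heart rate from the periodic
    # green-channel variation in a patch of skin. Real systems track the
    # face and filter out motion/lighting; this is only the core idea.
    import cv2
    import numpy as np

    def estimate_bpm(video_path, roi=(200, 150, 60, 60)):  # roi is a guess
        x, y, w, h = roi
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        samples = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            patch = frame[y:y+h, x:x+w]
            samples.append(patch[:, :, 1].mean())  # green carries most pulse signal
        cap.release()

        sig = np.asarray(samples) - np.mean(samples)   # drop the DC component
        spectrum = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        band = (freqs > 0.7) & (freqs < 4.0)           # plausible pulse: 42-240 bpm
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0                              # Hz -> beats per minute

    print(estimate_bpm("face.mp4"))  # hypothetical input file

And even a perfect version of this only gets you the pulse, not the emotion behind it - it's one more piece of Bayesian evidence.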


To the extent that “no means” is true, surely it’s also true of humans trying to judge the emotional state of other humans?

Not denying the snake oil currently in the sector as a whole, but I think the tech should eventually be able to do anything our minds can do.


But your mind cannot identify the emotional state of a professional actor. People are far more sophisticated than this immature and naively conceived technology.


A system (or human) does not need to be perfect to be useful. I'll definitely assert that my mind can identify the emotional state of humans around me, even if it's not always correct about it and can be cheated by a professional actor. It's right more often than not, and it's a very useful capability to have, so I'd say it works - I'm using it all the time even if it's not perfect.

It would be fraud if and only if it's no better than chance - if it has no correlation whatsoever with the true emotional state. And it's perfectly possible (for both humans and machines) to make an "educated guess" about the emotional state that would be informative for all kinds of purposes, especially in aggregate over many humans, even if it can be easily cheated by a professional actor. For example, you probably can detect whether a particular section of an online class makes people unusually frustrated or unusually sleepy, even if it doesn't affect everyone, half of the affected people don't show it or aren't detected, and there's a professional actor in the class faking exactly the opposite signal. Also, there are many scenarios where it's not even worth considering antagonistic behavior, where people have no motivation to put any effort into misleading the system.
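
The aggregate point is easy to check with a toy simulation (all rates invented for illustration): a per-person detector that misses most cases and sometimes false-alarms still cleanly separates a frustrating section from a normal one once you average over the class.

    # Toy numbers: a weak per-person "frustration" detector (30% hit rate,
    # 5% false alarms) still separates class sections in aggregate.
    import numpy as np

    rng = np.random.default_rng(0)
    n_students = 200

    def detected_fraction(p_frustrated, sensitivity=0.3, false_alarm=0.05):
        frustrated = rng.random(n_students) < p_frustrated
        fires = np.where(frustrated,
                         rng.random(n_students) < sensitivity,
                         rng.random(n_students) < false_alarm)
        return fires.mean()

    print("normal section:     ", detected_fraction(0.10))  # ~0.075
    print("frustrating section:", detected_fraction(0.60))  # ~0.20

The per-person verdicts are nearly useless; the class-level difference is obvious.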

The argument that it's impossible to determine the inner state of an individual with certainty is irrelevant, because no one is claiming that, and it's not a requirement for the described use cases. After all, surveys of "do you like product/person X" provide valuable information even though it's trivial for anyone surveyed to lie. All the system needs to do (and this needs to be properly validated, of course) is provide some metric that in practice turns out to be reasonably correlated with the inner state of the individual, even if it doesn't work in all cases and for all individuals - and that, IMHO, is definitely achievable.

Perhaps it's more a difference in semantics - what do we call a system (or human) for identifying some status or truth that is halfway between "no better than chance" and "can identify the true status 100% of the time"? I would say it's a feasible system for identifying that thing, and that the system works (though it's not perfect); it seems you would say such a system does not work - but then what would you call such a halfway-accurate system?


It is facial expression recognition, not emotion recognition. If the tech/companies stated what the tech actually does, non-AI specialists wouldn't make gross assumptions and build on top of it a completely nonsensical fictional system they believe has some form of "authority", which they then proceed to enforce.


That won't stop snake oil salesmen from advertising such models and profiting off of them. A substantial portion of the detrimental effects will be the same regardless of the validity of the algorithm.


As an industry of computer SCIENTISTS, we should state plainly that such systems are not possible and are fraud.


But such systems are possible and aren't necessarily fraud. You'd prefer that reality be other than what it is. I get it. You don't want these systems to work. But they do, and a noble lie saying they don't is still a lie.

Do you really not think it's possible to read emotions from facial expressions? If humans can do it, a machine can do it better. The claim from the article that machines can't read emotions is pure motivated reasoning so distorted and disconnected from reality that it amounts to fraud. It's obviously the case that faces convey emotions. Look around.


As if none of you are aware of actors and method actors who will and do fool any system observing them externally. Identifying the inner emotional state of a third party is not possible, no matter how much you want it to be.


Most people most of the time aren't acting. Facial expressions are great Bayesian indications of emotional state.
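
Concretely, the update looks like this (all likelihoods made up for illustration) - a smile doesn't prove happiness, it just shifts the posterior:

    # Bayes update from a facial expression, with made-up numbers.
    p_happy = 0.3                  # prior: fraction of people happy right now
    p_smile_given_happy = 0.7      # assumed likelihood - illustrative only
    p_smile_given_other = 0.1      # includes polite and fake smiles

    p_smile = (p_smile_given_happy * p_happy
               + p_smile_given_other * (1 - p_happy))
    p_happy_given_smile = p_smile_given_happy * p_happy / p_smile
    print(round(p_happy_given_smile, 2))  # 0.75 - evidence, not proof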


Really? I disagree. I believe most of the time people are acting. Most of the time people are in a role they would not care to hold if the choice were theirs.

Facial expressions are a terrible indication of emotional state because humans are multi-layered: what's to say the person with a grimace doesn't just have a toothache or a bad back, yet is otherwise in a completely normal state for them? If asked, they'd say they're good.

I perceive the majority of people considering this situation to be considering only first-order effects. I have seriously considered this as a professional ambition, scientifically investigated the situation, and discarded the concept as unreliable at best and a fraud engine in reality.

Think about this a bit more and use your scientific-method training. You'll come to the conclusion that this is not science; this is pseudo-science and fraud.


Are you telling me that you need an fMRI to relatively reliably classify 100 people into two classes based on whether they're very happy or in great despair?


Of course one can try it out to see if it needs regulating: https://emojify.info



