But your mind cannot identify the emotional state of a professional actor. People are far more sophisticated than this immature and naively conceived technology.
A system (or human) does not need to be perfect to be useful. I'll definitely assert that my mind can identify the emotional state of the humans around me, even if it's not always correct and can be cheated by a professional actor. It's right more often than not, and that's a very useful capability to have, so I'd say it works; I'm using it all the time even though it's not perfect.
It would be fraud if and only if it's no better than chance, i.e. if it has no correlation whatsoever with the true emotional state. It's perfectly possible (for both humans and machines) to make an "educated guess" about the emotional state that is informative for all kinds of purposes, especially in aggregate over many humans, even if it can easily be cheated by a professional actor. For example, you could probably detect that a particular section of an online class makes people unusually frustrated or unusually sleepy, even if it doesn't affect everyone, half of the affected people don't show it or aren't detected, and there's a professional actor in the class faking exactly the opposite signal. There are also many scenarios where antagonistic behavior isn't even worth considering, because people have no motivation to put any effort into misleading the system.
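To make the aggregate point concrete, here's a minimal back-of-the-envelope simulation; every number in it (class size, baseline detection rate, how many students get frustrated, how many of those are actually detected) is made up purely for illustration:

```python
# Minimal sketch (hypothetical numbers throughout): even a per-person
# unreliable "frustration detector" can flag a problematic class section
# in aggregate. Assumptions: 200 students, a 10% baseline rate of
# detected frustration, a section that frustrates 30% of students, only
# half of the frustrated students actually showing it and being detected,
# and one professional actor always faking the opposite signal.
import math
import random

random.seed(42)

N_STUDENTS = 200
BASELINE_DETECTION_RATE = 0.10  # detections during a "normal" section
FRUSTRATION_RATE = 0.30         # fraction genuinely frustrated by the bad section
SHOW_AND_DETECT_RATE = 0.50     # frustrated students who show it AND get detected

def detections_in_bad_section() -> int:
    count = 0
    for student in range(N_STUDENTS):
        if student == 0:
            # The professional actor: frustrated, but fakes the opposite
            # signal, so the detector never counts them.
            continue
        frustrated = random.random() < FRUSTRATION_RATE
        if frustrated and random.random() < SHOW_AND_DETECT_RATE:
            count += 1
        elif not frustrated and random.random() < BASELINE_DETECTION_RATE:
            count += 1  # false positives among the unfrustrated keep it noisy
    return count

detected = detections_in_bad_section()
# One-sided normal approximation: is the count above the baseline rate?
expected = N_STUDENTS * BASELINE_DETECTION_RATE
stddev = math.sqrt(N_STUDENTS * BASELINE_DETECTION_RATE * (1 - BASELINE_DETECTION_RATE))
z = (detected - expected) / stddev
print(f"detected={detected}, expected under baseline={expected:.0f}, z={z:.1f}")
# With these made-up numbers the count lands several standard deviations
# above baseline, despite per-individual misses, false positives, and the actor.
```

The per-individual detector here misses half of the truly frustrated students and has a 10% false-positive rate, yet the section-level signal still stands out clearly.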
The argument that it's impossible to determine the inner state of an individual with certainty is irrelevant, because no one is claiming that, and it's not a requirement for the described use cases. After all, surveys asking "do you like product/person X?" provide valuable information even though it's trivial for anyone surveyed to lie. All the system needs to do (and this needs to be properly validated, of course) is provide some metric that in practice turns out to be reasonably correlated with the inner state of the individual, even if it doesn't work in all cases or for all individuals; that, IMHO, is definitely achievable.
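As a sketch of what that validation could look like (the data below is hypothetical; in practice you'd run a pilot with real participants), you'd collect the system's scores alongside a ground-truth proxy such as self-reports and check the correlation:

```python
# Minimal sketch of the validation step: compare the system's output
# against a ground-truth proxy such as self-reports. All data here is
# invented for illustration.
from statistics import correlation  # Python 3.10+

# Hypothetical paired observations for ten pilot participants:
# the system's frustration score vs. self-reported frustration (1-5 Likert).
system_score = [0.2, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3, 0.6, 0.5, 0.85]
self_report  = [1,   4,   2,   5,   1,   4,   2,   3,   3,   5]

r = correlation(system_score, self_report)
print(f"Pearson r = {r:.2f}")  # "reasonably correlated" = r well above zero
```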
Perhaps it's more a difference in semantics: what do we call a system (or human) for identifying some status or truth that falls halfway between "no better than chance" and "identifies the true status 100% of the time"? I would say it's a feasible system for identifying that thing, and that the system works (though it's not perfect); it seems you would say that such a system does not work. But then what would you call, or how would you describe, such a halfway-accurate system?
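For what it's worth, there is a standard way to put a number on that scale: Cohen's kappa is 0 at chance level and 1 at perfect agreement, so a "halfway" system is simply one with kappa around 0.5. A rough sketch, assuming a binary detector and balanced classes:

```python
# Cohen's kappa rescales accuracy so that chance level maps to 0 and
# perfect agreement maps to 1. Assumes a binary detector with balanced
# classes, so the chance baseline is 0.5.
def cohens_kappa(accuracy: float, chance: float) -> float:
    return (accuracy - chance) / (1 - chance)

print(cohens_kappa(accuracy=0.75, chance=0.50))  # -> 0.5, literally halfway
```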
It is facial expression recognition, not emotion recognition. If the tech/companies stated what the tech actually does, then non-AI specialists wouldn't make gross assumptions and design a completely nonsensical fictional system on top of it, one they believe carries some form of "authority" which they then proceed to enforce.