Data-driven algorithms make very useful moral crumple zones. I don't like this trend of companies replacing traditional decision-making with "AI"; our anti-discrimination laws will be toothless if people can simply hide their biases in the training set. Who is held responsible when the algorithm discriminates? Considering how much money these companies are extracting from these algorithms, it's really a "heads I win, tails you lose" situation.
I know enough to know that using this in schools or at work is a human rights complaint just waiting to happen in my jurisdiction. Do people who are blind, who have certain mental illnesses or cognitive conditions, or who have physical conditions like paralysis end up reported inaccurately or unfavourably by these systems? Oh, and what about ethnicity, culture, religion, race? Any bias there?
Misuse of psychometric tests ("Do you make friends easily?") has resulted in payouts in the UK and Canada, and probably elsewhere. This sort of emotion detection seems analogous.