
The thing is, if you include something like that (i.e. sentiment analysis as a crude, rudimentary way of looking at people's emotions), the genie is out of the bottle - it's a widespread task that's used as a relatively simple homework exercise in undergrad courses, so you would need to censor it out of textbooks worldwide, which is quite a big issue, to say the least.
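To give a sense of how basic that kind of homework exercise is, here's a minimal sketch using NLTK's VADER lexicon - the choice of library and the example messages are my own assumptions, any off-the-shelf sentiment tool would do:

    # Minimal sketch of the kind of undergrad exercise meant above:
    # lexicon-based sentiment scoring with NLTK's VADER analyzer.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon

    sia = SentimentIntensityAnalyzer()
    for message in ["I love this!", "This is terrible.", "It's fine, I guess."]:
        scores = sia.polarity_scores(message)  # dict with neg/neu/pos/compound keys
        print(message, "->", scores["compound"])  # compound score in [-1, 1]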

I.e. my point is that such a ban would have to be very extensive and invasive, with obvious censorship of small, simple segments of code and whole avenues of basic knowledge. Given some data, you can get a crude emotion detector for facial images or text messages - not state of the art, but somewhat accurate - with something like ten lines of code and no prior expertise in "emotion analysis", just by applying generic ML approaches (see the sketch below). I can't imagine how such a ban could be implemented: so many people would still be able to build such systems whenever they wanted to that the ban simply wouldn't be effective.
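As a rough illustration of the "ten lines of generic ML" claim for text messages - the tiny labelled dataset here is made up purely for the example, and scikit-learn is just one obvious tool choice:

    # Rough sketch: a crude text "emotion" classifier built with generic ML tooling,
    # trained on whatever labelled messages you happen to have.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["I'm so happy today", "This makes me furious",
             "I feel great", "I'm really angry"]
    labels = ["joy", "anger", "joy", "anger"]  # toy labels, purely illustrative

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["what a wonderful day"]))  # crude, but works end to end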

Perhaps you could regulate the application of automated decision-making to decisions about people and require some review-and-override mechanisms (GDPR has some limited aspects of that), but that's a very different matter from banning knowledge and skills that already exist and are relatively widespread.


