
So, on one hand, I see people hyping half-baked AI through the roof, cherry-picking good examples, refusing to study and discuss its limitations and even outright dismissing the idea that AI failures are, in fact, failures, rather than some kind of "different way of thinking".

On the other hand, I see the same crowd engaging in ridiculous alarmism that's not grounded in reality. They place the technology in far-fetched scenarios, completely ignoring that the same scenarios can already be enacted without AI. The usual conclusion is that the technology needs to be kept out of the hands of the public.

Someone is drinking too much of their own Kool-Aid. But regardless of how much they believe in what they're posting, this behavior is disgusting and unethical.

---

>Here are some examples of what might happen if the technology got into the wrong hands

Since when do we start a discussion from the assumption that a piece of software will be restricted in distribution? Software tends to get into the hands of everyone who wants it.

>Spam callers impersonating your mother or spouse to obtain personal information

News flash: this is already happening without AI. All you need is a bad phone connection and someone who sounds vaguely like the person being impersonated.

Moreover, it's already trivial to change the pitch of your voice in real time. With some simple audio engineering, you can alter timbre as well (e.g. filtering, equalization). If that's such a big deal, why is no one using this already? It's way, way, way easier than collecting lots of voice samples and training a model.
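To make the point concrete, here's a minimal offline sketch of that kind of pitch and timbre manipulation in Python, assuming librosa, soundfile, and scipy are installed ("input.wav" and "output.wav" are placeholder file names; a real-time version would need a streaming audio pipeline, which this doesn't show):

    # Sketch: shift the pitch of a recorded voice and crudely reshape its timbre.
    import librosa
    import soundfile as sf
    from scipy.signal import butter, lfilter

    y, sr = librosa.load("input.wav", sr=None)   # load at the file's native sample rate

    # Shift pitch up by 3 semitones (a rough "different speaker" effect).
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)

    # Simple EQ-style timbre change: 4th-order low-pass at ~3 kHz to dull the voice.
    b, a = butter(4, 3000 / (sr / 2), btype="low")
    filtered = lfilter(b, a, shifted)

    sf.write("output.wav", filtered, sr)

That's a few lines of off-the-shelf DSP, no model training or voice-sample collection required.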

>Impersonating someone for the purposes of bullying or harassment

Why would someone need to impersonate someone else for bullying or harassment? Bullying or harassment seems to work pretty well as is.

>Gaining entrance to high security clearance areas by impersonating a government official

If someone can get access to a place simply by using voice coming from computer speakers, it's clearly not a "high security clearance area".

>An ‘audio deepfake’ of a politician being used to manipulate election results or cause a social uprising

Media organizations already do this every day, in plain sight, via selective editing.


