
Impressive (really), but it raises a more philosophical question (as in practical ethics): do we really want voice bots to blend in perfectly, or should they instead feature distinctive marks (like a rather monotonous personality)?

[Edit: This is not so much about DeepFakes, as discussed in the article, but more about a general level of implementation.]



If perfect imitations are outlawed, only outlaws will have perfect imitations.

I’m not sure how useful that question is. If it can be done then it will be, and we’ll have to deal with it whether we want it or not.


I suppose we'd want bots to be distinguishable (by tone, etc.). Where's the practical value in not being able to discern an algorithmic speaker, e.g., on the phone? There's probably some value in being able to do so, regarding liabilities and so on. (A contract arises from an agreement of intents. We may not be sure whether such an agreement has actually been reached, or whether we were just witnessing a behavioral pattern triggered by a Markov chain. We may also question the nature of the intent, or whose intent this actually is.)


I would argue that while, yes, I want to be "in on it" as far as knowing whether the speaker is real, I also want AIs that sound natural. Even the Google thing that handles phone reservations or whatever: so long as I'm aware I'm listening to an AI, we're good.

If I want a Dan Rather news-reader application that parses text and reads it to me (assuming, obviously, that Dan Rather is okay with it), I see no issue with that. I don't want to be distracted by an artificial tone and have to parse it on my end.



