
Beats me. It was pretty obvious to me early on, when asking about any field I know well, that it had no understanding and would happily blurt out a wrong but plausible-sounding answer. So I haven't even tried to ask it about stuff I don't understand. How would I even tell if the answer made sense? Seems like an easy way to get completely the wrong idea.



People love DeepL translations for the same reason: they sound convincing, even though they are often completely wrong. Even before that, people were (and still are) trusting the info cards that Google puts in search results based on arbitrary snippets extracted from a webpage, because those snippets are presented as authoritative even though they're often out of context or completely wrong.

People are used to AI output being clunky, unfocused, ungrammatical text, à la Markov chain bots from the 2000s. So, conversely, this kind of verbose, coherent, well-written text comes across as knowledgeable and correct.

I can only hope that deepfakes and such become popular enough that people learn to be less trusting of what they find on the internet.


> People love DeepL translations for the same reason: they sound convincing, even though they are often completely wrong.

Could you show examples?

I'm using DeepL to translate things from and to languages that I know very well, usually to double-check or get additional inspiration for wording. I've never experienced anything that was completely wrong; most of the time the translations are almost perfect.

But maybe it's a question of language pairs.


The examples I know of all involve Japanese light novels translated to English. For example https://twitter.com/Xythar/status/1405658562378952705 (The tweet author is someone I know, not me.) Other cases are fan TLs of novels that I've read which were done through DeepL, so I can't link them.


It probably works quite well if the field you are asking about has a high ratio of "plausible-sounding" to "logic-follows" language.


Which fields would those be? I can't think of any field that, once dug into a bit, doesn't reveal that lots of plausible-sounding ideas are false.



