Translation is difficult at the best of times. I thought it was interesting how Google Translate seemingly kept coming up with different translations for the name of the program. Under “Features”, it suddenly decides the name is “Takigami”, as one example. By the end, it even goes so far as to say: “When you start up the cooking program, the following screen will be displayed.”
The translation seemed largely consistent with what Google Translate provided, but some of ChatGPT’s translation differences seemed more plausible to me, and it certainly reads more coherently. It also doesn’t keep forgetting that it’s dealing with the proper name of the program.
I didn’t try Gemini for this, but I imagine it has to be decent at translation too, so I wonder if/when Google will use Gemini to assist, replace, or otherwise complement Google Translate.
It’s particularly obvious if you translate between languages with vastly different grammar, e.g. Korean -> English: Korean doesn’t require a subject in every sentence, but English does, so Google Translate sometimes just inserts random subjects from its training data into the translated text. ChatGPT, by understanding more of the context before each sentence in a long text, seems to do this less.
For pairs like French -> English or German -> English, where each sentence already carries all the information needed to produce a grammatically correct translation, Google Translate is great, since it doesn’t have to rely on surrounding context to translate correctly.
I stopped using GT for larger texts when (having finally achieved some rudimentary literacy in my target language) I noticed it was rather cavalier about inserting or deleting “not”, changing the sense of sentences completely.
Sure, in translation one always has issues of sarcasm or irony, but I felt the tool was probably hallucinating more than being a useful work instrument.
EDIT: and yes, I also prefer the older behaviour of translation programs, whose output was noticeably disfluent where it was poor instead of just bullshitting to stay fluent.
I've been talking to a lot of Chinese people using machine translation recently, and noticed that inserting and removing "not" is very common for all translation tools I've used, from Google Translate to DeepL to ChatGPT. I'm not sure if it's particular to Chinese ←→ English, or if it's a common problem across all languages.
A priori, it seems like a pretty huge issue, because it changes a sentence's meaning to its opposite. Fortunately, it's usually easy to notice. But then again, I obviously wouldn't know about any instances I haven't noticed.
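One way to catch at least some of the instances you wouldn’t otherwise notice is a back-translation sanity check: translate the output back into the source language and compare negation markers. A minimal sketch, assuming English source text; the real work would be done by whatever MT service you use (Google Translate, DeepL, ChatGPT), which is deliberately left out here so the example stands on its own:

```python
# Sketch of a back-translation sanity check for negation flips.
# In practice you would translate source -> target -> source with an
# MT service and then compare the two English sentences; only the
# comparison step is shown here.

NEGATION_MARKERS = {"not", "no", "never", "n't"}

def has_negation(sentence: str) -> bool:
    # Very rough heuristic: does the English sentence contain a
    # common negation marker? ("hasn't" is split into "has n't".)
    words = sentence.lower().replace("n't", " n't").split()
    return any(w in NEGATION_MARKERS for w in words)

def negation_preserved(source_en: str, back_translation_en: str) -> bool:
    # Flag a likely meaning flip when one side negates and the other
    # does not. This only catches the crudest cases, but those are
    # exactly the "inserted or deleted not" errors described above.
    return has_negation(source_en) == has_negation(back_translation_en)

# The round trip dropped the "not", so the check flags it:
print(negation_preserved("The package has not arrived.",
                         "The package has arrived."))      # False
print(negation_preserved("The package has not arrived.",
                         "The package did not arrive."))   # True
```

This obviously misses subtler polarity changes (double negatives, antonym substitutions), but it costs one extra translation call and catches the worst failure mode.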
I don't have a good idea on how to objectively test this, but subjectively, my impression is that most regular people in China don't know a lot about any countries outside of East and Southeast Asia.
Even something like Halloween, which apparently triggered this discussion, is not something most people, including kids, seem to be particularly familiar with. When I mentioned to a Chinese friend that we were celebrating Halloween last year, she advised me to be careful when inviting ghosts into my home. She was unaware that it is mostly a fun children's holiday where they dress up and get candy.
Halloween is of Western, Christian origin, so knowledge of it wouldn't be something to expect in places not heavily influenced by the West or Christianity.
Despite already having our own (week-long, springtime) holiday involving dressing up in costumes, Halloween has taken a firm hold here over the last couple of decades. Kids have never yet shown up at our door, but they're definitely out trick-or-treating.
I'd blamed Chinese factories needing more places to sell their plastic Halloween gear, but now it sounds like it just comes down to US media saturation?
Then again, we should all be stealing more holidays from each other; a more syncretic world is a less boring one.
[My German teacher in high school said the best thing about growing up in southern Germany was that they got all the holidays (both Protestant and Catholic) off from school]
Google Translate frequently makes plenty of errors/hallucinations. I pointed out several above in this very thread!
When accuracy is absolutely critical, don’t depend on machine translation alone, and especially don’t depend on a single machine translation without cross checking it. As it is, I have anecdotally only had good experiences when comparing GPT-4o’s translation quality to Google Translate. I would love to see objective research into the topic, if someone were offering it, but not trite dismissals that imply Google Translate is somehow immune to hallucinations.
As a note, for Japanese text DeepL is widely used even by Japanese people. From English to Japanese it may not always choose properly nuanced words, but it largely produces acceptable translations.
I asked ChatGPT 4o to translate the README: https://chatgpt.com/share/6700bed9-1198-8004-8eed-07f5055d07...