You caught me! Yes, I am using AI to assist with the translation.
My IELTS score is 7.5, but my writing band is 6.0.
I write my thoughts and comments in Chinese first and then use AI to translate them. The entire article was also translated from my original Chinese manuscript.
Thank you very much for the article; it was super interesting. The mystery in the story draws people in, and people surely won't mind a couple of grammatical mistakes. But you have to watch out: the use of AI makes it easy for people to suspect that the story might've been embellished. For the second part, it might be better to try translating it manually; the same goes for writing replies.
I hate that too, but my English writing is not good enough to write a long article. I edited this many times and thought it was acceptable. Yes, I will try my best to learn English writing.
Right, you do what you think is best. I'm in no position to tell you what to do. Having said that, it comes off as robotic and impersonal. Personally, I'd rather read you trying to express what you wanted to say in your own words. That is, if you're not an AI yourself, which I think is quite likely; I'm leaning toward that theory.
Tip: I'd rather read slightly bad grammar from a simple translation than an AI-assisted interpretation of what you want to say. Call me old-fashioned, I guess.
Why? Would an incorrect but literal translation be closer to or further from what the author is trying to communicate?
I've been seeing this take on HN a lot recently, but when it comes to translation current AI is far, far superior to what we had previously with Google Translate, etc.
If the Substack had been written in broken English there's no way it would even be appearing on the front page here, even less so if it had been written in Chinese.
An incorrect but more authentic translation would seem more real, like a human earnestly trying to tell a story. We would accept the imperfections and have a subjective feeling of greater authenticity.
When the translation differs so much from what the author is trying to say in their native language, it loses its earnestness.
That's why translation is a job in the first place, and you don't see publishers running whole books through Google Translate. No one, least of all the authors, would accept that.
We don't know how much the imperfect translation would differ from the author's intent, but we would surely try to meet him halfway. Nobody would criticize his broken English.
Contrast this with the faux-polite, irritating tone of the AI, complete with fabrications and phrases the author didn't even intend to write.
Authenticity has value. AI speech is anything but authentic.
I mean, you're making assumptions about the author's intent going one way, but not the other. What if the polite tone is what they intended? And how do you know they didn't review the output for phrasing and fabrications?
The author acknowledged they used AI to translate. Given the tools they had available, isn't the translation they decided to publish by definition the most authentic and intentional piece that exists?
All of this aside, how do you think tools like Google Translate even work? Language isn't a lookup table with a 1:1 mapping. Even these other translation tools that are being suggested still incorporate AI. Should the author manually look up words in dictionaries and translate word by word, when dictionaries themselves are notoriously politicized and policed, too?
The first version was translated by Google Translate, and it was wordy and sometimes didn't make sense, so I used Gemini this time. I sent the first version to many people and no one could finish it ;)
Highly doubt this. Have you read a translated book? Are you looking for literal translations or a translation from someone who's an expert in both languages and makes subjective adjustments based on their experience?
No, I agree with the other commenter. I'd rather read broken English than the fake tone AI injects into everything (and the suspicion of fabrications, too).
In my new domain, photography, the most common "advice" for beginners is to learn the exposure triangle, shoot in manual, and get everything done in camera. This kind of advice comes from beginners themselves, sitting near the peak of the Dunning-Kruger curve and about to take the fall. I'm working towards a distinction from one of the most respected photography organizations in the world, and nobody involved with it who gave me guidance ever asked how I took the images.
Maybe, or most likely, it's the same for writing: there are people who think that correct grammar and punctuation, achieved without any help, is what writing means.
The core algorithm behind modern generative AI was developed specifically for translation, the task to which these chatbots are arguably best suited! It’s the task they’re by far the best at, both relative to older translation algorithms (which were also AI) and relative to their capabilities at the other tasks they’re being put to. These LLMs are “just” text-to-text transformers! That’s where the name comes from!
“Stop using the best electric power tool, please use the outdated steam powered tool.” is what you’re saying right now.
You’re not even asking for something to be “hand-crafted”; you’re just being a Luddite.
The "terribleness" is a feature. It means I can be confident that the meaning of fluent output corresponds to the meaning of the input: I'm capable of hand-translating any passages the computer can't, but I'm not capable of proof-reading all the translations to spot fluent confabulations.
LLMs can translate in the style you want them to. You can make them translate more creatively or just translate word for word. I even think you can make them explain their translation choices and help you proofread the result.
> The core algorithm behind modern generative AI was developed specifically for translation
Indeed! And yet, generative AI systems wire it up as a lossy compression / predictive text model, which discreetly confabulates what it doesn't understand. Why not use a transformer-based model architecture actually designed for translation? I'd much rather the model take a best-guess (which might be useful, or might be nonsense, but will at least be conspicuous nonsense) than substitute a different (less-obviously nonsense) meaning entirely.
Bonus: purpose-built translation models are much smaller, can tractably be run on a CPU, and (since they require less data) can be built from corpora whose authors consented to this use. There's no compelling reason to throw an LLM at the problem, introducing multiple ethical issues and generally pissing off your audience, for a worse result.
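For what it's worth, here is a minimal sketch of what I mean, assuming the Hugging Face transformers library and the Helsinki-NLP/opus-mt-zh-en MarianMT checkpoint (a small, purpose-built zh→en model that runs comfortably on a CPU); the example sentence is only illustrative:

```python
# Sketch: run a small, dedicated zh->en translation model locally on CPU.
# Assumes: pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-en"  # purpose-built Marian NMT checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)  # a few hundred MB, no GPU required

def translate(text: str) -> str:
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("这可以避免AI的味道，但可能很难读。"))
```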
> Why not use a transformer-based model architecture actually designed for translation?
Because translation requires a thorough understanding of the source material, essentially up to the level of AGI or close to it. Long-range context matters, short-range context matters, idioms, short-hand, speaker identity, etc... all matters.
Current LLMs do great at this, the older translation algorithms based on "mere" deep learning and/or fancy heuristics fail spectacularly in the most trivial scenarios, except when translating between closely related languages, such as most (but not all) European ones. Dutch to English: Great! Chinese to English: Unusable!
I've been testing modern LLMs on various translation tasks, and they're amazing at it.[1] I've never had any issues with hallucinations or whatever. If anything, I've seen LLMs outperform human translators in several common scenarios!
Don't assume humans don't make mistakes, or that "organic mistakes" are somehow superior or preferred.
[1] If you can't read both the source and destination language, you can gain some confidence by doing multiple runs with multiple frontier models and then having them cross-check each other. Similarly, you can round-trip from a language you do understand, or round-trip back to the source language and have an LLM (not necessarily the same one!) do the checking for you.
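To make the footnote concrete, a rough sketch of the round-trip check, assuming the OpenAI Python client; any chat-capable LLM API would work the same way, and ideally you'd use a different model for the checking step (the model names and example text here are just placeholders):

```python
# Sketch: translate, round-trip back to the source language, and have a
# (preferably different) model check whether the meaning survived.
# Assumes: pip install openai, OPENAI_API_KEY set; model names are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

source_zh = "这可以避免AI的味道，但可能很难读。"

english = ask(f"Translate this Chinese text into English, preserving meaning:\n\n{source_zh}")
back_zh = ask(f"Translate this English text back into Chinese:\n\n{english}")

# Cross-check: ask a model to compare the original and the round-trip for drift.
report = ask(
    "Compare these two Chinese texts and list any differences in meaning, "
    f"omissions, or additions:\n\nOriginal:\n{source_zh}\n\nRound-trip:\n{back_zh}"
)
print(english)
print(report)
```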
What models are you using? I'm using whatever's built into Firefox 140.6.0esr (some Bergamot derivative, iirc), which gives me:
> This can avoid the taste of AI, but it may be very bad to read, I first used machine translation translation, many parts become very wordy, and at the same time puzzling.
Perfectly clear and comprehensible. It's not fluent English, there are comma splices everywhere, and it translated "machine translation翻译" as "machine translation translation", but I understand it – and I'm confident it's close to what you actually meant to say. I can spot-check with my Chinese-to-English dictionary, and it seems like a slightly-better-than-literal translation. My understanding of your comment:
> This can avoid the smell of AI, but it may be a struggle to read. I initially used a dedicated machine translation system, but many parts became verbose (/ very wordy) and incomprehensible.
Generative models don't solve the 令人费解 problem: they just paper over it. If a machine translation is incomprehensible, that means the model did not understand what you were saying. Generative models are still transformer models: they're not going to magically have greater powers of comprehension than the dedicated translation model does. But they are trained and fine-tuned to pretend that they know what they're talking about. Is it better for information to be conspicuously lost in translation, or silently lost in translation?
Please, be willing to write in your native language, with your own words, and then provide us with either the original text, or a faithful translation of those words. Do you really want future historians to have to figure out which parts of this you wrote yourself, and which parts were invented by the AI model? I suspect that is not the reason you wrote this.