> What makes it clear to you that they don't mean what they explicitly write?
Because that's how language works - it's inherently ambiguous, and we interpret things in the way that makes the most sense to us. Your interpretation makes no sense to me, and requires a whole host of assumptions that aren't present in the article at all (and are otherwise very unlikely, like an AI that can literally work at the level of concepts).
> Why are you defending their poor writing?
I'm defending them because I don't think it's poor writing.
There are two ways to interpret the sentence we are discussing:
A: a grammatically incorrect statement, saying that "the AI used theory" when they mean that "the AI's design can be understood using theory" (or, more sloppily, that "the design uses the theory").
B: a grammatically valid, if (to you) contentious, statement about an LLM- or knowledge-graph-based system (e.g., something like the AI Scientist paper) parsing theory, with that parsing being used to create the experiment design.
As I have explained, B is a perfectly valid interpretation given the current state of the art. It is also valid historically, as knowledge-graph-based systems have been around for a long time. And it is the likely interpretation of a lay person, who is mainly exposed to hype and to AI systems like ChatGPT.
Regardless, either a) they introduce needless ambiguity that is likely to mislead a large proportion of readers, or b) if they are not actively misleading, then they have written something grammatically incorrect.
Either finding means that the article is a sloppy, bad piece of writing.
This sentence is also only one example of how the article is likely to mislead.
Okay - at this point I have nothing more to say. You think I'm misrepresenting Quanta's audience, and I think you're being needlessly pedantic. Doesn't seem like we're going to resolve this short of you "showing me the victim", so to speak. It didn't mislead me, it didn't mislead my partner, and it doesn't seem to have misled you either. So who are these "laypeople" who are injecting all this hidden meaning into the article?
Anyway, I don't think it's reasonable for me to ask you for evidence here, so let's just agree to disagree.
The thread is full of people debating what AI means and whether ML/optimization algorithms count as AI. Laypeople don't think of machine learning when they see "AI"; they think of chatbots. I would argue that even for a techy magazine this is a bad term to use without spending two sentences clarifying the distinction.
Examples of people being confused:
rlt: The discovering itself doesn’t seem like the interesting part. If the discovery wasn’t in the training data then it’s a sign AI can produce novel scientific research / experiments.
wizzwizz4 in reply: It's not that kind of AI. We know that these algorithms can produce novel solutions. See https://arxiv.org/abs/2312.04258, specifically "Urania".
About a quarter of the comments here I just have to assume what definition of AI they're talking about, which changes the meaning and context significantly.
I deal with this at work all the time, and it drives me up the wall. Words have meaning! That's the point! If you cannot or will not say what you mean, your writing skills are poor. Precision matters. Lack of ambiguity matters. Perhaps in this case you were able to read between the lines and divine what the words truly meant, but forcing readers to do that is the mark of a bad writer. Not just because it's added time and mental effort for the reader, but because readers who failed to read between the lines will have no idea that you fed them a falsehood and now have an inaccurate understanding of whatever you were writing about.
It confused me sufficiently that I consulted the original paper to check what type of AI algorithm the research used! That was my immediate reaction to reading the sentence.
Out of curiosity, are you familiar with work like "the AI Scientist"? Having an LLM-based AI suggest experiments based on parsing scientific literature is not outlandish.