(Buuuut let us also enjoy the fact that ML can generate a decent-looking abstract to a scientific paper 32% of the time. This represents massive progress; I would venture that most humans cannot write such an abstract convincingly.)
I would argue the opposite. Give anyone who can write reasonably the abstracts of a few dozen papers in a given field, and I am confident they would be able to produce a convincing bullshit abstract. Most abstracts in a given field sound extremely similar, and are mostly keyword dropping with an ounce of self-promotion. I actually think the only reason ChatGPT did not produce a higher rate of convincing abstracts is that the researchers assessing them knew there was a high probability they were looking at an AI result, and were thus extra careful. Most of the abstracts correctly labelled as AI would probably be considered legit (which does not mean good) if sent to a conference without warning.