Ideally yes, for a paper to be accepted it should be reproduced. If ChatGPT is ever able to produce code that runs and produces SOTA results, then I guess we won't need researchers anymore.
There is however a problem when the contents of a paper cost thousands or millions of dollars to reproduce (think GPT-3, DALL-E, and most of the papers coming from Google, OpenAI, Meta, Microsoft). More than replication, it would require fully open science where all the experiments and results of a paper are publicly available, but I doubt tech companies will agree to that.
Ultimately it could also end up with researchers only trusting papers coming from known labs/people/companies.
Reproduction of experiments generally comes after publication, not before acceptance. Reviewers of a paper review the analysis of the data and whether the conclusions are reasonable given the data, but no one expects a reviewer to replicate a chemical experiment, a biopsy of some mice, a sociological survey, an observation of some astronomical phenomenon, or any other experimental setup.
Reviewers work from an assumption that the data is valid, and reproduction (or failed reproduction) of a paper happens as part of the scientific discourse after the paper is accepted and published.
I'm thinking of the LHC or the JWST: billions of dollars for an essentially unique instrument, though each produces far more than one paper.
Code from ChatGPT could very well end up processing data from each of them — I wouldn't be surprised if it already has, albeit in the form of a researcher playing around with the AI to see if it was any use.
Indeed, and other sciences seem even harder to reproduce/verify (e.g. how can mathematicians efficiently verify results if ChatGPT can produce thousands of wrong proofs?).
> there are already ways to automate testing in their domain.
Do you mean proof assistants like Lean? From my limited knowledge of fundamental math research, I thought most math publications these days only provide a paper with statements and proofs, not in any standardized, machine-checkable format.
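For what it's worth, a machine-checkable proof in something like Lean looks roughly like this (a toy lemma picked purely to illustrate the format; Nat.add_comm is a lemma from Lean's core library):

    -- A trivial statement and its proof, both checked by Lean's kernel.
    -- Real research-level results would require vastly more formalization work.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The point is that both the statement and the proof are in a fixed formal language the machine can verify, which is very different from the prose statements and proofs in a typical paper.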
Only a tiny fraction of existing maths can be done with proof assistants currently, and as a result very very few papers use them. In most current research automated testing would be impossible or orders of magnitude more work; in many areas mathematicians are working with things centuries ahead of where proof assistants are up to, and working at a much higher level of abstraction. Also, many maths papers have important content that is not proofs (and many applied maths papers contain no proofs at all).