
Science in general, like many other things, is predicated on the idea that most people are not lying. AI-generated or not, if a large portion of any body of work turns out to be composed of lies, there is a problem. So in a hypothetical future where people use AI to lie en masse, yes, that is a problem. But even without AI, if a large number of people were submitting lies as research, there would be a problem.

My point is that this is similar to the problem of art fraud, fake news, or anything else that involves faking media: the immoral act is the problem; AI just makes it easier to commit and harder to catch. Yes, it is a problem, but the heart of the problem lies in the immorality of the act, not in the technology per se. Perhaps the issue is not that "many people are immoral", but that the technology enables a larger proportion of immoral people to fool us.



Elite universities insider here. People lie in elite science about as much as they do in politics. A lot of data from schools like MIT and Stanford is intentionally cooked to produce politically popular but incorrect results.

By "politically-popular," I don't mean primarily in a red-blue sense. If you're publishing a study for therapists to read, it should ideally use a clever methodology to re-affirm their viewpoints.



