
Nice trick for ChatGPT, but this will not destroy science.

Nobody makes a serious decision after reading only the abstract. Look at the tables, look at the graphs, look at the strange details. Look at the list of authors, institutions, ...

Has it been reproduced? Have the last few works of the same team been reproduced? And if possible, reproduce it locally. People claim that nobody reproduces other teams' work, but that's misleading. People reproduce other teams' work unofficially, or with some tweaks. An exact reproduction is difficult to publish, but if it has a few random tweaks ^W^W improvements, it's easier to get it published.

The only time I think people read only the abstract is to accept talks for a conference. I've seen a few bad conference talks, and the problem is that sometimes the abstracts get posted online in bulk without further checks. So the conclusion is: don't trust online abstracts, always read the full paper.

EDIT: Look at the journal where it's published. [How could I have forgotten that!]



> Nobody makes a serious decision after reading only the abstract.

I am not sure if this is sarcasm or not.

Literally the whole world besides the researchers reads mainly the abstract and makes even life-changing decisions based on it. (Just look at any Twitter discussion linking to a paper.)


It's not sarcasm, but it may be researcher bias https://xkcd.com/2501/

I don't consider Twitter discussions "serious decisions". When a thread here reaches 100 comments, the quality of the discussion usually drops. I don't want to imagine how bad it is on Twitter.

The problem is people overdosing on snake oil they read about on Twitter. I think there were a few cases with ivermectin, hydroxychloroquine and chlorine dioxide [1]. People should not self-medicate, or at least they should understand proportions and that the dose makes the poison. They should see a medical doctor [2], who at least understands proportions and that the dose makes the poison, and follow the advice from the FDA, which is made by people who read several research papers and extended reports, instead of only one press release about a preprint written by a moron who somehow got a position in a university or hospital.

I guess politicians don't read the full paper (except Merkel?), but they hire experts to read the articles and give advice. If you are a politician and the "expert" is reading only the abstract without checking the full paper and the journal and other related stuff, you should fire the moron.

[1] The chemistry of chlorine dioxide is so simple that it obviously can't possibly work, which makes me angry. The other stuff doesn't work either, and there were no good reasons to expect it to work, but at least it's not obvious from elementary chemistry that it can't work.

[2] I have horror stories about medical doctors too. Always ask for a second opinion for important stuff.


I'm quite confident that there are cliques within "science" where papers are admitted without so much as a glance at their body. Some people simply cannot be bothered to get past the paywalls, others accept on grounds outside the content of the paper, like local reputation or tenure. Others are asked to review without the needed expertise, qualification, or time to properly understand the content. Even the most honorable reviewers make mistakes and overlook critical details. Then there are the papers which are (rightfully so) largely about style, consistency, and, honestly, fashion.

How can we yield results from an industry being led by automated derivatives of the past?

Is an AI-generated result any less valid than one created by a human with equally poor methods?

Will this issue bring new focus on the larger problems of the bloated academic research community?

Finally, how does this impact the primary function of our academic institutions... teaching?


> How can we yield results from an industry being led by automated derivatives of the past?

Even a human researcher needs experiments to validate ideas. AI can generate plausible ideas, so why not run the experiments and let it learn from the outcomes? The learning comes from experimentation; that's how models escape the derivative trap. AlphaGo invented move 37, proving AI can be creative and smart.


Interesting questions. I don't think most of them have a definitive answer, so this is my opinion.

> I'm quite confident that there are cliques within "science" where papers are admitted without so much as a glance at their body.

It's possible. I don't know every single area, but I guess it's not common in most serious branches of science.

> Some people simply cannot be bothered to get past the paywalls, others accept on grounds outside the content of the paper, like local reputation or tenure.

My friends say they can ask Alexandra for a copy. A few years ago it was also common to ask a friend at another site that has access to a copy.

> Others are asked to review without the needed expertise, qualification, or time to properly understand the content. Even the most honorable reviewers make mistakes and overlook critical details. Then there are the papers which are (rightfully so) largely about style, consistency, and, honestly, fashion.

That's why you RTFP instead of trusting the journal or the reviewer or just the abstract. I've seen "dubious" papers published in good journals.

> How can we yield results from an industry being led by automated derivatives of the past?

It's already happening with natural intelligence. Every time there is an interesting paper, other groups rush to publish variants or combinations with other results, to reach the annual quota or get enough points for the graduate student. Once an AGI can read all the papers and combine them, there will be very few low-hanging fruits left to pick.

> Is an AI-generated result any less valid than one created by a human with equally poor methods?

For now, AI is too stupid to get the details right. That's why it's important to read the full paper instead of only the abstract. Once AI is intelligent enough, the result may be as bad as a cheating human's. Let's hope the AI has a good PRNG to create credible noise, because that's a part humans do badly.

> Will this issue bring new focus on the larger problems of the bloated academic research community?

Nah.

> Finally, how does this impact the primary function of our academic institutions... teaching?

I don't understand the question. Do you fear that professors in some areas will just send fake papers written by ChatGPT 4.0, and the journal and the community won't notice? There are a lot of predatory journals, open-peer-review journals and other bad journals that are publishing a lot of crap, usually by professors at bad universities, or just as a fancy achievement for the CV. A good AI will increase the amount of crap, but it will just be ignored.


> I don't understand the question.

No, actually I'm curious if this could open up the schedule for professors who want to spend more time teaching to design and develop better curriculums for their students. But I'm probably being overly optimistic about the number of profs who actually want to teach.


There are some universities that don't require research, so if you want to only teach you can go to one of them. It's a solved problem. Anyway, the best universities require research; if you opt out of research you get less money, less prestige, and not the top students.

Also, accumulating published papers is important to get a new position in the future, so only teaching is a risk for your future career.

For teaching some of the topics to advanced students, it's important to have people who do research and are up to date with cutting-edge results. Also to babysit the graduate students so they can publish their results. For teaching the students in the first years, research is probably overrated.


Good points.

It's just been a while since I was inspired by anyone's research, I guess.


There are many interesting results that fly under the radar. If you don't want to count big things like LIGO, my favorite to talk about is magnetoresistance, IIRC the giant one: https://en.wikipedia.org/wiki/Giant_magnetoresistance [It's not my specialization, so I may have a few details wrong.]

You can explain it to a technical friend.

Start explaining the two possible spins of electrons, and how they cause the existence of two currents inside a conductor. This is not important in a normal conductor like copper, but it's important inside a magnetic conductor like iron. So you get a different resistance for each of the two currents: one for the spin in the same direction as the magnetic field, and another for the opposite spin.

You can make a sandwich of iron-copper-iron. If there is no external magnetic field, the two iron parts have opposite magnetizations and the total resistance is higher. If there is an external magnetic field, the two iron parts have the same magnetization and the total resistance is lower. Anyway, the difference is not very big.
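The sandwich story is basically the two-current (Mott) model: the two spin channels conduct in parallel, and each iron layer is a low resistance for one spin direction and a high resistance for the other. A minimal sketch of the arithmetic, with made-up resistance values (the numbers are illustrative, not physical):

```python
def parallel(a, b):
    """Two spin channels conduct side by side, like parallel resistors."""
    return a * b / (a + b)

# Per-layer resistance for a spin channel: low if the spin is aligned
# with that layer's magnetization, high if it is anti-aligned.
# (Hypothetical values, just to show the effect.)
r_low, r_high = 1.0, 5.0

# Parallel magnetizations (external field on): one spin channel is
# "easy" in both iron layers, the other is "hard" in both.
R_P = parallel(r_low + r_low, r_high + r_high)

# Antiparallel magnetizations (no external field): each spin channel
# is easy in one layer and hard in the other.
R_AP = parallel(r_low + r_high, r_high + r_low)

gmr_ratio = (R_AP - R_P) / R_P
print(R_P, R_AP, gmr_ratio)
```

With any r_low < r_high, the antiparallel (no field) configuration comes out more resistive, which is the whole effect: reading the resistance tells you whether an external field is present.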

[Ideally, your friend should be bored by now, uninterested in abstract currents no one cares about.]

It gets more interesting if you have many layers of iron and copper, because the difference is higher, so they call it "giant". It is used in the read heads of hard disks, like in your friend's laptop. [Your friend will never see it coming!]

It's interesting because it mixes weird abstract quantum properties with engineering to make it more efficient, and you get a device that everyone has. For some weird reason, no one talks about it. And the sad part is that SSDs are killing the punch line of the story :( .



