It feels like there should be some kind of tool, analogous to valgrind/ASAN/TSAN/UBSAN for code, that is run on all the images in any kind of paper before it is published to check for these kinds of things. Some of them seem to be honest mistakes, like linking the same image in two different places, and this would catch those so nobody needs to bother checking for them in the future. It would also have the byproduct of catching the malicious image doctoring that some people do, which is just horrific for science in general.
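For the honest-mistake half of that (the same image pasted in two places), even something as simple as perceptual hashing would go a long way. A minimal sketch in Python, assuming Pillow is installed; the folder name, the distance threshold, and the PNG-only globbing are all invented for illustration:

```python
# Illustrative sketch: flag near-duplicate figures with a simple 8x8
# average hash (aHash). Real forensics tooling is far more robust; this
# only catches straightforward reuse, not doctoring.
from itertools import combinations
from pathlib import Path

from PIL import Image  # pip install Pillow


def average_hash(path: Path) -> int:
    """Downscale to 8x8 grayscale; each bit = pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def find_near_duplicates(folder: str, max_distance: int = 5) -> None:
    paths = sorted(Path(folder).glob("*.png"))  # simplification: PNG only
    hashes = {p: average_hash(p) for p in paths}
    for p1, p2 in combinations(paths, 2):
        if hamming(hashes[p1], hashes[p2]) <= max_distance:
            print(f"possible duplicate: {p1.name} <-> {p2.name}")


if __name__ == "__main__":
    find_near_duplicates("figures")  # hypothetical folder of a paper's images
```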
Wouldn't that just give the fakers a tool that lets them easily tweak their fakery until it can't be easily caught?
The saving grace here is that many of these scammers don't have the sophistication or expertise across the wide possibility space of fraud-discovery techniques to know where they're exposing themselves. Every type of check that we make automatable and trivially repeatable by anyone will immediately cease to detect anything except the laziest or most incompetent scammers.
Publications should be responsible for the veracity of the content they publish. Instead they just shift the blame. They are 100% about profits and provide zero advantage; no wonder they are being replaced by free alternatives. But that doesn't solve the problem of fake papers and plagiarism.
> Publications should be responsible for the veracity of the content they publish
That is ridiculous and betrays a serious misunderstanding of the role of publications. The publications are a forum for discussion. People say incorrect things in discussions all the time, and they can’t be corrected until they’ve been said. Where do you propose they get said? How do you propose the publication “know” what’s true or not? There’s no way to do it, otherwise why would you be publishing what you’re publishing? The whole point is that people don’t already know what you’re telling them.
Scientific publishing is super broken in many ways but this is a terrible solution.
And yet, they do try to reject papers for various issues that at least approximate veracity. No journal will accept a paper proposing a perpetual motion machine. What is that but a veracity check? What is peer review? Glorified spellcheck? No, we expect publishers to at least catch the obvious stuff.
If you design a publisher around the idea that it truly has no responsibility for veracity, then you get... arXiv. In that scenario traditional publishers truly provide no value.
Yes it is glorified spellcheck. They’re checking your methods. If you claim to have built a perpetual motion machine they almost certainly will not reject your paper on the basis that “perpetual motion machines are impossible,” but on the basis that your methodology [almost certainly] has an error.
"Your methodology probably has an error [because you got the wrong answer]" is not meaningfully different from "we're rejecting your paper because it's false". My point stands.
Ed: your other comment about how publishers used to just be mailing lists is actually kind of funny in how it proves my point. If we wanted publishing to just be a mailing list... we can just have a mailing list. Or use arXiv. But today's publishers have to at least pretend to do better than that, which they mostly do by supposedly filtering papers on quality, which in turn is, you guessed it, 80% veracity checks (oh, and also increasing the right people's citation counts).
Well yeah when you add stuff I didn’t say, it does sound ridiculous.
I didn’t say they would reject it based on the result and presuming an error. They would find and reject the error (or on the basis of a million far more superficial issues which are actual good targets for reforming scientific publishing).
What did I add that's meaningfully different from what you said, i.e. in terms of outcome?
Ed: ok, I think I get it, you mean they'll somehow identify a specific method error, and if they can't then they'll... just accept it? I simply don't believe that. I think the journal both should and actually will reject papers that violate the second law of thermodynamics without finding a specific methodological flaw.
If the paper is completely methodologically sound and the only problem is that it violates the second law of thermodynamics, then we need to take another look at the second law of thermodynamics.
You're assuming that (a) reviewers not finding any methodology flaws in the paper means there are none, and (b) no flaws in the paper means no flaws in the work. Neither of those are good assumptions. And that's just the generically applicable arguments.
For the Second Law, Arthur Eddington had this to say:
> ...If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations - then so much the worse for Maxwell's equations... But if your theory is found to be against the Second Law of Thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
> You're assuming that (a) reviewers not finding any methodology flaws in the paper means there are none, and (b) no flaws in the paper means no flaws in the work
No, you are assuming that with your flawed understanding of what a journal is and what peer review does. Like this isn’t a matter of opinion. Peer review in fact and by design is not a stamp of veracity. The reason is exactly the conundrum you’ve backed into.
If someone produces apparently high-quality science that challenges a well-regarded theory, it is only under your model that the paper cannot be published. Under our model (the one currently in use today), the reviewers are expected to try to find holes in the methodology and, if they can’t find them, publish the paper anyway even if they disbelieve the conclusion. That way the broader scientific community can attempt to blow holes in the paper, and they frequently find things the authors and the peer reviewers missed. That is not a shot against the publisher: that is how science must work because of exactly the dynamic you’ve identified.
You're still not paying attention to what I'm saying, which is that journals are expected to catch the easy errors, not make an absolute determination of truth. I said this in my first comment on this thread.
That seems more like what blogs and posts are for, not academic journals. Unfortunately things have degraded so much that there is very little difference between a blog and a research article, but there should be a very large distinction.
We can compare against another mode of human-controlled transportation. There are 1.37 deaths per 100 million passenger-miles driving in the US [1]. In comparison, there are ~0.2 deaths per 10 billion passenger-miles flying. Converting into the same units, there are 137 deaths per 10 billion passenger-miles driving. So you are 685X more likely to die while driving/riding in a car than flying. That's almost three orders of magnitude worse! Humans are pretty terrible drivers in comparison to how good we are at flying.
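For anyone who wants to sanity-check that conversion, here's the arithmetic in a few lines of Python, using only the rates quoted above:

```python
# Back-of-the-envelope check of the driving-vs-flying fatality ratio.
driving_deaths_per_100M_miles = 1.37
flying_deaths_per_10B_miles = 0.2

# Convert driving to deaths per 10 billion passenger-miles (x100).
driving_deaths_per_10B_miles = driving_deaths_per_100M_miles * 100  # 137.0

ratio = driving_deaths_per_10B_miles / flying_deaths_per_10B_miles
print(ratio)  # 685.0 -> ~685x deadlier per passenger-mile, ~10^2.8
```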
Pilots have mandatory sleep cycles, drug tests, significantly greater initial training, backup pilots, and dedicated airspace. If you could wipe all of the tired, drunk/high, and teenage drivers off the road, I bet the driving stats would look significantly better.
We don't ask pilots to do basically anything. They are given dedicated lanes and have radar constantly monitoring them anywhere near an airport, which is the only place they might plausibly come into contact with another plane.
Compared to sharing roads with traffic going the opposite direction at high speed, inches away, it is no contest: the human driving numbers are the more impressive ones.
You should compare against GA (general aviation) to get closer to apples-to-apples. Comparing a highly regulated industry with everybody from 16-year-olds to 90-year-olds, over an extreme range of experience and health, isn't going to give you a useful result.
And even GA pilots are probably in much better physical shape than the general public that drives cars.
Flying (the kind done commercially) is a much easier task than driving, except maybe during takeoff and landing, which is not where the majority of miles are spent.
The pilot can basically sleep most of the way.
To be fair, it's never been easier to get access to thousands of GPUs in the cloud. It might be expensive, but that is an entirely different kind of barrier. Just a decade ago, the only way to get access to thousands of GPUs was to get access to a supercomputer at a national lab. Now anybody with enough money can rent thousands of GPUs (with good interconnects, too!) in the cloud. There's certainly a limitation from a money perspective, but access to the computational resources themselves is not a problem.
If your government passes a law saying that GPUs can only be purchased or rented with a license, as OP was suggesting, all of that capacity disappears with the snap of a finger.
"Dear government, yes, I was spending my money accessing an illegal minution, please don't throw me too far under the jail"
I'd like you to stop and think about this for a minute... which country is going to be exporting powerful GPUs? If the US blocked GPU resources, do you think China will suddenly start allowing everyone to have them? No, they'd jump right on board with their own limitations so they didn't have to worry about internal issues.
Alright, here's an idea: consciousness is a spectrum that emerges when a system develops an automatic self-correcting mechanism for interacting with the external physical universe. In some sense all animals (including humans) wandering this planet are conscious, because we all learn to build our actions around interactions with the external physical universe, e.g., we learn how to walk/swim/fly under the force of gravity without falling/crashing, we learn that the square peg fits in the square hole and not in the round hole, etc. The feedback from these interactions allows us to automatically adjust our future actions without external help. In this sense we learn what works, i.e., what is "true" (at least under the laws of this universe). Some animals happen to have "higher" consciousness in that they interact with the universe in more sophisticated ways, learning "deeper" truths, but all animals possess some degree of consciousness under this definition (my cat is certainly conscious; she has learned how to manipulate the external world, especially me, perfectly at this point). Consciousness is a matter of degree, not a binary property that one either satisfies or doesn't.
This definition also has the nice property of showing why current LLMs don't fit on the spectrum. They have no concept of learning what is true and automatically self-correcting. They will happily tell us things that are obviously not true, e.g., that the square peg fits in the round hole, and then insist that they are right, when a basic physics experiment would disprove their assertions. Interestingly, though, a linear feedback control system like the one we might find in an elevator does possess some degree of consciousness under this definition: it interacts with the physical world, identifies the true position of the elevator, moves it where it wants, and self-corrects when necessary. It might be primitive, but I for one believe it is "conscious" at some level, and definitely more so than an LLM. :)
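To make the elevator example concrete, here's a toy proportional controller in Python. The gain, target, and convergence threshold are all made up; it just shows the sense/compare/correct loop I mean by "automatic self-correction":

```python
# Toy proportional controller for an elevator: sense the position, compare
# it to the target, apply a correction proportional to the error.
# All numbers are invented for illustration.
def simulate_elevator(target_m: float = 10.0, steps: int = 50) -> None:
    position = 0.0  # measured car position in meters
    gain = 0.3      # proportional gain
    for step in range(steps):
        error = target_m - position  # "sense" the external world
        if abs(error) < 0.01:
            print(f"settled at {position:.2f} m after {step} steps")
            return
        position += gain * error    # "self-correct" toward the target

simulate_elevator()  # settles at ~10.00 m after ~20 steps
```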
Almost certainly this definition is incomplete and flawed in many aspects, but I think it's at least self-consistent.
Nice view. I see it that way too, for the most part. Yes, consciousness is a spectrum, and so is intelligence; on top of that, intelligence has multiple types with different qualities. What is happening these days is that our machines that simulate some form of intelligence are forcing us to refine our crude everyday concepts such as consciousness and intelligence, and also machine, AI, etc. In a few generations, people will use much more accurate concepts for these things; our vocabulary will expand greatly. Unfortunately many of us will keep using inaccurate concepts and make life dangerous for us all, just as the current inaccurate concepts of superior ethnicities or groups, of political power, and of disregard for a clean environment are putting us in danger and ruining many lives.
I don't know of any state that gets less than it gives. Most articles that claim otherwise ignore tons of things like food stamps, Section 8, or military spending in that state.
I'm still using a Pixel 2, now 4.5 years after I got it (dealing with the lack of security is easy: don't put or do anything I care about on the phone, and turn off the power when having sensitive conversations nearby). It can go two days between charges.
The clever thing about interferometers is that they're actually measuring space and time in two different dimensions concurrently and then looking for changes in the interference pattern between light moving along each of the dimensions. Simplistically, imagine a gravitational wave propagating along one dimension of the interferometer (realistically gravitational waves will never be perfectly aligned with any direction of the interferometer). Space will be distorted in that dimension but not the other, and we can notice the change in the resulting interference pattern. In practice, gravitational waves will come from all sorts of weird angles, but they will distort each of the two dimensions differently and allow us to figure out what direction they were propagating in from the interference pattern that's observed.
> The clever thing about interferometers is that they're actually measuring space and time in two different dimensions concurrently
I would say that we measure space in two different directions concurrently.
> and then looking for changes in the interference pattern between light moving along each of the dimensions
This is true, but I like to push back against the use of the phrase "interference pattern." We're not looking at a "pattern," which to me brings to mind a complicated interference pattern resolved spatially. We do not resolve the "interference pattern" spatially. We measure the amplitude (er, power) of the light coming out of the interferometer with a photodiode (a single pixel, if you will); there's a toy calculation at the end of this comment that makes this concrete.
> In practice, gravitational waves will come from all sorts of weird angles, but they will distort each of the two dimensions differently
This is true. The detector's sensitivity to waves coming from different directions is the "antenna pattern."
> ...they will distort each of the two dimensions differently and allow us to figure out what direction they were propagating in from the interference pattern that's observed.
With a single detector, we cannot determine the direction in which a g.w. is propagating. With a single detector and a transient (short-lived) source, we cannot tell the difference between a loud source in a direction where the detector is not very sensitive versus a quieter source in a direction where we have good sensitivity.
With a network of detectors and/or signals that persist for a long time (compared to the rotation of the earth, etc) we can resolve the source direction.
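To underline the single-pixel point above, here's a toy calculation for an idealized, lossless Michelson interferometer. It ignores the recycling cavities of a real detector, the operating point is chosen for illustration (real detectors sit near a dark fringe), and the numbers are only order-of-magnitude:

```python
import math

WAVELENGTH = 1064e-9  # Nd:YAG laser wavelength in meters (as used in LIGO)
ARM_LENGTH = 4000.0   # LIGO-like arm length in meters
P_IN = 1.0            # input power, normalized

def output_power(delta_L: float) -> float:
    """Power at one port of an ideal Michelson as a function of the
    differential arm length. This single number is all the photodiode sees."""
    phase = 4 * math.pi * delta_L / WAVELENGTH  # light round-trips each arm
    return P_IN * math.cos(phase / 2) ** 2

strain = 1e-21                 # plausible g.w. strain amplitude
delta_L = strain * ARM_LENGTH  # differential length change, ~4e-18 m

bias = WAVELENGTH / 8          # bias the operating point to mid-fringe
print(output_power(bias + delta_L) - output_power(bias))  # ~ -2.4e-11
```

The point: the detector output is one tiny power change over time, not a spatially resolved fringe pattern.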
> What again is the rush to vaccinate the lowest risk age group?
Because they still contribute to the chain of transmission to the most vulnerable members of society, likely disproportionately, since younger members of society tend to socialize more. Fighting a virus is a collective action: in order to stop transmission to the vulnerable, you need to cut edges along all paths through the contact graph. Furthermore, additional spread, even among healthy people with no side effects, increases the probability of mutations that lead to fitter variants capable of causing even more sickness and death.
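As a toy illustration of the cut-the-edges point (the contact graph is invented, and treating vaccination as complete removal of a node is a deliberate oversimplification; in reality it only reduces transmission probabilistically, as discussed below):

```python
from collections import deque

# Toy contact graph: nodes are people, edges are regular contacts.
# Vaccinating someone is modeled (crudely) as removing their node,
# i.e. cutting them out of every transmission path.
contacts = {
    "infected":    ["student"],
    "student":     ["infected", "parent"],
    "parent":      ["student", "grandparent"],
    "grandparent": ["parent"],
}

def can_reach(graph, src, dst, removed=frozenset()):
    """BFS reachability, skipping 'removed' (vaccinated) nodes."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_reach(contacts, "infected", "grandparent"))                       # True
print(can_reach(contacts, "infected", "grandparent", removed={"student"}))  # False
```

Cutting the young, highly connected node is exactly what breaks the only path to the vulnerable one.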
Vaccinated people can still get infected and transmit the virus to others, with or without mutations. The idea that vaccinated people are ‘safe to be around’ is an outdated fantasy.
I never said it stopped all transmission, but it will stop a large fraction of it. We're playing games of probability. Anybody that deals in absolutes is living in a fantasy world.
The theory of vaccinating people to protect others is taking another hit.
It looks more like we should focus on offering the vaccine to the people at risk (clearly identifiable: over 65, or with multiple comorbidities) for their own survival.
In my experience, the whole 10-year warranty seems a bit excessive for vehicles that are nigh indestructible [1,2,3]. My parents have had multiple Toyotas that made it to 300K-400K miles. My brother's Corolla was rear-ended by a semi doing 30mph and wasn't even close to being totaled. Eight years later, he's still driving it through swamps in south Florida. They are impeccable feats of engineering.