> The most impressive one so far is making a port of a ~100 KiB Linux a.out i386 program to a native Windows PE i386 executable, despite not having access to its original source code or even decompiling it.
Thanks! I've been working on-and-off on this for the past 20 months. It's the third prototype, the first one was a set of Ghidra Jython scripts [0] and the second one was a fork of Ghidra [1].
Quite the opposite, IMHO: when your program interacts with a user, you cannot panic each time something unexpected happens. Here are some examples of unexpected conditions:
- "Null pointer dereference"
- "Out of memory"
- "Disk is full"
- "File does not exist"
- "File does not exist in cache"
- "File exists but is corrupt"
- "Access denied"
- "Connection reset by peer"
It's pretty obvious that all of the above are generally unwanted.
However, putting them all in the same bag labeled "error", and forcing them to be treated the same way might be counterproductive. Sometimes you might want to panic. Sometimes you might want to retry. Sometimes you might want to ignore!
Now, if your program isn't interactive (such as a compiler), halting on any error might be a reasonable choice. But you still have to provide contextualized and accurate error messages, which is easy for the case "File does not exist", and a lot harder for the case "Out of range index".
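To make the point concrete, here is a minimal sketch in Rust of treating different error conditions differently rather than throwing them all in the same "error" bag. The cache-lookup scenario and the function name are hypothetical; the error kinds come from the standard library:

```rust
use std::fs;
use std::io::ErrorKind;

// Hypothetical cache lookup: the three reactions discussed above
// (ignore, surface, panic) mapped onto concrete error kinds.
fn read_cached(path: &str) -> Option<String> {
    match fs::read_to_string(path) {
        Ok(contents) => Some(contents),
        // "File does not exist in cache": expected, ignore and report a miss.
        Err(e) if e.kind() == ErrorKind::NotFound => None,
        // "Access denied": a configuration problem worth surfacing,
        // but not worth killing an interactive program over.
        Err(e) if e.kind() == ErrorKind::PermissionDenied => {
            eprintln!("cache unreadable at {path}: {e}");
            None
        }
        // Anything else is genuinely unexpected: here, panicking is acceptable.
        Err(e) => panic!("unexpected I/O error reading {path}: {e}"),
    }
}

fn main() {
    // A missing cache entry is not an error here, just a miss.
    assert_eq!(read_cached("/no/such/cache/entry"), None);
}
```

The point is that the caller, not the error type, decides which conditions are fatal.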
> But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.
What about Prolog? How is it not a counterexample to this?
> I was curious to see how the self-correcting mechanisms of science would respond [...]
> I was disappointed by the response from Southwest University. Their verdict has protected [a fraudulent researcher] and enabled him to continue publishing suspicious research at great pace.
The self-correcting mechanisms of science can only correct knowledge. Those mechanisms work mainly by requiring research to be checkable by others. Self-correction emerges from the accumulation of checks on the same topic, all leading to the same conclusion, and from the progressive retraction of bad research... not from the elimination of "bad researchers".
Efficiently "correcting" people, whatever that means, is a different beast. Such a mechanism belongs to an administrative entity that can issue decisions - and, by construction, can make mistakes.
How does bad research get retracted and corrected then?
As the author points out, the "data" in these papers is large enough to contaminate meta-analyses for years to come. And if the Bad Scientist continues to produce more of them, then decades to come. The consensus of the entire discipline will be swayed. Self-correcting this will be very difficult, require lots of data, and be unrewarding. It probably won't happen. Politicians consulting The Science on this subject will get erroneous conclusions and make erroneous decisions.
The Scientific Method is self-correcting. Academia, not so much.
It's probably very different across disciplines - in social studies like the ones in the main article, this is a big problem because, as you say, meta-studies will likely include these papers simply because they exist.
However, in more practical sciences, if someone fakes data to show that their method A works better than baseline B, then other people building on it find out that method A doesn't really work well for them for some weird reason, shrug, and ignore the bad paper. It doesn't get used or cited, while the correct results persist, get replicated, and accumulate citations.
The usual way is someone writing a better paper and everyone going "yeah, that's a better reading of the data".
Not sure why you say correcting an established consensus would be unrewarding? Sure, for unimportant details getting a correction out isn't much fun, but correcting an important point is basically the career goal of every scientist precisely because it is important and rewarding.
She stated it is unrewarding because the university didn't reprimand or remove the bad researcher, and because most journals didn't retract the bad papers. She didn't produce new studies disproving the results of the existing ones, which might have been a more rewarding pursuit for someone with that inclination. I'm sure the author has other ideas about what research she should do, and it's not for us to say.
Yeah, I hear this a lot. That the goal of every scientist is to disprove the consensus and overturn bad science. Yet every time I read a blog article by someone who has actually tried to overturn bad science, they say it's an uphill battle against bad incentives, vested interests and academic politics.
If you know of a scientist who has written about succeeding in achieving this objective and had a great time (in the last 20 years), can you point me to their writing, please?
It depends on how the physics engine has been integrated into your game/engine. The book "Game Coding Complete" shows how to isolate a 3rd-party physics engine so you can switch implementations.
Of course, every physics engine behaves a little differently from the others, so if your game is physics-centered (pinball, racing), switching implementations might result in a slightly different game - but having the core of your game depend on the implementation details of some 3rd-party library might not be a good idea anyway.
In general, the more you sprinkle your code with dependencies on 3rd-party libraries, the less control you have over the resulting product.
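The isolation idea can be sketched as: the game only ever talks to an interface you own, and each 3rd-party engine gets its own adapter behind it. This is a minimal illustration in Rust, not the approach from "Game Coding Complete" verbatim; the trait, types, and backend names here are all hypothetical:

```rust
// A minimal Vec2 so the sketch is self-contained.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec2 { x: f32, y: f32 }

/// The only physics API the rest of the game is allowed to see.
/// A real interface would also cover bodies, shapes, collision callbacks, etc.
trait PhysicsEngine {
    fn step(&mut self, dt: f32);
    fn set_gravity(&mut self, g: Vec2);
    fn gravity(&self) -> Vec2;
}

/// One concrete backend. Swapping libraries means writing another
/// impl of PhysicsEngine, not touching game code.
struct NullPhysics { gravity: Vec2 }

impl PhysicsEngine for NullPhysics {
    // A real adapter would forward this to the 3rd-party library's step call.
    fn step(&mut self, _dt: f32) {}
    fn set_gravity(&mut self, g: Vec2) { self.gravity = g; }
    fn gravity(&self) -> Vec2 { self.gravity }
}

fn main() {
    // Game code only ever holds a `dyn PhysicsEngine`, never a concrete backend.
    let mut physics: Box<dyn PhysicsEngine> =
        Box::new(NullPhysics { gravity: Vec2 { x: 0.0, y: 0.0 } });
    physics.set_gravity(Vec2 { x: 0.0, y: -9.8 });
    physics.step(1.0 / 60.0);
    assert_eq!(physics.gravity(), Vec2 { x: 0.0, y: -9.8 });
}
```

The trade-off is exactly the one mentioned above: the wrapper costs effort up front, but it keeps the engine's quirks out of your game logic.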
Your point still stands, though, for some very low-level "utility" libraries like Boost or the STL, or any standard library replacement, and more generally for libraries holding "vocabulary" types.
> You do not live in a video game. There are no pop-up warnings if you’re about to do something foolish, or if you’ve been going in the wrong direction for too long.
In the good old times most video games weren't like that.
It's a pity that those times are gone, to the point that video games are now used as a comparison point like this!
Not sure how old your good old times are, but relatively recently I played an indie game that gave me a very subtle footgun without any warning, pressed me into using it, and when I did the wrong thing, sent me in the wrong direction for the rest of the game. It was quite hellish.
This is indeed really impressive!
(direct link: https://boricj.net/atari-jaguar-sdk/2024/01/02/part-5.html )