A journal dedicated to negative results is IMO not the solution here. Academics must publish in reputable, prestigious journals in order to advance their careers (a publication in a non-prestigious journal basically counts for nothing). A journal dedicated to negative results is never going to be a prestigious, impactful, or widely-disseminated venue.
But that's a self-fulfilling prophecy. If more people published their null results in such journals, one of them would emerge as the most prestigious null-result journal. It seems like an acceptable solution, especially if the alternative is not publishing at all and increasing the positive-result bias.
Being the most prestigious null-result journal doesn't rise to the level of being a prestigious journal, though.
Regardless, this is the reality today. I don't think trying to boost a null-result journal to the level of Nature or Science is a better way forward than pressuring Nature or Science (and journals of similar caliber) to publish more null-result work.
As far as career advancement goes, I don't work in academia, but I would be more confident walking into an interview with a list of difficult and impactful experiments that found no effect than with even one highly cited paper in a prestigious journal that was later debunked by a couple of grad students in a null-result journal. The latter prospect should also frighten journal publishers into taking null results more seriously (especially if they published the earlier work), but only if the threat is real.
Most assessment processes in academia are purely administrative: they count the number of cookies you have earned and don't take the actual quality of the work into account.
In my country, it goes as far as "objective" point scales like: publication in an ISI JCR-indexed journal, 1 point for Q4, 2 points for Q3, 3 points for Q2, 5 points for Q1; publication in a non-indexed journal, 0.1 points.
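To illustrate how mechanical this is, here is a minimal sketch of the tally in Python (the publication lists below are made up; only the point values come from the scale above):

    # "Objective" assessment: quartile of the journal -> points, nothing else.
    POINTS = {"Q1": 5, "Q2": 3, "Q3": 2, "Q4": 1, "non-indexed": 0.1}

    def score(publications):
        """publications: a list of quartile labels, e.g. ["Q1", "Q3", "non-indexed"]."""
        return sum(POINTS[q] for q in publications)

    # Five near-identical papers in Q2 journals outscore one careful Q1 study,
    # and no step of the tally ever looks at the papers themselves.
    print(score(["Q2"] * 5))  # 15
    print(score(["Q1"]))      # 5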
No one ever looks into whether the paper is good or crap, or whether the author has managed to submit almost the same paper to five journals. In fact, I know of cases where honest people tried to look into that kind of thing, but they weren't allowed to because it was against the published "objective" scale.
I don't think the system is equally rotten in every country or institution; my country is probably among the most extreme in this respect. But this kind of assessment seems to be the most common AFAICT, judging by how often the impact-factor cult is denounced by international scientific societies.
Neither of the candidates you describe will even get a faculty interview.
Reply to below: I think it's valid. The first researcher has not demonstrated they will be a leader/pioneer/inventor/theorizer/discoverer of new research findings, if that's all they've done - and that is what reputable research universities are looking for. The second researcher has, well, no findings at all.
Edit: To clarify, I was also assuming some typical amount of other research credentials. From Al-Khwarizmi's reply it sounds like the second candidate is likely to be the only one interviewed, which rules out the possibility of hiring the first candidate, the superior experimentalist.
If they have a good research portfolio otherwise, then the first candidate's chances are unaffected, and the second candidate's chances at an interview are probably reduced.
The first candidate's replication studies won't really count for anything (maybe a slight boost). The second candidate's bogus studies will count negatively.
The process he describes is not reflective of US research university faculty hiring practices - the faculty will read your papers and solicit expert opinions from other leading researchers in the same research areas as you.
Chances of those grad students being correct are slim.
Since you were published in this hugely impactful, prestigious journal that thousands of people have read, and were debunked by those grad students in a journal that nobody reads, no one is even aware that it happened.
Why is there no prestige associated with getting rid of rubbish?
As a programmer, the best feeling in the world is deleting code: reducing the cognitive load of the overall system without sacrificing its effectiveness.
Does the academic world have the concept of technical debt?
One flaw in your analogy is that if all you actually do is delete code (without replacing it), that's not usually valuable.
It is easier to publish null results than to publish legitimate positive results (easiest of all, of course, is to publish bogus positive results). Publishing null results doesn't require creativity or brilliance, just competence (enough competence to replicate).
Of course, it is still valuable, so a thoroughly analyzed and argued negative result should be publishable in prestigious venues - but this is more a reflection of its value to the community than of the actual novelty/brilliance of the work.
Put another way: a university is not going to hire you if all you've published are null results. They want leaders who are going to pioneer/discover/invent/theorize new fields.
The Catholic Church has for centuries employed an advocatus Diaboli, someone whose entire job is to examine claims of miracles and argue that they are in fact natural phenomena. They are priests who have been trained in science (there is a perhaps surprising number of those).
I think this is a good thing. There should be somebody whose job it is to poke holes in the castles we build.