Many hiring decisions in academia still rely strongly on the reputation of journals where applicants published. This may be less accentuated in computer science, where most research is presented at conferences and preprints are often available on the arXiv, but it's a very important factor in many other disciplines. It's also the primary reason why researchers don't boycott these journals: most of them are simply not in a position where they can afford to without seriously damaging their career prospects.
See also https://en.wikipedia.org/wiki/Price_of_anarchy - a system can get stuck in a terrible state where everyone's best move is to keep going. A much better equilibrium could be constructed (Price of Stability), but how do we get there?
> Many hiring decisions in academia still rely strongly on the reputation of journals where applicants published.
This is very weird to me. It seems to me like a variant of the 'argument from authority' fallacy. Publications should be judged on their content, not on who owns the printing press.
If I'm reading a paper in Nature, I can assume that the editors and reviewers did a first check on the content (BS can still get through, but it is rare). A self-published, unknown journal gives me none of that. Thus, when making hiring decisions, it does help to overweight established journals. And in academia, most tenure-track hiring makes mistakes costly.
I do not like the current system, but established, well curated journals do provide some benefits in academia hiring. They do damage, too, but to a different group. My 2c.
That's possible in a few exceptional cases, but at least in CS, you can see a very visible quality drop as you go from the top conferences to the second-tier conferences.
>"If I'm reading a paper in Nature I can assume that the editors and reviewers did first checks on content (BS can still get through, but it is rare)"
You think stuff published in Nature is better? I've found the opposite: Nature publishes papers with "sexy" results and poorly described methodology. Nature is one of the worst journals; I cringe when info I want is in a Nature article (although with supplements it is getting better).
Well, that exists in the form of citations. The more a paper is cited, the more its authors are valued, just as the more often a site is linked to, the higher its PageRank. And since you're not dealing with massive numbers of sites backed by SEO experts, a straight citation count is good enough without any of the corrections included in the PageRank algorithm.
Hiring committees look at both the quantity of publications and the citations to those publications. The problem is that quantity doesn't always indicate quality, which is why journal prestige acts as a proxy measure.
I always thought this was the impact factor of a journal.
However, according to [1] that is not the case. [2] mentions the 'Eigenfactor', which seems to be closer to emulating PageRank. I haven't read much about it, though.
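For what it's worth, the standard two-year impact factor is just a ratio: citations this year to the journal's last two years of articles, divided by the number of citable articles from those two years. A toy sketch (the inputs and numbers here are invented for illustration):

    # Two-year impact factor for year Y: citations received in Y to items
    # published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2.
    # (Hypothetical inputs; real IF calculations argue endlessly about
    # what counts as a "citable item".)
    def impact_factor(year, cites_to_pub_year, items_in_pub_year):
        cites = cites_to_pub_year[year - 1] + cites_to_pub_year[year - 2]
        items = items_in_pub_year[year - 1] + items_in_pub_year[year - 2]
        return cites / items

    # 210 citations in 2017 to the 120 items from 2015-16 -> IF = 1.75
    print(impact_factor(2017, {2015: 90, 2016: 120}, {2015: 60, 2016: 60}))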
PageRank is "transitive" in the sense that it not only counts citations, but also citations-of-citations, and so on (toy sketch below).
I wonder though if such a system can be easily gamed (see SEO), without damaging the reputation of offending authors.
Also, we should perhaps have the concept of "negative citations". For example, when the text contains "In this paper we show that the approach of [..] is incorrect."
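To make the "transitive" part concrete, here's a toy power-iteration PageRank over a tiny, invented citation graph - a sketch only; real citation metrics (e.g. the Eigenfactor) add many refinements such as excluding self-citations:

    # Toy PageRank over a citation graph: cites[p] = papers that p cites.
    # Damping factor 0.85 as in the original PageRank paper; the graph
    # below is invented for illustration.
    def pagerank(cites, d=0.85, iters=50):
        papers = list(cites)
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iters):
            new = {p: (1 - d) / n for p in papers}
            for p, refs in cites.items():
                if refs:
                    for q in refs:       # pass credit on to the cited papers
                        new[q] += d * rank[p] / len(refs)
                else:
                    for q in papers:     # paper cites nothing: spread evenly
                        new[q] += d * rank[p] / n
            rank = new
        return rank

    # A cites B and C; B cites C. C scores highest because credit flows
    # to it both directly and transitively through B.
    print(pagerank({"A": ["B", "C"], "B": ["C"], "C": []}))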
Maybe, but it will take some work. IMO the main challenge is to get wide buy-in.
Many researchers do not love the current system but are mostly OK with it. They are not looking to topple it; if everyone else switches to a different option, they will switch too, but they are not the passionate vanguard who will drive the change.
I agree that good papers are likely the most cited ones. However, how are the people citing a paper going to find it? The traditional mechanism is for the paper to gain exposure by being published in a prestigious journal.
> Self-published, unknown journal gives me none of that.
How can you know that?
Edit: I don't know why I'm getting downvotes. My question might be naive, but it's serious. I think it is not self-evident why an unknown self-published journal couldn't give you that. Elsevier was also once unknown. To me it sounds like a logical fallacy, but I wanted to know your take on it. It's pretty cheap to just downvote me for asking.
It's the opposite: you can't be sure that a self-published journal has editors and reviewers doing a first check. This means you would need to confirm and audit its practices yourself.
Compare this to a publication in Nature. There, by virtue of the name, you know the article is worth more. (Note that in practice, Nature apparently has quite a few sham publications.)
If we were talking about self-published articles, I'd agree, but we are not. We are talking about self-published journals.
By "self-published journals" I mean journals brought out by the researchers or research organisations themselves. I don't see why Germany's or even Europe's universities and research institutions couldn't publish a peer-reviewed and checked journal in collaboration, without a third entity, that can provide the quality of, say, Nature.
> I don't see why Germany's or even Europe's universities and research institutions couldn't publish a peer-reviewed and checked journal in collaboration, without a third entity, that can provide the quality of, say, Nature.
You mean a university press or a society journal? That model worked extremely well for more than 100 years, until in the 1990s the societies sold their journals to Wiley, Springer and Elsevier, and somewhere around that time Springer took a wrong turn. You'd really want to ask why the selloff happened when it did.
I believe when they said "self-published, unknown" journals, the key word was unknown rather than self-published. They're not talking about journals self-published by established scholarly societies and so forth, which are "known." Nature and other known journals come with an established reputation (deserved or not) for publishing high-quality work; an unknown journal (i.e. one with no established reputation one way or the other) by definition cannot provide this.
In recent years, journals that claim to have a peer review process but which actually offer dubious or no review or quality control have proliferated (some are even published by companies like Elsevier). Having no established reputation doesn't prove that a journal is bad, but having an established reputation is a heuristic shortcut for evaluating whether it is good (at least in the "no one was ever fired for buying IBM" sense). Hiring committees use this heuristic to save time when filtering candidates.
Would it be a good use case for a blockchain validation system? A decentralized (scientific) content reviewer? Paying for the scarce resource (the attention of well-educated/trained scientists) with some new "SCI-coin"?
Oh, yes. The problem is not research, the problem is the administration and its permanent quest for "objective" (whatever that means) rating criteria...
That sounds like a better idea, but people say (and as an outsider, it does sound believable, I guess) that it's pretty much impossible - see for example [1]:
> I regret to say that in reviewing perhaps a hundred grants or job applications and trying to find the ten grants to fund or one person to employ, I do not read every paper in the bibliography and assess the research on the basis of my limited understanding. I just don’t have the time or expertise to read and judge all the papers.
They are, but it takes time to build that reputation. Also, publishing houses with big pockets can hire good reviewers etc. and get established quicker. If Nature churned out rubbish papers all the time, people would stop trusting it, but their hit rate is pretty high and retractions are issued for poor papers.
1) I was trying to talk in general terms. I understand that the peers doing the peer review are usually not paid but that there are other editors involved. I probably should have made that more clear. One of my fellow PhD students is now an editor for a journal.
2) I was aware of this also. I have seen some weird stuff on arXiv but it is normally trustworthy within the realms of Physics. I must admit my use of it was higher as an undergrad than a postgrad. Again I was probably being imprecise in my terminology but it is an archive of pre-prints.
3) 'Slip of the tongue'. I'd edit that in my original comment, but it turns out you can't edit a comment after that long. I didn't know that, but I think it's probably a good idea overall. I meant Open Access, not Open Source.
I would consider myself pretty well informed, as I was in academia until a move to industry a few years back and still work fairly closely with academics. That being said, I am always happy to be corrected; I know people's opinions on this subject are a moving target, and it is probably different at different levels of the academic hierarchy. Thanks.
See, this is the point I still don't understand. None of the things you mentioned involve the publisher or title of the journal. All of those reputational aspects are functions of the reviewers and editors, right?
The people accepting and reviewing the papers are the sole source of any journal's value. I mean, people didn't stop listening to Lou Reed or Prince or Radiohead just because they switched labels. Nobody buys music because of the label.
So what's stopping a mass defection of a reputable journal's editorial staff to a new, open title? I'd expect this level of brand loyalty from mindless consumers buying material junk, but not from a scientific community supposedly dedicated to objectivity and quality.
But you don't know the reviewers when you are looking for articles about something.
Reviewers aren't tied to the articles: you don't know who reviewed your paper, and under good practice the reviewers won't know the authors of the papers either. The point of trust is the journal: the journal has its impact, its reputation, built as good articles are published and cited. And the cycle goes on: good reviewers mean good articles published (on average), which attracts attention to the journal, which is then searched for new articles and attracts better reviewers, who do better reviews, which...
Want to break that? Start giving credit to the reviewers, and that will be gamed too: people will have to pay to publish with good reviewers (and where is the impartiality in that?), or someone will bundle them into a journal/company so that the weight comes off the reviewers' names and goes back onto the journal's impact factor... oops...
"There are some open source websites that are trusted (e.g. https://arxiv.org"
Lol. Citing arXiv is even worse than citing Wikipedia, and some reviewers will reject arXiv citations altogether. It's full of junk 'science' by crackpots. Not just 'hey look, I ran this regression on a public dataset' bad, but all-out 'I was abducted by aliens, and I made up an equation to show that they took me to Pluto' bad. Maybe it differs by field, I don't know - but suggesting arXiv is 'trusted' the same way Nature is 'trusted' is inane.
Yeah, nobody would consider an arXiv-only article by an unknown author. There are plenty of P=NP "proofs" published on the arXiv. But that isn't what it is for: it's a preprint server where you have to do your own quality control. A lot of work in the theoretical CS community is published at conferences and on the arXiv, often in extended form (additional proofs, plots, etc. that didn't fit into the conference publication's page limit).
From https://arxiv.org/help/general: "Disclaimer: Papers will be entered in the listings in order of receipt on an impartial basis and appearance of a paper is not intended in any way to convey tacit approval of its assumptions, methods, or conclusions by any agent (electronic, mechanical, or other)."
I was not meaning to equate arXiv to Nature. I can see how I implied that, though.
As I've said in another comment, it was used much more frequently during my undergrad years than at postgrad. Within Maths and Physics it is generally trustworthy, as the papers are pre-prints of submissions to established journals.
Would I cite an arXiv reference in a paper for submission or a thesis? 99% of the time, no. It's useful for citing on the web, though, where your readers may not have access to the final journal article behind a paywall.
> the reputation of journals where applicants published
Yes, which is why ordinary individuals can't just start a journal. But governments and universities come with already-established reputations. If Cambridge University started its own open-access biology journal (or whatever) and encouraged its scientists to contribute, I don't think it would be hard to imagine it succeeding.
To have a successful journal, you don't need a particular university's researchers to publish in it, even if it's a very reputable university; you need everybody working in the field to want to publish there. None of the supposedly really simple solutions suggested in this thread work. It's not like academics haven't thought about it.
Sure, that's why you get a particular university's researchers to publish in it. That's exactly how you start making it so that everyone in the field wants to publish there.
They may have thought about it but has anyone actually tried it? I'm genuinely asking.
Edit: It seems they have and it worked. See mjn's answer below.
Their scientists have careers that span beyond those universities, so "encouraging them to contribute" is easier said than done. (Also, their scientists work in very diverse disciplines and thus have many, many different journals to contribute to - so each individual journal still wouldn't have enough good scientists contributing to carry it.)
> Many hiring decisions in academia still rely strongly on the reputation of journals where applicants published.
Maybe there could be a policy of only counting articles in open-access journals.
> It's also the primary reason why researchers don't boycott these journals: most of them are simply not in a position where they can afford to without seriously damaging their career prospects.
> Maybe there could be a policy of only counting articles in open-access journals.
Unfortunately it's not a policy; it's the easy option for those evaluating scientists, which is why they do it. Sometimes there's a policy to ignore the reputation (or rather, the flawed metric that's supposed to represent it, called "Impact Factor"), but that's not widespread. Academics unfortunately have more to concern themselves with than spreading Open Access.
You start with the people who haven't been educated to rationalize the racket, in journals that have the least reputation in the first place. (Ignoring fake journals, journal mills, etc.) They are also more likely to understand the underlying technology and would probably do a lot of the implementation work, too.
Same for grads.
Then you get the journals that only function within the scope of a single university - i.e., they are used to publish and gain recognition within the university but are of only limited value outside of it. Probably best to start with a highfalutin uni whose stature could be used to convince them that the publicity of being the first mover is reason enough for the risk.
Then regional journals, where again the visibility is relatively low and personal relationships carry more of the credibility. (For example, an anthropological journal that covers a small region.) Because these academic communities can probably already recite the names of all the current scholars in the field, moving the research content isn't as disruptive.
In each case the science journals should go first. Humanities have a natural envy of science's verifiability/falsifiability, so they'll quickly follow whatever the scientists do.
You don't have to boycott anything - if a university starts a new journal and its respective academics start publishing with it, they have plausible deniability about why they're also publishing in this journal.
The problem isn't that you're somehow branded a 'rebel' if you refuse to publish in Elsevier journals. It's that, when you're applying somewhere, they'll rank you by SUM(paper_i * impact_factor(journal_i)) (toy sketch below). That score will suffer, and nobody will care about your reasons.
You're also underestimating the independence of researchers at a university. They're not employees in the usual sense. In fact, the best method to stop them from publishing in some journal Y would be for the administration to tell them to publish in Y.
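In toy form, that heuristic is something like the sketch below (journal names and numbers are invented; real committees are rarely this explicit, but the effect is similar):

    # Crude ranking heuristic from above: sum the impact factors of the
    # journals a candidate's papers appeared in. Names/numbers invented.
    impact_factor = {"Nature": 42.8, "Obscure Open J.": 0.4}

    def candidate_score(papers):
        return sum(impact_factor.get(journal, 0.0) for journal in papers)

    # One Nature paper outranks twenty papers in the open journal,
    # regardless of what any of the papers actually say.
    print(candidate_score(["Nature"]))                 # 42.8
    print(candidate_score(["Obscure Open J."] * 20))   # 8.0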
Also, in some countries (the UK is a prime example) the rules governing "impact" are largely dictated and/or influenced by central government. So you need to change not just your local institution but a whole country's institutions in one go; at best, a multi-year shambles of fiefdom-protecting politics.
See [https://news.ycombinator.com/item?id=15007958]. Their impact factor is now 2.450 vs 1.848. Of course, I am sure it's way harder to do this in far more conservative and larger fields. At the same time, it may just be a case of starting small.
If done collectively, that has worked before. In 2001, almost the entire editorial board of the Springer journal Machine Learning resigned [1] to lend their support to a new open-access journal, the Journal of Machine Learning Research, which quickly supplanted the former as the top journal in the field. It helped that this included a lot of senior people in the field (Stuart Russell, Geoffrey Hinton, Leslie Kaelbling, etc.) who would be the ones judging ML hiring and tenure cases at many universities.
That only works if the editorial board "flips", though - unfortunately, universities mostly aren't able to start journals with reputable editorial boards.
I'm currently leaning towards finding a way to convince large numbers of boards to flip as the most viable path to proper open access.
You mean a new journal for every scientific field? That's a helluva lot of journals. Then you need to attract people to send their papers there. There are countless local journals that mainly attract local authors now, and those are a joke when they are not outright junk. To publish even a half-decent journal, you need to attract researchers from all corners of the world to send you their best work. How?
Many papers have multiple authors from different universities/institutions. It would be difficult if just one was boycotting the top journals in the field.