I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.
Top universities should just found a non-profit, per subject, with a single paid facilitator and a single paid editor (per journal) to find peer reviewers and edit the papers into a monthly journal.
Modern tech has made it ridiculously easy to type, edit and publish such a thing if the inputs are LaTeX, Word, Markdown files or a Google Doc. And if you want it printed, there are shops that can do that for you for a small fee as well.
This should be 100% open access to everyone, extremely cheap and could be 100% funded by those who are still willing to pay for paper versions or by tiny contributions from the top 100 universities.
For years, my academic niche has tried to break free from the likes of Springer/Elsevier. Here are the bottlenecks:
* There are wonderful "pre-print" servers like arxiv and eprint.iacr.org. However, these do not maintain the "archival quality" document storage that is needed for academic scientific literature. In day-to-day, all researchers use these to stay informed on recent results.
But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10years old? How to guarantee that these documents are available 75 years from now? I'm sure that many of you can devise solutions to this, but they will be costly, and they will need constant labor to implement. How do you pay for this?
It is OK when 20,000 researchers in a field are downloading papers every once in a while, but what happens when every student in the world wants to read these? The bandwidth charge becomes non-trivial.
It seems like it needs to be outsourced, and some commercial entity with experience handles it.
* The tenure process is slow to change. Many academics need publications in prestigious journals with "high impact factors" in order to get tenure because the upper-level tenure committees in older institutions use these metrics to evaluate cases. These people are not stupid: it is just hard to evaluate cases across a university when you are not an expert. Instead, you assume that certain journals represent "the highest quality work" and thus use the presence of those publications to judge researchers. This means that the top papers still end up in Elsevier/Springer journals.
When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away. I wonder how it is going to work now that Elsevier's catalog won't work this way...
> In day-to-day work, all researchers use these to stay informed of recent results. But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?
Printed versions + digitally signed and timestamped PDFs. This is a solved problem in the world, at least up to the level that Springer can solve it.
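To make that concrete, here is a minimal sketch (Python, using the third-party cryptography package; the file name is a placeholder) of a detached signature over a paper's bytes. A real archival workflow would embed the signature in the PDF itself (e.g. PAdES) and add an RFC 3161 timestamp from a trusted timestamping authority, both omitted here:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_paper(pdf_path, private_key):
    """Return the paper's SHA-256 digest and a detached Ed25519 signature
    over the raw PDF bytes; changing even one byte invalidates both."""
    data = open(pdf_path, "rb").read()
    return hashlib.sha256(data).hexdigest(), private_key.sign(data)

# key = Ed25519PrivateKey.generate()
# digest, sig = sign_paper("paper.pdf", key)
# key.public_key().verify(sig, open("paper.pdf", "rb").read())  # raises if tampered
```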
> How to guarantee that these documents are available 75 years from now?
I trust MIT and Harvard to keep PDFs and printed versions available much more than I trust Elsevier or Springer to be around in 75 years.
> Many academics need publications in prestigious journals with "high impact factors" in order to get tenure because the upper-level tenure committees in older institutions use these metrics to evaluate cases. These people are not stupid: it is just hard to evaluate cases across a university when you are not an expert. Instead, you assume that certain journals represent "the highest quality work" and thus use the presence of those publications to judge researchers. This means that the top papers still end up in Elsevier/Springer journals.
I don't disagree. This is why the change and the first wave of papers will likely come from already-tenured professors, who still publish high impact papers.
> When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away. I wonder how it is going to work now that Elsevier's catalog won't work this way...
Now imagine the same situation, except you don't need your IP to be from MIT.
Just pointing out that Elsevier is ancient in business terms: its origins as a publisher go back to the mid 16th century, and the modern version of the company dates from around the 19th century. I'd be surprised if the company isn't around when I die.
In addition to publishing, they (RELX) are one of the biggest companies you've never heard of. They provide information systems to governments all over the world and span multiple market segments. I guarantee you're in a dozen of their databases right now. And that your local, state, and federal taxes all funnel into their pockets in one form or another. Along with some of the money you pay for various insurances throughout the year. When you buy a house, rent an apartment, get a job, or basically have any major event in your life, they get paid.
> Just pointing out that Elsevier is ancient in business terms: its origins as a publisher go back to the mid 16th century, and the modern version of the company dates from around the 19th century.
The 16th century publisher has nothing whatsoever to do with the current one, which shamelessly pirated and stole everything to plagiarize the prestige (so its ideas of business ethics go right back to its founding). Apparently, it worked.
It's not so much that they aren't an established company; it's that their business model has been broken/bypassed by technology. They've been reduced to being a middleman that obstructs value rather than providing it.
The only parts of the business model left are "prestige" (very fickle), "customer lock-in / inertia" (which is already going away re: OA), and lobbying governments to prop up/expand their monopoly (ever-extending/expanding copyrights, which is the one thing it doesn't seem they will ever lose on, because every other bypassed, dinosaur, broken-business-model publisher spends tons on it).
I disagree. It seems like you only know Elsevier as a publisher of journals, but that's only about 1/3 of their overall business. They (RELX) provide a lot of useful services to companies and governments.
About half their revenue and profits come from Risk and Legal services, which are not things you hear about in the news. They offer services for police, airlines, legal firms, insurance companies, accounting firms... Hell, they have an analytics tool for agricultural businesses. They also have enough money to throw around in these spaces to prevent any startups from getting large enough to be a threat.
It tends to be ignored, but the process of extracting a profit has costs, both internal and external. Sometimes the external costs imposed exceed the profit extracted by a large amount.
The life expectancy of long-lived companies is shorter than you might expect:
> Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age.
Everyone here is imagining all the technical ways to replace publishers. That's quite feasible as you and others point out. I think there is also real work needed to solve social (people) problems, for example:
- explain to stakeholders, via prepared text and other media, how your format/venue/website is different and better, and convince them that it solves a real problem they should care about
- solicit requirements from universities, funding agencies, various governments, about archiving and metadata requirements. Consider security, accessibility, long-term preservation, financial model, etc.
- respond to the questions and pushback from numerous stakeholders about problems (real or not) with your proposed solution, debate them in a polite and professional way in semi-public forums, converge on a solution that's acceptable (or at least not overly repulsive) to the key stakeholders. Deal with any PR backlash, response from existing publishers, etc.
- inform authors, potential authors, readers, journalists, university administrators, students, etc. that there is a new publishing format/venue/website and that it is well managed and has a plan to be around for a long time
- coordinate and schedule a team of people to work on this with you, to figure out policies (author plagiarism, recruiting editors if needed, dealing with potential lawsuits, bad actors, copyright and IP issues, etc.)
The same way Wikipedia and Reddit manage: by providing quality platforms with tools to address these social issues, building for longevity, and having strong community moderation.
> However, these do not maintain the "archival quality" document storage that is needed for academic scientific literature. In day-to-day work, all researchers use these to stay informed of recent results. But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old? How to guarantee that these documents are available 75 years from now?
I have never had a link to Arxiv or Biorxiv break, and I have never had difficulty finding a copy of a paper on them either, going back to Arxiv's founding in 1991. On the other hand, on a daily basis, I struggle to get a copy of a paper published often just years or decades ago from these 'archival quality' publishers like Elsevier, and they break my links so frequently that I spend some time every day fixing broken links on my website (and for new links, I have simply stopped linking them entirely & host any PDF I need so I don't have to deal with their bullshit in the future). I guess "archival-quality publisher" is used in much the same way as the phrase "academic-quality source code"...
Hey Gwern, big fan of your GPT2 work. I notice I'm surprised to hear you say you struggle daily to fix broken links to the Elsevier catalog at ScienceDirect, because the links are used by libraries all over the world & they don't have the same feedback. Would you have a few examples available for me to send to the folks responsible?
Nature does it all the time. Here's one I fixed just this morning when I noticed it by accident: http://www.nature.com/mp/journal/vaop/ncurrent/full/mp201522... (Note, by the way, how very helpfully Nature redirects it to the homepage without an error. That's what the reader wants, right? To go to the homepage and for Nature to deliberately conceal the error from the website maintainer? This is definitely what every 'archival quality' journal should do, IMO, just to show off their top-notch quality and helpful ways and why we pay them so much taxpayer money.) Oh, SpringerLink broke a whole bunch which I am still fixing, here's two from yesterday: http://www.springerlink.com/content/5mmg0gmtg69g6978/ and http://www.springerlink.com/content/p26143p057591031/ And here's an amusing ScienceDirect example: https://www.sciencedirect.com/science/article/pii/S000632071... (I would have loads more specifically ScienceDirect examples except I learned many years ago to never link ScienceDirect PDFs because the links expire or otherwise break.)
Isn't this exactly the intended use-case for the DOI?
Your first article has the DOI 10.1038/mp.2015.225, and the resulting link (https://doi.org/10.1038/mp.2015.225) properly directs to the article's present location.
DOIs link to paywalls or temporarily-unembargoed papers, have to be hunted down (many places hide the DOIs in tabs or, like JSTOR, actually bury them in the HTML source itself!), and break things like section links as well. Adding yet another level of indirection is not my idea of a solution, and it hardly speaks well of 'archive-quality publishers' that we have to resort to third parties to work around their hideously broken websites which, like Nature, go out of their way to make links not just break but become actively misleading.
To solve your immediate problem, just grab the DOI here: https://apps.crossref.org/SimpleTextQuery
They also have an API from which you can fetch DOIs in various ways.
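For example, a minimal sketch of a lookup against their public REST API at api.crossref.org (the query string below is just a placeholder):

```python
import requests

def find_doi(citation_text):
    """Look up the best-matching DOI for a free-text citation via the
    public Crossref REST API."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0]["DOI"] if items else None

# print(find_doi("Some paper title, Molecular Psychiatry 2015"))
```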
DOIs are a solution to the issue of having persistent, publisher-independent links that will always resolve, even if a journal changes publisher or goes out of business. Academia uses them because link rot is unavoidable across the web, but there must always be a link to the publication that resolves so that when someone in 2070 wants to follow a citation in the references of a work published today, they can do that. It's the same thinking that underlies people pointing to the internet archive in Wikipedia citations. It's a layer of redirection, but in a way that preserves accessibility for the long term. It's also the same thinking that underlies DNS. There shouldn't be one company that controls how to resolve an IP address to a domain name, and likewise you shouldn't have to go through one publisher to resolve a reference to a research article.
As a side note, Crossref is staffed with exactly the sort of web geeks that you would see at an Internet Archive get-together.
So I hear your frustrations, but I think you're giving DOIs short shrift.
Do Nature's spinoffs have any prestige any more? Anything in a Nature spinoff related to batteries comes across as PR Newswire level material. If that.
Journals do transfer among publishers, go out of business, etc so you shouldn't expect a direct link like that to be stable. The recommended practice is to use the DOI. Would using a DOI meet your needs?
Digital archival of PDFs weighing a few hundred KBs to a few MBs is definitely a solved problem. And there are already arXiv overlay journals out there, and platforms supporting them. Tim Gowers' (Fields medalist) blog posts on this topic are quite informative:
Highlights: $10 per submission, plus some fixed costs, including archival with CLOCKSS. No Elsevier extortion ring needed.
Impact factors are of course kind of a chicken and egg problem. Need to have enough high profile journals move off Big Publishing, or have enough high profile ones started.
> When I was a grad student at MIT, it was easy to read papers; if your IP was from MIT, every paper was 1 click away.
When I was a grad student at <institution of similar caliber>, or an undergrad at <another institution of similar caliber>, accessing papers was rather painful off campus. One either has to use EZproxy, which might decide to block you if it doesn't like your IP range (say in a foreign country), or use some godawful proprietary VPN client that I would stay the hell away from unless necessary.
Today it's much easier: practically all universities participate as Identity Providers in SAML federations, and digital libraries participate as Service Providers. So you can just enter your institutional login credentials on your university's identity provider page. The service provider receives a signed SAML assertion that, well, asserts that you belong to your university and that you are, say, a student. The most popular software in the academic field is Internet2's Shibboleth (both IdP and SP). It all works very well and has for some time.
In the country where I live, you get access to office365, (physical) books, digital libraries (including Elsevier :)) and a wide variety of other services all via your institutional login.
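As a rough illustration of the SP-initiated flow described above, here is a sketch of just the first step, the SAML HTTP-Redirect binding that sends the browser to the IdP; the endpoints and entity IDs are made-up placeholders, and validating the signed assertion that comes back (which Shibboleth SP does for you) is omitted:

```python
import base64, datetime, urllib.parse, uuid, zlib

# Placeholder endpoints; real values come from the federation's metadata.
IDP_SSO_URL = "https://idp.example-university.edu/idp/profile/SAML2/Redirect/SSO"
SP_ENTITY_ID = "https://journal.example.org/shibboleth"
ACS_URL = "https://journal.example.org/Shibboleth.sso/SAML2/POST"

def authn_request_redirect_url():
    """Build an SP-initiated AuthnRequest and encode it for the SAML
    HTTP-Redirect binding: raw-deflate, then base64, then URL-encode."""
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    request_xml = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{now}" '
        f'AssertionConsumerServiceURL="{ACS_URL}">'
        f'<saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>'
        f'</samlp:AuthnRequest>'
    )
    deflated = zlib.compress(request_xml.encode())[2:-4]  # strip zlib header/checksum
    return IDP_SSO_URL + "?" + urllib.parse.urlencode(
        {"SAMLRequest": base64.b64encode(deflated).decode()}
    )

print(authn_request_redirect_url())
```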
It also happens to be very widely deployed and supported by well-established companies, because it's an integral part of executable cross-signing. That is, this exists right now.
> Many academics need publications in prestigious journals with "high impact factors"
It wouldn't surprise me if this is a massive part of the problem. Any new system to replace Elsevier may be perfect in lots of ways, but it doesn't count as prestigious, which means everybody will still want/need to publish with Elsevier. How do you magically grant a new publishing platform this 'prestige'?
It changes when they mess up.
When you look at old institutions and powerful people, sometimes their rule ends abruptly because of scandal, bad decisions, or corruption. Bear Stearns, Enron, and Nixon are examples of this.
For a new publishing platform to succeed, the old one needs to die. For an organization built on prestige to die, it needs to be mired in a scandal, wrapped up in the political zeitgeist of the moment, that not only affects its small community but also draws the ire of the entire society. At that point a new platform will emerge, likely backed by, and inheriting its prestige from, another institution.
Edit: I realize, unfortunately, this post doesn’t give anything actionable that anyone can enact. It at least offers hope that things can change.
"Flipping" journals is an option, but doesn't happen often because it's risk for the editors with little personal benefit.
The answer of the project I volunteer for [1] is that the prestige of a journal comes from the researchers who submit to or review for it, so we can also employ their reputations without the middleman - by having them endorse works, thus attaching their names to the works instead of the journal's name.
When the prestigious expert editorial board resigns at the same time and creates a new journal. It has happened several times, e.g. Glossa for linguistics.
Also JMLR in machine learning is independent and still well regarded.
I’m also an academic. Hash each paper, then hash the hashes. Publish the result with the proceedings. After year one, include the hash of the prior year(s). Problem solved.
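A minimal sketch of that scheme (SHA-256 throughout; file names are illustrative):

```python
import hashlib

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def yearly_root(paper_paths, previous_root=None):
    """Hash each paper, hash the sorted list of hashes, and fold in the
    prior year's root so successive years form a tamper-evident chain."""
    paper_hashes = sorted(sha256_file(p) for p in paper_paths)
    combined = "".join(paper_hashes) + (previous_root or "")
    return hashlib.sha256(combined.encode()).hexdigest()

# root_2024 = yearly_root(["paper1.pdf", "paper2.pdf"])            # publish with the proceedings
# root_2025 = yearly_root(["paper3.pdf"], previous_root=root_2024) # chains back to the prior year
```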
Recently I downloaded one of my old peer-reviewed papers. The “archival” service added a spammy logo to the bottom left corner of each page.
I’ve been meaning to find the original and put it on my web page. Honestly, I might just add a list of all my papers with links to SciHub instead.
I’m allowed to post them on my personal web page according to every copyright agreement I recall signing.
Corrections and amendments are separate (but related) documents. Preventing them from being retroactively applied to the original version of the source document is the specific thing that archival-quality document storage is supposed to do (as opposed to non-archival-quality storage, which only needs to protect against data loss (as in turn opposed to a cache, which can rely on a backing store)).
If that was your point, it was very poorly made, since you appeared to be claiming that archival-quality document storage required much more than hashing papers.
Archival-quality document storage requires two things: 1, hashing papers; 2, guaranteeing that the preimages of those hashes (ie the papers) remain available despite accidental and deliberate forces toward their destruction.
Non-archival-quality document storage already requires thing 2; we just want to add more nines of reliability to those guarantees, which is a fundamentally technical endeavor that the likes of Springer/Elsevier don't particularly help with.
Signed declarations of amendment; amended papers are then added as new documents, with proof of amendment and a link to the original. Kind of like how a keyserver deals with revoked keys.
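Roughly like this, as a sketch (field names and the Ed25519 editor key are illustrative; the record would be published alongside both documents):

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def amendment_record(original_pdf, amended_pdf, editor_key):
    """Create a signed record linking an amended paper to the original,
    without ever rewriting the original document."""
    record = {
        "original_sha256": hashlib.sha256(open(original_pdf, "rb").read()).hexdigest(),
        "amended_sha256": hashlib.sha256(open(amended_pdf, "rb").read()).hexdigest(),
        "kind": "erratum",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = editor_key.sign(payload).hex()
    return record

# editor_key = Ed25519PrivateKey.generate()
# record = amendment_record("paper-v1.pdf", "paper-v1-erratum.pdf", editor_key)
```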
> But how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?
Mostly you should stop worrying about this. Other people have explained various countermeasures that could be used, which are very cheap, but mostly nobody cares.
And already today, without anybody altering anything, it is very common for papers to use misleading citations. You take a paper that found some clowns like cake, you write "Almost all clowns like cake" and you cite that paper. It's possible a reviewer will notice and push back, but very likely you will get published even though you've stretched that citation beyond breaking point. Why "hack in" to change the paper when you can just distort what it said and get away with it?
Just about the bandwidth costs: you can rent a server at Hetzner.de for 40 EUR/month with a 1 Gbit/s link. Let's say each PDF is around 100 KB; then you can serve roughly 1,000 PDFs per second. Say there are 50 million active research students in the world; then that single 40 EUR server can serve them about 10 PDFs/week each on average.
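A quick back-of-the-envelope check of those numbers:

```python
link_bps = 1e9              # 1 Gbit/s uplink
pdf_bytes = 100e3           # ~100 KB per PDF
students = 50e6             # rough guess at active research students worldwide

pdfs_per_second = link_bps / 8 / pdf_bytes        # ~1,250 at full saturation
pdfs_per_week = pdfs_per_second * 7 * 24 * 3600   # ~7.6e8
print(pdfs_per_week / students)                   # ~15 PDFs per student per week
```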
From my brief time working at Springer, seeing how their business model shifted towards services and processes aimed at enabling as many publications as possible: I think basing tenure decisions on the fact that papers were published there rests on archaic notions and is misguided.
These are valid concerns, but in 2020 they are very easy problems to solve. There's little reason why a small consortium of institutions couldn't build a very robust system to accomplish all that. Use digital signatures and distributed, mirrored storage and the problem is solved. Charge a very modest fee to members to cover fixed costs and make it free to the public. Heck, a few well-organized S3 buckets with a search engine attached would be better than a lot of what's out there today.
Not to pick on academics but the commercial publishing houses basically prey on the stubbornness of the academic community here. In the pure private sector someone would come along tomorrow and make Elsevier and others obsolete and they would go bust quickly. MIT is making the moves that might just whip something into shape to remove Elsevier’s role in the market.
On “high impact”: if the top universities in the world unsubscribe en masse from the commercial players, that will change quickly.
But I can't imagine that this is an expensive role for an organization like the Library of Congress or similar to take on.
Many countries have a national library of sorts.
Bandwidth/storage costs are limited, we're talking about PDFs.
> I wonder how it is going to work now that Elsevier's catalog won't work this way
MIT alum here. In my experience you can always request a copy directly from the author by e-mail. There is ResearchGate which aims to make this easier, but doesn't because the fundamental problem is academics don't have time to respond to every e-mail, much less every ResearchGate notification. So yes sometimes you have to ping them by e-mail about 2 or 3 times.
I think ResearchGate -- or even Google Scholar -- should add a feature to allow manuscript requests to be auto-replied with a copy of the document instead of waiting for the author to manually send a copy.
> how to guarantee that nobody hacks in and figures out how to change a few bytes in one paper that is 10 years old?
You publish hashes of all the documents. It's trivial to distribute lists of hashes. You can even put them into existing blockchains, which guarantee they won't change.
Re: hacking, preservation -- does Elsevier make reasonable assurances in this space beyond being a strawman that can be torn apart in a lawsuit? Is there a reason an open access platform could not make technological assurances that are as good or better?
This really seems like a good and fairly low-risk opportunity for universities to form a consortium of sorts to make their own publishing platform. And then maybe "impact" would be built in because they'd all be getting high on their own supply. But these institutions are strangely prone to silos despite the collaborative spirit of academia writ large so I don't see that happening.
I think there are field-specific differences. In biology / bioinformatics world the "high impact factors" journals are the norm for tenure or even confirmation of work. But highly influential computing related papers are rarely from those journals. Bioinformatics is an interesting exception because it's bridging these fields and you'll see references from both highly reputable journals and biorxiv.
> I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.
I suspect you're being rhetorical here, but just in case: your premise is wrong; they don't need to exist, in fact they need to not exist; preferably they need to die in a fire.
Then you'll have solved the problem of making research results actually available somewhere. But what it doesn't solve is the problems of a) deciding what research to read and b) deciding which researchers to hire.
Note that the current system, which relies on the brand name of the journals in which works (or an author's works) are published, is very flawed, but it's what people use, and is therefore what makes people refrain from actually publishing in the journals those universities would found.
(Disclosure: I volunteer for https://plaudit.pub, a project that aims to contribute to solving the mentioned problems to enable transitioning to Open Access journals.)
I love the idea of Plaudit; it would be interesting to tie into dblp or Semantic Scholar. As it is now, I have to watch whether a researcher tweets paper recommendations. Are you working with either?
I am sure you are aware, posting for the wider audience.
Availability is the hard part: formats, indexing, a handle so that it can be referenced. We already have an awesome model for this with the e-print archives [2..=4].
As for what to read, this is what overlay journals are for! [1]. By splitting off the mechanics of submission, serving, basic vetting, etc., any other group of people can create as many overlay journals as they deem necessary. Sorting, ranking and clustering of the research is now decoupled from getting the knowledge recorded.
This excellent article [5] linked from the wikipedia entry has the perfect description of the concept,
>>> The Open Journal of Astrophysics works in tandem with manuscripts posted on the pre-print server arXiv. Researchers submit their papers from arXiv directly to the journal, which evaluates them by conventional peer review. Accepted versions of the papers are then re-posted to arXiv and assigned a DOI, and the journal publishes links to them.
> I love the idea of Plaudit; it would be interesting to tie into dblp or Semantic Scholar. As it is now, I have to watch whether a researcher tweets paper recommendations. Are you working with either?
I'm not, unfortunately, but if you have any contacts there please do point them my way (Vincent@plaudit.pub) :)
Oh dear, the burden of organizing peer-review and of consolidating some sort of "quality" stamp (I said "some sort of") is much more expensive than "nothing".
The only people getting paid in the reviewing process are the journals that are only coordinating the reviews. Actual reviewers (aka other researchers in the same field as the paper) are working for free.
That is largely correct; however, the editors are key to maintaining this process, and especially the standards. A prestigious academic will not blindly accept a review request without having a level of trust in the process and in the coordinator.
Reviewing others' work is rather tedious and I think it will be a challenge for any fully open platform to demonstrate that it will not be a waste of time to do peer reviews on them.
Perhaps this is the real change that's needed. Getting a review structure that rewards really thorough reviews, monetarily. Those reviewers then become like YT stars where yes, everyone can review, but these reviewers are top-notch. The payment structure would depend on fees from accessing the works or fees for subscription to access.
That might finally break (or finally justify) Elsevier and their ilk.
"Rockstar reviewers" seem like a cure that's almost as bad as the disease. Some scientific fields already have a problem with groupthink, with a few well-defined and vigorously-opposed schools of thought. I would vastly prefer a broader reviewer pool to the usual suspects from the same few labs.
Everybody likes money, but I'm also not sure that's the way to go either. It would be great if reviewing directly impacted people's academic/research careers; I suspect the ability to review well is highly correlated with the ability to successfully run a research group. However, there are lots of thorny issues involving power and interpersonal relationships.
What is expensive is paying the typesetters to place the movable type in various places, and to create plates with the various graphics. Oh wait, we don't need to do that anymore.
At this point, there is no rational justification for what Elsevier is doing now except greed. They actually have some other services that make sense, but this lock on academic papers is simply a historical accident that is no longer relevant.
Per subject, per journal, the same person who edits a journal today at Elsevier could do the same thing for the same salary at a university consortium-backed non-profit.
Yes, per subject per journal but then how many non-profits do you need? How do you organize them? How do you get a coordinated best-effort, etc...
I mean: corporations do not exist in a vacuum; they (usually) DO provide benefits to society as well.
I insist: I am not trying to defend abuses, I am trying to clarify that a for-profit corporation dealing with those many editorial issues is not bad per se.
Wikipedia editing doesn't cost much. Open, public review is free anyway. We would need a prestige-setting institution; I'm sure we can come up with a substitute.
Why, though? It is not like you don't have access to high-quality, cheap talent in the form of RAs/TAs, etc. Why can't that part be done by students? It would also actually help them learn their subjects.
Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve as an important filtering and quality control mechanism.
A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied. One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors? There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?
But yes, they have become parasites, who prey on the free labor of eager young academics, take their work and sell access to it, enforce copyrights on knowledge created by taxpayer money, and bundle useless journals in with important ones so everyone has to pay more.
It's in the public interest for academic fields and the universities to come up with a reasonable alternative.
> Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve as an important filtering and quality control mechanism.
No they don't. Their editors do, not the entire organization, and really it's the selected (volunteer) peer reviewers who do.
> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied.
Agreed.
> One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?
That's why I think universities should be the founders. The top professors in a certain field can nominate a good editor, who will be paid full-time.
> There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?
Prestige comes from being relevant and innovative. Also, who said this has to be a new journal? Why not convert an established one?
The majority (70% or so) of submissions are desk-rejected without even being sent for review, and the ability to do that well is something that's learned over time with extensive detailed knowledge of the particular field served by the journal. Note that there are more kinds of editors than just academic editors, too, even at places like PLOS & eLife.
> "A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied."
Establishing reputation is the central challenge for a lot of the internet. Sorting spam from mail, sorting useful search results from SEO, sorting legit programs from malware on app stores.
"Let's just have a small handful of people manually review everything" is not a terrible first approach! It is the naive solution, and will work if you don't have to scale. It even worked for search for a couple years.
And you might argue that it's ok for journals to keep doing that because they don't have to scale. They don't have to review, rate, and publish everything good. They can have a very, very tiny output and it's ok.
But there is some cost to rate limiting scientific output.
So I'm surprised there hasn't at least been a good competitor incorporating what we've learned from other domains. It wouldn't be the same, but at least trying to use some things like citation counts and reader behavior for an initial guess at what deserves review.
All the arguments that "we need a small group of professionals curating these" lose a little weight in a replication crisis.
If you really wanted to try this, you might want to go after low hanging fruit. Someone should make a nutritional science journal, using purely algorithmic data to score proposals. Not much to lose there.
> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied.
How does this follow? "Open" doesn't mean "anyone can publish", it means "anyone can read".
Funding for editors and webhosting should come from the universities themselves. Replace Elsevier with a nonprofit consortium funded directly by universities, and a lot of these problems just go away.
> One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?
I've been a reviewer and editor for various IEEE and other engineering publications and have never been paid. Of course funding for editors is helpful, yet it may be like open source where some are willing to put in work for free.
> Well, despite the parasitic nature of the modern commercial journal world (and originally they did come from more benevolent aims, but got consumed by corporations) -- they do serve as an important filtering and quality control mechanism.
I generally agree with you here
> A 100% open and free journal cannot achieve selectivity without having some judgement and bias applied. One that's hard for an intrepid band of volunteers to recreate without funding and full time commitment. Who will be the editors?
But the editors in the majority of journals are already volunteers. They might get some minor amount of money for their work (we are typically talking maybe $100 a month max), but that's it. The only journals that have full-time editors are the highest-impact journals like Nature and Science, but it shows again and again that they are not really domain experts and are not necessarily acting in the interest of science. I have actually heard a Nature editor say "our business is to sell journals, not to publish the best science".
> There's also the problem of how to create a new journal that has the prestige of an old established one. Which new journal will we select to have the prestige?
Well if the big universities and funding agencies would push, this would happen quite fast.
> But yes, they have become parasites, who prey on the free labor of eager young academics, take their work and sell access to it, enforce copyrights on knowledge created by taxpayer money, and bundle useless journals in with important ones so everyone has to pay more.
> It's in the public interest for academic fields and the universities to come up with a reasonable alternative.
> I honestly don't understand why JSTOR, Elsevier and others like them still need to exist.
I'm going to guess they exist because, despite decrying these companies, scientists still want to snag that spot in Nature for the same reason that writing a front-page New York Times article is considered a bigger deal than publishing the same story on your blog, even if the content is identical.
I believe peer review should be supplemented or even replaced by social review methods, where not only arbitrary reviewers but the whole scientific community has a chance to judge, discuss and comment on any paper. Online.
The logic and safeguards may not be that easy to create in the first place, but in my opinion it would be worth the effort eventually!
We will never be all experts on all subjects. Peer review is by peers, not laypersons.
From the STEM perspective, a democratic solution would be a disaster. Two immediate reasons come to mind: the loss of peer expertise in the noise, and brigading.
I wrote "scientific community".
What I meant is the "relevant scientific community", it wasn't evident apparently.
Also the selection of the reviewers is just partially depends on the expertness already, several other aspects affect it quite a lot. Not to mention that why a certain selection should be the one why not an other, why not the relevant community chooses the reviewers then?
Just because not every details are fined carved the idea should not be dropped.
(I was participating in certain peer review processes where I was an almost outsider and very far from being an expert, I have little conviction that the current one works well)
Imagine that you have voting rights to review a paper (a la Slashdot, where random people got the opportunity to tag something as insightful, interesting, etc.).
Now imagine that there comes an article in X subject (say, Agent Based Modeling).
When you "vote" in that article, the "dimension" of your vote is proportional to your "impact factor" in that subject (i.e., say you published 20 articles in ABM and you got 10 "votes" on them, then each of your votes count as 10 votes). On the contrary, if your impact factor is negative, your vote doesn't count. That way people that are considered "knowledgable" in their subject, will be able to peer-review other articles.
Another method would be something like what StackOverflow has: initially everybody gets 1 vote (or 10, or 1 every month, or whatever), and you "transfer" it by voting for an article (maybe to the 1st author, or evenly distributed); because the "votes" are scarce, people will care about them. And people whose articles are most voted can themselves vote more.
There are plenty of systems that could work. And the beauty of it is that they could be "layered" on top of Arxiv with a Chrome extension or similar.
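A minimal sketch of the first (impact-weighted) scheme, with made-up reputation numbers:

```python
def weighted_score(votes, reputation):
    """votes: list of (voter_id, +1 or -1); reputation: voter_id -> per-subject score.
    Voters with non-positive reputation in the subject carry no weight."""
    total = 0.0
    for voter, vote in votes:
        weight = reputation.get(voter, 0)
        if weight > 0:
            total += vote * weight
    return total

reputation = {"alice": 10, "bob": 1, "carol": -3}
print(weighted_score([("alice", +1), ("bob", -1), ("carol", +1)], reputation))  # 9.0
```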
There seem to be more effective ones and less effective ones.
Unfortunately sometimes economic incentives cause them to be gamed against real quality. E.g. think of shill online shopping reviews, and circular voting to boost reputation.
In academic terms, that would be "citation rings" to promote their rankings. Like web rings, there are nice and friendly ones, and there are heavy, spammy clones of sites. I would expect the rise of junk-article, plagiarism-from-elsewhere, machine-learning-assisted-plausibility mutual citation rings if there were no good controls to detect and prevent that sort of thing.
Hm, perhaps HN should be closed as well and everyone should have their own blog instead?... Or should we ask the peer reviewers to discuss and judge papers on their blogs instead?
But you are aware of journal articles as well, aren't you?
And are you aware of how standardized and comprehensive the peer selection is?
(It is not, in general; there are huge variations.)
> But you are aware of journal articles as well, aren't you?
Yes.
> And are you aware of how standardized and comprehensive the peer selection is? (It is not, in general; there are huge variations.)
Yes.
So what?
My point was you can accept whatever papers you want. You can't make or stop someone else accepting. Seems like a fine situation to me? Anyone can recognise the papers they want to.