One of the big changes was turning many basic tools into lazy iterators instead of greedy lists (map, filter, range, etc.). I remember having to teach my scientist/engineer colleagues not to just loop over an index variable when working with certain datasets, and this was a major source of friction because they were used to Matlab and other languages where direct indexing is the primary idiom. Lazy evaluation is great for many things, but it is not necessarily common knowledge among people who use coding as a means to an end.
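A quick illustration of the trap (Python 3 semantics):

    # In Python 3, map/filter/range are lazy: values are produced on demand
    squares = map(lambda x: x * x, range(5))
    print(list(squares))   # [0, 1, 4, 9, 16]
    print(list(squares))   # [] -- the iterator is already exhausted

    # The classic surprise for people used to index-based languages:
    evens = filter(lambda x: x % 2 == 0, range(10))
    print(2 in evens)      # True, but this consumed the iterator up to 2
    print(list(evens))     # [4, 6, 8] -- 0 and 2 are gone

    # If you need multiple passes, len(), or indexing, materialize it:
    evens = list(filter(lambda x: x % 2 == 0, range(10)))
    print(evens[0], len(evens))  # 0 5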
Additionally, because so many of the core libraries were 32-bit only or Python 2 only, you ended up having to either write your own version of them or just go back to 32-bit Python 2. NumPy in particular (and therefore, transitively, anything halfway useful for science and engineering) took several years to stabilize, and I have many memories of digging into things like https://www.lfd.uci.edu/~gohlke/pythonlibs/ to get unofficial but viable builds going on the Windows machines we used. Dealing with dependencies was enough of a pain that I ended up rolling my own ndarray class; it was horrendous but just good enough to get the job done.
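Not the actual class, but a sketch of the flavor of that kind of stopgap:

    # Hypothetical "just good enough" ndarray stand-in: row-major storage
    # in a flat list, shape-aware indexing, naive elementwise ops.
    # Horrendously slow compared to NumPy, but it unblocks the work.
    class TinyArray:
        def __init__(self, shape, data=None):
            self.shape = shape  # (rows, cols)
            n = shape[0] * shape[1]
            self.data = list(data) if data is not None else [0.0] * n

        def __getitem__(self, idx):
            r, c = idx
            return self.data[r * self.shape[1] + c]

        def __setitem__(self, idx, value):
            r, c = idx
            self.data[r * self.shape[1] + c] = value

        def __add__(self, other):
            # elementwise add; no broadcasting, no dtype handling
            assert self.shape == other.shape
            return TinyArray(self.shape,
                             [a + b for a, b in zip(self.data, other.data)])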
It partly arises as a mechanism to pay per case, not just per visit. For example, if we put all payment for a breast cancer treatment into the first visit, there is now an incentive to either complete the treatment (ideal) or keep the patient from coming back (less ideal). You can use these follow-up codes to measure rates of return visits, which can be related to quality of care metrics.
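As a rough sketch of what that measurement could look like (the record fields and the follow-up code set here are invented for illustration; real claims data is far messier):

    # Hypothetical return-visit rate computed from billing records.
    FOLLOW_UP_CODES = {"F001", "F002"}  # placeholder follow-up visit codes

    def return_visit_rate(claims):
        """claims: iterable of dicts with 'patient_id' and 'code' keys."""
        patients, returned = set(), set()
        for claim in claims:
            patients.add(claim["patient_id"])
            if claim["code"] in FOLLOW_UP_CODES:
                returned.add(claim["patient_id"])
        return len(returned) / len(patients) if patients else 0.0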
And yet it is one of the few places where it is mandatory to have your name on the publicly facing postbox in order for mail to arrive. The privacy sentiment here is all over the place.
Another thing is the legal requirement to put your full name and home address on any website you own privately. Everyone I ask about this contradiction says something like, "Well, it is like it is, so it probably has to be like that."
A friend of mine founded a company in this space (https://www.joincake.com/). From what I know it is more on the side of helping you build your own solution, rather than delivering all of the technical aspects.
The technical integration would take significant effort to work out: do you need to run a credential management service? Can you get away with just having account recovery codes, specifying beneficiaries for specific accounts, etc.? At first glance it feels like a massive amount of integration work would be needed.
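To make the "recovery codes plus named beneficiaries" option concrete, here is a back-of-the-envelope data model. Everything in it is hypothetical, and it deliberately ignores the hard parts (encrypting codes at rest, verifying death, authenticating the beneficiary):

    from dataclasses import dataclass, field

    @dataclass
    class AccountBequest:
        service: str                 # e.g. an email or banking provider
        beneficiary: str             # who should receive access
        recovery_codes: list = field(default_factory=list)  # sealed, ideally

    @dataclass
    class DigitalEstate:
        owner: str
        bequests: list = field(default_factory=list)

        def for_beneficiary(self, name):
            # everything earmarked for one person
            return [b for b in self.bequests if b.beneficiary == name]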
This is quite common in scientific research. The typical algorithm I follow is to reframe the problem in the language of different fields and see whether there is a more useful way of tackling it in that framework. There are always some leaks in the abstraction/translation but often by reframing the problem you find a good-enough solution.
A thing I've often fantasized about is some sort of mega-conference where top luminaries from every academic field get together and hammer out a global namespace of jargon, resolving all collisions so that no longer can a term mean eight different things in eight different fields.
Imagine the global boost in productivity and knowledge-sharing...
I fantasize about this sometimes for scientific purposes. Part of the challenge is keeping the barrier to entry low enough that people stay excited and creative; the opposite is the world of regulated industries, where your way of thinking is heavily shaped by the legal framework. Periodic synchronization helps, and the best I have seen in person is at the Gordon Research Conferences (GRC), since these tend to be narrow enough to reach consensus but still broad enough to give a little bit of perspective.
They also manage the review process (screen papers, find reviewers, communicate with authors and reviewers), make sure that papers are accessible long-term (decades to centuries), typeset manuscripts, print paper issues, and several other tasks.
But yes, reviewers are not paid. A half-decent review can easily take 3-4 hours, and a typical paper gets 2-3 reviews, so something like 6-12 hours of PhD-level labor which are not currently compensated.
> They also manage the review process (screen papers, find reviewers, communicate with authors and reviewers),
They do not. That's the editor's job, and Elsevier journal editors are unpaid volunteers -- as the Guardian article correctly pointed out.
> A half-decent review can easily take 3-4 hours, and a typical paper gets 2-3 reviews, so something like 6-12 hours of PhD-level labor which are not currently compensated.
It generally takes much, much longer than that.
The average article in Neuroimage is 14 pages long (~10,000 words), not counting references and declarations. For pure reading of single-domain technical text, the chart says 10k words take at least 60 minutes. And you will definitely need to read the article more than once while doing peer review.
Then you'll need to look up references, do some sanity check calculations, evaluate artifacts if available, scrutinize figures, and unless you're up-to-date on all recent developments of the field, probably read at least one more article on prior work just to understand what's going on. If unusual methods were used in the evaluations, you have to understand these too, so better add a few more days.
And you still have to actually write the review: at that point, you only have some notes and scribbles! In neuroscience, we're talking about at least 2-3 days of FTE work. And that's only the first round of reviews. About 80% of articles go through multiple rounds of review-response, according to Neuroimage's own statistics.
In other fields, review times might be significantly longer (weeks, or in mathematics even months, instead of days).
Elsevier's profit margins are 40%, by their own admission. They could afford the cost of compensating this labor. But why would they?
I think it would be interesting to actually evaluate review time. I've seen journal and conference paper reviews go all over the place. Some people blast them out in an hour. Other people dig deep and spend way more time. I don't think I have ever heard of a reviewer sanity checking calculations.
Ultimately, the taxpayer funds the researchers' salaries, and the researchers spend some of their research time on peer review instead of advancing their research.
But these hours dedicated to peer review are not the issue: something scientifically useful gets done. It's the tax dollars that go to Elsevier, in exchange for literally nothing*, that we should worry about.
* sometimes they charge you, the taxpayer, in exchange for access to the research output
Technically, no one; in practice, it gets done by both faculty and grad students/postdocs paid by universities, as something that is expected of them in order to keep drawing their wages for their primary work.
The only thing Elsevier manages is their website. The review process is managed by academics who also work for free. Typesetting is done by the authors. Paper printing is basically extinct.
Typesetting is done by authors (in some fields) and then completely redone by Elsevier staff. The value they add is questionable at best, but they certainly spend a lot of effort making the paper conform to their standards.
...or detract, one might add. I know of a researcher who has been meticulous about editing and wording, only to have the submissions "corrected" for publication to use the wrong word (existing, and meaning something different) for key concepts.
I'm guessing incompetence combined with standard dictionary computer spellchecking at the publisher's end.
Which the author then has to go back and fix because they don't understand what they're typesetting. Some journals provide the LaTeX template too so they don't even do THAT step.
As far as I am aware, they don't necessarily even do the final editing themselves; at least with the APS journals, they outsource the final typesetting to another company.
It's been a few years and I know it depends on the exact journal, but I thought Elsevier had paid editors? My memory is that society journals tend to be unpaid but that Elsevier, Nature, Frontiers, etc. had paid editors.
Not true. Every (decent) journal has someone who goes through the typesetting process again, which is very time consuming. Whether that, plus the review process, website maintenance, etc., is worth £2,700 is a separate question.
> But yes, reviewers are not paid. A half-decent review can easily take 3-4 hours, and a typical paper gets 2-3 reviews, so something like 6-12 hours of PhD-level labor which are not currently compensated.
They aren't paid, but that doesn't mean they aren't compensated; those reviewers will later submit papers which will receive reviews, for which they don't have to pay.
When I published with a journal, we had basically direct contact with the reviewers and little intermediation. I did all of the typesetting; the journal just ran my LaTeX files. The long-term accessibility of papers via journals is largely overblown, and it isn't even actually paid for by the submission, since that is covered by subscription fees.
You'd be surprised how many scientific papers published in reputable journals are not "properly reviewed". In many cases there simply isn't enough time or knowledge to go through every detail of a paper. One example: if a paper says go to a GitHub repository to look at our source code, guess how many people actually review the code as part of the peer review process? Very few. There could be serious bugs in the code that affect the results, or the code might be difficult to set up and run, but nobody would notice for a long time.
After reading my previous comments, why would you think that surprises me? I know for a fact that academics of the highest reputation do very poor reviewing. The cost of a low-quality review is basically zero for the reviewer, while the cost of a high-quality review is high. The consequences of this are not surprising.
It's definitely been my experience that reviews are hit-or-miss. Sometimes you get 1-2 reviews where there are valid points that need to be addressed, sometimes you wait 3 months for the reviews to come back, only to find that reviewers 2 and 3 dropped out and reviewer 1 basically said "looks fine".
Part of the challenge of peer review is that it comes _after_ the paper is written, when the cost of changing anything is quite high. I find much more value in peer review when planning experiments, and do so informally with my peers.
Great reader; I have been using it since Google Reader went away.