This seems like a great idea, but I think it could be vastly helped by some examples of what the format should be, what kind of software is acceptable, etc.
For instance: When I was in school, I wrote a MATLAB program to simulate grid cells in a rat hippocampus. Is that acceptable? I've also written numerous packages for Django - are those? And so on.
They have a list of requirements on the about page that basically boils down to "any open source software that you authored which has a research application". There's more detail about the requirements in the reviewer guidelines, which mostly concern development best practices: documentation, a test suite, contributor-friendliness, etc. They also give Fidgit as an example paper on the about page.
For your particular examples, I would say the rat hippocampus simulation almost certainly fits their guidelines, and the Django packages probably don't, unless they're doing some sort of scientific computation or data visualization.
This journal is also encouraging and rewarding good software development practices, such as documentation, licensing, keeping the code in version control, and testing. These practices are sadly not common in scientific software, partly because there is little academic reward for them.
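To make that concrete, a repository following those practices might look roughly like the sketch below (the layout and file names are my own illustration, not something the journal prescribes):

    mysimulator/
        LICENSE       (an OSI-approved open source license)
        README.md     (installation and usage documentation)
        src/          (the code itself, tracked in version control)
        tests/        (an automated test suite)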
This is a very important observation. There are plenty of academic "we built software X" papers; there's even a methodology specifically for these types of papers (design science, Hevner et al.). But almost all of them never mention how to get the software, how it's licensed, etc. For all I know, that software might not exist at all.
Since writing the paper is expected to take no more than an hour (given the software is already written), I wonder if they're worried about receiving a flood of submissions. Perhaps it's less of a problem, since reviewing papers of that length should also take much less time?
I feel it may be a problem for them since it looks like they try to reproduce the program's setup/workflow, which is (in my experience) more than bioinformatics reviewers usually do:
> Authors are strongly encouraged to include an automated test suite covering the core functionality of their software.
> OK: Documented manual steps that can be followed to check the expected functionality of the software (e.g. a sample input file to assert behaviour)
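For instance, a minimal automated test suite in the sense they describe might look like the sketch below (decay() is a toy stand-in for a package's real core routine; nothing here comes from their guidelines):

    # test_core.py - run with pytest
    import numpy as np

    def decay(x, rate=0.5, steps=1):
        """Toy stand-in for a scientific routine: repeated exponential decay."""
        for _ in range(steps):
            x = x * (1.0 - rate)
        return x

    def test_decay_known_input():
        # A sample input with a known expected result acts as a
        # regression check on the core functionality.
        out = decay(np.ones(10), rate=0.5, steps=2)
        assert np.allclose(out, 0.25)

    def test_decay_preserves_shape():
        out = decay(np.zeros((3, 4)))
        assert out.shape == (3, 4)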
You can see the checklist reviewers go through in any of the issues here:
Most scientific software probably doesn't already have all these components in place. Still, a flood of submissions would be a great problem to have!
If the only point of this is credit and reputation, then I find it useless, as there are a variety of ways for one to get credit. The number of citations means nothing compared to how many users are actually using the software every day.
Also, a paper is inherently different from a piece of open source software - a paper sets out to present facts about something observed in nature; software, on the other hand, is used to accomplish a specific task.
This is designed specifically for researchers in academic institutions, which have quite narrowly defined mechanisms for 'getting credit' that can be formally applied towards career advancement, tenure review, etc. Within those mechanisms there are, by definition, not 'a variety of ways for one to get credit'. Changing those evaluation mechanisms to better recognize software development for its "inherent" or naive value would be ideal, but that is not a practical option for non-tenured researchers who need to work within existing advancement systems. Anyone outside those systems would not find this form of publication useful at all - which is exactly why it is self-described as an academic 'hack' and not intended for general-purpose software development.
The problem with being so openly self-described as merely a way to get credit is this: you're not evaluated on how many publications you get. You're evaluated on how many top publications you get.
There are a ton of 2nd-tier and 3rd-tier journals out there that are legitimately peer reviewed but won't help you get tenure, academic positions, etc. This is a new journal, so nobody will have heard of it (and therefore almost everyone will assume it's a 3rd-tier venue). And I don't see that changing for a journal so narrowly focused on publishing papers about open source software.
This is correct. In my country, specifically, publications have to appear in journals in the first tertile of the corresponding JCR category to receive full credit.
I believe the idea is to create a minimal citeable entity for a piece of software. It's currently hard to cite software in journals in any coherent way - this gives a DOI and a human-friendly landing page, as well as a system for conducting basic peer review of (scientific) software.
I guess such things are not very widespread, but there is an Astrophysics Source Code Library[0] that is citeable. I am not sure how hiring/tenure/promotion committees weigh those citations, though.
> We built this journal because we believe that after you've done the hard work of writing great software, it shouldn't take weeks and months to write a paper about your work.
and from the original post:
> If an author of research software is interested in writing a paper describing their work then there are a number of journals such as Journal of Open Research Software and SoftwareX dedicated to reviewing such papers. In addition, professional societies such as the American Astronomical Society have explicitly stated that software papers are welcome in their journals. In most cases though, submissions to these journals are full-length papers that conflate two things: 1) A description of the software and 2) Some novel research results generated using the software.
> Also, a paper is inherently different from a piece of open source software - a paper sets out to present facts about something observed in nature; software, on the other hand, is used to accomplish a specific task.
Curious why you think this journal won't do this - and what you make of the Journal of Mathematical Logic!
This is exactly the point: software production and maintenance are irrelevant for academic credit, so researchers have no incentive to make software. This journal explicitly says about itself that it is an ugly hack for academic software writers to get academic credit: a citable entry in a peer-reviewed journal.
The relationship between the algorithms employed to solve a problem and their actual ability to solve that problem is a fact about nature that you can observe and write papers about.
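As a toy illustration of my own (nothing from this thread): how quickly an algorithm converges on a problem is an empirically observable fact, measurable like any other:

    # Observe the convergence of the Babylonian method for sqrt(2):
    # the error shrinks roughly quadratically with each iteration.
    def babylonian_sqrt(a, iters):
        x = a
        for _ in range(iters):
            x = 0.5 * (x + a / x)
        return x

    for iters in range(1, 6):
        err = abs(babylonian_sqrt(2.0, iters) - 2.0 ** 0.5)
        print(iters, err)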