
It's simple:

We stop publishing in papers, and instead adopt smaller chunks of our work as the core publishing units.

Each figure should be an individually published entity which contains the entire computational pipeline.

Figures are our observations on which we apply logic/philosophy/whatyouwannacallit. Publishing them alongside their relevant code makes the process transparent, reproducible and individually reviewable, as it should be.

We can then "publish" comments, observations, conclusions etc on those Figures as a separate thing. Now the logic of the conclusions can be reviewed separately from the statistics and code of the figure.
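A minimal sketch of what such a self-contained "figure unit" might look like: the data, the analysis code, and the rendered figure bundled with a manifest of content hashes, so a reviewer can verify that the published figure really comes from the published pipeline. (The directory layout and file names here are hypothetical.)

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash of one file in the figure unit."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(unit_dir: Path) -> dict:
    """Hash every file in a 'figure unit' directory (data + code + figure)
    so the figure is verifiably tied to its computational pipeline."""
    return {
        str(p.relative_to(unit_dir)): sha256(p)
        for p in sorted(unit_dir.rglob("*")) if p.is_file()
    }

def publish(unit_dir: Path) -> None:
    """Write the manifest alongside the unit; reviewers re-run
    build_manifest() and compare to detect any drift."""
    manifest = build_manifest(unit_dir)
    (unit_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
```

Comments and conclusions published on top of the figure could then cite the manifest hashes, keeping the review of the logic separable from the review of the pipeline.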



A comparable solution would be for all involved to value all research, not just the ground-breaking, earth-shattering type.

As it is, research that yields a "failure" is buried. That means wheels are being reinvented and re-failed. That means there's no opportunity to compare similar "failures", be inspired, and come up with the magic that others overlooked.

Unfortunately, I would imagine, even if you can get researchers to agree to this, the lawyers are going to have a shit fit. Imagine Google using an IBM "failure" for something truly innovative.


What you are proposing sounds a lot like the concept of the least publishable unit.

https://en.wikipedia.org/wiki/Least_publishable_unit


> Each figure should be an individually published entity which contains the entire computational pipeline.

I agree in principle. But, for the experimental sciences, we need better publication infrastructure to make this practically possible.

For example, consider a figure that compares, between several groups, the mechanical strain of tensile test specimens at a given load. Strain is measured by digital image correlation of video of the test. Some pain points:

1. There are a few hundred GB of test video underlying the figure. Where should the author put this so it remains publicly accessible for the useful lifetime of the paper? How long should it remain accessible, anyway? The scientific record is ostensibly permanent, but relying on authors to personally maintain cloud hosting accounts for data distribution will seldom provide more than a couple of years' availability.

2. Open data hosts that aim for permanent archival of scientific data do exist (e.g., the Open Science Framework), but their infrastructure is a poor match for reproducible practices. I haven't found an open data host that both accepts uploads via git + git-annex or git + git LFS and has permissive repository size limits. Often the provided file upload tool can't even handle folders, requiring all files to be uploaded individually. Publishing open data usually requires reorganizing it according to the data host's worldview, or publishing only a subset of the data, either of which breaks the existing computational analysis pipeline.

3. Proprietary software was used in the analysis pipeline. The particular version of the software that was used is no longer sold. It's unclear how someone without the software license would reproduce the analysis.

Finally, there's the issue of computational literacy of scientists. In most cases, the "computational pipeline" is a grad student clicking through a GUI a couple hundred times, and occasionally copying the results into an MS Office document for publication. No version control. Generally, an interactive analysis session cannot be stored and reproduced later. How do we change this? Can we make version control (including of large binary files) user-friendly enough that non-programmers will use it? And make it easy to update Word / PowerPoint documents from the data analysis pipeline instead of relying on copy & paste?
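On the last point, the copy-and-paste step can at least in principle be replaced by generating report text from the pipeline's results. A toy stdlib sketch of the idea, using a plain-text template with named placeholders (the result values here are hypothetical; real Word or PowerPoint files could be filled the same way with a library such as python-docx, not shown here):

```python
from string import Template

# Hypothetical: results produced by the analysis pipeline.
results = {"mean_strain": 0.0213, "n_specimens": 12}

# A report template with named placeholders instead of hand-pasted numbers.
template = Template(
    "Across $n_specimens specimens, the mean strain at peak load "
    "was $mean_strain."
)

# Re-running the pipeline regenerates the report text automatically,
# so stale copy-pasted numbers can't survive an updated analysis.
report = template.substitute(
    n_specimens=results["n_specimens"],
    mean_strain=f"{results['mean_strain']:.3f}",
)
```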

If any of these pain points are in fact solved and my information is out of date, I would be thrilled to hear it.


1 and 2: I like IPFS for this; check it out.

3: Analysis that uses proprietary software is marked appropriately as second class.

> computational literacy of scientists

Welp...
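The IPFS suggestion above rests on content addressing: data is stored and retrieved by the hash of its bytes, so the same data has the same address no matter who hosts it. A toy stdlib sketch of that idea (not the real IPFS API):

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: objects are keyed by the SHA-256
    of their bytes, so identical data always gets the same address."""

    def __init__(self):
        self._objects = {}

    def add(self, data: bytes) -> str:
        """Store data; the returned 'address' depends only on the bytes."""
        cid = hashlib.sha256(data).hexdigest()
        self._objects[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        """Retrieve data by its content address."""
        return self._objects[cid]
```

For a figure's underlying dataset, this means the paper can cite a permanent address that any mirror can serve and any reader can verify.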


I have two words for you: Ted. Nelson.


Can you expand on this?


I can’t speak for GP, but Nelson invented hypermedia/hyperlinks and had a vision for the future that included documents including other documents. All of that seems pretty compatible.


Similar to reproducible builds, or Nix.

Research just jumped onto Jupyter notebooks, which gets it halfway there; someone needs to help with the remaining step.


The WWW was created to publish information at CERN, but we can use it in other contexts too ;) http://info.cern.ch/Proposal.html



