
People propose that plan all the time. Here's why it can't work:

While programmers like coding, researchers hate writing (they like doing research!). And while there are relatively few OSS projects, there are millions of research papers (yes, millions! http://scholar.google.com/scholar?q=research&hl=en&b...)

There are wiki-style projects out there for each major domain. For example, quantum computing has http://www.quantiki.org. These sites are great, but they will never replace journal articles, because they do not handle history or citations well. Imagine if all we had was Wikipedia to tell us about the Church-Turing thesis. Papers like that give us an insight into the scientists' minds that nothing else can.




It works now in physics: arxiv.org

The reason it works is that your peers read your work, and either refute or cite you. Remember -- everybody knows everybody in a subfield, or at least everybody's advisor. It helps when there is math involved, which is either correct or not. If somebody random submits something to arXiv, people know neither him nor his grad school, figure he's probably a crackpot, and don't bother to read it (though occasionally someone from nowhere gets a real result, which would NEVER be seen in the closed peer review process).

EDIT: Also you attach your name to a finished product, not an unattributed ongoing wiki page. Your reputation is everything, so you win or lose based on the articles.

One of the problems in social science academia is that 98% of the articles are completely worthless; they serve only to get a professor a promotion based on an index of citations times journal prestige. The smaller state colleges require their junior faculty to "publish", but those faculty don't have enough to say to get into real journals, so the publishing houses fill the need with "Journal of Rural Sociology" etc.


It's true that arXiv works well. Do remember, though, that even arXiv does not allow posting by just anyone. If you're not affiliated with a major research institution, you need to be vouched for by current members.


IANA Physicist, but I can't imagine any physics journals feel threatened by arXiv. When someone wins a Nobel prize for work that was published exclusively through arXiv, then you might have a point.


I Am A Physicist, and researchers do use arXiv quite often: for pre-publication papers, or for write-ups that can be iterated on (unlike journals, arXiv can store multiple versions of the same paper). It is not peer reviewed, but it is good for getting feedback on things you are doing.

Of course, publications from the "big guns" go straight to the big journals, and their peer review is more like a rubber stamp if you have the right name on the author list.

So the difference only matters to the bureaucrats, who want to see "proper" papers before letting you move forward; the people doing the actual research only care whether your experiment and explanation are solid, and that can be published anywhere.

(And I'm totally for open journals -- even better, use CC-BY on them...)


arXiv acts as a supplement to journals, providing a place to host preprints. It does not compete with those journals.


Actually, I didn't mean using the OSS world as a model for the original research, but for the gatekeeping aspect. That is to say, conducting all work in a public and open manner, and letting the community of interest sort out the quality of the results.

There are not actually relatively few OSS projects -- as with papers, there are just relatively few that are interesting or important. For example, GitHub alone claimed to be hosting 2 million repositories. I don't know what fraction of those are private, but... if you go looking for OSS projects, there are a lot of them out there.

Most of them -- like most papers -- are not of much interest to anyone (possibly not even the author), and very few are of interest to everyone. I am able to navigate the landscape, though, by observing which projects attract the interest of others, whether through contributions or communities of interest such as this one.

I guess what I am saying is, I like the community acting as a gatekeeper. I think it's more reliable, and it has more brainpower to spend on the job, than a small number of dedicated editors. It's my read that this is the way research works anyway -- when I was in grad school for mathematics, a paper wasn't a proof until most of us had read it, discussed it, and critiqued it among ourselves. My community, not the journal, was the gatekeeper. I think things would be healthier if that were more open and obvious.


I think that an issue with that method is the barrier to entry. To make a change to an OSS project, you need as little as a text editor. Code something up, try a few things, commit (and add your name to the list!).

To make a change to someone else's article, all you can do is question their methods, and question them again more sternly. It's not trivial to repeat experiments with scientific rigor, but it is (generally) trivial to test out a code change or two.

Another issue is experience -- it takes a lot of education to get someone to the point of being able to apply scientific rigor; there are a lot of seductive pitfalls along the way. It's not really something you can be 'amateur' about. That's not really the case with a lot of OSS, where you can contribute successfully without understanding important but subtle issues.


Well, I certainly don't agree with your description of what software development entails. You need a text editor, but you also need to be familiar with a whole ecosystem of tools, libraries, platforms, procedures, and general technical know-how. You need to commit your changes; you also need the project owner to think what you've done is worth merging into the main branch. The suggestion that it's something you can "be amateur about" is making me giggle.

If anything, it's development behind closed doors that you can "be amateur about". Work done in public is too embarrassing to do a bad job on, and a bad job is too unlikely to attract any interest.

But that's kind of beside the point. Even if the expertise is rarer in research, I don't understand why that matters. Wouldn't that be an argument for pooling the resources you do have? I mean, if there are only four people working in a field, wouldn't it be better if they all could critique a new work, and if they were able to watch it progress and influence its direction (maybe stopping mistakes early!), and if the judgement of a new work came with their written opinions on its worthiness attached?


Turns out my original response was eaten by the daft HN 'unknown or expired link' thing.

But in short: it seems you misunderstand me. They both require understanding, but it is easier to effect changes on an OSS project than it is to make an author go back and rewrite or retest their article.


Reputation systems could help.

Then you're simply increasing granularity... instead of judging papers by which journals considered them worthy, judge by which individuals considered them worthy... and if you don't know the individuals involved, the reputation system is there to help you.

It'd have to be something similar to PageRank, not a simple vote-count method. High-reputation individuals should be able to contribute more to the reputation of other individuals.

Another idea would be to apply PageRank to article citations, though this would only be useful for older papers. A sketch of what I mean follows.
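
Here's a minimal sketch in Python: plain PageRank power iteration over a directed graph where an edge u -> v means "u endorses (or cites) v". The names and the toy graph are made up, just to show the key property: an endorsement from a high-rank node is worth more than one from a nobody.

    # Hypothetical toy example of PageRank-style reputation scoring.
    # An edge u -> v means "u endorses v" (a positive review or a citation).
    endorsements = {
        "alice":    ["bob", "carol"],
        "bob":      ["carol"],
        "carol":    ["alice"],
        "mallory":  ["mallory2"],  # a mutual-admiration pair: nobody outside
        "mallory2": ["mallory"],   # endorses them, so they keep only their
    }                              # share of the baseline (teleport) mass

    DAMPING = 0.85
    ITERATIONS = 50

    nodes = list(endorsements)
    rank = {n: 1.0 / len(nodes) for n in nodes}

    for _ in range(ITERATIONS):
        # Everyone gets a small baseline; the rest flows along endorsements,
        # weighted by the endorser's own current rank.
        new_rank = {n: (1.0 - DAMPING) / len(nodes) for n in nodes}
        for source, targets in endorsements.items():
            if not targets:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += DAMPING * rank[source] / len(nodes)
            else:
                for t in targets:
                    new_rank[t] += DAMPING * rank[source] / len(targets)
        rank = new_rank

    for name, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{name:9s} {score:.3f}")

A real system would need a tuned damping factor and some sybil resistance, but the core loop really is that short.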

The PageRank patent expires in 2018. There are some other interesting reputation systems out there, too.


For programming, reputation has a point, but science articles should each be judged on their own merit. Ideally you shouldn't know whose article it is that you're reviewing, or you run the risk of preferential treatment.


I wasn't thinking of judging articles by the reputation of the authors. Instead, I was thinking of judging the value of reviews by the reputation of the reviewers.

I think that's actually pretty similar to what we're doing now. Articles accepted by journals of high reputation get a high reputation, and the journals select high-reputation individuals to do reviews. A good distributed reputation algorithm could perform essentially the same function. Leaving review open to anyone could help preserve net objectivity even if some individual reviewers are biased.

But anonymity could be accomplished by releasing articles initially with a timestamp and a digital signature from a brand-new keypair, publishing only the public key. After review, the authors could reveal that they hold the matching private key.
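
As a sketch of that scheme -- assuming Python's third-party `cryptography` package, with the messages made up for illustration:

    # Hypothetical sketch: anonymous-but-provable authorship via a fresh keypair.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    paper = b"...full text of the article..."  # placeholder contents

    # At submission time: generate a brand-new keypair, so the public key
    # reveals nothing about the author, and publish (paper, signature,
    # public key) together with a timestamp.
    private_key = Ed25519PrivateKey.generate()
    signature = private_key.sign(paper)
    public_key_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # During review: anyone can verify the article hasn't been altered,
    # without learning who wrote it.
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, paper)
        print("signature valid; author still anonymous")
    except InvalidSignature:
        print("article or signature was tampered with")

    # After review: the authors prove authorship by signing a claim with
    # the same private key, i.e. revealing that they hold it.
    claim = b"We are the authors of the article timestamped above."
    authorship_proof = private_key.sign(claim)

Ed25519 here is just one convenient choice; any signature scheme where fresh keypairs are cheap would do.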



