> This is one of the reasons double blind peer review can be appealing. It avoids credentialism, where the institutional reputation overrides honest evaluations of the paper quality.
This is one of the things I really like about the top machine learning conferences at the moment. The reviews are open to all and blind to credentials. Combine that with the fact that even vast compute resources can be obtained fairly cheaply (at least for a short amount of time), and CS research is one of the more "democratic" fields that someone can be a part of nowadays.
This is in contrast to the research I did in grad school, which required access to a billion-dollar neutron source, and reviewers generally knew which institutions the papers were coming from...
Eh, more often than not, it's pretty clear who at least one of the authors is just from looking at what they cite. Double-blind review is not a foolproof system for avoiding credentialism and appeals to authority.
From an article [0] published in last month's CACM, describing attempts by three 2016 conferences to anonymize papers:
> We find that anonymization is imperfect but fairly effective: 70%–86% of the reviews were submitted with no author guesses, and 74%–90% of reviews were submitted with no correct guesses.