Here's a proposal: Bayesian scientists shouldn't select their own prior. Instead, publish how your results would update any prior, including the one picked by me, the reader.
I certainly haven't thought this through, but maybe this would make science more modular: combine the updates from M studies and calculate the new, combined update. Statisticians, does this work?
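To make the combine-the-updates idea concrete, here's a minimal Python sketch under a strong simplifying assumption: each study's likelihood is approximately Normal, so its "update" is fully summarized by an estimate and a standard error. The function name and all numbers are invented for illustration.

```python
import numpy as np

def combine_normal(prior_mean, prior_sd, estimates, std_errors):
    """Apply Normal-likelihood updates from M studies to any Normal prior.

    With Normal likelihoods and a Normal prior (variances treated as
    known), posteriors combine by precision (inverse-variance) weighting,
    so the reader can plug in whatever prior they like.
    """
    precisions = np.concatenate(([prior_sd**-2], np.asarray(std_errors, float)**-2))
    means = np.concatenate(([prior_mean], np.asarray(estimates, float)))
    post_precision = precisions.sum()
    post_mean = (precisions * means).sum() / post_precision
    return post_mean, post_precision**-0.5  # posterior mean and sd

# Reader supplies the prior; studies publish only (estimate, SE) pairs.
print(combine_normal(prior_mean=0.0, prior_sd=10.0,
                     estimates=[2.1, 1.8, 2.4], std_errors=[0.5, 0.7, 0.6]))
```

Because everything reduces to sums of precisions, applying the studies one at a time gives the same answer as applying them all at once, which is exactly the modularity being asked about. The catch is that real likelihoods are often not Normal, and then no two-number published summary captures the full update.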
Typically, if you are a practitioner in the field, it is not too difficult to identify instances where the result is highly dependent on the choice of prior.
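A toy illustration of that sensitivity (all numbers invented): one noisy study analysed under three priors of increasing scepticism.

```python
def posterior_mean(prior_mean, prior_sd, estimate, se):
    # Normal prior + Normal likelihood with known variances:
    # the posterior mean is a precision-weighted average.
    w_prior, w_data = prior_sd**-2, se**-2
    return (w_prior * prior_mean + w_data * estimate) / (w_prior + w_data)

# One noisy study (estimate 2.0, SE 1.5) under increasingly sceptical priors.
for prior_sd in (10.0, 1.0, 0.1):
    print(f"prior sd {prior_sd:5.1f} -> posterior mean "
          f"{posterior_mean(0.0, prior_sd, 2.0, 1.5):.3f}")
```

With a vague prior the posterior mean sits near the study's estimate; with a tight prior around zero it is dragged almost all the way back to zero.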
Yes - Laplace originally proposed this; it's a good approach (and incidentally the basis for Bayesian meta-analysis). Google "skeptical prior" for more.
I had a professor in graduate school who suggested exactly this - each study should conduct a meta-analysis of all previous studies on the subject, use that estimate as their prior, update, and publish for the next study...
The problem is that, having tried it, I found this much more difficult to do in practice.
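For what it's worth, the mechanics of that chain are trivial in a conjugate toy model; the hard part is everything around it. A sketch (data invented) using a Beta-Binomial model, where each study's posterior becomes the next study's prior:

```python
def sequential_update(alpha, beta, studies):
    """Chain Beta-Binomial updates: the posterior Beta(alpha, beta)
    from each study is handed to the next study as its prior."""
    for successes, trials in studies:
        alpha += successes
        beta += trials - successes
    return alpha, beta

# Start from a flat Beta(1, 1) prior; three invented studies follow.
print(sequential_update(1, 1, [(7, 10), (12, 20), (4, 5)]))
```

In this setting the order of the studies doesn't matter and chaining equals pooling; the practical difficulty shows up once designs, populations, and measurement differ across studies and a common likelihood no longer exists.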
On the second point: it's called meta-analysis, and it's an important area of research. However, it's very far from operating automagically and requires substantial manual input.
On the first point: this is equivalent to contracting a builder to build a house in any style.
Choosing a prior isn't always just about picking a wide band around 3 or a narrow band around 2. This is like the builder offering a choice of curtain colours from a swatch.
Making a commitment to implement any prior could be totally intractable. Like the overcommitted builder being asked to reconstruct R'lyeh.