This is what's known as the "evidence". We usually expect scientific papers not only to present their data, but also to offer some analysis and some kind of conclusion. Eliminating that second half probably won't do much to solve the problem.
More importantly, your priors help you select plausible explanations for the data, which let you dig deeper and identify the root cause. If you pretend not to have priors, you won't have any idea how to interpret the data.
For example, say you run an experiment that shows that several communities using pesticide X average a 1% higher rate of lung cancer than communities using pesticide Y. There's a lot of variance, though, and some communities using X have much lower cancer rates.
You could publish the data with no further analysis and leave the broader scientific community to do the work of drawing conclusions. Or you could remember that you expected the opposite effect, because pesticide Y has a more direct route into the lungs, go back and check the smoking rates in those communities, and find a nearly perfect correlation. After accounting for smoking rates, pesticide Y appears to increase lung cancer rates by 1% relative to pesticide X.
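To make that adjustment concrete, here's a minimal sketch in Python. All of the numbers (community counts, smoking rates, effect sizes) are invented for illustration: communities using X smoke more, which masks Y's effect in the raw comparison, and a simple least-squares adjustment for smoking flips the sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical communities, half per pesticide

is_y = np.repeat([0, 1], n // 2)                 # 0 = pesticide X, 1 = pesticide Y
smoking = rng.normal(np.where(is_y, 22, 30), 5)  # confounder: X communities smoke more
cancer = 2.0 + 1.0 * is_y + 0.25 * smoking + rng.normal(0, 0.5, n)

# Naive comparison: X looks ~1 point worse, purely because its communities smoke more.
print("raw Y - X:", cancer[is_y == 1].mean() - cancer[is_y == 0].mean())  # ~ -1.0

# Adjust for smoking with ordinary least squares: the Y coefficient flips sign.
design = np.column_stack([np.ones(n), is_y, smoking])
coef, *_ = np.linalg.lstsq(design, cancer, rcond=None)
print("adjusted Y effect:", coef[1])             # ~ +1.0
```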
Yes, you could have just published the raw data and other scientists could have figured that out. But realistically, they won't be nearly as involved in the work as you and your peer reviewers are. Even assuming you identify the communities in your published dataset, few scientists doing a meta-review would go deep enough to cross-correlate your data against smoking rates if you hadn't already done it for them. And if they do, it's only because their priors are telling them there's an important effect that could alter the conclusion.
In other words, it's your priors that give you the nagging feeling that no matter how large the dataset D comparing cancer rates between X and Y grows, you just won't feel P(C|X,~Y,D) - P(C|~X,Y,D) collapse to a narrow distribution in your mind, because you know the outcome hinges critically on a third factor you haven't seen yet.
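A toy continuation of the same sketch shows that intuition numerically: with smoking left unmeasured, growing the dataset only tightens the naive estimate around the confounded value, never around the causal one.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_effect(n):
    """Raw Y-minus-X cancer difference with the smoking confounder unmeasured."""
    is_y = rng.integers(0, 2, n)
    smoking = rng.normal(np.where(is_y, 22, 30), 5)
    cancer = 2.0 + 1.0 * is_y + 0.25 * smoking + rng.normal(0, 0.5, n)
    return cancer[is_y == 1].mean() - cancer[is_y == 0].mean()

for n in (100, 10_000, 1_000_000):
    print(n, naive_effect(n))  # converges to -1.0, not the causal +1.0
```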