The data in a paper can be objectively measured; the methodology used to conduct the study (sample size, cohort composition, which cohort groups are chosen for comparison) could go either way; and the conclusions drawn are often subjective. If two of the three component parts of a paper are arguably subjective, it's not surprising that Nobel Prize winners get a pass on the quality of their papers.
The data is objective but the quality of the research is subjective. For example, if my sample size was 150 for some study, whether that was a sufficient size is subjective.
Peer review doesn't verify the correctness of data. At best they'll flag flagrantly fabricated data that doesn't pass the sniff test, but attempted replication of the results is not what peer reviewers are doing.
There are objective measures of statistical power: given an effect size specified a priori, you can estimate the power of a particular procedure. The trouble is that what a "reasonable" effect size may be is subjective and requires prior knowledge; post-hoc power calculations are widely regarded as misleading, conveying little information beyond a p-value.
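To make the point concrete, here's a minimal sketch of an a priori power calculation for a two-sample t-test, using the standard normal approximation (the function name `approx_power` and the specific effect sizes are my own illustrative choices, not from the comment above). The same n = 150 per group mentioned earlier is well powered or badly underpowered depending entirely on the effect size you assume:

```python
from math import sqrt
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test via the
    normal approximation: power ~= Phi(d * sqrt(n/2) - z_crit)."""
    z_crit = norm.ppf(1 - alpha / 2)          # critical value, two-sided test
    ncp = effect_size * sqrt(n_per_group / 2)  # noncentrality parameter
    return norm.cdf(ncp - z_crit)

# Same n = 150 per group, different assumed effect sizes (Cohen's d):
print(round(approx_power(0.5, 150), 3))  # medium effect: comfortably powered
print(round(approx_power(0.2, 150), 3))  # small effect: well under the usual 0.8 target
```

The calculation itself is objective; the subjective step is choosing which `effect_size` to plug in before collecting data.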
Honestly, this is the least of academia's problems.
I've seen it from the inside. Probably read 80-100 widely-cited papers during my PhD (before dropping out), and maybe half a dozen of them were written by people who had any interest in discovering truth and pushing mankind forward.
I seriously cannot overstate both the willful ignorance of established scientists and the extent to which it is enforced on the next generation.
People choose what they listen to, watch, or read based heavily on who the performer is.
A performer who has consistently given good content will obviously have a bigger pull than a nobody still trying to get their first break.