You can't "not assume" a prior. Frequentist statistics does indeed use (sometimes improper) implicit priors, and thus implicitly assumes subjective information. Bayesian inference makes it explicit. Anyone who thinks they can evaluate evidence objectively in the first place is fooling themselves. There is always subjectivity.
If you really want to assume "minimum information," use the Jeffreys prior for your domain. It is proportional to the square root of the determinant of the Fisher information matrix.
The Jeffreys prior has the key property of being invariant under reparameterization. This property is not generally true of implicit frequentist priors (unless they happen to coincide with the Jeffreys prior), yet it is essential for uninformativeness. If your model depends on the units you choose for your parameters, you can hardly call it objective!
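To make that concrete, here is a minimal sketch (my own illustration, assuming a single Bernoulli observation as the running example and using sympy for the symbolic algebra) that derives the Jeffreys prior from the Fisher information and spot-checks the reparameterization invariance:

```python
# A minimal sketch of the Jeffreys prior, using a single Bernoulli
# observation as an assumed running example (not from the thread).
import sympy as sp

theta = sp.symbols('theta', positive=True)
x = sp.symbols('x')  # one observation, x in {0, 1}

# Log-likelihood of a single Bernoulli draw.
loglik = x * sp.log(theta) + (1 - x) * sp.log(1 - theta)
score = sp.diff(loglik, theta)

# Fisher information I(theta) = E[score^2], expectation over x ~ Bernoulli(theta).
fisher = sp.simplify(theta * score.subs(x, 1)**2
                     + (1 - theta) * score.subs(x, 0)**2)
print(fisher)  # equals 1/(theta*(1 - theta))

# The Jeffreys prior is proportional to sqrt(det I(theta));
# in one dimension that is just sqrt(I(theta)).
jeffreys = sp.sqrt(fisher)
print(jeffreys)  # equals 1/sqrt(theta*(1 - theta)), the Beta(1/2, 1/2) shape

# Invariance: under phi = theta**2, Fisher information transforms as
# I_phi(phi) = I_theta(theta(phi)) * (dtheta/dphi)**2, so sqrt(I_phi) is the
# theta-space prior times the Jacobian dtheta/dphi (positive here) -- exactly
# how a density must transform. Numeric spot-check at phi = 0.3:
phi = sp.symbols('phi', positive=True)
theta_of_phi = sp.sqrt(phi)
dtheta_dphi = sp.diff(theta_of_phi, phi)
jeffreys_phi = sp.sqrt(fisher.subs(theta, theta_of_phi) * dtheta_dphi**2)
via_jacobian = jeffreys.subs(theta, theta_of_phi) * dtheta_dphi
print(sp.N(jeffreys_phi.subs(phi, 0.3)), sp.N(via_jacobian.subs(phi, 0.3)))  # equal
```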
Explicit nonsense is still nonsense. I don't believe that either Bayesian or frequentist approaches are the "one true way"; they are merely tools that help us understand the world, and both approaches have limitations.
I suggest looking at the essay "Beyond Bayesians and Frequentists" by Jacob Steinhardt (http://cs.stanford.edu/~jsteinhardt/stats-essay.pdf), who says, "The essential difference between Bayesian and frequentist decision theory is that Bayes makes the additional assumption of a prior... and optimizes for average-case performance rather than worst-case performance. It follows, then, that Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient. However, if we have no way of obtaining a good prior, or when we need guaranteed performance, frequentist methods are the way to go." The same author has an essay that tries to explain why Bayesians shouldn't be so confident in their approach: "A Fervent Defense of Frequentist Statistics", 18th Feb 2014, https://www.lesswrong.com/posts/KdwP5i6N4E4q6BGkr/a-fervent-...
Note that the author isn't against Bayesian approaches, but against the dogmatic assumption that frequentist approaches are always worse.
> Bayes makes the additional assumption of a prior... and optimizes for average-case performance rather than worst-case performance
This is nonsense. Nothing stops a Bayesian from picking a prior that optimizes for worst-case rather than average-case performance, given a particular utility function. The really questionable premise here is that the utility function is known; in practically any case of real interest, it isn't, and that's a problem regardless of whether your decision theory is "Bayesian" or "frequentist" (which are misnomers for decision theories anyway for the reasons I gave in my other post just now).
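For what it's worth, the standard textbook illustration of that point (assumed here as an example, not taken from the essay) is the binomial with squared-error loss: the Bayes rule under the least-favorable Beta(sqrt(n)/2, sqrt(n)/2) prior has constant risk, which makes it the minimax estimator. A quick numerical sketch:

```python
# Sketch: a Bayes estimator chosen for worst-case (minimax) performance.
# Binomial model X ~ Bin(n, theta) with squared-error loss; the Bayes rule
# under the least-favorable Beta(sqrt(n)/2, sqrt(n)/2) prior is
# delta(X) = (X + sqrt(n)/2) / (n + sqrt(n)), which has constant risk.
import numpy as np

n = 25
thetas = np.linspace(0.01, 0.99, 99)

def risk_mle(theta):
    # Risk of the MLE X/n: it is unbiased, so just its variance.
    return theta * (1 - theta) / n

def risk_minimax_bayes(theta):
    # Risk = bias^2 + variance of delta(X) = (X + sqrt(n)/2)/(n + sqrt(n)).
    denom = n + np.sqrt(n)
    bias = np.sqrt(n) * (0.5 - theta) / denom
    var = n * theta * (1 - theta) / denom**2
    return bias**2 + var

r = risk_minimax_bayes(thetas)
print(risk_mle(thetas).max())  # worst case for the MLE: 1/(4n) = 0.0100
print(r.max())                 # constant n/(4*(n + sqrt(n))^2) ~ 0.0069
print(r.max() - r.min())       # ~0: the risk curve is flat in theta
```

The Bayesian machinery here is optimizing the worst case by construction; the prior is just the knob being turned.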
> the essay "Beyond Bayesians and Frequentists" by Jacob Steinhardt
This essay is about decision theory; the things he is calling "Bayesian" and "frequentist" are more than just statistical methods, and statistical methods are what the replication crisis is about. Decision theory, particularly when other agents are present, cannot be handled by any method that only considers statistics; game theory is involved. The Steinhardt article is basically claiming that "Bayesians" can't use game theory while "frequentists" can, which is nonsense.
When I say you can't "not assume" a prior, I am being 100% literal. Every frequentist technique can be interpreted in a Bayesian way, and the priors always contain information (which has a precise mathematical definition). The converse is also true, but the translation is much more awkward and complicated.
https://en.wikipedia.org/wiki/Jeffreys_prior
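To make the "implicit prior" point concrete, here's a small sketch (the model, numbers, and library choice are my own, assuming a normal mean with known variance): the textbook frequentist confidence interval coincides numerically with the Bayesian credible interval under a flat improper prior on the mean.

```python
# Sketch: frequentist CI == Bayesian credible interval under a flat prior.
# Normal model with known sigma, assumed purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n = 2.0, 40
data = rng.normal(loc=1.0, scale=sigma, size=n)
xbar, se = data.mean(), sigma / np.sqrt(n)

# Frequentist 95% confidence interval for the mean.
z = stats.norm.ppf(0.975)
ci = (xbar - z * se, xbar + z * se)

# Bayesian posterior under the flat improper prior p(mu) proportional to 1
# is mu | data ~ N(xbar, se^2), so the credible interval is identical.
cred = stats.norm(loc=xbar, scale=se).interval(0.95)

print(ci)
print(cred)  # same endpoints: the "objective" procedure had a prior all along
```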