Statistics as practiced today (1930s until now?) consists almost entirely of making inferences about unobserved probability distributions. That includes nonparametric statistics, and frequentist versus Bayesian has nothing to do with it.
There are some probability models that are not really statistical models, but there are few or no statistical models that are not also probability models.
Least-squares regression is a probability model. Even if you don't particularly care about the error distribution, you are still estimating a conditional expectation and imposing a conditional-independence assumption on the residuals. If that's not a probability model, then I don't know what it is!
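To make that concrete, here is a minimal sketch (the data and numbers are made up for illustration, not taken from the answer above): fitting a line by ordinary least squares and reading the result as the probability model y | x ~ Normal(b0 + b1·x, sigma²), where the fitted coefficients estimate the conditional mean and the residuals estimate the error variance.

```python
# Hedged sketch: OLS read as an implicit probability model for y given x.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, size=200)   # simulated data, for illustration only

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # estimate of the conditional mean E[y | x]
residuals = y - X @ beta_hat
sigma2_hat = residuals @ residuals / (len(y) - 2)   # estimate of the error variance

print(beta_hat, sigma2_hat)  # coefficients of E[y | x] and Var(residual): a full conditional model
```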
Statistics can be summarized as one thing: n -> N. Does 'little n' represent 'big N'? In other words, does the sample generalize to the population? Statistics means something like "description of the state". It was born out of census work, where a larger population had to be estimated from a sample. "n" could be a handful of fish in an "N" lake. "n" could also be a parameter estimated in a linear regression from the sample of data collected, while "N" is the true parameter of the relationship if we had all the data. Point estimation is about finding the needle in the haystack, but much more often statistics is about finding the haystack given the needle. One tool statistics uses to get to the haystack is probability.
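A toy illustration of the n -> N idea (all numbers invented for this sketch): simulate a whole "lake" of fish lengths we would never measure in full, catch a small sample, and compare the sample mean to the population mean it is supposed to stand in for.

```python
# Hedged sketch: does 'little n' tell us about 'big N'?
import numpy as np

rng = np.random.default_rng(1)
N_lake = rng.normal(loc=30.0, scale=5.0, size=100_000)  # the whole lake: the population we never fully observe
n_catch = rng.choice(N_lake, size=50, replace=False)    # the handful of fish we actually measure

print(N_lake.mean())   # population parameter (the "haystack")
print(n_catch.mean())  # sample estimate (the "needle" we can actually compute)
```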
A point estimate of distribution parameters describing a population is frequentist. A point estimate that summarizes a distribution placed over the parameter itself (a posterior) is Bayesian.
How the parameters are estimated is not the message.
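A rough contrast between the two routes (illustrative numbers only): estimating a coin's heads probability after 7 heads in 10 flips. The frequentist point estimate treats p as a fixed unknown; the Bayesian one summarizes a posterior distribution over p. Both land on similar numbers, which is the point: how the parameter is estimated is not the message.

```python
# Hedged sketch: frequentist vs Bayesian point estimates for a Bernoulli probability p.
heads, flips = 7, 10

# Frequentist: p is a fixed unknown; the point estimate is the sample proportion (the MLE).
p_mle = heads / flips

# Bayesian: p itself gets a distribution. With a flat Beta(1, 1) prior the posterior is
# Beta(1 + heads, 1 + flips - heads); one point estimate is its mean.
alpha, beta = 1 + heads, 1 + (flips - heads)
p_posterior_mean = alpha / (alpha + beta)

print(p_mle, p_posterior_mean)  # 0.7 vs roughly 0.667
```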
In statistics there are Latin letters and Greek letters. When you see a symbol written as a Greek letter, it is a population parameter; when you see a Latin letter, it is a sample estimate. The estimate could be Frequentist, Bayesian, Likelihoodist, Fiducial, Empirical Bayes, etc. Theoretical population Greeks, or sample-calculated Latins.
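A few of the standard pairings under that convention (the usual textbook symbols, not something specific to this answer):

```latex
% Greek letter = population parameter, Latin letter = sample estimate.
\begin{align*}
  \bar{x} &\ \text{estimates}\ \mu      && \text{(mean)}\\
  s^2     &\ \text{estimates}\ \sigma^2 && \text{(variance)}\\
  r       &\ \text{estimates}\ \rho     && \text{(correlation)}\\
  b       &\ \text{estimates}\ \beta    && \text{(regression slope)}
\end{align*}
```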