Very odd that a world-renowned physicist would be branded a crackpot for making this observation. Why would applying the principle of indifference to the continuous domain be controversial?
Well, we'd have to read what he was showing people to know. Maybe his presentation was deeply obfuscated, or maybe he was saying something slightly more subtle than what I mentioned. I'm just guessing from the hints in the article.
... not a physicist here, but I do research in applied information theory and statistics. This grandfather's paper is sort of interesting to me because it creeps into some of my areas of expertise a tiny bit, but he seems to have been approaching it from a totally different perspective (a physicist's) that I'm less familiar with. So far I seem to fall in the camp of "I kind of get what he's talking about, but he's not quite connecting the dots, or isn't explaining himself well, or something."
To answer your question: in some cases, transferring probabilistic reasoning about discrete states to continuous states, or vice versa, is sort of trivial. But sometimes it becomes controversial, and often that's because it's unclear how the scale of information maps onto the scale of the data, to put it murkily. When it's discrete, you know your observations can only take specific values, which is itself a chunk of information. But when it's continuous, you have an infinite range of values, and what counts as "indifference" depends on how you parameterize that range.
Think of a scale (like a kitchen or bathroom scale), for example. The scale has a certain accuracy within a certain range, and the further you go outside that range, the worse the accuracy gets. So, to make an analogy with the water-and-wine paradox, say I tell you, "here's a container of walnuts that could weigh anywhere from 0 to 100 kg. What's the probability its mass, as measured on this scale, falls in a certain range?" What would adhering to the principle of indifference mean here? You could argue it means a uniform distribution over the numbers from 0 to 100, but you could also say it's somehow uniform over the distinguishable units of the scale, which will not be uniform from 0 to 100, because the scale's distinguishable units are packed densely in its most accurate range and stretched out in its less accurate range, and there will also be issues with machine precision, etc. If you were counting walnuts, though, you'd have implicitly fixed the scale to the nonnegative integers and assumed your ability to count doesn't get fuzzy at some point. So you assume uniformity on one scale is uniformity on another.
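To make that concrete with the water-and-wine setup itself (the usual textbook statement, not anything from the paper: the wine-to-water ratio is somewhere between 1/3 and 3), here's a quick Python sketch of how two equally "indifferent"-looking uniform choices give different answers to the same question:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # "Indifference" choice 1: the wine/water ratio r is uniform on [1/3, 3].
    r_uniform = rng.uniform(1/3, 3, n)

    # "Indifference" choice 2: the water/wine ratio 1/r is uniform on [1/3, 3],
    # so r itself is the reciprocal of a uniform draw.
    r_reciprocal = 1 / rng.uniform(1/3, 3, n)

    # Same question ("what's the chance the ratio is at most 2?"),
    # two different answers: about 0.625 vs about 0.9375.
    print((r_uniform <= 2).mean())
    print((r_reciprocal <= 2).mean())

Both choices feel like "no preference," but they correspond to different measures over the same possibilities, which is the whole problem.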
E.g., if you think of the Jeffreys prior as one scaling of indifference: even for a Bernoulli variable, indifference isn't necessarily scaled as a uniform: https://en.wikipedia.org/wiki/Jeffreys_prior.
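For the Bernoulli case that's easy to check numerically (just the standard textbook result, nothing specific to the paper): the Fisher information is 1/(p(1-p)), so the Jeffreys prior is proportional to 1/sqrt(p(1-p)), which is a Beta(1/2, 1/2) and piles weight near 0 and 1 instead of being flat:

    import numpy as np
    from scipy import stats

    p = np.linspace(0.01, 0.99, 7)

    # Jeffreys prior density: sqrt of the Fisher information, normalized.
    # For a Bernoulli, I(p) = 1 / (p * (1 - p)), and the normalizing constant is pi.
    jeffreys = np.sqrt(1 / (p * (1 - p))) / np.pi

    # Identical to a Beta(1/2, 1/2) density; note it's not flat like Beta(1, 1).
    print(jeffreys)
    print(stats.beta(0.5, 0.5).pdf(p))
    print(stats.beta(1.0, 1.0).pdf(p))  # the "uniform" version of indifference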
I think this paradox involves (along with maybe being underspecified or poorly posed) somewhat related information-scale mapping issues: how you define uniformity and what your "possibility space" is, to be sort of Jaynesian about it.