One other mystery not mentioned is the problem of fine tuning. The Standard Model requires certain parameters (like alpha, the fine structure constant) to have their current values to many decimal places for the universe as we know it to exist. There are two philosophical schools of thought about that -- (a) we're in the universe we're in, so by definition it must exist and there's a selection bias at work; or (b) there is an underlying detailed structure that gives the supposedly 'fundamental' quantities their values as an emergent property of something more beautiful, and thus they're not "free" at all. This is one of the things that SUSY was supposed to solve, but the LHC has so far found no experimental evidence for it. A good introduction to this (in the context of the Higgs mass, where the need for fine tuning is really apparent) is here: https://www.physicsmatt.com/blog/2016/11/17/paper-explainer-...
“This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.”
The puddle being in a hole implies a world outside of the hole. If the puddle has no means to perceive the outside world except a careful inspection of its own bounds, isn't that an interesting mystery?
To me the fine tuning problem is a bit like saying "circles wouldn't exist without π being exactly the value it is" and wondering why π has that value specifically and not any other value.
I don't think A and B are mutually exclusive. The values seem perfectly tuned for our universe because we exist in our universe, and they are probably emergent from a more fundamental parameter, possibly something like the particular Calabi-Yau manifold topology that happens to correspond to our universe (in the case of superstring theory). If we lived in a different CY topology that was capable of supporting intelligent life then we'd wonder why that one's constants are so precisely tuned for us.
But then, I'm an idiot who just watches PBS Space Time and nods his head, not a physicist.
That's not that great an analogy for fine tuning. The decimal representation of pi is arbitrary, an artifact of our numerical system; the relationship that defines it is fixed. If that relationship could change, circles indeed would not exist.
Let's move the analogy to triangles. A triangle's angles sum to 180 degrees, but that only holds on a flat surface; with curvature, the sum can be more or less than 180 degrees.
So this isn't a fine tuning problem on its own; if you lived in a universe where triangles summed to less or more than 180 degrees, it would simply be a universe with negative or positive curvature.
The issue with fine tuning is that the curvature of the universe is directly tied to the mass/energy density, and our universe seems to sit in an extremely narrow range out of all possible values. Any deviation from that range would not just produce a universe whose triangles sum to fewer or more than 180 degrees; it would produce a universe that either collapses on itself in the blink of an eye or expands so fast that gravity would never be strong enough for even the most basic structures to form.
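To put a rough number on how narrow that range is, here is a minimal back-of-the-envelope sketch (illustrative figures only, using the standard growth scalings for |Omega - 1|) of how close to the critical density the early universe had to be for it to still look nearly flat today:

    # Rough flatness-problem estimate: |Omega - 1| grows roughly like t^(2/3)
    # in the matter era and like t in the radiation era (all values approximate).
    t_now = 4.3e17        # age of the universe, in seconds (~13.8 Gyr)
    t_eq = 1.6e12         # rough matter-radiation equality time, in seconds
    t_bbn = 1.0           # ~1 second, around nucleosynthesis

    omega_dev_now = 0.01  # |Omega - 1| today is observed to be at most ~0.01

    # run the growth backwards: matter era from today back to equality,
    # radiation era from equality back to ~1 second
    dev_at_eq = omega_dev_now * (t_eq / t_now) ** (2.0 / 3.0)
    dev_at_bbn = dev_at_eq * (t_bbn / t_eq)

    print(dev_at_bbn)     # ~1.5e-18 with these rough inputs

In other words, with these rough inputs the density at one second had to match the critical density to roughly one part in 10^18, which is the kind of sensitivity being described.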
So the issue really is that to produce a universe which will form galaxies and stars and survive long enough to produce life, you need a lot of parameters at very specific values; even the smallest deviation would produce a universe that could never support life, let alone intelligent life. What's even stranger, iirc, is that the values have to be exactly what they are now: you can't simply 2X all of them to maintain the proportions and get the same result.
And this is really what people are looking to solve. Yes, the anthropic principle is a solution: if the values were anything other than what they are now, we wouldn't be here to discuss why. But the issue is that out of all other possible combinations you don't seem to find another stable state, and that is the true mystery.
> it would produce a universe that either collapses on itself in the blink of an eye or expands so fast that gravity would never be strong enough for even the most basic structures to form
It could do so in other chunks of space or time. If “time” and “expansion” have no final end, and these constants can fluctuate for some reason, eventually there would appear a universe like ours. There is no one who could count all the “failed” attempts. And maybe ours is also a failure in a sense, compared to a hypothetical much “better” universe.
That is still fine tuning; in fact, one of the more common interpretations of it is to just have an infinite number of universes, so that one of them has to turn out alright…
> The issue with fine tuning is that the curvature of the universe is directly tied to the mass/energy density, and our universe seems to sit in an extremely narrow range out of all possible values. Any deviation from that range would not just produce a universe whose triangles sum to fewer or more than 180 degrees; it would produce a universe that either collapses on itself in the blink of an eye or expands so fast that gravity would never be strong enough for even the most basic structures to form.
Well, the notion of "extremely narrow range" is arbitrary. There is no mathematical notion of "large numbers" vs "small numbers". If the constants you speak of differed from their current values by a relative amount of 10^-g64 (g64 being Graham's number), nothing would really change. If they were 2 times smaller, the universe would be extremely different.
But 2 and 10^-g64 are just numbers. Neither of them is inherently large or small. We just happen to understand 1, 2, 3, ... much better than g64.
You could say that there is a surprising amount of leg-room in the values of some constants of nature, while others can only vary in a reasonable range. While the fine-structure constant can vary within a reasonable +-1/1000 range without massive changes, other values can vary by massive numbers such as +-2!
I'm not sure I understand your point. There is such a thing as "large" and "small" in physics, but they're relative to the scale you're considering. For example, if you're a million miles from the Earth, then from a gravitational perspective you may as well be infinitely far away (a ~ 10^-5 g). But if you were the same distance from the sun, gravity would be extremely relevant for you (a ~ 5 g).
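For anyone who wants to check those figures, here is a quick sketch (my own arithmetic, assuming standard values for G and the masses):

    # a = G*M / r^2 at a distance of one million miles from the Earth and from the Sun
    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24       # mass of the Earth, kg
    M_sun = 1.989e30         # mass of the Sun, kg
    r = 1.0e6 * 1609.34      # one million miles, in metres
    g = 9.81                 # Earth surface gravity, m/s^2

    a_earth = G * M_earth / r**2
    a_sun = G * M_sun / r**2
    print(a_earth / g)       # ~1.6e-5 g
    print(a_sun / g)         # ~5.2 g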
Moreover, I find it unfathomable that the scale 1/g64 would be relevant anywhere in physics – I certainly can't think of any examples!
There is such a thing as "large" and "small" as related to measurement and impact.
But a change in a base parameter of the laws of physics is different. If a change of 0.1% in the value of one parameter is detectable, then it is "a large change". If a change of 200% in the value of another parameter is not detectable, then it is "a small change".
But the fine tuning argument relies on defining an absolute idea of "large" and "small". In the fine tuning problem definition, 0.1% is "small" and 200% is "large", and it's the parameters themselves that behave strangely, since one parameter produces visible results for "small" changes while another doesn't. Get rid of the absolute ideas of "small" and "large" numbers, and the fine tuning problem goes away entirely.
> Moreover, I find it unfathomable that the scale 1/g64 would be relevant anywhere in physics – I certainly can't think of any examples!
My point about 1/g64 was that, for all physical quantities, there is some scale at which they can vary without measurable differences. I took such a fantastically small number exactly to make sure I'm not risking a case where the factor I gave as an example would in fact cause detectable differences.
The range doesn't have anything to do with "numbers" directly, because yes, you can always say there are infinitely many values between any two numbers no matter how close they are to each other. The very limited range is actually expressed in physical quantities.
Back to the original analogy: pi is always exactly pi not because we write it as 3.14…, but because of the relationship between the various innate properties of a circle.
So fine tuning isn't "oh gosh, we got very oddly specific values here"; it's that we have problems such as the hierarchy problem (https://en.m.wikipedia.org/wiki/Hierarchy_problem) that are only solvable through fine tuning.
So back to the circle: it's really a case of us not understanding the relationship between the circumference and the radius and just brute-forcing the value of pi, which is really what fine tuning is.
Sure, but none of this must have a fundamental explanation. It could just be the way the universe is; nothing in maths requires this 'problem' to have a solution.
Conversely, things like the measurement problem, quantum gravity, dark matter, dark energy, the mass of neutrinos and others must have some answer - they are facts we can see that just don't mesh with current theories.
So why study a problem that there is no reason to believe will have an ultimate answer, when you can study problems that must have such an answer?
Back to the circle… take a string and a pin and trace a circle with it, then take another piece of string and lay it along the traced circumference.
Cut the traced string in half: each half will have length pi (granted the original string you used to draw the circle has length one). That relationship is fundamental, and the answer we are trying to find is exactly such a fundamental relationship, one that would explain why the constants have the values they have.
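(A trivial numerical check of that picture, assuming the radius string has length 1:)

    import math

    radius = 1.0
    circumference = 2 * math.pi * radius   # length of the traced string
    print(circumference / 2)               # each half is ~3.14159..., i.e. pi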
As in, there needs to be a mechanism that defines those values in relation to each other in a manner that does not require "fine tuning", because fine tuning would require either an infinite number of other universes which can have different values regardless of the fate of those universes, or a cycle of death and rebirth in which these values somehow get randomized (which opens a whole other question: how does that happen?).
The circle is a nice example where there is a simple analytical relationship between the radius and the circumference (1:2pi). Other basic shapes, like the ellipse, have no such simple relationship, even in pure maths. And when performing the experiment you mentioned, the strings are very unlikely to be in 1:2pi ratio, they will be within some range of that ratio.
The universe has some set of independent initial parameters. Whether they are the ones we know or there are some other, more fundamental ones, ultimately some parameters are there.
Even with a single fundamental parameter, that parameter will have some range within which variations in value don't affect the universe at all (like the 1/g64 example), and some range where changes in it would lead to a very different universe, potentially one that can't sustain life.
This idea of "probability" of the observed values of these parameters is then meaningless. What is the probability distribution of the "universes"? To say that the "fine tuned" universe is "unlikely" you have to posit a probability distribution, but there is no empirical data to then verify if this distribution is correct, and, crucially, there can never be such empirical data.
You might as well ask yourself why the universe is four-dimensional (or 11-dimensional) and not 2-dimensional.
> You might as well ask yourself why the universe is four-dimensional (or 11-dimensional) and not 2-dimensional.
We do ask ourselves that. The number of degrees of freedom is tied to the laws of physics, so it's not really a question of "why X" but "what are the physical laws that give us X".
And if we continue with the circle and ellipse analogy, then what we are trying to figure out is whether our universe is a circle or an ellipse. That's a pretty worthy question to ask in my book.
We are trying to answer whether the values are actually random, and thus have to be fine tuned to produce the universe we live in today, or whether they are not random, and if so, what is the relationship governing them that makes them take the values they have.
> out of all other possible combinations you don't seem to find another stable state, and that is the true mystery.
Is that really the case? I thought in theory there could be a massive number of other stable universes with different values. The number of unstable universes is of course much larger.
"explain" might be a better word than "solve" there. People are looking to solve the question "why are the fundamental constants of physics what they are?" which isn't a problem in the sense of something wrong but more something that needs an explanation for a full understanding to be possible.
According to Leonard Susskind[1], fine tuning is a compelling argument by itself[2], and the strongest case is the cosmological constant.[3] In a nutshell, it is a sort of repelling force first proposed by Einstein to create a workable model for a static universe; he later regretted it as one of the biggest mistakes of his career. However, the idea is now back, with Nobel prize winning research showing that expansion is accelerating, which requires a positive value. It could account for what we call "dark energy."
When expressed in one way, it is about 10^-122 in units of the inverse square Planck length. I'm not smart enough to completely understand it, but it is (according to physicists) an incredibly precise ingredient in the various properties of physics that make our Universe possible. Any larger or smaller and the model falls apart. If it is an accident, that is one hell of a lottery ticket.
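As a rough check of that figure (back-of-the-envelope values only, not the cited source's numbers), the cosmological constant expressed in inverse square Planck lengths does land around 10^-122:

    # Lambda in units of the inverse square Planck length (approximate inputs)
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    hbar = 1.055e-34     # reduced Planck constant, J*s

    H0 = 67.7e3 / 3.086e22          # Hubble constant, ~67.7 km/s/Mpc, in 1/s
    omega_lambda = 0.69             # dark-energy fraction of the critical density

    lam = 3 * omega_lambda * H0**2 / c**2     # cosmological constant, 1/m^2
    l_planck_sq = hbar * G / c**3             # Planck length squared, m^2

    print(lam * l_planck_sq)        # ~3e-122, i.e. roughly 10^-122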
> and wondering why π has that value specifically and not any other value.
Maybe the question should be "why circles" then. Early models of the solar system suffered under the assumption that everything should be modeled using circles when ellipses modeled everything better.
It's an explanatory theory with a lot - a lot - of gaps.
It has been extended with some nice predictions like the Higgs. But basically it's a Franken-patchwork of math glued together quite awkwardly.
Because that is what it is. Literally. It was developed by thousands of grad students and their supervisors throwing math at the wall and keeping anything that matched observations. So there was a lot of random searching involved.
What's missing is a central guiding metaphor.
Relativity has one. In comparison, the Standard Model is a very epicycle-ish tool for calculating Lagrangians, with plenty of "Yes but" and "Except when".
I'm not sure I quite explained myself, although I do appreciate the reply.
See, I also don't see how one central guiding metaphor would be any less arbitrary than none, or ten metaphors.
What I am asking is why would α = 0.0072973525693 be more of a problem than electromagnetism or the speed of light being constant or e=mc^2? If the fine structure constant was exactly 1 would that make it better? Why is 1 less arbitrary or more good?
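(For context, that number isn't pulled out of thin air; it's the dimensionless combination e^2 / (4 pi epsilon_0 hbar c). A minimal sketch with the standard SI values:)

    import math

    e = 1.602176634e-19           # elementary charge, C
    epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
    hbar = 1.054571817e-34        # reduced Planck constant, J*s
    c = 299792458.0               # speed of light, m/s

    alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
    print(alpha)      # ~0.0072973525693
    print(1 / alpha)  # ~137.036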
I understand the beauty in simplicity, fewer concepts or ideas, smaller formulas explaining more things, 1 being "nicer" than other numbers, etc. So I understand that aspect of goodness, so it is the connection to underlying observable reality I'm asking about.
Because isn't it also arbitrary to think that if we could explain things in more beautiful ways then it would be a deeper understanding or closer to the truth?
> The upshot is that there are three parsimony paradigms that explain how the simplicity of a theory can be relevant to saying what the world is like:
> Paradigm 1: sometimes simpler theories have higher probabilities.
> Paradigm 2: sometimes simpler theories are better supported by the observations.
> Paradigm 3: sometimes the simplicity of a model is relevant to estimating its predictive accuracy.
> These three paradigms have something important in common. Whether a given problem fits into any of them depends on empirical assumptions about the problem. Those assumptions might be true of some problems, but false of others. Although parsimony is demonstrably relevant to forming judgments about what the world is like, there is in the end no unconditional and presuppositionless justification for Ockham’s Razor.
I really hope it doesn't sound like that is what I'm asking. I went back and re-read what I wrote and it doesn't seem like that at all to me, but maybe I'm not a good writer.
Perhaps I am the one who is not a good reader! Please also note that I am not a physicist or a mathematician.
> What I am asking is why would α = 0.0072973525693 be more of a problem than electromagnetism or the speed of light being constant or e=mc^2? If the fine structure constant was exactly 1 would that make it better? Why is 1 less arbitrary or more good?
α = 1 is less arbitrary / more good than α = 0.0072973525693 because the former requires fewer bits of information than the latter. Fewer bits => more parsimony.
Similarly:
> See, I also don't see how one central guiding metaphor would be any less arbitrary than none, or ten metaphors.
1 < 10, therefore if parsimony is desirable, 1 is better than 10.
That is what I'm asking. However you want to define this or label it.
The article says a whole lot as far as I can tell without really saying anything. Or at least not what I'm asking. The "paradigms" all start with "sometimes", and they don't seem to be backed by anything scientific beyond handwaving.
I agree that if you had two theories that are equally expressive and accurate, all else being equal the simpler one would be preferable. That's not really what I'm asking about though.
We do not have two equal theories, one that requires the fine structure constant and one that does not. We have a theory with a fine structure constant and lots of other things. The question is: why is that "problematic"? We don't look at e=mc^2, think that the ^2 is problematic, and conclude that there should be a simpler, better theory without it. Why does alpha get singled out?
The stop squark is one of the sfermions (the superpartner particles of their associated fermions). As such they are all sparticles. Some of the other sfermions would be the sup squark, the scharm squark, the sstrange squark, the selectron or the stau. [1]
In my opinion physicists are great at naming things :D
Those supersymmetric particles have the disadvantage of not having any evidence they exist. I am sad for all those physics grad students who went into supersymmetry and string theory.
> (a) we're in the universe we're in, so by definition it must exist and there's a selection bias there;
This has basically 3 possible explanations - insane coincidence, multiverse (with many versions of constants; certain forms of mathematicism also provide such "multiverse") or "reason" (simulation admin/God).
somewhat related, I think, is the enormous disparity between the strength of the strong/weak/electromagnetic forces and gravity (30 or 40 orders of magnitude); possible indication that there's something missing
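For a rough sense of that disparity, here is a minimal sketch (textbook values; it compares only electromagnetism with gravity, for a proton and an electron, and the ratio is independent of the distance between them):

    k = 8.988e9          # Coulomb constant, N m^2 C^-2
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    e = 1.602e-19        # elementary charge, C
    m_e = 9.109e-31      # electron mass, kg
    m_p = 1.673e-27      # proton mass, kg

    ratio = (k * e**2) / (G * m_e * m_p)
    print(ratio)         # ~2.3e39, i.e. about 39 orders of magnitude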
Why is this a problem? If the other three forces were perfectly equal, or spaced at regular intervals, maybe this would have meant something, but the other three forces' strengths vary by some factors x, y, z between them. Other than human intuition, there is nothing inherently different between x = 2 and x = 10^30 or 10^40.
I don't know that it's a problem, and I'm no expert; my understanding is that differences like that can sometimes indicate that there's something else at play
It's only framed as a problem when it is assumed that the values could be other than what they are. But we have no reason to suspect that they could be different. We have no idea how unlikely the current selection is given our observation is a single observable. For all we know, it's certain.
'What really interests me is whether God had any choice in creation' - Albert Einstein