Intriguing new result from the LHCb experiment at CERN (home.cern)
76 points by hartem_ on March 28, 2021 | 49 comments


> The new result indicates hints of a deviation from one: the statistical significance of the result is 3.1 standard deviations

That's not enough, is it? I thought in particle physics people wanted 5 standard deviations. Does anyone know why this is being published, then?


I am under the impression that anything over 3 sigma counts as strong evidence for new physics, but not strong enough to be treated as new physics with high probability, which requires 5 sigma.

However, all these numbers are, well, arbitrary. By that I mean there is nothing special about 3 sigma, 5 sigma, or a p-value below 0.05. The actual value is arbitrary and persists mostly through inertia, i.e. because it is what the literature commonly uses.


Due to the sheer number of hypotheses that get tested in particle physics, 3 sigma effects appear all the time, and then disappear before reaching any higher certainty. That's why it's considered a suggestion, not proof.

And yes, the exact numbers are arbitrary, but their ballpark isn't. You reduce the necessary certainty if you have confidence in your priors, and increase it if you test many different hypotheses. The target confidence also varies from one discipline to another based on how much data one can realistically gather, but nearly all of physics falls into the "we can gather enough data" category anyway.


Well put, but I’ve always felt there is a middle ground where a knowledgeable scientist can make a very educated guess with much less statistical significance, as long as the experiment is well thought through, you are aware of the noise models, and alternative explanations have been explored and mostly ruled out. Of course, for publication you’d better bust your ass to reach whatever sigma they demand, but more often than not there is cause for either optimism or pessimism way before that point.


Not really. The confidence you have grows exponentially with the observed deviation, and that deviation grows roughly with the amount of data you have (if the hypothesized effect is real, of course).

So there is some room for fine adjustment, but it's very small in the experiments where you need a lot of confidence. There is more margin for judging non-significant results at the low-confidence end, as exponentials grow slowly when they are small.


> Due to the sheer number of hypotheses that get tested in particle physics, 3 sigma effects appear all the time, and then disappear before reaching any higher certainty.

See, e.g., the look-elsewhere effect [1] or the multiple comparisons problem [2].

---

[1] https://en.wikipedia.org/wiki/Look-elsewhere_effect

[2] https://en.wikipedia.org/wiki/Multiple_comparisons_problem
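
To make the multiple-comparisons point concrete, here is a rough back-of-the-envelope sketch in Python; the number of independent searches is made up purely for illustration, and real trials-factor corrections at the LHC are considerably more careful:

```python
from scipy.stats import norm

# One-sided tail probability ("local p-value") of a 3 sigma fluctuation.
p_local = norm.sf(3.0)                    # ~1.35e-3

# If you effectively perform N independent searches, the chance that at
# least one of them fluctuates up to 3 sigma or more is much larger.
N = 100                                   # hypothetical number of searches
p_global = 1 - (1 - p_local) ** N         # ~0.13

# Convert the global p-value back into an equivalent significance.
z_global = norm.isf(p_global)             # ~1.1 sigma

print(f"local:  p = {p_local:.2e} (3.0 sigma)")
print(f"global: p = {p_global:.2e} ({z_global:.1f} sigma) over N = {N} searches")
```

In other words, a local 3 sigma bump is worth a lot less once you account for how many places you looked, which is part of why the 5 sigma convention exists.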


Re: arbitrary. I once attended a Stephen Toulmin lecture where he explored the meaning of 'arbitrary.' While the word is often taken to connote "random and uninformed", its root in fact connotes "arbitration", i.e. informed judgement.


In a normal distribution, about 3 out of 1000 values will land more than 3 sigma from the mean just by random chance.

How many measurements do they make at CERN? They probably see random 3 sigma measurements fairly often.
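
For what it's worth, the "about 3 out of 1000" figure is easy to check with the Python standard library (two-sided tails of a normal distribution; the 5 sigma number is shown for contrast):

```python
from math import erfc, sqrt

# Two-sided Gaussian tail: probability of landing more than k standard
# deviations from the mean purely by chance.
def tail(k: float) -> float:
    return erfc(k / sqrt(2))

print(f"beyond 3 sigma: {tail(3):.1e}")  # ~2.7e-3, roughly 3 in 1000
print(f"beyond 5 sigma: {tail(5):.1e}")  # ~5.7e-7, roughly 1 in 1.7 million
```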


That's the reason 5 sigma is expected: so that random anomalies are extremely rare. The more experiments one runs, the closer one gets to the true mean value, and this happens exponentially fast. By that I mean that, with some probability theory and some algebra, you can get a confidence bound at whatever accuracy you demand based on the number of experiments you run.


I think this happens sublinearly w.r.t. sample size.


I would like to point you towards Hoeffding's inequality. The probability of deviating by more than epsilon from the mean after n experiments decays exponentially in n.


I think you're talking about different things.

For a fixed range, the probability of being outside that range decays like 1/k^n.

But for a fixed probability, the width of the range with that probability decays like 1/sqrt(n).
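
A sketch of how the two statements fit together, using Hoeffding's inequality for the mean of n i.i.d. samples bounded in [a, b] (the bound itself is standard; the second line is just the first solved for epsilon at a fixed failure probability delta):

```latex
% Fixed range: for a fixed deviation \varepsilon, the tail probability
% decays exponentially in n.
P\bigl(\lvert \bar{X}_n - \mu \rvert \ge \varepsilon\bigr)
  \le 2\exp\!\left(-\frac{2 n \varepsilon^2}{(b-a)^2}\right)

% Fixed probability: setting the right-hand side equal to \delta and
% solving for \varepsilon, the interval width shrinks like 1/\sqrt{n}.
\varepsilon = (b-a)\sqrt{\frac{\ln(2/\delta)}{2n}}
```

So both statements are right: exponential decay in n for a fixed range, 1/sqrt(n) shrinkage for a fixed confidence level.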


If I understand correctly, "certainly new physics" is not correct. It makes the result very substantial, but at that point you still have systematic things to consider, like a misapplication of a mathematical concept that no one has caught, or instrument error.


That's correct. I should have said "new physics with high confidence". Physicists tend to be very, very conservative when it comes to discoveries, as there have been some big blunders during the last century.


5 sigma is the customary limit for claiming “discovery” of a new phenomenon, whereas 3 sigma is the usual threshold for evidence. Meaning that in a search for a new phenomenon, the fluctuation away from the expected background must stay within about 3 sigma for the null hypothesis to be considered valid; if that can’t be claimed, the result may instead be “evidence of” new physics. You can basically decode a particle physics article by looking for the keywords “search for”, “evidence for” and “discovery of”, respectively.


5 sigma is a rule of thumb and it still needs to be qualified. IIRC, while the Higgs discovery was officially announced when the signal reached 5 sigma, most scientists in the field were already comfortable talking about a discovery well before that threshold, as there were very high expectations (a prior, I guess?) that the Higgs had to be there.

On the other hand, almost nobody believed the FTL neutrino result was real, even though the signal was stronger than 5 sigma. Extraordinary claims still require extraordinary evidence.

IANAP and everything.


The world isn't binary where 5 is absolute truth and 4.99 is irrelevant.

3.1 is better than 1, so people think about it more than they think about 1, but less than they think about 5. This is how science _actually_ works.


It's not uncommon for things to get published below the five sigma bar; otherwise you would get one paper per decade. However, 3 sigma is not enough; things quickly sink back into the noise in this field. Unfortunately, I left theoretical physics for these reasons: no new data :(


In my opinion, it's like when an expedition to an 8,000 m peak is a few meters from the summit and it makes the news. They are not there yet, but it can be interesting to follow the last steps, even though it is perfectly possible that they don't make it in the end.

And from a more academic point of view, although this is not a new discovery, it can be an interesting line of research. If they have used all the available data and could only get to 3 sigma, I think it makes sense to publish what you have in order to justify spending your time trying to either push that to a five or discard the hypothesis.


5 sigma is the standard for calling something an observation. It is not a barrier to publication! That would introduce horrible biases and encourage p-hacking.


From the article:

> The deviation presented today is consistent with a pattern of anomalies measured in similar processes by LHCb and other experiments worldwide over the past decade.

So while you're correct that 3 sigma is quite low significance by particle physics standards, it is consistent with previous measurements which yielded similar anomalies, presumably at lower significance.


When experiments are extremely costly, 3 sigma goes a long way in getting published.


Enough to perk up some ears, not enough to get excited about.


They announced the Higgs (discovery of the century) at 3 sigma as well.



I'm pretty sure the Higgs had 2 independent experiments at >3 sigma, which at the time added up to more than 5 globally.


The experiments each reported 4.9σ and 5.0σ observations.
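
For anyone curious about how two independent significances can "add up": a crude back-of-the-envelope combination is Stouffer's method. This is not how ATLAS and CMS actually combined their Higgs results (they do full likelihood combinations), but it shows the rough scaling for a hypothetical pair of ~3.6 sigma signals:

```python
from math import sqrt

# Stouffer's method: combine the z-scores (significances) of two
# independent measurements of the same effect. A deliberately simple
# approximation, not the full likelihood combination the experiments use.
def stouffer(z1: float, z2: float) -> float:
    return (z1 + z2) / sqrt(2)

print(f"{stouffer(3.6, 3.6):.1f} sigma combined")  # ~5.1 sigma
```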


Previously on HN, with 166 comments: https://news.ycombinator.com/item?id=26552375


I struggle to understand the love armchair physicists on HN have for dismissing anything that comes from particle physics.


HN loves their contrarian takes. I guess it's fine when rejecting one techy opinion for another, but it comes off as quite ignorant (and arrogant) when someone who couldn't explain what a Yang-Mills theory is pretends to understand particle physics better than CERN (a collaboration of thousands of the brightest minds in the field).


How can we monetize the Higgs boson?


Perhaps time for CERN to get on the NFT hypetrain...



[flagged]


Yes, that's... called science.


It's actually the opposite of the scientific method.


The scientific method is an iterative process of coming up with hypotheses about the world and testing them. I see articles about new hypotheses in physics every few months on news feeds, including attempts to obsolete the Standard Model and/or General Relativity. So far, none of them have been able to (i) explain more phenomena than existing theories, (ii) predict new ones, and (iii) actually be testable.

Such breakthroughs are really rare. Until then, most physicists are stuck with twisting and twiddling with existing theories, hoping that the existing holes can't be adjusted anymore to let their newly discovered peg pass.


The whole point of science is that you continually revise and update your theory based on new facts you acquire. That was the big revolution in thinking that the Scientific Revolution brought, as opposed to the worshipping of "ancient, immutable knowledge" championed by religions and by the study of the classics.

I'm curious what your beef is here.


No, it's not. The scientific method is you adjust the theory to match the facts. How else would progress happen?


I take it by "as usual" you mean "for the first time since the 1970s"? We have precisely one reason so far to adjust the Standard Model (neutrino oscillation) and it's not a resolved matter of how to do that.


IANAP, but I understand those decay asymmetries are very hard to fit into the standard model. They will either lead to more particles (that can be detected) or a much more complex model.


And when did that last happen? The SM has not been modified since CP violation was incorporated, and it has predicted many things, including the Higgs boson, the holy grail of particle physics.


When LHC failed to find superpartners, as one really easy example.


Supersymmetry is not part of the standard model though, right? It was one of the candidate extensions.


What’s wrong with adjusting a model designed to describe the real world when new data from the real world comes in?


If your model has a lot of wiggle room it can be consistent with anything. Maybe the solar system really does move in epicycles https://www.youtube.com/watch?v=QVuU2YCwHjw&ab_channel=Santi...


The standard model actually does not have a whole lot of wiggle room. Claiming that it does suggests to me that the person asserting it does not have the foggiest idea what they are talking about.


Some people consider the standard model to be a tape job. It can be modified so that its predictions fit the data, but will it provide any insight? That's not necessarily the case. The ability to predict does not equal understanding.


IANAP, but IIRC the standard model was born from the electroweak unification effort, and in addition to giving some order to the particle zoo, it predicted the Higgs boson. I think that's a pretty important win.


I think it’ll be rewritten in the next five to ten years. There are a lot of new ideas emerging now that some of these expectations have been fully destroyed, and on top of that, since verifying the Higgs, we’re starting to see that bosons are much stranger than quarks :)



