marta_morena_9's comments

To speed things up: http://constructortheory.org/what-is-constructor-theory/

This is what the video talks about. I don't like this video much. It's tedious and lengthy.

Probably no reasonable human being would assume that Quantum Mechanics as we know it is the "holy grail" of physics. We came up with it within the last 100 years. Can we explain everything? Of course not.

To me, the fact that we have the uncertainty principle and Schrodinger's cat already proves that we basically built our understanding of physics on the premise that we can never understand it. We were basically throwing in the towel and saying "yeah, this is it". Advances are made through measurement and verification. How do you understand something that you can't measure, because the measurement changes the system?

It was pretty obvious to me when I studied physics that this is 100% not how the world works. It's a model, like everything else. In the model of quantum mechanics, which undoubtedly is one of the most impressive human achievements nonetheless, we have essentially complete uncertainty on the quantum scale. We cannot peek beyond it and it doesn't give us any insights into how to peek beyond it. It is, in a way, the end of the line.

And like any end of the line in science, it needs a radically different thought or model to overcome. I always found it amusing how physicists, who really should know better, actually believe that quantum mechanics is how the world works and that this is it. I was always with Einstein in rejecting Quantum Theory. It's a nice model. But that's really all it is. A model. This is not how the world works, and we have come to see its limitations. We need something better if we ever want to leave our solar system and colonize space. Even Einstein fell into the trap of thinking that Quantum Mechanics is more than a model. "God does not play dice". Probably he does not, but it's the best approximation we could come up with. He should have refuted it and looked for something better instead. Who knows where we would be now.


Quantum mechanics is a deterministic theory. It does not have "complete uncertainty at the quantum scale". The probabilistic aspect only comes into play when you cast a quantum state down to a classical observable quantity like position or momentum.

You may have missed the subtle but important aspect of the uncertainty principle. It is not the case that there is a true underlying value of position and momentum for which the error bars are limited by a quirk of the theory. It is more like you expect to be able to observe 2 bytes of position information and 2 bytes of momentum information, but the underlying quantum state is defined by 2 bytes in total. The full 4 bytes cannot fit inside 2 bytes. You can have a state which encodes the 2 bytes of position information but no momentum information, a state which encodes 1 byte of each, a state which encodes 2 bytes of momentum information but no position information, and everything in between. The uncertainty principle is actually a non-existence principle. When the position information exists, the momentum information isn't uncertain, it just outright doesn't exist.
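If it helps to see that trade-off numerically instead of taking it on faith, here is a minimal numpy sketch (my own toy, with hbar = 1, nothing to do with the byte analogy itself): squeezing the position spread of a Gaussian wave packet necessarily inflates its momentum spread, because both distributions are Fourier views of one and the same state.

    import numpy as np

    x = np.linspace(-50, 50, 4096)
    dx = x[1] - x[0]

    for sigma in (0.5, 1.0, 2.0):          # the position spread we "choose"
        psi = np.exp(-x**2 / (4 * sigma**2))
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize

        # momentum-space amplitudes are the Fourier transform of psi
        phi = np.fft.fftshift(np.fft.fft(psi))
        p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
        dp = p[1] - p[0]
        prob_p = np.abs(phi)**2
        prob_p /= np.sum(prob_p) * dp                          # normalize

        sx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
        sp = np.sqrt(np.sum(p**2 * prob_p) * dp)
        print(f"sigma_x = {sx:.3f}, sigma_p = {sp:.3f}, product = {sx*sp:.3f}")

The product sits at ~0.5 (= hbar/2, which Gaussians saturate) for every choice of sigma; there is no state in the loop where both spreads are small.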


That is one interpretation, but the reality is that we do not know. In fact, what you are describing is the position known as anti-realism - that quantities that can't be measured are not real, a major feature of the Copenhagen interpretation.

But in other theories (those termed "realist"), such as Pilot Wave or Superdeterminism, the uncertainty principle is exactly an artifact of measurement. A particle has an exact position, momentum, energy, spin etc, but they can't be reliably measured all at once.

In the end, these are philosophical positions, outside of our current mathematical and physical tools. Perhaps in time we will be able to settle these questions, but for now there are no definitive answers.


> In the end, these are philosophical positions

Not really. Realist theories are actually just rhetorical sleight-of-hand. Bohmian positions, for example, are supposedly "real" but they cannot actually be measured, and this inability to measure them is not a technological limit, it is an inherent part of the interpretation. Bohmians essentially take the randomness, hide it in the infinite digits of a real (no pun intended) number, call that number "the actual position of a particle" and claim that, because it's "the actual position of a particle" that it is somehow "real". Well, it's not real. You can't measure it. You've just hidden the randomness behind some clever rhetoric.

> the reality is that we do not know

Yes, we do. The Bell inequalities and concomitant experiments show that quantum randomness is fundamental. You can hide it or sweep it under the rug but you can't get rid of it because it's actually part of our reality.


There are many classical systems that have properties we consider real that are nevertheless not practically measurable. For example, grains of sand have definite position and momentum, yet you can't in reality measure them for each grain of sand when you want to describe the flow of sand in an hourglass. So the practical ability to measure something is not required for us to consider something "real".

Now, Bell's inequalities show that any theory of local hidden variables independent from the measurement apparatus is inconsistent with QM and with experimental observations. However, that still leaves open the possibility of non-local hidden variables, and also the possibility that the experiments were flawed - that the measurement decision was somehow correlated with the result.

The Copenhagen interpretation is BOTH non-real and non-local, so I don't find a real non-local theory a priori problematic. Whether pilot wave theory can be made consistent with at least special relativity is still open as far as I know, but perhaps there is some equivalent of the no-communication theorem that will be found.

Superdeterminism is also seeing a resurgence - I can't claim to understand how it could be made into a real scientific theory, but perhaps there is something that is escaping me and we will see a successful theory come out of it.

Now, the matter of whether something can be considered "real" but fundamentally un-measurable (as opposed to just being practically un-measurable) is in the end a problem of definitions.

I would also note that there is no fundamental quantum randomness - quantum phenomena behave according to a linear equation, so they are fundamentally deterministic. It is only measurement (the interaction between quantum systems and classical systems) that introduces randomness - the superposition of all the solutions to the Schrodinger equation is replaced with a single position and momentum with some probability, and with error bars that are subject to the uncertainty principle.
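A toy sketch of that split, if it helps (my own illustration, not anyone's production code): the Schrodinger part is a perfectly repeatable linear map, and randomness enters only when an outcome is sampled via the Born rule.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy Hamiltonian H = Pauli X

    def evolve(psi, t):
        # deterministic: since X @ X = I, exp(-i X t) = cos(t) I - i sin(t) X
        U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X
        return U @ psi       # same psi and t always give the same output

    def measure(psi):
        # probabilistic: Born rule for the odds, then collapse the state
        probs = np.abs(psi)**2
        outcome = rng.choice(2, p=probs)
        collapsed = np.zeros(2, dtype=complex)
        collapsed[outcome] = 1.0
        return outcome, collapsed

    psi = evolve(np.array([1.0 + 0j, 0.0]), t=0.3)
    print(np.abs(psi)**2)    # identical on every run: deterministic
    print(measure(psi)[0])   # 0 or 1, varying across runs: Born rule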


> Whether pilot wave theory can be made consistent with at least special relativity is still open as far as I know

It can be made consistent, namely, adding a foliation (a notion of a canonical present moment) makes it work out fine. The foliation is not detectable, it gives the right predictions, it is mathematically sound, etc.

But it does feel philosophically sketchy and not fundamentally relativistic. However, there is a paper https://arxiv.org/abs/1307.1714 which explores how to derive foliations from the wave function itself and thus would be as relativistic as any theory involving the wave function.

The article also points to other articles with a variety of ideas for how to deal with nonlocality in Bohmian mechanics.


> grains of sand have definite position and momentum

No, they don't. The uncertainty principle applies just as much to grains of sand as it does to electrons. The only reason they appear to have definite position and momentum is that their masses are large, so even a very small velocity produces a very large momentum, which gives them a very small wavelength, much smaller than their physical extent. But the uncertainty principle still applies. With the right technology you could do a two-slit experiment on grains of sand. See: https://www.wired.com/story/even-huge-molecules-follow-the-q...
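To put rough numbers on "much smaller than their physical extent" (my own order-of-magnitude figures): the de Broglie wavelength is lambda = h / (m * v).

    h = 6.626e-34                          # Planck constant, J*s

    m_grain, v_grain = 1e-6, 1e-3          # ~1 mg grain drifting at 1 mm/s
    m_electron, v_electron = 9.11e-31, 1e6 # electron at ~10^6 m/s

    print(h / (m_grain * v_grain))         # ~6.6e-25 m, ~20 orders of
                                           # magnitude below the grain's size
    print(h / (m_electron * v_electron))   # ~7.3e-10 m, atomic scale

Same principle, wildly different scales, which is why the grain looks classical and the electron doesn't.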


To be fair, the parent was talking about "classical systems," not about the applicability of quantum-mechanical principles to macroscopic objects.


But there is in reality no such thing as a classical system. The world we live in is quantum. Its behavior approaches that of a classical system as the number of entangled degrees of freedom grows, to the point where classical physics becomes a damn good approximation. But reality never actually becomes classical. The wave function does not actually collapse. And grains of sand do not have actual positions and momenta.


Whether this is actually true or not remains an open question, but it is indeed the expected consequence of QM. However, there is as yet no experiment showing any kind of quantum behavior for huge objects (closest is a few hundred million atoms as far as I know).

Still, the point was about a classical model, not about physical reality being classical or quantum. To rephrase, my point was that even in a classical model there are quantities that can't practically be measured (even though classical mechanics as a model assumes all objects have definite positions and momenta), and that doesn't stop us from accepting in such a model that they are 'real'. So, the idea of a property that is 'real' but not really measurable already exists in classical physics, it is not an invention of exotic interpretations of QM.


The Bohmian model is that particle positions are an intrinsic part of the evolution of the theory. This is in contrast to, say, the Copenhagen interpretation, in which the particle position is not a part of the evolution of the system. In the CI, wave functions are real (I guess) and we need something external to the model, called measurements, which mathematically perform some complicated operation on the wave function, putting it in an approximate eigenstate (it can't do this exactly, the approximate nature is not specified, and the timing, though generally specified, is not really clear for time-of-arrival measurements in scattering experiments).

In the Bohmian model world, particle positions would be real. They may not be measurable to the inhabitants to infinite precision (assuming quantum equilibrium, i.e., the particles are distributed according to psi-squared), but the particles have definite positions in this world. It has in it no need to postulate anything about measurement. Observables and all that are deduced from the theory.

It is important to ask what in a model has a fundamental existence, what has an implied existence, and what is just a useful term for something secondary to all that. In Bohmian mechanics, particles (things with position) have a fundamental existence, the wave function has an implied existence because that is how the particles move about, and spin has a secondary existence since it does not have a separate existence from either of those two objects, being solely deduced from the motion of the particles.

The Bohmian model is well-defined and has an easy correspondence between the entities that are real and our experience (stuff has position). It does not require observers in order to be complete. A Bohmian world could happily exist without PhDs, humans, mammals, life forms, etc.

What's real in CI? Not really sure, but I guess the wave function and measuring devices? It does require something that corresponds to measurements and the evolution of that classical world is distinct and separate.

This is the context of calling BM a realist theory. I would just call it an actual theory.

Bohmian mechanics illuminates the quantum randomness, allowing it to be analyzed. The theory happens to be deterministic (ignoring creation/annihilation), but it derives the apparent randomness from all that and it clarifies the meaning of psi-square.


> What's real in CI? Not really sure, but I guess the wave function and measuring devices? It does require something that corresponds to measurements and the evolution of that classical world is distinct and separate.

The wave function and any of its behaviors are not considered Real in CI, quite the opposite. CI posits that the wave function / Schrodinger equation is just a mathematical tool that can be used to model the behavior of quantum phenomena - it is merely a tool that we can use, together with the Born rule, to predict the outcome of measurements. The measurements are the only things that are real. That's why CI is also sometimes summarized as "shut up and calculate" :).


>It is more like you expect to be able to observe 2 bytes of position information and 2 bytes of momentum information but the underlying quantum state is defined by 2 bytes in total. The full 4 bytes cannot fit inside 2 bytes.

So the programmer who wrote the simulation program took a shortcut. "No one will ever look at reality at this granular of a level, we should be good." Figures.

Seriously though, great explanation.


I apologise for replying with what is essentially a rant, but as far as we can tell simulating quantum mechanics is exponentially (in the precise, mathematical sense) harder than simulating classical mechanics.

I find it very "triggering" when people use this metaphor of uncertainty as a programming shortcut, when my life (currently working on simulating quantum systems) would be substantially (exponentially) easier if reality were classical.
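To make "exponentially harder" concrete, here is the standard counting argument (generic statevector storage, nothing specific to my own code): an n-qubit state needs 2^n complex amplitudes, while a comparable classical state needs O(n) numbers.

    # bytes needed to store a generic n-qubit state vector (complex128)
    for n in (10, 30, 50, 80):
        gib = (2**n * 16) / 2**30
        print(f"{n} qubits -> 2^{n} amplitudes -> {gib:.3e} GiB")
    # 10 qubits fits in a CPU cache; 30 fills a laptop's RAM (~16 GiB);
    # 50 needs ~16 million GiB; 80 is beyond any conceivable hardware.
    # A classical 80-particle state, by contrast, is a few hundred bytes.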


Interested layman, but this always kind of rubs me the wrong way.

I consider it a universal warning sign when your physical theory postulates that reality works a certain way underneath, and then conspires to keep you from exploiting this way. Superluminal collapse, for instance, or closed timelike curves; it seems to my intuition that these are clearly flaws in the theory rather than properties of reality.

In a sense, the sci-fi concept of the ansible, where quantum collapse effects are exploited for FTL signaling, is more plausible than the view where quantum systems collapse instantly, but cannot be exploited for FTL. If your theory requires FTL in the "backend" to operate, it shouldn't end up with FTL just happening to not be exploitable in the "frontend".

So I expect that when we find the true ToE, it's not going to end up with traits where the data structure in principle allows a certain capability, but coincidentally doesn't give us access to it; all limitations of that theory would follow directly from its structure, so that illegal operations cannot even be expressed. So when we find a practical limitation to the exploitation of a physical theory, then this seems to me to require, barring very good theoretical reasons to the contrary, a genuine underlying property of the universe that justifies this limit.

As such, it seems to me, purely from an intuitive view, that of those three claims:

- reality is quantum

- quantum mechanics requires exponential cost to simulate

- quantum mechanics cannot be exploited for exponential computation

One has to be false! If reality requires exponential cost to simulate, we shouldn't expect it to then turn around and hide that capability from us. God famously doesn't play dice with the universe; but I doubt he's playing Poker either. Personally, I suspect quantum mechanics is cheaper than is widely believed.

Idk, am I missing something obvious?


> - quantum mechanics cannot be exploited for exponential computation

As far as we know for now, this one (or a variant of it [0]) is the wrong claim: we do have algorithms (Shor's, Grover's) that are faster on quantum computers than any known classical algorithm.

Of course, we do not yet have any proof that a faster classical algorithm can't exist, so the possibility remains open that this claim remains true. BUT, if this claim turns out to be true, then it is extremely likely that either one or both of the other 2 claims will immediately turn out to be wrong.

[0] I would note that the "exponential" part is somewhat of a red herring. If there is any kind of asymptotic speed-up that QCs fundamentally have over classical computers, even if it's just quadratic, then the larger point will remain valid.

> In a sense, the sci-fi concept of the ansible, where quantum collapse effects are exploited for FTL signaling, is more plausible than the view where quantum systems collapse instantly, but cannot be exploited for FTL.

This is also what rubs me the wrong way about entanglement, and seems a very tantalizing thread to pull at if one wanted a more fundamental theory. The other thread of course is the measurement problem.


Your two threads seem to be essentially the same thread. The "weird stuff" in entanglement comes pretty directly from collapse as part of measurement. If you drop this and go to something like many worlds then things look a bit nicer, but then you have to solve the measurement problem in many worlds, which seems quite hard.


But collapse is fake news, as it were. A state can be called a “superposition” only relative to a particular set of basis states, which, in turn, depends on what observable we are talking about, and the “collapse” is simply yet another way a state can change.


No, collapse or some alternative to it is a fundamental component of quantum mechanics. Without it, quantum mechanics makes wildly wrong predictions.

The Schrodinger equation almost always predicts that a particle has some particular amplitude at many different locations. However, if you try to detect a particle, you will never find it at more than one location. Furthermore, once the particle is detected at one particular location, you need to update the Schrodinger equation to give it amplitude 1 at that location and 0 everywhere else - if you don't perform this nonlinear update and instead use the old wave-function, the predictions for further experiments will be completely off.

Now, the physical interpretation of this collapse varies wildly between different interpretations of QM, but some variant of it is always required - otherwise, the observations simply do not match the math. We don't yet have any theory consistent with QM that can make predictions without applying the Born rule.


I believe in Many-Worlds, the update step falls out of decoherence "for free." Because you are also in superposition after the measurement, the "you have measured 1" part of your wavefunction of course measures amplitude 1. That doesn't give you the Born rule though - I believe there's some attempts to get it out of game theory [1], where they show that given sufficiently long timespans, "almost every" worldline has a history with quantum measurements whose distribution matches the Born probabilities.

[1] https://arxiv.org/abs/0906.2718


I think almost everyone in the field would reject your third claim. There is a great deal of work being done on applying the speedup afforded by quantum mechanics to classically hard problems.

The current consensus (at least among people I speak to) is that a quantum computer will probably be able to solve some problems which are in NP (and not in P), but will probably not be able to solve an NP-hard problem.

Your intuitive view is appealing, but I don't think one should get too hung up on intuition. Our intuition was honed for millions of years to avoid large wild animals and avoid falling out of trees in prehistoric Africa. It generally sucks pretty badly at making guesses about the fundamental nature of the universe (or at least mine does).

Edit: by the way, a faster-than-light communication "ansible" is functionally equivalent to a machine that sends messages back in time. I disagree with your view that such a machine is more plausible than the view that some ftl stuff happens on the "backend" but is hidden from the "frontend". Having said that, my view is that there is no ftl stuff on the back or front end; this is consistent with something like the many worlds interpretation of quantum mechanics, which is entirely local and causal (no ftl stuff).


I'm aware and agree that quantum computer speed is very probably greater than classical and may allow some NP. My point is more that I'd expect the performance cost of QM to equal the complexity class of QM. I don't know, is that the case in current research? Whenever people talk about how expensive it is to compute QM, that always sounds a lot higher than the actual performance they get back out. Or maybe I just have a bad imagination for how high NP really is.

Disagreeing again with the "Ansible is unrealistic" view: I agree it's unrealistic in a global sense, I just think all of its unrealism comes from the use of FTL in the backend. FTL + relativity produces time travel, yes. I just think it's implausible to expect a physical universe to first provision the mechanics for FTL, then provision a system that gives you time travel when exposed to FTL, and then very carefully separate those two systems so that they computationally never touch, even though in theory they could!

And so, in a sense, this physical theory implies that this universe has to be prepared, in principle, to allow time travel and account for it, and having gained this capability has elected to specifically not use it. It just stinks of a massive Occam violation - the potential for time travel is an "entity without necessity", and the theory is bloated by it twice over; once by its inclusion in the physical logic and once again by the additional exclusion mechanism that makes it unusable in every reachable state. And so the ansible universe pays slightly less cost just by leaving out the global censor.

Am I saying "no correct theory can look like this"? No! I am saying it enters the race with a huge handicap.


> My point is more that I'd expect the performance cost of QM to equal the complexity class of QM.

My understanding is that that is a yes - with a quantum computer, quantum simulations [are expected to] run in linear time instead of the current exponential time.


I think this is not correct. A fundamental problem in quantum simulation of quantum systems is the Hamiltonian problem, where you give me a Hamiltonian H, which is the sum of polynomially many little local Hamiltonians H_i each of which acts on at most k subsystems of your big total system.

Then you give me two numbers a and b (with some technical constraints) and ask me whether the ground state energy of H is between a and b.

This problem is QMA-complete as long as k >= 2, and known to be QMA-hard even for some pretty nice looking Hamiltonians. Here QMA stands for Quantum Merlin-Arthur which is the complexity class equivalent to MA (Merlin-Arthur) for classical computers. You can think of MA as being related to BPP in the same way that NP is related to P, and this is the same way that QMA is related to BQP.

Basically QMA problems are not expected to be solved efficiently by a quantum computer.
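To make that concrete, here is a toy instance (my own sketch, with random couplings): H is a sum of 2-local terms on a line of qubits, and the ground-state energy is found by brute-force diagonalization - which is exactly the approach that stops scaling, since H is a 2^n x 2^n matrix.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def two_local(op_a, op_b, i, j, n):
        # tensor op_a onto qubit i and op_b onto qubit j, identity elsewhere
        ops = [I2] * n
        ops[i], ops[j] = op_a, op_b
        out = ops[0]
        for o in ops[1:]:
            out = np.kron(out, o)
        return out

    n = 6   # a 64 x 64 matrix: trivial here, hopeless at n = 60
    H = sum(rng.normal() * two_local(X, X, i, i + 1, n) +
            rng.normal() * two_local(Z, Z, i, i + 1, n)
            for i in range(n - 1))

    print(np.linalg.eigvalsh(H)[0])   # ground-state energy, compare to a, b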


I am way out of my depth here, thank you for weighing in!


I think we may be talking about different simulations. I'm talking about THE simulation.


I think the GP understood this, but their point was that it is a false assumption that introducing the uncertainty principle in a classical simulation would simplify the computation.

That is, if we think that the universe is a simulation running on a classical computer, then quantum effects such as the uncertainty principle CAN'T be a "shortcut" to make the simulation easier - as in fact they make the simulation literally exponentially harder. If we assume that the universe is a simulation running on a quantum computer, then quantum effects are fundamental anyway.

Of course, everyone understands that this is a simple joke. Still, it's interesting that it goes completely against our intuitions as computer simulation designers.


https://m.youtube.com/watch?v=RlXdsyctD50&feature=emb_rel_pa...

PBS Spacetime has a great video on pilot wave.

Getting rid of all the messy probability issues is amazing.


It's also worth skimming the wikipedia page on quantum decoherence [1].

Quantum mechanics forms a consistent theory without any concept of randomness or wave function collapse, and decoherence explains how this evolves into a classical observation when things get "big". There is nothing fundamentally random about this evolution. We still use random numbers (wave function collapse) to approximate observations, because in many cases quantum calculations for macroscopic objects would be intractable, and because they give the same answer to any precision we can hope to measure.

Every experiment that demonstrates quantum entanglement is essentially just showing that quantum mechanics still applies on systems that were previously thought to behave classically. And none of these experiments have demonstrated a mechanism for wave function collapse beyond what we already understand from decoherence.

[1]: https://en.wikipedia.org/wiki/Quantum_decoherence
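For intuition, here is a cartoon of what decoherence does to a state, with the damping put in by hand (my own toy, not derived from an actual environment model): the off-diagonal coherences of the density matrix decay away, leaving what looks like a classical statistical mixture.

    import numpy as np

    rho = 0.5 * np.array([[1, 1],
                          [1, 1]], dtype=complex)   # pure |+> state

    def decohere(rho, t, T2=1.0):
        out = rho.copy()
        damp = np.exp(-t / T2)    # environment suppresses coherences
        out[0, 1] *= damp
        out[1, 0] *= damp
        return out                # diagonal populations are untouched

    for t in (0.0, 1.0, 5.0):
        print(np.round(decohere(rho, t), 3))

The late-time state is a 50/50 mixture; note that nothing in this picture picks WHICH outcome occurs, which is the objection raised in the reply below.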


No, that is a misunderstanding of what decoherence does or does not buy you.

Decoherence does explain why macroscopic systems can't (normally) exhibit self-interference and some other quantum effects.

However, decoherence doesn't in any way solve the other aspect of the measurement problem: why the object is only ever found entirely at one location instead of appearing with different amplitudes at multiple locations, as the Schrodinger equation predicts. That is the source of the "fundamental randomness" and it is not explained by decoherence.

For a slightly more mathematical (but still very understandable) explanation of what decoherence solves and what it doesn't solve, I recommend this article [0] by Sabine Hossenfelder.

[0] https://backreaction.blogspot.com/2020/08/understanding-quan...


Exactly, you end up with objects appearing at multiple locations that don't interfere with each other. This is completely consistent with observation, it's just weird: it means all the outcomes actually happened and the observer just can't see it.

So decoherence either fails to solve the measurement problem or solves it by saying the universe is weird.


It is certainly not consistent with observations. Imagine you set up two detectors at two locations. Imagine further that the Schrodinger equation shows that the particle has the same non-zero amplitude at both locations.

Nevertheless, you will NEVER see both detectors detect the same particle. You know with 100% certainty that if one detector "saw" the particle, the other one didn't (this is true regardless of how far apart the detectors are).

Furthermore, imagine an experiment like this:

1. A particle P1 is fired, and it has some amplitude at locations Det1 and Det2.

2. At location Det1, a second particle P2 is made to collide with P1; the collision will throw P2 towards location Det3.

Now, the correct probability of finding P2 at location Det3 depends on whether Det1/Det2 are activated. If Det2 detects P1, the probability for P2 to reach Det3 is 0, but this is inconsistent with the Schrodinger equation for P1 and P2. To make the results consistent with observations, you must update the wave function for P1 and P2 after Det2 has detected P1.

Now, the MWI tries to circumvent this by postulating that both results happen at the same time, just in different worlds; and that Det3 becomes entangled with Det1, so their results are perfectly correlated. But that runs into other problems, both philosophical (how could the other worlds be "real" if they are fundamentally undetectable) and formal (e.g. it is not yet agreed that a consistent, non-circular definition for probability in the face of many worlds can be given that is consistent with the observed probabilities computed according to the Born rule).

Edit: I should also mention that without the Born rule, if Det1 and Det2 were both going to launch a satellite if they detected P1, then both satellites would have some amplitude in orbit; and you would start expecting to see the gravity of both satellites, which breaks with experimental observations even more sharply.


> The probabilistic aspect only comes into play when you cast a quantum state down to a classical observable quantity like position or momentum.

That's also part of quantum mechanics. At least if you are trying to do physics.

https://www.math.columbia.edu/~woit/wordpress/?p=10533&cpage...


Wow, this is "the" best explanation I have ever heard as an outsider to physics. Thank you. It actually made me feel comfortable with the universe and the effort of explaining it. I think anyone who speaks about information must use bytes!


I'm not convinced it's right. Einstein was wrong on some aspects of it too, e.g. local hidden variables.


People have already replied to a lot of what you wrote here, so I just want to add that Einstein emphatically did not reject quantum theory. Indeed he is probably one of the people who could fairly be called the creator of quantum theory.

What Einstein was rejecting with that quote is what we call the "Copenhagen Interpretation" of quantum theory. In my opinion he was quite right to do so; I do not know anyone in the quantum foundations community that takes Copenhagen very seriously any more. Einstein's qualms about Copenhagen led the way to us understanding its flaws, and to people like John Bell putting us on a path to fixing some of those flaws.

Einstein spent a lot of time looking for "something better" than Copenhagen; sadly he did not find it, but people who came after him did (Everett, Bohm, the quantum Bayesianists etc).

I'm going to slightly repeat what others have said here and talk about your issue with the uncertainty principle. Think about a wave travelling on the surface of an otherwise smooth swimming pool. The wave is an extended object, it is spread out. When we model the wave the fact that we don't assign it a perfectly precise "position" is a feature, not a bug of our model. Similarly I would argue that the uncertainty principle is not some barrier "blocking" us from seeing the true values of observables as we would like. It is a statement telling us that our mental model of things having a precise position and momentum at the same time is wrong, as wrong as trying to locate the wave at a precise position on the pool would be.


You're treading down a well-worn path here; Einstein went down this road too, as you note, and got stuck there for the rest of his life.

Everybody in the field would WELCOME something we can measure but can't explain with QM. But there isn't anything, and not for lack of trying. Cosmology aside (difficult to measure), all the measurements of femtoscale interactions and condensed matter "constants" match their corresponding QM/QFT calculations to like 13-15 decimal places... there are just incredibly massive chunks of evidence in favor of QM. It's just not a "first approximation" of reality that can be thrown out easily.

Theorists and experimentalists in the field come up with new tests every year (they are my favorite papers to read!), so it's absolutely not like there is some inherent fear of daring to contest it. It's just that you need experiments to contest well-established and working theories; you can't just go out and say "well, but I don't like it".

By the way, as I'm sure you know if you studied this kind of physics, the "it doesn't give us any insights into how to peek beyond it" isn't really true; there are falsifiable assumptions that are tested and peeked at all the time - I'm thinking of the hidden variable theorems and their experimental validations. The theorems don't assume any specific theory (QM or a new one would do). That road is a nice one to casually drive down.


This idea seems similar to Wolfram's somewhat new hypergraph + rule idea: https://writings.stephenwolfram.com/2020/04/finally-we-may-h...


This is a fascinating read. Thanks for sharing.


>> I always found it amusing how physicists, who really should know better, actually believe that quantum mechanics is how the world works and that this is it.

Is this a reasonable stance to adopt? "All experts in a field disagree with me - how amusing". Amusing or not, it suggests that perhaps you are missing something that those experts are aware of.

>> We need something better if we ever want to leave our solar system and colonize space.

The way I understand it - I am, myself, no expert in the matter - we could be colonising space right now; the challenges are primarily engineering and economic (it would cost too much to develop the necessary technology), but we are not really lacking fundamental knowledge of how to do it. E.g. generation ships could work for that purpose, in principle, but nobody is mad enough to build one, let alone ride it if one was built.


You're familiar with Bell's Theorem, yes? QM puts significant constraints on any physical theory that might underlie it.


The uncertainty is not exclusive to QM and is a property of any ‘wavy’ system, it seems. It is inherent. Here is the example: https://m.youtube.com/watch?v=MBnnXbOM5S4 — if you want to defeat QM, attack its wave-part.

It would be nice to have much better (deterministic, measurable) physics, but some properties just emerge from a fundamental level and you can’t do much about it (monster group etc). Nature and maths don’t care about our confusion and convenience.

>should have refuted it and looked for something better instead

Afaik, the time when theories were driven by ideas is over. Today it’s petabytes of data that you can’t really argue with over a mailbox, only discuss. (I’m just a layman physicist with deep interest, but shallow understanding, so please don’t quote me on any of this)


Honestly, seeing the high-level requirements for what constructor theory defines as Possible / Impossible, I mostly expect that it will have to conclude that many of the transformations that happen in QM are Impossible (a constructor can't be conceived that could achieve those tasks with arbitrary precision and reliability).

Related to your observations, I believe you are absolutely right in that it's likely we've sort of reached the end of the road in exploring beyond the quantum scale with the current approach. I do believe there are some dangling threads that can be pulled to help guide some other directions - such as the measurement problem and the no-communication theorem.

I also have some hope that computer theory may be able to shed some light - at the moment, we have a clear distinction between quantum algorithms and classical algorithms, but we don't know if they are fundamentally different or not. A discovery of how to efficiently compute quantum algorithms on classical computers (i.e. BQP = P, or at least BQP ⊆ NP) would likely be a major insight into QM itself. Conversely, proof that quantum computers are fundamentally different from classical computers would also hopefully show WHY/HOW they are different and help in this area. Alternatively, if it turns out that quantum computers are in fact not physically realizable, or require exponential memory or energy, that would essentially prove that physical reality does not obey some of the properties that make quantum computers (apparently) more powerful than classical computers.


> Probably no reasonable human being would assume that Quantum Mechanic as we know it is the "holy grail" of physics.

Fully agreed. This, however, rests on the assumption that there is a 'holy grail' in physics (aka a theory of everything). It's a matter of taste, but I don't like the idea that there is a theory of everything, because it doesn't seem reasonable that every last bit of our universe is explainable by a single theory. Why would that be? It seems like a conspiracy, if this were the case. To me, it's rather quite comforting to assume that there is something (be it the position and momentum of an electron) which we won't ever be able to understand, because it just seems realistic. If this weren't the case and we could show that there is a theory of everything, I would immediately start my quest to find Morpheus to ask him to give me the right pill to wake up.


Strong assertions... I think that you'd better study the EPR paradox (E as in Einstein), Bell's inequality https://en.wikipedia.org/wiki/Bell%27s_theorem and the Alain Aspect (and other) experiments.

To summarize: reality isn't "local" even though we cannot send information faster than light...


There are (at least) two local, deterministic interpretations of quantum mechanics (many worlds and superdeterminism).

You are correct that Bell-type experiments put very strong constraints on such theories. If your theory is local it is going to be very weird.


This is not meant as a snarky comment - but I would be happy to hear how many worlds is a local theory (as I guess it isn't a fully operational theory in the sense that QM+Born rule is). Even if you assume the independent, local evolution of all "branches" of Psi, at some point you need to explain interference (as that is an observed phenomenon in our real world) and the comparison of different swaths of Psi's is not a local operation. How is this handled in that interpretation?


By local I mean local in the sense of special relativity. Stuff at a point in spacetime only affects stuff at that point and in its causal future. If you want to affect something somewhere else then you have to fire a light particle or whatever there.

Interference between different "branches" of the wavefunction is local if you're comparing the value of the two branches at the same point(s).

Edit: I should emphasise that interference is a completely local (in the sense I use it above) process in every interpretation of quantum mechanics I am aware of (even Copenhagen); generally, the thing that makes quantum mechanics non-local in various interpretations is measurement.


Yeah I guess we're thinking similarly in that case :)

It IS annoying though that the "last step" to produce measurable results seems to require a global integration, even though all the field interactions are local. The path integral QFT approach even makes this explicitly manifest, and modern QFT is the gold standard of reality-matching predictions no matter the field...

Oh BTW re your edit: I'm not sure what "local interference" would mean. If interference effects could be solved locally, there would be no entanglement, no violated Bell inequalities, no EPR experiments, etc.


I think maybe we're talking about different things? Could you be more precise about the sort of interference effects you're talking about?

I generally wouldn't call entanglement an "interference effect". When I think about interference my mind goes to the patterns emerging from double slit experiments.

Consider a quantum teleportation experiment; I start with

N (a|0> + b|1>) ( |00> + |11> )

where N fixes the normalisation

I do a measurement on qubits 0 and 1 in the basis ( |00> + |11> ), ( |00> - |11> ), ( |10> + |01> ), ( |10> - |01> ) and observe the outcome associated with ( |00> + |11> ), so now my state is

N ( |00> + |11> ) (a|0> + b|1>)

Now we collect our grant money because we've teleported the state to the other end. I don't see where the interference effect(s) were; from my perspective everything looked local and nice except for the impact of measurement, which is nonlocal in the "standard" picture and unexplained in general.

I think my perspective on interference comes from the fact that water waves and sound waves (for example) exhibit interference in pretty much the same way that wavefunctions do, but we can't break Bell inequalities with them since we don't have the same weirdness around measurement.
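If anyone wants to check that algebra numerically, here is a small script (my own sketch; qubit 0 carries the unknown state, qubits 1 and 2 are the entangled pair, and I only handle the one Bell outcome written out above):

    import numpy as np

    a, b = 0.6, 0.8j                             # any normalized pair
    psi = np.array([a, b])                       # a|0> + b|1>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
    state = np.kron(psi, bell)                   # 3 qubits, 8 amplitudes

    # project qubits 0,1 onto the Bell outcome (|00> + |11>)/sqrt(2)
    proj = np.kron(np.outer(bell, bell.conj()), np.eye(2))
    post = proj @ state
    post /= np.linalg.norm(post)

    # read off qubit 2 by contracting qubits 0,1 against the Bell vector
    qubit2 = post.reshape(4, 2).T @ bell.conj()
    print(qubit2)   # [0.6, 0.8j]: the input state, now on qubit 2

The only step where anything interesting happens is the projection, i.e. the measurement, which matches the point that measurement is the nonlocal, unexplained part.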


As these 'interpretations' provide no new prediction, I think that Occam's razor applies here...


Occam's razor is not trivial to apply when everyone claims that their own interpretation is the simplest.


Schrodinger's cat is not a physical thing. It just comes from an interpretation, which has no mathematics behind it. There are many equivalent interpretations. And they are equivalent because none of them postulates any additional math to observe.


Like... Facebook can read all your messages? The encryption only protects you from entities that are not Facebook. With that out of the way, it's not really worth it to even consider other angles of attack, since you now depend on the goodwill of a billion dollar company whose sole purpose and reason for existence is to extract and monetize your data. WhatsApp is a nice chat tool, but it's nothing you should ever use for secure communication. Sending a letter via mail is safer.


That's incorrect. WhatsApp is end-to-end encrypted using the Signal technology, as confirmed by Moxie. Are you suggesting Facebook tricked the Signal team into believing that WhatsApp is using end-to-end encryption when it isn't? A source: https://signal.org/blog/there-is-no-whatsapp-backdoor/


Honest question, then: outside of things like ethical concerns about supporting Facebook, and some missing features like enabling a passphrase to open the app, why should anyone use Signal over WhatsApp? Given that WhatsApp has a better UI/UX and that the people you want to talk to are much more likely to have it, and it supports some things that Signal is known for like disappearing messages (though limited to 7 day expiry).

I love Signal and all the work Moxie and his team have put into it and the protocol, so this isn't a diss to them, but just wondering what the disadvantages would be for someone just looking for an E2EE communication app.

One difference I suppose would be that Facebook would have all the message metadata; just not the contents.


If they can, then either Facebook boldly lied about implementing end-to-end encryption[0], or found a significant attack against the Signal protocol, which is considered quite safe.

They do have access to the metadata, i.e. whom you messaged and when.

--

[0] https://techcrunch.com/2016/04/05/whatsapp-completes-end-to-...


> Facebook can read all your messages?

WhatsApp still does E2E encryption (same as Signal); Facebook can't read your messages. (There's still the scenario of you getting a special version of the app with that functionality disabled, of course.) You can also enable notifications for changed signatures, which I do see changing as people replace their phones.
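For intuition about why the server can't read messages even though it relays them, the core primitive is a Diffie-Hellman key exchange. A bare-bones sketch using the Python "cryptography" package (generic X25519, NOT WhatsApp's actual code; the real Signal protocol layers prekeys, a double ratchet, and authentication on top of this):

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # only the PUBLIC halves ever cross the server...
    alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

    # ...yet both ends derive the same secret, which the server cannot
    shared_a = alice_priv.exchange(bob_pub)
    shared_b = bob_priv.exchange(alice_pub)
    assert shared_a == shared_b   # message keys are derived from this

The caveat above still applies: a malicious client build could leak the keys, which is why notifications for changed signatures matter.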


If there is funding, there is apparently a need for research.

The solution is never for the individuals to induce change. This doesn't work for consumers (i.e. you can't say "If everyone stops buying plastic bags, we do something for the environment", because well, nobody cares... You have to make selling plastic bags either illegal or add a significant surcharge that causes consumers to avoid them for reasons that actually affect them, like "too expensive")

And it doesn't work for anything else either. As long as there is funding, this will go on. The "solution" is to stop funding, but apparently since funding is still flowing, someone still values the research. So what is even the problem?

No progress has been made? Well, if funding is still there, whoever funds this seems to be beyond happy with the progress. This is a non-issue.


> If there is funding, there is apparently a need for research.

This is very close to "There must be a pony in there somewhere". [1] If the question is whether the field is worth continuing to fund, the answer can't be, "Well since people are still funding it, it must be worth funding." By the same logic, we should just keep putting billions into WeWork.

Research is hard to value, and fundamental research especially so. The logic of "if people buy it, it must be good" only patchily applies to normal commerce, where results are relatively easy to measure and feedback loops are short. It's wholly inadequate for feedback loops on the scale of decades.

[1] https://www.google.com/search?channel=fs&client=ubuntu&q=the...


This makes sense until you realize the people funding the grants are not the people approving the grants. It is really a lot easier to spend other people's money poorly than to spend your own poorly. Besides, if you want to know if a grant is worth funding who are you going to ask? Probably esteemed people in that field or something related, who all have the same poor incentives and institutional inertia to contend with.


And when politicians are funding long term science, they aren't funding the actual long term benefits, they are funding the APPEARANCE of producing long term benefits (and the funneling of money to those who can provide kickbacks). As long as the pretense can be maintained, anything can be funded, regardless of its actual utility.


The logic is not "if people buy it, it must be good", but more "if people buy it, there must not be something else that they can buy that gets them what they want for less".


I think that's within the working definition of "good", but even if it isn't, I think my concerns still apply. It might on average be true in certain narrow circumstances, but there are so many exceptions it conceals at least as much as it reveals.


+1 insightful; 100% agreed; well-put!


Slashdot reference?


Yep


Perhaps the issue is that the funding agency is not sophisticated enough to realize that there's no value, or more value elsewhere?


OR perhaps the issue is that the same people who are in the clique of the said field are also evaluating the research proposals. So the funding doesn't run dry?

Not that that’s a good or bad thing. You wouldn’t want a “non-expert” to evaluate the proposal. And perhaps a tapering of the funding is better indicator of interest/progress of the field?


Generally there's an outside source of funding that's not part of the field. Thus top scientists find themselves becoming "rainmakers" rather than doing research. You tell your staff to wear their white coats and glasses, lead a tour around the facility, point out how big the machines are, etc.


Well, there _is_ quite a bit less funding for anything in Physics nowadays, at least compared to the Cold War era. Some professors have indeed been forcibly pushed out of Physics for this reason, but mostly it has an effect of making things more hopeless for younger folk (due to the cronyism problem mentioned above).


This doesn't make any sense. Mirrorless cameras cost a couple thousand dollars (I have one), and the last thing I am going to wear down/abuse this equipment with is jumping into daily meetings, lol. At the low end, there are decent webcams, like the Logitech HD 1080p, but it costs over $100 and, honestly, it's a couple of years behind the current state of the art. There don't seem to be webcams of around $100 to $200 that are state of the art. If you look at mobile phones, you see the difference. Those mobile cameras are like 1/5 or 1/10th of the size a webcam could be, so there is lots of room for improvement, even in this price segment.

I am pretty sure we could easily get 4k recording with decent, artificial depth-of-field (LIDAR & all) under decent lighting conditions (which is why this can be done cheap) at a price point of $100 to $200. Just nobody seems to be doing it.

And yeah, we definitely do stream 1080p over meetings and most people are always like "Oh wow, what kind of webcam are you using". That's for my years-old Logitech... The bitrates are pretty low, even for 4k. You can definitely have a couple of those over almost any current internet connection without issues.

My main gripe is depth-of-field. Add some LIDAR and I am happy. Apple webcam anyone?


I actually did run my Fuji camera as a webcam for a Q&A at work. 35mm f/1.4 with a ring light left me looking downright pretty if I do say so myself.

https://imgur.com/7CxnRLK


I use my Fuji x-t3 with a viltrox 23mm f1.4 regularly as a webcam at work. During the first week, in all the meetings there was always someone asking how I got that 'cinematic' look.


That's quite a difference, indeed.


The real limitation: Fuji’s software for Mac is not amazing. It works, but it’s not entirely reliable.


The Logitech Brio is probably the most "state of the art" webcam you can find (and it's a few years old). Decent quality 4k (or 1080 at buttery FPS) with all manner of gubbins to correct lighting - amount, flicker, hue, etc. Oh, and it does have depth sensing - with a separate lens that'll scan your face and log you into Windows (or out when you walk off).


> with separate lens that'll scan your face and log you into windows

And hence the problem. Now you have a camera that is dependent on custom drivers, deeply entwined with your OS, that are a security risk. It also isn't likely to have usable drivers five years down the line.


It works like a normal webcam if you don't need face-recognition login.


> Mirrorless cameras cost a couple thousand dollars

No, they don't: https://www.adorama.com/ifjxt200sk.html

And that's a new, relatively expensive model.


"On Backorder Order now, your card will not be charged until it is ready to ship."

Generally, this means "a long time". I've been waiting 6 months for a recommended underwater camera, still nada.


There are £400~£600 mirrorless cameras that would be overkill for most people... To notice any difference you'd need pro-level lighting and a fantastic internet connection - and then you can justify the high-end 4-digit $$$ camera.

I can see the argument that the market for mid-range webcams must be pretty small, as the ~$100 range is ok for most people


I could buy a refurb or used Olympus E-M5 mark ii on ebay for $250, then pick up a lens and be in business.


What meeting software are you using that streams 1080p?


> I'm really curious what Apple's differentiators will be.

Really? Why are you not curious what the differentiator of all the car companies out there is? Why don't we just have one car company?

(Hint: There is so much demand, so many different tastes, budgets and features and regulation, that a single company will never serve all of them. Tesla is a niche product and I for one would never buy one)

The iPhone was special in the regard that there simply was NO competition. We are not talking EV vs combustion (which was Tesla's main step, and even still, everyone knew EVs would work and how they work, but nobody was ready to fully commit to them yet). We are talking horses vs. cars. Apple managed to establish themselves in a market that most companies didn't even think existed...

So no comparison here. All Tesla has done is make EVs more approachable and "cool" for the masses. However, they have literally ZERO differentiators compared to established car companies. They are going to tank so hard stock-wise in the next 10 years that investors will wonder what hit them.


I'm not curious about other car companies because their products are on the market already and their differentiators are well known by everyone...

> they have literally ZERO differentiator compared to established car companies. They are going to tank so hard stock wise

Their stock is insanely overvalued, no argument there. But they have one of the largest installed and fastest growing DC charging networks, software that doesn't suck, and the most capable and fastest improving driver assist system enabled by a proprietary custom ASIC. That's far from "ZERO" differentiators, especially in a world where other manufacturers keep faceplanting on their software efforts over and over again.


Also, incredibly well-integrated battery packs, a reliable source of batteries (at the volume they need), and various other ventures to amortize their EV-related costs over (e.g. Tesla Energy).


TBH they're pretty much the only car manufacturer whose cars don't look like a generic car interior wrapped in a generic car exterior shape.


Perhaps Tesla is the Blackberry of the car industry, waiting for Apple to show people what they've actually been waiting for in personal transportation. :P


More likely Apple Car is going to be the Apple Maps of the car industry, for a couple decades at least.


These studies are always funny. So what this is trying to tell us is: volunteering boosts your health, while the data only provides us with "volunteers are generally in better health". Now, this study "tried" to adjust for that by some filtering, but this is still a pretty big leap. Essentially, what they would need to do is this:

Have one group of people who are "actively" kind to other people, i.e. they actually act on their desires.

Then compare the results to a control group who "want" to be kind to people, ideally with a proven track record of being kind to people, but who are not allowed to act on that during the duration of the study. Then, of course, they also need to remain as satisfied and happy as before, otherwise they turn into a biased control group.

Then if there are no differences, and compared to another control group of "normal" humans, there is a statistically significant improvement in health, then and only then, they may be on to something... Otherwise this is just another case of survivor bias.

This is all just pointless. At least here it's for a good cause, but I always am amazed by what kind of conclusions people derive from the well established: Causation implies correlation... Erm, NOT.
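To show how easily the survivor-bias version generates the observed data, here is a toy simulation (my own made-up numbers, nothing from the study): kindness does NOTHING to health here, yet volunteers still test as healthier.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    health = rng.normal(size=n)               # baseline health, no causation
    # healthier people are simply more likely to volunteer
    p_volunteer = 1 / (1 + np.exp(-health))
    volunteers = rng.random(n) < p_volunteer

    print(health[volunteers].mean())          # ~ +0.4
    print(health[~volunteers].mean())         # ~ -0.4

Any study that just compares the two groups "finds" a health benefit of volunteering.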


I agree that some of the examples are a bit fluffy, but later in the article they mention a much stronger study! They cite an example of high school students being split into control and treatment groups for tutoring students, with a clearly observable outcome based on biological data.


It's "strong" in that you can see some effect in blood of people tutoring vs people waiting. But that's about it, you can't draw other conclusions from it. It says absolutely nothing about volunteering, health, being kind, etc.


I started counting the number of qualifying words, even in the linked studies, and then just gave up.

Is human behavior truly so opaque that experiments cannot be designed that conclusively determine specific causation?


One cannot possibly design proper controls with human behavior.

Even in pharmaceutics, confounding variables are already commonplace and have led to a considerable replication crisis; with human behavior it would be even harder to control for every variable and measure only what one wants to measure.

A very simple problem with such a design: it would perhaps indicate that human beings who are not allowed to be kind are less happy simply because human beings become unhappy when they are told what to do, not because of anything related to kindness; that's certainly not implausible.

And the fundamental problem remains that a man cannot be compelled to participate in any study, and that alone is a selection bias that cannot be overcome — studies thus select for the kind of people who have the time and willingness to participate in them.


When faced with a choice between:

- causality is less widely applicable than we like to think

- our behavior is more complex than we like to think

I'm more inclined to doubt causality. Not that it's not a useful framework in general, but that is less useful when applied by humans to humans regarding human constructs like kindness.


What does that even mean? What's the difference between A and B? I assume you don't think a spirit comes down and mediates inter-human interactions - they're not beyond physics - but rather that they're "beyond causality", as in 'normal' models don't have as much predictive power as you'd hope? But if it's that, then I don't see a practical difference between both choices.


You're correct. I'm not proposing some kind of spooky process that can't be described in terms of causation.

But I am proposing that we should have some skepticism when importing an explanatory style that has worked well in physics and expecting it to perform equally well everywhere. I think we're likely to make mistakes of this sort:

If a cat's head appears from behind the couch, and then later its tail becomes visible--you wouldn't say that the cat's head caused the cat's tail--they're just different parts of the same phenomenon. We're familiar with cats, so we don't make this exact mistake, but I think that we are quite susceptible to misapplying causation to things that we don't understand well, like happiness.

Years ago I had this idea that many of our ideas about causation are flawed in this way. So I developed a habit of taking a causal claim that seems true, flipping the arrow around, and then testing the mutated hypothesis--just to see if it worked backwards. I was surprised by how often it did.

I don't have the data to convince you that the habit of mistrusting my instincts about causation and using "deep down, it probably goes both ways" as a heuristic has caused me to reap benefits that I would have otherwise ignored, but I suspect it strongly enough that I plan to keep doing it.

So the difference between the alternative perspectives is that if you're skeptical about the universal utility of causal nitpicking, you'll learn to recognize when you're wasting your time trying to prove that the laws of physics demand whatever you've noticed. You'll bail sooner on perspectives that don't work, and you'll be more creative about finding new ones.

On the other hand, if you demand a certain style of causal argument, you're more likely to double down against the opacity of human behavior to other humans. You'll design experiments and write articles that inspire further nitpicking about which is a cause and which is an effect, and the matter will remain in stasis--perhaps indefinitely--where its capacity to positively affect people's health is limited.

I'm not saying that we shouldn't try to uncover underlying mechanisms when we can. When that's in the cards, it's pretty great. I'm just saying that we shouldn't always expect results of that sort to be within reach, and that reasoning from correlation alone can get us further than we tend to let it.


This seems unclear. I mean, either X causes Y or it doesn’t. (If you change X, holding everything else constant, does Y change or not?)

One thing you might mean is that causal chains are very complex and variable, so the average causal effect in a large population is not very informative about any individual - it's an average of many different-sized effects.
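(A toy simulation of that point -- every number here is invented, purely to show how a reassuring average can hide opposite individual effects:)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical per-person effect of some treatment on a happiness
    # score: half the population benefits a lot, half slightly suffers.
    individual_effect = np.where(rng.random(n) < 0.5, +2.0, -0.5)

    ate = individual_effect.mean()  # the "average causal effect"
    print(f"average effect: {ate:+.2f}")                               # ~ +0.75
    print(f"share who benefit: {(individual_effect > 0).mean():.0%}")  # ~ 50%

A positive average like that tells you almost nothing about whether any particular person is helped or harmed.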


I agree with your second paragraph, mostly. It's a big mess of causes and effects--some going in one direction and some going in the other. To assume that they "average" requires that they be quantifiable and in the same dimension--which might not be the case.

As for your first paragraph, I don't think you can reasonably treat causation like an everywhere-binary like that. Not for arbitrary choices of X and Y.

Take "hunger" and "war" for instance. There are many causal arrows pointing from war to hunger, and several pointing the other way too. You have to zoom in to specific details before the language of causation starts being useful. "good for your health" and "being kind to others" are similarly aggregate concepts.

Causation is a myth, and a damn good one. We use it to reason about nearly everything. But the tendency to use it to reason about actually everything is a dangerous one.

Most of the time you can't "hold everything else constant". We usually have to settle for frobbing X back and forth many times and letting its continued correlation with Y convince us that there is indeed causal wiring between the two. At the end of most experiments is an inductive leap of faith that is justified by the high frob count.
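(As a sketch, here's what that frobbing heuristic looks like in simulated form; the wiring and all numbers are made up:)

    import numpy as np

    rng = np.random.default_rng(1)

    def frob_experiment(n_frobs=1000, noise=0.5):
        x = rng.integers(0, 2, size=n_frobs)         # we set X by hand, over and over
        y = 3.0 * x + rng.normal(0, noise, n_frobs)  # hidden wiring from X to Y
        return np.corrcoef(x, y)[0, 1]

    print(f"correlation across 1000 frobs: {frob_experiment():.2f}")

The high correlation across many deliberate settings of X is what licenses the inductive leap; nothing was ever truly held constant.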

But sometimes X doesn't go back and forth. Sometimes it's a one-way deal and you only have one of them. Like maybe X is "finding carbon trapped in the earth's crust and releasing it into the atmosphere", and Y is "a global change in climate that poses an extinction risk to humans." In cases like that, we don't have the luxury of waiting for enough induction to bring about the leap of faith.

The situation with kindness and health is similar. Ideally we'd take whatever wisdom we can from the correlative data that was presented and run with it--maybe there's a cause in there and it would make us healthier, maybe not. Instead we have this artificially high bar for argumentative strength, and when an argument fails to meet it we throw the baby out with the bathwater.


The problem here is that hunger and war are woolly concepts.[1] But I wouldn't say causation is a myth at this level. You can reasonably ask whether hunger causes war and run IV regressions, instrumenting hunger with e.g. rainfall, which have a reasonable chance at testing that hypothesis.[2] (A toy sketch of the mechanics follows below the footnotes.)

[1] More thoughts about this at: https://wyclif.substack.com/p/on-social-science-about-comple...

[2] Like the literature coming from http://data.nber.org/ens/feldstein/ENSA%20Sources/Geospatial...
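(To make [2] concrete, a minimal simulated two-stage least squares: rainfall shifts hunger but affects war only through hunger, while an unobserved confounder biases the naive regression. All coefficients are invented; nothing here is estimated from real data.)

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5_000

    rain = rng.normal(size=n)      # instrument: moves hunger, not war directly
    pressure = rng.normal(size=n)  # unobserved confounder of hunger and war
    hunger = -1.0 * rain + pressure + rng.normal(size=n)
    war = 0.8 * hunger + pressure + rng.normal(size=n)  # true effect: 0.8

    # Naive OLS of war on hunger is biased by the confounder:
    X = np.column_stack([np.ones(n), hunger])
    ols_slope = np.linalg.lstsq(X, war, rcond=None)[0][1]

    # Stage 1: predict hunger from rainfall. Stage 2: regress war on the
    # prediction, which carries only the rain-driven variation in hunger.
    Z = np.column_stack([np.ones(n), rain])
    hunger_hat = Z @ np.linalg.lstsq(Z, hunger, rcond=None)[0]
    X2 = np.column_stack([np.ones(n), hunger_hat])
    iv_slope = np.linalg.lstsq(X2, war, rcond=None)[0][1]

    print(f"OLS: {ols_slope:.2f} (biased), 2SLS: {iv_slope:.2f} (near 0.8)")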


I'm gonna check out those links but I just wanted to clarify that I think causation is a myth at all levels, even the hardest of sciences. It's just that it's a more useful myth in those places.


So what's your objection to the standard Pearl/Rubin idea: "X causes Y means that if X, and nothing else, varies, then Y varies"?
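Concretely, I mean something like this toy structural model (names and numbers are made up), where intervening on X moves Y but intervening on Y leaves X alone:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    def simulate(do_x=None, do_y=None):
        # X is set by hand if do_x is given, otherwise it varies on its own;
        # Y follows X unless we clamp it with do_y.
        x = rng.normal(size=n) if do_x is None else np.full(n, do_x)
        y = 2.0 * x + rng.normal(size=n) if do_y is None else np.full(n, do_y)
        return x.mean(), y.mean()

    print(simulate(do_x=1.0))  # (1.0, ~2.0): setting X shifts Y
    print(simulate(do_y=1.0))  # (~0.0, 1.0): setting Y leaves X alone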


When was the last time you encountered a situation where only one thing was varying?

I don't have a problem with the idea on its own. It's a great way to understand all kinds of things. But data to support explanations of that sort is hard to come by.

What I object to is the belief that all explanations that can in principle be put in terms of cause must in practice be put in terms of cause--otherwise they're useless (which, I think, motivated a lot of unnecessary handwaving in the BBC article, and probably in the ones that it references too).

Researchers should contribute useful data so that others can reference it to make decisions. That data becomes less useful when they have incentives to skew things so that they can meet the awkwardly high bar of "supports a causal argument" just to be worthy of publication.

It ought to be ok to say: "these things go together. Here's the evidence. It might not be possible to untangle the causal web between them, but if you try one I bet you'll see the other," and leave it at that.

Not to say that untangling the causal web isn't worth doing, but sometimes it's just not in the cards, and demanding that we search for it in the cards is like going hungry because all of the carrots left in the grocery store are strangely shaped.


Ok, I’m sympathetic to that POV. I’m grateful for the credibility revolution in econometrics, but it clearly can become an obsession. Prediction on its own can be worthwhile. I wouldn’t say causation is a myth - maybe rather that it’s overvalued.


Designing the experiments is easy. Designing experiments anyone would want to participate in is tricky… and then you get volunteer selection effects.


Journalists who received their degrees in social science and similar fields have probably never heard of the correlation/causation principle, and with no such principle in mind they can passionately promote whatever social-justice agenda they have. (And I'm not against helping others or being kind. I'm against nonsense in social columns of mainstream media like the BBC.)


Social sciences do seem to have a reasonable background in statistics and experiment design based on those I’ve known. What makes you conclude they wouldn’t know the causation/correlation principle?


The gigantic replication crisis in social science and medicine for a start.

https://en.wikipedia.org/wiki/Replication_crisis


This is a very good point. But it makes me wonder how much of it is innate to the topic. If physics had as many confounding variables, I wonder whether it would face equal difficulty with reproducibility.


Into a senior position straight from university? That's going to be interesting.


That's pretty normal for hiring PhDs in my practical experience. My first job was 'senior' and it's the same for everyone else I can think of. You are senior - you've been part of the community for around four years by that point and by graduation you're probably part of several committees and starting to mentor others. Why isn't that 'senior'?


Not the case in my field - biomedical. The hiring manager at a large pharmaceutical company told an audience of postdocs & graduate students at a career event a few weeks ago that they prefer to see a postdoc of at least 2-4 years to be considered for a senior scientist position.


Whether you have "senior" in your title is just nomenclature at pharma companies. Plenty do put it in the title straight after a PhD, reserving plain Scientist roles for those coming out of a BS who are doing grunt work. That said, other companies call those "technician" roles, and a PhD (maybe a master's) is the start of Scientist.

Getting hung up on titles in pharma is not a good move. It isn't uniform at all.


I agree that nomenclature is just jargon. There's also the context of the parent comment, that senior jobs are typically available right out of university. I don't know what field the parent is in, though. In my experience of the biomedical field, this isn't the case. I'm a postdoc at a good medical school in the US, and most of the postdocs I've interacted with would happily take an industry job if they could manage it - there just aren't enough jobs to go around. The prospects for fresh PhDs are even more grim.


Worked pretty smoothly for me. It probably helped that I had some relatively good-looking private-sector consulting gigs during my doctorate and postdoc.

