It does not apply to classical bits in the same way. Quantum computers derive their computational power from all of their qubits being in a single joint quantum state (an entangled one, to use physics jargon).
This is distinct from classical computers, where you can describe a bit without needing to describe the other bits in the computer. You cannot describe a qubit (in a way that's computationally useful, at least) without describing all of them.
But the exponential cost (the need to describe the "whole") is there in the classical case too.
To describe a set of classical bits completely, you need a probability distribution over the whole system (including any classical correlations induced by the crosstalk that OP spoke about). That probability distribution is a stochastic vector with 2^n nonnegative real components if we have n bits.
To describe a set of qubits completely, you need at least a "quantum probability distribution", i.e. a ket that has 2^n complex components (I am skipping the discussion of density matrices).
Both the classical and the quantum description of the bits require exponentially many real numbers. This exponential behavior on its own is not enough to explain the "quantum computational advantage". The exponential behavior is well known in plenty of classical contexts (e.g. under the name "curse of dimensionality") and is routinely tackled classically with Monte Carlo methods.
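To make the counting concrete, here is a minimal numpy sketch of my own (purely illustrative, and it only runs for tiny n precisely because of the exponential growth):

    import numpy as np

    n = 3  # number of (qu)bits; both descriptors below have 2**n entries

    # Classical probabilistic bits: a stochastic vector over all 2**n bitstrings.
    # Example: n independent fair coin flips.
    p_single = np.array([0.5, 0.5])
    p_joint = p_single
    for _ in range(n - 1):
        p_joint = np.kron(p_joint, p_single)   # shape (2**n,), nonnegative, sums to 1

    # Qubits: a ket over the same 2**n basis states, but with complex amplitudes.
    # Example: the GHZ state (|00...0> + |11...1>)/sqrt(2), which is entangled.
    ket = np.zeros(2**n, dtype=complex)
    ket[0] = ket[-1] = 1 / np.sqrt(2)          # shape (2**n,), norm 1

    print(p_joint.size, ket.size)              # both 8 = 2**n: exponential in n either way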
Scott Aaronson's lecture notes cover that quite well.
In the end, the issue of crosstalk is modeled in a very similar way in the quantum and the classical case, and if it forbade the existence of quantum computers, it would forbid the existence of classical ones as well. In both cases it is the use of linear binary (or stabilizer) codes that solves the issue.
I may be misunderstanding your argument, but it seems like you are saying that modeling faults in a classical computer also needs to take into account the states of all bits in relation to one another, and that this somehow proves that the problem of crosstalk is similar to interference in quantum computers.
If I have understood your argument correctly, I don’t think the conclusion follows from the premise, because crosstalk is not fundamental to classical computing. By that I mean that for a given logical CPU design, crosstalk can be (and is) minimized through physical design characteristics (e.g. grounding, wire spacing, etc.) without reducing the size of the computation. The same cannot, afaik, be said of quantum computers, where the interaction between qubits is essential to the computation.
I do not think I follow your last two sentences. Both for bits and for qubits, two-bit gates are required (e.g. NAND for classical and CNOT for quantum). The bits/qubits need to interact for such a gate to happen and should be isolated from one another before/after the gate. And same as with bits and grounding and wire spacing, we use various decoupling techniques for qubits.
Of course, qubits are drastically more difficult, but the principle is the same.
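As a tiny illustration (my own numpy sketch, not tied to any particular hardware): a CNOT acting on a product state produces an entangled Bell pair, which is exactly why the two qubits must be allowed to interact during the gate and can be decoupled before and after it.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard on one qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                # control = first qubit, target = second

    ket00 = np.zeros(4); ket00[0] = 1              # |00>, a product state
    bell = CNOT @ np.kron(H, np.eye(2)) @ ket00    # (|00> + |11>)/sqrt(2), entangled
    print(bell)                                    # [0.707..., 0, 0, 0.707...]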
Good point. I said computation, but I was thinking more of the storage and routing pieces rather than the gates, likely because I don’t understand quantum gates very well. Most of the quantum ECC I have read about concerns the storage lifetime and retrieval process.
Basically what I mean is that classical computing can split things up during storage or routing to reduce coupling, but quantum computing can’t do this because the qubits must stay entangled to be useful.
WiFi, 5G, PCIe, Ethernet, QR codes, and pretty much any other classical communication protocol uses error correcting codes where classical bits need to be sent in blocks. E.g. for PCIe 6, blocks of 256 physical bytes are sent, but only 242 bytes of information are transmitted, because the rest is used to enforce some classical correlation for the purposes of error correction. We cannot send smaller packets of data (unless we pad). There are probably many technical details I am oblivious to here on the classical side of things, but this "cannot split things up" constraint does not seem unique to quantum correlations (entanglement).
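To put numbers on that (a toy sketch using the figures above, not a description of the actual PCIe framing logic):

    import math

    BLOCK, PAYLOAD = 256, 242   # physical bytes per block vs. information bytes (numbers from above)

    def bytes_on_the_wire(message_bytes):
        """Data goes out in whole blocks, so short messages get padded up."""
        return math.ceil(message_bytes / PAYLOAD) * BLOCK

    print(PAYLOAD / BLOCK)        # code rate, roughly 0.945
    print(bytes_on_the_wire(10))  # even 10 bytes of payload occupies a full 256-byte block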
And the way we mathematically express and model classical correlations in classical error correcting codes is virtually the same as the way to model quantum correlations (entanglement) in quantum error correcting codes.
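For a concrete (and deliberately tiny) example of that correspondence, here is a numpy sketch of my own: the parity checks of the classical 3-bit repetition code and the Z-type stabilizers of the 3-qubit bit-flip code are literally the same checks, one written over bits and the other over Pauli operators.

    import numpy as np

    # Classical 3-bit repetition code: parity-check matrix H over GF(2).
    H = np.array([[1, 1, 0],
                  [0, 1, 1]])
    error = np.array([0, 1, 0])                 # a flip on bit 1
    classical_syndrome = H @ error % 2          # [1, 1] -> pinpoints the flipped bit

    # Quantum 3-qubit bit-flip code: the same checks, now as stabilizers Z1Z2 and Z2Z3.
    I2 = np.eye(2)
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    Z1Z2, Z2Z3 = kron(Z, Z, I2), kron(I2, Z, Z)

    # Encoded (entangled) state a|000> + b|111>, then the same single flip as above.
    a, b = 0.6, 0.8
    logical = np.zeros(8); logical[0b000], logical[0b111] = a, b
    corrupted = kron(I2, X, I2) @ logical       # X error on qubit 1

    # The stabilizer eigenvalues play the role of the parity bits: -1 where a check fails.
    quantum_syndrome = [corrupted @ Z1Z2 @ corrupted, corrupted @ Z2Z3 @ corrupted]
    print(classical_syndrome, quantum_syndrome) # [1 1] and approximately [-1.0, -1.0]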
All of this with the usual disclaimer: the quantum case is much more difficult, engineeringly speaking.
What makes qubits valuable is that they scale superlinearly the more of them you have entangled together. This means 8 individual qubits can store less information than 8 entangled qubits. Since they have to remain entangled to do valuable work, you can't physically separate them to prevent interference the way you could with classical bits. This is actually really well demonstrated in all the protocols you mention. None of those protocols operates on a bus that can send 256 bytes of data all together at once. They all chunk the data and send a small number of symbols at a time.
For example, in PCIe each lane can only carry one bit at a time in each direction. In typical consumer equipment, there are at most 16 lanes of PCIe (e.g. a graphics card socket), meaning there can only be at most 16 bits (2 bytes) on the wire at any given time, but the bits are sent at a very high frequency, allowing for high transfer rates. This only works because taking those 256 bytes and sending them one by one (or 16 by 16) over the wire doesn't lose information.
I believe there are a couple of misconceptions in your first paragraph (but it is also plausible that both of us are talking past each other due to the lack of rigor in our posts). Either way, here is my attempt at what I believe is a correction:
- Both classical probabilistic bits and qubits need an exponential amount of memory to write down their full state (e.g. stochastic vectors or kets). The exponential growth, on its own, is not enough to explain the conjectured additional computational power of qubits. This is discussed quite well in the Aaronson lecture notes.
- Entanglement does not have much to do with things being kept physically in contact (or proximity), just like correlation between classical bits has little to do with bits being kept in contact.
- Nothing stops you from sending/storing entangled qubits one by one, completely separate from each other. If anything, the vast majority of interesting uses of entanglement very much depend on doing interesting things to spatially separate, uncoupled, disconnected, remote qubits. Sending 1000 qubits from point A to point B does not require a bus of width 1000; you can send each qubit completely separately from the rest, no matter whether they are entangled or not.
- Not even error correction requires you to work with "big" sets of qubits at the same time. In error correcting codes the qubits are all entangled, but you still work with them one or two at a time (e.g. see Shor's syndrome measurement protocol, sketched after this list).
- I strongly believe your first sentence is too vague and/or wrong: "What makes qubits valuable is that they scale superlinearly the more of them you have entangled together". As I mentioned, the exponential growth of the "state descriptor" is there for classical probabilistic computers too, which are believed to be no more powerful than classical deterministic computers (see e.g. derandomization and expander graphs). Moreover, Holevo's theorem basically says that you cannot extract more than n bits of information from n qubits.
- Another quote: "Since they have to remain entangled to do valuable work, you can't physically separate them to prevent interference" -- yes, you can and very much do separate them in the vast majority of interesting applications of entanglement.
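Here is the sketch promised above: a Shor-style syndrome readout in which an ancilla is coupled to the data qubits one two-qubit gate at a time, so a parity of the entangled register is extracted without any operation that touches all the qubits at once. Plain numpy, my own toy example, not a full error-correction cycle:

    import numpy as np

    X = np.array([[0.0, 1.0], [1.0, 0.0]])          # bit flip (Pauli X)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

    def op_on(qubit_ops, n):
        """Tensor a dict {qubit index: 2x2 op} into an n-qubit operator (qubit 0 = MSB)."""
        out = np.array([[1.0]])
        for q in range(n):
            out = np.kron(out, qubit_ops.get(q, np.eye(2)))
        return out

    def cnot(control, target, n):
        """CNOT built as |0><0|_c ⊗ I  +  |1><1|_c ⊗ X_t on the n-qubit register."""
        return op_on({control: P0}, n) + op_on({control: P1, target: X}, n)

    n = 4                                           # qubits 0,1,2 = data, qubit 3 = ancilla
    a, b = 0.6, 0.8
    state = np.zeros(2**n)
    state[0b0000], state[0b1110] = a, b             # (a|000> + b|111>)_data ⊗ |0>_ancilla
    state = op_on({1: X}, n) @ state                # inject a bit-flip error on data qubit 1

    # Read out the Z0Z1 parity: couple data qubits 0 and 1 to the ancilla, one CNOT at a time.
    state = cnot(0, 3, n) @ state
    state = cnot(1, 3, n) @ state

    # Probability that the ancilla reads 1 (odd parity): here it is 1.0, flagging the error
    # without ever measuring (or simultaneously touching) the data qubits themselves.
    prob_ancilla_1 = sum(abs(state[i])**2 for i in range(2**n) if i & 1)
    print(prob_ancilla_1)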
We need it all. Even if we're carbon neutral the CO2 is still in the air. We need a way to remove it. This is especially obvious given how long it's going to take to even get to carbon neutral.
Can we stop with "we don't need solutions like THIS, we need this other solution." We're going head first into a climate crisis. We need all solutions here.
No, we don't. Right now we need to cut the 90% of the problem that is caused by the same 3 things.
If you have some solution for a small part of the remaining 10% that is profitable or neutral, go for it. If you are arguing for research, so we can do it later, also, great. But if you want to divert attention from 90% of the problem just so you can improve your favorite 0.01%, just go sit in a corner somewhere until you get reasonable.
It's not necessarily a question of "need". Physicists are just measuring what protons ARE. Whether or not gluons and the strong force are necessary to form an object that looks like a proton is a separate point from what protons actually look like in our universe.
To your point on whether such an arrangement would be possible ignoring the strong force: it would not. The "net charge" viewed from the +2 quark would be repulsive, resulting in an unstable arrangement of matter; even if you could construct it in an equilibrium state, it would be the unstable kind.
See https://www.youtube.com/watch?v=pTn6Ewhb27k for an explanation. You can have a spatially asymmetric speed of light and be perfectly in line with every experiment to date.
The speed of light appearing constant in every inertial reference frame is experimentally verified and measured. But it's an axiom that the speed of light has no spatial preference. Each measurement of the speed of light sneaks in this axiom in subtle ways.
I think you misinterpreted the parent's point. They weren't saying c being a constant in all reference frames is an axiom. Rather, they were saying that it's convention that c doesn't have a spatial, directional, preference. It's a different claim.
What is not true?
You can't have reference-frame-independent Maxwell equations (where "reference frame" also covers orientation) and an anisotropic speed of light at the same time.
If the Maxwell equations are correct (which was already well tested by then), the speed is already the same for forward- and backward-propagating electromagnetic waves (= light), and there is no other spatial anisotropy either.
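As a reminder of where that comes from (completely standard vacuum electrodynamics, nothing new on my part): combining the source-free Maxwell equations gives a wave equation whose propagation speed contains no direction dependence at all,

    \nabla \times \mathbf{E} = -\partial_t \mathbf{B}, \quad
    \nabla \times \mathbf{B} = \mu_0 \epsilon_0 \, \partial_t \mathbf{E}, \quad
    \nabla \cdot \mathbf{E} = 0, \quad \nabla \cdot \mathbf{B} = 0
    \;\Rightarrow\; \nabla^2 \mathbf{E} = \mu_0 \epsilon_0 \, \partial_t^2 \mathbf{E},
    \qquad c = \frac{1}{\sqrt{\mu_0 \epsilon_0}},

with \mu_0 and \epsilon_0 entering as plain scalars, so any directional dependence of c would have to show up as a directional dependence of those constants.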
A differing one-way speed of light is an amusing "loophole" in the experiments measuring the speed of light (it requires one particular magical angular distribution of c to slip through a Michelson-Morley interferometer), but it never existed in the theory that directly predicted that speed to begin with. So if you insist on it, one needs to ask how it would even work with the rest of physics. c doesn't have a spatial/directional preference in electrodynamics or quantum electrodynamics, and the vacuum permeability and permittivity (\mu_0 and \epsilon_0) don't have any observed spatial dependence. (Such a thing does happen in condensed matter systems: effective mass, permittivity, g-factor, etc. are in general anisotropic due to the medium; this is easily detectable, and their spatial derivatives show up and need to be taken into account to match the observations, as in the case of the kinetic term -\hbar^2 (d/dx)(1/(2m(x)))(d/dx).) The Coulomb force doesn't get stronger or weaker when you rotate the table you perform your experiments on, and current-carrying wires don't produce stronger magnetic fields as you change their orientation (at least not within any observed precision). The same goes for any field theory in the Standard Model.
I should add that in terms of experimental precision, quantum electrodynamics is the most accurate theory that we have, and can put very strong limits on possible anisotropic deviations if any.
You should watch the video. No experiment has been done that shows the speed of light has no directional preference, because every measurement sneaks in the assumption that it's symmetric.
See also: https://en.wikipedia.org/wiki/One-way_speed_of_light . From the article: "Experiments that attempt to directly probe the one-way speed of light independent of synchronization have been proposed, but none have succeeded in doing so.[3] Those experiments directly establish that synchronization with slow clock-transport is equivalent to Einstein synchronization, which is an important feature of special relativity. However, those experiments cannot directly establish the isotropy of the one-way speed of light since it has been shown that slow clock-transport, the laws of motion, and the way inertial reference frames are defined already involve the assumption of isotropic one-way speeds and thus, are equally conventional.[4] In general, it was shown that these experiments are consistent with anisotropic one-way light speed as long as the two-way light speed is isotropic.[1][5]"
I get what you're saying and I'm well aware that Maxwell's equations are rotation invariant. I'm saying it's more subtle and complicated than you think. For instance, time dilation will have an asymmetry under these assumptions.
You can call it convention as many times as you want, but unfortunately, it just is not a convention. It is called the theory of electrodynamics, which is a well established, experimentally verified branch of physics.
What exactly is more subtle and complicated in the context of the Maxwell equations? If the speed of light had the anisotropy that you are describing, the Maxwell equations would have to be incorrect. In what electromagnetic experiment has such an anisotropy of the magnetic or electric constants ever been observed?
You're basically saying "you haven't measured the one-way speed of light directly, so you haven't ruled out the possibility of my exotic theory", but it has actually been ruled out by the Maxwell equations a long time ago. Unless you have some experimental proof that the Maxwell equations need to be modified to accommodate that elusive version of your aether, you can't claim the existence of such an anisotropy.
Physics is well connected, in that you can't change one part of it (in your case, c in the context of special relativity) just because you found something that wasn't experimentally ruled out, and hope the rest of physics (basically all massless field theories and the relevant experimental results, in this case) won't break.
For a (physical) relativist, the speed of light is really simple. c = 1, everywhere and everywhen. <https://en.wikipedia.org/wiki/Geometrized_unit_system> This is because we have excellent evidence for the utility of <https://en.wikipedia.org/wiki/Pseudo-Riemannian_manifold#App...>, and the further astrophysically-driven demands of global hyperbolicity or at least reasonably strong causality conditions, no isometric embeddings, geodesic incompleteness, asymptotic flatness around sources, junctions in sufficiently flat space, and energy conditions. Those further demands are the basis for continuing to rely on Special Relativity in laboratory settings.
[1] quoting your wikipedia link, "... inertial frames and coordinates are defined from the outset so that space and time coordinates as well as slow clock-transport are described isotropically". Well, yes. Establish points first then assign coordinate labels is the relativist's procedure, surely?
On this point, Earth laboratories are in general not in inertial frames, thanks to gravitation. No laboratory is in general in an inertial frame, thanks to the metric expansion of space. We can in principle extract a preferred foliation (e.g. the scale factor a, or some function on lunisolar tides) and use that as the basis for time coordinates instead. In effect this is what we do for high-redshift objects and many lunar laser ranging experiments <https://ssd.jpl.nasa.gov/ftp/eph/planets/ioms/>[a] <https://arxiv.org/abs/1606.08376> §3,§4. <https://link.springer.com/article/10.1007/s10569-010-9303-5> discusses aspects of how to choose a preferred foliation (in the context of gauge freedom) in the solar system, and in the context of grinding out a results-prediction for some future LLR experiment. The goal is to be able to show that the locations of the three instruments were accurately predicted, and Lorentz-invariance is thoroughly baked in (the calculations are so exceptionally sensitive to the introduction of tiny breaking parameters in the style of SME <https://arxiv.org/abs/0801.0287> that it has led to the discovery and/or better understanding of several of the features listed as parameters at [[a] LLR_Model_2020_DR.pdf §4]).
tl;dr: coincident-events first, then labels (coordinates). [Einstein 1916, p.117 [2] although I remembered to look there only after writing all of the below]. One-way speed of light arguments are in danger of being coordinates-first, and thus insufficiently general for physics.[3]
The key word in your comment is
> directly
But why do we care? We have an abundance of indirect evidence, premised on direct tests of coordinate-independent features of our best most-fundamental theory. The two important features of (general) relativity are pointwise local Lorentz covariance -- where c is the only free parameter of the Lorentz group -- and the minimal coupling. Special relativity's Minkowski space is in this view a special static time-orientable spacetime in which we have global Poincaré invariance (c again is the only free parameter of the Poincaré group; the Lorentz group is a subgroup of the Poincaré group -- the latter includes all the spacetime translations, and in the Minkowski case the space-translation and time-translation symmetries all commute). When we go blithely parallel-transporting null vectors, this is what matters.
It is far from silly to write down a theory where c varies in spacetime. It is the foundation of several alternative-to-cosmic-inflation decaying-bimetric theories of the very early universe, where c eventually stabilizes to its value in our local spacetime having been a different (typically much much much -- ~30 orders of magnitude -- higher) value during the formation of primordial matter density variations. The faster speed of light allows for distant reaches of the early universe to reach the same temperature with uniformity up to the small fluctuations in the cosmic microwave background.
Of course we run into the same point you've been working in this thread: it's hard to discover the exact function on c in the early universe. We have to rely on indirect evidence, and strong gravitational lensing is useful there. SVOM <https://svom.cnes.fr/en/SVOM/GP_mission.htm> is looking for Lorentz-invariance-violation (LIV)-induced modifications to the photon dispersion relation in vacuum, and is a particularly good platform for test of a Taylor-series expansion like E^2=p^2 c^2 ( 1 +- \sum_{n=1}^{\infty} a_n ), since GRBs at least somewhat escape the problem that the lowest order terms dominate at small energies, and they are distributed across the sky and at different redshifts. We are also now better equipped to study light echos (oh for a galactic supernova!) and detailed strong galactic lensing studies. Spoiler: the constraints on a spacetime-translational variation of c grow tighter with every observation. However, to fully rule out a sharp phase-change in c, we will need practical ~ 10^-15 Hz gravitational-wave astronomy. LIGO is most sensitive around 10^2 Hz; eLISA would be around 10^-2 Hz. (I am fairly sure the authors of most modern variable-speed-of-light early cosmologies knew as they were writing that they probably could hide in that hard-to-explore space through a few generations of gravitational wave observatories. One might say the same about a wide variety of recent cosmic inflation theories, too.)
In a general dynamical spacetime the notion of a two-way path is tricky. Even in Minkowski space, for a two-way signal, the return detection arrives at a different, later, point in spacetime than the outbound signal, even if the spacelike coordinates are always (0,0,0) [this is somewhat reminiscent of the twin paradox]. Outbound-and-return are two future-directed null geodesics. Your argument in this setting is equivalent to saying that we are somehow in trouble because the "outbound" and "return" null geodesics may have, without rescaling, different affinely-parametrized lengths.
In SR what we care about is that the signal is Lorentz invariant at each spacetime point where it could be sampled, even as sender and receiver/reflector are moving ultrarelativistically or are ultraboosted. Given Lorentz-invariance we can determine the three relevant points on the manifold. (Poincaré invariance means we can do this same test at any time or place in the flat space universe). Your complaint is that this is not a direct one-way measurement. OK, it's not. So what? We can in principle directly test Lorentz-invariance at any point (e.g. we can have a sparse gas with a well-understood (as in at the Standard Model of Particle Physics level) low extinction coefficient). If we have no flat space violation of Lorentz-invariance, we must have symmetry of light travel time for constant light-like separation.
In a dynamical general spacetime (Lorentz invariance -> local Lorentz invariance (LLI)), we can readily move the intended recipient of a one-way light pulse outside the reach of the light pulse itself; nature already does this for us in at least a couple of ways (metric expansion and astrophysical black holes). We can also have different delays on each arm of a two-way measurement, e.g. through Shapiro delay, around a spinning mass, or in the presence of a gravitational wave. However, at each point in the (vacuum part of the) curved spacetime [a] light obeys the massless wave equation and [b] local Lorentz invariance demands that the fraction of the wave at X propagates to a neighbouring point X' at c, and that X' must be drawn only from certain available neighbouring points.
So, really, it's not so much "what is the one-way velocity of light?" but rather "how much spacetime does a pulse of light traverse between two spacetime points?". Or in other words, we are looking for an affine parametrization of a curve that has zero interval, that extremizes the length between two points on the manifold m, and that is constructed by parallel-propagating a tangent vector on m in its own direction.
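For reference, in completely standard notation (nothing specific to this discussion), that is just the null condition together with the geodesic equation in an affine parameter \lambda:

    g_{\mu\nu} \, \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} = 0,
    \qquad
    \frac{d^2 x^\mu}{d\lambda^2} + \Gamma^{\mu}{}_{\alpha\beta} \, \frac{dx^\alpha}{d\lambda} \frac{dx^\beta}{d\lambda} = 0,

with \lambda defined only up to affine rescalings \lambda \to a\lambda + b, which is the freedom alluded to above.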
Consequently, I think the issue at the core of your points about measuring one-way speed of light is how to best label, with coordinates, two particular coincidence-points in a Lorentzian spacetime, rather than choosing two labels[1] (via your favourite synchronization scheme, for example) which are then used as the basis for a parametrization of a null curve. Then the question you ask is: "how can we know that the spacetime is Lorentzian?" or alternatively, "how do we know there is not some vicious additional gravitational field and vacuum polarization which makes the spacetime only seem Lorentzian?" for which experimentalists have generated a century worth of answers. The answer of how to best label two points in a Lorentzian manifold is: it really depends on what and how you want to calculate. The physics here are that the two points (in your null-curve-parametrizing / one-way-light-travel-time experiment) are timelike-separated.
Only in a technical sense. Running experiments under unprecedented conditions that generate predictable results is trivial -- it happens literally every time you run any experiment, since conditions can never be perfectly replicated. The fact that nothing notable has popped up from recent LHC experiments is itself notable, but only just, and certainly not notable enough to justify the costs involved.