(not a physicist) One thing I've always wondered about, and never see "debunked" anywhere, is whether entanglement is really just the two particles being put into a pretty predictable state (opposite of each other): if one is measured "up", the other will measure "down".
To me that just screams "particle physics is predictable (deterministic) as long as the particles are shielded from outside noise, not because they are connected/bound together by some mysterious force or law of physics."
I suppose a thought experiment to prove/disprove that would be to send one of the entangled particles, particle A, around a black hole to slow its time down and then afterwards measure whether the entangled particles still give opposite results, consistent with the time delay.
There is a very important and often forgotten caveat to Bell's inequalities: they only cover local hidden variables. If we are to assume that the measurement and particle-creation apparatuses are entangled with each other (after all, they have had plenty of opportunities since the Big Bang), then we have a global hidden state, and the quantum measurement randomness becomes a simple artifact of removing the global state from the picture. This interpretation of QM is usually called "superdeterminism" and, personally, I like it much more than the black voodoo measurement magic with collapsing wave functions or the creation of whole new worlds on each tiny measurement. This video can be a good introduction to the topic: https://youtube.com/watch?v=dEaecUuEqfc (don't mind the clickbaity title, the video itself is good)
Superdeterminism is deeply weird, just in a different way than most of the other quantum mechanics interpretations are deeply weird. The main thing is that it implies that this global quantum state is just-so arranged that no matter how you make your decisions about what to measure, it's always correlated with the underlying quantum state.
I would say it differently. Superdeterminism opposes the deeply ingrained assumption that we can design experiments in a way which removes the influence of the measurement apparatus (including the experimenters themselves) on the measured process.
It's more than that, though. Just 'having an influence' on the measured process doesn't explain the Bell inequality. Superdeterminism basically requires that there is some common state from the Big Bang, which means that if I were to decide to, e.g., seed the random number generator I'm using in an experiment with a description of what I had for breakfast that morning, the particles in that experiment (which could in principle come from far enough away that they had no way of causally interacting with me or said breakfast) somehow 'know' that I had made that decision, what I had for breakfast, and the details of the random number generator, and act accordingly. Absent some mechanism by which this might occur, it requires an incredibly complex kind of setup of the universe to create that result, one that has so many free variables it could explain almost any universe with any physics.
It's not some sort of particle conspiracy. The idea is not so different from Laplace's demon. We have an initial state of the Universe at the moment of the Big Bang (a PRNG seed, if you will) and a set of differential equations (QM is no different in this regard). Theoretically, that allows the demon to predict everything in the Universe. The wave nature of the QM equations introduces a certain quirk, but, effectively, in your example the breakfast was already "preordained" at the moment of the Universe's creation.
Surprisingly, this idea makes many physicists very uncomfortable and they start to object to SD using philosophical arguments about "free will".
It should make anyone uncomfortable (a trait it shares with all other known interpretations of QM). It implies a degree of correlation across many different levels of abstraction which basically nothing else in physics does. As the name implies, it's not just an abstract sense of determinism but one which tips the scales of everything at every level towards a specific outcome.
>one which tips the scales of everything at every level towards a specific outcome
According to SD, this is nothing more than an artifact of splitting an entangled system into an "observer" part and an "observed" part. The linked video covers this relatively well: the "randomness" of quantum measurement is nothing more than an artifact of the artificial split of the Universe done by humans.
And since that precludes it from ever being testable, falsifiable, or making predictions different from a Bell's Inequality based theory, it just isn't physics
Just to try and summarise the issue. It turns out measurements of one of the entangled particles (particle A) are correlated to the settings of the measurement apparatus. For example the axis on which the polarisation of a photon is measured affects the measurement you get.
That setting is not known when the particles become entangled, and so in principle cannot affect the state of particle B. However since the setting does in fact correlate with the measured state of particle A, it also correlates with the state of particle B.
It proves there's no "local hidden variable"--the state is indeterminate until they are measured. It's proven through some pretty simple probability theory, and is (relatively) easy to follow. There's a great video of Leonard Susskind explaining it somewhere
Imagine I send you a box. The box has three buttons: one on the top, one on the front, and one on the side. You can press only one of these buttons. When pressed, it will light up either green or red. Pressing the other buttons afterward has no effect; the box is disabled.
I send your friend a copy of the same box. I tell you that the result of pressing each button is random, but that no matter what, if you both press the same button you will see the same result. If you press two different buttons, your results will only sometimes match.
You ask me to send you and your friend a bunch of these paired boxes and start testing. You then both press random buttons on each box and record your results.
Comparing notes afterward, you see that every time you happened to press the same button you received the same result, and that when you pressed different buttons your results matched less often. No problem, you think: I have obviously preprogrammed each box with one of GGG, GGR, GRG, GRR, RGG, RGR, RRG, or RRR.
But then you notice something strange. If this theory were true, then for the 2/8 boxes programmed GGG or RRR you would see the same answer no matter which buttons you pressed, and for the remaining 6/8 boxes (programs like GGR) you would get the same answer 5 times out of 9 when you each press a random button. That means your answers should agree 2/3 of the time. Even if you surmise that I never send you a box set to the same three values, your results should still agree 5/9 (about 56%) of the time.
You crunch the numbers and find that your results agree only 50% of the time overall, and only 25% of the time on the rounds where you pressed different buttons. No way of preprogramming the boxes can produce that.
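If it helps to make the counting concrete, here is a rough Python sketch. It assumes both of you press buttons uniformly at random, and takes as the "real" boxes the usual photon-pair realization with three polarizer settings 60° apart, where different settings agree cos^2(60°) = 1/4 of the time:

    # Enumerate every possible preprogramming ("local hidden variable") of the boxes
    # and compare against the quantum prediction. Assumes uniformly random button
    # presses and identical programming in both copies of a box.
    from itertools import product

    def agreement(program):
        """Fraction of random button pairs (i, j) where both copies show the same colour."""
        pairs = [(i, j) for i in range(3) for j in range(3)]
        return sum(program[i] == program[j] for i, j in pairs) / len(pairs)

    programs = list(product("GR", repeat=3))            # GGG, GGR, ..., RRR
    mixed = [p for p in programs if len(set(p)) > 1]    # everything except GGG and RRR

    print(min(agreement(p) for p in programs))          # 5/9 ~ 0.556: the classical floor
    print(sum(agreement(p) for p in programs) / 8)      # 2/3 if all 8 programs are equally likely
    print(sum(agreement(p) for p in mixed) / 6)         # exactly 5/9 for mixed programs

    # Quantum prediction: same button -> always agree; different buttons -> agree 1/4.
    print(1/3 * 1.0 + 2/3 * 0.25)                       # 0.5, below anything preprogramming allows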
The interesting thing about entanglement is not the correlation per se. You can take a pair of hand gloves, put each one into a box, and send them to opposite ends of the universe. When you open one box at one end of the universe and see the left glove, you immediately know that someone at the other end of the universe will find the right one. The interesting thing about entanglement is that the decision of which glove goes into which box is not made when you prepare the boxes before sending them to opposite ends of the universe, but only at the moment you look into the first box.
If all you could measure was "up" and "down" then I think entangled particles would be indistinguishable from unentangled particles that were created as up/down pairs. But particles can be measured in other directions, and that's where the determinism goes away.
A nice thought experiment is the CHSH game. It's a two-player game where the players (player A and player B) cooperate to beat the house. It is played as follows:
1. Each player is assigned a referee.
2. The players, accompanied by their referee, go to separate rooms. Before going to the separate rooms the players can confer. They may also bring any equipment with them that they want. The rooms are shielded to block any communication between the players during their time in the rooms. You may assume that the communication blocking is 100% effective.
3. Each referee uses a true random number generator to generate a bit, and tells the player the value of that bit.
4. The player then generates a bit, by any means, and tells it to the referee.
5. The referee records the bit they generated and the bit provided by the player.
6. Steps #3-5 are repeated 999 more times.
7. After both players have gone through #3-5 1000 times, the referees confer and check their records. For each round the players win $100 in these two cases:
The players generated different bits and the referees both generated 1
The players generated the same bit and at least one referee generated 0
In a classical universe the best strategy for the players is simply to agree on an algorithm that will result in them picking matching bits every round, such as "always pick 0". 75% of the time the referees will generate 00, 01, or 10 and the players will win $100.
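If you want to convince yourself that 75% really is the classical ceiling, here is a small brute-force sketch in Python. It only checks deterministic strategies (each player's answer as a function of their referee's bit); shared randomness is just a mixture of those, so it can't do better:

    # Brute-force every deterministic classical strategy for the CHSH game.
    from itertools import product

    def wins(r_a, r_b, a, b):
        """Win condition: bits must differ when both referees said 1, match otherwise."""
        return (a != b) if (r_a == 1 and r_b == 1) else (a == b)

    # A strategy is a pair (answer when told 0, answer when told 1).
    strategies = list(product([0, 1], repeat=2))

    best = max(
        sum(wins(ra, rb, sa[ra], sb[rb]) for ra in (0, 1) for rb in (0, 1)) / 4
        for sa in strategies for sb in strategies
    )
    print(best)   # 0.75 -- no classical strategy wins more than 3 rounds in 4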
In a quantum universe the players can do better. They can generate 1000 pairs of entangled particles and each take one particle from each pair. Let's assume the particles are photon pairs entangled in polarization, prepared so that if both are measured along the same axis they always give the same result.
When player A is given the referee's bit, A sends their particle from the first pair through a polarizing filter and reports a 1 if the particle makes it through the filter, and a 0 if it is blocked.
If the referee's bit was a 0 the player orients their polarizing filter along the up/down axis. If the referee's bit was 1 they orient their filter rotated 45° to the right.
Player B does a similar thing, except their filter is rotated 22.5° to the right if they got a 0 from the referee and 22.5° to the left if they got a 1.
Their measurement angles, where X0 means player X got a 0 from the referee and X1 means they got a 1, are: A0 = 0°, B0 = 22.5°, A1 = 45°, and B1 = -22.5°.
They do this for each round, using the photons from the n'th entangled pair for round n.
Note that if either player receives a 0 from the referee the angle they use will be 22.5° apart from the angle the other player uses no matter what bit the other player got from the referee.
When the two measurements on a pair of entangled photons are made at angles θ apart, the results agree cos^2(θ) of the time.
For 22.5° that's 85.4% of the time, so when either referee generates a 0 the players will win 85.4% of the time.
If both referees generate 1, B measures at -22.5° and A measures at 45°. That's 67.5° apart, and the players' bits only match 14.6% of the time. But when both referees generate 1 the players want to generate different bits, so that's good: the players win 85.4% of the time in this case too.
That's an 85.4% win rate in all cases, which beats the 75% that they can get in a classical universe.
If you try to make some sort of classical-only thing that can take the place of entangled particle pairs you'll find that you can't make it work. You won't get past 75%.
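Here is the same brute-force idea applied to the quantum strategy described above, a rough sketch that just plugs the four referee-bit cases into the cos^2(θ) agreement rule (the angle labels A0/A1/B0/B1 follow the description above):

    # Win rate of the entangled-photon strategy, using the rule that measurements
    # taken at angles d apart agree with probability cos^2(d).
    import math

    def p_same(angle_a, angle_b):
        return math.cos(math.radians(angle_a - angle_b)) ** 2

    A = {0: 0.0, 1: 45.0}      # player A: 0 deg if told 0, 45 deg if told 1
    B = {0: 22.5, 1: -22.5}    # player B: +22.5 deg if told 0, -22.5 deg if told 1

    total = 0.0
    for ra in (0, 1):
        for rb in (0, 1):
            same = p_same(A[ra], B[rb])
            win = (1 - same) if (ra == 1 and rb == 1) else same   # want different bits only on 1,1
            total += win / 4

    print(total)   # ~0.8536 = cos^2(22.5 deg), beating the classical 0.75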
Another thought experiment that might be clearer (or might just muddle things), involving two mysterious devices that you are trying to reverse engineer, is here [1]. That puts it in more mechanical/computational terms, which may be easier to play around with.
I also quickly ditched mine. I mean, it worked sort of fine, but the usability was absolutely terrible. Apps often lost connection with it, so any request to pause or resume the media was delayed by several seconds. If I got a phone call I often wanted a quick way to pause my media, but Chromecast made this super inconvenient, slow, and stressful while the phone was blurting out its ringtone.
App support was also spotty at best.
In the end I realized that since I've already chosen between a rock and a hard place (went with the Apple ecosystem), I could just screen-share using an old Apple TV. This ended up working much better in practice (albeit with a lower-quality video stream) than Chromecast. Today I don't cast much video anymore for some reason, not really sure why. I have an Apple TV 4K and mostly just use the native apps from the various services. Having a remote for a system that is completely detached from your phone is much nicer usability-wise IMHO.
I suffered from canker sores all of my childhood and well into my early adulthood. I sometimes had as many as 20 sores around my mouth. For as long as I can remember I've used a non-SLS toothpaste, but I still got the sores, especially around stressful periods like exams.
For some reason they started appearing less and less as I approached my 30s. I never get them anymore. I still use the same non-SLS toothpaste, though, as I'm not really keen on them returning, despite the toothpaste not really helping me much when I was younger. I imagine it at least kept them from being worse than they could have been.
Another reason I kept the same toothpaste brand is that I've never had a single cavity in my adult teeth, while my siblings have had plenty (they used a different brand than me). Not sure whether that's down to the toothpaste or to the fact that I never rinse my mouth with water after brushing; they still do that to this day.
This one always comes to mind when I see fancy toilet designs at various venues. Which one do I press? In UI, a bigger button normally means a more-used feature, and I can't imagine the average person doing number 2 more often than number 1.
Most toilets around here have the same size button but an icon with a big circle representing a big flush and small circle for small flush. The buttons being the same size makes the icons have more "meaning".
I've just gotten used to the fact that this is not always the case and that they actually went for small button = small flush. It makes sense physically, but UI-wise not so much.
I've somewhat convinced myself that someone in the postal service is leaking information about pending parcels to scammers (or the scammers have access to some servers). Whenever I'm expecting a package, the number of phishing attempts in my email skyrockets. A period of no packages: a lot fewer attempts. Waiting for a new package? Phishing emails ramp up again.
How common is writing checks in America these days? In Denmark it is more or less completely phased out (with few exceptions). Nobody uses them anymore in practice. In Denmark I've had no issues cancelling my gym memberships directly online but haven't been to that many different ones.
A few situations apply for me. I write checks to churches when I show up in person, even for a $1.00 votive candle. I can toss it into the collection basket or the donations jar for donuts and coffee. However, more churches are improving their online donations, and I write fewer checks as a result.
I also cut checks as part of online bill payments. If the bank can't identify an electronic recipient, then the payment gets mailed out on a paper check. This happens for my landlord, and anything I can't set up for AutoPay from the other side. I can even send money to friends this way: anyone with a postal address, and I don't pay fees or postage for this Bill Pay service.
In fact, I make use of Bill Pay so much that I didn't reorder checks for one account. My father was mystified by that decision.
Billpay Checks are actually Cashier's Checks and are drawn from account numbers that aren't your account, btw. It's one of the reasons it's far more secure to pay any bill requiring a check this way.
Check fraud is massively on the rise. They don't even need a physical check, just the info. They're printing their own checks now and depositing them electronically. They also hire homeless people right off the street to go in and cash the checks for them. The homeless person keeps $100, and the fraudsters take the rest.
If you're using checks the way you say you are, it's only a matter of time before you have to deal with swapping out bank account numbers.
> Billpay Checks are actually Cashier's Checks and are drawn from account numbers that aren't your account
They are most definitely not cashier's checks. I checked both institutions and both of them issue actual checks on my account. The only difference is their sequence numbers. The money doesn't leave my account until the recipient deposits them. I have, in the past, issued stop payment orders for them.
I loathe the cost-cutting keyboard manufacturers have turned to over the recent years.
I cannot get a good Danish keyboard anymore without the ÆØÅ keys also having the Swedish, Norwegian, and Finnish legends printed on them, often in different colors.
My decade-plus-old "Logitech Illuminated" keyboard has been the best keyboard I've ever had, but recently it's been acting up: it occasionally adds diacritics to random letters. I tried cleaning it with no luck. It was a Danish-only version with laptop-like flat keys but more travel distance, so it feels like the best of both worlds. It also had a nice flat palm rest.
Unfortunately I can't find this keyboard, or anything with a similar form factor, anywhere anymore, and when I can find some version of it, it has the terrible multi-country cost-cutting keys on it. I suppose that if I do find a replacement keyboard I can move the old keycaps over to the new one (if they are even the same after all these years).
Do the Danes not have the mechanism found on Finnish keyboard layouts, where pressing AltGr+Ö yields Ø and AltGr+Ä yields Æ, except in reverse?
Those mappings are not universal. They are present under Linux but not on MS-Windows.
I don't know about Mac, but the layout has in the past been slightly different there from Windows also.
Interesting, it's been like a decade since I last used windows, but I had to go and check what layouts are available, since I remember having these combinations on my layout. Apparently those combinations are provided by windows in the "Finnish and Sami" layout, which provides a number of extra letters (not just ones used by the Sami languages) through AltGr+letter combinations. I must have selected that as my layout at some point while I was still using windows, possibly for the purpose of getting easier access to letters like ÆØÕ, and just forgotten it after some time.
They are there on the Mac too: use option-Ä to get the Æ, or the other way around. What's more, it has worked like that since System 7.x times or so; it's just a good idea.
I had to change settings on Windows to get access to a mode that I could then enable to readily type Spanish correctly. That mode uses key combinations like the ones described.
There are libraries that help with iterating both code-points and grapheme clusters...
- but are there any that can help decide what to do, for example, when pressing backspace, given an input string and a cursor position? Or any other text-editing behavior? This use-case-dependent behavior must have some "correct" form that is standardized somewhere?
Like a way to query what should be treated like a single "symbol" when selecting text?
Basically something that could help out users making simple text-editors.
There are so many implementations out there that do this incorrectly, so there must be some tools/libraries to help with it?
Not only for actual applications but for people making games as well where you want users to enter names, chat or other text.
Not all platforms make it easy (or possible) to embed a fully fledged text editing engine for those use-cases.
I can imagine that typing a multi-code-point character manually would let the user undo a typing mistake with a single backspace press while they are actively typing it, but that afterwards, if you return to the symbol and press backspace, it would delete the whole symbol (grapheme cluster).
For example if you manually entered the code points for the various family combination emojis (mother, son, daughter) you could still correct it for a while - but after the fact the editor would only see it as a single symbol to be deleted with a single backspace press?
Or typing 'o' + '¨' to produce 'ö' but realizing you wanted to type 'ô': then just one backspace press would revert it to 'o' again and you could press '^' to get the 'ô'. (Not sure that is the way you would normally type those characters, but it seems possible to do it with Unicode that way.)
I'd argue that you must use grapheme clusters for text editing and cursor position, because there are popular characters (like the ö you used as an example) which can be either one or two codepoints depending on the normalization choice, but the difference is invisible to the user and should not matter to the user, so any editor should behave exactly the same for ö as U+00F6 (LATIN SMALL LETTER O WITH DIAERESIS) and ö as a sequence of U+006F (LATIN SMALL LETTER O) and U+0308 (COMBINING DIAERESIS).
Furthermore, you shouldn't assume that there is any relationship between how Unicode constructs a combined character from codepoints and how that character is typed. Even at the level of typing, you're not typing Unicode codepoints - they're just a technical standard representation of "text at rest"; Unicode codepoints do not define an input method. Depending on your language and device, a sequence of three or more keystrokes may be used to get a single codepoint, or a dedicated key on the keyboard or a virtual button may spawn a combined character of multiple codepoints as a single unit. You definitely can't assume that the "last codepoint" corresponds to the "last user action", even if you're writing a text editor - much of that can happen before your editor receives the input, e.g. in the OS keyboard layout code. Your editor won't know whether I input that ö from a dedicated key, a 'chord' of the 'o' key with a modifier, or a sequence of two keystrokes (and if so, whether 'o' was the first keystroke or the second, the opposite of how the Unicode codepoints are ordered).
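To make that concrete, here is a quick Python sketch (it uses the third-party "regex" package, whose \X pattern matches extended grapheme clusters; any grapheme-cluster library would do):

    import unicodedata
    import regex   # pip install regex

    nfc = "\u00F6"       # ö as a single codepoint (LATIN SMALL LETTER O WITH DIAERESIS)
    nfd = "o\u0308"      # ö as o followed by COMBINING DIAERESIS

    print(nfc == nfd)                                             # False: different codepoint sequences
    print(len(nfc), len(nfd))                                     # 1 2
    print(regex.findall(r"\X", nfc), regex.findall(r"\X", nfd))   # one grapheme cluster each
    print(unicodedata.normalize("NFC", nfd) == nfc)               # True: identical text to the user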
> I'd argue that you must use grapheme clusters for text editing and cursor position
Korean packs syllables into Han-script-like squares, but they are unmistakably composed of alphabetic letters, and are both typed and erased that way (the latter may depend on system configuration), yet the NFC form has only a single codepoint per syllable (a fortiori a single grapheme cluster). Hebrew vowel markings are (reasonably) considered to be part of the grapheme cluster including their carrier letter but are nevertheless erased and deleted separately. In both of those cases, pressing backspace will erase less than pressing shift-left, backspace; that is, cursor movement and backspace boundaries are different.
There are IIRC also scripts that will have a vowel both pronounced and encoded in the codepoint stream after the syllable-initial consonant but written before it; and ones where some parts of a syllable will enclose it. I don’t even want to think how cursor movement works there.
Overall, your suggestion will work for Latin, Cyrillic, Greek, and maybe other nonfancy scripts like Armenian, Ge’ez, or Georgian, but will absolutely crash and burn when used for others.
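The Korean point is easy to see with a quick sketch using only Python's standard library; one precomposed syllable hides three alphabetic letters:

    import unicodedata

    syllable = "\uD55C"                      # 한 ("han") as a single codepoint
    jamo = unicodedata.normalize("NFD", syllable)

    print(len(syllable), len(jamo))          # 1 3: leading consonant, vowel, trailing consonant
    print([hex(ord(c)) for c in jamo])       # ['0x1112', '0x1161', '0x11ab']
    print(unicodedata.normalize("NFC", jamo) == syllable)   # True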
OK, I understand that the initial sentence was too strict; however, using codepoints for text editing and cursor position is even worse. Even in your example of Korean there would be a clear difference depending on how the same character happens to be encoded (composed NFC or not), but it should look the same to the user; and obviously, if someone inputs a Latin letter with a diacritic by pressing a modifier key before the base letter, then backspace removing the diacritic (since Unicode combining marks come after the base letter) would be just ridiculous.
Backspace in general seems to be a very difficult problem because of subtly incompatible expectations depending on the context, as 'undo last input' when you're typing new text, and 'delete previous symbol' if you're editing existing text.
Some platforms (e.g., Android) have methods specifically for asking how to edit a string following a backspace. However, there's no standard Unicode algorithm to answer the question (and I strongly suspect that it's something that's actually locale-dependent to a degree).
On further reflection, probably the best starting point for string editing on backspace is to operate on codepoints, not grapheme clusters. For most written languages, the various elements that make up a character are likely to be separate codepoints. In Latin text, diacritics are generally precomposed (I mean, you can have a + diacritic as opposed to precomposed ä in theory, but the IME system is going to spit out ä anyways, even if dead keys are used). But if you have Indic characters or Hangul, the grapheme cluster algorithm is going to erroneously combine multiple characters into a single unit. The issue is that the biggest false positive for a codepoint-based algorithm is emoji, and if you're a monolingual speaker whose only exposure to complex written scripts is Unicode emoji, you're going to incorrectly generalize it for all written languages.
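To see the tradeoff concretely, here is a rough sketch comparing the two deletion policies on a decomposed Latin letter, a decomposed Hangul syllable, and an emoji ZWJ sequence (again assuming the third-party "regex" package for grapheme clusters):

    import regex   # pip install regex

    def backspace_codepoint(text):
        return text[:-1]                            # drop the last codepoint

    def backspace_cluster(text):
        clusters = regex.findall(r"\X", text)
        return "".join(clusters[:-1])               # drop the last grapheme cluster

    samples = {
        "o + combining diaeresis": "o\u0308",
        "decomposed Hangul 'han'": "\u1112\u1161\u11AB",
        "family emoji (ZWJ sequence)": "\U0001F468\u200D\U0001F469\u200D\U0001F466",
    }

    for name, s in samples.items():
        print(name, repr(backspace_codepoint(s)), repr(backspace_cluster(s)))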
IMHO backspace is not an undo key. Use CTRL+Z if you want to undo converting a grapheme to another grapheme with a diacritic character. Backspace should just delete it.
On the other hand, a ligature shouldn't be deleted entirely with just one backspace. It's two letters after all, just connected.
So how do we distinguish when codepoints are separate graphemes and when they constitute a single grapheme? Based on whether they can still be recognized as separate within the glyph? On whether they combine horizontally vs. vertically (along the text direction or orthogonal to it)? What about e.g. "¼" - is that 3 graphemes? What about "%" and "‰"? What about "&" (an "et" ligature)? It seems you can't run away from being arbitrary…
> Or typing 'o' + '¨' to produce 'ö' but realizing you wanted to type 'ô': then just one backspace press would revert it to 'o' again and you could press '^' to get the 'ô'.
This is a good example because in German I would expect 'o' + '¨' + <delete> to leave no character at all while in French I would expect 'e' + '`' + <delete> to leave the e behind because in my mind it was a typo.
The rendering of brahmic- and arabic-derived scripts makes these choices even more interesting.
Same for a French keyboard with éèàù which are all typed using one key. But even êôûæœäëïöü, all typed using at least two keys, if not 3 with a compose key (from memory, I'm using a phone). Everybody is used to the way it has been working on all OSes.
I realize that the editor would have to keep track of how the character was entered for this to work. If you made the character with a single keypress, it would only make sense that backspace also removes the entire character. Only if you created the character from multiple keypresses would it make sense to "undo" only part of it with backspace (at least until you move away from the character).
> make sense to "undo" only part of it with backspace
I'm not sure that ever really makes sense: it would be a misnomer if "backspace" didn't bring you "back" some amount of horizontal "space," I reckon. This logic holds up not only for cases like ö and emoji (where I'd hope the whole grapheme disappears), but also for scenarios like if one types <f><i> and an <fi> ligature appears, where I'd hope only the <i> disappears: that's fine because you are still going back some space.
If the key ever gets repurposed from "backspace" to "undo" then I would agree that it should step back to the previous state with as much granularity as possible.
“Backspace” is already a misnomer: the original intention was for it to move the caret one position back, the other way around from the usual space, thus enabling overstriking (whence also the characters _ ^ ` , unheard of before the typewriter age). You can still see this used by the troff|less internals of man, which encode underline and bold as _<BS>a and a<BS>a respectively.
The text already contains the data about how the character was entered (more specifically, what codepoints it's made of), unless it has been normalized somehow. It seems like a tarpit to me to properly specify these behaviours in a way that won't be an annoyance/surprise to a lot of people.
It would probably expose the implementation too much, but if you wanted combined characters to be split apart, I would expect "ö" to be removed in one go, whereas an "o" with a combined diaeresis (No idea if it's called that, I got the name from the code chart) would take two backspaces.
The mark with two dots can refer to various things.
Dieresis (a Greek word we use in English) is a phenomenon where two vowels don't merge together to make a new sound: co-operate, coöperate, or cooperate. You could also shove a glottal stop in there, but this is much less extreme. The dieresis marker isn't used much in the USA. Another common example is naïve.
Umlaut (a German word) is the opposite phenomenon: the first vowel sound is inflected by the following one, so in German Apfel (apple, pronounced UPfell) becomes Aepfel (apples, EPPfell), more commonly written Äpfel.
People usually refer to the marks by the term in their own language (umlaut, dieresis, caron, accent…). I say Umlaut in German (which doesn't mark the phenomenon, you just have to know) and dieresis in English, even though I'm talking about the same symbol.
But diacritics are used in no systematic way across different languages anyway (this is true of the letters themselves too: letters like S, W, V, and Z have different pronunciations in English and German, for example). People just needed ways to say "the sound here is similar to the sound of this letter but not really the same" or "when you say this letter, also do something else" (like, say, add stress for languages that have that).
English mostly dispenses with them, sometimes adding a letter (g vs gh) but mostly having a very casual relationship with the sound at all. You simply have to know that cooperate isn’t pronounced like chicken coop. I personally think that’s a good thing.
Pragmatically, to get from "ö" to "o" one would delete "ö" (hopefully by pressing backspace) and then type "o"; deleting the combining marks isn't actually useful.
Definitely agree with that! I use a US kbd (incl on phone) no matter what language I’m writing in. A little annoying but switching kbd layouts is more disruptive for me.
In French, è is a single character issued by a single keypress on a French keyboard, like e, or +. (Note that A is shift+a). Why should it need two backspaces? If you press e+` well you have e`, not è.
I am assuming that means "on a French keyboard", not "in French". I have a US keyboard and live in Canada... Every now and then it thinks I'm typing French, and the keyboard indeed behaves in a way where some vowel plus some quotation mark gives me some other character (that I don't need :)
That would feel very strange to me. The Canadian layout probably behaves differently, but à is a modifier letter, not two letters. I’d expect backspace to remove the whole letter, including the accent.
Behavior that depends on whether you edited something else in between, or that depends on timing, is just bad. Either always backspace grapheme clusters, or else backspace characters, possibly NFC-normalized. I could also imagine having something like Shift+Backspace to backspace NFKD-normalized characters when normal Backspace deletes grapheme clusters.
As for selection and cursor movement, grapheme clusters would seem to be the correct choice. Same for Delete. An editor may also support an “exploded” view of separate characters (like WordPerfect Reveal Codes) where you manipulate individual characters.