As with many things Apple does, I think this will not work well worldwide, as is the case with DDG. In my country, Google Maps and Google Search blow everything else away.
> Nearly a century ago, Alonzo Church invented the simple, elegant, and yet elusive lambda calculus. Along with Alan Turing, he then proved the Church-Turing thesis: that anything computable with a Turing machine can also be computed in the lambda calculus.
The Church-Turing thesis is not something one can prove. It's more like an intuition and/or definition we have. It states that anything that is computable can be computed with a Turing machine. See [1].
"Anything computable with a Turing machine can also be computed in the lambda calculus" is true, but it is not the Church-Turing thesis; it is called Turing equivalence. It was also not proved by Church, but by Turing, in an appendix to his paper [2].
About a computation model being Turing-equivalent: the fact that most reasonable computation models we have come up with have been proven Turing-equivalent is one of the reasons we believe the Church-Turing thesis.
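To give a flavour of why the lambda calculus can express what a Turing machine can, here is the standard Church-numeral encoding sketched in Python, with closures standing in for raw lambda terms (the names `zero`, `succ`, etc. are the conventional ones, but this is just an illustrative toy, not a formal proof of equivalence):

```python
# Church numerals: a natural number n is encoded as the function that
# applies its argument f exactly n times. Arithmetic then falls out of
# function composition alone -- no built-in integers needed.

zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # f^(m+n)
mul  = lambda m: lambda n: lambda f: m(n(f))       # (f^n)^m = f^(m*n)

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
print(to_int(mul(two)(three)))   # 6
```

Everything above is pure single-argument functions, which is exactly the fragment the lambda calculus provides; recursion (via fixed-point combinators) gets you the rest of the way to Turing equivalence.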
Another statement of the Church-Turing thesis is that the functions computable by a Turing machine (equivalently, the lambda-computable or general recursive functions) make a good proxy for the intuitive concept of computability.
From this perspective, "thesis" is a misnomer: it just happens to be a useful definition for "computability".
That's not to say that there aren't edge cases where intuitive "computability" and Turing computability disagree; for most people, any number of oracle machine constructions will fall under the former but not the latter. We've just chosen to define the term in a way that's "easy" to define somewhat rigorously and has useful properties.
It's kind of just like if we were to call the usual definition of the reals "Cauchy's thesis" or somesuch.
The Church-Turing proposition carries an implicit challenge: find an algorithm to evaluate a function which is not Turing computable. Because it can be challenged in this way, it’s not just a definition (mere stipulation how to use the word “computable”).
Back in 200 B.C. we had something called the "Euclid-Archimedes Thesis" (okay, it wasn't literally called this), which stated that every measurable curve was either a line or a circle. The idea that there could be a way to measure the length of an arbitrary irregular curve was considered simply impossible. And this thesis went unchallenged for an amazing 1,800 years, until the 17th century, when several mathematicians constructed measures for various irregular curves.
Be very skeptical about "theses" that are handwavy about a frontier subject (pure, abstract geometry was just as much of a frontier subject as abstract machines) suspiciously lacking any proof.
Turing machines and Turing computability don't impose any finite constraints. Both unbounded time and unbounded storage space are assumed for a universal Turing machine. We obviously have never built one.
There are many finite constraints. First, there is no infinite time [1]. The most famous uncomputable problem is the halting problem (given an algorithm A and an input x, will A stop on x?). Although we have an infinite tape, since the time of a computation is finite, the space it uses is finite too. Second, we describe a Turing machine using finite sets.
[1] Actually, as described by Turing, a Turing machine never stops, but this is only because he was interested in computing real numbers (for example pi). But even using an original Turing machine, we do not "compute" pi; we only "compute" pi to some precision.
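The undecidability behind the halting problem can be sketched in Python as the classic diagonal argument (a toy illustration, not anything from Turing's paper; `make_paradox` and the candidate decider are made-up names):

```python
def make_paradox(halts):
    """Given any candidate decider halts(prog, arg) -> bool,
    build a program that the candidate necessarily misjudges."""
    def paradox(prog):
        if halts(prog, prog):
            while True:      # decider says we halt, so loop forever
                pass
        return               # decider says we loop, so halt immediately
    return paradox

# Any concrete guesser is defeated. A decider that always answers False
# claims paradox(paradox) loops -- but then paradox(paradox) halts:
always_no = lambda prog, arg: False
p = make_paradox(always_no)
print(p(p) is None)   # True: p(p) returned, contradicting always_no
```

A decider that answers True is wrong symmetrically: the constructed program would then loop forever. Since every candidate `halts` is defeated by its own paradox program, no total, always-correct `halts` can exist.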
In a way, anything discrete is a logical assertion that exists purely in imagination. Base reality is continuous in every way. It’s imaginary human reality to separate things apart from the whole. Physics and philosophy have struggled with this over atom vs non atom view of “base reality” - interesting to see how it shows itself in many different ways. Anything quantized runs into issues with consistency, as the quantity only works in an imaginary model. No model is sufficient as it has a fixed size of requirements to be a model.
(I’ve also been wondering for a few years whether it is possible to make a version of quantum mechanics in which the wavefunctions take values in a (very large) finite field instead of the complex numbers. My impression so far is “probably not”, because you need exponentials, and the multiplicative group of a finite field doesn’t have a nontrivial group homomorphism to/from the additive group. Also, the simple way of defining differentiation for functions over finite fields has 0 as the only eigenvalue of differentiation, and I’m thinking that the other definition I made up for it does too. I thought I had seen some definition of differentiation for finite fields which maybe does have a non-zero eigenvalue,
Yeah found it, https://arxiv.org/abs/1501.07502
but, this doesn’t have the “derivative” of a constant be 0, so I’m not sure that it would be suitable either.
So, my guess is that one probably can’t make quantum mechanics work with wave functions taking all their values from a finite field. Still going to keep looking a bit more though.
)
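The homomorphism obstruction mentioned above can be brute-forced for a small prime: the additive group of GF(p) has order p, the multiplicative group has order p-1, and gcd(p, p-1) = 1, so the only map with exp(a + b) = exp(a) * exp(b) is the trivial one. A quick exhaustive check (`is_hom` is an ad-hoc helper written for this sketch, not a library function):

```python
from itertools import product

p = 7  # any small prime works; runtime grows fast with p

def is_hom(f):
    """f is a p-tuple giving a map GF(p) -> GF(p)*.
    Check f(a+b) == f(a)*f(b) (mod p) for all a, b."""
    return all(f[(a + b) % p] == (f[a] * f[b]) % p
               for a in range(p) for b in range(p))

units = range(1, p)  # GF(p)* = {1, ..., p-1}
homs = [f for f in product(units, repeat=p) if is_hom(f)]
print(homs)   # [(1, 1, 1, 1, 1, 1, 1)] -- only the trivial map survives
```

So there is no finite-field analogue of x -> e^x in this direct sense, which is the "probably not" intuition above made concrete for p = 7.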
Maybe your intuition provides an analogous but different conclusion that may be more suitable to novel conceptual models of reality. Pursue your intuition, I’d say. I’m on a similar path.
Oddly, I find music to be a good hint of reality lately. I wasn’t too interested in music theory, but now that I'm digging into it, the geometrical spaces it shadows are inspiring. Projective geometry allows continuity. Axiomatically it also allows arithmetic, among other things. Very intriguing.
The unfortunate truth is that once you pick apart the definitions of deep things like logic and math, you see that we don’t even all agree on them. We like to think that because we can describe something with math it’s inherently true, but the answer is pretty much always up for debate.
We found flaws in basic set theory and had to move to the updated Zermelo-Fraenkel set theory because of it. People try to create new foundations of math all the time, for varying reasons.
Anyway, I see so many people speak with such conviction about truth and objectivity, that it’s refreshing to see someone show humility about what we actually know about our universe. Lots is still up for debate, is the sad truth.
> In a way, anything discrete is a logical assertion that exists purely in imagination. Base reality is continuous in every way.
This is a strong statement. What about any thing that can be counted? Apples, sheep, atoms in a molecule, permutations of objects? Certainly these are discrete quantities?
Just because you can recognize something as a unit doesn’t mean that it isn’t continuous fundamentally.
If you look at your hand you may count your fingers but they’re still part of your body as a whole. Any “border” is an arbitrary limit that is fully in the domain of human imagination. Our senses of the world allow us to conceptualize arbitrary borders and units of a continuum.
Base reality is continuous, absolutely nothing exists without the whole. That is to say, only the whole exists. Anything in the finite will never be adequate to describe true reality.
Physics actually does understand this quite a bit. Much of theoretical physics is divorced from reality to the point where quantum physics itself is nowhere near describing reality. It may help describe certain properties based on our observations but many postulations are dependent on phenomena that are entirely non physical.
There are factions within physics and the “atomic mathematical model centric” faction has the popular voice. There exists within physics those who do not see the universe as atomic. You don’t hear much about them today because information is gated.
Even if something is continuous, that doesn’t imply that all borders within it are arbitrary. And even among borders that are somewhat arbitrary, not all of them are equally arbitrary. Whether a metal is iron or lead is a kind of question that aligns more with the workings of the universe than the question of whether the current month is September or October.
The goal of cleaving reality at the joints is not entirely hopeless.
In a bimodal distribution, there is a point which is a local minimum for the probability density which lies between two local maxima of probability density.
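The bimodal point can be made concrete with a hypothetical equal mixture of two Gaussians (the means, width, and grid here are arbitrary choices for illustration):

```python
import math

def density(x, mu1=-2.0, mu2=2.0, sigma=1.0):
    """Equal-weight mixture of two normal densities."""
    g = lambda m: math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
    return (g(mu1) + g(mu2)) / (2 * sigma * math.sqrt(2 * math.pi))

# Sample the density on a grid and find its interior local minima.
xs = [i / 100 for i in range(-500, 501)]
ys = [density(x) for x in xs]
minima = [xs[i] for i in range(1, len(xs) - 1)
          if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]
print(minima)   # a single trough, at x = 0 by symmetry
```

The trough at x = 0 is the natural "border" between the two modes: a cut point that is suggested by the distribution itself rather than imposed arbitrarily.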
As to whether an infinite amount of classical information is needed to perfectly describe the behavior of physical objects, I am agnostic. As to whether a bounded-in-size physical object can be used to store an unbounded amount of classical information, I’m fairly confident that this is not possible (as a thing that happens to be true of the universe, not something required by logic alone).
(I am not arguing that “whether a sample is entirely lead” is a perfectly discrete question, as, I hear that it is thought that on extremely extremely long time scales that quantum tunneling may cause collections of lighter-than-iron elements to combine and form iron, and so even in a sample of “pure” [some element lighter than iron], maybe there might basically immediately be some minuscule component of the wave function in the “these atoms combined into some heavier element” direction? I’m not sure. I don’t know whether that would decohere or whatever immediately and therefore with high probability go back to a state of purely the one element? Idk.
What I am claiming is that even if there is a continuous path connecting states we would call element A with those we would call element Z, it is nonetheless natural and _Correct_ to make a distinction between different elements, and these distinctions are more natural and less arbitrary than other distinctions we might make.)
Yeah I appreciate your thoughts in this realm. It’s balanced. Good to think about. I am of the perspective that it’s all a whole and interactions within a whole create observational effects. I don’t think there’s an absolute answer as much as the consideration of being able to maintain multiple perspectives of thought as equally valid depending on the need.
Distinction is almost strictly a utility of communication. But that could be expanded into language and mind itself and gets meta quickly.
Maybe my poking is more related to an irk that language itself being used is in the context of finality and truth. Every sentence is a statement of absolute truth. No wonder everyone argues endlessly over what is.
My understanding is that the halting problem is not computable even given infinite time.
It might be possible to show that with some degree of infinity the halting problem is solvable, but giving yourself infinite tape and time emphatically does not solve the problem.
I need to collaborate with other people and because of COVID we are doing everything remote. Also modifications would be hard and time consuming with pencil and paper.
> the supremum of an increasing sequence is equal to the limit
-- this is not misinformation (and to anyone familiar with some introductory analysis, correct[1]). Of course, calling it "dogma" is a bit inflammatory, but not technically wrong. It's kind of a made-up rule to help us work with infinities (particularly in ℝ -- but it happens all the time in set theory, as well).
But to agree with GP, touting it as "intuitive" or "mind-blowing" is indeed silly.
I do think it is technically wrong to call it dogma: the decimal is a geometric series with a limit, right? And limits have an unambiguous definition; for an increasing sequence like this one, it's the smallest value that the series approaches but never exceeds as it tends to infinity. I think the part that is admittedly weird is that the notation "0.999..." refers to the limit of the series as it tends to infinity, and the notation kind of hides that fact from you. Even just writing the geometric series down and plopping "infinity" in as the value for x would be wrong; it's the limit as x tends to infinity that is equal to 1. So there's arguably more hidden notation than the ellipsis implies, but nothing is pulled out of a hat here or defined for definition's sake.
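The hidden-notation point can be made concrete with exact rational arithmetic: no finite partial sum of the series equals 1, but the gap shrinks below any epsilon, which is exactly what "the limit is 1" means (a small sketch; `partial_sum` is just a helper name for this illustration):

```python
from fractions import Fraction  # exact arithmetic, no float rounding

def partial_sum(n):
    """0.99...9 with n nines, as an exact rational:
    sum of 9/10^k for k = 1..n, which equals 1 - 10^-n."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 20):
    s = partial_sum(n)
    print(n, s == 1, float(1 - s))   # never equal to 1 for finite n...

# ...yet the gap 1 - s_n is exactly 10^-n, smaller than any fixed
# epsilon once n is large enough. "0.999..." names that limit, not
# any of the partial sums.
```

So "0.999... = 1" is a statement about the limit of the sequence of partial sums, not about any term in it; the ellipsis is doing the work of a limit operator.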
> the decimal is a geometric series with a limit, right..
Right, but that's not really the crux of the matter. Hint: look at how the supremum is defined[1]. The definition of the supremum is how we end up with 0.999... = 1.
I suppose my point is that you could turn the repeating decimal into an infinite series, and a student might accept that; you could define the supremum, and they might agree that it is 1. But then you ask them whether the series is equal to the supremum, and they don't know what to do with the series, so they turn it into a sequence. Now they ask whether the last item in the sequence is equal to the supremum. Of course not! That is true by definition.
And now you realize that you and the student have been operating by different rules. Their rules of equality are based on symbolic equality, so you actually have to relax the rules a bit to make limit equality work. And then, more importantly, you have to show that all the other rules are still intact. Actually, in this case, they aren't. Symbolic equality involving infinity is now horribly broken, and you have to express all equality in terms of limits to maintain consistency. Explore this further and you keep finding more inconsistencies that have to be settled by new rules that define new areas of mathematics.
So who is right? The natural world appears to be much more permissive than limit equality, preferring epsilon-equality. Symbolic equality is the only purely self-consistent system, but you can't do much with it. It's also possible that the natural world works with symbolic rules (quantum) but the complexity is great enough to resemble epsilon equality (continuum).
So, .999... == 1 by tautology. It's not some brilliant mathematical insight. The interesting part is the consequence of defining it as so.
Congratulations, looks like a really nice language.
One nitpick: I think "val" looks too much like "var", and this will make it harder to differentiate them when skimming code. I suggest changing "val" to something like "const", or "fix", or "imm"...
The only one of those that does not, by its name, indicate to me that it is about mutability is Swift's. I guess you could argue that in math, when someone says "let x be 2", then x is constant. In the context of programming, though, I am more familiar with var and let in JS, where the distinction is not at all about mutability.
I think the name is really bad in Brazilian Portuguese. Here "pika"* is a slang for dick. At the same time, it is still the sound Pikachu makes ("pika pika"). So maybe there is no problem.
* I don't know about European Portuguese.
The correct spelling for the slang would be "pica", but both have the same pronunciation.
I don't know why but someone edited my comment. The original had 2 footnotes which were combined into one. Now it is ambiguous. Thank you for your disservice.
Most likely, you had a single line break between the two footnotes, but you need two to actually start a new paragraph. And if you started your second footnote with two asterisks, that gets read as an empty italicized string.
> This was probaly funny for the writer, but is almost never a good idea, since the image makes no more sense as soon as you switch languages
He is writing in English; why should he be concerned about whether it makes sense in other languages? There are many English expressions that have no exact translation. Should writers avoid using those expressions too?
Also, as a curiosity, "terminal" has the same ambiguity in Brazilian Portuguese.
That's a good question -- I should not have been so terse/flip. I agree they do find some interesting issues and that's why I liked the paper.
On the practicality side: if I read the paper correctly, this is a lint-like tool for finding suspected problems (hence, e.g., the "litmus tests"; the tool iterates out the possible states, which is great). It isn't a tool for finding new problems. The compiler testing is interesting (I'm a former compiler guy), but reading Sec 7 didn't lead me to believe it could identify code generation problems in cases of very high optimization.
Also, a lot of recent security issues have been in the implementation of the microarchitecture (not just microcode bugs, but functional unit errors such as the recent hyper-threading leaks).