If such a machine existed in our universe, as it sped up its execution, its parts would approach the speed of light. To continue speeding up its execution, it must occupy a smaller and smaller volume, to keep the parts nearer and nearer to each other. Continue further, and you realize that the machine's information content can't keep growing in an ever-shrinking volume without the thing collapsing into a black hole. So, even if the machine, in any sense, "finishes" the computation, the output will live inside a black hole and thus be physically unknowable to us.
I find it so fascinating that fundamental properties of the laws of physics integrate so tightly with what is computable. The standard definition of computability involving Turing machines sounds kind of arbitrary, but those machines represent a truly fundamental concept in our universe: that which is physically knowable, since you can build Turing machines out of the stuff in the universe.
Almost. Turing machines have unbounded tape. But the universe does not contain an unbounded amount of information or space with which to work. Sufficiently large Turing computations, though theoretically finite, are not realizable in our universe. Should such computations be considered decidable or undecidable?
> Continue further, and you realize that the machine's information content can't keep growing in an ever-shrinking volume without the thing collapsing into a black hole
Non-sequitur here. You never argued why making a machine smaller and smaller makes it into a black hole. As you make circuits denser, you can also make them thinner, therefore avoiding mass increase.
The parent poster is correct. I'm not a physicist, but Nima Arkani-Hamed is, and he talks about these limits all the time. Going smaller requires more and more energy. Why, I'm not 100% sure, but it does.
At a certain smallness you get a black hole because of this (energy per volume), and more energy only gets you a bigger black hole, believe it or not. So there is a size limit according to our best theories.
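To put a rough number on it, here's a back-of-envelope sketch in Python (textbook constants, order-of-magnitude only, not real physics modelling): pack energy into a sphere smaller than the Schwarzschild radius of the equivalent mass and you get a black hole, and that radius only grows as you add energy.

```python
# Rough sketch of the energy/volume limit: the Schwarzschild radius of the
# mass-equivalent of an energy E. Squeeze that energy into a smaller sphere
# and you get a black hole; add more energy and the radius only grows.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius_for_energy(energy_joules):
    mass = energy_joules / c**2          # E = mc^2
    return 2 * G * mass / c**2           # r_s = 2Gm/c^2

# The mass-energy of 1 kg (~9e16 J) only collapses if squeezed below ~1.5e-27 m,
# but the radius scales linearly with energy, so piling on energy eventually wins.
print(schwarzschild_radius_for_energy(9e16))
```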
BTW I didn't say the OP is wrong, I said he committed a non-sequitur. Which would mean he's wrong in the sense his reasoning is wrong, but not in the sense his conclusion is false.
> Sufficiently large Turing computations, though theoretically finite, are not realizable in our universe. Should such computations be considered decidable or undecidable?
In Practice, I think Undecidable is the only "correct" answer -- because our knowledge is still very incomplete. In Theory/Principle I think Decidable is the "correct" answer because there will very probably be limits that cannot be exceeded.
... but then, to reduce this to a simpler issue: The Halting Problem becomes trivial for any amount of finite state. It just takes a really, really long time (or a huge amount of space) to decide.
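For anyone who wants the concrete version, here's a minimal sketch of why halting is decidable for anything with finitely many configurations (the `step` function here is a hypothetical stand-in for whatever the machine's transition rule is): simulate it and watch for a repeated configuration. The "huge amount of space" is the set of configurations you have to remember.

```python
# Halting is decidable for a machine with finitely many configurations:
# simulate it; a repeated configuration means it loops forever, and if no
# configuration repeats it must halt within (number of configurations) steps.
def halts(step, initial_config):
    """step(config) -> next config, or None once the machine has halted."""
    seen = set()
    config = initial_config
    while config is not None:
        if config in seen:
            return False       # revisited a configuration => loops forever
        seen.add(config)
        config = step(config)
    return True                # reached a halting configuration

print(halts(lambda n: None if n == 0 else n - 1, 10))  # True: counts down and stops
print(halts(lambda n: 1 - n, 0))                       # False: bounces 0, 1, 0, 1, ...
```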
This just came up on a separate recent HN thread... the fictional hypercomputer:
"I Don't Know, Timmy, Being God Is a Big Responsibility" by qntm:
https://qntm.org/responsibility
I used it as an interview question many years ago. I wasn't very rigorous about it, basically any plausible answer was good enough for me. Answers fell into two categories, the theorists (it's an infinite series that doesn't converge) and the pragmatists (you couldn't physically do it).
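A quick sketch of the "theorist" answer, assuming the question is the classic Thomson's lamp setup (toggle after 1 minute, then 1/2, then 1/4, ...): the toggle times converge to 2 minutes, but the on/off state is the divergent series 1 - 1 + 1 - 1 + ... and never settles.

```python
# Thomson's lamp sketch: the toggle times form a geometric series summing to
# 2 minutes, while the on/off state just keeps alternating.
def lamp_after(n_toggles):
    t, dt, on = 0.0, 1.0, False
    for _ in range(n_toggles):
        t += dt        # time of this toggle, in minutes
        dt /= 2
        on = not on
    return t, on

for n in (1, 2, 10, 50):
    print(lamp_after(n))   # time -> 2.0, state never converges
```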
As anyone who's played light switch disco will know, the light bulb will blow long before you reach the theoretical limit of the series. :D
(Actually thinking about it, that's probably not true because the switching accelerates so fast... the limiting factor is probably actually friction from moving the switch, and/or the tensile strength of the switch housing. I wonder if it would just disintegrate or if it would build up enough heat to properly explode...?)
I think the limiting factor will be the inductance of the wire and lightbulb filament, which sustains a current even when the switch attempts to break the circuit!
Ooh good point! Whatever kind of light bulb it is, it will (through inductance, thermal mass or fluorescence) continue to glow (which I’m counting as being “on”) for at least 1/60th of a second after losing AC power. So I’d say it will be glowing quite a lot at the 2 minute mark even if the switch components are forming their own singularity right nearby.
I'll give you a good in-between. The problem is underspecified because the universe in question is underspecified. You can get pretty much any result you want out of such a strange thing by specifying a more fully fleshed-out universe in which that is the answer. You can create one in which the answer is half-on, half-off, even though you initially specified your cells as only containing integers and didn't realize that your axioms entailed additional values, which themselves could be limited to rationals with the right axioms, or the computables, or the reals, or several other things.
In this, your universe may admit of a natural "averaging" operation, where you take a "half on" thing and a "quarter on" thing and produce a "three quarters on" thing in some natural manner. Playing around in my head, I find you need to be careful about your definition of "fully on" and "fully off" here; you may or may not want to permit an "infinite sequence" that contains zero changes in it. But you'll get a weirder number system if full-on and full-off are not permitted, where you can get arbitrarily close to them but not quite achieve them.
Or you may find your axioms force you to admit a new "undefined" value that, again, you didn't realize when you started that you were adding but it turns out you were. Or you could create a scenario where this actually freezes your entire universe because there simply is no way to proceed past the singularity, because your "take next step" function simply stops working and you get the mathematical equivalent of a crashed or hung program. (Just because something is "mathematical" does not imply totality; there's many, many, many cases where the answer is just "That function doesn't work there". Sometimes that leads to an exploration of "well what if we force it anyhow", e.g., "square root of negative numbers", but sometimes it really is just "this function stops here".)
An intriguing exercise for those who may want to experience this sort of "I didn't realize these axioms did that!" first hand is to look up the "surreal numbers" if you've never heard of them before. Read the definition of a surreal number and try to guess what numbers will emerge from it. Then watch as your mind is blown about what you admit when you admit those axioms. Unfortunately, I could not find a presentation of surreal numbers that started with the bare definition; for obvious reasons everyone leads with where you're going, because who reads unmotivated mathematical definitions for fun, right? Still, try to imagine what you would have expected such axioms to produce, what someone might have "intended" them to produce, versus what they actually do.
2. In the early days of the web, many people's personal sites were hosted on shared computers (e.g. my first was on my university's DEC Ultrix system). The URLs started with a reference to the user's home directory. This page is hosted on a machine that has a page listing all the users' home pages: https://www.chiark.greenend.org.uk/users.html
3. The home page has this awesome spam detector:
> If you want not to be able to send mail here in future, please send mail to tabasco@chiark.greenend.org.uk.
I would make a programming language where you couldn't write code, only detailed specifications and tests. In a VM, every possible permutation of bytes/opcodes/instructions/whatever would be tested to produce the smallest and fastest (presuming the code is to run on other machines) possible binaries that pass your tests. Ideally this would produce a Pareto front of possible outputs and you could choose from among them.
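Something like this toy sketch, scaled up enormously (the VM, opcode set, and enumeration order here are all made up for illustration; real superoptimizers do this over actual instruction sets):

```python
from itertools import product

# Toy spec-and-tests "language": you give input/output pairs, the system
# enumerates every opcode sequence shortest-first and returns the first
# (hence smallest) program that passes all the tests.
OPS = ("dup", "add", "mul", "swap")   # hypothetical stack-machine opcodes

def run(program, x):
    stack = [x]
    for op in program:
        try:
            if op == "dup":
                stack.append(stack[-1])
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
            elif op == "swap":
                stack[-1], stack[-2] = stack[-2], stack[-1]
        except IndexError:
            return None                # invalid program: stack underflow
    return stack[-1] if stack else None

def synthesize(tests, max_len=5):
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == y for x, y in tests):
                return program
    return None

# "Tests" specifying f(x) = x*x + x, without ever writing the code ourselves:
print(synthesize([(2, 6), (3, 12), (5, 30)]))   # e.g. ('dup', 'dup', 'mul', 'add')
```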
Another idea: solving old problems the hard way. Since Fermat's Last Theorem is a countably infinite problem and this computer is uncountably infinite in performance, it should be able to knock it out instantly by simply enumerating all the integers and trying them.
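To make "enumerating all the integers and trying them" concrete, here's a sketch of that search. On ordinary hardware it never returns (Fermat's Last Theorem is true, so there is no counterexample), which is exactly why you'd want the infinity machine.

```python
from itertools import count

# Brute-force hunt for a Fermat counterexample a^n + b^n = c^n with n > 2.
# Each pass re-scans a growing cube of candidates, so every triple and
# exponent is eventually reached; on a real computer this never returns.
def fermat_counterexample():
    for bound in count(3):
        for n in range(3, bound + 1):
            for a in range(1, bound + 1):
                for b in range(a, bound + 1):
                    for c in range(b + 1, bound + 1):
                        if a**n + b**n == c**n:
                            return a, b, c, n
```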
If we used such a machine to enumerate pi, what would happen? Is it proven that pi is infinitely long? If so, that would be a countable infinity, because it's digits of precision, right?
Could we write algorithms that search all the digits of pi for patterns? Or maybe messages, on the speculation that creators or others might be able to tamper with it?
Could other math things be searched? Have we proven there is no upper limit to prime numbers? I know the largest known Mersenne prime fills a novel-sized book with just one number, but when does a number get too big to be prime, if ever?
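The pattern-hunting part is easy on ordinary hardware, as long as you settle for a finite prefix of pi; a small sketch using the mpmath library (assumed installed), which the infinity machine would run with unbounded precision:

```python
from mpmath import mp, nstr, pi

# Search the first `ndigits` decimal digits of pi for a pattern. The famous
# run of six 9s (the "Feynman point") turns up within the first thousand digits.
def find_in_pi(pattern, ndigits=100_000):
    mp.dps = ndigits + 10                        # working precision
    digits = nstr(pi, ndigits).replace("3.", "", 1)
    return digits.find(pattern)                  # -1 if not in this prefix

print(find_in_pi("999999"))
```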
Yes, pi has an infinitely long decimal expansion, and there are infinitely many prime numbers.
No deep messages have been found in either yet, and I am sure some people tried, so it stands to reason that no such messages exist. Mathematical objects can be complicated and difficult to understand and go on for infinitely long without having secret messages hidden in them.
I thought it was interesting that the author went the way of add-more-via-subdivision. It feels very analog, kind of like how with old photos, the way you get more resolution is to just print them bigger (of course, what you actually get in reality is more grain).
It's different from how we model things like Turing machines, where if we need more memory, we append more tape, as opposed to subdividing the existing tape further.
Zeno's paradoxes are thought experiments. The original paradox of Achilles and the tortoise purported to show that Achilles would never be able to overtake the tortoise. But of course, in a real race Achilles would encounter no difficulty whatsoever in doing that.
The Infinity Machine almost certainly isn't physically possible, but that's because of the laws of physics, not because of Zeno.
Even with plunging cost and increasing capability... this is talking about infinite compute; it's essentially impossible to achieve, by definition.
You could dedicate the world's entire compute to the Goldbach's conjecture function he wrote up - it (would/might) never complete. You could 1000000x the world's compute and it (would/might) never complete. It's not even a problem of compute speed, it's a problem with infinity.
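For concreteness (I don't have the article's exact function in front of me, so this is just the obvious version): the search below only returns if Goldbach's conjecture is false, so if the conjecture is true, no finite amount of compute, however multiplied, ever finishes it.

```python
from itertools import count
from sympy import isprime   # any primality test will do

# Hunt for an even number >= 4 that is NOT the sum of two primes.
# If Goldbach's conjecture holds, this loop simply never terminates.
def goldbach_counterexample():
    for n in count(4, 2):
        if not any(isprime(p) and isprime(n - p) for p in range(2, n // 2 + 1)):
            return n
```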
Agreed, but there is plenty of utility that can be harvested, if we can direct our attention adequately.
Also: humans have the ability to arbitrarily redefine words, and believe those redefinitions! There is a surprising amount of leeway in this simulation.
> Agreed, but there is plenty of utility that can be harvested, if we can direct our attention adequately.
Only for problems no worse than O(n^m) where m is not much more than 1.
Greenend (the domain name in the link) is, or so goes local legend, named after an actual street in Cambridge, quite close to where I was working a decade ago. On one particular job, we'd just changed a file format for a mobile app, and the upgrade process was taking 20 minutes on test devices. The other engineer insisted it couldn't possibly be improved, despite two observations: (1) it was fine before the upgrade, and (2) it was fine once the upgrade was complete. After a bit of digging, I found an O(n^2) operation we didn't need, and turned 20 minutes into 200 ms.
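The classic shape of that sort of bug, just as an illustration (not necessarily what the actual operation was): a linear scan hiding inside a loop.

```python
# Accidentally quadratic: membership tests against a list inside a loop.
def dedupe_slow(items):
    seen, out = [], []
    for x in items:
        if x not in seen:        # O(n) scan inside an O(n) loop => O(n^2)
            seen.append(x)
            out.append(x)
    return out

# Same result, O(n): the membership test becomes a hash lookup.
def dedupe_fast(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```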
Increasing the available compute by a factor of 1024 on an O(n^2) task only compensates for n growing by a factor of 32.
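The general rule for any polynomial exponent k: a speedup of s only buys you a factor of s^(1/k) in problem size.

```python
# How much larger an input the same wall-clock time covers after a speedup,
# for an O(n^k) algorithm: n can grow by speedup ** (1/k).
def affordable_growth(speedup, k):
    return speedup ** (1.0 / k)

print(affordable_growth(1024, 1))   # 1024.0 -- linear work scales with the hardware
print(affordable_growth(1024, 2))   # 32.0   -- the factor quoted above
print(affordable_growth(1024, 3))   # ~10.08
```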