Try to build relationships and friendships with those around you, without asking for anything in return. Relationships are the best asset we have. Find, build, foster, and nourish and treasure them. It will help with mental health, emotional stability, happiness, connections, introductions.
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
1. If an outside-the-system observer cannot detect any flaws in what a RNG outputs, does the outsider have any basis for claiming a lack of randomness? Practically speaking, randomness is a matter of prediction based on what you know.
2. AI just means “non human” intelligence. An AI system (of course) can incorporate various sources of entropy, including sensors. This is already commonly done.
What a wonderful text. Easy to read, concise, clear, interesting -- and above all, important.
I would add context for 2025 about the fundamental limits this places on what (modern) AI is in principle capable of. Perhaps some "non-computable" features would need to be hard-coded into AI, so that it could at least better approximate the kinds of incomputable problems we might ask it to solve?
Also, a search of the text for "conscious" does not yield anything, which is probably a good thing. This text also reminds me of the questions like, "What does it mean to be conscious?" and, "How are human brains able to reason about (things like) incomputability, which in some sense, computers as we currently understand them could never do?" and, "What specifically beyond pure mathematics does a brain have or need in order to be conscious enough to reason about such things?"
Gödel's Incompleteness Theorem places a limit on what you can prove within a formal system. Neither humans nor LLMs are a formal system, so it says nothing about them.
Someone's going to think that, since you can formally model computer programs (and thus formally model how an LLMs runs), that means that Gödel's Incompleteness Theorem applies to computer programs and to LLMs. But that's irrelevant; being model-able doesn't make something a formal system!
"Formal system" has a very technical meaning: it has a set of consistent axioms, and the ability to enumerate formal proofs using those axioms. For example, Zermelo-Fraenkel set theory is a formal system [1]. It has nine formal axioms, which you can see listed in its Wikipedia article. Utterly different kind of thing than humans or LLMs. Comparing an LLM to ZFC is like comparing a particular watermelon to the number 4.
This is unnecessary nitpicking. You can easily derive a logical contradiction of your choice by assuming that a system that can be formally modeled can prove something that cannot be proven within a formal system. If a specific theorem doesn't prove that, a simple corollary will.
If you want to escape the limits of computability, you have to assume that the physical Church–Turing thesis is false: that reality allows mechanisms that cannot be modeled formally or simulated on any currently plausible computer.
> by assuming that a system that can be formally modeled can prove something that cannot be proven within a formal system
Something that cannot be proven within which formal system? Every true statement can be proven by some formal system. No formal system can prove all true statements.
(This is all about Gödel incompleteness. Turing incomputability does apply, though; see the sibling comments! It's important to keep them separate; they're saying different things.)
Oof, I pattern matched OP to a different argument I've seen a lot. That's embarrassing.
Yes, that's correct, it's provably impossible to construct an LLM that, if you ask it "Will this program halt? <program>", answers correctly for all programs. Likewise humans, under the assumption that you can accurately simulate physics with a computer. (Note that QM isn't an obstacle, as you can emulate QM just fine with a classical computer with a mere exponential slowdown.)
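The classical diagonal argument behind this impossibility can be sketched directly: given any candidate halting oracle, you can construct a program it must answer wrongly. A minimal sketch (`halts` is a hypothetical callback; the whole point is that no correct one can exist):

```javascript
// Given any claimed total oracle halts(p) answering "does p() halt?",
// build a program d that does the opposite of the oracle's prediction.
function makeCounterexample(halts) {
  function d() {
    if (halts(d)) {
      while (true) {} // oracle says "d halts" -> loop forever
    }
    // oracle says "d doesn't halt" -> return immediately
  }
  return d;
}
// Whatever halts(d) answers, it is wrong about d by construction.
```

The same construction applies with an LLM wrapped as `halts`, since the wrapped system is still just a computable function.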
While it's true that LLMs are not strictly a formal system, they do operate on Turing-complete hardware and software, and thus they are still subject to the broader limitations of computability theory, meaning they cannot escape fundamental constraints like undecidability or incompleteness when attempting formal reasoning.
LLMs don't do formal reasoning. Not in any sense. They don't do any kind of reasoning - they replay combinatorics of the reasoning that was encoded in their training data via "finding" the patterns in the relationships of the tokens at different scales and then applying those to the generation of some output triggered by the input.
That's irrelevant. They have an "input tape" and an "output tape", and whatever goes on inside the LLM could in principle be implemented in a TM (of course that wouldn't be efficient, but that's beside the point).
Choose any arbitrarily small segment of the real line; almost all of the numbers in it will be uncomputable.
While there are many (often equivalent) definitions of what a computable number is, an easy-to-grasp informal explanation is that a number is computable if you can write a function/algorithm f(n) that returns the n-th digit in a finite amount of time.
For Pi as an example: f(3) = 4 (from 3.14)
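That informal definition can be made concrete. A minimal sketch (the helper name `piDigits` is my own, and Machin's formula with BigInt guard digits is just one of many ways to produce the n-th digit in finite time):

```javascript
// piDigits(n): the first n decimal digits of pi as a string ("314...").
function piDigits(n) {
  const prec = 10n ** BigInt(n + 10); // 10 guard digits absorb truncation error
  const arctanInv = (x) => {          // arctan(1/x) * prec via the Gregory series
    let sum = 0n, term = prec / x, k = 0n;
    const x2 = x * x;
    while (term !== 0n) {
      sum += (k % 2n === 0n ? term : -term) / (2n * k + 1n);
      term /= x2;
      k += 1n;
    }
    return sum;
  };
  // Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
  const pi = 16n * arctanInv(5n) - 4n * arctanInv(239n);
  return pi.toString().slice(0, n);
}
// f(n): the n-th digit of pi, counting the leading "3" as digit 1.
const f = (n) => Number(piDigits(n)[n - 1]); // f(3) === 4, as above
```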
"Almost all" here means every element of an uncountable set except for a countable subset.
As the reals have cardinality 2^aleph_0 (uncountable; equal to aleph_1 under the continuum hypothesis), while the natural numbers, integers, and rationals have cardinality aleph_0 (countable), you have to find some many-to-one reduction.
Chaitin's constant is uncomputable because it is effectively random, and no random problem is decidable or (co-)recursively enumerable.
It gets mapped to the halting problem because that is the canonical example, but you could use the system identification problem, the symbol-grounding problem, the open frame problem, etc.
But to answer your question, in the real interval [0,1], the computable numbers have measure zero, and the uncomputable numbers have measure one.
So right there is an uncountable infinity of non-computable numbers.
I should've been more clear. The Reals are very unsettling because there are supposed to be so many of them, but nobody has used, named, or described more than a few which aren't computable. They are angels dancing on a pin which we've never seen and probably don't need for any of applied mathematics.
No, all the numbers most anyone actually uses are computable, and it'd say something weird about determinism if they weren't. The word "need" was carrying some weight, as in necessary vs sufficient.
It's not about individual numbers. Most applied maths fields (including physics) use the real numbers and the properties of real numbers as a complete field are implicitly assumed all the time (e.g. for differential equations). It's only ever some mathematicians, computer scientists and philosophers who care about the fact that most of them aren't computable.
If you want to argue that one could derive a theory of physics etc. that doesn't rely on the real numbers that's one thing. That's probably possible although I doubt that it would be very elegant. But it's simply not what people use.
> If you want to argue that one could derive a theory of physics etc. that doesn't rely on the real numbers that's one thing.
That wasn't my original intent, but I think it's a legitimate stance. The Reals are bizarre (Banach–Tarski, etc.), and we practically never use the non-computable ones.
> It's only ever some mathematicians, computer scientists and philosophers who care about the fact that most of them aren't computable.
Lol, who else WOULD care? :-)
> That's probably possible although I doubt that it would be very elegant.
That's an aesthetic judgment. Someone else might argue that sticking to the numbers we actually use and can calculate is simpler/cleaner/more elegant/whatever. And besides, I'm guessing you could replace ℝ with T (for Turing) and most of the proofs would go just fine.
(Although the Computables have plenty of weirdness to them too...)
> And besides, I'm guessing you could replace ℝ with T (for Turing) and most of the proofs would go just fine.
This is wrong. The least-upper-bound (LUB) property doesn't hold for the computable reals, and this means that a lot of classical proofs straight up don't work anymore.
That's not to say you can't build a different kind of real analysis built only on the computable reals, some mathematicians have tried to do similar things in the past, but it's going to be a different framework with not exactly the same theorems and not exactly the same proofs; for example, there are no discontinuous functions anymore.
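A concrete witness for the LUB failure is a Specker sequence. Let a_1, a_2, … be a computable enumeration of the halting set (dovetail all programs and list each one as it halts), and take the partial sums:

```latex
x_k \;=\; \sum_{i=1}^{k} 2^{-a_i}
```

Each x_k is a computable rational, and the sequence is increasing and bounded above, yet its least upper bound encodes the halting problem and is therefore not a computable real.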
Yeah, I'm sure there are some holes (poor attempt at a pun), but you already know you can get arbitrarily close to whatever bound with rationals, much less computables, so it seems like someone better educated than me could shore that up.
Anyways, the computables have unfortunate properties too (not knowing if you'll eventually need to carry for addition, or when to stop testing for equality), so I'm left wondering if some genius in the future will come up with a new way to build the numbers which are more satisfying.
These are distinct senses of "random". Pi is conjectured (but not proven) to be a so-called normal number, which means that n-grams have uniform frequencies, so if you slide a window of size n over the digits, then every sequence of n digits will come up with equal frequency in the limit.
Chaitin's omega is "random" in a stronger sense. There is no finite algorithm that can produce the digits. There's no compact, finite representation from which you could "unpack" those digits. The digits are what they are, the best way to represent them is simply to list them.
Pi's digits are very "regular", but the pattern is not about repetitions in digits or skewed frequencies of certain digit sequences but that the digits follow from simple rules. A short program can generate those digits to arbitrary length. So in some sense they have redundancy, a kind of pattern to them.
π is a transcendental number, meaning that it is not algebraic (not a root of any nonzero polynomial with rational coefficients).
It is NOT an algorithmically random sequence.
Chaitin's halting probability is normal, transcendental, and non-computable.
https://arxiv.org/pdf/0904.1149
> Chaitin [G. J. Chaitin, J. Assoc. Comput. Mach., vol. 22, pp. 329–340, 1975] introduced Ω number as a concrete example of random real.
Being non-algebraic is not the same as being algorithmically random in the formal sense.
If I had children aged 7-17 and felt China was intentionally nudging them via algorithmic suggestions away from STEM and toward vapidness, and if I was unable to control their access to it, I guess I might appreciate that my government had banned it. But, as others have mentioned, it sets a dangerous precedent. If nothing else, this attempted ban has raised national awareness about the negative impacts of TikTok. What could the US Federal Government do instead, assuming it is important to consider such platforms as per their effects on the population?
If China sold candies that contained poison and marketed them to US children, it would be easy, since the FDA prohibits this. If the FDA didn't exist, perhaps poisoned candy sales would prompt the creation of such a regulatory body.
So I guess I oppose the ban while recognizing the danger, and suggest we consider regulating digital goods in the same manner as consumable foods; if provable harmful effects are evident then that is grounds for a ban of a product on the basis of health protection.
The forced divestment is for national security reasons. Bytedance, as a Chinese company, is required by law (Cybersecurity Law of the People's Republic of China) to provide full data access to the Chinese government on request, and they are compelled not to reveal when this occurs. Since this is done through legitimate channels (on Bytedance's side), this won't even be caught with an audit. So you have a situation where an app installed on half of America's phones shares all its data with China, along with any potential changes the government recommends for influencing the content.
Meta was selling data to Chinese groups and buried a report stating this until recently. This has nothing to do with national defence and everything to do with ensuring American companies control the narrative without competition.
Meta & co are required by US law to do the same for people in the rest of the world. Didn't see a huge US outcry about that; in fact, I saw a lot of hate for things like GDPR.
The hate for the GDPR I read of is actually about the "allow cookie" popups, which aren't needed at all and are just a form of protest by those individual sites, because they are storing and selling personal information including IP addresses.
If you aren't engaged in those practices then there's no need for any GDPR annoyances for users.
The "allow cookies" popup was already there because of the cookie banner law; unfortunately GDPR did not stop the cookie law, but GDPR does say you need a way to agree to tracking etc. and to be informed when it happens, so it sort of seems reasonable that the allow-cookies popups would be used for this.
I think the easier framework is this: China has banned her citizens from using most United States-based social networks. This prevents American companies from accruing profit from Chinese citizens and advertisers, and shrinks their potential pool of user data for refining algorithms or selling. As such, it's effectively a trade policy for us to in turn ban her social networks. Unless and until we are equally able to harvest Chinese data and suck yuan out of China, she will not be allowed to harvest American data and suck dollars out of here.
China is classified as a foreign adversary, so this goes beyond trade policy. Foreign adversaries show a pattern of conduct that threatens national security. People are not comfortable with foreign adversaries having a direct line to our youth's attention and having their finger on the dial.
It’s overly simplistic. It doesn’t take into account the ideals the USA was founded on (including free speech as an inalienable right), nor does it take into account the large shift in US government policy.
That’s asinine. Every nation responds to things such as tariffs with a proportional response.
We have plenty of evidence that the U.S. has been harmed with our open approach to unfettered access to our electronic systems. Meanwhile our geopolitical adversaries have no qualms about fire walling their citizens from accessing foreign networks at all.
This is a clear case where the U.S. should treat them as they treat us. IMO any 1st amendment arguments are made in bad faith because there are no shortages of non-hostile channels for Americans to speak freely and openly.
Does anyone else remember “free speech zones” from the Iraq War era? Where was this argument then?
Wait, so your position is that the US is harmed by being the global master of the internet (through companies like Meta that are synonymous with the internet in some places in the world) and that we should build the great American firewall to keep our internet in and other's out?
Well, it does offer an avenue for enacting some form of ban. And I'm not so sure it's all that morally low.
Because what China's ban of US social media might say is that China recognizes social-media's power to influence the populace (think Russia's use of Twitter and FB in the 2016 election). Yeah, am actually in favor of some form of restrictions, because we as a country need to realize that social media is a tool that can be used against us.
Yeah, if it was an outside country owning a major US newspaper, it'd be more clearcut.
If social media is so bad let's regulate ALL of it, US firms included.
Personally I fear US-controlled social media more than Sino-controlled ones. It's not like the CCP can come and arrest me here in the US, or really use my data against me in any way. Both have plenty of reasons to throw propaganda at me, or censor certain viewpoints.
At least when it's a foreign country I have a chance of seeing through it, compared to so many domestic media sources currently licking Trump's feet. I'm supposed to feel secure that they'll be telling me the truth over the next 4 years? Acting in my interest?
I understand the sentiment, and in the short term I agree with you (what is a foreign-controlled TikTok going to do to us?). But I think giving an adversarial foreign power direct influence over our online behavior doesn't sound like a good idea. It reminds me of WWII and the codebreakers in England who broke the German codes. They didn't use it to decode messages all the time, only during significant events, because they didn't want to alert the Germans. Likewise, control over a major social media outlet could be a tool used selectively during important events to influence the masses.
Not a good idea in my opinion, but I'm no expert in this.
That's fair. I do think China would probably just omit viewpoints rather than outright lie and it'd be hard to detect.
I guess I don't personally fear China as much as Congress does, I view them more as a rival than an enemy. I wish them well, and hope they make more cool apps and games for me to enjoy. And they'd probably love nothing more than for us to overthrow our oligarchs (just like we want to see the fall of the CCP), and that's not necessarily against my interests as a regular Joe.
I can understand why Congress is concerned, I just wish they'd try to address the root of the problem (enforce common safety and transparency standards for all social media) rather than the targeted approach they've taken.
meta also cannot arrest you here in the US. we should maintain tighter controls on the state's use of data and access thereto because it's the entity we've given a monopoly on violence. that is the appropriate point for controls to be applied.
> If China sold candies that contained poison and marketed them to US children, it would be easy, since the FDA prohibits this.
The FDA was created by an act of Congress, as was this ban. These are identical scenarios -- the FDA has a mandate to block certain things, as does the TikTok ban. What's being debated is the constitutionality of it; and there are arguments both ways, but it seems very likely that the ban will hold.
> Would HN be OK with European governments banning Meta, X, Discord etc?
I'm a bit surprised it hasn't happened yet, although those companies are also willing to adjust policies in foreign nations—for instance, Meta saying it won't eliminate fact checking outside of the US.
A very naive and hopeful part of me would wish for Facebook, Twitter, and other vapidness-enhancing platforms be regulated too. But the untrusting, freedom loving red-blooded American in me is also wary of government controls and power consolidation bordering on censorship. No easy answers I suppose; we'll just have to find a way to thrive in spite of platforms that profit from our wasted time.
I think the US social media mega corps are kindred spirits and if TikTok is considered harmful/propaganda then so are the US products. The subject draws an uncomfortable amount of heat.
I think that's where we were with seatbelts in the 1950s, tobacco in the 1920s and alcohol in the 1850s. In all of those cases, society ultimately decided that guardrails were needed.
Speed-ran the game using this (well, I injected jquery first to select the element using $() because I'm an absolute Baboon) in about 45 seconds, spam clicking all the upgrades, and clicks stopped going up after hitting "342,044,125,797,992,850,000,000,000,000 stimulation" with 10k clicks per second.
What a ride. Love the implied commentary on our over-stimulated lives!
Fun fact: browsers' devtools consoles have de-facto standardized convenience aliases for querying the DOM, similar to jQuery [0][1][2][3][4]. This means you could do something as simple as:
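(a sketch; `button` is a placeholder selector standing in for whatever element the game actually uses)

```javascript
// $ is the devtools console's alias for document.querySelector.
// Replace 'button' with the game's real click target.
const spam = setInterval(() => $('button').click(), 0);
// ...and later stop the spam with: clearInterval(spam);
```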
to create the simplest dependency-free cheat speed runner. (And, as mentioned earlier, shrinking -- or logically also zooming in -- the page results in more DVD bounces.)
Ah, thanks for the heads-up; apparently something is borked in Chromium wrt $ / $$ encapsulation, as they are not reachable from the (global) context of setInterval, so doing `window.$ = $; window.$$ = $$;` fixes that in Chrome. Not sure why. (Yet again I embarrassed myself by trying a snippet that "simply must work ® according to all documentations ™" in a single browser only before posting. Sigh.)
I bet it's working as intended. The $ symbol is probably a special feature of the console and is not intended to be a property of window. Inside setInterval, the function is no longer being executed in the special console environment, which has access to that symbol.
Yes, I guess there could be some intention behind that, presumably some security precaution, but still: the fact that you can see $ on globalThis (as a non-enumerable prop), and that the globalThis you see from inside the timeout-ed function is strictly equal to the globalThis seen directly from the console, makes it somewhat spooky.
And it (`setTimeout(()=>{console.log(typeof $==="function")},0)`) works in Firefox. (Interestingly, you cannot get $'s property descriptor there, but it is always available in the timeout.)
131,903,042,042,866,960,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 stimulation per second
I envy your rig - mine glitched a lot to get it in <3min. Might not be doing myself a service by actually answering the Duolingo questions via LLM... https://www.youtube.com/watch?v=I-J0ppP-H9s
Wonderful article on fractals and fractal zooming/rendering! I had never considered the inherent limitations and complications of maintaining accuracy when doing deep zooms. Some questions that came up for me while reading the article:
1. What are the fundamental limits on how deeply a fractal can be accurately zoomed? What's the best way to understand and map this limit mathematically?
2. Is it possible to renormalize a fractal (perhaps only "well behaved"/"clean" fractals like Mandelbrot) at an arbitrary level of zoom by deriving a new formula for the fractal at that level of zoom? (Intuition says No, well, maybe but with additional complexities/limitations; perhaps just pushing the problem around). (My experience with fractal math is limited.) I'll admit this is where I met my own limits of knowledge in the article as it discussed this as normalizing the mantissa, and the limit is that now you need to compute each pixel on CPU.
3. If we assume that there are fundamental limits on zoom, mathematically speaking, then should we consider an alternative that looks perfect with no artifacts (though it would not be technically accurate) at arbitrarily deep levels of zoom? Is it in principle possible to have the mega-zoomed-in fractal appear flawless, or is it provable that at some level of zoom there is simply no way to render any coherent fractal or appearance of one?
I always thought of fractals as a view into infinity from the 2D plane (indeed the term "fractal" is meant to convey a fractional, non-integer dimension). But I never considered our limits as sentient beings with physical computers that will never be able to fully explore a fractal; thus it is only an infinity in idea, and not in reality, to us.
> What are the fundamental limits on how deeply a fractal can be accurately zoomed?
This question is causing all sorts of confusion.
There is no fundamental limit on how much detail a fractal contains, but if you want to render it, there's always going to be a practical limit on how far it can accurately be zoomed.
Our current computers kinda struggle with hexadecuple precision floats (512-bit).
1. No limit. But you need to find an interesting point, the information is encoded in the numerous digits of this (x,y) point for Mandelbrot. Otherwise you’ll end up in a flat space at some point when zooming
2. Renormalization to do what ? In the case of Mandelbrot you can use a neighbor point to create the Julia of it and have similar patterns in a more predictable way
3. You can compute the perfect version but it takes more time, this article discusses optimizations and shortcuts
1. There must be a limit; there are only around 10^80 atoms in our universe, so even a universe-sized supercomputer could not calculate an arbitrarily deep zoom that required 10^81 bits of precision. Right?
2. Renormalization just "moves the problem around" since you lose precision when you recalculate the image algorithm at a specific zoom level. This would create discrepancies as you zoom in and out.
3. You cannot; because of the fundamental limits on computing power. I think you cannot compute a mathematically accurate and perfect Mandelbrot set at an arbitrarily high level of zoom, say 10^81, because we don't have enough compute or memory available to have the required precision
1. You asked about the fundamental limits, not the practical limits. Obviously practically you're limited by how much memory you have and how much time you're willing to let the computer run to draw the fractal.
1. Mandelbrot is infinite. The number pi is infinite too and contains more information than the universe
2. I don't know what you mean by, or are looking for with, renormalization, so I can't answer more
3. It depends on what you mean by computing Mandelbrot. We are always making approximations for visualisation by humans; that's what we're talking about here. If you mean we will never discover more digits of pi than there are atoms in the universe, then yes, I agree, but that's a different problem
Pi doesn't contain a lot of information since it can be computed with a reasonably small program. For numbers with high information content you want other examples like Chaitin's constant.
> Pi doesn't contain a lot of information since it can be computed with a reasonably small program.
It can be described with a small program. But it contains more information than that. You can only compute finite approximations, but the quantity of information in pi is infinite.
The computation is fooling you because the digits of pi are not all equally significant. This is irrelevant to the information theory.
No, it does not contain more information than its smallest representation. This is fundamental, and follows from many arguments, e.g., Shannon information, compression, Chaitin's work, Kolmogorov complexity, entropy, and more.
The phrase “infinite number of 0’s” does not contain infinite information. It contains at most what it took to describe it.
Descriptions are not all equally informative. "Infinite number of 0s" will let you instantly know the value of any part of the string that you might want to know.
The smallest representation of Chaitin's constant is "Ω". This matches the smallest representation of pi.
"Representation" has a formal definition in information theory: it matches a small program that computes the number, but does not match "pi" or "omega".
No, it doesn't. That's just the error of achieving extreme compression by not counting the information you included in the decompressor. You can think about an algorithm in the abstract, but this is not possible for a program.
You seem wholly confused about the concept of information. Have you had a course on information theory? If not, you should not argue against those who’ve learned it much better. Cover’s book “Elements of information theory” is a common text that would clear up all your confusion.
The “information” in a sequence of symbols is a measure of the “surprise” on obtaining the next symbol, and this is given a very precise mathematical definition, satisfying a few important properties. The resulting formula for many cases looks like the formula derived for entropy in statistical mechanics, so is often called symbol entropy (and leads down a lot of deep connections between information and reality, the whole “It from Bit” stuff…).
For a sequence to have infinite information, it must provide nonzero “surprise” for infinitely many symbols. Pi does not do this, since it has a finite specification. After the specification is given, there is zero more surprise. For a sequence to have infinite information, it cannot have a finite specification. End of story.
The specification has the information, since during the specification one could change symbols (getting a different generated sequence). But once the specification is finished, that is it. No more information exists.
Information content also does not care about computational efficiency, otherwise the information in a sequence would vary as technology changes, which would be a poor definition. You keep confusing these different topics.
Now, if you’ve never studied this topic properly, stop arguing things you don’t understand with those who have learned it. It’s foolish. If you’ve studied information theory in depth, then you’d not keep doubling down on this claim. We’ve given you enough places to learn the relevant topics.
... and how would you decode that information? Heisenberg sends his regards.
EDIT: ... and of course the point isn't that it's 1:1 wrt. bits and atoms, but I think the point was that there is obviously some maximum information density -- too much information in "one place" leads to a black hole.
Fun fact: the maximum amount of information you can store in a place is the entropy of a black hole, and it's proportional to the surface area, not the volume.
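For reference, that bound is the Bekenstein–Hawking entropy of a black hole whose event horizon has area A:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 G \hbar}
```

The area (rather than volume) scaling is the observation that the holographic principle generalizes.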
We can create enough compute and SRAM memory for a few hundred million dollars. If we apply science, there are virtually no limits within a few years.
In the case of Mandelbrot, there is a self-similar renormalization process, so you can obtain such a formula. For the "fixed points" of the renormalization process, the formula is super simple; for other points, you might need more computations, but it's nevertheless an efficient method. There is a paper by Bartholdi where he explains this in terms of automata.
As for practical limits, if you do the arithmetic naively, then you'll generally need O(n) memory to capture a region of size 10^-n (or 2^-n, or any other base). It seems to be the exception rather than the rule when it's possible to use less than O(n) memory.
For instance, there's no known practical way to compute the 10^100th bit of sqrt(2), despite how simple the number is. (Or at least, a thorough search yielded nothing better than Newton's method and its variations, which must compute all the bits. It's even worse than π with its BBP formula.)
Of course, there may be tricks with self-similarity that can speed up the computation, but I'd be very surprised if you could get past the O(n) memory requirement just to represent the coordinates.
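The Newton's-method point can be made concrete: every iterate carries the full n-digit number, so memory grows linearly with the target precision. A minimal BigInt sketch (the function name is mine):

```javascript
// First n+1 decimal digits of sqrt(2) via integer Newton iteration.
// Every iterate is an O(n)-digit BigInt: to learn digit n you carry
// all earlier digits through the whole computation.
function sqrt2Digits(n) {
  const target = 2n * 10n ** (2n * BigInt(n)); // isqrt(target) = floor(sqrt(2)*10^n)
  let x = 2n * 10n ** BigInt(n);               // initial guess, safely above the root
  let y = (x + target / x) / 2n;               // one Newton step
  while (y < x) {                              // monotone descent to the integer sqrt
    x = y;
    y = (x + target / x) / 2n;
  }
  return x.toString(); // "14142135..."
}
```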