
"Representation" has a formal definition in information theory: it matches a small program that computes the number, but it does not match the name "pi" or "omega".



No, it doesn't. That's just the error of achieving extreme compression by not counting the information you included in the decompressor. You can think about an algorithm in the abstract, but a program is a concrete string of symbols, and its length has to be counted.
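To make the accounting error concrete, here's a minimal Python sketch (zlib and the variable names are purely illustrative): counting only the payload looks like extreme compression, while the honest description length includes the decompressor too.

    import zlib

    message = b"the quick brown fox jumps over the lazy dog " * 100
    payload = zlib.compress(message)

    # Dishonest accounting: count only the compressed payload.
    print(len(payload))  # looks impressively small

    # Honest accounting: the decompressor program is part of the description.
    # Hypothetical one-liner; PAYLOAD is a stand-in for the embedded payload.
    decompressor = b"import zlib,sys;sys.stdout.buffer.write(zlib.decompress(PAYLOAD))"
    print(len(payload) + len(decompressor))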


You seem wholly confused about the concept of information. Have you had a course on information theory? If not, you should not argue against those who have studied it far more deeply. Cover's book "Elements of Information Theory" is a standard text that would clear up all your confusion.

The "information" in a sequence of symbols is a measure of the "surprise" upon obtaining the next symbol, and this is given a very precise mathematical definition satisfying a few important properties. The resulting formula in many cases looks like the formula derived for entropy in statistical mechanics, so it is often called symbol entropy (and it leads to a lot of deep connections between information and reality, the whole "It from Bit" stuff...).
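In the discrete case that formula is H(X) = -sum over x of p(x) log2 p(x). A small self-contained Python sketch, with illustrative example strings:

    import math
    from collections import Counter

    def entropy_bits(seq):
        # Shannon entropy H = -sum p(x) * log2 p(x), in bits per symbol.
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

    print(entropy_bits("aaaa"))      # 0.0 -- the next symbol is never a surprise
    print(entropy_bits("abab"))      # 1.0 -- one bit of surprise per symbol
    print(entropy_bits("abcdabcd"))  # 2.0 -- two bits per symbol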

For a sequence to have infinite information, it must provide nonzero "surprise" for infinitely many symbols. Pi does not do this, since it has a finite specification: after the specification is given, there is no further surprise. A sequence with infinite information cannot have a finite specification. End of story.
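If you want a concrete finite specification of pi, one standard construction is Gibbons' unbounded spigot algorithm. The whole program is a few hundred bytes, yet it determines every digit, so after reading it nothing in the digit stream can surprise you (a sketch in Python):

    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot: streams the decimal digits of pi forever.
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]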

The specification carries the information, since while writing it down one could still change symbols (getting a different generated sequence). But once the specification is finished, that is it. No more information exists.

Information content also does not care about computational efficiency, otherwise the information in a sequence would vary as technology changes, which would be a poor definition. You keep confusing these different topics.
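To see why efficiency is irrelevant: take two programs of wildly different speed that specify the identical sequence. No measure defined on the sequence, or on the length of its shortest description, can tell them apart. A toy sketch:

    def fib_slow(n):
        # Exponential-time specification of the Fibonacci sequence.
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    def fib_fast(n):
        # Linear-time specification of the same sequence.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Same sequence, same information content, very different runtimes.
    assert [fib_slow(i) for i in range(20)] == [fib_fast(i) for i in range(20)]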

Now, if you've never studied this topic properly, stop arguing about things you don't understand with those who have. It's foolish. If you had studied information theory in depth, you would not keep doubling down on this claim. We've given you enough places to learn the relevant topics.


Actually, it does; you can look it up. The formal version is naturally a bit more involved than the phrasing I used in a casual HN comment.



