Kolmogorov Complexity – A Primer (2012) (jeremykun.com)
95 points by poindontcare on Oct 2, 2016 | 20 comments



An extremely detailed treatment is presented in "An introduction to Kolmogorov complexity and its applications": http://www-2.dc.uba.ar/materias/azar/bibliografia/LiVitanyi1...


There are a lot of good recent books on this topic (I don't know if the area is undergoing a revival):

1. N. K. Vereschagin, V. Uspensky, Alexander Shen: Kolmogorov Complexity (English draft in preparation: http://www.lirmm.fr/~ashen/kolmbook-eng.pdf)

2. Downey, Hirschfeldt: Algorithmic Randomness and Complexity (http://www.springer.com/gp/book/9780387955674)

3. Andre Nies: Computability and Randomness (https://global.oup.com/academic/product/computability-and-ra...)

in addition to the now classic book by Li and Vitanyi that others have mentioned.


When two strings have the same Kolmogorov Complexity, one of them might take significantly longer to "decompress". Shouldn't we then say that this string has higher information content?

It feels to me like Kolmogorov Complexity (while very elegant) might just be a crude approximation to a measure that also takes into account the time it takes to print the string.


See "Logical Depth" as defined by Charles Bennett: http://researcher.ibm.com/researcher/files/us-bennetc/UTMX.p... as well as Chapter 7, "Resource-Bounded Complexity", from "An introduction to Kolmogorov complexity and its applications".


I'll take a look. Thanks!


It might be interesting for you to contemplate Schmidhuber's "speed prior".

http://people.idsia.ch/~juergen/speedprior.html


Kolmogorov Complexity is also presented in Chapter 7 of the classic textbook "Elements of Information Theory": http://poincare.matf.bg.ac.rs/nastavno/viktor/Kolmogorov_Com...


An overview with additional references from Marcus Hutter: http://www.scholarpedia.org/article/Algorithmic_complexity


I've used it (via the Normalized Compression Distance) during my Master's degree.

It was very interesting to find out how effective it was at authorship attribution, even with 100 possible authors.
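
In case anyone wants to play with it, the distance is easy to sketch with an off-the-shelf compressor standing in for Kolmogorov complexity (this is just the textbook NCD formula with zlib as the approximation, not necessarily the exact setup I used; the author samples are placeholders):

    import zlib

    def c(data: bytes) -> int:
        """Compressed size, used as a computable stand-in for K(data)."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Attribution: pick the known author whose sample is "closest" to the unknown text.
    samples = {"author_a": b"sample text by author A...", "author_b": b"sample text by author B..."}
    unknown = b"disputed text..."
    print(min(samples, key=lambda name: ncd(samples[name], unknown)))

With short texts the ratios get noisy; in my experience compressors with bigger windows (bz2, lzma) tend to behave better than zlib on longer documents.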


The Linux /dev/random entropy quality check is (or was; I haven't checked recently) based on Kolmogorov complexity.


That seems unlikely, because actually computing Kolmogorov complexity is impossible, and even approximating it is very hard. But you can run random numbers through compression software, and if they compress, something is very wrong.
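
Roughly the kind of crude sanity check I mean (just an illustration of the idea, not what the kernel actually does):

    import os
    import zlib

    # Truly random bytes should not compress; zlib's header overhead even
    # makes the output slightly larger. A noticeably smaller output means
    # the "random" source is badly broken.
    sample = os.urandom(64 * 1024)
    ratio = len(zlib.compress(sample, 9)) / len(sample)
    print(f"compression ratio: {ratio:.3f}")
    if ratio < 0.99:
        print("warning: supposedly random data compressed -- investigate the source")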


I am not an expert in this. Here is the reference material (https://eprint.iacr.org/2012/487.pdf), and it feels plausible to me.



How many times do you want people to tell you that everybody knows that Kolmogorov complexity (KC) is only defined up to a constant?

This does not affect the results that people use KC for, like the incompressibility of most strings.
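
For reference, the counting argument behind that result is machine-independent, which is exactly why the additive constant is harmless here: there are 2^n strings of length n but far fewer short descriptions. Sketching it:

    \#\{\, p : |p| < n - c \,\} = 2^{n-c} - 1
    \quad\Rightarrow\quad
    \Pr_{|x| = n}\big[\, K(x) < n - c \,\big] < 2^{-c}

So fewer than a 2^-c fraction of n-bit strings can be compressed by more than c bits, no matter which universal machine (and hence which constant) you fix.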


That's what my second post linked above addresses. I think it does affect the 'incompressibility of most strings' result. I'm fine with not rehashing the argument though, unless you're keen to do so :)


What you need to do is produce a programming language L in which the incompressibility of most strings fails, i.e. one that contradicts Theorem 2.2.1 of Li & Vitanyi.


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=12621552 and marked it off-topic.


Thank you for doing that. I sincerely apologize.


I don't get it, sorry. Care to explain for the non-South Park buffs?


The quote was intended to indicate general disagreement without specific justification.

It was a sleep deprived and childish reply.



