Thank you very much for this. I originally learned about the Von Neumann / CA connection here on HN, but never really found a lot of sources for it. I've been deeply wanting to learn more. You've posted a lot of awesome sources - is there anything else you'd care to add, for someone who wants to know as much as possible about what Von Neumann was up to?
Von Neumann was a friend and colleague of Claude Shannon, the founder of information theory (the mathematical theory of communication). Back then, it was a really small world of scientists working in this field. Von Neumann, Turing, and Shannon spent a lot of time complementing each other's work. Shannon showed that information encoding/decoding is optimally done as a collection of bits (a name coined by John Tukey; John Archibald Wheeler later coined the slogan "it from bit"). Shannon also showed that a universal Turing machine could be whittled down to only two internal states. Von Neumann was inspired by Turing's work, which led him to create the Von Neumann architecture. Von Neumann was brilliant but he relied heavily on Shannon and Turing, and vice versa. And to answer your question... Shannon wrote a paper on that very subject out of deep admiration of his friend Von Neumann. https://projecteuclid.org/download/pdf_1/euclid.bams/1183522...
https://news.ycombinator.com/item?id=21858465
John von Neumann's 29-state cellular automaton is (ironically) a decidedly classical "non-von Neumann architecture".
https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton
He wrote the book on it, "Theory of Self-Reproducing Automata" (completed and edited posthumously by Arthur W. Burks):
https://archive.org/details/theoryofselfrepr00vonn_0
He designed a 29-state cellular automaton architecture to implement a universal constructor that could reproduce itself (which he worked out on paper, amazingly):
https://en.wikipedia.org/wiki/Von_Neumann_universal_construc...
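To make the trick concrete, here's a minimal sketch in Python (all names invented for illustration; this is not von Neumann's actual 29-state machinery): the description tape gets used twice, once interpreted as build instructions and once copied blindly, which sidesteps the infinite regress of a blueprint having to contain a blueprint of itself.

    # Toy model of the universal constructor's logic -- NOT the 29-state CA.
    # The key move: the description tape is used in two different modes.

    def construct(tape):
        # Part A, the constructor: INTERPRET the tape as build instructions
        # and assemble a new machine body from it.
        return {"body": tape.upper()}  # stand-in for physical construction

    def copy(tape):
        # Part B, the copier: duplicate the tape WITHOUT interpreting it.
        return str(tape)

    def reproduce(machine):
        # Part C, the controller: build the child from the tape, then attach
        # an uninterpreted copy of the tape so the child can reproduce too.
        child = construct(machine["tape"])
        child["tape"] = copy(machine["tape"])
        return child

    parent = {"body": "BLUEPRINT", "tape": "blueprint"}
    child = reproduce(parent)
    assert reproduce(child) == child  # exact reproduction, every generation

The same interpret-vs-copy split later turned up in biology (translation vs. DNA replication), years after von Neumann worked this out on paper.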
He actually philosophized about three different kinds of universal constructors at different levels of reality:
First, the purely deterministic and relatively harmless mathematical kind referenced above, an idealized abstract 29-state cellular automaton, which could reproduce itself with a Universal Constructor, but was quite brittle, synchronous, and intolerant of errors. These have been digitally implemented in the real world on modern computing machinery, and they make great virtual pets, kind of like digital tribbles, but not as cute and fuzzy.
https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...
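To see what "synchronous and intolerant of errors" means in practice, here's a minimal sketch of a synchronous CA update, using elementary rule 110 rather than the 29-state rule (far smaller, but it updates the same way): every cell reads only the previous generation, all cells flip at once, and nothing ever corrects a corrupted cell.

    # Minimal synchronous cellular automaton (rule 110 on a ring), purely
    # for illustration -- von Neumann's rule has 29 states on a 2-D grid.
    RULE = 110  # 8-bit lookup table indexed by the 3-cell neighborhood

    def step(cells):
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 31
    cells[15] = 1  # a single seed; flip any bit and the damage propagates
    for _ in range(15):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)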
Second, the physical, mechanical, and potentially dangerous kind, which is robust and error-tolerant enough to work in the real world (given enough resources), and is now a popular theme in sci-fi: the self-reproducing robot swarms called "Von Neumann Probes" on the astronomical scale, or "Gray Goo" on the nanotech scale.
https://en.wikipedia.org/wiki/Self-replicating_spacecraft#Vo...
https://grey-goo.fandom.com/wiki/Von_Neumann_probe
>The von Neumann probe, nicknamed the Goo, was a self-replicating nanomass capable of traversing through keyholes, which are wormholes in space. The probe was named after Hungarian-American scientist John von Neumann, who popularized the idea of self-replicating machines.
Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death. (A toy sketch of such a probabilistic automaton follows the references below.)
p. 99 of "Theory of Self-Reproducing Automata":
>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]
[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".
https://www.deepdyve.com/lp/association-for-computing-machin...
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5578...
https://www.worldscientific.com/worldscibooks/10.1142/10841
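And since the quote above may be abstract, here is a hedged little sketch of a probabilistic automaton in the sense Burks describes: a deterministic local rule (3-cell majority vote here, an invented example) plus a small chance that any element randomly changes state, i.e. a mutation. The rule and the rate are made up for the demo, not taken from the book.

    import random

    MUTATION_RATE = 0.01  # arbitrary demo value

    def noisy_step(cells, rng=random):
        # Deterministic transition plus a random "mutation of elements".
        n, out = len(cells), []
        for i in range(n):
            votes = cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]
            state = 1 if votes >= 2 else 0    # deterministic: majority vote
            if rng.random() < MUTATION_RATE:  # probabilistic: random change
                state = rng.randrange(2)
            out.append(state)
        return out

    cells = [0] * 40
    for _ in range(25):
        print("".join(".#"[c] for c in cells))
        cells = noisy_step(cells)

Majority voting happens to suppress isolated mutations, which is exactly the flavor of question in "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components": how to get reliable behavior out of unreliable parts.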