The Wolfram Physics Project (stephenwolfram.com)
588 points by pokolovsky on April 14, 2020 | 323 comments


We merged this thread with https://news.ycombinator.com/item?id=22867707 and kept its title, in the hope that it would have less of a flamebait effect.

All: no more Wolfram Derangement Syndrome comments, please. They're off topic because they're always the same, and they compose into a weird counterversion of the very thing they're deriding.

Since this comment was becoming top heavy, I forked it. If you want the meta-meta stuff, it's this way: https://news.ycombinator.com/item?id=22869384.


I don't see anything of substance here, besides a lot of pretty graphs. Just like Wolfram's "A New Kind of Science", we have the problem that there is a vast gulf between what you need to make flashy popsci and what you need to make a real physical theory. In increasing order of difficulty, you need to:

1. make a set of dynamical rules that matches general relativity in the low energy limit, such as recovering Lorentz invariance and the Einstein field equation (this is supposed to be the easiest part -- without at least doing this, a theory of everything is worth less than the graph doodles in my middle school notebooks)

2. demonstrate that you can add something that looks like matter

3. reproduce effects that we know have to appear in quantum gravity in the semiclassical limit, such as Hawking radiation and black hole entropy

4. demonstrate that you can add matter that behaves like the Standard Model

5. make specific predictions that we didn't already know from purely semiclassical considerations

6. find a way to verify those predictions

7. have the predictions actually be correct upon verification

These 7 steps are hard, which is why nobody has managed to do them. But it looks like Wolfram hasn't even bothered to start on step 1. His new book is just hundreds and hundreds of pages of pretty graphs and big words. It's more akin to a reformulation of the foundations of mathematics than a theory of physics -- and it's not a particularly good one, at that.

It's the same complaint I have about category theorists trying to do applied physics. (And category theory is a much more powerful language than Wolfram's!) Yes, you might have an incredibly general language, with which you can talk about vast swaths of possible physical theories. But we already had way too many possibilities using ordinary mathematics! We need to narrow down on specifics, not muddy the waters by making things even more general. I mean, it's like trying to rescue a startup by translating the documentation into Esperanto.


(1) We certainly do have formal derivations of Lorentz covariance and the Einstein field equations, given in detail here:

http://wolframcloud.com/obj/wolframphysics/Documents/some-re...

(2) The article above already discusses the derivation of the matter contribution to the Lagrangian density, the derivation of energy-momentum tensor, and Lorentz transformations for elementary particles.

(3) Both Hawking radiation and black hole entropy, and connections between our formalism and the AdS-CFT correspondence, are detailed here:

http://wolframcloud.com/obj/wolframphysics/Documents/some-qu...

(4) We do not yet know how to do this.

(5) The quantum mechanics paper above makes, for instance, quite specific predictions about the location of stretched horizons around non-semiclassical black holes.

(6) (7) This we are still working on.


Thanks, this is much better than the incredibly vague Wolfram documentation.

Skimming through it, the links between the specific physics and the general statements about graphs seem to me to be rather weak. A typical section looks like setting up a bunch of very general definitions about graphs, and then suddenly jumping into a standard physics derivation in standard physics notation without even using the definitions you set up a page earlier. In other words, the graph stuff seems to not be doing much of the work!

But I do admit that I only skimmed. This could be an unfair assessment. I hope your papers get a good, thorough peer review.


Reading through the paper in (3) above. If I understand the text on page 26 correctly, you predict that quantum computers will not be more efficient than classical computers:

"The class of problems that can be solved efficiently by quantum computers should be identical to the class of problems that can be solved efficiently by classical computers: More precisely, we predict in this appropriately coarse-grained case that P=BQP, where P and BQP denote the complexity classes of polynomial time and bounded error quantum polynomial time, respectively."

And:

"In other words, in order to maintain a causal invariant representation, the observer must perform a sufficient level of coarse-graining to ensure that any apparent advantage obtained through the use of a quantum computer over a classical one is effectively lost."

Am I missing something fundamental (most probably)? Are you predicting that quantum computers will not be able to, for example, factor RSA keys much faster than today's non-quantum machines?


I'm not sure if that is the case or not, but if that is the prediction, then they are in good company.

Nobel laureate Gerard 't Hooft also has a cellular automaton theory and one of his conclusions is: "If engineers ever succeed in making such quantum computers, it seems to me that the CAT is falsified; no classical theory can explain quantum mechanics." By "such quantum computers" he means computers that can run Shor's algorithm. "...but factoring a number with millions of digits into its prime factors will not be possible – unless fundamentally improved classical algorithms turn out to exist."

https://arxiv.org/abs/1405.1548


I believe most of these theories essentially suggest that the error correction required to produce a precise and reliable answer from more qubits will grow faster than the computing power added by those qubits.

> "no classical theory can explain quantum mechanics."

Not sure I can agree with 't Hooft on this. A GR-based theory with closed timelike curves can easily have particles travelling through time to interfere with themselves, and thus reproduce quantum phenomena like the double-slit experiment. There's been some work on this:

https://www.reddit.com/r/Physics/comments/1gjekn/the_logic_o...


In your last sentence, you compare future quantum computers to “today’s” non-quantum computers, which might be a false dichotomy.

[warning: uninformed tangent]

A more optimistic interpretation could be that quantum & non-quantum machines will be similar because we have huge leaps to make in non-quantum computer architecture.

This is strictly a theoretical thought-experiment for me, but it has always intrigued me that quantum computers sort-of model the problem itself in the hardware circuit & shove a bunch of qubits through it.

In digital computers, we mostly model Boolean logical structures & then, in software, translate our problem into that Boolean logic. This translation into discrete steps places a limit on the theoretical efficiency.

However, perhaps there is room in analog computing hardware to more closely model specific types of optimization problems & then shove a bunch of electrons through it (shouldn’t the electrons follow the path of least resistance?).


> In your last sentence, you compare future quantum computers to “today’s” non-quantum computers, which might be a false dichotomy.

Ah, good point.

Though I was thinking more of Shor's algorithm and Grover's algorithm, which tell us the theoretical performance that could be achieved with quantum computers. Normally these are described as showing the speedup provided by a possible quantum computer (relative to non-quantum computers).

So, when reading the Wolfram Model paper I cited, I read the statement regarding quantum computers as dismissing the possibility of achieving quantum computers capable of realising Shor's and Grover's algorithms.

But one could of course read it the flip-side way: that there are algorithms out there to be discovered that achieve the same lower-bound complexities on non-quantum machines.

Considering that the Wolfram Model is all about graphs and cellular automata, the statement should probably be read as based not on a RAM complexity model, but on something like PRAM, which accounts for parallelism.


What you are describing is an analog computer or circuit. These definitely exist; I had to build a circuit to model the physics of a bouncing ball in a Circuits class in college. However, I don't know how often analog computers are used in professional/practical applications these days. Here is some more info: https://spectrum.ieee.org/computing/hardware/not-your-father...


> However, perhaps there is room in analog computing hardware to more closely model specific types of optimization problems & then shove a bunch of electrons through it (shouldn’t the electrons follow the path of least resistance?).

Congratulations, you've rediscovered quantum annealing!
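For flavor, here's the classical objective a quantum annealer attacks. A D-Wave-style annealer minimizes an Ising energy H(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i over spins s_i in {-1,+1}; below is a minimal simulated-annealing sketch of the same objective (the couplings are toy values I made up, and the quantum version replaces these thermal fluctuations with tunneling):

    import math, random

    def energy(s, J, h):
        # Ising energy: pairwise couplings plus local fields
        n = len(s)
        return sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n)) \
             + sum(h[i] * s[i] for i in range(n))

    def anneal(J, h, steps=5000):
        n = len(h)
        s = [random.choice((-1, 1)) for _ in range(n)]
        E = energy(s, J, h)
        for t in range(steps):
            T = 2.0 * (1 - t / steps) + 1e-3   # linear cooling schedule
            i = random.randrange(n)
            s[i] = -s[i]                       # propose flipping one spin
            E2 = energy(s, J, h)
            if E2 > E and random.random() > math.exp((E - E2) / T):
                s[i] = -s[i]                   # uphill move rejected
            else:
                E = E2
        return s, E

    # Toy 3-spin antiferromagnet: couplings frustrate the all-equal states
    print(anneal(J=[[0, 1, 1], [0, 0, 1], [0, 0, 0]], h=[0, 0, 0]))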


I'm certainly going to look forward to Scott Aaronson's response


I haven't checked the context, but that indicates a flaw in either their framework or the way they draw this conclusion. We know for a provable fact that some problems (not necessarily interesting or useful ones) can be solved exponentially faster on a quantum computer than on a classical computer: the Deutsch-Jozsa algorithm (or its generalization, Simon's algorithm) demonstrates this.
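For anyone who wants to see this concretely, here's a minimal classical state-vector simulation of Deutsch-Jozsa in its phase-oracle form. The promise is that f is constant or balanced; one oracle query decides which, whereas a deterministic classical algorithm needs 2^(n-1)+1 queries in the worst case. (A pedagogical sketch, not an efficient simulator.)

    import numpy as np

    def deutsch_jozsa(f, n):
        N = 2 ** n
        # H^n |0...0>: uniform superposition over all n-bit inputs
        state = np.full(N, 1 / np.sqrt(N))
        # One phase-oracle query: |x> -> (-1)^f(x) |x>
        for x in range(N):
            if f(x):
                state[x] *= -1
        # After the second Hadamard layer, the amplitude of |0...0> is the
        # mean of the state vector: +/-1 if f is constant, 0 if balanced
        amp0 = state.sum() / np.sqrt(N)
        return "constant" if abs(amp0) > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0, 4))      # -> constant
    print(deutsch_jozsa(lambda x: x & 1, 4))  # -> balanced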


We know problems can be solved faster on a "quantum computer" as the term is defined; we don't know that the version of QM that our reality runs on actually allows us to create such a computer, or at least scale it up to a demonstrably (not provably! Demonstrably!) superclassical level. That's what the Quantum Supremacy debate is about.


Could I understand the effort this way? In general, any Turing-complete set of computational rules should be able to generate any computable expression, and hopefully physics can be expressed with computable expressions (at least if it is at all comprehensible to humans). So you are looking for a particular set of rules (instead of, e.g., the English language and math symbols) that meets a certain kind of aesthetics. Do you expect only valid physics to be expressed, or is the research about the kind of restriction that will lead to only valid physics being expressed?


From the main article, it looks like they believe that relativity and quantum mechanics are both things that directly emerge as necessary consequences from the structure, not that those are arbitrary results that may be computed inside. It wouldn't be very interesting in the latter case; there's no shortage of Turing tarpits.


Could you provide some details on the algorithm used to lay out the plots? I'm pretty sure nothing in graphviz would result in such nice-looking graphs, so probably not a spring relaxation-type algorithm. Perhaps something using the graph Laplacian eigenvectors? I recall some papers by Yehuda Koren using that technique on massive graphs.
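(For anyone who wants to experiment with the Laplacian-eigenvector idea, here's a minimal sketch of Hall's spectral layout, the method Koren's ACE paper accelerates for massive graphs. Whether Wolfram's renderer actually works this way is pure speculation on my part; it may well add force-directed refinement on top.)

    import numpy as np

    def spectral_layout(edges, n):
        A = np.zeros((n, n))
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0
        L = np.diag(A.sum(axis=1)) - A    # combinatorial graph Laplacian
        vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
        # Skip the trivial constant eigenvector (eigenvalue 0); the next
        # two eigenvectors give the x and y coordinates.
        return vecs[:, 1:3]

    # Example: a 6-cycle lays out as a regular hexagon (up to rotation).
    print(spectral_layout([(i, (i + 1) % 6) for i in range(6)], 6))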


Are you guys planning on submitting your novel findings to peer reviewed physics journals? Have you done that? How has that gone?


Our preprints are currently in peer review (for different journals, I should add). Nothing terrible has happened yet.


Why didn't you hold off on the announcements until after the preprints were accepted by reputable journals?


It's relatively common practice in academia to make announcements about results while they're still in pre-print form (particularly given how slow the peer review process is for some journals).


I've never announced my papers before they were accepted by a conference or journal (other than putting them on arXiv). I would certainly not let my university's press office mention my work if it was not fully peer reviewed. I've never seen this from my colleagues either.


It's common practice to "announce" in the sense that you email colleagues about it and give seminars. It's not common practice to do a press blitz, solely directed at an audience that will be unable to criticize it.


The internet is pretty good at criticizing things.


Well yeah, but a lot of the criticism is baseless. For example, you have people downthread saying things like "this can't be right because it's discrete", or, "it's inherently impossible for a theory of everything to make predictions", and so on. These aren't true at all.

Every physicist knows that if you want good criticism, you need to go to people with relevant expertise. That's what peer review is!


But they're doing that too. Given we live in the information age, why the hell not send it out both to a traditional research publication and to a forum like this one? There are a lot of smart people out there in the world, and if you can get 10,000 geniuses to look it over, who says it won't lead to either better criticisms or the expansion of a potentially sound theory into other discoveries?


In a lot of physics areas, things move very fast, so generally people submit to a journal and arXiv simultaneously, in case someone else posts on arXiv first and goes down in history as first.

However, rarely are arXiv publications accompanied by such fanfare.


Yes, but press releases are only released after papers have been accepted, not only by journals, but overall by the scientific community.

Especially for papers that claim new or bold things.


Why rely on narrow peer review when you can put the ideas out in public and receive much broader feedback? Most of that feedback will be noise, but the Wolfram crew are certainly able to find the signal in it and use that to improve or fix their ideas.


Because they don't need to?


Because they wanted more peer review?

Why didn't you wait for approval of your HN comment?


I forget what kind of fallacy you're arguing, but it's definitely a fallacy of some sort.


Right. Except for all this fanfare. Why is it necessary to celebrate the writing of two papers like this?


The substance is a good DSL for implementing rewrite/string substitution systems, as well as (likely high-quality) implementations of algorithms for causal invariance, Knuth-Bendix completion, graph isomorphism, etc. [1]. A lot of that pre-existed the Wolfram Physics Project. The new things appear to be:

1. a rebranding with improved documentation / learning resources for using those features (hey, documentation is hard! discoverability matters!), and

2. a set of really impressive rendering algorithms for visualizing these sorts of systems. (hey, visualizing mathematical objects is quite substantive! I'll happily pay good money for "low-effort pretty charts"!)

I addressed in the companion thread why I don't think this is particularly likely to result in "a new type of physics": https://news.ycombinator.com/item?id=22866284

Mathematica is good software. Pre-wolframengine I did shell out the $2K for Mathematica because it contains best-in-class implementations of some important algorithms. A high-quality reimplementation that matches Mathematica's performance would take me several years (in an area where I have a phd). Wolfram Research employs smart folks and lets them spend their time writing really high quality code, and the only way to access that code is by calling Mathematica's built-in functions.

But just because the goal is perhaps a bit grandiose and the presentation definitely breathless doesn't mean there's "nothing of substance".

[1] https://www.wolframcloud.com/obj/wolframphysics/Tools/guide-...
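For flavor, the core of such a DSL fits in a few lines. Here's a toy string substitution system with parallel (leftmost non-overlapping) updates, which is my understanding of the convention SubstitutionSystem uses; the A -> AB, B -> A rule is the classic Fibonacci-word example, not anything from the project:

    def substitution_step(s, rules):
        # One parallel update: scan left to right, applying the first
        # matching rule at each position over non-overlapping matches.
        out, i = [], 0
        while i < len(s):
            for lhs, rhs in rules:
                if s.startswith(lhs, i):
                    out.append(rhs)
                    i += len(lhs)
                    break
            else:
                out.append(s[i])
                i += 1
        return "".join(out)

    state = "A"
    for _ in range(6):
        print(state)                  # A, AB, ABA, ABAAB, ABAABABA, ...
        state = substitution_step(state, [("A", "AB"), ("B", "A")])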


I do agree that Mathematica is great software! Physics would be much poorer without its help.


Seconded! Nothing beats Mathematica at mapping the concepts of mathematics into software. In certain verticals, sure, but overall? Wolfram has been spinning this flywheel for decades, and it shows.


Maplesoft Maple is more powerful. *hides*


> But we already had way too many possibilities using ordinary mathematics! We need to narrow down on specifics, not muddy the waters by making things even more general.

Building a more general tool can sometimes solve a specific problem more effectively than tackling the specific problem directly.

I also think you're missing the forest for the trees here. There are many general relationships in physics that are spontaneously appearing in the structure of Wolfram's hypergraphs. That's interesting enough on its own to be worth further research.

In one sense, this shared structure shouldn't be surprising, because it's often harder to not make something Turing complete, so in a way I expect a lot of shared structure between physics and various general computational models. On the other hand, it's getting harder to squeeze more progress out of traditional approaches, so shrinking a computational model to exactly match physics on long timescales is a unique attack vector worth exploring.


> We need to narrow down on specifics

I have an unfounded suspicion that there exists a simple algebra akin to a continuous geometric algebra that allows all of physics and only physics.

Something that's bothered me for years about contemporary physics is the use of constraints "on top of" a general purpose algebra over the infinite precision real or complex numbers.

Mathematicians would argue that it's all "equivalent", but to me it feels arbitrary and, in a sense, misses the point. The final theory, whatever it is, shouldn't sound like "everything except almost everything, leaving one thing". It should sound like "this one thing only, that can be no other thing".

A friend of mine once put it this way: Ask a mathematician to describe a Rubik's Cube and he'll start with the space of all possible transformations, whittle it down to some space of discrete modular transformations by throwing out all other continuous and infinite transformations, then by throwing out even more transformations eventually wind up with a set of rules that matches a Rubik's Cube. If you ask him to draw what it looks like he'll start spouting about how that's impossible and you just need to "learn the maths", pointing at a stack of textbooks.

Modern theoretical physics has the same problem.


This reminds me of what Feynman told us: https://www.youtube.com/watch?v=NM-zWTU7X-k

> If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you have made a great discovery. It is very nearly impossible, but not quite, to find any theory which agrees with experiments over the entire range in which all theories have been checked, and yet gives different consequences in some other range, even a theory whose different consequences do not turn out to agree with nature. A new idea is extremely difficult to think of. It takes a fantastic imagination.


This sums up the last decade of my life. In my spare time I stare out the window trying to dream up a "rule system" that coughs up both relativity and quantum mechanics. It's very hard to even reproduce the basics, let alone the more irritating things like the three generations of particles. That in particular is a good "acid test", and it isn't even mentioned by 90% of the conceptual papers I've been skimming. Quantum gravity, loop this, string that... show me three generations and how those particles conspire to make gravity waves, and then I'll be interested.

Right now I'm toying around with a particularly mathematically elegant model with (IMHO) incredibly beautiful "symmetry" in the sense that the equations are both trivially simple yet capable of producing a rich particle zoo, but still limited to a small finite set. My problem is that I'd need a few petabytes of memory to play around with it sufficiently to see if it passes the basic tests.

I'm hoping Moore's law will allow me to run some simulations before I die of old age...


I was curious, so I did the math:

It’s $5M/mo to rent 1 PB of RAM on EC2 and $21,000/mo per PB in S3.

Conceivably, your dreams will be possible in a decade — they are feasible, but likely too costly now.
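(Back-of-envelope behind those figures, assuming 2020-ish list prices of roughly $27/hr for a ~4 TB-RAM instance and ~$0.021/GB-month for S3 standard; both are assumptions, not quotes:)

    ram_per_box_tb = 4                  # assumed high-memory instance size
    boxes = 1000 // ram_per_box_tb      # ~250 instances to reach 1 PB of RAM
    ec2_monthly = boxes * 27 * 730      # ~$4.9M/month at ~$27/hr per box
    s3_monthly = 1_000_000 * 0.021      # 1 PB = 1e6 GB at $0.021/GB-month
    print(f"EC2: ${ec2_monthly:,}/mo   S3: ${s3_monthly:,.0f}/mo")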


That, or I get way better at mathematics. At this rate, I think Moore's law will win. Apparently TSMC is mass-producing 5nm chips and they're building their 3nm fab...


As someone who also works on physics models in their spare time — the computational hurdles are real.

Best of luck!


> I mean, it's like trying to rescue a startup by translating the documentation into Esperanto.

I will quote you with this amazing analogy. Thank you!


Is the following section a beginning of an attempt at that?

https://www.wolframphysics.org/technical-introduction/potent...


Not really. It's very vague -- he is just repeatedly saying "well, you could write down X in terms of graphs in this way." But you can write down anything in terms of anything. You can write a compiler in PowerPoint because it's Turing complete. You can write the Bible in Klingon. You can write about making a burrito in category theory.

Languages are just packaging. In order to have content, you have to nail down specifics, and here the choice of language can be useful because the specifics might be more naturally expressed in some languages than others. But Wolfram hasn't begun this journey.


Sure, I agree. However, there is value in a phrasing; e.g., phrasing classical mechanics in Hamiltonian terms permitted unification with QM, and understanding of non/classical limits.

...

also,

> In our previous paper[1], we formally introduced the Wolfram Model[2] - a new discrete spacetime formalism in which space is represented by a hypergraph, and in which laws of physics are modeled by transformation rules on set systems - and investigated its various relativistic and gravitational properties in the continuum limit, as first discussed in Stephen Wolfram’s A New Kind of Science (NKS)[3]. Our central result was the proof that large classes of such models, with transformation rules obeying particular constraints, were mathematically consistent with discrete forms of both special and general relativity

https://www.wolframcloud.com/obj/wolframphysics/Documents/so...


All your points make sense. Also, Wolfram is trying to create a language and tool for others because he very well realizes that his part in all this won't be solving those 7 things.


The book doesn't seem to be out yet? Do you have access to an advance copy?


I'm following his "technical introduction" here: https://www.wolframphysics.org/technical-introduction/


After a few iterations, I think I understand the point of this piece. It was a bit difficult to home in on, though.

The article suggests that working out the theory of something like rule production systems, and then figuring out how that theory relates to existing insights from physics, is the best path toward a Fundamental Theory of Physics.

My primary source of skepticism stems from the fact that the theory of rule production systems is not exactly a new area of study. It's been well-developed at various points in time and from various perspectives by the theoretical CS, programming language theory, automated theorem proving, and mathematical logic communities. That theory addresses most of Stephen's "big difficult questions" about the non-physics side of things. For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.

I sympathize with Stephen. In fact, he sounds a bit like I did early in my scientific career. Unfortunately, though, I just don't see how these old well-understood ideas from computer science are going to result in a new fundamental theory of physics.


>For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.

What are the old names for these ideas?


Have ideas from computer science had significant reach inside theoretical physics before? It seems like physics has only recently discovered its love affair with information theory, but information theory had existed for a long time before quantum information theory became a hot area of study. Maybe what's new here are not the ideas themselves, but bringing them into an area of study that hasn't paid attention to them before.


Maybe. I doubt it, though. There has always been substantial cross-talk between CS/information theory and Physics. Even through the 1990s it was difficult to be a computer scientist without eventually coming into contact with a non-trivial number of physicists. Especially in industrial research labs. Bell Labs, PARC, and IBM Research were full of physicists. Bell Labs and PARC are dead, but AFAIK IBM Research still has a bunch of physicists and the newer kids on the block (Google Research, FAIR, Deepmind, Microsoft Research, Intel, AMD) also have a share of physicists.

Besides, Stephen's approach here is to ignore 15-20 years of research from various CS sub-communities; his best-case scenario is spending a decade reinventing that wheel. The problem with cross-talk that isn't "humble on both sides" is that it's either a) a waste of time because one side's ideas aren't that important, or else b) a waste of time because one side has to reinvent the other's wheels.


I really think that cross-domain concepts are almost the only way to make huge leaps, so that's a precondition in my mind for any advancement. Check.

In terms of "humility on both sides", it's such a common theme that this oft-cited assumption is taken as truth. Some of the greatest minds who had the most impact in our history were also insufferable assholes, who were stubborn and would not yield until people were forced to reckon with their ideas. Is this me defending Wolfram's ideas? No. But it's me defending the idea that "humility and civility" as a prerequisite for scientific advancement seems false, and in fact, in stagnant fields, the need for a disruptive personality who happens to be right may be perhaps the only real way out of the rut.


Sure. The problem here is that exactly the ideas he's proposing to explore have already been explored. I've slightly edited my previous comment to point this out.

The problem, in the very particular case of this blog post, is that the cost for lacking intellectual humility is spending time reinventing other people's wheels. And those wheels won't get him as far as he thinks they will. We know because they've already been built by others.


That makes sense. I can't assess your argument given my lack of understanding. In my own experience though, deriving things from first principles, even if they've been reinvented countless other times, is a good way to build up the intellectual superstructures necessary to think new thoughts.

I think we should separate:

- Wolfram acting as though he thought of the ideas first

- Wolfram being underinformed so as to undermine his own progress

People typically get bent out of shape on the former, which is in evidence, and is a problem of politics. The latter, we can't prove or disprove unless you see him drawing significant conclusions that are falsifiable via current understanding. If that is the case, then I'll yield. But I suspect Wolfram may be better read than he lets on, but for whatever reason, has a dysfunctional personality trait where he sees his own wrangling with ideas already put forth as a form of authorship, when he incorporates it into his long chain of analysis that he's been doing for decades. A potential analogy is one of "re-branding" - but in this case it's re-branding as part of an internal narrative, one where in the final chapter, Wolfram sees himself as the grand author of the unified theory. In that mental model, each idea he draws from is not one he cobbles together into a unified form, but instead, ideas he incorporates and reinterprets in his own bespoke system and methods, leading him to forget that the core ideas are not his own. (I'm definitely reaching here, but trying to highlight how the two things above could be in fact very materially divergent and consistent with the evidence.)


You say:

> Wolfram [is] acting as though he thought of the ideas first.

This is called plagiarism. Independent reinvention is no defense if you keep on acting as though you had the idea first. He has already been informed many times that parts of his work are not original, and his behavior doesn't change.

And he knows it, on some level. He made the decision to communicate his "discoveries" in press releases and self-published books. He knows he's not subjecting himself to peer review. He may know, on some level, that his work couldn't pass it. He sued one of his employees to prevent him (the employee) from publishing a proof that Wolfram claimed he had discovered in his book. https://en.wikipedia.org/wiki/Rule_110

I understand what you're up to in trying to invent a psychology that explains his bad behavior, but at some point you have to withdraw empathy and think pragmatically about consequences. Wolfram's actions are already more than sufficient to disgrace an ordinary academic. He's damaged at least one career that we know of. He tries to pass himself off as a visionary scientist only he never delivers. If he wasn't independently wealthy no one would be listening to him at all. But non-experts do listen, which is precisely why speaking up against pseudoscience is part of every real scientist's professional responsibilities. Rather than spin these theories, it would be a better use of your time to send Stephen some email urging him to stick to working on Mathematica.


> He sued one of his employees to prevent him (the employee) from publishing a proof that Wolfram claimed he had discovered in his book.

The wikipedia article claims that Wolfram conjectured rule 110 in 1985 many years before Cook. Out of curiosity, do you have any info that disputes this?


I've read Wolfram's Wikipedia page. It doesn't contain a single word about the controversy that surrounds him and that is in evidence in this discussion thread. On the page for his book, A New Kind of Science, all the allegations of academic dishonesty, which to working scientists are probably more important than the contents of his work -- assigning credit for discoveries is how they get paid, after all -- have been compressed down to a single paragraph at the very end. And that paragraph contradicts itself on a sentence-by-sentence basis, first blaming Wolfram, then excusing him, then blaming him again and so on. So it seems that someone has been pretty successful -- more successful than not -- at erasing criticism of Wolfram from his Wikipedia presence. Therefore, I think Wikipedia's claim that he invented rule 110 in 1985 is highly suspect.

That doesn't matter much, though. Academics have a lot of ways to deal with priority disputes. Sometimes they author a paper together. Sometimes they each publish separately in the same issue of one journal. That's what happened when Darwin and Wallace simultaneously developed the theory of evolution. Sometimes, if the first discoverer was much earlier than the second, the second author might publish the work, and make a public statement in the paper saying the first author was first. This is what happened when Claude Shannon invented information theory only to learn that Norbert Wiener had done the same thing twenty years before. If Wolfram had documentation of his claim, some compromise could probably have been worked out.

Instead, it's a matter of public record that he sued Cook, alleging that the knowledge that Cook had done the work was a trade secret of Wolfram Research. I said before that scientists get paid by correctly being assigned credit for their discoveries. Suing to prevent a scientist from taking credit for their research is like armed robbery. There had been some grumbling before, but this was the moment when scientists recognized that Stephen Wolfram was Not A Real Scientist Anymore.


What if sometimes reinventing the wheel is in fact the efficient procedure and "humility" has nothing to do with it? Simultaneous discoveries and rediscoveries are rather common. Rather than getting familiarized with some literature and, consequently, also getting tangled in the problems peculiar to how that literature has developed, maybe a fresh start from a different approach is sometimes preferable.

What specific literature are you referring to?


Yes, for example, it took Lagrange reinventing classical mechanics using the principle of least action to put physics in a spot where quantum mechanics and general relativity could be seen.


To be fair, both quantum mechanics and general relativity were first "seen" without the aid of the Lagrangian. (Not to dismiss its role in later developments.)


The name that springs to mind is https://en.wikipedia.org/wiki/Edward_Fredkin

> Everything in physics and physical reality must have a digital informational representation.


> For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.

Can you go into a little more detail about the other versions of these ideas? What are they called in other theories?



> For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.

What are the names for these old, well-studied things in programming language theory, so we can look them up?


Oy, I've been dreading having to answer this question since I pressed "post" :)

I've decided that I do not have the time or interest in writing the Related Work section for a paper-length blog post touching on an enormous number of fields, some of which I know well and some of which I haven't thought about in a decade. (As an aside, one real and substantive problem with trying to build a research program without taking the time to share a survey and comparison with related work is that you'll have difficulty communicating with others. It will then take extra effort on others' part to build up a knowledge base. Surveying and comparing to related work is hard and thankless but important work. It's not about credit, it's about building up a shared knowledge base.)

However, I can spare an hour or two and take the time to flesh out one or two in order to demonstrate what I mean.

So, I'll make the following offer: is there any particular excerpt from Stephen's blog post to which you would like the CS/PL/info theory analogue? Or, would you prefer me to pick a particular sentence and identify the body of work that explores that question and the major results of that body of work? (I will take this opportunity to emphasize the "about the non-physics side of things" portion of my original post.)

I'm going to link to this thread in other places where people have asked this question instead of monitoring 3 or 4 different threads going forward. I'll do my best to occasionally monitor this thread for requests and do my best to reply. FYI, I probably won't get around to answering more than one reply until the weekend.


Earlier you wrote 'For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.'

I don't understand how things could be extremely well studied and developed, but also not exist in some fashion where you could just name and link to it in a matter of minutes rather than hours. Example "emulation cones are called X here".

I've listened to Wolfram and skimmed one of his books before deciding he's beyond my ability to evaluate as genius or crackpot. I'd love to be able to nail down a specific thing where I could read about some existing topic and then read about Wolfram claiming to reinvent it or something, because that could help me lean towards one conclusion over the other in the genius-versus-crackpot consideration.

One frustrating thing that I often find is that much of Wolfram criticism is non-specific and as it's impossible for me to bucket Wolfram I can't bucket his critics either because they tend not to provide enough detail or clarity.


The question you're asking requires non-trivial effort to answer precisely because "emulation cone" and "rulial space" are never quite all the way defined, and the question being asked in terms of these definitions is also left a bit vague.

Emulation cones go by various names, but perhaps the most common is the (bounded) reflexive and transitive closure of the reduction rules of a system. Another common name is the (bounded) reachable set.

Rulial spaces, by which I mean the particular ones Stephen seems interested in toward the end, are higher order term rewriting systems or higher order syntax. But actually, rulial space is used throughout the text in a much more general sense. I'd consider even very canonical results from PL theory, e.g. confluence of rewriting, to be non-trivial observations about a particular rulial space.

The reason for giving (or at least very vaguely hinting at) a definition for rulial spaces and emulation cones is to talk about foliations and then expressiveness. There's some connection between foliation and bisimulation that's difficult to nail down exactly, because nailing it down requires a lot more precision about the exact sort of spaces (and of emulation cones within them) we are interested in. The connection between expressiveness and complexity hierarchies is immediately obvious, I think, right?
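To make the "bounded reachable set" reading concrete, here's a minimal sketch: everything reachable from an initial string within k rule applications, i.e. the emulation-cone analogue for a string rewriting system. The rewrite rules are hypothetical toy examples, not anything from Wolfram's material.

    from collections import deque

    def reachable_set(start, rules, k):
        # Bounded reflexive-transitive closure of the reduction rules:
        # all strings reachable from `start` in at most k rewrite steps.
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier:
            s, depth = frontier.popleft()
            if depth == k:
                continue
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:        # apply the rule at every match site
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen:
                        seen.add(t)
                        frontier.append((t, depth + 1))
                    i = s.find(lhs, i + 1)
        return seen

    # -> ['A', 'AA', 'AB', 'ABB', 'ABBB']
    print(sorted(reachable_set("A", [("A", "AB"), ("BB", "A")], 3)))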

> One frustrating thing that I often find is that much of Wolfram criticism is non-specific and as it's impossible for me to bucket Wolfram I can't bucket his critics either because they tend not to provide enough detail or clarity.

Oy, no good deed goes unpunished :)

Look, I get why it's frustrating.

But, really, there's a reason that rule #0 of technical writing is to define terms before using them. The reader can only do so much.


I would like the PL theory analogue of "emulation cones" and "rulial space" please :)

If these concepts don't have a single name that you can just rattle off, and that we can Google - if describing them in terms of existing theory would take serious effort - then surely identifying and naming them is a major contribution?


PL theory is a bit of a hobby of mine, but I don't really see an exact equivalent to what Wolfram seems to be describing. His rules are like rules in a term rewriting system, but the rules of rulial space are permitted to change so they may be more expressive, perhaps like a higher-order rewrite system.


I'm interested in the research direction of using code-data dual algorithms that modify each other and form a natural-selection process, in order to formally abstract the notion of evolutionary open-endedness (the way Turing completeness is an abstraction of the notion of an algorithm). More details: https://www.reddit.com/r/DigitalPhilosophy/comments/dzghec/o...

Maybe you could suggest some developed language or model for this task? The interesting part is to have code-data duality and a rich enough language to kick-start natural selection that would produce competing algorithms that gradually become more and more complex (and gradually come closer to sentience).

Though the language might not even need to be Turing complete as it stands. A natural assumption would be that the model should be finite in resources, getting access to unbounded time or memory only in the limit (assuming the individual algorithms survive long enough for that to happen).


Specifically just looking for a brief explanation of this line:

>For example, his "emulation cones" are a new name for a very old and extremely well-studied idea. The term "rulial space", similarly, is a new name for an idea that's well-developed in programming language theory.

What's the old well-studied idea, and the well-developed programming theory idea? A one sentence reply is fine.


The idea of defining a set of transition rules and then analyzing the properties of some closure (e.g., think reflexive and transitive) of those rules. For transition rules of various orders, expressiveness, etc. And then stepping back, realizing that's what you're studying, and generalizing it by thinking rigorously about the relationships between those systems, and so on. That's really pretty much the entire modus operandi of a huge chunk of PL research, and there's a ton of mathematical and actual technology built up for doing so. The algorithms that are core components of Stephen's "standard library" for this project scratch the surface.


Scott Aaronson said basically this in closing his review of NKS:

"However, were the book more cautious in its claims and more willing to acknowledge previous work, it would likely be easier for readers to assess what it does offer: a cellular-automaton-based perspective on existing ideas in science"


I don’t understand how this model can ever be predictive. It seems like a good way perhaps to create an approximation engine but I’m not sure what sort of predictive insights you can gain from this approach.

In order to be useful, this model should either create a new testable prediction or speed up computations in existing models while retaining accuracy. It seems to be in the latter camp. I would like to understand more about why this model is more computationally efficient.

Perhaps there’s more work to be done on the process of generating rules or limiting the types of rules. Arbitrarily choosing rules to create properties that look similar to observed physical properties doesn’t seem to point to a fundamental theory.


I think Wolfram makes the point himself that the model may not directly be predictive in the way that states "encrypt" their predecessors. I guess the best bet is to show that there is a direct connection between his model and higher models, because then you can start to look for physical manifestations of things his lower-level model predicts that don't exist in higher-level models. Kind of like how GR keeps getting reinforced by verifying its predictions with phenomena that hadn't been considered before its advent.


If someone wants to read something that is more like a classical physics paper there are these two papers by Jonathan Gorard:

https://www.wolframcloud.com/obj/wolframphysics/Documents/so...

https://www.wolframcloud.com/obj/wolframphysics/Documents/so...


Are these going to be submitted for appropriate peer review?


They already have, as I said in response to your other version of the same question :)


I’ve read through your GR part. Just to counter all the negativity here, congrats for pulling this off! I felt your papers do a much better job of explaining what the idea actually is. It would probably be better to link directly to them here on HN. I’m a bit rusty, but everything you write seems very reasonable to me. One thing I was a bit nervous about, and correct me if I’m wrong, is that you seem to somewhat artificially put in “causal invariance” by restricting all hypergraphs you consider to those that have this property. This isn’t obvious from the rule itself, and it seems like it could just go wrong at some point deep into the time evolution. Are you worried about that?

Edit: Additionally, the model seems very flexible -- to the point that I’m unsure whether it’s surprising to be able to recover GR and other properties. I’m out of my depth here, but given that string theory seems to face a similar problem of model selection, have you looked at equivalences between the two, or does this not make sense in your eyes?


"Perhaps the single most[0] significant idea conveyed within Stephen Wolfram’s A New Kind of Science, and the initial intellectual seedling from which the contents of the book subsequently grow, is the abstract empirical discovery that the “computational universe” - that is, the space of all possible programs - is far richer, more diverse and more vibrant than one might reasonably expect. The fact that such intricate and complex behavior can be exhibited by computational rules as apparently elementary as the Rule 30 and Rule 110 cellular automata, which are so straightforward to represent that they can easily be discovered by systematic enumeration, is profoundly counterintuitive to many people.[1]"

"However, once one has truly absorbed and internalized this realization, it leads to an exceedingly tantalizing possibility: that perhaps, lying somewhere out there in the computational universe, is the rule for our physical universe[2]. If an entity as remarkable as Rule 30 could be found just by an exhaustive search,then perhaps so too can a theory of fundamental physics.[3] The idea that there could exist some elementary computational rule[4] that successfully reproduces the entirety of the physical universe at first seems somewhat absurd, although there does not appear to be any fundamental reason (neither in physics, nor mathematics,nor philosophy) to presume that such a rule could not exist. Moreover, if there is even a remote possibility that such a rule could exist, then it’s slightly embarrassing for us not to be looking for it. The objective of the Wolfram Physics Project is to enact this search.[5]"

[0] There is in fact only one idea in that book, and this is that idea. But is this an original idea? When computer scientists and mathematicians come up with novel results in programming language theory or type theory (or whatever), what is to stop them claiming that they are empirically exploring a "computational universe"?

[1] Probably, but not to programmers, or the computationally literate.

[2] An extraordinary leap within the context of the introduction. Though note, neither is this an original idea; that the universe may be "digital" is not an idea original to Wolfram, and it predates his magnum opus by I don't know how many years. Note there are actual physics projects that seek to kick the tyres of this hypothesis.

Have you got that so far? A recasting in lofty terms of an unoriginal idea, followed by a giant leap to another unoriginal idea which serves only to motivate the project.

[3] So you say, but this is a giant non-sequitur.

[4] Why just one rule? And the rule hardly runs itself. What does it run on? Great, you have a generalised term-rewriting system (how completely un-novel). "What rewrites the terms?" How is this not the first question you ask yourself?

[5] Hey, why not just say: “You know that "it from bit" idea? We have a hunch that term rewriting hypergraphs is the way to go. These are our explorations. We've encountered stuff that echoes contemporary physics.” Why not write the intro like that? Not grandiose enough for you?

===

Any sufficiently worthy "it from bit" project must answer the following questions.

(1) Given that we know that any sufficiently powerful computing system can emulate any other, what motivates your choosing this particular computational system and model?

(2) Demonstrate convincing physics (not toy models)

(3) Make testable predictions – this is not something for "down the line", this is what theories of anything must do. No predictions, no dice, no matter how nice.

(4) Is it software all the way down? If so, how? If not, what is the hardware and what does that imply?

(5) I would direct this last point at all TOE-heads like Wolfram and Weinstein and whoever. Why does it have to be simple? Why does it have to be elegant? Why is it always encoded in the formal systems you happen to play around with (geometry for Weinstein, term-rewriting systems / cellular automata for Wolfram).

===

I think the wider scientific community needs to call time on savants like Weinstein and Wolfram.


I think the way you nitpicked his phrasing was mean and unconstructive.


>[4] Why just one rule?

Is a specific combination of rules not itself a rule? A lot of descriptions of Conway's Game of Life describe it as multiple rules, and other places refer to its whole setup as a "rule". Rule 30 is sometimes called a "rule set". I don't think there's a strict difference between a rule set and a rule, though "the simpler the rule(set), the better" seems easy to agree on.
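To illustrate the point: Rule 30's eight if-then clauses are exactly the bits of the number 30, so the whole "rule set" collapses into a single 8-bit lookup table. A quick sketch (periodic boundary, single seed cell):

    RULE = 30                            # the 8 output bits, as one number

    def step(cells):
        n = len(cells)
        # Each cell's new value is one bit of RULE, indexed by the
        # 3-bit neighborhood (left, center, right).
        return [(RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                          | cells[(i + 1) % n])) & 1 for i in range(n)]

    row = [0] * 31
    row[15] = 1                          # single live cell in the middle
    for _ in range(15):
        print("".join(".#"[c] for c in row))
        row = step(row)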

...

>(5) I would direct this last point at all TOE-heads like Wolfram and Weinstein and whoever. Why does it have to be simple? Why does it have to be elegant?

A theory with fewer free parameters is better than one with more. I think this extends to the complexity of the theory too: a theory with more rules (rule A applies to small stuff, rule B applies to big stuff, rule AB-patch applies to mediumish stuff) is worse than a theory that explains the same stuff with fewer rules (a single rule X that naturally has A-behavior with small stuff and B-behavior with big stuff), in the same way a theory with more free parameters is worse than a theory with similar predictions and fewer free parameters.

It's Occam's Razor. Complex theories can have lots of different variants that each match the existing evidence but make different predictions in untested scenarios. Simpler theories have fewer variants that successfully match the existing evidence and tend to be more useful for making predictions, indicating that they match reality better.

>And the rule hardly runs itself. What does it run on? Great, you have a generalised term-rewriting system (how completely un-novel). "What rewrites the terms?" How is this not the first question you ask yourself?

Is that not an obstacle for any theory? Tons of theories are meant to model what we see, without presuming some underlying mechanism. Newton came up with a theory of gravitation that modeled how objects tend to pull each other in without any idea of why nature chose for that to happen.

Even granting that not explaining what executes the rule of reality is a problem, a simpler theory with fewer rules is obviously better, because there are fewer unexplained rules.

>[5] Hey, why not just say: “You know that "it from bit" idea? We have a hunch that term rewriting hypergraphs is the way to go. These are our explorations. We've encountered stuff that echoes contemporary physics.” Why not write the intro like that? Not grandiose enough for you?

Personally, I found their intro to have a lot more background detail and motivation explained. Is your primary objection really that they were too grand for a few paragraphs?

>(1) Given that we know that any sufficiently powerful computing system can emulate any other, what motivates your choosing this particular computational system and model?

Any system capable of having relativity and QM-like effects emerge out of it as described is interesting enough to study, even if it did end up having defects that meant it couldn't be a good model of reality overall.

I feel like you're treating this as if he's asking everyone to commit themselves fully to this model instead of to explore it.

>(5) I would direct this last point at all TOE-heads like Wolfram and Weinstein and whoever. Why does it have to be simple? Why does it have to be elegant? Why is it always encoded in the formal systems you happen to play around with (geometry for Weinstein, term-rewriting systems / cellular automata for Wolfram).

Presumably they chose those systems to play around with to begin with because they believed those systems were promising.


> I don't think there's a strict difference between a rule set and a rule

Okay then. Why just one rule or rule set? I meant as much when I wrote what I wrote.

Why a tiny/simple initial starting state and one rule (or rule set)? Sure, simple elegant formal systems are enticing to our brains, but why assume that of our universe? Why not even try to explain why you feel this to be true? It's a pretty huge assumption in my eyes.

> A theory with fewer free parameters is better than one with more.

Sure. But the full statement is: the theory which is in best accordance with reality and has fewer free parameters is better. Starting from some entirely arbitrary simple formal system, working upwards, and hoping you'll bump into reality along the way is very, shall we say, optimistic. And I do mean entirely arbitrary: because you haven't motivated why this system rather than another, the choice is entirely arbitrary.

> Is that not an obstacle for any theory?

Yes, and with good reason. Because TOEs claim to be fundamental – how can they be fundamental if there's something underneath them, so to speak? Which came first: the PC or Windows? Can't have one without the other.

> Personally, I found their intro to have a lot more background detail and motivation explained.

I didn't.

> Is your primary objection really that they were too grand for a few paragraphs?

I object to it on stylistic grounds and also, you know, I'll be the judge of the intellectual consequences of your theory. Lay out your theory and let others hype it up if they so wish. (I'm so sorry if asking for a little intellectual humility is asking for too much these days. /s)

But mostly I object to the huge leap in their argument (as I said). It'd be nice for once if people like this were more honest that, intuitively speaking, there clearly are big gaps in their reasoning.

> I feel like you're treating this as if he's asking everyone to commit themselves fully to this model instead of to explore it.

Yes, that's exactly what I'm doing. Why should I explore it if you don't give me a compelling reason to explore it?

> Presumably they chose those systems to play around to begin with because they believe those systems were promising.

No. They chose the formal system they were familiar with.

And you skipped my whole part about making testable predictions. Which is a pretty big part.

If I want to read breathless computer science / physics / mathematics articles I've got Quanta Magazine for that: https://www.google.com/search?q=+%22fundamental%22+site%3Aww...


>Starting from some entirely arbitrary simple formal system and working upwards and hoping you'll bump into reality along the way is very, shall we say, optimistic.

Honestly, I think this is a great way of describing what they're trying to do. I guess I think it's a little more realistic than you do: if our universe's physics could be described by a program on the order of tens of bits (I give an argument below for why we could expect that), then it's possible for us to come up with something like it from scratch by trying to construct a simple program that could have rich dynamics. If someone came up with a tiny model that happened to have physics emerge in it resembling our own, I'd be really interested to see how much we could learn from the model. They're trying to show that they happened to bump into interesting parts of relativity and quantum mechanics.

If they really did bump into relativity and QM from a simple system, then I think this is significant enough for more attention, even if they don't have any testable results yet, because it's possible that testable results will come from it. It might be that this model deeply resembles reality and we can flesh it out further, or it might just be that there is a class of models (that includes our physics) where relativity+QM arises (but not the rest of our physics, like the standard model), and we can learn about relativity and QM by studying models where the pair of them arise. Investigating this model is hard and it makes sense they want help.

> Sure, simple elegant formal systems are enticing to our brains but why assume that of our universe? Why not even try to explain why you feel this to be true? It's a pretty huge assumption in my eyes.

>Yes, and with good reason. Because TOEs claim to be fundamental – how can they be fundamental if there's something underneath them so to speak.

I've always imagined (and given the article's talk about "rule space", I think what Wolfram believes is something roughly similar) that the true-root TOE looks like the mathematical universe hypothesis / UDASSA (http://fennetic.net/irc/finney.org/~hal/udassa/) where in some sense, every possible computation exists, and they have measure (~the probability we find ourselves in it) inversely related to the length of information describing the computation (because if every possible computation existed, then every finite-length program would be instantiated infinite times by equivalent infinite-length programs with lots of ignored garbage code, and shorter programs would be instantiated proportionately more often).

From that, you would expect with large probability that our own universe's physics is described by the shortest possible program/rules that gives dynamics as rich as we see. If we imagined some universal computational language, it seems like specific cellular automata and hypergraph-rewrite systems could maybe be specified in tens of bits, so they're prime candidates for exploring. Even if those systems specifically aren't how reality works, then if they can produce dynamics about as rich as reality, it implies that our reality's rules might be even shorter (or else we'd be more likely to exist in a cellular automaton or hypergraph-rewriting system). At such short program lengths, the strategy of guess-and-check could be realistic, though the check part is really hard, since we can't directly compute a significant number of timesteps and instead have to reason about what large-scale patterns must emerge from the program.

(Wolfram specifically seems to believe that the hypergraph-rewriting system is universal enough to fill the role of the universal computational language, but I don't think that's strictly critical. As long as it's sufficiently simple to represent in whatever the root-TOE computes by, then it could be a candidate for our universe's physics. Hmm, I guess it would make sense that the way the root-TOE computes would have structure in common with whatever our universe's physics program is, because then our universe's program could be specified with fewer bits.)
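
To make the measure argument concrete, here's a toy calculation (my own Python sketch, with made-up bit counts, not anything from the article or from Finney's pages):

    # Toy version of the UDASSA-style weighting described above: if every
    # program exists, and a program of length L bits carries measure ~2^-L,
    # then the shortest program producing some behaviour dominates all of
    # its padded re-implementations. The lengths are invented placeholders.

    def measure(length_bits):
        """Measure assigned to a single program of the given length."""
        return 2.0 ** -length_bits

    shortest = 40            # hypothetical: "our physics" in tens of bits
    padded = shortest + 10   # same behaviour plus 10 bits of garbage code

    ratio = measure(shortest) / measure(padded)
    print(f"shortest version carries {ratio:.0f}x the measure")  # -> 1024x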


And I've taken a virtual hammering for having the temerity to ask that they take the time to articulate their motivations and assumptions, at least as well as you have here, with intellectual humility and consideration for the reader.

So be it.


Cheers!


As much as the guy can be annoying, the associated "Project Announcement" is a fascinating 20,000-word-ish article. [1]

I've long assumed that the basic structure of the universe must be some kind of graph, simply because it's the most elementary structure there seems to be, and so easily gives rise to dimensionality. So seeing someone try to tackle this bottom-up and see if it can ultimately give rise to quantum mechanics and beyond sure is fun to watch.

[1] https://writings.stephenwolfram.com/2020/04/finally-we-may-h...


"I've long assumed that the basic structure of the universe must be some kind of graph, "

Do you mean this literally, or rather: "the basic structure of the universe CAN BE DESCRIBED by some kind of graph"?


Literally, i.e. the fabric of space-time.

Then particles and energy are patterns in the graph -- we can describe useful physics in terms of the graph, but the graph just is; it's not a description or approximation of something deeper.

Unless there's a semantic difference you're getting at that I'm not aware of?


Interesting concept, but I fail to see how it can model the whole universe. Describing it, maybe, but if the universe is a graph, well, for a start, what is the graph made of? Is it "material"? Is it "information"?


Any fundamental theory of the universe will have to posit a bottom, base structure that everything else is "made" out of...

...but therefore that base isn't ever going to be made of anything itself, by definition. It just is. It is what it's described as -- no more, no less.

It's not going to be material or energy. I suppose "information" is probably as good a word as any if you want to think of it that way.


> Any fundamental theory of the universe will have to posit a bottom, base structure that everything else is "made" out of...

I think that's being too closed minded about what a fundamental theory of the universe will look like. The only way it could be like that is if that fundamental structure is the only fully general option available.


There could be no bottom... there would also never be any way to prove that the proposed bottom is truly the bottom and not just an event horizon.


If a theory's predictions are 100% consistent with experiment, then it becomes the bottom for all practical purposes, since all we have is practical knowledge. Saying we can't prove it's the bottom would mean as much as saying we can't prove the universe is on the back of a stack of giant tortoises. Technically true, but of no practical importance.

Of course, if a theory doesn't agree 100% with reality, then there's more to find in a practical sense, in which case "not having reached the bottom" is true by definition.


That is philosophy, not physics. We describe what matter does with physics, not what it is. Personally I think the "void", the Buddhist concept, fits the bill nicely. Objects are created from the void and from those objects all possible objects are created. All of mathematics is expressed, and in one corner of that infinite structure lies our universe.


Do you think the simulation uses Neo4J or JanusGraph?


It’s probably implemented in Mathematica, so probably not.


It's all Mathematica/Wolfram Language. And a very, very warm computer.


This is absolutely fascinating.

Even just a demonstration that rule application on hypergraphs is expressive enough to potentially describe (for some rule yet to be found) Quantum Mechanics and General Relativity is incredibly exciting in my opinion.

Arguably string theory is in a similar situation: a very powerful framework that could potentially describe reality if accurately utilized.

Also, notably Gerard 't Hooft has been working on a very similar topic recently: Quantum Mechanics described in terms of cellular automata [0].

[0] https://arxiv.org/abs/1405.1548


The idea is kind of old, and even Wolfram played with it in his previous work/book, i.e. A New Kind of Science.

If you're interested in the subject and can also appreciate what feels like a piece of performance art, give the following book a shot: https://www.amazon.com/Alien-Information-Theory-Psychedelic-... It's mind-blowing and uses as one of its starting points Wolfram's 1D cellular automata work.


Despite 80% of comments focusing on Wolfram's pompousness, this is an interesting expansion of cellular automata to arbitrary dimensions.

Coincidental timing of publication - one day after John Conway's passing. We may be in an n-dimensional Game of Life after all.


Interestingly this mirrors other disciplines. Consider for example Schelling's model of segregation in social science, which is (probably not by accident) very similar to cellular automata.

Just a few years later, people translated this to graphs - a network. The spatial interpretation aside, the idea surely should hold for higher-dimensional social relations.

And that's where we reach an interesting point that Wolfram writes about: these systems are useful as long as we can calculate or derive a state. If all we can do is simulate it, then it's much less useful. And indeed, creating such models is the true "art".

So while the "idea" of using relations and hypergraphs lays a foundation that I am sympathetic to, I also feel like the things "on the to do list" are, indeed, the meat of the issue.


A lot of people criticize Wolfram, but I think the project he's pursuing is definitely worthwhile. Quantization has proven to be an incredibly powerful tool, and it's only the first step to turning our continuous physics into a discrete model.

Starting with cellular automata is flipping the table over and starting the game anew, starting with discrete models instead of continuous models, with the ultimate goal of producing a purely discrete theory for all of physics.

Discretizing everything has the potential to provide new mathematical tools and new insights that our continuous theories might obscure. There's a lot of hidden computation in the reals and complex numbers that a discrete theory would have to explicitly unpack, and some of these details might potentially shed light on some real puzzles.
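
As a small illustration of that point (my own toy example, nothing from the project): even sqrt(2) unpacks into an explicit, purely discrete procedure the moment you refuse to treat real numbers as primitive:

    # Digit-by-digit sqrt(2) using integers only: the "hidden computation"
    # a real number quietly carries, made explicit as a discrete process.

    def sqrt2_digits(n_digits):
        """First n_digits decimal digits of sqrt(2), via integer bisection."""
        target = 2 * 10 ** (2 * n_digits)   # compare squares at fixed scale
        lo, hi = 0, 2 * 10 ** n_digits      # sqrt(target) lies in [lo, hi)
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if mid * mid <= target:
                lo = mid
            else:
                hi = mid
        s = str(lo)                         # lo == floor(sqrt(2) * 10^n)
        return s[0] + "." + s[1:]

    print(sqrt2_digits(30))   # 1.414213562373095048801688724209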


Graph grammars are Turing complete so I don't see why they couldn't express the fundamental theory of physics, but it seems like a weird way of going about the problem.


A Turing machine cannot produce randomness or non-determinism.


No, but it can produce something that's unpredictable from the original code. And that's spitting distance from random.


Yup, "unpredictability" or more precisely computational irreducibility is when things tend to get interesting... and this happens deterministically just fine.


In addition, as soon as a system comes in contact with the real world, randomness is nearly free and easy to get.


We don't know if physical randomness truly is so. It could be pseudo-random with a seed (superdeterminism).


How about Conway and Kochen's Free Will Theorem? Is it compatible with superdeterminism?


If you are interested in such topics, read

't Hooft, Gerard: The Cellular Automaton Interpretation of Quantum Mechanics

https://link.springer.com/book/10.1007%2F978-3-319-41285-6

Reference number 23.


Yes. Superdeterminism does away with the observer as a separate thing, so the question of free will doesn't even come up. Everything is just one evolving system.


Well, you can give it as many bits of randomness as you need as input (the tape).


Anybody who has coded Game of Life can draw a parallel to this. Simple rules can lead to arbitrarily complex systems. I'm all on board with this concept. Graph theory is amazing and useful in many ways we don't understand yet. I'm all on board with this concept as well.

But a graph has nodes and edges. Nodes, in this case, can be particles... I guess? But what are the edges? When a "simple rule" is applied to a collection of particles, what is the force that connects them after the interaction? I read some of the material in detail and skimmed some of the rest, but there was a lot of setup and cool graph visualizations and not a lot speaking to this core question.

Disclaimer: I'm not a theoretical physicist but I have read "Quantum Physics for Babies" at least 50 times.


Still not done reading, but in this representation, particles are stable shapes in the successive steps of the hypergraph (e.g. gliders and other stable shapes in the Game of Life).
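
To make that concrete with plain Life (standard rules, nothing specific to the hypergraph model): the glider is a 5-cell shape that exactly recurs, translated one cell diagonally, every 4 steps -- a toy "particle":

    from collections import Counter

    # Standard Game of Life step: a cell is alive next generation iff it
    # has exactly 3 live neighbours, or has 2 and is currently alive.
    def step(live):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = step(g)

    # After 4 steps the same 5-cell shape reappears, shifted one cell
    # diagonally -- a persistent pattern riding on the update rule.
    assert g == {(x + 1, y + 1) for (x, y) in glider}
    print("glider persisted, translated by (1, 1)")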


I don't know anything about Wolfram's theory, but if it were me creating a graph-theoretical foundation for physics, then nodes would be events, and maybe edges would be particles. Very Feynman-graph-like.


Yep. A whole lot of this is "Automata are relevant! I'm relevant!" and some hand-waving and "Doesn't this loooook like a mesh? See! We made space-time!"

Save your time, just read More is Different: https://science.sciencemag.org/content/177/4047/393


With respect, I don't think that our derivation of the conformal structure of spacetime, or of the Einstein field equations in the continuum limit of infinite causal graphs, is "hand-waving". See, for instance:

https://www.wolframcloud.com/obj/wolframphysics/Documents/so...


I've just read it based on your recommendation. I enjoyed it but I don't find that it obviates Wolfram's work, or even overlaps with it much at all (and I'm no big Wolfram fan).


Can someone tell me how this relates to the hype, some decades back now, around chaos theory in dynamical systems?

The idea then was the following: We do not need "randomness" and probability theory in our models, once we realize that deterministic systems, even simple ones, can produce arbitrarily complex outcomes.

For example, this was all the rage in economics in the 90's. Economies are certainly dynamical systems, and besides the genuinely proper reasons for probabilistic reasoning, one wanted at least to consider that modeling them as deterministic systems without any randomness could fit reality.

I also make this point because, in this instance, graph-based discrete models often have a continuous equivalent, and it depends on the case which of those offers more useful outcomes.

If I remember correctly, the hype about chaotic systems died down in part because while we could formulate substantively powerful foundations, it was extremely difficult to "get something useful out of it" and it all seemed to go more or less nowhere.


Applying arbitrary iterated transformation rules on graphs is very, very cool.

But I don't think there's much in the way of existing theoretical math attacking this area.

Much will need to be invented / discovered.
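
As a toy illustration of what iterated rewriting looks like in code (the rule here is one I made up for illustration -- subdivide every edge through a fresh node -- not one of the project's hypergraph rules):

    # Toy iterated graph rewriting: apply the made-up rule
    # {x, y} -> {x, z}, {z, y} (z fresh) to every edge simultaneously.

    def rewrite_step(edges, fresh):
        out = []
        for (x, y) in edges:
            z, fresh = fresh, fresh + 1   # allocate a new node id
            out += [(x, z), (z, y)]
        return out, fresh

    edges, fresh = [(0, 1)], 2            # start from a single edge
    for gen in range(1, 7):
        edges, fresh = rewrite_step(edges, fresh)
        print(f"generation {gen}: {len(edges)} edges, {fresh} nodes")

    # Edge count doubles each generation; the whole "state" is just the
    # evolving edge list, and any geometry is emergent from it.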


There is a big existing literature. Look at 'graph grammars'.


This makes perfect sense, bravo. Go straight to the most general structure possible and try to understand which part of the infinite mathematical structure of the multiverse we inhabit. How interesting that within all possible structures they have found objects that so readily match up with our own reality.


Showing how Yang-Mills and SU(3)×SU(2)×U(1) fall out would be a natural starting point for a proposed unification theory. This model doesn't even try to explain the existing particle hierarchy as a special case.


Most of this structure comes from the ordered set properties. Special relativity can be seen as a consequence of trying to quantify length by projecting to two ordered sets.

K.H. Knuth, a professor at Albany, has been working on this for some time. He also has some results about QM.

A free version of his paper "A Potential Foundation for Emergent Space-Time" (2014) is linked below. What Wolfram is talking about seems to me a consequence of the principles K.H. Knuth has been investigating since at least 2011.

https://arxiv.org/abs/1209.0881


Only skim-read the article. What seems to be lacking are any testable predictions of this theory, or something like derivations of existing measured constants that one would think you would get from a fundamental theory of physics.


I briefly read through the article. As a PhD student in theoretical physics, I can see that some basic ideas of GR and QM can be interpreted out of these graphs. But I'm not convinced why these graphs have to be generated by a single rule. I don't see any motivation for restricting to a single rule, or even to procedural generation at all.

It also reminds me of causal set theory, which I heard about briefly from a professor a few years ago. I was told that it recreates vacuum GR quite well.


> But I’m not convinced why these graphs have to be generated by a single rule.

Wolfram claims that they don't need to be; on the contrary. Take another look at the last section of the article.


Incorrect predictions are all the rage nowadays, so here is mine: they will find an infinite amount of rule sets that could be the fundamental theory. However, none of it would be testable, so people would pick their own based on their notion of simplicity and beauty. Eventually people would start looking into common properties of the candidate solutions and transformations between them.


Take it further - eventually we will have conceptual physicists whose output is judged on purely aesthetic grounds, and rich people will collect them.


That’s called string theory.


No need to be so smug, he says exactly that in the article. Take another look and read the section "Why This Universe? The Relativity of Rules".


Well, phenomena analogous to what is described by quantum mechanics, for example, have been found in physics of solids (e.g. phonons). Shouldn't be surprising, then, that such a universal and rather abstract structure as a graph (especially if you allow to 'update' it) can be made to reflect some aspects of physical reality. Computations are computations, whichever way you approach them. Classical electrodynamics, too, has several equivalent formulations (with the tensorial notation being pretty "graphical" IMHO), and some people do like to create a fuss around particular ones, but that ain't nothing we haven't seen before. What is important, though, is to always make a distinction between a model and the real thing: it wouldn't make sense to say that spacetime is some kind of a graph. (Or would it?)


I'm pretty sure that this would violate Bell's theorem, and I'd love an explanation of why it wouldn't. It looks to me like a system of local, hidden variables, unless there's some sense in which changes in the hypergraph can propagate faster than the speed of light.


We can prove violation of the CHSH inequality, and hence compatibility with Bell's theorem, as a natural consequence of our formalism. See, for instance:

https://www.wolframcloud.com/obj/wolframphysics/Documents/so...

Or, for a less technical version of the basic idea:

https://www.wolframphysics.org/questions/quantum-mechanics/h...


The theory needs a lot of development, so it would be hard to say whether it could or could not comply with Bell's theorem. But since it is Turing complete, presumably there is some variant of the theory which complies with Bell's theorem.

I find it interesting that under this theory regions of space and properties of objects are emergent phenomena. I suspect that under some variants of the theory, properties of objects could act as if they are both random and non local.


I'm only halfway through the page, but so far it looks like it's describing a system that's compatible with the Many-Worlds Interpretation, and MWI is local without violating Bell's theorem.


It doesn't rule out superdeterminism.


It doesn't look like a local theory at all.


Also... one of the fundamental flaws of the Wolfram Physics Project's meta-model is that it uses weightless graphs.

There is literally nothing in his model that can prevent his beautiful graphs from simply collapsing in on themselves, with all the points just piling up on top of each other into a naked singularity before they've even succeeded at forming any space at all!

For starters, empty space should have weighted (space-like) connections with a weight value of 1.

This accomplishes two things: a) it prevents the space from outright collapsing in on itself, and b) it forces the space to form a stable configuration.

For part b), imagine a 2-dimensional discrete space (graph) made of hexagonal shapes.

You will immediately notice that weighting knot (node) connections (with the value of 1) forces that space to maintain a stable spherical configuration (provided that it had formed uniformly to start with).

Expand this concept to 3 dimensions, and you'll get a similarly stable 3D configuration (that has been uniformly expanding from the very first step of 1), with 2D surface (at any distance r/step k from the center) being a very good candidate for defining a holographic principle (of some form or another) on.

Now...

... wait for Wofram to expand his model with weighted graphs... and then sue his sorry ass for stealing somebody else's idea.

P.S. One of the fundamental flaws of WPP's meta-meta-model is... Wolfram himself.

A terrible, terrible choice for managing the, literally, biggest breakthrough in physics since QM.


The most basic of basic first steps would be finding an infinite series for calculating Pi that has only positive elements... like Ramanujan's series, but not quite, because the elements of this series have to produce numbers Nk, for each step k (k=0->infinity), that satisfy the following condition: Nk/k -> 2*Pi for k -> infinity.

Until this is achieved (finally giving the complete description of this discrete space/graph of this space), nothing further can be done in physics.

Furthermore... it is not even necessary to prove that this space is discrete, because the uncertainty principle already proves that it is, by taking notice of the fact that in continuous spaces the quantum of action/momentum/energy can be made arbitrarily small (and, in fact, is equal to 0), which means that h can be made arbitrarily small (and, in fact, is equal to 0).

In other words, the uncertainty principle only makes sense in discrete spaces... and, as experiments (from 100+ years ago) have already proven, h > 0 in the space this universe is made of (meaning that this space is discrete). Q.E.D.

Edit: typo and... typo with missing constant 2 from circumference formula.

Addition: Pi is Pi only because of the spatial (graph) configuration. In another uniform space, with a different configuration, the circumference formula would have the form Nk = a*C*k, where a is some value (like 2), and C is the (Pi-like) constant derived from the spatial (graph) configuration *alone*.

Edit: Added missing k to the general circumference formula. I should really proof-read better, but I'm in a kind of hurry, so you'll forgive me an occasional mistake or two.
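
For what it's worth, the Nk/k quantity is easy to experiment with numerically. On an ordinary square lattice (an arbitrary example graph, which if anything illustrates the addendum's point that the constant depends on the configuration), the shells grow as Nk = 4k, so Nk/k settles at 4 rather than 2*Pi:

    # Count N_k = number of nodes at graph distance exactly k from the
    # origin (BFS shells) on the 4-neighbour square lattice.

    def shell_sizes(max_k):
        dist = {(0, 0): 0}
        frontier = [(0, 0)]
        sizes = [1]                                   # N_0 = 1
        for k in range(1, max_k + 1):
            nxt = []
            for (x, y) in frontier:
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb not in dist:
                        dist[nb] = k
                        nxt.append(nb)
            sizes.append(len(nxt))
            frontier = nxt
        return sizes

    for k, n in enumerate(shell_sizes(8)):
        print(k, n, n / k if k else "-")              # N_k / k -> 4 here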


This looks very interesting and impressive, and I hope this will catch Jonathan's eye.

Have you thought about how the spin of particles could work in this framework? And have you thought about what it would look like to extend local gauge invariance from spacetime to the multiway?

(I haven't read the actual papers and the assumptions that went into them yet, but the summary is very impressive and exciting: a basic framework that can derive general relativity and the path integral, put them together on an equal footing in an extended "multiway", and possibly give natural explanations to several long-standing puzzles of physics [dark matter, dark energy, black holes, the black hole information paradox, the measurement problem, inflation, the dimensionality of the universe, the arrow of time, ...]. I don't know if you can eventually find a rule that will result in the Standard Model, but everything fits so far, and that is possibly not a coincidence. Great work so far! Of course, it may or may not lead to something that recovers all of the physics that we know today, but regardless, thanks for pursuing this exciting avenue!)


> A Project to Find the Fundamental Theory of Physics

Physics noob here, but I do a lot of theory / model development in other areas, at work and in my batcave. So I'm wondering: What if there are multiple fundamental theories, just like there are millions of different ways of looking at things, all with their various leverage/application points?

> we’re going to have to find the specific rule for our universe

I just wonder what makes someone so sure there is a specific rule, when humans are good at generating untold numbers of rules, and in my experience those rules can be very effective _and_ work best together when held lightly, rather than exclusively. (If anything, humans seem more likely to become dangerously dogmatic when they feel they have identified "the one" of something. Like a human anti-pattern.)


I think this might relate to what he discusses at the end of the very long Project Announcement blog/article. Something like there may be many different rules, but the one we find will be correct for us due to the framework we have for evaluating the universe (senses, math, etc.).


If it's correct relative to the framework we have--do we have "a framework" for evaluating the universe? It seems more like there are many. Within senses--millions of frameworks. Within math, millions. And for good reason; this lens offers a different look at this aspect, and this other lens offers a singular view of that one. Stars are white. Stars are blue or red or yellow. Stars have no color. All of these are true and helpful, and also conflicting in some context or another.

Looking at the way he's evaluating universes and aiming to find one that's so well matched to ours, just intuitively I also have to wonder how many different ways there are of modeling our universe that are worth keeping around no matter how poorly they fit in even some big ways.

Interesting stuff to think about though.


How is what's mentioned here not just the old idea of determinism, demonstrated or shown from a different perspective?

I always thought science was biased toward the assumption that the universe follows basic deterministic rules, and that the purpose of doing science was to find the nature of those rules. What I mean by this is that when we do science, we are trying to determine the nature behind things - are we not? Therefore, determining things, or determinism, is at the heart of doing science. Assuming otherwise seems like a disservice to practicing science, but maybe that's just me?

Granted, I know nothing is settled yet, but I have a hard time seeing anything new here. Just old ideas communicated in an analogous new perspective…


> I have a hard time seeing anything new here.
> Just ... [a] new perspective...

okie dokie


tomato-tomato.


The phrase 'a new kind of science' is a huge red flag, but apart from this I don't think I can evaluate what I'm reading. Does the article make any sense? Does it at least hint at a useful direction? Or is this a case of someone holding a hammer (the language) and seeing everything as a nail (how the universe works)?


It doesn't pass the smell test. I'll be very diplomatic and say this to our high school and undergrad readers: Mathematica is a lovely program, but working for this man on this project will not do good things for your scientific career.

Among the many many many red flags, Wolfram claims to have discovered that complexity can emerge from simple rules in the early 80's. This is a full decade after P. W. Anderson's seminal paper [More is Different](https://science.sciencemag.org/content/177/4047/393).

In fact, if you haven't RTFA yet, save your time and just read the original, rigorous, less self-aggrandizing paper: https://science.sciencemag.org/content/177/4047/393


> Among the many many many red flags, Wolfram claims to have discovered that complexity can emerge from simple rules in the early 80's

Total red flag. His obsession with this, though, does not seem ill-placed. As far as smell tests go, I think this thread is worth pulling.


I am also put off by his self-obsession, but it is also true that I discovered how complexity can emerge from simple rules in the 90's. Like Wolfram, I was not first, but I did discover it.


I can’t seem to access more than the cover page of that paper.

Do I need a journal membership?


Found a pdf linked from his Wikipedia page

http://robotics.cs.tamu.edu/dshell/cs689/papers/anderson72mo...

Though I must admit I don’t really understand it. I was expecting automata to be referenced somewhere.

I enjoyed A New Kind of Science, though I feel like it would have been a much shorter book if Wolfram had included less self-aggrandizement.


It's the paper that established "emergent phenomena" as an interesting and viable field of inquiry. When you understand it, it changes the way you think about the world.

It's more fundamental than automata papers, so of course it doesn't address automata, but automata papers should reference it.

"A New Kind of Science proposed ideas that were not new, were not kind, and were not science. Discuss."



Eddington too had a 'Fundamental Theory'.

https://en.wikipedia.org/wiki/Arthur_Eddington#Fundamental_t...

"Eddington believed he had identified an algebraic basis for fundamental physics, which he termed 'E-numbers' .... These in effect incorporated spacetime into a higher-dimensional structure. While his theory has long been neglected by the general physics community...."


I've always liked the idea that the universe worked something like a cellular automaton at its lowest level, but I was always uncomfortable with how that presupposed a specific grid and a very rigid system of time. This system of hypergraph rewriting excites me because it seems like a more generalized form of, or alternative to, cellular automata that fixes those issues. And then it's super exciting to see that you can get relativity and QM-like effects as emergent properties from this sort of system.


Each generation imagines God as whatever is coolest right then. In Newton's day, the universe was a tower clock.

Humanity will be around for, well, a few more years, not a blink of the universe's eye. Our grandchildren will have time to look into things, if civilization doesn't collapse. What are the odds that they will decide we were right, that the universe really is something we know about already?


I got stuck on causal invariance - not getting the orange edges on the graph. It seemed like the root should've been connected to every node, so I'm clearly missing something.


I'm not qualified to consider the physics ramifications of what's presented here, but the graph rewriting rules are really interesting and almost seem glaringly obvious in retrospect. I don't know if there's prior work with them but I can think of a few immediate applications for synthetic data generation that would benefit from this approach.


Is it possible to use the approach in this paper, which uses machine learning to derive the simple rules for the creation of a specific state, and apply it to Wolfram's Physics Project?

https://www.youtube.com/watch?v=bXzauli1TyU


The concept of "connectives" as explained in Frank Herbert's Whipping Star book comes to mind.

One critical remark: I am not impressed by the "complex looks" of the hypergraphs, because the initial premise is that it does not matter how we visualise them.


Not qualified to comment on the depth of physics in there, but the theory sounds very intriguing and fascinating. It's important to push our collective thinking - and the authors definitely have the expertise to do that.


I wonder if this is in response to Eric Weinstein's Geometric Unity presentation he did on April 1st. Yesterday, on Lex Fridman's podcast, he mentioned how since he revealed his theory there wasn't any feedback from the scientific community. BTW, I highly recommend watching Eric's Portal.


I would love to see thorough, detailed analysis and critique of Weinstein's Geometric Unity theory and the latest version of Wolfram's cellular automata-like theory of reality. (Weinstein still needs to put up a formal paper, though.)

At the time of writing this comment, the majority of responses here are just calling the guy a pompous crank, rather than pointing out specific issues with the central claims of the proposal. Funny that this is on the website created by the guy who proposed the Hierarchy of Disagreement [0]. There are many comments with valid critiques, but so far there are more of the other kind.

Yeah, Wolfram and Weinstein have some eccentricities, and the intro section to this is kind of unnecessary and grating and could probably be cut out, but they're also very intelligent people who have discovered and created things which are both "new and true" (and/or new and useful) rather than merely "true or new, mutually exclusively" as people like to snidely parrot.

I'm not saying this theory or Weinstein's isn't irreparably riddled with holes. Just that the nature of the criticism often seems bizarre, poor quality, and directed at their personalities rather than their content, like in this comment section. It'd be much more refreshing to see the work criticized on its merits rather than its tone or the personality traits of its author.

[0] https://en.wikipedia.org/wiki/File:Graham%27s_Hierarchy_of_D...


Has any commenter here read the blog post? He starts from "a network that evolves through simple rules", then goes on to derive how this results in space dimensionality, time, SR, GR, the uncertainty principle, the path integral, and entanglement, and makes predictions along the way.

This is regardless of the specific rule that computes our universe. Also, since that rule is Turing complete, it's computationally equivalent to every other Turing-complete rule, and since we can't produce a machine IN the universe that can simulate the universe, it kinda doesn't matter what the rule is.


Lots of "I" right in the first paragraph. If only Wolfram would hire some editors so that we could enjoy his ideas without all the distraction...


The livestream just started (www.youtube.com/WolframResearch). Sounds like it's going to be quite interesting and informative.


I'm very confused about how I can get from these pretty graphs to all of these: https://physics.info/equations/


Wolfram’s header image apart from the article is available in light and dark variants that can be pinch-zoomed more easily:

http://www.wolframphysics.org/visual-summary/

Based on this image alone, having not yet read the article, it looks like he’s taken his prior insights on cellular automata (Conway’s Game type stuff) and advanced them forward to quarks and Feynman diagrams.

I typically have a very rough time with Wolfram’s writing but the image is, at least, simple enough to follow. “What are the rules for particle timeflows?” is certainly a question that’s interesting though my phrasing is probably terrible.

EDIT: Yup, it’s definitely quarks X cellular automata. Clearly an extension of previous Wolfram work, still just as enthusiastic / savior-ish as ever. I hope it pans out somehow in pragmatic real world outcomes someday. Bonus link to diagram showing a fate decision tree behaving like a Conway glider:

https://writings.stephenwolfram.com/data/uploads/2020/04/040...


What does that visual summary even communicate?


“What if the causality that led to the universe can be modeled using a form of Conway’s Game of Life automata built on Feynman diagrams?”

Note that I have no idea if Wolfram is right or not, but I’m glad I tried and failed to read New Kind of Science years ago. The mindset/approach were worth it and make it possible to follow along with today’s post.


As long as some variant of the Game of Life isn’t one of the contenders…


This is incredible! Thank you for sharing it.


> OK, so how does it all work? I've written a 448-page technical exposition (yes, I've been busy the past few months!). Another member of our team (Jonathan Gorard) has written two 60-page technical papers. And there's other material available at the project website. But here I'm going to give a fairly non-technical summary of some of the high points.

I wonder if this hapless postdoc-equivalent is going to get sued, too!

https://www.nature.com/articles/417216a

https://cs.nyu.edu/pipermail/fom/2002-July/005692.html


I sure hope not ;)


I'll just say it, I love Stephen Wolfram. The fact that he gets under so many people's skins, and yet keeps leading teams producing amazing stuff (Wolfram Alpha, Wolfram Language, etc.), gets an A+ in my book. Anyone who puts so much energy into this stuff, with such ego, obviously wants to contribute in a real and meaningful way, to ensure his own understanding and his legacy. A personality flaw? Sure. But we've seen bigger assholes make huge dents in the universe - physics is long overdue for someone to shake things up. From my vantage point, as a layman, theoretical physics is dead, and it's a shame. It may take someone like Wolfram to do it - it probably takes flipping the bird to peer review at this point to get widespread dissemination of radical takes on theoretical physics. And it certainly seems like a lot of radical takes will be necessary for us to actually make progress.

So, hats off. Even if this isn't the Big Idea, maybe it'll end up sparking an idea in someone else's head down the road that gets us there.


> theoretical physics is dead

I'm genuinely not sure why you believe this especially as we live in an era when theoretical physics and experimental physics are becoming so entwined and the product is a fairly successful testing of our models of physics.


It's not dead in the sense that we've done a better job of experimentally confirming things like general relativity and aspects of quantum mechanics, due to being able to accurately perform more precise observations (like with LHC and LIGO). I'd say this is a triumph of advances in experimental physics, though, rather than theoretical physics.

There doesn't seem to have been any widely accepted fundamental theoretical breakthrough like general relativity or quantum mechanics for a long time, nor has there been any widely accepted way to unify the two theories. Maybe I'm wrong, but that's the impression I and many others have.


This is pure speculation on my part, so take it with a grain of salt. But I generally don't find that fields just stop advancing or go cold. It seems to me that usually what is happening is there is a lot of work going on at a level that does not make sense to report to a lay audience, or at least the media which covers the subject does not think a lay audience could understand or would be interested in it.

Consider biology. What is the last major biology idea that made a huge public splash? DNA? The human genome sequence? But are biologists stuck the world over? Of course not; there have been hundreds if not thousands of notable biology findings in the past 20 years.

Theoretical physics probably simply appears stuck to us because, as lay people, we focus on a couple of large-scale questions that for now are probably out of reach. But we don't consider the hundreds of theoretical-physics refinements and observations that need to go into the development of the LHC or LIGO or James Webb, etc.

Anyway, that's just, like, my opinion, man.


> Consider biology. What is the last major biology idea that made a huge public splash? DNA? The human genome sequence? But are biologists stuck the world over? Of course not; there have been hundreds if not thousands of notable biology findings in the past 20 years.

Human genome editing using CRISPR-Cas9 a few years ago was pretty big tbh.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4417674


True, there probably have been some important refinements, but part of the issue is that a lot of that work has been done along very long, sometimes decades-long, paths that so far have had zero experimental validation.

Basically, people have been building giant ladders and constantly improving them, but we don't know if any of the ladders were put up against the right wall. We have no idea if these theories actually correspond to reality or not. For example, string theory and supersymmetry.

The LHC was created partly with the goal and expectation of detecting particles that would validate supersymmetry, but so far it hasn't, and many are beginning to believe the theory may be flawed or wrong. In some sense that is a bit of a theoretical advancement, but so far, supersymmetry theorists don't appear to have a popular alternative they're considering instead. If they fail to find evidence for it in the next 10 or so years, it's probably just a total dead end. And string theory has been at it for even longer.


I believe it because it seems to be the consensus I gather whenever I read articles about theoretical physics' hunt for the unified theory. I'm talking about that problem specifically. The search for the unified theory sounds basically dead, and we have a generation of physicists who have spent their careers going down dead ends. The meme seems to be for the field to move past it and focus on more "productive" areas of physics that can have practical use.

I can only gather this from 2nd hand accounts in pop-sci articles and mainstream physics books, but the lack of falsifiability and experimentalism for these things, combined with a large surface area of failed attempts, seems to have left physics in a rut.


Sure, theoretical high-energy physics (i.e. particle physics) seems to be in somewhat of a dead end when it comes to finding a theory of everything. But this is an incredibly small part of theoretical physics.


I feel the same way. His book A New Kind of Science (the first chapters, at least) has been revolutionary.

I think his work will lay down the foundations of science for centuries to come.

It is curious, though. Nearly every criticism I've read hasn't grappled with his ideas - just with the man, out of insecurity and ego.

And he’s a lovely writer. I don’t sense ego from him. Just honesty about the work he’s done and contributed. You read him and get the sense his prose is calm—like light particles flowing down a river.

His stories about the remarkable scientists he's met are some of the best writing I've ever read.


One consistent read on him is that he simply cannot recognize these political concerns. Many of us feel anxiety about the idea that we may be interpreted as stepping on people's work, or as failing to recognize their efforts and how they may have led to our own advancement. It may be that Wolfram, for whatever reason, simply doesn't have that - not out of malice, but just as a core deficiency in his ability to empathize. Any person like that would be viewed as a bull in a china shop, even if nothing they wrote ever directly attacked or diminished others - and that seems to be the case here. The problems people have with Wolfram are problems of "insult by omission" or "implicit bragging" - not about the things that are on the page, but the things that aren't. Anyone who lacks this kind of empathy would simply not write them, and would be confused as to why people react to their writing the way so many do.


His self-promotional patter is so simple and consistent you can train half-convincing Markov models on it; clearly he could intellectually understand the problem and tone it down a notch or five if he wanted to.

He doesn't, and IMO that makes him responsible.


You literally are illustrating my point. He clearly doesn't. Motives unknown. Could be a personality disorder.

In any case, rational people ought to just get over it.


No, I don't think I am.

> he simply cannot recognize these political concerns.

I'm saying that he clearly can, but he chooses not to.

If by "getting over it" you mean passing judgement and moving on to the theory itself, which is new and worth some thought & discussion, agreed 100%.


I think the reaction of people to Stephen Wolfram talking about the brilliance of his work in a very mild-mannered way says more about them than it does about him.


Revolutionary measured in how many correct predictions and Nobel prizes won?


As a foundation for computational methods, 99.9% of theoretical physics is very much alive.


The general tone of the comments here seems to be 'Stephen Wolfram is a pompous ass, so we shouldn't listen to anything he has to say'. After reading through the post I think that, yes, he probably is a pompous ass (you could certainly have trimmed out the paragraphs that sound like a self-congratulatory auto-biography without losing much), but I really hope people don't just ignore this because of that. There are some legitimately interesting things going on here, and I hope other, more traditional, rigorous, and less hype-prone scientific minds are willing to dive into it to see where it leads. If it'll actually be a unification of relativity and quantum mechanics, who knows, but there's definitely _something_ here, and I'd hate for it to just get ignored because of distaste for the man. There are some actual scientific predictions here - e.g. that there is a maximum speed at which quantum entanglements can happen, analogous to the speed of light in relativity - which I hope people start thinking about ways of designing experiments for/falsifying.


Thanks for your encouraging comments! There is already a proposed observational test for the maximum entanglement rate hypothesis (based on the location of the stretched horizon in the context of black hole physics), proposed here: https://www.wolframcloud.com/obj/wolframphysics/Documents/so...


I'm going to try to merge the other thread (https://news.ycombinator.com/item?id=22867707) hither. Please stand by. Edit: that's done. I adopted the other title, in the hope that it will have less of a flamebait effect.

All: no more Wolfram Derangement Syndrome comments, please. They're off topic because they're always the same, and they compose into a weird counterversion of the very thing they're deriding.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


There's a pernicious effect going on, though. You can see on the Twitter threads about this that people are asking physics professors what they think -- and they're all weaselling out, saying stuff like "No comment." Why? Because from decades of experience they know that nothing is going to stop the hype train: at best they'll be ignored and at worst they'll be tarred in some articles as hidebound reactionaries.

But if the real experts can't be bothered to say anything, then the only criticism will be from laypeople, and they can only argue based on Wolfram's personality and record -- in other words, Wolfram Derangement Syndrome. If you discourage this, we are left with no criticism at all!

(In reality the situation isn't totally lost, but only because there are "half-experts", i.e. grad students like me, who know just enough to criticize substantively, and just little enough to waste time doing it.)


> There's a pernicious effect going on, though. You can see on the Twitter threads about this that people are asking physics professors what they think -- and they're all weaselling out, saying stuff like "No comment."

I am quite confident their criticism will eventually come. "No comment" doesn't mean "I will never comment on this." Criticisms take time to formulate, you'll need to be a bit more patient.


You're welcome to criticize substantively. I took a quick look at your comments and didn't see any WDS. Maybe just a little.


Could you kindly consider a less loaded term than WDS? TDS is often used to gaslight and / or too easily dismiss people who have valid concerns about the world's most powerful leader. And surely in the both of these cases it is an exaggeration. The people who lash out at Wolfram as a person are extremely biased and not objective. That does not make them deranged or mentally ill. Perhaps the exaggeration is an indication of your own frustration around the issue? I would suggest that more neutral language in discouraging these types of comments might better achieve the end goal.


*DS as a trope long predates T. I'm using it in a lighthearted way that isn't malicious and certainly not literal. It's obviously not a claim that anyone is mentally ill. That's why the exaggeration is deliberately so silly.

I get your point, though, since as I'm always telling commenters, intent doesn't communicate itself on the internet and the burden is on the commenter to disambiguate (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...). In a case like this, though, I'm not sure that changing it is the best thing to do, because there's also a cost to the precedent that sets.


I have no stake in this at all (I literally just learned what "Wolfram Derangement Syndrome" was by reading the comments that you linked to). It looks like you've said several times in the past that the people talking about Wolfram's self-obsession are practicing a "counterversion" of the same thing, but you've never explained why or how; to me, it seems obvious that being obsessed with another person who you don't personally know, like Wolfram, is the exact opposite of being obsessed with yourself.


I'm talking about a collective phenomenon that accrues out of many individual contributions. Sometimes this is metaphorically called the hivemind.

What builds up in these threads is an obsession with Wolfram that is even more Wolfram-obsessed than the Wolfram-obsession Wolfram is charged with. This is unmistakeable, and always happens the same way. Out of the material of these comments, a Golem arises that is a variation of the same monster. Arguably even more monstrous: at least Wolfram writes about things other than Wolfram, and very interestingly—just in a way that would take a chemist to separate from its packaging.

I wouldn't get too misled by the polarity being opposite (yay Wolfram-me, boo Wolfram-him). That's only one bit of difference, similar to the negative of a photo. The interesting thing is, what does this arch-enemy that springs out of the internet aether want to talk about? Wolfram wolfram wolfram. And what does it hate the most? How fixated on Wolfram he is.

What makes this fascinating is that in its trivial, internet-in-a-teapot way, it is an example of how we collectively create our own demons. Nietzsche famously said about monsters: be careful when fighting them, lest you become one. Other writers have said: whatever you fight, you create; and so on. That is a profound question. The fact that this is a small and somewhat embarrassing case makes the dynamic easier to see. It's as if it had been isolated in a laboratory for the purposes of studying it—a simple example for early in the textbook.

I know it feels like being obsessed with another is the exact opposite of being obsessed with yourself, but that is how self-deception works: we dress something up as its opposite, so we can have it without seeing it. The gateway into seeing it is to look at how it stirs up your emotions. Why do we create such an intense, and intensely personal, picture of someone whom we don't (as you point out) personally know? Where is that activation coming from? From inside us. $monster just furnishes material.


You're in a position to have a different perspective of course, but that doesn't track with what I've seen. It doesn't seem like anyone has been obsessed with him, in this thread or in ones I've seen before here. Seems like people just point out he's a grandiose jerk when he comes out with grandiose, non-peer-reviewed scientific theories. The depth of their emotional involvement doesn't appear to go much deeper than that, it just comes up commonly because (a) it's commonly known but (b) Wolfram has enough clout to semi-commonly publish stuff that trends on HN.


The phenomenon is most concentrated before we get in and moderate the threads. Maybe that puts my argument in the oxymoronic position of getting weaker the more true it is (since the truer it is, the more moderation is needed).


You seem very interested in this WDS concept. In fact, I would say it is interesting. But I think citing "please reply to the argument instead of calling names" will be less confusing. Here, calling names is short-hand for criticising/analysing the individual instead of the argument.

To my eyes, the collectively specific hn analysis of Wolfram may be interesting, but it is not relevant to moderation of the content. At the same time, succeeding at moderating away the content means that we can't see it, so it can't actually be interesting or even visible to anyone but you.


I'm happy to answer but don't understand. I'm confused about which specific things fall in which buckets you use the words 'interesting', 'the argument', 'moderation', and 'the content'.

I do sometimes post out of personal interest or for fun—not often, but it's an old habit. My comment upthread was definitely one of those. If it were just moderation I would have said something more procedural.


It's all very meta but this is possibly one of the best comments I've read on HN.

Had a bit of an aha moment.


I really have no idea what large parts of this comment are supposed to mean. Regardless:

What is the "monster" in this case? Obsession? That's just a feeling; in a vacuum, there's nothing wrong with it. No philosopher, Nietzsche included, would say "obsession" is always bad. If stuff like that could be dealt with in absolutes, then people like Nietzsche wouldn't have spent their lives writing books about the nuance of it. Clearly, for any feeling or action, "obsession" included, the actor and the object both matter. There is a qualitative difference between self-obsession and obsession with someone else.

I fail to see how this is an example of how we "collectively create our own demons." That's a cute turn of phrase, sure, but in its original usage, "demons" refers to one's own insecurities and internal emotional challenges, not every bad thing that exists. Clearly, if you think about it with some nuance, some "demons" exist outside of oneself. Nobody believes that people trapped in poverty have "collectively created" the demon of inequality. Nobody believes that victims of violence "collectively created" the trauma they experienced. In reality, some demons simply exist, at no fault of one's own; the people complaining about Wolfram didn't collectively invent him. He exists, he's writing and funding these huge research projects, and that's a material fact external to the people complaining about him.

As for your last point: yes, technically, everything we think and feel comes from inside us. But that doesn't make everything we think and feel the same.

Just btw, I think that it's generally cringey to vaguely quote Nietzsche (or any philosopher) to defend a point, unless you are a philosopher writing to other philosophers, or you're citing some unique idea of his. And I hate this whole style of writing, to be honest, where one papers over any nuanced interrogation of their beliefs with shoddy references to pop-philosophy.


Surely there are other people this applies to much more commonly on HN? Musk and Jobs jump to mind.


The community is a lot more divided about those two, making things more complicated.


Yeah, I am not seeing the mirroring claimed. Salesman projects a reality distortion field. Potential buyers end up complaining about the RDF during the latest product advertisement. It is clearly a reaction, and in that sense a "mirror", but they aren't copying the behavior by projecting a reality distortion field themselves.


As a layman bystander walking by: What makes Stephen Wolfram controversial?


I used to work at Wolfram Research and directly interacted with SW on a project he took personal interest in, so I feel a bit more qualified than most to answer this. Stephen is without doubt a brilliant mind, and the way he thinks, his attention to detail, and his broad vision of computing are unique in their depth and philosophical impact.

However, his seminal work, A New Kind of Science, is frequently critiqued as unscientific in its exposition of its core theories. Other scientific works of SW are similarly criticized for bypassing scientific peer review and offering little substantive theory. I won't comment further on this so you can form your own opinion.

Here are a collection of reviews of NKS: http://shell.cas.usf.edu/~wclark/ANKOS_reviews.html

Here is my favorite review for its balance and lack of cynicism: https://www.kurzweilai.net/reflections-on-stephen-wolfram-s-...

At the end of the day, say what you want about his theories - he is a wonderful, nurturing man who set me on the course to what I am doing today. He gave me my first programming job and allocated valuable time of the brilliant minds at Wolfram Research toward making even the interns productive and excited to work there. His son Christopher is a mirror of his brilliance and will certainly be making significant contributions to science in his own time.

5 years after I left WR, I received his latest book in the mail with a simple handwritten note - "Is this relevant to what you're working on?". It was. I cherish the man and his mind, despite any criticism directed at his work.


As the "Wolfram Derangement Syndrome" alludes, he has a very unique style, not unlike our President. And as in that case, it's probably a trap to critique that style and better to simply critique the ideas.


> no more Wolfram Derangement Syndrome comments, please. They're off topic because they're always the same, and they compose into a weird counterversion of the very thing they're deriding.

If you really can't stand it then kill the thread. But Wolfram is a bad actor in a discipline that runs on reputation. He needs to be talked about.


He doesn't need to be talked about for the thousandth time in the way he always gets talked about—not on HN, at least, and it's easy to see why: this is a site for intellectual curiosity. Curiosity withers under repetition and fries under indignation. What happens to it under the combo? Exercise for the reader.

This is one of those cases where it's super helpful to have a single thing you're optimizing for, and to know what it is: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... It helps with quickly answering questions that might otherwise be conundrums. A previous example: https://news.ycombinator.com/item?id=20186280


An environment full of misleading information doesn't satisfy intellectual curiosity. It penalizes it. And a one-sided discussion about Stephen Wolfram is highly misleading -- even worse than no discussion at all. So I stand by what I said earlier. If you can't stand to hear the case against Stephen Wolfram then kill this thread.


The discussion isn't one-sided in the way you mean. Take out the dreck and it's still highly critical. For example, the top comment (not counting my moderation alert) begins "I don't see anything of substance here".


I agree with this. These theories would not get any traction whatsoever were it not for the entirely-physics-unrelated fame of their author.

It's important that this is pointed out: not everybody reading them will be aware, and some may invest time they wouldn't grant to the hundreds of other people promoting their pet Theories of Everything without the benefit of credibility with programmers and startup followers.


Agreed. Not talking about it sends a very clear message of implicit endorsement and agreement.

In contrast, talking about it teaches those who haven't encountered him before how he works. Without that knowledge, one is liable to either get swept up in his aggressive, self-centered reality distortion field or reject him outright as blogspam. Neither response is appropriate. He deserves positive attention for his ideas and negative attention for the reality distortion field, and these discussions serve that purpose.


>He deserves positive attention for his ideas and negative attention for the reality distortion field, and these discussions serve that purpose.

Hear, hear!


If I have to read through so much article context disguised as bragging, without anything resembling something new, then maybe this Fundamental Theory is less important than Stephen Wolfram showing how many Important Things he's done.


Oh, you don't remember how, when Einstein published his papers, he vamped for almost all of it like a mommy blogger or a recipe spam site?


IIRC, all of Einstein's papers were just a not-so-subtle advertisement for gaia.com


That is the reason we love Stephen Wolfram.


We still don't know what will achieve self-awareness first: Mathematica or Stephen Wolfram


Stephen Wolfram: his physics might be completely unproven, but his blog posts stand as conclusive evidence that shameless self-promotion has no upper bound.


WOLFRM > TREE ?


One of the important things he’s done was ruin math for me, which was a shock because I did not think that was even possible.

His company was running accelerated calculus experiments at universities built around Mathematica. There’s a reason your teacher doesn’t want you using a graphing calculator for your algebra homework. I know that now.

Ever since, I've been carrying around a block of salt for Mr Wolfram and getting a lot of schadenfreude out of him showing what a mad scientist he is.

Does anyone remember that self-obsessed book he wrote about fifteen years back? Hell, I can't even remember the title. Sure changed the world, didn't it.


Through the grapevine I've heard this is "very much like Stephen Wolfram".


Relevant: http://bactra.org/reviews/wolfram/

"A New Kind of Science: A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity" (2002)


Ah, this line I think summarises everything perfectly:

>As the saying goes, there is much here that is new and true, but what is true is not new, and what is new is not true; and some of it is even old and false, or at least utterly unsupported.


Stephen Wolfram is well known in the industry as being a pompous, self-indulgent man. There was an old joke I heard which went:

> Q: How do you know if Stephen Wolfram invented something?

> A: He'll tell you.

I saw a TED talk by Conrad Wolfram about using Mathematica as an educational tool [1]. I went to one of my tutorials and tried to discuss it with the professor (we were encouraged to bring up basically anything we'd seen or encountered in our first few weeks of uni). His response was "I don't think we need to worry about what Conrad Wolfram is saying."

By all accounts, Stephen and Conrad Wolfram are clever men. Mathematica is, without a doubt, an extraordinary piece of software. Many people here will have benefitted from Wolfram Alpha and will have used Mathematica either in an academic context or in an engineering context.

Stephen Wolfram, however, is not Christ, even though he likes to think so.

[1] https://www.youtube.com/watch?v=60OVlfAUPJg


That may be true, but science is full of pompous, self-indulgent men who make brilliant discoveries. Of course, science is also littered with cranks who make grandiose claims of discovery - especially in physics.

Curious to hear what people who actually study this stuff think about this supposed discovery.


The thing is, Wolfram earned his stripes as a very competent physics prodigy, and is clearly very, very smart. But as far as I can tell, no one studies this stuff except him and his acolytes, and I think few established physicists would be inclined to, so it's very difficult to evaluate his ideas.

Is he a crank? For a classic definition of crank, no. Does this work represent a clearer understanding of physics than we had prior, or is it an interesting diversion? Harder question, but one has to assume, probably diversion.

On the other hand, if time travel or teleportation IS ever invented, my money is on it being Wolfram.


For what it's worth, he said during his livestream that he thinks time travel would be impossible under his worldview.


He was smart enough to package up a Lisp computer algebra system (CAS) and sell it all over the place.


I'm a big fan of Gerard 't Hooft's Cellular Automaton Interpretation. Most of the things described by Wolfram were already there; in fact, I would argue that most of the things with a strict connection to real physics were already described by 't Hooft in a similar manner (the energy and momentum interpretations are very similar).

However, they are different in that 't Hooft has a single ontic state, where Wolfram has branching from nondeterministic applications of rules. I'm not sure how Wolfram can derive unitary evolution.

The graph rewriting is cool and novel. I do agree that cellular automata are somewhat limited. Their most obvious limitation is having Manhattan distance: 4 or 8 directions to move through at the lowest level is something that affects the large scale. But most of the models Wolfram proposes still produce some grid-like structure, and in some sense his visualization is misleading.
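
To make the Manhattan-distance point concrete, here's a toy sketch (mine, not from either 't Hooft's or Wolfram's work): the set of cells a signal can reach in n update steps is a diamond under a 4-neighbor (von Neumann) rule and a square under an 8-neighbor (Moore) rule. Neither approaches the circle you'd want for an isotropic speed limit, which is exactly the large-scale artifact the grid leaves behind.

    # Toy illustration: how far can information spread in n CA steps?
    def reachable(steps, moore=False):
        """Cells reachable from the origin in `steps` updates."""
        deltas = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # von Neumann
        if moore:
            deltas += [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # Moore
        frontier = {(0, 0)}
        for _ in range(steps):
            frontier |= {(x + dx, y + dy)
                         for (x, y) in frontier for (dx, dy) in deltas}
        return frontier

    n = 10
    print(len(reachable(n)))              # 221 cells: Manhattan ball |x|+|y| <= 10, a diamond
    print(len(reachable(n, moore=True)))  # 441 cells: Chebyshev ball max(|x|,|y|) <= 10, a square
    # An isotropic disc of radius 10 would contain ~314 cells; the CA gives
    # a diamond or a square instead, regardless of the rule's details.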


>where Wolfram has branching from nondeterministic applications of rules. I'm not sure how Wolfram can derive unitary evolution.

I don't think it's right to call it nondeterministic: every possible sequence of rules happens, separately. It seems to lend itself easily to the Many-Worlds Interpretation of QM.
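
For anyone who hasn't seen a multiway system, a minimal sketch with made-up toy rules (not Wolfram's actual models): wherever a rewrite rule matches, a separate successor state is produced, so the evolution is a branching set of states rather than a single trajectory.

    # Multiway string rewriting: keep EVERY result of every applicable
    # rewrite, so states branch rather than evolve along a single path.
    RULES = [("A", "AB"), ("B", "A")]  # hypothetical toy rules

    def successors(state):
        """All states reachable by applying one rule at one match position."""
        out = set()
        for lhs, rhs in RULES:
            i = state.find(lhs)
            while i != -1:
                out.add(state[:i] + rhs + state[i + len(lhs):])
                i = state.find(lhs, i + 1)
        return out

    generation = {"A"}
    for step in range(4):
        generation = set().union(*(successors(s) for s in generation))
        print(step + 1, sorted(generation))

Running it, distinct branches keep reconverging on shared states ("ABB" and "AA" both lead to "AAB" and "ABA"), which is the flavor of the causal-invariance point made in the reply below.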


I don't know if that's true, because I think the idea of a rule having causal invariance is that the many paths converge anyway, into causal sequences of events that always hold.


I think this has a better chance of succeeding than Hacker News having a Wolfram-related thread that isn't 50% slagging the guy.


It's super sad. I don't know much about Physics at all. But I suspect any huge breakthrough will come when 1000s of people, like Wolfram, spend their life's effort to solve an incredibly hard problem. Almost all will fail, but some will succeed, and at the start it won't be obvious who is on the right path vs. who is a 'crank.'

Imagine just sitting at home, constantly talking shit and disparaging someone who has dedicated themselves to solving an almost intractably hard problem. And I say that as someone with no particular interest or liking of Wolfram.


Well, here I am talking shit about Wolfram, but I'm a physicist. There are thousands of us spending our lives on incredibly hard problems. The only difference between Wolfram and Weinstein and the rest of us is that they're declaring victory in an incredibly premature, flashy way. When the rest of us have an idea, we try hard to prove it wrong (which is fundamental to how science works), not search for a lay audience to promote it to.


Fair enough. I'm not a physicist, so I'll defer to you on this. Do you think Wolfram's strange style is worth trying, as a moon-shot? (Ignoring for a second his personality).


Physics research being the way it is, everything is worth trying if somebody really believes in it, because everything is a moonshot. I'm personally working on one right now, so I'm too busy to work on Wolfram's, but more power to him if he wants to continue!

My personal intuition is that a new language is only useful if it has enough "meat" to constrain things. For example, most physicists know almost nothing about logic, because it seems so far upstream of everything else that changes in it have no effect. (Indeed, logic has changed a lot in the past 100 years, and nothing happened to us!) But almost everybody in physics agrees that the language of differential forms is awesome, because with some minimal assumptions, they say that there's essentially one way to write down the theory of electromagnetism -- and it turns out to be the right way. Similarly, it looks like there's little promise in applying computability theory to physics, because it is grounded in what happens "at infinity" (and hence does not lend itself to predicting anything in our inherently finite experiments), but real promise in applying information theory and computational complexity theory, which can tell us about asymptotics. That's why my first reaction was that Wolfram's exceedingly general language wouldn't be very helpful.
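
A footnote for readers who haven't seen that example: with F the electromagnetic field-strength 2-form, J the current 3-form, and * the Hodge star, all four of Maxwell's equations compress to

    dF = 0        (Gauss's law for magnetism + Faraday's law)
    d*F = J       (Gauss's law + Ampere-Maxwell)

and the first one holds automatically as soon as you write F = dA. That's the kind of "meat" a well-chosen language buys you.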

However, attitudes can and have changed. If Wolfram and co. come up with a sharp success, where they derive something important without directly putting what they want to get into their starting assumptions, people will pay attention. That's precisely why, e.g., special and general relativity, quantum field theory, and string theory are so important today. They all started this way.


Is it down to 50%? That's remarkable.


I get downvoted whenever I make negative comments about cellular automata. It never fails. It’s the crowd.


It would be nice if, instead of a press release and a long diatribe with fancy graphics, he took the boring but hard step of publishing a paper that advances the field and earning accolades from fellow physicists who agree with him.

Wolfram's press-release-focused method of advancing his "scientific advances" is so off-putting, and raises the suspicion that this is more hype than really something new.


What this project needs is someone at the center of it. A visionary. A leader. A genius. A prolix author.

Can anyone think of someone???


I don't know who that should be, but I know who it shouldn't.


Eric Weinstein?


I'd like to see that. Although honestly I find him and his brother to both be frustratingly vague and abstract at times.


At times, Eric has quite the way with words; yet on Lex Fridman's podcast, he really struggled to communicate Geometric Unity in a way that's even slightly accessible to a layperson, despite Lex's patient prodding: https://www.youtube.com/watch?v=rIAZJNe7YtE

I actually find his brother Bret to be the opposite in his manner, far more measured and comprehensible: though he'll occasionally forget that you might not know a particular term of art (telomeres, extended phenotype), his ideas are more narrowly scoped to genetic selection pressures and game theory, and a little easier to parse. (His experience is primarily as an educator, which Eric's is not.) Bret and his wife have been doing an educational series on COVID-19, which is quite accessible: https://www.youtube.com/watch?v=l-W9O7qhstY&list=PLjQ2gC-5yH...


I love listening to Heather and Bret when they talk about biology.

What irks me most is when Bret starts talking about how our society isn't sustainable and that we need to rethink society by incorporating insights from game theory and evolutionary biology. On that score, he is frustratingly vague. I don't even think I would likely agree with him, but without something more concrete there's nothing there for me to analyze. Perhaps I just haven't found the right podcast or whatever where he goes into exactly what he means by that.


Totally understood. I don't think Bret claims to have a good answer to the problem. I think his thesis is that we're in such uncharted territory, facing multiple existential crises, that none of the thought technologies that helped us up to this point (religion, democracy, capitalism, even scientific materialism) are necessarily going to suffice for us to survive/thrive into the 21st century.

The common-ancestor driving force behind those social technologies is genetic selection pressure, which he argues we should consciously and intentionally counteract. One might fairly claim that that's a quixotic pipe dream, given that our nature has been shaped by that force for a billion years; but the highly plausible opposing argument is that the alternative is extinction (if not of the species, then perhaps of civilization).

For more proactive ideas on constructing a social operating system for the 21st century and beyond, take a look at the "Game B" model from Jim Rutt, Jordan Hall, and many others:

https://medium.com/@memetic007/a-journey-to-gameb-4fb13772bc...

https://medium.com/@jordangreenhall


Thanks!


Bret is much more helpful to his listeners than his brother, imo.


Surely you mean Eric Weisstein


There can be only one...


What Wolfram is saying, and has always said, is vacuous in the extreme. There is more new science in 't Hooft's book about Quantum Mechanics as Cellular Automata than in all of A New Kind of Science.

His stuff just amounts to large claims and a lot of fiddling around. It's certainly possible that something could come out of it (I mean, the idea "physics is just some rules" is vacuously true), but I don't see in any of the work surrounding his ideas any true attempt to get to the bottom of things by trying very hard to understand basic ideas like locality and unitarity and how they must be true or may be violated by physical models which are deeper than the ones we have.

If you want to see that kind of actual hard scientific work, see Nima Arkani-Hamed or Gerard 't Hooft.

I, personally, was a bad researcher, which is why I couldn't cut it as an academic. What made me bad was that I got lost in fiddling around rather than trying to hit the most incisive questions in the most useful ways. I see a lot of that in Wolfram.


Given his passion for cellular automata and the timing of his post, it would have been nice if Wolfram had at least mentioned the late great Conway and his recent passing.


I can't tell if this is real or the output of a neural network trained on Mr Wolfram's corpus of bloviation.


Wow, this is... the worst. This is the most egregious clickbait I've seen on HN in a long time.


That's Stephen Wolfram for you.

He's genuinely smart and genuinely looking into an underexplored branch of theory with tons of upside potential, but he also does... this sort of thing, all the time.


TLDR: "if you stare at these weird screensavers long enough, it kinda looks like physics. And if you are rich enough to hire a PR team to push the idea, lots of credulous people might agree with you."

For an example of similar quackery from quantitative finance, Espen Haug's ideas are funner:

https://www.researchgate.net/profile/Espen_Haug


"A New Kind of Physics"?

> And for example my book A New Kind of Science is about this whole phenomenon and why it’s so important for science and beyond.

Right...


Could somebody explain to Stephen Wolfram that physics is already underway, and that the normal way to join in is to publish your ideas in appropriate venues and submit them to peer review?


Nah. It's good that folks like Wolfram and Eric Weinstein are taking a different path. Sometimes that's what it takes for new breakthroughs to emerge. If nothing else, it could spark new ideas or invigorate others to take bolder approaches and move outside their comfort zone.


What breakthrough for Eric Weinstein? So far he's just been doing this grandiose marketing push where he talks in dense, comically impenetrable jargon TO LAYMEN (e.g. the Joe Rogan show) and releases bizarre rambling videos on YouTube that attempt to tie together politics, history, his beef with academia, and his "theory of everything".

It smells like bullshit, but people seem to gobble it up and call him a genius. For what? He hasn't published; there's no complete explanation, just endless front matter for the never-completely-explained "Geometric Unity". If you've ever heard of the internet scammer Tai Lopez... that's what this guy sounds like, but for physics instead of self-help.

At least Wolfram has something to show.


Eric says his theory is a work in progress, may be wrong, and may even be fool's gold (his words). That's a ways off from claiming a breakthrough. And I don't see him selling magic pills or some get-rich-quick scheme.

It seems to me he's just exploring ideas in the open. That's great. I don't see the problem.


But he's not really exploring the ideas "in the open".

He is making REALLY HUGE claims in advance of releasing the actual products of his supposed 3 decades of work. He is making these claims in popular media and creating a buzz first. I suppose that's his prerogative, but it doesn't seem like science to me. The scientists I admire most take ENORMOUS effort to communicate clearly, adjust to their audience, focus on making compelling arguments, and above all "show their work". This guy does the exact opposite.

One other thing he keeps on citing is this bizarre quote from Dirac: > "... it is more important to have beauty in one's equations than to have them fit experiment...".

That's perfectly fine, for a while, but it doesn't BECOME PHYSICS until it agrees with experiment at a minimum.


> Folks like Wolfram and Weinstein

Millionaires with connections.

Let's not pretend there is necessarily any more merit to their theories than to those of any unconnected obscure crank or other intelligent outsider.

They aren't getting publicity on the strength of their ideas (even if there is something to them).


What path is Eric Weinstein taking? I heard about him for the first time yesterday, when a section of his appearance on the Joe Rogan podcast was linked, where he talked much too confidently about a subject he probably shouldn't have.


Weinstein rented a room in an Oxbridge college to lecture on his theory of everything.

His recent campaign was on behalf of his brother, who he feels was badly mistreated by a Nobel laureate who, he alleges, failed to credit him for important related work.

You can hear the latter case on a recent edition of his podcast, The Portal.


> theory of everything

I read a book with this title once, full of formulas. I do not remember who the author was, and the book (or the math in it) did not make sense to me.


Which subject?


He claimed that "all our laboratory mice are broken".

Video links and discussion: https://www.reddit.com/r/longevity/comments/g0ioxq/if_it_is_...


He's speaking on behalf of his brother Bret Weinstein, who is qualified to say such things.


There are still so many things wrong in how he tells the story that it doesn't pass a basic smell test for me.

- He omits that this study was from 2000, so quite some time ago. Given that the paper about it has 200+ citations, I would say that it got the eyeballs it deserved. He doesn't offer any recent evidence indicating that this is still the case today.

- The core claim he makes "all mice are broken" is a hyperbole that a lot of people will tell you is unsubstantiated. He makes it sound like all tests done on mice are flawed because of it.

- "I called them up and they were not able to produce any records that they changed their breeding protocols" doesn't mean that they didn't change them in the 20 years that passed. It just means that they didn't produce them to some random person contacting them.

I'm not as "surprised nobody is breaking this story" as he is.


>The core claim he makes "all mice are broken" is a hyperbole that a lot of people will tell you is unsubstantiated. He makes it sound like all tests done on mice are flawed because of it.

It isn't hyperbole, if Bret's claims are substantiated. The claims being that most lab mice are from the Jackson lab, that all of their mice have extremely long telomeres compared to wild mice, and that this could have serious implications for the potential harm to humans from some drugs - even while also accounting for the fact that mice are already not a good model of humans and that the same drugs are later tested on humans. (I could elaborate on this, but it's covered thoroughly in the paper and podcast.)

I agree Eric hasn't satisfactorily proved they didn't change the protocols, but if it is the case that all of their mice still have long telomeres, and if all of the other claims are true (most US researchers still using those mice + the other claims), then the claims "most mice used by US researchers are broken" and "most tests done on mice by US researchers are flawed because of it" are likely also true.

I have no idea if the above claims are true, of course, but they should be investigated on their merits.

Bret Weinstein's 2002 paper: https://www.ncbi.nlm.nih.gov/pubmed/11909679

The podcast between Eric and Bret Weinstein covering the full story: https://www.youtube.com/watch?v=JLb5hZLw44s

Not sure if you already listened to the podcast, but if you haven't, I'd recommend listening to all of it.


Well, his reasoning is sound enough: if his results are truly spectacular, it probably won't matter that they weren't published in peer-reviewed venues.

At the same time, turning boring results into something fit for publishing can be very time-consuming (more than the research itself), and comes at the opportunity cost of attacking juicier problems.

So if your goal is to utterly revolutionize a field, and you are very confident in your abilities to do so, it’s not the worst strategy in the world to just chase the ideas you’re most excited about and forgo the traditional bureaucratic paths.


I haven't been deeply plugged into Wolfram's breakthroughs since "A New Kind Of Science," but if memory serves I don't think he's discovered anything that offers insight in the physics world in the utility sense. By which I mean: progress in physics usually looks like finding a novel relationship between previously-thought-separate phenomena, an improvement on the mechanics of using an existing theory for predictive power, or a conceptual simplification of existing theory (i.e. a new theory that completely explains the known observable phenomena of the previous theory while shaking out Russell's teapots the previous theory held).

I don't think Wolfram's work in the physics space on the cellular automata model has met any of these criteria, so it's not surprising he's not getting mainstream publication traction. His CA model isn't bringing us novel relationships (yet; I hold out hope here), it isn't making it easier to do what we already do, it hasn't yielded any predictions that existing theories don't also yield, and it's certainly not simpler than field theory and relationship formulae that don't assume a CA. Until there's a breakthrough in one of those three categories that can't be done easily with one of the traditional mechanisms, Wolfram's doing the scientific equivalent of translating lingua franca physics into Klingon; not really wasted effort in that it can be valuable to see something from more than one point of view, but not something that anyone's gonna pay him for or devote their precious column-inches to.


I’ll let the next few hundred years judge how relevant or not his contribution to physics was. I don’t think anyone writing in 2020 has much sense of that.



Although it's also worth noting that that page states:

>Note: Since 1987, Stephen Wolfram's intellectual efforts have not primarily been reported in academic articles.


This is true, but Wolfram was certainly a "normal" professional physicist for years before striking out on his own.


It's been waiting for a genius, a visionary, a maverick, ...

Wolfram's narcissism is so extreme it's just comical. When I was really into complexity science and stuff, everyone in the field would joke about it, suggesting a drinking game where you take a shot every time Wolfram refers to how he was a gifted child, or started studying math in his teens, or single-handedly accomplished some amazing feat. The guy toots his own horn like a circus clown.

It's too bad because he actually does have some very interesting ideas and a good skill at explaining things. His narcissism is self-defeating. He needs to just chill out, write like a normal human being, and publish. Maybe he should take up amateur astronomy. Nothing gives perspective more than carefully observing photons older than the human species with your own eyes.

I sometimes recommend A New Kind of Science with a huge caveat. It's a decent compendium of fascinating cellular automata, complexity, and theoretical CS research with some interesting speculation attached, but (1) Wolfram did not invent all this or do all this work himself, and (2) he takes his speculation too far to the point of flirting with crackpottery. If you can read NKS with those caveats in mind, it's worth at least skimming and taking in the interesting bits.


In university, a professor explained how Wolfram formulated physics in the 80s through finite automata instead of mathematics; that must be the NKS. Is it actually (at least theoretically ;)) possible to calculate something with it, so that the results match, for instance, Newtonian physics?


It's "just" a different way of formulating mathematics, but sometimes formulating things differently can yield insights. At least attempting physics from this angle is a project with merit, especially since it seems as if conventional mathematical theoretical physics is a little stuck lately.

The problem with the CA-based physics approaches is that while you can create CA models that work and are predictive in ways that mimic conventional mathematical models, so far the approach has failed to produce a compelling model with testable predictions that is unique to the approach.

In other words, there's no evidence (yet?) that this approach is better than conventional math, and a strong contrary argument in favor of math can be made from the angle that math maps more clearly to the conceptual space. It's possible to read an equation in terms of the concepts it models, assign units to variables that refer to specific forces or properties, etc. I'm not aware of a way to do that with CA. What you get there are weird rules that manifest something that seems to fit existing mathematical models or concrete observations, but those weird rules are "opaque." It's a relative of the opaqueness problem in AI / machine learning.

Lastly, though, I would add that the fact that you can create compelling CA models of physics at all does perhaps suggest something about reality. Maybe it suggests that reality is, or is similar to, a CA system under the hood. The fact that "quantum" comes from "quanta" and "quantized" suggests that things are in fact discrete at some level.
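
For a concrete toy example of that mimicking (my example, standard in the traffic-modeling literature rather than drawn from NKS specifically): elementary CA rule 184 reproduces the basic phenomenology of single-lane traffic, with free flow at low density and jams that drift backwards at high density. Yet nothing in its rule table is labeled "velocity" or "density" the way variables in a differential equation would be; that's the opaqueness in miniature.

    # Rule 184 as a toy traffic model: a 1 is a car, which moves right iff
    # the cell ahead is empty. The whole "model" is an 8-entry lookup table.
    import random

    RULE = 184
    TABLE = {tuple(int(b) for b in f"{p:03b}"): (RULE >> p) & 1
             for p in range(8)}  # key: (left, center, right) neighborhood

    def step(cells):
        n = len(cells)
        return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    road = [1 if random.random() < 0.5 else 0 for _ in range(60)]
    for _ in range(20):
        print("".join(".#"[c] for c in road))  # watch jams propagate backwards
        road = step(road)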


There is an argument, and I think Wolfram has made it, that one would expect the systems and the math to become simpler the deeper and more fundamental the models became, and the opposite seems to be true with modern physics.

When you say conventional math, you're referring to math that would be quite difficult, let's say, for even a dedicated undergraduate math, physics, or electrical engineering major. The CA, on the other hand, describes a process an interested child can comprehend.


> one would expect the systems and the math to become simpler the deeper and more fundamental the models became

The notion that there exists a single expression, CA rule, or other fundamental truth at the root of the physical universe is an arbitrary assumption. It may or may not be true. We should go where the evidence leads us.

In the end we should have a model that is no more and no less complex than what is needed to model observed reality.


But it's only easier to comprehend because it's vaguer. If you wanted to express specific, nontrivial mathematical statements in CA language, it would immediately get more complicated. Certainly, a paragraph about Euclid's Elements would be easier to comprehend than a paragraph from the Elements, but that's not a fair comparison...


But my point is: if the CA and the mathematical expression show the same things, then you could say the mathematics is simply a roundabout and confusing way to characterize the CA.

As a thought experiment (and a timely tribute): how would you mathematically characterize and/or describe the ongoing evolution of an instance of Conway's Game of Life if you didn't have a grid? Or even better, if you didn't have the grid and could only see things at a resolution of neighborhoods of 10-20 cells? You might be able to come up with some crazy complex math which does it. But if you knew about the grid and the underlying rules (sketched below), it would explain all of the observations perfectly AND more simply.

Don't get me wrong -- I don't think Wolfram has done that, nor do I really hold out hope that it could be true. But it's real compelling, if only it were.
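
For reference, and as its own small tribute, the "grid and underlying rules" in question really are this small. A minimal Life step in Python (assuming a toroidal grid for simplicity):

    # Conway's Game of Life: a live cell survives with 2 or 3 live
    # neighbors; a dead cell becomes live with exactly 3. That's everything.
    def life_step(grid):
        rows, cols = len(grid), len(grid[0])
        def live_neighbors(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
        return [[1 if live_neighbors(r, c) == 3
                 or (grid[r][c] and live_neighbors(r, c) == 2) else 0
                 for c in range(cols)] for r in range(rows)]

    # A glider: five cells that translate themselves diagonally forever.
    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1
    for _ in range(4):
        grid = life_step(grid)
    print("\n".join("".join(".#"[v] for v in row) for row in grid))

An observer who could only see coarse blobs would need something far messier than those two clauses to describe the glider's drift.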


Admittedly, I only skimmed NKS. That said, I don't understand how the approach is different from anything else abusing the incidental fact that something is Turing-complete to re-prove existing results.

The problem with these approaches, like trying to write a neural network in MS Excel, is that it's going to be cumbersome, slow, and provide little advantage over more straightforward calculation methods.


He certainly toots his own horn so incredibly much. I could never make it through NKS.

Also, can't we observe photons older than the human species with our naked eyes without taking up amateur astronomy?


Sure, let's force one of the most creative dudes of this century back into the rigid pen others are allowed to play in, just in case he does something crazy and interesting.

What a lame idea, truly worthy of a mandarin.


Less of the insults please.

As for "one of the most creative dudes of this century", I am puzzled as to how you reach this conclusion.

He certainly seems to be a very bright and capable person, and deserves recognition as the publisher of Mathematica, but that was released last century; what has he created since then that places him so high in your estimation?


Aside: let's put "worthy of a mandarin" away forever


I've never heard the expression before. Could someone explain its meaning please?


>Could someone explain its meaning please?

https://en.wikipedia.org/wiki/Mandarin_(bureaucrat)


An official or bureaucrat, particularly one that is reactionary and secretive.

E.g., American Power and the New Mandarins (book).

I think it’s an important and relevant term.


The guy pretty much single-handedly built a profitable company that enables him to pursue whatever research interests he wants, and that's been going for over 30 years.

They hate him cause they ain’t him.


So did a lot of other people who aren't extreme narcissists.

I think it's reasonable to find someone's work admirable and yet think they have a terrible personality. There is a line of thought where people should be judged by what they create rather than how they behave, but luckily not many subscribe to it. Not even grown-up Linus Torvalds subscribes to it.


Why is this "lucky"? It's irrational in many cases, frankly. The work survives past the person. The personality quirks of the person are parochial: they only matter insofar as they impact our ability to create knowledge. In the case of a theoretical singular genius (hypothetically; not saying Wolfram is one), personality traits are completely irrelevant to their impact, since they're not dependent upon collaboration with others. The error would be for those others to disregard their work due to their personality. The benefit would only flow in one direction in that case, and hence it'd be irrational for the consumers to reject the work.


Have there been any issues of Wolfram assaulting/harassing/abusing/etc. other people? Or are people just upset about the pompous style of his writing? Those are different kinds of "behavior".

Because while we should totally condemn the former, it seems that we're discussing the latter here, and that's hard to get riled up about.


Not that I know of, no - so I completely agree, narcissism isn’t in the same division as abuse.


Matthew Cook?


There were a ton of lawsuits, and there's a reason why the local university doesn't associate with him despite his previous tenure and employment.


Nobody hates Jim Simons.


I'm not sure how well that argument stands on its own, though. Doesn't this sound a bit like what you might read on /r/AskTrumpSupporters?


Wow, Stephen Wolfram has just rediscovered Daoism...


So basically this: https://xkcd.com/505/, huh Stephen?

Question: has anyone validated and/or disproved his geometric-rules-based approach and NKS cellular automata against traditional physics models, and found predictable conclusions that can be tested via experiment? I.e., is it really anything other than an alternative notation/representation system?

It essentially presupposes a computational canvas to the universe, but in what substrate does this computation happen?

Obviously the guy has an ego the size of a black hole (and is probably as insecure about it), since the whole website (beautifully produced, of course) has no community forum, comment section, or space for peer review of the theories he presents.

He really needs to get away from the sycophants in his company and engage with the outside world. Citing others would help. New perspectives on physics and mathematical descriptions of the universe are welcome, and I'm sure there are plenty of insights to be found. Who knows exactly, since learning and understanding his approach requires substantial effort to decode, and comes without predictions. It's sad to see how a little thing like the ego of one little man can be both a source of insight and frustration at the same time.


I haven't seen the ideas he presents over there exposed anywhere else, in either science publications, global media, or even hobbyist blogs. Maybe there's a lesson there, that some smart minds might do better when absolving themselves of the constraints of traditional science processes and publishing rhythm?

Now this guy is in a very interesting position: someone who became fixated upon an interesting (?) phenomenon at one point in his life, then spent decades studying it, wrote books and papers about it, designed a fundamental theory basically ex nihilo based on his grand idea about that field (I'm not arguing for/against the theory's veracity here), AND then built a whole toolset around it all to do research, AND wrote books about his research, AND recorded hours of videos, AND now apparently is putting it all online for people to get easy access for free.

He's thought about something, he's worked it and now he's sharing it all. So I'm kind of curious, what more do you want him to do at this point?


There's a "Discussion Forum" link on the homepage.


Hey, that's my favorite XKCD!


Nice! This will be seen in the future either as the Nobel-prize-winning first revolutionary paper on the theory of everything, or as just another marketing white paper for Mathematica. I can't tell, but it is interesting for sure!


I am not a physicist but as a layman there are two interesting things that my mind (which admittedly finds connections in things that others don't) thought of while reading the article:

1) Conway has obviously been in the news recently (RIP), and there have been quite a few articles talking about his "Game of Life". The fact that complex patterns emerge from three basic rules seems eerily similar to what Wolfram is now describing. Of course, computational power is now to the point where you can run many more games than we could the first time I entered the program into my Z80-based machine.

2) There's also been a lot of discussion the last couple years about whether we're living in a simulation. With Wolfram performing "zillions" of operations, could it be that there's actually someone living inside his computer wondering the same thing? As we progress in our computing ability, could this happen in the future? If we ourselves are living in a simulation, could those simulating us also be living in a simulation? Is it also turtles all the way up?


1) Yep. The universe is built on a few very simple principles, which can be used to implement an emulation: dimensions, energy/force, inertia/mass, time. (And infinity.)

2) Yep, it will happen in the future, but infinity is not possible in a simulation.



