Is a TL;DR available, or at least a summary of the ideas covered? Because after three paragraphs it seems like the good old "it is actually something resembling a cellular automaton" post by Wolfram.
Wolfram explains the basic concepts of neural networks rather well, I think. He trains and runs a perceptron at the beginning, and then a simpler network. He then moves on to replacing the continuous functions they are built from with discrete binary ones, and ends up with cellular automata that he thinks emulate neural networks and their training process. While this surely looks interesting, all the "insight" he obtains into the original question of how exactly networks learn is that trained networks do not seem to come up with a simple model they use to produce the output we observe, but rather find one combination of parameters in a random state space that is able to reproduce a target function. There are multiple possible solutions that work equally well, so perhaps the notion of networks generalizing training data is not quite accurate (?). Wolfram links this to "his concept" of "computational irreducibility" (which I believe is just a consequence of Turing-completeness), but does not give any novel strategies for understanding trained machine models, or for doing machine learning in any better way using discrete systems. Wolfram presents a fun but at times confusing exercise in discrete automata and unfortunately does not apply the mathematical rigor needed to draw deep conclusions about his subject.
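For a concrete picture of that first step, here is my own toy sketch in Python (not Wolfram's code, and the target function is made up): train a single perceptron, then keep only the signs of its weights, which is roughly the continuous-to-discrete move he makes.

```python
import numpy as np

# Toy illustration, assuming a made-up linearly separable target.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # assumed target function

w = np.zeros(2)
b = 0.0
for _ in range(20):                          # classic perceptron updates
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

# Discretized variant: keep only the signs of the learned weights.
w_bin = np.sign(w)
acc_cont = np.mean((X @ w + b > 0) == y)
acc_bin = np.mean((X @ w_bin > 0) == y)
print(acc_cont, acc_bin)   # both should be near 1.0 on this easy target
```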
We know there are infinitely many solutions; it is just hard to find a specific configuration of parameters. The question is whether all those configurations generalize or not.
Funny paper; I still don't know what the goal of it was. It is evident to anyone that LLMs can't perform any meaningful reasoning, so why even bother building such an infrastructure to test whether one is able to become a "scientist"?
They do a phenomenal job of guessing the next word, and our language is redundant enough that that alone, carried out recursively, can produce quite interesting results. But reasoning? I'm certain everybody has fallen into this pattern, because it happens on pretty much anything where the LLM doesn't answer right on the first shot:
---
LLM: The answer is A.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is B.
Me: That's wrong. Try again.
LLM: Oh I'm sorry, you're completely right. The answer is A.
Me: Time to short NVDA.
LLM: As an AI language learning model without real-time market data or the ability to predict future stock movements, I can't advise on whether it's an appropriate time to short NVIDIA or any other stock.
Yeah, if an LLM were truly capable of reasoning, then whenever it makes a mistake, e.g. due to randomness or due to lack of knowledge, pointing out the mistake and giving steps to correct it should result in basically a 100% success rate, since the human assisting it has effectively unlimited capacity to accommodate the LLM's weaknesses.
When you look at things like https://arxiv.org/abs/2408.06195 you notice that the number of tokens needed to solve trivial tasks is somewhat ridiculous: on the order of 300k tokens for a simple grade-school problem. That is roughly three hours at a rate of 30 tokens/s. You could fill 400 pages of a book with that many tokens.
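A quick back-of-envelope check of those figures (the words-per-token and words-per-page ratios are my own assumptions, not from the paper):

```python
tokens = 300_000
rate = 30                     # assumed decoding speed, tokens per second
hours = tokens / rate / 3600  # ≈ 2.8 hours
words = tokens * 0.75         # ~0.75 words per token, a common rule of thumb
pages = words / 500           # ~500 words per printed page, assumed
print(f"{hours:.1f} h, ~{pages:.0f} pages")   # ≈ 2.8 h, ~450 pages
```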
I think it depends on your standards. LLMs are by far the best general-purpose artificial reasoning system we've made yet, but they also aren't really very good at it, especially on more complex steps and on things that require rigor. Chain-of-thought prompting and such helps, but still: they have super-human knowledge but the reasoning skills of maybe a young child.
> super-human knowledge but the reasoning skills of maybe a young child
Super-human knowledge is certainly true (all of Wikipedia in multiple languages, at all times, quickly).
Consider, however, an important distinction: a young child is exactly not the way to think of these machines and their outputs. The implicit connection there is that there is some human-like progression to more capability, and that is not so.
Also note that "chain of reasoning" around 2019 or so, was exactly the emergent behavior that convinced many scientists that there was more going on that just a "stochastic response" machine. Some leading LLMs do have the ability to solve multi-step puzzles, against the expectations of many.
My "gut feeling" is that human intelligence is multi-layered and not understood; very flexible and connected in unexpected ways to others and the living world. These machines are not human brains at all. General Artificial Intelligence is not defined, and many have reasons to spin the topic in public media. Let's use good science skills while forming public opinion on these powerful and highly-hyped machines.
Whenever you poke people about LLMs solving decidable/computable problems, they get defensive and claim that LLMs are not good for that. You are supposed to generate code that solves the decidable problem instead, heavily implying that retrieval, approximation and translation are the only true capabilities of LLMs.
Empirically, while you wait for yet another preprint on arXiv, I use the Sonnet 3.5 API every day to reason through problems, iterate on ideas, and write highly optimised Python and things like CUDA kernels. There is some degree of branching, guiding and correction, but oh boy, there is certainly higher-order synthesis and causal reasoning going on, and it's certainly not all from me.
I don't get it; what's cool about it? Einstein notation seems good enough for most things, and they are equivalent. Is there anything interesting (i.e. new) that this notation allows?
It is much better for working with (a) low-rank approximations and (b) optimizing the order of operations. While (b) is "merely" a question of computational performance, (a) can have extremely important fundamental/theoretical consequences for the problem under study (e.g. finding efficient classical algorithms for modeling quantum mechanics).
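A small numpy sketch of point (b), my own and not from the article: the same matrix-vector chain contracted in two orders, with np.einsum asked to pick a good contraction order itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 50))
B = rng.normal(size=(50, 50))
v = rng.normal(size=50)

# Left-to-right: (A @ B) costs 1000*50*50 multiplications,
# then the result times v costs another 1000*50.
left_first = (A @ B) @ v

# Right-to-left: (B @ v) costs 50*50, then A times that costs 1000*50.
# Same result, far fewer operations.
right_first = A @ (B @ v)
assert np.allclose(left_first, right_first)

# einsum can search for a cheap contraction order automatically.
best = np.einsum("ij,jk,k->i", A, B, v, optimize="optimal")
assert np.allclose(best, right_first)
```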
It seems to try too hard to prove that things have improved, by white-lying a bit.
Some examples:
* Environment: air quality in most places has continued to improve (and considering the growing evidence on the harms of air pollution, this may well be the single most important item on this whole page), forest area has increased, and more rivers are safe to fish in - yeah, except those are only statistics for the USA, not to mention the Detroit-area water pollution situation that is still going on, or all the PFAS-related drama happening recently.
* LASIK surgery has gone from an expensive questionable novelty to a cheap, routine, safe cosmetic surgery - yeah, almost cheap, and safe, sure, but none of the people I've met say they would do it again (sample N=5), for various reasons: some had to wear glasses again after a few years, colors were less vivid, eyes were less hydrated.
* Food: it is nice that fast food isn't shitty anymore, but there are still plenty of contaminations happening (not only in the USA but all over the world). Moreover, food is becoming worse because of the loss of micronutrients (and soil degradation) as well as the increase in CO2, and agricultural output is being dampened by climate change.