Allowing a parrot to iterate on given examples and generate similar ones from the information baked into its weights does not invalidate the "Stochastic Parrot" take. On the contrary, it proves it.
LLMs are statistical machines. The catch is that you feed them hundreds of terabytes of valid information, so they asymptotically generate valid information as a result of that statistical bias.
Even then, they can hallucinate badly. I mean, the same OpenAI model claimed that I'm a footballer, a goalkeeper in fact.
It's clearly true that LLMs are 'stochastic parrots', but for all we know that might be the key to intelligence. In itself it is no deeper an observation than calling your fellow humans 'microbial meatbags'.
Saying that LLMs are stochastic machines does not establish an upper bound on their success.
The thing is, the assumption that LLMs might be intelligent rests on the assumption that intelligence is enabled solely by the brain.
However, as the science improves, we understand more and more that the brain is just part of a much bigger network, and that its size or surface roughness might not be the only things that determine the level of intelligence.
Also, all living things have processes which allow constant input from their surroundings, and they have closed feedback loops which constantly change and tweak things. Call these hormones, emotions, self-reflection or whatnot.
We scientists love to play god with the information we have at hand, yet we are constantly humbled by nature when we experience the shallowness of what we know. Because of that, I, as a CS Ph.D., am not so keen to jump on the bandwagon which claims that we have invented silicon brains.
They are arguably useful automatons built on dubious data obtained in ethical gray areas. We're just starting to see what we did, and we have a long way to go.
So, a living parrot might be more intelligent than these stochastic parrots. I'll stay on the cautious critics' wagon for now.
We are not stochastic parrots. Older components of our brain help "ground" our thoughts and allow things like doubt or a gut feeling to develop, which means we can question ourselves in ways an LLM cannot.
Yesterday my bullshit machine wrote a linker-argument parser to hook a C++ library up in a Rust build config. Oh, and it also wrote tests for it. https://chatgpt.com/share/67a89e5f-b5b4-8011-9782-472d469cc2...
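For anyone wondering what that kind of glue looks like, here is a minimal build.rs sketch of linking a prebuilt C++ library into a Rust crate; this is not the code from the linked transcript, and the library name "mylib" and the search path are made-up placeholders:

    // build.rs: minimal sketch (hypothetical names, not the transcript's code)
    fn main() {
        // Tell Cargo where the prebuilt C++ library lives (placeholder path).
        println!("cargo:rustc-link-search=native=/usr/local/lib");
        // Link the C++ library and the C++ standard library runtime.
        println!("cargo:rustc-link-lib=dylib=mylib");
        println!("cargo:rustc-link-lib=dylib=stdc++");
        // Re-run this script only when it changes.
        println!("cargo:rerun-if-changed=build.rs");
    }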