Hacker News

This will not age well.

Yesterday my bullshit machine wrote a linker argument parser to hook a C++ library up in a Rust build config. Oh it also wrote tests for it. https://chatgpt.com/share/67a89e5f-b5b4-8011-9782-472d469cc2...




You asked it to do a task for which there are probably many examples in the training data. The course will probably tell you that tasks like that work fine. I don't see your point here.


Yesterday my 30-year-old photocopier wrote a Shakespeare drama! All I had to do was scan the original pages.


Did your photocopier also write a drama in the style of Shakespeare based on some news article you gave it?


[flagged]


Allowing a parrot to iterate on given examples and generate similar ones from the information baked into its weights does not invalidate the "Stochastic Parrot" take. On the contrary, it proves it.

LLMs are statistical machines. The catch is that you feed them hundreds of terabytes of valid information, so they asymptotically generate valid information as a result of that statistical bias.
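The "statistical machine" point can be made concrete: at each decoding step, a language model just samples the next token from a learned probability distribution over its vocabulary. A minimal toy sketch (the vocabulary, logits, and numbers below are illustrative assumptions, not from any real model):

```python
# Toy illustration of next-token sampling: an LLM's decoding step reduces to
# drawing a token from a probability distribution derived from learned logits.
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Sample one token, weighted by the softmax probabilities.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Pretend the training bias strongly favors "valid" output over nonsense:
vocab = ["valid", "hallucination"]
logits = [4.0, 1.0]
print(softmax(logits))  # roughly [0.95, 0.05]
```

With the bias toward "valid" this heavy, most samples look sensible, but the small tail probability never vanishes, which is one way to picture why hallucinations still slip through. Raising the temperature flattens the distribution and makes the tail more likely.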

Even so, they can hallucinate badly. The same OpenAI model claimed that I'm a footballer, a goalkeeper in fact.

Stochastic parrot, yes. On LSD, very yes.


It's clearly true that LLMs are 'stochastic parrots', but for all we know that might be the key to intelligence. In itself it is no deeper an observation than calling your fellow humans 'microbial meatbags'.

Saying that LLMs are stochastic machines does not establish an upper bound for success.


The thing is, the assumption that LLMs might be intelligent rests on the assumption that intelligence is enabled solely by the brain.

However, as the science improves, we understand more and more that the brain is just part of a much bigger network, and that its size or surface roughness might not be the only things determining the level of intelligence.

Also, all living things have processes which allow constant input from their surroundings, and they have closed feedback loops which constantly change and tweak things. Call these hormones, emotions, self-reflection, or whatnot.

We scientists love to play god with the information we have at hand, yet we are constantly humbled by nature as we experience the shallowness of what we know. Because of that, I, as a CS Ph.D., am not so keen to jump on the bandwagon that claims we have invented silicon brains.

They are arguably useful automatons built on dubious data obtained in ethical gray areas. We're just starting to see what we did, and we have a long way to go.

So, a living parrot might be more intelligent than these stochastic parrots. I'll stay on the cautious critics' wagon for now.


> So, a living parrot might be more intelligent than these stochastic parrots.

Or the other way around.


I don't think you've ever observed a cockatoo...


We are not stochastic parrots. Old components of our brain help “ground” our thoughts and allow things like doubt or a gut feeling to develop, which means we can question ourselves in ways an LLM cannot.



