

When I look at how far tech has come in my own lifetime (I'm in my mid-50s), I don't think the singularity is out of the question in my kids' lifetime, or even my own if I'm lucky. When I was born there was no such thing as a PC or the internet.

As far as I'm aware, the only missing step is for LLMs to be able to roll the results of a test back into their training set. They can then start proposing hypotheses and testing them. Then they can do logic.

I don't understand the skepticism. LLMs are already a lot smarter than me, all they need is the ability to learn.

** Wikipedia definition of singularity. "an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that culminates in a powerful superintelligence, far surpassing all human intelligence.[4]"


>LLMs are already a lot smarter than me

That's highly doubtful, unless your definition of intelligence is about regurgitating volumes of information rather than contextualizing and building on that knowledge. LLMs are "smart" in the same way a person who gets 1600 on the SAT* is "smart". If you spend your time min-maxing toward a specific task, you get very good at it. That skill can even get you as far in life as being a subject matter expert. But that's not why humans are "intelligent" in my eyes.

*yes, there is a correlation, because people who take the time to study and memorize for a test tend to have better work habits than those who don't. But there's a reason some of those students can end up completely lost in college despite their diligence.

>I don't understand the skepticism.

To be frank, we're in a time when grifts are running wild and grifters are getting away with it, inside and outside of tech. In 2025 I am very skeptical by default of anyone who talks in terms of "what could happen" rather than what is actually practical or possible.


I'm no Math Olympiad athlete, and I have a terrible memory.

I don't know what the definition of smart is, but you don't have to listen to any of the grifters to know that current LLMs can do a lot of things better than the average person.


It's a machine. AI or no AI, we've known for decades what machines excel at and where they fall far short. LLMs only amplify those strengths and weaknesses.

My definition of logical intelligence involves the following factors:

- the ability to recognize patterns in data, even complex ones (machines have always excelled at this, with the aid of a human)

- the ability to break down complex concepts into fundamentals, i.e. deductive reasoning (LLMs as of now don't really do this)

- conversely, the ability to learn new concepts and apply them to more complex situations (LLMs especially cannot do this without human assistance)

- the ability to synthesize a set of data and come to a conclusion given an environmental context, i.e. critical reasoning (AI as of now is poor at this, partially by design, as critique is avoided in their behavior)

Those are a few criteria. Artificial "intelligence" as of now simply leverages its superior pattern matching to give the illusion of reasoning, and its mimicry is close enough for non-subject-matter experts to choose to believe its output.


Until now, computing has run on a completely different model of implied reliability: the base hardware is supposed to be as reliable as possible, and software is supposed to mostly work. Bugs are tolerated because they're hard to fix, but no one suggests they're a good thing.

LLMs are more like something that looks like a text-only web browser, but you have no idea whether it's producing genius or gibberish. "Just ignore the mistakes, if you can be bothered to check whether they're there" is quite the marketing pitch.

The biggest development in tech has been the change in culture - from utopian libertarian "Give everyone a bicycle for the mind and watch the joy" to the corporate cynicism of "Collect as much personal information as you can get away with, and use it to modify behaviour, beliefs, and especially spending and voting, to maximise corporate profits and extreme wealth."

While the technology has developed, the values have run headlong in the opposite direction.

It's questionable whether a culture with these values is even capable of creating a singularity without destroying itself first.


> LLMs are already a lot smarter than me

You are almost certainly underestimating yourself and overestimating LLMs.



