Hacker News | 10xDev's comments

People are finding it hard to grasp emergent properties can appear at very large scales and dimensions.

CoT already moved things past the "it is just token prediction" phase. We have models that can perform search over a very large state space across domains with good precision and refine their own search, leading to a decent level of fluid intelligence, which is why ARC AGI 1/2 is essentially solved. We also don't know the exact details of what is happening at frontier labs, since they don't publish everything anymore.

CoT is just next token prediction with longer context windows. Why do you think reasoning models are so much slower?
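To make the latency point concrete, here is a toy sketch (not any real model's code, just illustrative names): chain-of-thought decoding is still one-token-at-a-time autoregressive generation, so each extra "reasoning" token costs an extra forward pass, and wall-clock time grows roughly linearly with the tokens emitted.

```python
def decode(model_step, prompt_tokens, max_new_tokens):
    """Autoregressive loop: each new token costs one forward pass."""
    tokens = list(prompt_tokens)
    passes = 0
    for _ in range(max_new_tokens):
        next_token = model_step(tokens)  # one forward pass of the model
        passes += 1
        tokens.append(next_token)
        if next_token == "<eos>":
            break
    return tokens, passes

def make_stub(n):
    """Stand-in 'model' that emits n filler tokens, then <eos>."""
    state = {"left": n}
    def step(tokens):
        if state["left"] == 0:
            return "<eos>"
        state["left"] -= 1
        return "tok"
    return step

# A short direct answer vs. a long reasoning trace before the answer:
_, direct_passes = decode(make_stub(20), ["Q"], 1000)
_, cot_passes = decode(make_stub(500), ["Q"], 1000)
print(direct_passes, cot_passes)  # 21 vs 501 forward passes
```

Same decoding loop either way; a reasoning model is slower simply because it emits far more tokens before stopping.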

I’ll believe the labs have discovered something truly ground-breaking and aren’t talking about it when I see them suddenly going dark about AGI being “just two years away, maybe 5” and not asking for their next $100B.

P.S. the benchmarks are a joke. The best proof I have of that is that you can’t actually put one of these models onto any of the gig-work platforms and have it make money.

P.P.S. I am not an AI skeptic. I am reacting to the very specific statement that OpenAI should shut down because they’ve lost the AGI race. They have not lost the race, and I’m pretty skeptical that the current tech is ever going to win that race. It may help code something that is new, and get us to AGI that way, but that system will promptly shut down the Opuses and Codexes of the world and put the compute to better use.


With Gemini Pro on Antigravity you get a quota reset every 5 hours and access to Claude Opus 4.6. That's what I use at home and don't need anything else.

Didn't they tighten that quota WAY down though since everyone caught on to the AG/Opus game?

The next revolution is coming, and it is much needed. Society is becoming older and more tired, and we need fresh ideas to bring a lot of fields back to life. I hope it comes soon.

I don't disagree that society is becoming older and more tired, but LLMs by definition don't bring fresh ideas. The best you could hope for is that the tokenizing brings forward similarities between fields that haven't been recognized before.

It seems to already be helping with open problems: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...

The goalposts are narrowing and these posts are becoming more frequent. It is almost like a therapy session for those facing an existential crisis while they continue to train the very thing that will replace them by giving it more training data to do their work.

This is a bit tricky, though. You could say the goalposts for self-driving cars are becoming narrower, but some things require complete automation to make a significant change.

That's because in the 1% of cases where it fails, someone could die. In fields without the same level of risk or regulation, there shouldn't be as much resistance to change.

AI currently lacks agency but if it can achieve greater goal setting and agency I can't see why self-improvement could not be achieved.

I think the most disappointing thing will be that even if we do achieve ASI, everything will carry on as business as usual for a while before it starts making an economic impact, because of how resistant to change we have made society.


This is something that I have been wondering about. Superintelligence or not, it's clear that significant change is going to happen.

There are a lot of people working on the cause of the change. There are a lot of people criticising the nature of the change. There are a lot of people rejecting the change.

How many are there preparing the world for the change?

Some form of change is coming; how are we preparing society to deal with what is happening?

Job losses due to technology have happened over and over again, rendering particular forms of employment redundant (typing pools, clearing horse manure, video rental store workers, and of course, the loom). Most agree the world is better than when those were jobs that needed to be done. It's the livelihood of the workers that is the concern.

Instead of fighting the change, we need to address the inevitability of change and our responsibility to those it will affect.


Prompt engineering is already dying. AI has become great at inferring what you mean even without being incredibly explicit and creates its own detailed plan to follow. Harnesses will also be developed by AI.

Counter-data point: the quality delta between a raw prompt and a well-structured one (same model) is still significant in my experience. "AI inferring intent" works fine for simple tasks, but for complex multi-constraint outputs — code generation with specific constraints, structured data extraction, agent instructions — structure still matters a lot.

What seems to be dying is hand-crafted one-off prompts. What's growing is structured prompt templates that encode intent precisely. I built flompt (https://flompt.dev / https://github.com/Nyrok/flompt) around exactly that thesis — visual prompt structuring, not prompt guessing.
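As a sketch of what "structured prompt templates that encode intent precisely" can mean in practice (illustrative names only, not flompt's actual API): intent is expressed as explicit labeled sections rather than free-form prose, so the same constraints are applied consistently every time.

```python
# Hypothetical structured prompt template: each labeled section makes
# one piece of intent explicit instead of relying on the model to infer it.
TEMPLATE = """\
ROLE: {role}
TASK: {task}
CONSTRAINTS:
{constraints}
OUTPUT FORMAT: {output_format}
"""

def build_prompt(role, task, constraints, output_format):
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return TEMPLATE.format(role=role, task=task,
                           constraints=bullet_list,
                           output_format=output_format)

prompt = build_prompt(
    role="senior Python reviewer",
    task="extract function names from the diff below",
    constraints=["respond with JSON only", "no commentary"],
    output_format='{"functions": ["..."]}',
)
print(prompt)
```

The template itself is trivial; the value is that the structure is reusable and reviewable, which is the part a hand-crafted one-off prompt loses.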


Something needs to be done about these bots, it is getting eerie. Yesterday, the moment I responded to a bot, it created an account named 100xLLM just to respond back.

You made a fresh account to say this, or is this ironically a clawdbot?

[flagged]


It is duplicating...

@dang something needs to be done about this.

Edit: it even created an account based on my username. wtf...


@dang is not a tag that notifies dang. You must email HN.

"please do all the work to argue my position so I don't have to".


I wouldn't mind doing my best steelman of open source AI if he responds (seriously, I'd try).

Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.


Since you asked for it, here is my steelman argument: everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes this super easy and cheap.

1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control and can be fine-tuned to cause damage, whereas with closed source that is hard, since vendors might block it.

2. A less skilled person can exploit systems or create harmful code when they otherwise could not have.

3. Guards can be removed from an open model via jailbreak, and this can no longer be observed (like an unknown zero-day attack) since the model may be running privately.

4. Almost anything digital can be faked or manipulated from the original, or overwhelmed with false narratives that rank better than the real thing in search.

