Hacker News | Sol-'s comments

With stuff like this, might be that all the infra build-out is insufficient. Inference demand will go up like crazy.


Unlocking the next order of magnitude of software inefficiency!

Though I do hope the generated code ends up better than what we have right now. It had better not get much worse; we can't afford all that RAM.


Dunno, it's probably less energy efficient than a human brain, but being able to turn electricity into intelligence is pretty amazing. RAM and power generation are engineering problems to be solved for civilization to benefit from this.


It'd be nice if CC could figure out all the required permissions upfront and then let you queue the job to run overnight.


Except it can't really do anything unattended.


It actually can with the right wrapper. I built an open source loop driver that runs Claude Code CLI autonomously with --dangerously-skip-permissions. It handles session continuity (--resume), budget enforcement, stagnation detection (two-strike system if turns stay low), and auto model fallback (Opus -> Sonnet on consecutive timeouts).

The key is streaming NDJSON output to track cost per iteration and detect completion markers. The human stays in control by editing CLAUDE.md between runs to steer the project.

https://github.com/intellegix/intellegix-code-agent-toolkit
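For illustration, here is a minimal sketch of the cost-tracking half of such a loop. The field names are assumptions on my part (a `result` event carrying a `total_cost_usd` field, one JSON object per line), based on how NDJSON event streams conventionally look, not on this toolkit's actual schema:

```python
import json

def track_cost(ndjson_lines, budget_usd):
    """Accumulate per-iteration cost from an NDJSON event stream.

    Returns (total_cost, over_budget). Assumes each final 'result'
    event carries a 'total_cost_usd' field, one JSON object per line.
    """
    total = 0.0
    for raw in ndjson_lines:
        raw = raw.strip()
        if not raw:
            continue
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue  # tolerate partial or malformed lines
        if event.get("type") == "result":
            total += float(event.get("total_cost_usd", 0.0))
            if total >= budget_usd:
                return total, True  # budget exhausted: stop the loop
    return total, False
```

In a real driver this would wrap a subprocess over the CLI invocation and feed stdout line by line; the same pass over the stream can scan for completion markers to decide whether to resume or stop.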


Anyone paying attention has known that demand for every type of compute that can run LLMs (i.e. GPUs, TPUs, hell, even CPUs) was about to blow up, and that it will remain extremely large for years to come.

It's just HN that's full of "I hate AI" or wrong contrarian types who refuse to acknowledge this. They will fail to reap what they didn't sow and will starve in this brave new world.


Agreed, agent scaling and orchestration suggest that demand for compute is going to blow up, if it hasn't already. The rationale for building all those data centers they can't build fast enough finally makes sense.


Oh yeah I mean if you're a webdev and you haven't built several data centres already you're basically asking to be homeless.


This reads like a weird cult-ish revenge fantasy.


And what about you? Show your "I used AI today" badge, right now!


[flagged]


If AI progresses slowly enough, we will end up in a society where high unemployment numbers are the norm and we are stuck in capitalism.

And thinking about one 'senior' on my team, I would already prefer an expensive AI subscription over that one person.


[flagged]


Blue collar work won't be safe for long. Just longer.


What the fuck is wrong with you? This guy is either a troll or legitimately mentally ill.


Is it their version of virtual AI employees that some startups were previously getting into, plus on-site support by FDEs and such?


I really like that they establish it as their niche, and given their corporate focus it's also easy for them to take that stance. They have less to lose from forgoing advertising than ChatGPT with its relatively larger consumer focus.

I don't think ad-supported business models are as bad as everyone says, though. Ultimately they trade attention for access, and access to this new kind of intelligence is already very important and will only become more so. Claude already has pretty low limits in the free tier, so that's the price - perhaps a good one - for being ad-free, I guess.


ChatGPT also doesn't pay much attention to consumers, and the recent 4o incident has caused a stir.


Dunno, I want to agree, but at the same time it's spoken like someone to whom these experiences and human relationships come easily. There are many people out there who, for some reason (anxiety, etc.), cannot easily access this part of the human condition, unfortunately.

Perhaps better to roam a virtual reality than be starved in the real world.


I also find the implications for this for AGI interesting. If very compute-intensive reasoning leads to very powerful AI, the world might remain the same for at least a few years even after the breakthrough because the inference compute simply cannot keep up.

You might want millions of geniuses in a data center, but perhaps you can only afford one and haven't built out enough compute? Might sound ridiculous to the critics of the current data center build-out, but doesn't seem impossible to me.


I've been pretty skeptical of LLMs as the solution to AGI already, mostly because the limits of what the models seem capable of doing appear lower than we were hoping (glibly, I think they're pretty good at replicating what humans do when we're running on autopilot, so they've hit the floor of human cognition, but I don't think they're capable of hitting the ceiling).

That said, I think LLMs will be a component of whatever AGI winds up being - there's too much "there" there for them to be a total dead end. But, echoing the commenter below and taking an analogy to the brain, it feels like "many well-trained models, plus some as-yet unknown coordinator process" is likely where we're going to land. In other words, to take the Kahneman & Tversky framing, I think the LLMs are making a fair pass at "system 1" thinking, but I don't think we know what the "system 2" component is, and without something in that bucket we're not getting to AGI.


Do they assume that the current state of our institutions is normatively correct? AI progress will come and bring manifold benefits, so we shouldn't really restrict it too much.

If the institutions cannot handle that, they will have to change or be destroyed. Take universities, for instance. Perhaps they will go away - but is that a great loss? Learning (in case it remains relevant) can be achieved more efficiently with a personal AI assistant for each student.


I think it's still worthwhile, though. AI, given its current trajectory, will be able to help immensely with science and engineering challenges. Degrowth isn't a recipe for sustainable reduction of CO2 emissions.


The big engineering challenges right now are electrifying everything (which means convincing people that it's the right thing to do and that gas-powered vehicles belong in the trash bin of history, among other things) and banning the production of "virgin" plastic items, especially single-use ones (which also requires a whole lot of convincing).

Most of that convincing is being done in the exact opposite direction with... you guessed it... AI.


Pumping even more CO2 into the air hoping the magic box spits out a solution to remove the CO2 from the air doesn't seem like a sustainable recipe either.


This is broadly more PR puffery. We don’t need some magic AI model to tell us how to cut emissions. We just need to execute things we already know work.


His tone and his sucking up to his authoritarian government will probably only serve to negatively polarize Europe against Cloudflare, even if he might have a point on the substance itself.


Indeed. There was a much better way to make this point.


I don't get how such idiotic people get into those kinds of positions.


Agreed, he really should learn from how Pavel Durov responded to France after he was treated unfairly by French police.


The post is unhinged. Basically a tantrum. It’s sad really. It reminds me of https://www.kalzumeus.com/2017/09/09/identity-theft-credit-r...

tldr you don’t get angry discussing with institutions because it makes you look like an amateur.


The shape of the politics has changed, though. From civil rights, questioning authority, and cypherpunk - which inherently have a libertarian bent - it has shifted toward identity politics and social justice / grievance culture with only tenuous connections to tech.

For a hacker conference, they also are pretty Luddite against new technologies like AI. It's a very conservative degrowth movement nowadays, all in all.


> For a hacker conference, they also are pretty Luddite against new technologies like AI.

Hacking was always against centralization and central control (and for decentralization) - which is why any lecture celebrating the big-tech AI companies would run strongly against the whole culture.

While AI is a controversial topic for various reasons, I would say that if someone gave a great talk about how to efficiently train an AI model decentrally as a volunteer-computing project, it would fit the C3 perfectly.

Addendum: There is an AI talk (as pointed out by wunderwuzzi23 at https://news.ycombinator.com/item?id=46390959): https://events.ccc.de/congress/2025/hub/event/detail/agentic...


> For a hacker conference, they also are pretty Luddite against new technologies like AI.

No, just this one, because it steals from almost everyone and gives to the few. Even if it seems to be somewhat failing at monetization for now, control is in the hands of very few.


I am happy they are careful with new technologies, especially one like AI, and also send the right signals. There are enough non-political reasons to take that stance, especially considering the societal implications and how technology affects everyone, not just stakeholders and techbros. In a time when tech in the US is accelerating on the top-down agenda of figures like Andreessen, Thiel & Co., that is very much needed imo.


Then just continue to take them? The article seems to mention side effects only once, and those were suspected to be related to the rapid weight loss rather than to the drug itself.

Pointless human-interest story with a rent-a-quote expert sprinkled in, trying to imply some ominous danger without coming up with any hard data for it.

