Hacker News | imiric's comments

I find it amusing that the innovation in this space for the past year+ has been mostly centered around engineering: MCP, "agents", "skills", etc. Now "agent" orchestration is the new hotness.

Meanwhile, the same issues that have plagued these tools since their inception are largely ignored: hallucination, inaccuracy, context collapse, etc. These won't be solved by engineering, but by new research and foundational improvements.

On one hand, solid engineering was sorely needed, and can extract a lot of value from the current tech. But on the other, all these announcements and improvements feel like companies grasping at straws to keep the hype cycle going by any means necessary. Charts must go up and to the right, or investors get antsy.

It's all adding to the mountain of signs that suggest that this isn't the path to artificial intelligence. It's interesting tech, with possibly many valuable applications, but the "AI" narrative is frankly tiring. I wish I could fast forward on this speculative phase, go past the inevitable crash, and arrive at a timeframe where we've figured out what this tech is actually good for, and where we hopefully use it more for good than evil.


Is the possibility that these tools still don't perform at the level some people need not an option?

It's insulting that criticism is so often met with superficial excuses and the insinuation that the user lacks the required skills.


When really solid programmers who started out skeptical (and who even have a policy of banning PR submitters who don't disclose that they used AI) now show how their workflows have been improved by AI agents, it may be worth trying to understand what they are doing that you are not.

https://mitchellh.com/writing/my-ai-adoption-journey

My experience mirrors that of Mitchell. It absolutely is at the level now where AI can free up time to do the really interesting stuff.


That possibility is covered by A and B.

GP said 'falls short every time I’ve tried'. Note the word 'every'.


I'm curious: what concrete value have you extracted using these tools that is worth thousands of US dollars?

That doesn't track at all IME.

Programming is not something you can teach to people who are not interested in it in the first place. This is why campaigns like "Learn to code" are doomed to fail.

Whereas (good) programmers strive to understand the domain of whatever problem they're solving. They're comfortable with the unknown, and know how to ask the right questions and gather requirements. They might not become domain experts, but can certainly learn enough to write software within that domain.

Generative "AI" tools can now certainly help domain experts turn their requirements into software without learning how to program, but the tech is not there yet to make them entirely self-sufficient.

So we'll continue to need both roles collaborating as they always have for quite a while still.


Conversely, good developers can now leverage LLMs to master any domain.

Hmm, I think that's more difficult than using these tools to create software. If generated software doesn't compile, or does the wrong thing, you know there's an issue. But if the LLM gives you seemingly accurate information that is actually wrong, you have no way of verifying it other than asking a human domain expert. The tech is not reliable enough for either task yet, but software is easy to verify, whereas general information is not.

This type of software is mainly created to gain brand recognition, influence, or valuation, not to solve problems for humans. Its value is indirect and speculative.

These are the pets.com of the current bubble, and we'll be flooded by them before the damn thing finally pops.


I wish we would see these warnings on all articles and comments from pro-AI influencers as well.

Except you do get it all the time, just not as politely. Under every Simon Willison article you can see people calling him a grifter. Even under the Redis developer's posts you can see people insulting him for being pro-AI.

> otherwise you can't really have a default value, because there's no way to tell if a given zero was explicit or implicit

You can use pointers, or nullable types. These are not ideal, admittedly, but it's not true that "there's no way".
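
For illustration, a minimal sketch of the pointer approach (Config and Retries are hypothetical names, not something from the thread): nil means "not set", so an explicit zero survives.

    package main

    import "fmt"

    // Config uses a pointer field so an explicit zero can be told
    // apart from "not set": nil means "use the default".
    type Config struct {
        Retries *int
    }

    func retries(c Config) int {
        if c.Retries == nil {
            return 3 // field omitted: fall back to the default
        }
        return *c.Retries // explicit value, even when it is 0
    }

    func main() {
        zero := 0
        fmt.Println(retries(Config{}))               // 3 (implicit)
        fmt.Println(retries(Config{Retries: &zero})) // 0 (explicit zero)
    }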

> there's no way to ensure that every field gets filled in

This can also be done with an exhaustiveness linter. You might think this isn't great either, but then again, always being reminded that you left out some fields is a) annoying, and b) defeats the purpose of default values altogether.
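
To make that concrete: with an exhaustiveness linter such as exhaustruct (to name one from the golangci-lint ecosystem) enabled, a literal like the one below gets flagged, even though the compiler happily zero-initializes the missing field. ServerConfig is a hypothetical type for illustration.

    package main

    import "fmt"

    type ServerConfig struct {
        Host string
        Port int
    }

    func main() {
        // An exhaustiveness linter would flag this literal because
        // Port is omitted; the compiler accepts it and Port is 0.
        cfg := ServerConfig{
            Host: "localhost",
        }
        fmt.Println(cfg.Port) // prints 0
    }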

I agree with you on immutability, though.

I also agree with some of the points in the article, and have my own opinions about things I would like Go to do differently. But if we can agree that all programming languages have warts and that language designers must make tradeoffs, I would say that Go manages to make the right tradeoffs to be an excellent choice for some tasks, a good choice for many tasks, and a bad choice for a few tasks. That makes it my favorite language by a wide margin, though that's also a matter of opinion.


This is not about mindless worship, but about the fact that the UNIX design has stood the test of time for this long, and is still a solid base compared to most other operating systems. Sure, there are more modern designs that improve on security and capability (seL4/Genode/Sculpt, Fuchsia), but none are as usable or accessible as UNIX.

So when it comes to projects that teach the fundamentals of GNU/Linux, such as LFS, overwhelming the user with a large amount of user space complexity is counterproductive to that goal. I would argue that having GNOME and KDE in BLFS is largely unnecessary and distracting as well, but systemd is core to this issue. There are many other simpler alternatives to all of this software that would be more conducive to learning. Users can continue their journey with any mainstream distro if they want to get familiar with other tooling. LFS is not the right framework for building a distribution, nor should it cover all software in the ecosystem.


The first version of UNIX was released in 1971 and the first version of Windows NT in 1993, so UNIX predates NT by only about 22 years. Both OSes have "stood the test of time", though one passed it with a dominant market share, whereas the other didn't. And systemd is heavily inspired by NT.

Time flies fast, faster than recycled arguments. :)


I'm confused as to which OS is the one that passed the other with dominant market share. Last I checked, Linux is everywhere, and Windows just keeps getting worse with every iteration.

I'm not sure I'd be smugly pronouncing anything about the superiority of Windows if I were a Microsoft guy today.

It's not surprising that systemd was heavily inspired by NT. That's exactly what Poettering was paid to create, by his employer Microsoft. (Oh, sorry: Red Hat, and then "later" Microsoft.)


Linux is "everywhere" only if you count Android, which is not very Unix-like.

Except that it didn't. Linux has nothing to do with UNIX design; it isn't UNIX System V in 2026.

> Linux has nothing to do with UNIX design

Respectfully, that's nonsense. Linux is directly inspired by Unix (note: lowercase) and Minix, shares many of their traits (process and user model, system calls, shells, filesystem, small tools that do "one thing well", etc.), and closely follows the POSIX standard. The fact that it's not a direct descendant of commercial Unices is irrelevant.

In fact, what you're saying here contradicts that Rob Pike quote you agree with, since Linux is from the 1990s.

But all of this is irrelevant to the main topic, which is whether systemd should be part of a project that teaches the fundamentals of GNU/Linux. I'll reiterate that it's only a distraction to this goal.


Yet UNIX, and proper Unix descendants, have replaced or complemented their init systems with systemd-like approaches, before systemd came to be.

So is UNIX design only great when it serves the message?


I'm not familiar with what UNIX or its modern descendants have or have not implemented. But why should Linux mimic them? Linux is a Unix-like, and a standalone implementation of the POSIX standard. The init system is implementation-specific, just like other features. There has been some cross-system influence, in all directions (similar implementations of FUSE, eBPF, containers, etc.), but there's no requirement that Linux must follow what other Unices do.

If you're going to argue that Linux implementing systemd is a good idea because it's following the trend in "proper" UNIX descendants, then the same argument can be made for it following the trend of BSD-style init systems. It ultimately boils down to which direction you think is better. I'm of the opinion that simple init systems, of which there are plenty to choose from, are a better fit for the Linux ecosystem than a suite of tightly coupled components that take over the entire system. If we disagree on that, then we'll never be on the same page.


I strongly doubt this tool is anywhere near as popular as it appears to be. GitHub stars can be bought, and social media is riddled with bots. On the dead internet it is cheap and trivial to generate fake engagement in order to reel in curious humans and potential victims.

I suspect this entire thing is a honeypot set up by scammers. It has all the tells: virality, grand promises, open source, and even the word "open" in the name. Humans should get used to this being the new normal on the internet. Welcome to the future.


I’ve had several non-technical friends tell me about it. It’s like what The Queen’s Gambit was to chess players, but for people in tech.

It was on the morning news today

That's not what I mean. Of course the buzz will reach mainstream media if everyone on social media seems to be talking about it.

What I mean is that the virality was bootstrapped by bots and then spread by humans. Virality can be maintained entirely by bots now, to give the appearance that there are more users than there actually are. But I doubt that the number of humans using it is anywhere close to what the amount of engagement suggests. Which wouldn't be surprising, considering the project is all about a large number of autonomous agents that interact with online services. It's a bot factory.


Oh I see

It's absolutely absurd that GitHub hasn't addressed it, to be honest. Right now it has 140k stars: more than foundational frameworks like Laravel or Express, or universal tooling like ESLint or the Rust compiler.

Sure, bud. Totally legitimate.


> Qualcomm straight up refuses to support chips through this many Android releases.

That's not entirely accurate. They do provide chips with extended support, such as the QCM6490 in the Fairphone 5. These are not popular because most of the market demands high performance, and companies profit from churning out products every year, but solutions exist for consumers who value stability and reliability over chasing trends and specs.

