Hacker News | fmbb's comments

I don’t think we can expect all workers at all companies to just adopt a new way of working. That’s not how competition works.

If agentic AI is a good idea and it increases productivity, we should expect to see some startup blowing everyone out of the water. I think we should be seeing it now if it makes you, say, ten times more productive. A lot of startups have had a year of agentic AI now to help them beat their competitors.


We're already seeing eye-watering, blistering growth from the hot new applied-AI startups and labs.

IMO the wave of top-down "AI mandates" from incumbent companies is a direct result of that competitive pressure, although it probably won't work as well as the execs think it will.

That being said, even Dario claims only a 5-20% speedup from coding agents. 10x productivity only exists in microcosm prototypes, or if someone was so unskilled that one-shotting a localhost web app is a 10x for them.


"eye-watering, blistering growth from the new hot applied AI startups and labs"

Could you give us a few examples?


Claude Code: $1B+ ARR.

Anthropic 10xing ARR; OpenAI too.

Harvey, Legora, Sierra, Decagon, 11labs, Glean (ish), base10 (infra), Modal (infra), Gamma, Mercor (ish), Parloa, Cognition.

Regulated industries are giving these companies 7- and 8-figure contracts less than two years from incorporation.


Claude Cowork was apparently built in less than two weeks using Claude Code, and appears to be getting significant usage already.

Only a personal anecdote, but the humans I know that have used it are all aware of how buggy it is. It feels like it was made in 2 weeks.

Which gets back to the outsourcing argument: it’s always been cheap to make buggy code. If we were able to solve this, outsourcing would have been ubiquitous. Maybe LLMs change the calculus here too?


That's certainly a good example of a tool developed quickly thanks to AI assistance.

But coding assistance tools must themselves be evaluated by what they produce. We won't see significant economic growth from using AI tools to build other AI tools recursively unless there are companies using these tools to make enough money to justify the whole stack.

I believe there are teams out there producing software that people are willing to pay for faster than they did before. But if we were on the verge of rapid economic growth, I would expect HN commenters to be able to rattle these off by the dozen.


AI has been a lifesaver for my low performing coworkers. They’re still heavily reliant on reviews, but their output is up. One of the lowest output guys I ever worked with is a massive LinkedIn LLM promoter.

Not sure how long it’ll last though. With the time I spend on reviews I could have done it myself, so if they don’t start learning…


> With the time I spend on reviews I could have done it myself, so if they don’t start learning…

Then? Your job is still to review their code. If they are your coworker, you cannot fire them.


Then just start rubber-stamping their code. Say you "vibe" read it.

OpenClaw went from first commit in late November to a Super Bowl commercial (it's meant to be the tech behind that AI.com vaporware thing) in February.

(Whether you think OpenClaw is good software is kind of beside the point.)


OpenClaw is not going to be a thing in 6 months. The core idea might exist but that codebase is built on a house of cards and is being replicated in 10% of the code.

I don’t think anyone is arguing against code agents being good at prototypes, which is a great feat, but most SWE work is built on maintaining code over time.


It’s very much not beside the point. Productivity is measured by how much value you get out of the hours your workers put in.

But that only gets you to a philosophical argument about what "value" is. Many would argue that being able to get your thing into a Super Bowl commercial is extremely valuable. I definitely have never built anything that did.

It's very much imperfect, but the only consistently agreed upon and useful definition of "value" we have in the West is monetary value, and in that sense, we have at least a few major examples of AI generating value rapidly.


OK but that also means VR was a success, and web 3, and NFTs.

Well, yes, these were definitely a success for some. And I personally still believe that VR will be a success in the longer-term.

In any case, I agree with the grandparent post about the distinction between being successful and good.


Right, but what about real companies that solve real people's problems? I think LLMs make a difference for sure, but I haven't yet seen a company that blew past its competitors because of how great their AI usage was. A really great example would be an underdog smallish company that did so in a non-AI field.

Raising $100M doesn’t even mean you have a good idea or an idea people like or an idea you can even make money on.

It’s probably a better indicator of a good business idea than if you get slapped in the face…

And yet, who would you trust more - a CEO that raised 100M on their "vision" or someone who got slapped in the face?

I mean, Juicero got the money instead of the slaps in the face it deserved. And there are thousands of startups like that. I think VCs are terrible at picking and a dice roll would probably do a better job.

A raise is random noise, not signal: it's based on a confidence game within the VC ecosystem. LP capital call -> GP gamble based on hand-waving, considering VC underperforms as an asset class [1] [2] even when accounting for the grand-slam returns. It's 0DTE options gambling dressed up as skill and an art. But, you know [3] [4] [5], the lottery still pays out sometimes.

TL;DR: a raise is not a robust signal in this regard.

[1] https://news.ycombinator.com/item?id=7260137

[2] https://www.linkedin.com/posts/peterjameswalker_most-venture...

[3] https://en.wikipedia.org/wiki/There%27s_a_sucker_born_every_...

[4] https://en.wikipedia.org/wiki/Overconfidence_effect

[5] https://en.wikipedia.org/wiki/Survivorship_bias


Why do you think the creator behind SerenityOS has no experience? I mean it’s not the most popular OS out there but he seems like a capable individual.

In case it's not glaringly obvious from the comment: he has plenty of C++ experience and little Rust experience, and that's according to his own comments.

The relevant bit here is that he's porting from a language in which he has plenty of experience into one in which he doesn't, in a large project.

That in itself sounds like putting a lot of faith in LLMs, but maybe there are factors not mentioned here, which is why I said "on the surface".


Indeed, the hard part won't be the port, but the maintenance of what got ported. To be fair, though, he's probably going to be able to use the same techniques for that.

It's hard to articulate, but as someone who knows first hand, I just want to say that manic productivity is not the same as solid engineering.

AI demand is subsidized by the bubble. The operators buying up the RAM are not paying with money that exists. Market economics are not working here.

1. Did a human really knowingly decide to allow that?

2. Did a human create the plugin?

3. Are the maintainers human?

By human I mean an animal that is intelligent enough to understand the agreements and what code they are writing.


Most people aren't human then, sad.

I think Dune is easily a top ten franchise among computer people, so that sort of thing is nothing new.

That can be solved by migrating to a sensible legal system instead.

Large Language Models have no actual idea of how the world works? News at 11.

In any kind of real task, serialization is not the hard part.

If you can write a meta program for it, you can execute that in CI and spit out generated code and be done with it. This is a viable approach in any programming language that can print strings to files.
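As a sketch of that approach (the record type, field list, and function names here are made up for illustration, not from any real project), the meta-program can be nothing more than a script that prints source code:

```python
# Minimal code-generation sketch: emit a serializer for a record type.
# The FIELDS list and the generated function are hypothetical examples.
FIELDS = [("id", "int"), ("name", "str"), ("email", "str")]

def generate_serializer(class_name, fields):
    """Return Python source for a serializer function, built by printing strings."""
    lines = [f"def serialize_{class_name.lower()}(obj):", "    return {"]
    for name, _type in fields:
        lines.append(f'        "{name}": obj.{name},')
    lines.append("    }")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # In CI you would redirect this into a generated-source file and build it;
    # here we just print the generated code.
    print(generate_serializer("User", FIELDS))
```

A CI job would run this script once, commit or compile its output, and the "serialization problem" is done without any language-level metaprogramming.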

It’s not frustrating, but maybe it feels tacky. But then you shrug and move on to the real task at hand.


Lisp macros are more for not having to write the same type of code over and over (all subtly different, but sharing the same general structure).

One such example is the let-alist macro in Elisp:

https://www.gnu.org/software/emacs/manual/html_node/elisp/As...

Dealing with nested association lists is a pain. This lets you write your code with a dot notation, like jq.

Macros are not only for solving a particular task (serialization, dependency injection, snippets, …); they let you write things the way they make sense. Like having an HTML-flavored Lisp for templates, an SQL-flavored Lisp for queries, … Lisp code is a tree, and most languages' syntax is a tree, so you can easily bring their semantics into Lisp.
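The dot-access convenience can be approximated at runtime in other languages, though without the compile-time rewriting a macro gives you. As a hedged sketch (DotDict is a made-up helper, not a real library), nested dicts in Python can be given jq-style dot access:

```python
# Runtime wrapper (not a macro) giving jq-style dot access over nested dicts,
# loosely analogous to what let-alist provides for alists in Elisp.
class DotDict:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, key):
        value = self._data[key]
        # Wrap nested dicts so chained access (d.a.b.c) keeps working.
        return DotDict(value) if isinstance(value, dict) else value

d = DotDict({"user": {"address": {"city": "Oslo"}}})
print(d.user.address.city)  # → Oslo
```

The difference is that let-alist does this rewriting at macro-expansion time, with no wrapper objects at runtime.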


You say that, but I've run into real production problems which were ultimately caused by bad serialization tooling. Language semantics are never going to be your biggest problem, but rough edges add up and do ultimately contribute to larger issues.

Spam senders don’t have pseudorandom number generators?

They're more likely to put in the least amount of effort, or to care the least about how the header is used later on.

The article is definitely contradicting itself. There are only two sentences between

> Why should I bother to read something someone else couldn't be bothered to write?

and

> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

So they expect nobody to read their documentation.


That’s not a contradiction: documentation often needs to be written with no expectation that anyone will ever read it.

> So they expect nobody to read their documentation.

Yes, exactly. Because AI will read it and learn from it, it's not for humans.

