
The "spreadsheet" example video is kind of funny: guy talks about how it normally takes him 4 to 8 hours to put together complicated, data-heavy reports. Now he fires off an agent request, goes to walk his dog, and comes back to a downloadable spreadsheet of dense data, which he pulls up and says "I think it got 98% of the information correct... I just needed to copy / paste a few things. If it can do 90 - 95% of the time consuming work, that will save you a ton of time"

It feels like finding that 2% that's off (or dealing with the 2% error) will be the time-consuming part in a lot of cases. I mean, this is nothing new with LLMs, but as these use cases encourage users to input more complex tasks that are more integrated with our personal data (and at times money, as hinted at by all the "do task X and buy me Y" examples), "almost right" seems like it has the potential to cause a lot of headaches. Especially when the 2% error is subtle and buried in step 3 of 46 of some complex agentic flow.





> how it normally takes him 4 to 8 hours to put together complicated, data-heavy reports. Now he fires off an agent request, goes to walk his dog, and comes back to a downloadable spreadsheet of dense data, which he pulls up and says "I think it got 98% of the information correct...

This is where the AI hype bites people.

A great use of AI in this situation would be to automate the collection and checking of data. Search all of the data sources and aggregate links to them in an easy place. Use AI to search the data sources again and compare against the spreadsheet, flagging any numbers that appear to disagree.
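
To make that concrete, here is a minimal sketch of the cross-checking step (plain Python; pull_from_source and the row fields are hypothetical, not anything from the video):

    # Re-pull each figure from its source and flag cells that disagree with the
    # spreadsheet, so a human only reviews the cells that look wrong.
    def flag_disagreements(spreadsheet_rows, pull_from_source, tolerance=0.01):
        flagged = []
        for row in spreadsheet_rows:
            fresh = pull_from_source(row["metric"], row["period"])  # hypothetical lookup
            if fresh is None or abs(fresh - row["value"]) > tolerance * max(abs(fresh), 1):
                flagged.append({**row, "source_value": fresh})
        return flagged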

Yet the AI hype train takes this all the way to the extreme conclusion of having AI do all the work for them. The quip about 98% correct should be a red flag for anyone familiar with spreadsheets, because it’s rarely simple to identify which 2% is incorrect without reviewing everything.

This same problem extends to code. People who use AI as a force multiplier to do the thing for them and review each step as they go, while also disengaging and working manually when it’s more appropriate, have much better results. The people who YOLO it with prompting cycles until the code passes tests and then submit a PR are causing problems almost as fast as they’re developing new features in non-trivial codebases.


From John Dewey's Human Nature and Conduct:

“The fallacy in these versions of the same idea is perhaps the most pervasive of all fallacies in philosophy. So common is it that one questions whether it might not be called the philosophical fallacy. It consists in the supposition that whatever is found true under certain conditions may forthwith be asserted universally or without limits and conditions. Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned. Because the success of any particular struggle is measured by reaching a point of frictionless action, therefore there is such a thing as an all-inclusive end of effortless smooth activity endlessly maintained.

It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”


The proper use of these systems is to treat them like an intern or new grad hire. You can give them the work that none of the mid-tier or senior people want to do, thereby speeding up the team. But you will have to review their work thoroughly because there is a good chance they have no idea what they are actually doing. If you give them mission-critical work that demands accuracy or just let them have free rein without keeping an eye on them, there is a good chance you are going to regret it.

What an awful way to think about internships.

The goal is to help people grow, so they can achieve things they would not have been able to deal with before gaining that additional experience. This might include boring dirty work, yes. But that means they thus prove they can overcome such a struggle, and so more experienced people should be expected to also be able to go through it - if there is no obviously more pleasant way to go.

What you say of interns regarding checks is just as true for any human out there, and the more power they are given, the more relevant it is to be vigilant, no matter their level of experience. Not only will humans make errors, but power games generally are very permeable to corruptible souls.


I agree that it sounds harsh. But I worked for a company that hired interns and this was the way that managers talked about them - as cheap, unreliable labor. I once spoke with an intern hoping that they could help with a real task: using TensorFlow (it was a long time ago) to help analyze our work process history, but the company ended up putting them on menial IT tasks and they checked out mentally.

>The goal is to help people grow, so they can achieve things they would not have been able to deal with before gaining that additional experience.

You and others seem to be disagreeing with something I never said. This is 100% compatible with what I said. You don't just review and then silently correct an intern's work behind their back; the review process is part of the teaching. That doesn't really work with AI, so it wasn't explicitly part of my analogy.


What an awful way to think about other people, always assuming the very worst version of what they said.

Certainly such a message demonstrates that a significant amount of effort has been put into not falling into the kind of behavior it warns against. ;)

The goal of internships in a for profit company is not the personal growth of the intern. This is a nice sentiment but the function of the company is to make money, so an intern with net negative productivity doesn't make sense when goals are quarterly financials.

Sure, companies wouldn't do anything that negatively affects their bottom line, but consider the case that an intern is a net zero - they do some free labor equal to the drag they cause demanding attention of their mentor. Why have an intern in that case? Because long term, expanding the talent pool suppresses wages. Increasing the number of qualified candidates gives power to the employer. The "Learn to Code" campaign along with the litany of code bootcamps is a great example, it poses as personal growth / job training to increase the earning power of individuals, but on the other side of that is an industry that doesn't want to pay its workers 6 figures, so they want to make coding a blue collar job.

But coding didn't become a low wage job, now we're spending GPU credits to make pull requests instead and skipping the labor altogether. Anyway I share the parent poster's chagrin at all the comparisons of AI to an intern. If all of your attention is spent correcting the work of a GPU, the next generation of workers will never have mentors giving them attention, starving the supply of experienced entry-level employees. So what happens in 10 or 20 years? I guess anyone who actually knows how to debug computers instead of handing the problem off to an LLM will command extraordinary emergency-fix-it wages.


I’ve never experienced an intern who was remotely as mediocre and incapable of growth as an LLM.

I had an intern who didn’t shower. We had to have discussions about body odor in an office. AI/LLM’s are an improvement in that regard. They also do better work than that kid did. At least he had rich parents.

I had a coworker who only showered once every few days after exercise, and never used soap or shampoo. He had no body odor, which could not be said about all employees, including management.

It’s that John Dewey quote from a parent post all over again.


Was he Asian? Seems like somehow Asians win the genetic lottery in the stink generation department.

Wait, you had Asmongold work for you? Tell us more! xD

I have always been told to expect an intern to be a net loss in productivity to you, and anything else is a bonus, since the point is to help them learn.

What about a coach's ability for improving instruction?

The point of coaching a Junior is so they improve their skills for next time

What would be the point of coaching an LLM? You will just have to coach it again and again


coaching a junior doesn’t just improve the junior. It also tends to improve the senior.

Coaching an LLM seems unlikely to improve you meaningfully

What about it?

Isn't the point of an intern or new grad that you are training them to be useful in the future, acknowledging that for now they are a net drain on resources?

An overly eager intern with short term memory loss, sure.

And working with interns requires more work for the final output compared to doing it yourself.

For this example - Let’s replace the word “intern” with “initial-stage-experts” or something.

There’s a reason people invest their time with interns.


Yeah, most of us are mortal, that’s the reason.

But LLMs will not move to another company after you train them. OTOH, interns can replace mid level engineers as they learn the ropes in case their boss departs.

Yeah, people complaining about accuracy of AI-generated code should be examining their code review procedures. It shouldn’t matter if the code was generated by a senior employee, an intern, or an LLM wielded by either of them. If your review process isn’t catching mistakes, then the review process needs to be fixed.

This is especially true in open source where contributions aren’t limited to employees who passed a hiring screen.


This is taking what I said further than intended. I'm not saying the standard review process should catch the AI generated mistakes. I'm saying this work is at the level of someone who can and will make plenty of stupid mistakes. It therefore needs to be thoroughly reviewed by the person using it before it is even up to the standard of a typical employee's work that the normal review process generally assumes.

Yep, in the case of open source contributions as an example, the bottleneck isn't contributors producing and proposing patches, it's a maintainer deciding if the proposal has merit, whipping (or asking contributors to whip) patches into shape, making sure it integrates, etc. If contributors use generative AI to increase the load on the bottleneck it is likely to cause a negative net effect.

This very much. Most of the time, it's not a code issue, it's a communication issue. Patches are generally small, it's the whole communication around it until both parties have a common understanding that takes so much time. If the contributor comes with no understanding of his patch, that breaks the whole premise of the conversation.

I can still complain about the added workload of inaccurate code.

If 10 times more code is being created, you need 10 times as many code reviewers.

Plus the overhead of coordinating the reviewers as well!

"Corporate says the review process needs to be relaxed because its preventing our AI agents from checking in their code"

”The people who YOLO it with prompting cycles until the code passes tests and then submit a PR are causing problems almost as fast as they’re developing new features in non-trivial codebases.”

This might as well be the new definition of “script kiddie”, and it’s the kids that are literally going to be the ones birthed into this lifestyle. The “craft” of programming may not be carried by these coming generations and possibly will need to be rediscovered at some point in the future. The Lost Art of Programming is a book that’s going to need to be written soon.


Too bad no one will be able to read it. Better make it a video essay.

with subtitles flashing in the center of the screen

So it is here to stay. If you’re unable to write good code with it, that doesn’t mean everyone is writing bad code with it.

Oh come on, people have been writing code with bad, incomplete, flaky, or absent tests since automated testing was invented (possibly before).

It's having a good, useful and reliable test suite that separates the sheep from the goats.*

Would you rather play whack-a-mole with regressions and Heisenbugs, or ship features?

* (Or you use some absurdly good programming language that is hard to get into knots with. I've been liking Elixir. Gleam looks even better...)


It sounds like you’re saying that good tests are enough to ensure good code even when programmers are unskilled and just rewrite until they pass the tests. I’m very skeptical.

It may not be a provable take, but it’s also not absurd. This is the concept behind modern TDD (as seen in frameworks like cucumber):

Someone with product knowledge writes the tests in a DSL

Someone skilled writes the verbs to make the DSL function correctly

And from there, any amount of skill is irrelevant: either the tests pass, or they fail. One could hook up a markov chain to a javascript sourcebook and eventually get working code out.
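
For what it's worth, a toy sketch of that division of labor, behave-style (the scenario text, step patterns, and coupon logic are all invented for illustration):

    # Product person writes the scenario in plain language (a .feature file):
    #   Given a cart with 2 items priced 10
    #   When I apply the coupon "SAVE10"
    #   Then the total should be 18
    # An engineer implements the "verbs" once, as step definitions:
    from behave import given, when, then

    @given('a cart with {count:d} items priced {price:d}')
    def step_cart(context, count, price):
        context.cart = [price] * count

    @when('I apply the coupon "{code}"')
    def step_coupon(context, code):
        context.discount = 0.10 if code == "SAVE10" else 0.0

    @then('the total should be {expected:d}')
    def step_total(context, expected):
        total = sum(context.cart) * (1 - context.discount)
        assert abs(total - expected) < 1e-9

After that, anyone (or anything) can add scenarios: either they pass or they fail.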


> One could hook up a markov chain to a javascript sourcebook and eventually get working code out.

Can they? Either the DSL is so detailed and specific as to be just code with extra steps, or there is a lot of ground not covered by the test cases, with landmines that a million monkeys with typewriters could unwittingly step on.

The bugs that exist while the tests pass are often the most brutal - first to find and understand and secondly when they occasionally reveal that a fundamental assumption was wrong.


Tests are just for the bugs you already know about

They're also there to prevent future bugs.

“The quip about 98% correct should be a red flag for anyone familiar with spreadsheets”

I disagree. Receiving a spreadsheet from a junior means I need to check it. If this gives me infinite additional juniors I’m good.

It’s this popular pattern in HN comments - expecting AI to behave deterministically correctly - while the whole world operates on "stochastically correct" all the time…


In my experience the value of junior contributors is that they will one day become senior contributors. Their work as juniors tends to require so much oversight and coaching from seniors that they are a net negative on forward progress in the short term, but the payoff is huge in the long term.

I don't see how this can be true when no one stays at a single job long enough for this to play out. You would simply be training junior employees to become senior employees for someone else.

So this has been a problem in the tech market for a while now. Nobody wants to hire juniors for tech because even at FAANGs the average career trajectory is what, 2-3 years? There's no incentive for companies to spend the time, money, and productivity hit to train juniors properly. When the current cohort ages out, a serious problem is going to occur, and it won't be pretty.

It seems there's a distinct lack of enthusiasm for hiring people who've exceeded that 2-3 year tenure at any given place, too. Maintaining a codebase through its lifecycle seems often to be seen as a sign of complacency.

Exactly this

And it should go without saying that LLMs do not have the same investment/value tradeoff. Whether or not they contribute like a senior or junior seems entirely up to luck

Prompt skill is flaky and unreliable to ensure good output from LLMs


When my life was spreadsheets, we were expected to get to the point of being 99.99% right.

You went from “do it again” to “go check the newbies work”.

To get to that stage your degree of proficiency would be “can make out which font is wrong at a glance.”

You wouldn’t be looking at the sheet, you would be running the model in your head.

That stopped being a stochastic function, with the error rate dropping significantly - to the point that making a mistake had consequences tacked on to it.


98% sure each commit doesn’t corrupt the database, regress a customer feature, open a security vulnerability. 50 commits later … (which is like, one day for an agentic workflow)

It’s only a 64% chance of corruption after 50 such commits at a 98% success rate.
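
Quick back-of-the-envelope behind that number, assuming each commit is independently 98% likely to be clean:

    p_all_good = 0.98 ** 50              # ≈ 0.364
    p_at_least_one_bad = 1 - p_all_good  # ≈ 0.636, i.e. roughly a 64% chance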

I would be embarrassed to be at OpenAI releasing this and pretending the last 9 months haven't happened... waxing poetically about "age of agents" - absolutely cringe and pathetic

Or as I would like to put it, LLM outputs are essentially the Library of Babel. Yes, it contains all of the correct answers, but might as well be entirely useless.

> A great use of AI in this situation would be to automate the collection and checking of data. Search all of the data sources and aggregate links to them in an easy place. Use AI to search the data sources again and compare against the spreadsheet, flagging any numbers that appear to disagree.

Why would you need AI for that though? Pull your sources. Run a diff. Straight to the known truth without the ChatGPT subscription. In fact, by that point you don’t even need the diff if you pulled from the sources. Just drop it into the spreadsheet at that point.


In reality most people will just scan for something that is obviously wrong, check that, and call the rest "good enough". Government data is probably going to get updated later anyhow. It's just a target for a company to aim for. For many companies the cost savings is much more than having a slightly larger margin of error on some projections. For other companies they will just have to accept the several hours of saved time rather than the full day.

Of course, the Pareto principle is at work here. In an adjacent field, self-driving, they have been working on the last "20%" for almost a decade now. It feels kind of odd that almost no one is talking about self-driving now, compared to how hot of a topic it used to be, with a lot of deep, moral, almost philosophical discussions.

> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

— Tom Cargill, Bell Labs

https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule


In my experience for enterprise software engineering, at this stage we are able to shrink the coding time by ~20%, depending on the kind of code/tests.

However, CI/CD remains tricky. In fact, when AI agents start building autonomously, merge trains become a necessity…


Love this quote. Kinda sad I didn’t see this earlier in my life.

> It feels kind of odd that almost no one is talking about self-driving now, compared to how hot of a topic it used to be

Probably because it's just here now? More people take Waymo than Lyft each day in SF.


It's "here" if you live in a handful of cities around the world, and travel within specific areas in those cities.

Getting this tech deployed globally will take another decade or two, optimistically speaking.


Given how well it seems to be going in those specific areas, it seems like it's more of a regulatory issue than a technological one.

Ah, those pesky regulations that try to prevent road accidents...

If it's not a technological limitation, why aren't we seeing self-driving cars in countries with lax regulations? Mexico, Brazil, India, etc.

Tesla launched FSD in Mexico earlier this year, but you would think companies would be jumping at the opportunity to launch in markets with less regulation.

So this is largely a technological limitation. They have less driving data to train on, and the tech doesn't handle scenarios outside of the training dataset well.


Indian, Mexican and Brazilian consumers have far less money to spend than their American counterparts. I would imagine that the costs of the hardware and data collection don't vary significantly enough to outweigh that annoyance.

Do we even know what % of Waymo rides in SF are completely autonomous? I would not be surprised if more of them are remotely piloted than they've let on...

My understanding is they don't have the capability to have a ride be flat-out remotely piloted in real time. If the car gets stuck and puts its hazards on, a human can intervene, look at the 360 view from the cameras, and then give the car a simple high-level instruction like "turn left here" or "it's safe to proceed straight." But they can't directly drive the car continuously.

And those moments where the car gives up and waits for async assistance are very obvious to the rider. Most rides in Waymos don't contain any moments like that.


That's interesting to hear. It may be completely true, I don't really know. The source of my skepticism, however, is that all of the incentives are there for them to not be transparent about this, and to make the cars appear "smarter" than they really are.

Even if it's just a high level instruction set, it's possible that that occurs often enough to present scaling issues. It's also totally possible that it's not a problem, only time will tell.

What I have in mind is the Amazon stores, which were sold as being powered by AI, but were actually driven by a bunch of low-paid workers overseas watching cameras and manually entering what people were putting in their carts.

https://www.businessinsider.com/amazons-just-walk-out-actual...


Can you name any of the specific regulations that robot taxi companies are lobbying to get rid of? As long as robotaxis abide by the same rules of the road as humans do, what's the problem? Regulations like "you're not allowed to have robotaxis unless you pay me, your local robotaxi commissioner, $3/million/year" aren't going to be popular with the populace, but unfortunately for them, they don't vote. So I'm sure we'll see holdouts if multiple companies are in multiple markets and are complaining about the local taxi cab regulatory commission, but there's just so much of the world without robotaxis right now (summer 2025) that I doubt it's anything more than the technology being brand spanking new.

Maybe, but it's also going to be a financial issue eventually too

My city had Car2Go for a couple of years, but it's gone now. They had to pull out of the region because it wasn't making them enough money

I expect Waymo and any other sort of vehicle ridesharing thing will have the same problem in many places


But it seems the reason for that is that this is a new, immature technology. Every new technology goes through that cycle until someone figures out how to make it financially profitable.

This is a big moving of the goalposts. The optimists were saying Level 5 would be purchasable everywhere by ~2018. They aren’t purchasable today, just hail-able. And there’s a lot of remote human intervention.

And San Francisco doesn’t get snow.


Hell - SF doesn’t have motorcyclists or any vehicular traffic driving on the wrong side of the road.

Or cows sharing the thoroughfares.

It should be obvious to all HNers that have lived in or travelled to developing / global south regions - driving data is cultural data.

You may as well say that self driving will only happen in countries where the local norms and driving culture is suitable to the task.

A desperately anemic proposition compared to the science fiction ambition.

I’m quietly hoping I’m going to be proven wrong, but we’re better off building trains, than investing in level 5. It’s going to take a coordination architecture owned by a central government to overcome human behavior variance, and make full self driving a reality.


I'm in the Philippines now, and that's how I know this is the correct take. Especially this part:

"Driving data is cultural data."

The optimists underestimate a lot of things about self-driving cars.

The biggest one may be that in developing and global south regions, civil engineering, design, and planning are far, far away from being up to snuff to a level where Level 5 is even a slim possibility. Here on the island I'm on, the roads, storm water drainage (if it exists at all) and quality of the built environment in general is very poor.

Also, a lot of otherwise smart people think that the increment between Level 4 and Level 5 is the same as that between all six levels, when the jump from Level 4 to Level 5 automation is the biggest one and the hardest to successfully accomplish.


Level 5 is a pipe dream. Or if I’m being charitable it’s un-ambitious.

The goal for a working L5 should be “if piloting a rickshaw, will it be able to operate as a human owner in normal traffic.”


Yes, but they are getting good at chasing 9s in the US, those skills will translate directly to chasing 9s outside the US, and frankly the "first drafts" did quite a bit better than I'd have expected even six months ago

Guangzhou: https://www.youtube.com/watch?v=3DWz1TD-VZg

Paris: https://www.youtube.com/watch?v=iN9nu-IkS1w

Rome: https://www.youtube.com/watch?v=4Zg3jc90JTI


I’m rejecting the assertion that the data covers a physics model - which would be invariant across nations.

I’m positing that the models encode cultural decision-making norms - and using global south regions to highlight examples of cases that are commonplace but challenge the feasibility of full autonomous driving.

Imagine an auto rickshaw with full self driving.

If in your imagination, you can see a level 5 auto, jousting for position in Mumbai traffic - then you have an image which works.

It’s also well beyond what people expect fully autonomous driving entails.

At that point you are encoding cultural norms and expectations around rule/law enforcement.


You're not wrong on the "physics easy culture hard" call, just late. That was Andrej Karpathy's stated reason for betting on the Tesla approach over the Waymo approach back in 2017, because he identified that the limiting factor would be the collection of data on real-world driving interactions in diverse environments to allow learning theories-of-mind for all actors across all settings and cultures. Putting cameras on millions of cars in every corner of the world was the way to win that game -- simulations wouldn't cut it, "NPC behavior" would be their downfall.

This bet aged well: videos of FSD performing very well in wildly different settings -- crowded Guangzhou markets to French traffic circles to left-hand-drive countries -- seem to indicate that this approach is working. It's nailing interactions that it didn't learn from suburban America and that require inferring intent using complex contextual clues. It's not done until it's done, but the god of the gaps retreats ever further into the march of nines and you don't get credit for predicting something once it has already happened.


Thanks, I appreciate your take. For what it’s worth, I made this point since the start of the FSD/l5/tesla launch.

I liked the use of the God of the gaps - an effective analogy for the counter position.

I’m rejecting the idea of the march of the 9s eventually getting to FSD - the cultural norms issue is about decision making not about physics.

Eg - You have to decide how aggressively to drive, overtake, or jockey for position.

My estimation is that this is not solvable by on board decision making, because that would be accepting unacceptable legal risk.


It snows like 40% of the year where I live. I don’t think the current iteration of self driving cars is even close to handling that.

Most people live within a couple hours of a city though, and I think we'll see robot taxis on a majority of continents by 2035. The first couple cities and continents will take the longest, but after that it's just a money question, and rich people have a lot of money. The question then is: is the taxi cab consortium, which still holds a lot of power despite Uber, in each city in the world, large enough to prevent Waymo from getting a foothold, for every city in the world that Google has offices in.

Yeah where they have every inch of SF mapped, and then still have human interventions. We were promised no more human drivers like 5-7 years ago at this point.

Human interventions.

High speed connectivity and off vehicle processing for some tasks.

Density of locations to "idle" at.

There are a lot of things that make all these services work that mean they can NOT scale.

These are all solvable but we have a compute problem that needs to be addressed before we get there, and I haven't seen any clues that there is anything in the pipeline to help out.


The typical Lyft vehicle is a piece of junk worth less than $20k, while the typical Waymo vehicle is a pretend luxury car with $$$ of equipment tacked on.

Waymo needs to be providing 5-10x the number of daily rides as Lyft before we get excited


I suspect most gig drivers don't fully account for the cost of running their car, so these services are also being effectively subsidized by their workers.

You can provide almost any service at a loss, for a while, with enough money. We shouldn't get excited until Waymo starts turning an actual profit.


Their cars cost waymo money

Well, if we say these systems are here, it still took 10+ years between prototype and operational system.

And as I understand it: these are systems, not individual cars that are intelligent and just decide how to drive from immediate input. These systems still require some number of human wranglers and worst-case drivers, and there's a lot of specific-purpose code rather than nothing-but-neural-network, etc.

Which to say "AI"/neural nets are important technology that can achieve things but they can give an illusion of doing everything instantly by magic but they generally don't do that.


It’s past the hype curve and into the trough of disillusionment. Over the next 5,10,15 years (who can say?) the tech will mature out of the trough into general adoption.

GenAI is the exciting new tech currently riding the initial hype spike. This will die down into the trough of disillusionment as well, probably sometime next year. Like self-driving, people will continue to innovate in the space and the tech will be developed towards general adoption.

We saw the same during crypto hype, though that could be construed as more of a snake oil type event.


The Gartner hype cycle assumes a single fundamental technical breakthrough, and describes the process of the market figuring out what it is and isn't good for. This isn't straightforwardly applicable to LLMs because the question of what they're good for is a moving target; the foundation models are actually getting more capable every few months, which wasn't true of cryptocurrency or self-driving cars. At least some people who overestimate what current LLMs can do won't have the chance to find out that they're wrong, because by the time they would have reached the trough of disillusionment, LLM capabilities will have caught up to their expectations.

If and when LLM scaling stalls out, then you'd expect a Gartner hype cycle to occur from there (because people won't realize right away that there won't be further capability gains), but that hasn't happened yet (or if it has, it's too recent to be visible yet) and I see no reason to be confident that it will happen at any particular time in the medium term.

If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?

Lots of pre-internet technologies went through this curve. PCs during the clock speed race, aircraft before that during the aeronautics surge of the 50s, cars when Detroit was in its heyday. In fact, cloud computing was enabled by the breakthroughs in PCs which allowed commodity computing to be architected in a way to compete with mainframes and servers of the era. Even the original industrial revolution was actually a 200-year-ish period where mechanization became better and better understood.

Personally I've always been a bit confused about the Gartner Hype Cycle and its usage by pundits in online comments. As you say it applies to point changes in technology but many technological revolutions have created academic, social, and economic conditions that lead to a flywheel of innovation up until some point on an envisioned sigmoid curve where the innovation flattens out. I've never understood how the hype cycle fits into that and why it's invoked so much in online discussions. I wonder if folks who have business school exposure can answer this question better.


> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like.

We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.

The improvements we are seeing are related to engineering and value-added services. This is why "agents" are the latest buzzword most marketing is clinging to. This is expected, and good, in a sense. The tech is starting to deliver actual value as it's maturing.

I reckon AI companies can still squeeze out a few years of good engineering around the current generation of tools. The question is what happens if there are no ML breakthroughs in that time. The industry desperately needs them for the promise of ASI, AI 2027, and the rest of the hyped predictions to become reality. Otherwise it will be a rough time when the bubble actually bursts.

[1]: https://llm-stats.com/


The problem with LLMs and all other modern statistical large-data-driven solutions’ approach is that it tries to collapse the entire problem space of general problem solving to combinatorial search of the permutations of previously solved problems. Yes, this approach works well for many problems as we can see with the results with huge amount of data and processing utilized.

One implicit assumption is that all problems can be solved with some permutations of existing solutions. The other assumption is the approach can find those permutations and can do so efficiently.

Essentially, the true-believers want you to think that rearranging some bits in their cloud will find all the answers to the universe. I am sure Socrates would not find that a good place to stop the investigation.


Right. I do think that just the capability to find and generate interesting patterns from existing data can be very valuable. It has many applications in many fields, and can genuinely be transformative for society.

But, yeah, the question is whether that approach can be defined as intelligence, and whether it can be applicable to all problems and tasks. I'm highly skeptical of this, but it will be interesting to see how it plays out.

I'm more concerned about the problems and dangers of this tech today, than whatever some entrepreneurs are promising for the future.


> We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.

This isn't just a software problem. If you go look at the hardware side you see that same flat line (IPC is flat generation over generation). There are also power and heat problems that are going to require some rather exotic and creative solutions if companies are looking to hardware for gains.


The Gartner hype cycle is complete nonsense, it's just a completely fabricated way to view the world that helps sell Gartner's research products. It may, at times, make "intuitive sense", but so does astrology.

The hype cycle has no mathematical basis whatsoever. It's a marketing gimmick. Its only value in my life has been to quickly identify people that don't really understand models or larger trends in technology.

I continue to be, though on introspection probably shouldn't be, surprised that people on HN treat it as some kind of gospel. The only people who should respect it are other people in the research marketing space, as the perfect example of how to dupe people into paying for your "insights".


Could you please expand on your point about expanding scopes? I am waiting earnestly for all the cheaper services that these expansions promise. You know cheaper white-collar-services like accounting, tax, and healthcare etc. The last reports saw accelerating service inflation. Someone is lying. Please tell me who.

Hence why I said potential applications. Each new generation of models is capable, according to evaluations, of doing things that previous models couldn't that prima facie have potential commercial applications (e.g., because they are similar to things that humans get paid to do today). Not all of them will necessarily work out commercially at that capability level; that's what the Gartner hype cycle is about. But because LLM capabilities are a moving target, it's hard to tell the difference between things that aren't commercialized yet because the foundation models can't handle all the requirements, vs. because commercializing things takes time (and the most knowledgeable AI researchers aren't working on it because they're too busy training the next generation of foundation models).

It sounds like people should just ignore those pesky ROI questions. In the long run, we are all dead so let’s just invest now and worry about the actual low level details of delivering on the economy-wide efficiency later.

As capital allocators, we can just keep threatening the worker class with replacing their jobs with LLMs to keep the wages low and have some fun playing monopoly in the meantime. Also, we get to hire these super smart AI researchers people (aka the smartest and most valuable minds in the world) and hold the greatest trophies. We win. End of story.


It's saving healthcare costs for those who solved their problem and never go in, which would not be reflected in service inflation costs.

Back in my youthful days, educated and informed people chastised using the internet to self-diagnose and self-treat. I completely missed the memo on when it became a good idea to do so with LLMs.

Which model should I ask about this vague pain I have been having in my left hip? Will my insurance cover the model service subscription? Also, my inner thigh skin looks a bit bruised. Not sure what’s going on? Does the chat interface allow me to upload a picture of it? It won’t train on my photos right?


> or if it has, it's too recent to be visible yet

It's very visible.

Silicon Valley, and VC money has a proven formula. Bet on founders and their ideas, deliver them and get rich. Everyone knows the game, we all get it.

That's how things were going till recently. Then FB came in and threw money at people and they all jumped ship. Google did the same. These are two companies famous for throwing money at things (Oculus, metaverse, G+, quantum computing) and right and proper face-planting with them.

Do you really think that any of these people believe deep down that they are going to have some big breakthrough? Or do you think they all see the writing on the wall and are taking the payday where they can get it?


It doesn't have to be "or". It's entirely possible that AI researchers both believe AI breakthroughs are coming and also act in their own financial self interest by taking a lucrative job offer.

Liquidity in search of the biggest holes in the ground. Whoever can dig the biggest holes wins. Why or what you get out of digging the holes? Who cares.

The critics of the current AI buzz certainly have been drawing comparisons to self driving cars as LLMs inch along with their logarithmic curve of improvement that's been clear since the GPT-2 days.

Whenever someone tells me how these models are going to make white collar professions obsolete in five years, I remind them that the people making these predictions 1) said we'd have self driving cars "in a few years" back in 2015 and 2) the predictions about white collar professions started in 2022 so five years from when?


> said we'd have self driving cars "in a few years" back in 2015

And they wouldn't have been too far off! Waymo became L4 self-driving in 2021, and has been transporting people in the SF Bay Area without human supervision ever since. There are still barriers — cost, policies, trust — but the technology certainly is here.


People were saying we would all be getting in our cars and taking a nap on our morning commute. We are clearly still a pretty long ways off from self-driving being as ubiquitous as it was claimed it would be.

There are always extremists with absurd timelines on any topic! (Didn't people think we'd be on Mars in 2020?) But this one? In the right cities, plenty of people take a Waymo morning commute every day. I'd say self-driving cars have been pretty successful at meeting people's expectations — or maybe you and I are thinking of different people.

The expectation of a "self-driving car" is that you can get in it and take any trip that a human driver could take. The "in certain cities" is a huge caveat. If we accept that sort of geographical limitation, why not say that self-driving "cars" have been a thing since driverless metro systems started showing up in the 1980s?

And other people were a lot more moderate but still assumed we'd get self-driving soon, with caveats, and were bang on the money.

So it's not as ubiquitous as the most optimistic estimates suggested. We're still at a stage where the tech is sufficiently advanced that seeing them replace a large proportion of human taxi services now seems likely to have been reduced to a scaling / rollout problem rather than primarily a technology problem, and that's a gigantic leap.


People were doing that in their Teslas years ago and making the news for sleeping on the 5

Reminds me of electricity entering the market and the first DC power stations setup in New York to power a few buildings. It would have been impossible to replicate that model for everyone. AC solved the distance issue.

That's where we are at with self driving. It can only operate in one small area, you can't own one.

We're not even close to where we are with 3d printers today or the microwave in the 50s.


No, it can operate in several small areas, and the number of small areas it can operate in is a deployment issue. It certainly doesn't mean it is solved, but it is largely solved for a large proportion of rides, in as much as they can keep adding new small areas for a very long time without running out of growth-room even if the technology doesn't improve at all.

Okay, but the experts saying self driving cars were 50 years out in 2015 were wrong too. Lots of people were there for those speeches, and yet, even the most cynical take on Waymo, Cruise and Zoox’s limitations would concede that the vehicles are autonomous most of the time in a technologically important way.

There’s more to this than “predictions are hard.” There are very powerful incentives to eliminate driving and bloated administrative workforces. This is why we don’t have flying cars: lack of demand. But for “not driving?” Nobody wants to drive!


I think people don't realize how much models have to extrapolate still, which causes hallucinations. We are still not great at giving all the context in our brain to LLMs.

There's still a lot of tooling to be built before it can start completely replacing anyone.


It doesn't have to "completely" replace any individual employee to be impactful. If you have 50 coders that each use AI to boost their productivity by 10%, you need 5 fewer coders. It doesn't require that AI is able to handle 100% of any individual person's job.

How profound. No one has ever posted that exact same thought before on here. Thank you.

"I don't get all the interest about self-driving. That tech has been dead for years, and everyone is talking about that tech. That tech was never that big in therms of life... Thank you for your attention to this matter"

The act of trying to make that 2% appear "minimal, dismissable" seems almost like a mass psychosis in the AI world at times.

A few comparisons:

>Pressing the button: $1 >Knowing which button to press: $9,999 Those 2% copy-paste changes are the $9.999 and might take as long to find as rest of the work.

Also: SCE to AUX.


I also find that validating data can be much faster than calculating data. It's like when you're in algebra class and you're told to "solve for X". Once you find the value for X you plug it into the equation to see if it fits, and it's 10x faster than solving for X originally.

Regardless of if AI generates the spreadsheet or if I generate the spreadsheet, I'm still going to do the same validation steps before I share it with anyone. I might have a 2% error rate on a first draft.


This is the exact same issue I've had trying to use LLMs for anything that needs to be precise, such as multi-step data pipelines. The code they produce will look correct and produce a result that seems correct. But when you do quality checks on the end data, you'll notice that things are not adding up.

So then you have to dig into all this overly verbose code to identify the 3-4 subtle flaws with how it transformed/joined the data. And these flaws take as much time to identify and correct as just writing the whole pipeline yourself.
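
Concretely, the kind of end-of-pipeline check I mean (plain Python; the key and column names are invented):

    def check_join(left_rows, joined_rows, key="order_id", amount="amount"):
        # a fanned-out join shows up as duplicate keys on the output side
        keys = [r[key] for r in joined_rows]
        assert len(keys) == len(set(keys)), "duplicate keys after join"
        # a silently dropped or double-counted row shows up as a changed total
        left_total = sum(r[amount] for r in left_rows)
        joined_total = sum(r[amount] for r in joined_rows)
        assert abs(left_total - joined_total) < 1e-6, "totals changed across the join"

If the generated code can't pass checks like these, it's cheaper to find out before digging through the verbose transform logic.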


I'll get into hot water with this, but I still think LLMs do not think like humans do - as in, the code is not the result of trying to recreate a correct thought process in a programming language, but some sort of statistically most likely string that matches the input requirements.

I used to have a non-technical manager like this - he'd watch out for the words I (and other engineers) said and in what context, and would repeat them back mostly in accurate word contexts. He sounded remarkably like he knew what he was talking about, but would occasionally make a baffling mistake - like mixing up CDN and CSS.

LLMs are like this, I often see Cursor with Claude making the same kind of strange mistake, only to catch itself in the act, and fix the code (but what happens when it doesn't)


I think that if people say LLMs can never be made to think, that is bordering on a religious belief - it'd require humans to exceed the Turing computable (note also that saying they never can is very different from believing current architectures never will - it's entirely reasonable to believe it will take architectural advances to make it practically feasible).

But saying they aren't thinking yet or like humans is entirely uncontroversial.

Even most maximalists would agree at least with the latter, and the former largely depends on definitions.

As someone who uses Claude extensively, I think of it almost as a slightly dumb alien intelligence - it can speak like a human adult, but makes mistakes a human adult generally wouldn't, and that combination breaks the heuristics we use to judge competency, and often leads people to overestimate these models.

Claude writes about half of my code now, so I'm overall bullish on LLMs, but it saves me less than half of my time.

The savings improve as I learn how to better judge what it is competent at, and where it merely sounds competent and needs serious guardrails and oversight, but there's certainly a long way to go before it'd make sense to argue they think like humans.


Everyone has this impression that our internal monologue is what our brain is doing. It's not. We have all sorts of individual components that exist totally outside the realm of "token generation". E.g. the amygdala does its own thing in handling emotions/fear/survival, fires in response to anything that triggers emotion. We can modulate that with our conscious brain, but not directly - we have to basically hack the amygdala by thinking thoughts that deal with the response (don't worry about the exam, you've studied for it already)

LLMs don't have anything like that. Part of why they aren't great at some aspects of human behaviour. E.g. coding, choosing an appropriate level of abstraction - no fear of things becoming unmaintainable. Their approach is weird when doing agentic coding because they don't feel the fear of having to start over.

Emotions are important.


Unless we exceed the Turing computable - which there isn't the tiniest shred of evidence for - nothing we do is "outside the realm of 'token generation'". There is no reason why the token stream generated needs to be treated as equivalent to an internal monologue, or needs to always be used to produce language at all, and Turing complete systems are computationally equivalent (they can all compute the same set of functions).

> Everyone has this impression that our internal monologue is what our brain is doing.

Not everyone has an internal monologue, so that would be utterly bizarre. Some people might believe this, but it is by no means relevant to Turing equivalence.

> Emotions are important.

Unless we exceed the Turing computable, our experience of emotions would be evidence that any Turing complete system can be made to act as if they experience emotions.


A token stream is universal, but I don't see any reason to think that a token stream generated by an LLM can ever be universal.

I mean, theoretically in an "infinite tape" model, sure. But we don't even know if it's physically possible. Given that the observable universe is finite and the information capacity of a finite space is also finite, then anything humans can do can theoretically be encoded with a lookup table, but that doesn't mean that human thought can actually be replicated with a lookup table, since the table would be vastly larger than the observable universe can store.

LLMs look like the sort of thing that could replicate human thought in theory (since they are capable of arbitrary computation if you give them access to infinite memory) but not the sort of thing that could do it in a physically possible way.


Unless humans exceed the Turing computable, the human brain is the existence proof that a sufficiently complex Turing machine can be made to replicate human thought in a compact space.

That encoding a naive/basic UTM in an LLM would potentially be impractical is largely irrelevant in that case, because for any UTM you can "compress" the program by increasing the number of states or symbols, and effectively "embedding" the steps required to implement a more compact representation in the machine itself.

While it is possible using current LLM architectures might make encoding a model that can be efficient enough to be physically practical impossible, there's no reasonable basis for assuming this approach can not translate.


You seem to be making a giant leap from “human thought can probably be emulated by a Turing machine” to “human thought can probably be emulated by LLMs in the actual physical universe.” The former is obvious, the latter I’m deeply skeptical of.

The machine part of a Turing machine is simple. People manage to build them by accident. Programming language designers come up with a nice-sounding type inference feature and discover that they’ve made their type system Turing-complete. The hard part is the execution speed and the infinite tape.

Ignoring those problems, making AGI with LLMs is easy. You don’t even need something that big. Make a neural network big enough to represent the transition table of a Turing machine with a dozen or so states. Configure it to be a universal machine. Then give it a tape containing a program that emulates the known laws of physics to arbitrary accuracy. Simulate the universe from the Big Bang and find the people who show up about 13 billion years later. If the known laws of physics aren’t accurate enough, compare with real-world data and adjust as needed.

There’s the minor detail that simulating quantum mechanics takes time exponential in the number of particles, and the information needed to represent the entire universe can’t fit into that same universe and still leave room for anything else, but that doesn’t matter when you’re talking Turing machines.

It does matter a great deal when talking about what might lead to actual human-level intelligent machines existing in reality, though.


I'm not making a leap there at all. Assuming we agree the brain is unlikely to exceed the Turing computable, I explained the stepwise reasoning justifying it: Given Turing equivalence, and given that for each given UTM, there is a bigger UTM that can express programs in the simpler one in less space, and given that the brain is an existence-proof that a sufficiently compact UTM is possible, it is preposterous to think it would be impossible to construct an LLM architecture that allows expressing the same compactly enough. I suspect you assume a very specific architecture for an LLM, rather than consider that LLMs can be implemented in the form of any UTM.

Current architectures may very well not be sufficient, but that is an entirely different issue.


> and given that the brain is an existence-proof that a sufficiently compact UTM is possible

This is where it goes wrong. You’ve got the implication backwards. The existence of a program and a physical computer that can run it to produce a certain behavior is proof that such behavior can be done with a physical system. (After all, that computer and program are themselves a physical system.) But the existence of a physical system does not imply that there can be an actual physical computer that can run a program that replicates the behavior. If the laws of physics are computable (as they seem to be) then the existence of a system implies that there exists some Turing machine that can replicate the behavior, but this is “exists” in the mathematical sense, it’s very different from saying such a Turing machine could be constructed in this universe.

Forget about intelligence for a moment. Consider a glass of water. Can the behavior of a glass of water be predicted by a physical computer? That depends on what you consider to be “behavior.” The basic heat exchange can be reasonably approximated with a small program that would trivially run on a two-cent microcontroller. The motion of the fluid could be reasonably simulated with, say, 100-micron accuracy, on a computer you could buy today. 1-micron accuracy might be infeasible with current technology but is likely physically possible.

What if I want absolute fidelity? Thermodynamics and fluid mechanics are shortcuts that give you bulk behaviors. I want a full quantum mechanical simulation of every single fundamental particle in the glass, no shortcuts. This can definitely be computed with a Turing machine, and I’m confident that there’s no way it can come anywhere close to being computed on any actual physical manifestation of a Turing machine, given that the state of the art for such simulations is a handful of particles and the complexity is exponential in the number of particles.

And yet there obviously exists a physical system that can do this: the glass of water itself.

Things that are true or at least very likely: the brain exists, physics is probably computable, there exists (in the mathematical sense) a Turing machine that can emulate the brain.

Very much unproven and, as far as I can tell, no particular reason to believe they’re true: the brain can be emulated with a physical Turing-like computer, this computer is something humans could conceivably build at some point, the brain can be emulated with a neural network trained with gradient descent on a large corpus of token sequences, the brain can be emulated with such a network running on a computer humans could conceivably build. Talking about the computability of the human brain does nothing to demonstrate any of these.

I think non-biological machines with human-equivalent intelligence are likely to be physically possible. I think there’s a good chance that it will require specialized hardware that can’t be practically done with a standard “execute this sequence of simple instructions” computer. And if it can be done with a standard computer, I think there’s a very good chance that it can’t be done with LLMs.


I don't think you'll get into hot water for that. Anthropomorphizing LLMs is an easy way to describe and think about them, but anyone serious about using LLMs for productivity is aware they don't actually think like people, and run into exactly the sort of things you're describing.

I just wrote a post on my site where the LLM had trouble with 1) clicking a button, 2) taking a screenshot, 3) repeat. The non-deterministic nature of LLMs is both a feature and a bug. That said, read/correct can sometimes be a preferable workflow to create/debug, especially if you don't know where to start with creating.

I think it's basically equivalent to giving that prompt to a low paid contractor coder and hoping their solution works out. At least the turnaround time is faster?

But normally you would want a more hands-on back and forth to ensure the requirements actually capture everything, validation that the results are good, layers of review, and so on, right?


It seems to be a mix between hiring an offshore/low level contractor and playing a slot machine. And by that I mean at least with the contractor you can pretty quickly understand their limitations and see a pattern in the mistakes they make. While an LLM is obviously faster, the mistakes are seemingly random so you have to examine the result much more than you would with a contractor (if you are working on something that needs to be exact).

The slot machine is apt: insert tokens, pull lever, ALMOST get a reward. Then you think: I can start over manually, or pull the lever again. Maybe I'll get a prize if I pull it again...

And of course, you pay whether the slot machine gives a prize or not. Between the slot-machine psychological effect and the sunk cost fallacy, I have a very hard time trusting the anecdotes -- or even my own experiences -- with paid LLMs.

I often say I'd be way more willing to use, trust, and pay for these things if I got my money back for output that is false.


If the contractor is producing unusable code, they won't be my contractor anymore.

In my experience, using small steps and a lot of automated tests works very well with CC. Don’t go for these huge prompts that have a complete feature in them.

Remember the title “Attention Is All You Need”? Well, you need to pay a lot of attention to CC during these small steps and have a solid mental model of what it is building.


Yeah but once you break things down into small enough steps you might as well just code it yourself.

It's more likely that, because you didn't do the work, you haven't already justified the flaws to yourself.

My favorite part is people taking the 98% number to heart as if there's any basis to it whatsoever and it isn't just a number they pulled out of their ass in marketing material made by an AI company trying to sell you their AI product. In my experience it's more like 70% for dead simple stuff, and dramatically lower for anything moderately complex.

And why 98%? Why not 99% right? Or 99.9% right? I know they can't outright say 100% because everyone knows that's a blatant lie, but we're okay with them bullshitting about the 98% number here?

Also, there's no universe in which this guy gets to walk his dog while his little pet AI does his work for him. Instead, his boss is going to hound him into doing quadruple the work because he's now so "efficient" that he's finishing his spreadsheet in an hour instead of 8 or whatever. That, or he just gets fired and the underpaid (or maybe not even paid) intern shoots off the same prompt to the magic little AI and does the same shoddy work instead of him. The latter is definitely what the C-suite is aiming for with this tech anyway.


"It feels like either finding that 2% that's off (or dealing with 2% error) will be the time consuming part in a lot of cases."

This is the part you have wrong. People just won't do that. They'll save the 8 hours and just deal with 2% error in their work (which reduces as AI models get better). This doesn't work with something with a low error tolerance, but most people aren't building the next Golden Gate Bridge. They'll just fix any problems as they crop up.

Some of you will be screaming right now "THAT'S NOT WORTH IT", as if companies don't already do this to consumers constantly, like losing your luggage at the airport or getting your order wrong. Or just selling you something defective. All of that happens >2% of the time, because companies know customers will just deal with it.


It’s not worth it because of the compounding effect when it is a repeated process. 98% accuracy might be fine for a single iteration, but if you run your process 365 times (maybe once a day for a year) whatever your output is will be so wrong that it is unusable.
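A rough back-of-the-envelope sketch, taking the 98% figure at face value and assuming independent errors:

    # Chance that a process which is right 98% of the time per run stays
    # error-free across 365 daily runs, assuming the errors are independent.
    p = 0.98
    runs = 365
    print(p ** runs)        # ~0.0006: a fully clean year is essentially impossible
    print(runs * (1 - p))   # ~7.3 expected bad runs over the year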

Can you name a single job like this? It's much easier to name jobs where the accuracy doesn't compound, like daily customer service chatbots, or personal-tutor bots, or news-aggregator bots, or the inevitable (and somewhat dubious) do-my-tax-returns bot.

All I can think of is vibe-coding, and vibe-coding jobs aren't a thing.


Doctors get the diagnosis wrong 10-23% of the time (depending on who you ask)

I have a friend who's vibe-coding apps. He has a lot of them, like 15 or more, but most are only 60–90% complete (almost every feature is only 60-90% complete), which means almost nothing works properly. Last time he showed me something, it was sending the Supabase API key in the frontend with write permissions, so I could edit anything on his site just by inspecting the network tab in developer tools. The amount of technical debt and security issues building up over the coming years is going to be massive.

I think the question then is: what's the human error rate? We know we're not perfect. If you're 100% rested and only have to find the edge-case bug, maybe you'll usually find it, versus being burned out from getting it 98% of the way there yourself and failing to see the remaining 2% of bugs. The wording is tricky, but I think what we'll find is that this gets us that much closer. Of course, when you've spent your time building out 98% of the thing yourself, you sometimes have a deeper understanding of it, so finding the 2% edge case is easier/faster. Only time will tell.

The problem with this spreadsheet task is that you don't know whether you got only 2% wrong (just rounded some numbers) or way more (e.g. did it get confused and mistook a 2023 PDF with one from 1993?), and checking things yourself is still quite tedious unless there's good support for this in the tool.

At least with humans you have things like reputation (has this person been reliable?), or, if you did things yourself, you have some good idea of how diligent you've been.


Would be insane to expect an AI to just match us, right… nooooo, if it pertains to computers/automation/AI it needs to be beyond perfect.

Right? Why are we giving grace to a damn computer as if it's human? How are people defending this? If it's a computer, I don't care how intelligent it is. 98% right is actually unacceptable.

I’ve worked at places that are run on spreadsheets. You’d be amazed at how often they’re wrong, IME.

There is a literature on this.

The usual estimate you see is that about 2-5% of spreadsheets used for running a business contain errors.


The entire financial sector, for one thing

It takes my boss seven hours to create that spreadsheet, and another eight to render a graph.

Exciting stuff

Distinguishing whether a problem is 0.02^n for error or 0.98^n for accuracy is emerging as an important skill.

Might explain why some people grind up a billion tokens trying to make code work only to have it get worse while others pick apart the bits of truth and quickly fill in their blind spots. The skillsets separating wheat from chaff are things like honest appreciation for corroboration, differentiating subjective from objective problems, and recognizing truth-preserving relationships. If you can find the 0.02^n sub-problems, you can grind them down with AI and they will rapidly converge, leaving the 0.98^n problems to focus human touch on.
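A toy illustration of the two regimes (the rates are made up and the steps assumed independent):

    # "0.02^n" problems: each corrective pass removes most of the remaining
    #   error, so the residual shrinks geometrically and the answer converges.
    # "0.98^n" problems: every step must be right, so the chance the whole
    #   chain is correct decays geometrically.
    for n in (1, 3, 5, 10):
        print(n, 0.02 ** n, 0.98 ** n)
    # at n=10: residual error ~1e-17, chain accuracy ~0.82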


> It feels like either finding that 2% that's off (or dealing with 2% error) will be the time consuming part in a lot of cases.

The last '2%' (and in some benchmarks, 20%) could cost $100B+ more to get consistently right without error.

That requirement doesn't apply to generating art. But for agentic tasks, an error rate of 20% at worst, or even 2% at best, may still be unacceptable.

As you said, if the agent makes an error in any of the steps of an agentic flow or task, the entire result will be incorrect and you would need to check over the entire work again to spot it.

Most will just throw it away and start over, wasting more tokens, money, and time.

And no, it is not "AGI" either.


I think this is my favorite part of the LLM hype train: the butterfly effect of dependence on an undependable stochastic system propagates errors up the chain until the whole system is worthless.

"I think it got 98% of the information correct..." how do you know how much is correct without doing the whole thing properly yourself?

The two options are:

- Do the whole thing yourself to validate

- Skim 40% of it, 'seems right to me', accept the slop and send it off to the next sucker to plug into his agent.

I think the funny part is that humans are not exempt from similar mistakes, but a human making those mistakes again and again would get fired. Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.


This depends on the type of work being done. Sometimes the cost of verification is much lower than the cost of doing the work, sometimes it's about the same, and sometimes it's much more. Here's some recent discussion [0]

[0] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...


> how do you know how much is correct

Because it's a budget. Verifying the entries is _much_ cheaper than finding them all in a giant PDF in the first place.

> the butterfly effect of dependence on an undependable stochastic system

We've been using stochastic systems for a long time. We know just fine how to deal with them.

> Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.

There are very few tasks humans complete at a 98% success rate either. If you think "build spreadsheet from PDF" comes anywhere close to that, you've never done that task. We're barely able to recognize objects in their default orientation at a 98% success rate. (And in many cases, deep networks outperform humans at object recognition)

The task of engineering has always been to manage error rates and risk, not to achieve perfection. "butterfly effect" is a cheap rhetorical distraction, not a criticism.


There are in fact lots of tasks people complete at a 99.99% success rate on the first iteration, or 99.999% after self- and peer-checking.

Perhaps importantly, checking is a continual process: errors are identified as they are made and corrected whilst in context, instead of being identified later by someone completely devoid of any context, a task humans are notably bad at.

Lastly, it's important to note the difference between an overarching task containing many subtasks and the subtasks themselves.

Something which fails at each subtask 2% of the time has a miserable 18% failure rate on an overarching task comprising 10 subtasks; by 20 subtasks it fails on 1 in 3 attempts. Worse, a failing human knows they don't know the answer, while the failing AI produces not only wrong answers but convincing lies.
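For anyone who wants to check the arithmetic (assuming independent 2% failures per subtask):

    # Failure rate of an overarching task made of n subtasks,
    # each completed correctly 98% of the time (independence assumed).
    for n in (10, 20):
        print(n, 1 - 0.98 ** n)   # 10 -> ~0.18, 20 -> ~0.33 (about 1 in 3)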

Failure to distinguish between human failure and AI failure in nature or degree of errors is a failure of analysis.


> There are in fact lots of tasks people complete immediately at 99.99% success rate at first iteration or 99.999% after self and peer checking work

This is so absurd that I wonder if you're trolling. Humans don't even have a 99.99% success rate in breathing, let alone any cognitive tasks.


> Humans don't even have a 99.99% success rate in breathing

Will you please elaborate a little on this?


Humans cough or otherwise have to clear their airways about 1 in every 1,000 breaths, which is a 99.9% success rate.

That’s quite good given the complexity and fragility of the system and the chaotic nature of the environment.

> I think the funny part is that humans are not exempt from similar mistakes, but a human making those mistakes again and again would get fired. Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.

My rule is that if you submit code/whatever and it has problems you are responsible for them no matter how you "wrote" it. Put another way "The LLM made a mistake" is not a valid excuse nor is "That's what the LLM spit out" a valid response to "why did you write this code this way?".

LLMs are tools, tools used by humans. The human kicking off an agent, or rather submitting the final work, is still on the hook for what they submit.


> Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.

Well yeah, because the agent is so much cheaper and faster than a human that you can eat the cost of the mistakes and everything that comes with them and still come out way ahead. No, of course that doesn't work in aircraft manufacturing or medicine or coding or many other scenarios that get tossed around on HN, but it does work in a lot of others.


Definitely would work in coding. Most software companies can only dream of a 2% defect rate. Reality is probably closer to 98%, which is why we have so much organisational overhead around finding and fixing human error in software.

"a human making those mistakes again and again would get fired"

You must be really desperate for anti-AI arguments if this is the one you're going with. Employees make mistakes all day every day and they don't get fired. Companies don't give a shit as long as the cost of the mistakes is less than the cost of hiring someone new.


I wonder if you can establish some kind of confidence interval by passing data through a model x number of times. I guess it mostly depends on subjective vs. objective correctness, as well as correctness within a context the model may or may not know about. Either way, it sounds like more corporate drudgery.
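A crude sketch of the idea; extract_total here is a hypothetical stand-in for whatever model call you'd actually make, and repeated sampling only catches inconsistent errors, not systematic ones:

    from collections import Counter
    import random

    def extract_total(doc):
        # Hypothetical stand-in for a real model call; this mock just returns
        # the right answer ~98% of the time and a wrong one otherwise.
        return "42,000" if random.random() < 0.98 else "24,000"

    def majority_with_agreement(doc, runs=5):
        # Ask the same question several times and report the majority answer
        # plus how often the runs agreed, as a rough confidence proxy.
        answers = [extract_total(doc) for _ in range(runs)]
        value, count = Counter(answers).most_common(1)[0]
        return value, count / runs

    print(majority_with_agreement("budget.pdf"))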

It compounds too:

At a certain point, relentlessly checking whether the model has got everything right is more effort than just… doing it.

Moreover, is it actually a 4-8 hour job? Or is the person not using the right tool? Maybe the better tool is a SQL query.

Half these “wow ai” examples feel like “oh my plates are dirty, better just buy more”.


This reminds me of the story where Barclays had to buy bad assets from the Lehman bankruptcy because they had only hidden the rows of assets they did not want, and the receiver saw all the rows due to a mistake somewhere. The kind of 2% fault rate in Excel that could tank a big bank.

https://www.computerworld.com/article/1561181/excel-error-le...


Lol, the music and presentation made it sound like that guy was going to talk about something deep and emotional, not spreadsheets and expense reports.

People say this, but in my experience it’s not true.

1) The cognitive burden is much lower when the AI can correctly do 90% of the work. Yes, the remaining 10% still takes effort, but your mind has more space for it.

2) For experts who have a clear mental model of the task requirements, it’s generally less effort to fix an almost-correct solution than to invent the entire thing from scratch. The “starting cost” in mental energy to go from a blank page/empty spreadsheet to something useful is significant. (I limit this to experts because I do think you have to have a strong mental framework you can immediately slot the AI output into, in order to be able to quickly spot errors.)

3) Even when the LLM gets it totally wrong, I’ve actually had experiences where a clearly flawed output was still a useful starting point, especially when I’m tired or busy. It nerd-snipes my brain from “I need another cup of coffee before I can even begin thinking about this” to “no you idiot, that’s not how it should be done at all, do this instead…”


>The cognitive burden is much lower when the AI can correctly do 90% of the work. Yes, the remaining 10% still takes effort, but your mind has more space for it.

I think their point is that whether it's 10%, 1%, or whatever %, this type of problem is a huge headache. In something like a complicated spreadsheet it can quickly become hours of looking for needles in the haystack, a search that wouldn't be necessary if the AI hadn't gotten it almost right. In fact it's almost better if it just gets some big chunk wholesale wrong - at least then you can quickly identify the issue and do that part yourself, which you would have had to do in the first place anyway.

Getting something almost right, no matter how close, can often be worse than not doing it at all. Undoing/correcting mistakes can be more costly as well as labor intensive. "Measure twice cut once" and all that.

I think of how in video production (edits specifically) I can often get you 90% of the way there in about half the time it takes to get it to 100%. Those last bits can be exponentially more time consuming (such as an intense color grade or audio repair). The thing is, with a spreadsheet like that, you can't accept a B+ or A-. If something is broken, the whole thing is broken. It needs to work more or less 100%. Closing that gap can be a huge process.

I'll stop now as I can tell I'm running a bit in circles lol


I understand the idea. My position is that this is a largely speculative claim from people who have not spent much time seriously applying agents for spreadsheet or video editing work (since those agents didn’t even exist until now).

“Getting something almost right, no matter how close, can often be worse than not doing it at all” - true with human employees and with low quality agents, but not necessarily true with expert humans using high quality agents. The cost to throw a job at an agent and see what happens is so small that in actual practice, the experience is very different and most people don’t realize this yet.


I guess we just have different ideas of what “high-quality agents“ are and what they should regularly produce.

> The cognitive burden is much lower when the AI can correctly do 90% of the work.

It's a high cognitive burden if you don't know which 10% of the work the AI failed to do / did incorrectly, though.

I think you're picturing a percentage indicating what scope of the work the AI covered, but the parent was thinking about the accuracy of the work it did cover. But maybe what you're saying is if you pick the right 90% subset, you'll get vastly better than 98% accuracy on that scope of work? Maybe we just need to improve our intuition for where LLMs are reliable and where they're not so reliable.

Though as others have pointed out, these are just made-up numbers we're tossing around. Getting 99% accuracy on 90% of the work is very different from getting 75% accuracy on 50% of the work. The real values vary so much by problem domain and user's level of prompting skill, but it will be really interesting as studies start to emerge that might give us a better idea of the typical values in at least some domains.


A lot of people here also make the assumption that the human user would make no errors.

What error rate this same person would find if reviewing spreadsheets made by other people seems like an inherently critical benchmark before we can even discuss whether this is a problem or an achievement.


The bigger takeaway here is: will his boss allow him to walk his dog, or will he see available downtime and try to fill it with more work?

More work, without a doubt - any productivity gain immediately becomes the new normal. But now with an additional "2%" error rate compounded on all the tasks you're expected to do in parallel.

95% of the people doing his job will lose theirs. One person will figure out the 2% that requires a human in the loop.

I do this kind of job and there is no way I am doing this job in 5-10 years.

I don't even think it is my company that is going to adapt and let me go; it is going to be an AI-first competitor that puts the company I work for out of business completely.

There are all these massively inefficient dinosaur companies in the economy that are running digitized versions of paper shuffling and a huge number of white collar bullshit jobs built on top of digitized paper shuffling.

Wage inflation has been eating away at the bottom line on all these businesses since Covid and we are going to have a dinosaur company mass extinction event in the next recession.

IMO the category error being made is that LLMs are going to agentically do the digitized paper shuffling and put digitized paper shufflers out of work. That is not the problem for my job. The issue is rebuilding things agentically, from the ground up, in a way that makes the concept of digitized paper shuffling null and void: a relic of the past that can't compete in the economy.


I don't know why everyone is so confident that jobs will be lost. When we invented power tools did we fire everyone that builds stuff, or did we just build more stuff?

If you replace "power tools" with industrial automation, it's easy to cherry-pick extremes from either side. Manufacturing? A lot of jobs displaced, maybe not lost.

That would be analogous to RPA maybe, and sure that has eliminated many roles. But software development and other similarly complex ever changing tasks have not been automated in the same way, and it's not even close to happening. Rote repetitive tasks with some decision making involved, probably on the chopping block.

> "I think it got 98% of the information correct... I just needed to copy / paste a few things. If it can do 90 - 95% of the time consuming work, that will save you a ton of time"

"Hello, yes, I would like to pollute my entire data store" is an insane sales pitch. Start backing up your data lakes on physical media; there is going to be an outrageous market for low-background data in the future.

semi-related: How many people are going to get killed because of this?


How often will 98% correct data actually be worse? How often will it be better?

98% might well be disastrous, but I've seen enough awful quality human-produced data that without some benchmarks I'm not confident we know whether this would be better or worse.


In the context of a budget that's really funny too. If you make an 18-trillion-dollar error just once, no big deal, it's just one error, right?

By that definition, the ChatGPT app is now an AI agent. When you use ChatGPT nowadays, you can select different models and complement these models with tools like web search and image creation. It’s no longer a simple text-in / text-out interface. It looks like it is still that, but deep down, it is something new: it is agentic… https://medium.com/thoughts-on-machine-learning/building-ai-...

To be fair, this is also the case with humans: humans make errors as well, and you still need to verify the results.

I was once managing a team of data scientists, and my boss kept getting frustrated about errors she discovered; it was really difficult to explain that this is just human error and that it would take a lot of resources to ensure 100% correctness.

The same with code.

It’s a cost / benefits balance that needs to be found.

AI just adds another opportunity into this equation.


2% wrong is $40,000 on a $2m budget.

>It feels like either finding that 2% that's off (or dealing with 2% error) will be the time consuming part in a lot of cases.

People act like this is some new thing, but this is exactly what supervising a more junior coworker is like. These models won't stay performing at junior levels for long. That much is clear.


It is not exactly like that. Junior coworkers mess up in fairly predictable ways.

Also... he "thinks" it got 98% of the data correct. How does he know?

Yes... and arguably the last 5% is harder now, because you didn't spend the time yourself to get to that point, so you're not really 'up to speed' on what has been produced so far.

Great point. Plus, working on your laptop on a couch is not ideal for deep Excel work.

98% correct spreadsheets are going to get so many papers retracted.

Yes. Any success I have had with LLMs has been by micromanaging them. Lots of very simple instructions, look at the results, correct them if necessary, then next step.

Yes - and that is especially true for high-stakes processes in organizations. For example, accounting, HR benefits, and taxation need to be exactly right.

How well does the average employee do it? The baseline is not what you would do yourself, but what it would take to task someone else with doing it.

I see it as a good reason why people aren’t going to lose their jobs that much.

It just makes people quite a bit faster at what they’re already doing.


Totally agree.

Also, do you really understand what the numbers in that spreadsheet mean if you have not been participating in pulling them together?


It will now take him 4-8 hours plus a $200 monthly bill, a win-win for everybody.

Honestly, though, there are far more use cases where 98% correct is equivalent to perfect than situations that require absolute correctness, both in business and for personal use.

I am looking forward to learning why this is entirely unlike working with humans, who in my experience commit very silly and unpredictable errors all the time (in addition to predictable ones), but additionally are often proud and anxious and happy to deliberately obfuscate their errors.

You can point out the errors to people, which will lead to fewer issues over time as they gain experience. The models, however, don’t do that.

I think there is a lot of confusion on this topic. Humans as employees have the same basic problem: You have to train them, and at some point they quit, and then all that experience is gone. Only: The teaching takes much longer. The retention, relative to the time it takes to teach, is probably not great (admittedly I have not done the math).

A model forgets "quicker" (in human time), but can also be taught on the spot, simply by pushing the necessary stuff into the ever-increasing context (see claude code and multiple claude.md files for how that works at any level). Gaining experience is simply not necessary, because it can infer on the spot, given that you provide enough context.

In both cases, having good information/context is key. But the difference here, of course, is that an AI is engineered to be competent and helpful as a worker, and will be consistently great and willing to ingest all of that, while a human will be a human: they bring their individual human stuff and will not be very keen to tell you about all of their insecurities.


But the person doing the job changes every month or two.

There's no persistent experience being built, and each newcomer to the job screws it up in their own unique way.


The models do do that, just at the next iteration of the model. And everyone gains from everyone's mistakes.

I call it a monkey's paw for this exact reason.


