Hacker News | rogerkirkness's comments

Appealing, but this is coming from someone smart/thoughtful. No offence to 'rest of world', but I think that most people have felt this way for years. And realistically in a year, there won't be any people who can keep up.


> And realistically in a year, there won't be any people who can keep up.

I've heard the same claim every year since GPT-3.

It's still just as irrational as it was then.


You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.

They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.


Why is the outcome certain? We have no way of predicting how long models will continue getting better before they plateau.


They continue to improve significantly year over year. There's no reason to think we're near a plateau in this specific regard.

The bottom 50% of software jobs in the US are worth somewhere around $200-$300 billion per year (salary + benefits + recruiting + training/education), one trillion dollars every five years minimum. That's the opportunity. It's beyond gigantic. They will keep pursuing the elimination of those jobs until it's done. It won't take long from where we're at now, it's a 3-10 year debate, rather than a 10-20 year debate. And that's just the bottom 50%, the next quarter group above that will also be eliminated over time.

$115k salary + $8-12k healthcare + stock + routine operating costs + training + recruitment. That's the ballpark median from two years ago. Surveys vary, from BLS to industry estimates: two to four million software developers, software engineers, and so on in the US. Now eliminate most of them.
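The figures above can be sanity-checked with back-of-envelope arithmetic. This sketch uses only the comment's own rough numbers (the overhead figure is my stand-in for stock, training, recruiting, and operating costs, not a sourced value):

```python
# Back-of-envelope check of the figures above; all inputs are the
# comment's own ballpark numbers, not authoritative data.
devs = 3_000_000            # midpoint of the two-to-four-million survey range
median_salary = 115_000     # ballpark median cited above
healthcare = 10_000         # midpoint of the $8-12k range
overhead = 40_000           # rough stand-in for stock, training, recruiting, ops

fully_loaded = median_salary + healthcare + overhead    # ~$165k per head
bottom_half_annual = devs * 0.5 * fully_loaded

print(f"${bottom_half_annual / 1e9:.0f}B per year")           # ~$248B
print(f"${bottom_half_annual * 5 / 1e12:.2f}T per 5 years")   # ~$1.24T
```

Which lands inside the $200-300B/year and "one trillion every five years" ranges the parent comment claims.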

Your AI coding agent circa 2030 will work 24/7. It has a superior context to human developers. It never becomes emotional or angry or crazy. It never complains about being tired. It never quits due to working conditions. It never unionizes. It never leaves work. It never gets cancer or heart disease. It's not obese, it doesn't have diabetes. It doesn't need work perks. It doesn't need time off for vacations. It doesn't need bathrooms. It doesn't need to fit in or socialize. It has no cultural match concerns. It doesn't have children. It doesn't have a mortgage. It doesn't hate its bosses. It doesn't need to commute. It gets better over time. It only exists to work. It is the ultimate coding monkey. Goodbye human.


Amazing how much investment has gone toward eliminating one job category; ironically, the one that was meant to be the job of the future ("learn to code"). To be honest, on the current trajectory I'm always amazed how many SWEs think it is "enabling" or will be anything other than this in the long term. I personally don't recommend anyone enter this field anymore, especially when big money sees this as the next disruption and has bet against it in investment and market terms. Amazing what was just a chatbot three years ago will do to a large number of people w.r.t. unemployment and potential poverty; I didn't appreciate it at the time.

Life/fate does have a sense of irony, it seems. I wouldn't be surprised if it's just the "creative" industries that die, while ordinary jobs that provide little value today survive in some form; they were never judged on value delivered and still existed, after all.


>Your AI coding agent circa 2030 will work 24/7

Doing what? What would we need software for when we have sufficiently good AI? AI would become "The Final Software": just give it input data, tell it what kind of data transform you want, and it will give you the output. No need for new software ever again.


And there's the same empty-headed certainty, extrapolating a sigmoid into an exponential.


I can tell you don't control any resources relating to AI from your contempt alone


You're entitled to be wrong.


People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.


> People claimed GPT-3 was great at coding when it launched.

OK, and they were wrong, but now people are right that it is great at coding.

> That has continued to be the case in every generation.

If something gets better over time, it is definitionally true that it was bad at every point in the past until it became good. But then it is good.

That's how that works. For everything. You are talking in tautologies while not understanding the implication of your arguments and how it applies to very general things like "a thing that improves over time".


Are you saying the current models are not good at coding? That is a strong claim.


For brand new projects? Perhaps. For working with existing projects in large code bases? Still not living up to the hype. Still sick of explaining to leadership that they're not magic and "agentic" isn't magic either. Still sick of everyone not realizing that if you made coding 300% faster (which AI hasn't) that doesn't help when coding is less than half the hours of my week. Still sick of the "productivity gains" being subsidized by burning out competent code reviewers calling bullshit on things that don't work or will cause problems down the road.


A bit reductive.


> And realistically in a year, there won't be any people who can keep up.

Bold claim. They said the same thing at the start of this year.


You're all arguing over how many single digit years it'll take at this point.

It doesn't matter if it takes another 12 or 36 months to make that claim true. It doesn't matter if it takes five years.

Is AI coming for most of the software jobs? Yes it is. It's moving very quickly, and nothing can stop it. The progress has been exceptionally clear (early GPT to Gemini 3 / Opus 4.5 / Codex).


> Is AI coming for most of the software jobs?

be cool to start with one before we move to most…



I'm hoping this can introduce a framework to help people visualize the problem and figure out a way to close that gap. Image generation is something everyone can verify, but code generation perhaps is not. But if we can make verifying code as effortless as verifying images (not saying it's possible), then our productivity can reach the next level...


I think you're underestimating how good these image generators are at the moment.


Oh, I mean the other direction! Checking whether a generated image is "good", such that no one can tell something is off and it looks natural, rather than checking whether images are fake.


I've been using the gpt-oss 20b parameter model on my laptop and it works great. Doesn't reject giving legal or medical advice either. Obviously not good enough for coding, but seems like 'useful AI assistant for daily life' is in overshoot.


Somewhere a doctor is happy he's found a model that's good enough for coding, but thinks: I'm certainly not dumb enough to use this for medical advice.


The thing about medical advice is that Google was useful for narrowing problems down, and any current LLM is the same, only more useful. I know enough biology to tell which interventions require professional opinions.


That’s great, but not a reason for taxpayers to get involved and be on the hook for massive risky investments.

OpenAI doesn’t need government financial backing for investment. The government has more pressing priorities to address with the money they take from us first.


Totally agree to be clear.


I’m guessing the grandparent poster would agree with you.


If 20B reasoning models are the goal, we can do that a whole lot cheaper than for $1T.


Maybe they shouldn't expect people to answer Slack messages 24/7, and this would subside.


Round tripping.


It is insane to worry about this compared to other sources. 2M tons of carbon over the last decade to save how many lives? Spending $30-200M to deal with that carbon is clearly worth the benefit of a decade of kids and adults not dying preventable deaths at mass scale.


AI is a sort of work-hours Veblen good. As it automates things, people working long hours becomes more important for status. Hence 996 and the year of agents happening at the same time.


This is the main thing that immediately tells me something is AI. This form of reasoning was much less common before ChatGPT.


I don't think this is true. The LLMs use this construction noticeably more frequently than normal people, and I too feel the annoyance when they do, but if you look around I think you'll find it's pretty common in many registers of natural human English.


And each of us has patterns. I bet if you read a million of my posts, you would be annoyed with my writing idiosyncrasies too.


Yes, this is absolutely part of it, and I think an underappreciated harm of LLMs is the homogeneity. Even to the extent that their writing style is adequate, it is homogeneous in a way that quickly becomes grating when you encounter LLM-generated text several times a day. That said, I think it's fair to judge LLM writing style not to be adequate for most purposes, partly because a decent human writer does a better job of consciously keeping their prose interesting by varying their wording and so forth.


Not sure what the downvotes are for -- it's trivial to find examples of this construction from before 2023, or even decades ago. I'm not disagreeing that LLMs overuse this construction (tbh it was already something of a "writing smell" for me before LLMs started doing it, because it's often a sign of a weakly motivated argument).


Convictional founder here. Our experience is different from others':

- We had to sync, pre-process, and index the data to make the resulting knowledge-search outputs actually good. MCP totally fails at this by comparison.

- It is not hugely painful thanks to bulk APIs, in Gmail in particular, as well as webhooks. We implemented both of them and it works well (so far).

- We wired it all up ourselves. Given the conclusion we reached about pre-processing and indexing being required to make it work well, this seems preferable.

I think that MCP and integration platforms will ultimately not work for any kind of agentic or deep-research task that depends heavily on Gmail context.
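The sync-then-index approach described above can be sketched roughly like this. Everything here is an illustrative stand-in (a real pipeline would pull mail via Gmail's batch APIs and webhooks, and index into a proper search/vector store), but it shows the structural difference from querying a live API at question time:

```python
# Minimal sketch: sync -> preprocess -> index ahead of time, then search the
# local index, instead of hitting a live API (MCP-style) per question.
# All function names and data are hypothetical stand-ins.
import re
from collections import defaultdict

def fetch_messages():
    # Stand-in for a bulk sync (Gmail batch APIs + webhooks in practice).
    return [
        {"id": "m1", "body": "Renewal pricing discussion for the Acme contract"},
        {"id": "m2", "body": "Meeting notes: pricing model and discount policy"},
    ]

def preprocess(body):
    # Normalize: drop quoted reply lines, lowercase, tokenize.
    body = re.sub(r"(?m)^>.*$", "", body)
    return re.findall(r"[a-z0-9]+", body.lower())

def build_index(messages):
    # Inverted index: token -> set of message ids containing it.
    index = defaultdict(set)
    for msg in messages:
        for token in preprocess(msg["body"]):
            index[token].add(msg["id"])
    return index

def search(index, query):
    # Intersect postings lists; a real system adds vector search and ranking.
    postings = [index.get(t, set()) for t in preprocess(query)]
    return set.intersection(*postings) if postings else set()

index = build_index(fetch_messages())
print(search(index, "pricing"))  # both sample messages mention pricing
```

The point is that `preprocess` and `build_index` run once at sync time, so query latency and quality don't depend on the upstream API.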


That’s really interesting, especially your point about preprocessing and indexing being required to make search outputs good. What was the first sign that made you realize querying live APIs wasn’t enough?

Was it latency, missing data, or just that results weren’t relevant? And when you say preprocessing, what kind of transformations or normalization ended up being most important?


Keyword or vector search on their own don't get good results for high-entropy queries. An MCP-type approach is good for low-entropy things like fact-based, single-source answers. [1]

[1]: https://arxiv.org/abs/2504.07106
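One common response to neither signal being sufficient alone is to blend them. This toy sketch (my illustration, not the commenter's system; the bag-of-words "embedding" is a stand-in for a real embedding model) shows the basic shape of a hybrid score:

```python
# Toy hybrid retrieval: blend keyword overlap with a stand-in embedding
# similarity, rather than relying on either signal alone.
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    # Fraction of query terms that appear verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(query, doc, alpha=0.5):
    # Weighted blend of the two signals.
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(embed(query), embed(doc))

docs = ["quarterly pricing review with acme", "lunch menu for friday"]
query = "acme pricing"
ranked = sorted(docs, key=lambda d: hybrid_score(query, d), reverse=True)
print(ranked[0])  # the pricing document ranks first
```

Real systems replace `embed` with a learned model and `keyword_score` with BM25, but the blended-score structure is the same.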


what is your company doing exactly?


Collaborative email, meeting recording, knowledge search and goal tracking in one thing. The search applies across emails and meetings, but also other things. We had to figure out whether it would be sufficiently good to connect third-party tools, and basically concluded no. We did some research to understand why [1].

[1]: https://arxiv.org/abs/2504.07106


Couldn't agree more with this. I have this phone and now I only charge my phone at work (Monday to Friday) and it's still fine by Monday.


I live in Waterloo and can confirm the ION is pretty awesome. It connects both the major malls, major universities, office parks, downtown/uptown areas to each other. Just seems well designed and well run overall.

