ossa-ma's comments | Hacker News

> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."

> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."

- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being forced to output so much more. Are they being paid proportionally for that output? Enterprise workers now have to move at the pace of their agents and effectively manage 3-4 workers at all times (we've already seen this in dev work). Where are the salary bumps to reflect this?

- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex App, which looks exactly the same as GPT.

- OpenAI is going for the agent-management market share (Dust, n8n, CrewAI).


Workers at tech companies are getting paid for this because they are shareholders.

Increased efficiency benefits capital, not labor; always good to remember which side you'd prefer to be on.


> Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex App, which looks exactly the same as GPT.

Because that requires human thought, and it might take a couple of weeks more to design and develop. Doing something fast is the mantra, not doing something good.


> Where are the salary bumps to reflect this?

"Let me give all my employees a 2x raise, because productivity is 4x'd now" - said no capitalist ever.


> Where are the salary bumps to reflect this?

Revenue bumps and ROI bumps both gotta come first. IIRC, there's a struggle with the first one.


I imagine the salary bumps occur when the individuals who have developed these productivity boosting skills apply for jobs at other companies, and either get those jobs or use the offer to negotiate a pay increase with their current employer.

I haven't seen any examples of that.

Over the past few months, mentions of AI in job listings have gone from "Comfortable using AI-assisted programming - Cursor, Windsurf" to "Proficient in agentic development", and even mentions of "Claude Code" in the desired-skills sections. Yet the salary ranges have remained exactly the same.

Companies are literally expecting junior/mid-level devs to have management skills (for those even hiring juniors). They expect you to come in and perform at the level of a lead architect: not just understand the codebase but also the data and the integrations, build pipelines to ingest the entire company's documentation into your agentic platform of choice, then begin delegating to your subordinates (agents). Does this responsibility shift not warrant an immediate compensation shift?


> apply for jobs at other companies

Ahh, but it's not 2022 anymore; even senior devs are struggling to change companies. The only companies that are hiring are knee-deep in AI wrappers and have no chance of becoming sustainable.


The only group whose salaries have gone up as a result of LLMs is hardcore AI professionals, i.e. AI researchers.

Brilliant take.

Competition nowadays is so intense and fine-grained. Every new innovation or exploration is eventually folded into the existing exploits, especially in monopolistic markets. Pricing models don’t change, and neither do revenue streams; the consumer rarely benefits from these optimisation efforts. It all leads to greater profit margins by any means.


It sucks for the ones who just want to play the game as "intended". The min-maxers always ruin it for everyone else. The devs ultimately balance the game around the few percent who min-max, and everyone else just has to deal with it or stop playing. And then they say "don't blame the players, blame the game", but the game is literally being warped because of the players.

Also, often the new meta doesn't even make sense and the changes need to be rolled back. So all that pain and hassle will often be for nothing, and a lot of players will end up with a bad taste for the game altogether. The damage has been done, and a rollback can't fix it.


I'm not an economist, so can someone explain whether this stat is significant:

> a sustained increase of 1.0 percentage point per year for the next ten years would return US productivity growth to rates that prevailed in the late 1990s and early 2000s

What can it be compared to? Is it on the same level of productivity growth as computers? The internet? Sliced bread?
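
For scale, my own back-of-envelope (the historical figures are from memory, so treat them as assumptions rather than anything from the paper):

  1.01^10 ≈ 1.105  ->  roughly 10% more output per hour by year ten than the baseline trend

IIRC, US productivity growth ran around 2.5-3% a year in the late 1990s/early 2000s versus roughly 1.5% since, so the claim is essentially that AI gets us back to computer/internet-boom rates.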


These are economic studies on AI's impact on productivity, jobs, wages, and global inequality. It's important to UNDERSTAND who benefits from technology and who gets left behind. Even putting the positive impacts of a study like this aside, this kind of due diligence is critical for them to understand developing markets and how to reach them.

But the thing is that they really aren't rigorous economic studies. They're more like UX-research-style sociological studies with some statistics; they don't actually approach the topic with any sort of econometric modeling or give more than loose correlations to past economic data. So it does appear performative: it's "pop science" using a quantitative veneer to push a marketing message to business leaders in a way that looks mathematically well-optimised.

Note that the papers cited are nearly all about AI use, and align more closely with management case studies than with economics.


Ok Dario

This looks super promising and useful for frontend dev work. Gonna start using it immediately and see how effective it is compared to taking screenshots and pasting them in cc/antigravity.

Man everything about this launch is super clean, from the conciseness to the interactivity. I'm glazing but WOW. Well done to the team.


One feature I'd love is a toggle to lock the input to the bottom of the terminal. It's a big inconvenience to have to scroll up and down between the chat and the input when responding to changes.


I was just thinking that half an hour ago when using Claude via tmux via mosh via my phone.

It would be a game changer for mobile usage.
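
In the meantime, a workaround sketch for the same setup (the host and session name here are placeholders, not anything official): tmux's copy-mode works as a poor man's scroll lock, since quitting it drops you straight back at the prompt.

  mosh user@host -- tmux new -A -s claude
  # inside the session, run claude as usual;
  # Ctrl-b [ enters copy-mode to scroll the transcript, q jumps back to the input

Not the same as a real pinned input box, but it keeps the round trips short on a phone.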


How is it a “fascinating learning exercise” when the intention is to run the model in a closed loop with zero transparency? Running a black box inside a black box to learn? What signals are you even listening to in order to determine whether your context engineering is good or whether the quality has improved, aside from a brief glimpse at the final product? So essentially every time I want to test a prompt, I waste $100 on Claude and have it build an entire project for me?

I’m all for AI, and it’s evident that the future of AI is more transparency (MLOps, tracing, mech interp, AI safety), not less.


Current transparency is rubbish, but people will continue to put up with it as long as they're getting decent output quality.


there is the theoretical "how the world should be" and there is the practical "what's working today" - decry the latter and wait around for the former at your peril


So it took the author 6 months and several 1-to-1s with the creator to get value from this. As in he literally spent more time promoting it than he did using it.

And it all ends with the grift of all grifts: promoting a crypto token in a nonchalant 'hey, what's this??!!??' way...


the note about the crypto token was intended to say “okay, this is now hype slop and it’s time to move on”


I meant super bullish from an outsider's point of view, but I do agree that the projections from within the investors' circle are likely multiples of that.


I’m the author but not the poster. I didn’t change the title. Maybe the mods did?

The title came to me as an epiphany too haha, shame to see it get changed.

