nilkn's comments

Front-line workers have a conflict of interest (AI making their jobs easier may lead to layoffs); they're incentivized to be productive, but not so productive that they or a peer they like ends up without a job. That conflict of interest becomes extremely strong when most companies around them are already conducting layoffs, they already know people personally who've been laid off, and hiring remains at a low level compared to the 2010s and early 2020s.

Executives don't care about any of that and just want to make the organization more efficient. They don't care at all if the net effect is reducing headcount. In fact, they want that -- smaller teams are easier to manage and cheaper to operate in nearly every way. From an executive's standpoint, they have nothing to lose: the absolute worst-case scenario is it ends up over-hyped and in the process of rolling it out they learned who's willing to attempt change and who's not. They'll then get rid of the latter people, as they won't want them on the team because of that personality trait, and if AI tooling is broadly useful they won't even bother backfilling.


Look, for most corporate jobs, there's honestly no way you truly cannot find some use of AI tools that makes you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple of lines of code here and there, writing a SQL query faster because you're rusty, learning a new framework or library faster than you would have otherwise, learning a new concept so you can work with a cross-functional peer, etc. It does not pass the smell test that you could find absolutely nothing in most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in good faith, is what they're trying to combat or make unacceptable.


If the corporate directive were to share "if AI has helped and how," I would agree. But my company started that way, and I tested the new SQL query analysis tool and reported (nicely and politely, with positive feedback too) that it was making up whole tables to join to: it assumed we had a simple "users" table with email/id columns, which we did not have, because as a large company we purposefully segment our databases. The user data was only ever exposed via API calls, never direct DB access.
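To make the failure mode concrete, here's a rough sketch of the kind of check that would have caught it. The table names and the generated query below are illustrative stand-ins, not our real schema or the tool's actual output:

  import re

  # Illustrative only: tables that actually exist in this segmented database.
  KNOWN_TABLES = {"orders", "order_items", "shipments"}

  # The kind of query the tool generated: it invents a "users" table
  # with email/id columns that this database simply does not have.
  generated_sql = """
  SELECT o.id, u.email
  FROM orders o
  JOIN users u ON u.id = o.user_id
  WHERE o.status = 'failed'
  """

  def referenced_tables(sql: str) -> set[str]:
      """Crude extraction of table names that follow FROM/JOIN keywords."""
      return {
          m.group(1).lower()
          for m in re.finditer(r"\b(?:FROM|JOIN)\s+([A-Za-z_]\w*)", sql, re.IGNORECASE)
      }

  unknown = referenced_tables(generated_sql) - KNOWN_TABLES
  if unknown:
      print(f"Hallucinated tables: {sorted(unknown)}")  # -> ['users']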

My report was entirely unacknowledged along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.

About a year later, upper management changed to this style of hard-requiring LLM usage -- to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis, otherwise you would be PIP'd.

This is abusive behavior aimed at generating a positive response the C-suite can give to the board.


I know you don't want to hear this, but I also know you know this is true: you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here. Your single example means pretty much nothing in terms of whether the tool makes sense at large scale. Not a single tool or technology exists in this entire field that never fails or has issues. You could just as well argue that because you read something wrong on Google or Stack Overflow that those tools should be banned or discouraged, yet that is clearly false.

That said, I don't agree with or advocate the specific rollout methodology your company is using and agree that it feels more abusive and adversarial than helpful. That approach will certainly risk backfiring, even if they aren't wrong about the large-scale usefulness of the tools.

What you're experiencing is perhaps more poor change management than it is a fundamentally bad call about a toolset or technology. They are almost certainly right at scale more than they are wrong; what they're struggling with is how to rapidly re-skill their employee population when it contains many people resistant to change at this scale and pace.


> I know you don't want to hear this, but I also know you know this is true

I wasn't sanctimonious to you; don't be so to me, please.

> you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here

I compared notes with a couple of friends on other teams and it was the same for each one. Yes, these are anecdotes, but when the exact same people who are producing/integrating the service are also grading its success AND making this very argument while hiding any data that could be used against them, I know I am dealing with people who will not tell the truth about what the data actually says.


If you truly think the team responsible for this made a bad call, you need to go look at all the data they collected. Otherwise, yes, you're just sharing a couple anecdotes, and that is problematic and can't be brushed off or ignored. While it's possible that the people integrating the service just ignored negative feedback and are apparently pathological liars (as you accuse them of being), it's also possible that it's actually you who is ignoring most of the data and being disingenuous or manipulative about it. You are demonstrating a lot of paranoid, antagonistic thinking about a team that might just have a broader good-faith perspective than you do.


It's not a good-faith question to say "here's a new technology, write about how it made you more productive" and expect the answer to have a relationship with the truth. You're pre-ordaining the answer!


Let's imagine it is 1990 and the tool is e-mail over snail mail. Would you want the leadership of a company to let every employee find out on their own whether email is a better way to communicate, despite the spam, impersonal nature, security concerns, and myriad other issues that patently exist to this day? Or to allow exceptions if an employee insists (or even shows) that snail mail is better for them?

It is hardly feasible for an organization to budget time for replicating and validating results and forming its own conclusions for every employee who wishes to question the effectiveness of the tool or the manner of deployment.

Presumably the organization has done that validation with a reasonably sized sample of similar roles over a significant period of time. It doesn't matter much, though; it would also be sound reasoning for leadership to make a strategic call even when such tests are not conducted or not applicable.

There are costs and time associated with accurate validation that they are unable or unwilling to pay for or wait out, even if they wish to. The competition is moving fast and not waiting, so deploying now rather than waiting to validate is not necessarily even a poor decision.

---

Having said that, they could articulate their intent better than "write about how it made you more productive" by adding a bit more description along the lines of "if it did not, then explain everything you tried in order to adopt the tool and what did not go well for you or your role."

Typically, well-structured organizations with in-house I/O psychologists would add this kind of additional language to the feedback tooling; line managers may not be as well trained to articulate it in informal conversations, which is a whole different kind of problem.


The answer isn't pre-ordained -- it's simply already known from experience, at least to a sufficient degree to not trust someone claiming it should be totally avoided. Like I said, there are not many corporate roles where it's legitimately impossible to find any kind of gain, even a small or modest one, anywhere at all.


It's 100% plausible and believable that there's going to be a spectacular bubble popping, but saying we are way past peak LLM would be like saying we were way past peak internet in 1999-2001 -- in reality, we weren't even remotely close to peak internet (and possibly still aren't). In fact, we were so far from the peak in 2001 that entire technological revolutions occurred many years later (e.g., smartphones) that just accelerated the pace even further in ways that would've been hard to imagine at the time. It's also important to note that AI is more than text-based LLMs -- self-driving cars and other forms of physical "embodied" AI are progressing at an exponential pace, while entirely new compute form factors are only just now starting to emerge yet are all but guaranteed to become pervasive as soon as they're viable (e.g., real AR glasses). Meanwhile, even plain-old text-based LLMs have not actually stagnated.


Sam Altman has called it a bubble already: https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-m... (Even a liar sometimes speaks the truth? I don’t know.)


  “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” he told the room, according to a Verge reporter.

  “We have better models, and we just can’t offer them, because we don’t have the capacity,” he said. GPUs remain in short supply, limiting the company’s ability to scale.
https://finance.yahoo.com/news/sam-altman-admits-openai-tota...

So why would Altman say AI is in a bubble but OpenAI wants to invest trillions? Here's my speculation:

1. OpenAI is a private company. They don't care about their own stock price.

2. OpenAI just raised $8.3b three weeks ago at a $300b valuation (a $500b valuation today). He doesn't care if the market drops until he needs to raise again.

3. OpenAI wants to buy some AI companies but they're too expensive so he's incentivized to knock the price of those companies down. For example, OpenAI's $3b deal for Windsurf fell apart when Google stepped in and hired away the co-founder.

4. He wants to retain OpenAI's talent, because Meta is spending billions hiring away the top AI talent, including talent from OpenAI. By calling it a bubble and dampening public sentiment, he could cool down the war for AI talent.

5. He wants other companies to get scared and not invest as much while OpenAI continues to invest a lot so it can stay ahead. For example, maybe investors looking to invest in Anthropic, xAI, and other private companies are more shaky after his comments and invest less. This benefits OpenAI since they just raised.

6. You should all know that Sam Altman is manipulative. This is how he operates. Just google "Sam Altman manipulative" and you'll see plenty of examples where former employees said he lies and manipulates.


Altman wants OTHERS to spend trillions on GPUs. He needs the scaling hype to continue so he can keep getting investors to put money in, in hopes of an AGI breakthrough. If there is no funding, OpenAI is immediately bankrupt.


A couple thoughts:

- The author of this clearly disliked the WSJ article, but I don't think they did a good job of explaining why. I'm not saying they're wrong, but this article is very emotional without much concrete criticism. I assume 'woit' is someone famous I should know about but don't, and he or she is assuming people will find this sufficient simply because they wrote it. But for someone like me who doesn't know who Woit is, it doesn't land as a result.

- I enjoyed the WSJ article and (perhaps naively) thought it did an acceptable job shedding light on an interesting phenomenon that would fly under the radar for many readers. I'd be interested in seeing credible criticism of it, but the article in question declares that providing that information would be "hopeless". In the next sentence, they mention experiencing mental health issues.

- On theoretical physics, my thought, for whatever it may be worth, is that a verified theory of quantum gravity is simply one of the hardest scientific questions of all time. It's something that we should expect would take the entire world hundreds of years to solve. So I'm not at all unnerved or worried about what appears from the outside to be a slow rate of progress. We are talking about precisely understanding phenomena that generally only occur in the most extreme conditions presently imaginable in the universe. That's going to take time to unravel -- and it may not even be possible, just like a dog is never going to understand general relativity.


The author is a member of the academic community and is known for their grounded and informed takes on topics that are extremely complex and only well understood by high-level physics theoreticians and researchers.

They write for an audience of sophisticated non-experts who are curious about advanced topics that are difficult to understand and thus are usually distorted in unhelpful ways by people who barely grasp them but are trusted by the public as the authorities relaying those topics to laypeople.


>I assume 'woit' is someone famous I should know about but don't and he or she is assuming people will find this sufficient simply because they wrote it.

https://en.wikipedia.org/wiki/Peter_Woit


You’re going to be disappointed when you realize one day what you yourself are.


ChatGPT is designed to be addictive, with secondary potential for economic utility. Claude is designed to be economically useful, with secondary potential for addiction. That’s why.

In either case, I’ve turned off memory features in any LLM product I use. Memory features are more corrosive and damaging than useful. With a bit of effort, you can simply maintain a personal library of prompt contexts that you can just manually grab and paste in when needed. This ensures you’re in control and maintains accuracy without context rot or falling back on the extreme distortions that things like ChatGPT memory introduce.
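For what it's worth, here's a minimal sketch of what I mean by a personal library of prompt contexts -- just named text files you list, open, and paste in yourself. The directory layout and file names here are only examples, not anything any product requires:

  from pathlib import Path

  # Example layout (nothing special about these names):
  #   ~/prompt-library/code-review.txt
  #   ~/prompt-library/sql-style.txt
  LIBRARY = Path.home() / "prompt-library"

  def list_contexts() -> list[str]:
      """List the named contexts available to paste into a chat."""
      return sorted(p.stem for p in LIBRARY.glob("*.txt"))

  def load_context(name: str) -> str:
      """Read one context so it can be copied into the prompt manually."""
      return (LIBRARY / f"{name}.txt").read_text()

  if __name__ == "__main__":
      print(list_contexts())
      # e.g. print(load_context("code-review")) and paste the output yourself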


There's no doubt it can function as a convenient cover, but that doesn't mean it's having no effect at all. It would be naive to assume that the introduction of a fundamentally new general-purpose work tool across millions of workers in nearly every industry and company within the span of a couple years has not played any role whatsoever in making teams and organizations more efficient in terms of headcount.


It's rarely (maybe never) a direct one-to-one elimination of jobs. Most attempts to replace a single complete human job with an AI agent are not successful (and large companies, generally, aren't even attempting that). Rather, the phenomenon is more like a diffuse productivity gain across a large team or organization that results in a smaller headcount need over time as its final net effect. In practice, this materializes as backfills not approved as natural attrition occurs, hiring pipelines thinned out with existing team member workloads increased, management layers pruned, teams merged (then streamlined), etc.


When my father joined an attorney's office as recently as the 80s, there was a whole team of people that he worked with: Of course, there were the attorneys who actually argued the cases, but also legal assistants who helped create various briefs, secretaries who took dictation (with a typewriter, of course) to write those documents, receptionists who managed the attorneys' schedules and handled memos and mail, filing clerks who helped keep and retrieve the many documents the office generated and demanded in a giant filing room (my favorite part of visiting as a kid: row after row of rolling shelves, with crank handles to put the walkways where they were needed to access a particular file), librarians who managed a huge card catalog and collection of legal books and acquired those that were not in the collection as needed... it was not a small team.

When he retired a few years ago, most of that was gone. The attorneys and paralegals were still required, there was a single receptionist for the whole office (who also did accounting) instead of about one for each attorney, and they'd added an IT person... but between Outlook and Microsoft Word and LexisNexis and the fileserver, all of those jobs working with paper were basically gone. They managed their own schedules (in digital Outlook calendars, of course), answered their own (cellular) phones, searched for documents with the computers, digitally typeset their own documents, and so on.

I'm an engineer working in industrial automation, and see the same thing: the expensive part of the cell isn't the $250k CNC or the $50k 6-axis robots or the $1M custom integration, those can be amortized and depreciated over a couple years, it's the ongoing costs of salaries and benefits for the dozen humans who are working in that zone. If you can build a bowl screw feeder and torque driver so that instead of operating an impact driver to put each individual screw in each individual part, you simply dump a box of screws in the hopper once an hour, and do that for most of the tasks... you can turn a 12-person work area into a machine that a single person can start, tune, load, unload, and clean.

The same sort of thing is going to happen - in our lifetimes - to all kinds of jobs.


It already happened to all kinds of jobs.

I recall the story of the mine water pump that had a boy run up and down a ladder opening and closing steam valves to make the piston go up and down. The boy eventually rigged up a stick to use the motion of the piston to automatically open and close the valves. Then he went to sleep while the stick did his job.

Hence the invention of the steam engine.


Nail on the head. I try to explain this to everyone who thinks we are heading toward global collapse. AI isn't good enough to replace a person; it enables a person to replace a team. That will take a while, since, as cool and great as AI is now, it is not yet powerful and integrated enough for teams to be totally replaced. It happens only as fast as people naturally leave: instead of hiring someone to replace that job, one guy inherits his teammate's job along with his own, but has more tools to do both. It sounds like a person is being replaced, but I've never worked anywhere where people weren't complaining about being understaffed. The budget likely wasn't cut, so now they can hire someone to do a different job -- a job an idea-fairy wanted someone to do but lacked the bandwidth for. The old position is gone, but new ones have opened. It is the natural way of the world. We innovate, parts of our lives get easier, our responsibility scope increases, our life is full again. For a person, that translates to never feeling rich if they let their standard of living match their income; for a company, it translates to scope increase as well if the company is growing, shown as either more job openings or more resources for the employees (and obviously the opposite in both cases if the person/company is "shrinking").


Many jobs, even most jobs, don't work you at or near the short-term max capacity you can achieve, because that isn't sustainable, it lacks redundancy, and the nature of workflow and peer expectations creates a degree of slack.

Condensing the workforce as you describe risks destroying redundancy and sustainability.

It may work in tests with high performers over short durations, but it may fall apart over longer terms, with average performers, or with even a small amount of attrition.

Having cog number 37 pick up the slack for number 39 doesn't work when there's no excess capacity.


The low-hanging jobs historically created by progress are gone. You are talking nonsense -- the equivalent of 'and then, magically, new jobs appear, because jobs have appeared in the past.' And while you wave away job fears as a nothing-burger, you randomly add in blaming people for not feeling rich because... they think progress should include their lives progressing for the better?


> secretaries who took dictation (with a typewriter, of course) to write those documents

Complete aside, just because you brought up this thought and I like the concept of it:

My mom trained professionally as a secretary in the 1970s and worked in a law office in the 1980s; at that point, if you were taking dictation, you were generally doing longhand stenography to capture dictation, and then you'd type it up later. A stenotype would've been a rarity in a pre-computer office because of the cost of the machine; after all, if you need a secretary for all these other tasks, it's cheaper to give them a $2 notebook than it is a $1,500+ machine.


> Rather, the phenomenon is more like a diffuse productivity gain across a large team or organization that results in a smaller headcount need over time as its final net effect. In practice, this materializes as backfills not approved as natural attrition occurs, hiring pipelines thinned out with existing team member workloads increased, management layers pruned, teams merged (then streamlined), etc.

My observation so far has been that executive leadership believes things that are not true about AI and starts doing the cost-cutting measures now, without any of the productivity gains expected/promised, which is actually leading to a net productivity loss from AI expectations based on hype rather than AI realities. When you lose out on team size, can't hire people for necessary roles (some exec teams now won't hire unless the role is AI related), and don't backfill attrition, you end up with an organization that can't get things done as quickly, and productivity suffers, because the miracle of AI has yet to manifest meaningfully anywhere.


> the phenomenon is more like a diffuse productivity gain across a large team or organization

AI alone can't do that; even if you make the weakest link in the chain stronger, there are probably more weak links. In a complex system, the speed is controlled by the weakest, most inefficient link in the chain. To make an organization more efficient, they need to do much more than use AI.

Maybe AI exposes other inefficiencies.


I mean one of the services at work had a custom HTML dashboard. We eliminated it and replaced it with Grafana.

I worked on both - my skillset went from coding pretty bar charts in SVG + Javascript to configuring Grafana, Dockerfiles and Terraform templates.

There's very little overlap between the two, other than general geekiness, but thanks, I'm still doing OK.


Seems like a bad decision. Grafana is awful compared to a bespoke solution.


I don't get the hate - Imo it's one of the better working tech products I've had the chance of using.


If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.


I am very certain that AI will slowly kill what's left of the "social" in the social web outside of closed circles. And they made their only closed-circle app (WhatsApp) unusable and ad-infested. Imo, either way, they are still in the process of slowly killing themselves.


A social media company is more diversified? Maybe compared to Anthropic or OpenAI, but not to any of the hyperscalers.


Yes, of course it is.


As far as I can tell, putting the conspiratorial thinking aside, he's not really wrong, but I'm also not sure it matters that much.

If neurosymbolic AI was "sidelined" in favor of "connectionist" pure NN scaling, I don't think it was part of a conspiracy or deeply embedded ideological bias. I mean, maybe that's the case, but it seems far more likely to me that pure deep learning scaling just provided a more incremental and accessible on-ramp to building real-world systems that are genuinely useful for hundreds of millions of users. If anything, I think the lesson here was to spend less time theory-crafting and more time building. In this case, it looks like it was the builders who got to the endpoint that was only imagined by the theory-crafters, and that's what matters at the end of the day.


That resonates. There _are_ a lot of good approximation functions that can be developed from deep learning and good data, and now RL on top. But then we really do need symbolism, and now we need to somehow combine them. And it'll be different for text vs. vision... Lots of ...s ahead.


But how do you even combine them? Is it only via another AGENT that has a symbolic tool? And if that agent (or group of agents) can't extract multiple symbolic representations, or the one that best fits, from the current approximation (the context), then...
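One very hand-wavy way to picture it (just a sketch; call_llm is a placeholder, not any real API, and the routing rule is made up): the connectionist side proposes a candidate, and a symbolic tool solves or checks whatever it can actually parse, falling back to the approximation when it can't.

  import sympy

  def call_llm(prompt: str) -> str:
      """Placeholder for the connectionist side: returns a candidate expression."""
      # In reality this would be a model call; hard-coded for the sketch.
      return "x**2 - 4"

  def solve_symbolically(expr_text: str):
      """The symbolic tool: only handles what it can parse into an exact form."""
      try:
          expr = sympy.sympify(expr_text)
      except sympy.SympifyError:
          return None  # couldn't extract a symbolic form; fall back to the approximation
      return sympy.solve(expr, sympy.Symbol("x"))

  candidate = call_llm("Give an equation in x whose roots answer the question.")
  exact = solve_symbolically(candidate)
  print(exact if exact is not None else "no symbolic form found; trust the approximation less")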

