Hacker News

It's not unfounded "moral panic" that AI will take our jobs; it will.

It's not hard to find accounts of people who say they're 2x, 3x, 5x more efficient in their work now. That's going to quickly translate into 1x person working at 5x efficiency putting 1-4 people out of work. It's not hard to find accounts of people who have already lost their jobs-- such stories have popped up here on HN.

Sure, you can argue that the jobs are simply displaced, and that others will emerge requiring different skills than AI can easily perform, but that doesn't negate the reasonable panic individuals in those jobs will feel. The article cites off-shoring as an example, but that's actually my point: I know various people who were ~55-60 when offshoring hit their industry or company. They were displaced and never found a new career-- their expertise was in areas that other corps were also outsourcing. I saw plenty of individuals hop to another company only to have their job at the new corp offshored a few years later. Then, if they're lucky & near the finish line, they can take early retirement, but if they're a bit short of that mark they end up working jobs that pay minimum wage+$1-$7/hour. Apart from other people I came across, I knew several such individuals when I worked at a big box retail book store (when those were still relevant) during my college years.

So yeah, even if the author is correct that job losses or displacements are temporary, that's still a process that takes a minimum of 10-20 years. Retirement age minus 10-20 years is a whole heck of a lot of the population that are in jobs right now that can be automated or 3-5x'ed by a single person. Panic, or at least anxiety, is a rational response to that. Sure, a person should learn the new tools ASAP to be one of the 1x-now-5x workers, but there just aren't enough slots for everyone.



It’s also not hard to find people (see: me) in industries where AI has been prophesied to be exceptionally disruptive, saying that actually, meh, it’s not going to mean much ultimately.

I’m a programmer, and sure, I’ve asked ChatGPT to write some unit tests, and I’ve encouraged Copilot to suggest some function signatures, but - it’s still ‘just’ a novelty, and it is wildly out of reach of ‘disruptive.’ It’s not doing anything I couldn’t do, and no one not at my level would know what to do with what it does. You’ve still got to be a programmer to see whether the code it generates is good.

People kind of forget that about art - everyone has an opinion about how art looks. But there’s really one main way everyone knows to test code: does it work? And if you don’t know how to stitch it together so it works, it doesn’t look like anything to the uninitiated. You can write 10,000 lines of code, and even if it’s brilliant technically, unless you can demonstrate it to the client, it’s practically worthless. You still need a programmer to figure out what to do with ChatGPT’s output; clients don’t know what to do with it.

Everyone has an opinion on almost-perfect AI art - almost-perfect code doesn’t compile.


That's not how it works. With autogenerated code, a small office of devs can build a test rig and run the generated code through it until it passes.

That's no different to what happens - or should happen - now. But with a much, much faster iteration cycle and higher throughput.
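The loop described above can be sketched roughly like this. Note this is a minimal illustration, not anyone's actual pipeline: `ask_model` is a hypothetical stand-in for a real code-generation API, and the "test rig" is just a set of assertions run against each candidate.

```python
# Minimal sketch of a generate-and-test loop for autogenerated code.
# `ask_model` is a hypothetical placeholder; a real implementation
# would call an LLM API here. We simulate a model that produces a
# buggy candidate first and a correct one on the second attempt.

def ask_model(prompt, attempt):
    if attempt == 0:
        return "def add(a, b):\n    return a - b"   # buggy candidate
    return "def add(a, b):\n    return a + b"       # correct candidate

def passes_rig(source):
    # The "test rig": execute the candidate and check known cases.
    namespace = {}
    try:
        exec(source, namespace)
        add = namespace["add"]
        return add(2, 3) == 5 and add(-1, 1) == 0
    except Exception:
        return False

def generate_until_passing(prompt, max_attempts=5):
    # Iterate: generate, test, retry until the rig passes or budget runs out.
    for attempt in range(max_attempts):
        candidate = ask_model(prompt, attempt)
        if passes_rig(candidate):
            return candidate
    raise RuntimeError("no passing candidate within budget")

code = generate_until_passing("write an add function")
```

The point of the sketch is the shape of the loop, not the model call: the human effort shifts from writing the code to writing the rig that judges it.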

At this point GPT doesn't really understand semantics. But it does a fair imitation, and that imitation will improve over time.

There will certainly be a point where it will be writing code that has fewer bugs than human code.

There will be a point after that where it will build the test rig internally.

But I suspect we'll be in a very different place by then, and most of what we consider dev work today will be redundant for structural reasons rather than technical ones.

Generally I think the AI not-so-bad comments are coming from people who haven't really understood what's coming at everyone. AI won't automate coding, it will automate culture - artistic culture, media culture, business culture, political culture, perhaps also many kinds of personal interactions.

It'll be like the web, which automated paperwork, data gathering, and certain kinds of social interaction, but many orders of magnitude broader and more disruptive.


>It’s also not hard to find people (see: me) in industries where AI has been prophesied to be exceptionally disruptive, saying that actually, meh, it’s not going to mean much ultimately.

Indeed that's quite a bit easier than finding people who are 5x more efficient thanks to AI.

Never mind, I'm sure that we will see the results soon in the form of lower inflation and a lowered retirement age though /s

After all, we're so much more efficient now. It's not just a smokescreen for the disemployment effects of industrially hollowing out the country for profit? Is it?


Even if what you say is true (and I am very doubtful), that only covers rather specific sorts of jobs. It says nothing about the wider impact across all industries.

Even if devs' jobs are "safe", if even 20% of everyone else can't earn a living, devs will suffer the consequences just the same as the rest of the world.


Have you tried GPT-4? IBM is already cutting staff.


...yet


> It's not hard to find accounts of people who say they're 2x, 3x, 5x more efficient in their work now.

What does "making someone 5x more efficient" actually mean anyway? Does it make people enjoy their lives more, or does it just make a few people work harder while leaving the others hanging without income?

Let me bring up another point that just popped into my head: AI doesn't really need to be a perfect replacement for a human worker in order to replace them; it just needs to be good enough to make economic sense for the employer to switch labor strategy.

We, the market, have already been trained to accept reduced product quality over the past decades as companies adjusted their production strategies. It is only reasonable to assume that we will accept it further.

So even if AI technology cannot maintain even the level of quality we often suffer today, it is still well within the realm of possibility that companies will just replace their workers with AI and then expect the market to lower its expectations more.

> The stakes here are high. The opportunities are profound.

I guess time will tell...

While I at it, another thing:

> AI will not destroy the world, and in fact may save it.

Well, you can save the world many, many times over, and then you can still save it once more. But the world can't really take much destruction. One or two extinction-level events might just do it in for good.

Also, AI doesn't even need to be sentient or able to reason its way toward world destruction. In fact, it might not even know what the f it's doing; all it needs is the capability to do so and some calculations that line things up.

I'm not really very optimistic about it, obviously.


> It's not hard to find accounts of people who say they're 2x, 3x, 5x more efficient in their work now

Yet labor markets are strong, with record low unemployment across the board. I wonder if the anecdata or the real data are right.

Offshoring was not motivated by technology. It's not technology that harms people, it's other people being greedy.


> Yet labor markets are strong, with record low unemployment across the board. I wonder if the anecdata or the real data are right.

You have to give it more time. Even if this leads to fewer employees being needed, it will take a few years before the effects are seen. It's quite a slow process:

1. A sufficiently large part of the work force has to learn it and start using it in an efficient manner.

2. It has to be noticed, measured and evaluated as being more efficient by management.

3. Management has to decide to lay off people instead of being happy with the improved efficiency and possibly higher profits.

4. Lay-offs take time. From step 3 to actually taking action can take time, and from the actual notice to the person being on the job market also often takes many months.


Those are only the first steps. Companies with a relatively inflexible amount of programming that they need doing might lay off staff, but it'll have a negligible net effect while other companies want more work done faster.

How much unmet need for software is there in the world? Probably an enormous amount. It's pretty easy to think of software you wished existed or wished had an extra ten thousand hours of improvements put into it. It could take decades before efficiency gains translate into reducing numbers of programmers in employment. ... Edit: I mean competent programmers in employment. Easy to imagine that when there are more highly productive programmers around there will be less need for the ones that are less so.


Historically, when a resource becomes cheaper, demand for it increases. AI, as a type of software, can, like software, make human labor more productive, i.e., its output becomes cheaper.

Efficiency gains don't create unemployment, they create economic growth.

Corporate greed creates unemployment. McKinsey has created much pain. Are we defining Corporations as forms of AI? If not, AI (as software) causes the economy to grow and employment to increase.


> Efficiency gains don't create unemployment, they create economic growth.

Stated as a blanket rule, this is simply untrue. As is its opposite. Reality is much more nuanced than that.

For instance, even if things become less expensive, how much would that matter to people who have no income?


No, literally. That's a basic tenet of economics. Where does economic growth come from? Productivity gains. What are productivity gains? Better efficiency in using factors of production.


Labor markets were strong for long periods of the off shoring process, but countless people were displaced, many of whom didn’t have the ability to retrain to equivalently paid jobs. My comment upthread details that aspect of things.

Also offshoring was absolutely driven by technological advances. Commercial grade network speeds made some of it possible, and even more of it simply more convenient, cheaper, and w/ less friction than it would have taken earlier.

As one small example one of my projects at my first job out of college was for a small publishing company. It was a jack-of-all trades job but this project had me working on digitizing the back catalog. They had decades of prior publications, books, journals that were print-only. They paid significant (comparatively) fees to get the most in demand of the back catalog’s abstracts transcribed by a US firm, and even more money for whole articles or books that were really popular. However advances in the speed of scanning technology and OCR accuracy meant that costs went down. In particular better tools for things like human-assisted OCR meant that we could offshore the scanning of the entire back catalog to a firm in India where those tools could be used by people with less domain knowledge and non-native fluency in English to achieve similar results. I oversaw coordinating the technical details of the deal, implementing parts of the in house database for the results, and the integration with the digital platforms of the day.

None of that was impossible with prior tech, but tech advances both lowered the cost and friction of doing it from half a world away.

Separately, you can look at advances in shipping & logistics technology, some even decades prior to the digital revolution, that made offshoring of manufacturing a viable economic option. Just one part of that was containerized shipping & its massive growth in the 60’s and 70’s, paving the way for significant globalization from the 80’s onward. All of that required technical advances.


Yes, and all that throughout decades of strong long-term growth leading to current historically low unemployment rates.

And your point is that this is all going to change abruptly 'because AI'? Or what?


I think we are passing each other by in addressing different core points (which I'll specify more clearly below; I may not have done so as explicitly as I thought).

I am not arguing with a claim that AI will forever result in a net-negative of jobs. I'm not certain how this will play out, but you & I probably agree (my apologies if I read too much into your comment) that things will likely, as with past technological disruptions, level out, with new and different jobs created, and likely no need for long-term anxiety or panic on a macroscopic level. Going back to the publishing company I worked for, innovations going on elsewhere in the field meant that costs for publications, and even very high quality publications, dropped so significantly that lots of places that couldn't previously afford such things could now do so. More publishing meant more jobs for designers, copywriters, marketing folks to determine their strategic use, etc... lots and lots of new jobs.

On to what I intended as my main point: I was addressing the claim by the article's authors that worrying about this process amounted to moral panic, forgetting the history of past innovations, etc. I strongly disagree. The changes to required skills will mean that not everyone qualified for the old jobs will easily transition to new ones. Some never will, and a portion of those won't do so because it's not a realistic possibility for them [1]. I saw the publishing revolution evolve over more than a decade. It came for different jobs at different times. The people in the pathway of that have plenty of reason to be anxious or panic a bit about the-- at minimum-- upheaval of their lives or, if they're really unfortunate, lasting economic difficulties.

[1] This is for (at least) two main reasons:

1) Age. Plenty of people in the twilight of their working years may be able to reskill and find other jobs, but plenty won't. Maybe some won't have the aptitude for it, some won't have the economic ability to take time off from receiving a paycheck to do so, and so on. And even if they do, they're starting off fresh in a new line of work without job experience in it, so they're at the bottom of the pay ladder, entry level work. Ageism is also a thing. A 62 year old person with 40 years of pre-digital typesetting expertise & related skills loses their job. They spend 6 months or longer paying to learn new skills in digital equivalents. They try to get a job and they are A) competing against a mass of younger people that are "native" to this technology and B) having their resumes reviewed by people that see a candidate with little direct experience in the required tech who might very well be retiring just as they're starting to become most useful after learning the job.

2) Hindsight is 20/20. It's easy to look back at these shifts and say "well they should have seen the way the industry was moving and made changes earlier". But that ignores the fact that such dramatic shifts are often not obvious in the moment. At the very beginning of such things-- even beforehand, when the tech is invented but not adopted-- there will be people saying "X is dying!". And then for years people hear that message while very little changes. By the time a decent fraction of things has shifted enough that maybe the future is a little clearer, people have also heard "X is dying!" for so long that it's just noise and they become hardened against the message. Some will manage, some won't, some will get trapped in the "age" scenario I detailed.

In short: Society probably doesn't need to panic about this, especially when taking a long term view. But individual industries and people in specific types of jobs are, contrary to the general point by the article's author, quite justified in having a little bit of "OMG we're f'ed!". In fact it is exactly those people doing that right now who will-- by virtue of their fear and panic-- be most likely to take steps early in these shifts to reposition themselves either by business pivot or job skills to weather the change.

If you're a copywriter in advertising & marketing right now and you feel comfortable, if you're going about your daily work and you're not panicking at least a little bit, that's a problem. LLM's may (right now) only turn out mediocre results for those sorts of tasks, but let's face it: a majority of ads, marketing, and other writing churned out by humans today is also mediocre and uninspired. The people in those fields who are quite reasonably panicking right now are the ones who will either A) improve their throughput productivity to be one of the few remaining workers churning out a larger amount of the same mediocre work or B) use the output of these new tools to bootstrap the process of generating a bunch of mediocre options and then use human-level intelligence to sort through the results and polish one up into something much better than mediocre.


The thing about advertising and marketing is that actually writing copy doesn't take anywhere near the full 40 hours a week. How much time does it take to type up a short bit of marketing copy? If your only goal is to just shit something, anything out then you could write up a whole campaign in a few minutes.

But they don't do that in general. So where's ChatGPT going to save time? You'll still have to iterate, iterate, iterate, go back to stakeholders, discuss in progress work with clients, etc etc etc. I just don't see any reason to worry much in the short term, as long as you're working for a reputable company.

This may change eventually, but I foresee it taking several years at least. Progress isn't anywhere near as "exponential" as some camps claim.


Don't focus exclusively on the specific example I chose. It's only one area of applicability and tech is still advancing rapidly such that improvements will likely make it useful for new use cases as time goes on.

But even staying on just this one example, your framing of the process is incorrect:

You are underestimating the amount of "creative" time that goes into these processes, and how the labor is divided. There are in fact full-time creative folks who spend much closer to 40 hours a week on this than you think. I've worked on the analytics side of a few campaigns from small to very large (not my favorite type of project), so I've sat in on plenty of meetings where my organization was the customer. During the entire process, before we even sat down with a marketing firm, people on my side were spending significant time brainstorming ideas, slogans, narrative tone, visual aesthetic etc. so we could have a starting point for conversation.

When that conversation began, it was account managers from the marketing firm, not creative staff, who were involved. Account managers met with the client, discussed & clarified ideas, refined the parameters of what we wanted, etc. Then they took it back to their dedicated creative staff, who did spend a majority of their time working on the actual creative side. We generally only spoke at length to someone like a full-time writer for very important projects, and their time was billed by the hour, usually scoped out as a block of time in the contract. Otherwise we might have only a brief conversation, usually if a few rounds of conversation & revisions hadn't quite got us where we needed. Sure there's bleed-over, and in any given company people will wear multiple hats and the venn diagram of account manager & creative will have more or less overlap, but beyond small firms this is a typical rough picture of the division of labor.

>I just don't see any reason to worry much in the short term

Depends on what you mean. If you mean "Don't panic that you're going to lose your job in a year" then I'd guess you're mostly right. If you mean "Don't worry at all" then you're mostly wrong, because as you specified, "short term". And as I said in my last comment, if people in jobs exposed to disruption aren't worried right now, then they are the ones most at risk of getting left behind. The people who worry now will be more likely to prepare.

> your only goal is to just shit something, anything out... [and later] ...as long as you're working for a reputable company.

See my previous comment about 90% of this output being fairly bland and mediocre, but the process still isn't to choose the first idea you think of, which is likely not going to be the best one compared to taking more time to come up with a bunch and choose from the best. There are lots of levels of mediocrity, and the first thing you shit out is simply the lowest of them. In reality the typical process will be something like this:

Forget about longer copywriting; let's choose something smaller, a slogan. Usually 1 sentence, sometimes 2, but not much longer. I, the customer, go to a marketing firm for an entire campaign, one part of which is the slogan. As part of the contract we will have 2-3 rounds of discussions, preliminary ideas, revisions, etc, and the "deliverable" will be 3 finished, polished options to choose from. The account manager isn't going to their writer and saying "give me 3 slogans". They're saying "Give me 10 slogans to start with". The writer then goes to work. The writer does not write 10 slogans! The writer writes more like 20 or 30, maybe more if you count minor iterations. They choose the 10 best and bring them to the account manager, talk them over, then the copywriter goes back and revises and maybe writes a bunch more for the account manager to finally choose the half dozen they think the client will like the most. The client gives feedback, the account manager goes back to the copywriter, who goes back for another round, etc. If all goes well, then after 2 or 3 rounds of this the account manager has a final meeting with the client for approval of one of the 3 final options. And that's just for the slogan


> And that's just for the slogan

My point is that I don't understand what using an LLM changes here. You'll still have to generate multiple candidate slogans, you'll still have to bring 10 of them to the account manager, you'll still have to discuss the possibilities with the customer and tailor the results to their needs. You'll still have to iterate. All an LLM can do is write the candidate slogans, and -- as of now -- it doesn't do a spectacularly great job at it. Very little actual labor is being saved here.


I'm sorry, but I see no relation between saying "some people will fare better than others in society" and "technical advances grow the economic output". Personal success and overall societal progress are completely independent processes. Some people will advance in degenerating societies. Some people will lose out during golden ages. I really don't see a point to be made here.


Once upon a time the only tools we had were rocks, sticks and fire.

Technology relentlessly improved for millennia in leaps and bounds.

And today we hover around full employment.

I think AI is disruptive, but technology has never yet made humans redundant en masse.


This is incredibly short sighted.

The rate at which we've been developing tools has been increasing, and has also required more specialization. You cannot easily switch jobs from software engineer to heavy equipment operator, even though both use sufficiently advanced tools. Looking at top-level employment metrics also hides quite a bit of information. Time is required for people to learn new skills. If a sudden influx of people were all heavy equipment operators, the market would price that in and suddenly that job would pay the same as mopping a fast food bathroom.


Nope, it's history.

Go back and look at any new technology. Each one put large numbers of people out of work, but they quickly found new jobs in areas that were opening up due to other new technology: telephone operators became television assemblers and so on. Same will happen here.


There are two obvious limits:

- this may be true in the long run, but it does not mean people will magically adapt in one day. Luddites were skilled workers whose jobs were replaced by much less skilled jobs because of mechanisation.

- People work much, much less than in the past: working hours for those who work were almost cut in half over 150 years [0], and there are relatively far fewer people who work (much longer studies, much longer retirement than a century ago) - if you count time spent on domestic work.

[0] https://ourworldindata.org/working-hours


Technology didn't make horses unemployable until it did. We have certain abilities. Once those abilities are replaced, that's it.


So who doesn't want to get more done? Increased productivity means increased output. Who would fire somebody who is working 5X more efficiently?

Ok, in some industries this might happen, especially when the productivity increase isn't in the main business line but in some ancillaries. But everybody turning out code faster means getting done faster, etc. You want to go back to the old slow way? Only losers will do that.


As a business owner, either you can grow or you can’t (e.g., a snack shop). If you can grow, then each additional new person you hire can do 3x more for the money. This is why unemployment will stay low and wages relatively high— individuals are able to be so much more productive in growing businesses.


Your whole argumentation only applies to jobs where you shower before going to work, not afterwards.


why does a human need a job?


Because, barring a grand upheaval in global economic systems, most people generally need to sell their labour in order to earn a 'wage' which they spend on food, clothing, rent, etc.


Let's tax AI work and redistribute wealth. Work does not have to have a monopoly on wage. Just give us money. We can achieve a post-scarcity society.


The truth is that society needs people to put their time and energy somewhere so that society can function in order. That's why all governments, regardless of their kind, care about employment and the economy. Our society would collapse if people all had free time to do whatever they want.


We can do lots of things in the abstract. Concentrations of power, wealth and the political trends of the last 20 or so years make me less optimistic.


I'm noticing an uptick in the number of people who are agreeing with the idea, in the abstract, though.


Hmm, sounds like we need a grand upheaval in the global economic system in that case.


Oh yeah no worries. We'll have a prototype post-scarcity society in a few months.


Be careful what you wish for, you just might get it.


> 1x person working at 5x efficiency putting 1-4 people out of work

Or whole industries get 5x more productive.


Maybe depends on the industry? Let's say there's a huge need for actually talented software devs -- then, among them, no jobs gone, 5x productivity.

Whilst ... Graphic designers? A small % of them captures large shares of the market (eg good marketing, automation, winner takes all?), but most end up without a job


Or more businesses and people can afford graphic design and use it to their benefit.

If ML-powered tools continue being available and affordable (and I don't see how they won't), that effectively erases the possibility of monopolies in overlapping markets. How can a small % of graphic designers capture a large market share while there are tens of thousands of wannabes behind their backs?


How? If they provide a good product/services, and are good at marketing, branding.

As an extreme example from another industry, think about Coca-Cola -- their coke isn't magically that much better (if at all), but their marketing is. And try to start a new bubbly drink; that'd be hard


There's no such thing as graphic design marketing. It is an oxymoron, like "oil painting marketing" or "woodcarving marketing". Every creative craft has names, not brands (let's leave the "personal brand" bullshit to lifestyle yoga coaches).


But all companies do marketing (or sales) to find customers. Graphic design companies do marketing (or sales) to find their customers.

Painters and artists do marketing, to get people interested in their art and exhibitions.

When you say "oxymoron" I think there's a misunderstanding. I didn't mean that they would do marketing about wood carvings in general, but instead for their own company and their own things they sell.

But what does it matter anyway, have a nice day



