> Ultimately I think over the next two years or so, Anthropic and OpenAI will evolve their product from "coding assistant" to "engineering team replacement"
The way I see it, there will always be a layer in the corporate organization where someone has to interact with the machine: the transition layer from humans to AIs. This is true no matter how high up the hierarchy you replace the humans, be it the engineers, the engineering managers, or even their managers.
Given the above, it feels reasonable to believe that whatever title that person has, the one responsible for converting human management's ideas into prompts (or whatever replaces text prompts in the future), they will do a better job if they have a high degree of technical competence. That is to say, I believe most companies will still want, and benefit from, those employees being engineers: people who convert non-technical CEO fever dreams and ambitions into strict technical specifications and prompts.
What this means for us, our careers, or Anthropic's marketing department, I cannot say.
That reminds me of the time when 3GLs arrived and bosses claimed they no longer needed developers, because anyone could write code in those English-like languages.
Then when mouse-based tools like Visual Basic arrived, same story, no need for developers because anyone can write programs by clicking!
Now bosses think that with AI anyone will be able to create software, but the truth is that you'll still need software engineers to use those tools.
Will we need fewer people? Maybe. But over the past 40 years we have multiplied developer productivity many times over, and yet we still need more and more developers, because the needs have grown even faster.
My suspicion is that it will be bad for salaries, mostly because it'll kill the "looks difficult" moat that software development currently has. Developers know that "understanding source code" is far from the hard part of developing software, but non-technical folks' immediate recoiling in the face of the moon runes has made it easy to justify our profession's high pay for ages. If our jobs transition to largely "communing with the machines", then we'll go from a "looks hard, is hard" job to a "looks easy, is hard" job, which historically hurts bargaining power.
I don't think "looks difficult" has been driving wages. FAANG etc leadership knows what's difficult and what's not. It's just marginal ROI. If you have a trillion-dollar market and some feature could increase that by 0.0001%, you hire some engineers to give it a try. If other companies are also competing for the same engineers for the same reasons, salaries skyrocket.
I wonder if the actual productivity changes will even be what matters for the economics to change dramatically; the change may instead come as a rebound in favour of seniors. If I were in school two years ago, looking at the career prospects and cost of living, I just straight up wouldn't invest in the career. If that happens at a large enough scale, the replenishment of the discipline may slow, which would affect what people who already have those skills can ask for. And if the wild magical productivity gains don't materialize in a way that reduces the need for expert software people who can reasonably be held liable for whatever gets shipped, then we'll stick around.
Whether it looks easy or not doesn't matter as much, imo. Plumbing looks and probably is easy, but it's not the CEO's job to go and fix the pipes.
I think this is the right take. In some narrow but constantly broadening contexts, agents give you a huge productivity edge. But to leverage that you need to be skilled enough to steer, design the initial prompt, understand the impact of what you produce, etc. I don't see agents in their current and medium-term incarnation as a replacement for engineering work; I see them as a great reshuffling of engineering work.
In some business contexts, the impact of more engineering labor on output gets capped at some point. Meaning once agent quality reaches a certain point, the output increase is going to be minimal with further improvements. There, labor is not the bottleneck.
In other business contexts, labor is the bottleneck. For instance it's the bottleneck for you as an individual: what kind of revenue could you make if you had a large team of highly skilled senior SWEs that operate for pennies on the dollar?
What I think you'll see is labor shifting to where the ROI is highest.
To be fair, I can imagine a world where we eventually fully replace the "driver" of the agent, in that it becomes good enough to fill the role of a ~staff engineer who can ingest very high-level business context, strategy, and politics and produce a high-level system design that can then be executed by one or more agents (or by one or more other SWEs using agents). I don't (at this point) see some fundamental rule of physics or economics that prevents this, but it seems much further ahead of where we are now.
Thanks for the post. I found it very interesting and I agree with most of what you said. Things are changing, regardless of our feelings on the matter.
While I agree that there is something tragic about watching what we know (and have dedicated significant time and energy to learning) be devalued, I'm still excited for the future and for the potential this has. I'm sure that, given enough time, this will result in amazing things that we cannot even imagine today. The fact that open models and research are keeping up is incredibly important, and probably the main thing that keeps me optimistic for the future.
* I'm into genealogy. Naturally, most of my fellow genealogists are retired, often many years ago, though probably also above average in mental acuity and tech-savviness for their age. They LOVE generative AI.
* My nieces, and my cousin's kids of the same age, are deeply into visual art. Especially animation, and cutesy Pokemon-like stuff. They take it very seriously. They absolutely DON'T like AI art.
I don't think anyone disagrees with that. But it's a good time to learn now, to jump on the train and follow the progress.
It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.
>> It's too bad people spend energy for generating them now.
How do you mean?
Some quick back of the napkin math.
Creating a 'throwaway' banner image by hand, maybe 15 minutes on a 100W CPU in Photoshop:
15 minutes human work time + 0.025 kWh (100W*0.25h)
Creating a 'throwaway' banner image with Stable Diffusion on a 600W GPU. In reality it's probably less than 20 seconds to generate, but let's round it up to one full minute of compute time:
5 minutes human work time + 0.01 kWh (600W*(1/60)h)
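Or, the same napkin math as a few lines of Python (the wattages and durations are just the assumptions above, not measurements):

    # Back-of-the-napkin energy comparison, using the assumed numbers above.
    def energy_kwh(power_watts, hours):
        return power_watts * hours / 1000

    photoshop = energy_kwh(100, 15 / 60)  # 15 min on a ~100 W machine -> 0.025 kWh
    diffusion = energy_kwh(600, 1 / 60)   # 1 min on a ~600 W GPU (generous) -> 0.01 kWh
    print(photoshop, diffusion)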
The way I see it, it seems to spend less energy, regardless of whether you're talking about human energy or electrical energy. What's the issue here exactly?
You can't take the human energy into account, because there's no reason to believe they won't live for the same amount of time and use the same amount of energy regardless.
You are not accounting for the model training (which can't be ignored, first because you can't ignore fixed costs, and second, because we keep training newer models, so amortizing doesn't quite work), rebound effect, the subsidized bot crawling, etc.
I won't comment further on this; this discussion has been rehashed to death anyway, and in better ways than I can.
IMHO the better way is to not do meaningless cover images, and this is also true of stock, non-AI generated images (I'm not against art, so if it's your strength, by all means, please do meaningful or nice cover images).
It all depends on the scale you use. At the individual level, sure. But it's like cars: they keep getting more efficient, yet total energy consumption keeps increasing.
The further we can go, the further we will go.
The more CPU power we get, the more JS heavy websites get.
The more images we can generate, the more we will generate.
The more we can do, the more we do, whether we should or not.
To add to the sibling comment, your CPU is not going to be using 100 W (if it can even reach that!) for more than a few seconds in total during 15 min of typical Photoshop use.
Perhaps. But I can't see a reason why they couldn't still write endless—and theoretically valuable—poems, dissertations, or blog posts, about all things red and the nature of redness itself. I imagine it would certainly take some studying for them, likely interviewing red-seers, or reading books about all things red. But I'm sure they could contribute to the larger red discourse eventually, their unique perspective might even help them draw conclusions the rest of us are blind to.
So perhaps the fact that they "cannot know red" is ultimately irrelevant for an LLM too?
There are use cases where even low accuracy could be useful. I can't predict future products, but here are two that are already in place today:
- On iPhone keyboards, some sort of tiny language model suggests what it thinks are the most likely follow-up words as you write. You only have to pick a suggested word when it matches what you were planning to type.
- Speculative decoding is a technique that uses smaller models to speed up inference for bigger models (rough sketch below).
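To make the speculative decoding idea concrete, here is a rough greedy-decoding sketch; `draft_model` and `target_model` are hypothetical interfaces, not a real library API:

    # Speculative decoding, greedy variant (illustrative only):
    # a cheap draft model proposes k tokens, the expensive target model
    # verifies them in a single pass, and we keep the agreed-upon prefix.
    def speculative_step(target_model, draft_model, tokens, k=4):
        draft = draft_model.generate(tokens, num_tokens=k)          # cheap guesses
        checked = target_model.greedy_continuation(tokens, draft)   # one big-model pass
        accepted = []
        for guess, truth in zip(draft, checked):
            accepted.append(truth)       # the target's token is always valid output
            if guess != truth:           # stop at the first disagreement
                break
        return tokens + accepted

On a good step most of the k draft tokens are accepted, so you get several tokens for roughly the cost of one large-model forward pass.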
I'm sure smart people will invent other future use cases too.
Yes, there are even videos showing it on YouTube.