Entirely misses the point. The right frame for this is “AI won’t take your job, but humans with AI will” [0]
There are countless examples of people up-skilling using AI. Today, that might threaten the bottom of the market. Soon, it will put everyone’s jobs at risk of disruption.
I just don't buy it. This is the classic non-programmer's view of programming productivity. It is really easy to dash off hundreds of lines a day. Doing so feels great, but is in fact massively counterproductive. The hard work of software development benefits from deep thought, not shallow. Ten lines of well-tested, peer-reviewed code are more valuable than ten thousand lines of crud. Any time a programmer complains something will slow down their work, that's a point in its favor.
"AI"-based static analysis tools might be valuable. If I ever see "AI"-based code generation on a project I run before I get stuck cleaning up the mess.
Conversely, I think you express the view of a non-AI-assisted programmer.
LLMs don't just dash out hundreds of lines of code. They can take a concept and break it down into concrete steps or even a fully planned architecture for you.
That's literally the hard work you describe. AI can do it beautifully; it just needs a human capable of critical thought to steer it (human understanding of the subject at hand is merely a bonus, not a requirement).
Hard agree. I’ve been at this for 22+ years now, and GPT4 by my side has made me AT MINIMUM 2x as productive; on a good day, 5x.
The separation will very clearly be “legacy devs” vs “AI-powered devs”.
I’ve spent a tremendous amount of time thinking about how the problems should be solved on my current project — big picture, and small. Once the system design became clear, about 80% remains as “just write the damn code” type work. I outsource about 70% of that to GPT, while keeping my focus on the important parts; the right abstractions, modular and self-contained functions, good naming and function signatures, making sure it all works and makes sense in harmony.
I don’t care (too much) about the inner workings of every function, so long as the abstraction and usage of it is great (and well tested). Refactoring, if ever needed, is trivial then.
My overall work has never been better.
Anybody dismissing AI for any level of programming is doing themselves a huge disservice.
I've also been in industry for ~17 years and coding for a decade or so before that, but I'd seriously sit down and watch an hour-long video of someone just using AI to work on a large existing system. Virtually all the AI coding content out there is about building something new from scratch.
Could you elaborate more on your workflow? Do you use Copilot, or just GPT-4? Are you copying and pasting large blocks of existing code to hint at how things should fit together, or do you describe how the existing code is structured in English? Do you find yourself decomposing your work differently so as to fit your new AI workflow?
I use both Copilot and chat with GPT4. For now, it’s mostly copy/pasting large blocks of pre-existing code, types, etc. I have a pre-defined prompt set up to explain how to behave, languages I know, libraries in use, naming conventions, how to respond, etc.
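To give a sense of the shape of that prompt (the details below are made up for illustration, not my actual setup):

    You are a senior engineer pairing with me. I know TypeScript, Python and SQL.
    Libraries in use: React 18, Zustand, Vitest. Follow the existing naming conventions:
    camelCase for functions, PascalCase for components, no abbreviations.
    Reply with code first, then a short explanation. Don't add dependencies without asking.
    Treat the most recently pasted code as the current source of truth.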
I talk to GPT as if it’s a skilled colleague who’s got memory issues. When it forgets, I remind it with stuff like “this is our current code now; remember X, Y, Z”.
I’ve become an even bigger fan of small, composable functions that do one thing very well. GPT excels at that, both writing and testing them.
I don’t involve it much for architecture and high-level design atm (mostly because I got that part solved on my current project). I tend to have a design in mind, give GPT the overview of how it fits together with mock code, and ask it to review with me, propose other paths we could take, etc before proceeding to implement.
One function at a time, with tests. When refactoring happens, it’s usually isolated to a few hundred lines at most. When tests fail, directly or indirectly, I give it the full output and ask it to debug and fix. I find this helps GPT remember the code’s responsibilities when it gets lost/forgets. It also helps me avoid regressions when GPT returns functions that miss use cases we had covered before.
I keep long running chat threads — weeks at times, hundreds or even thousands of messages. The longer we go, the better it tends to perform (web app performance, even on an M2 Studio, does suffer after a while though)
At worst, GPT is a fantastic rubber duck. At best, it’ll help me see superior approaches and solutions I wouldn’t have considered, AND give me perfect code in seconds.
Once tooling gets really good, and AI can understand the whole code base/database/infra… we’ll probably be in real trouble.
Thanks a ton! Details around this use case are conspicuously missing from the public discourse.
You've really changed your workflow to adapt to these new tools. The amount of mental effort seems comparable to learning a new IDE, maybe a bit less.
Instead of the chat app, I wrote a Python script that uses the GPT-4 completion API. I can just pop over to the terminal and type 'chat' and it's there. As far as I can tell, it's basically the same as the app.
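There's really not much to it; a stripped-down sketch of that kind of script (this assumes the pre-1.0 openai package and an OPENAI_API_KEY in the environment, with a placeholder system prompt):

    # chat.py - minimal terminal chat against the GPT-4 chat completions API (sketch only).
    import openai  # pre-1.0 openai package; picks up OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = "You are a senior engineer pairing with me. Prefer code over prose."
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    while True:
        user = input("> ")
        if user.strip().lower() in {"exit", "quit"}:
            break
        messages.append({"role": "user", "content": user})
        resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        reply = resp["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        print(reply)

Alias it to 'chat' in your shell and it behaves much like the web app.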
We are starting to see AI tooling that can fit an entire 100k line code base in its context window. I still see myself having a job five years out. Ten years out, not so much. Luckily I'll be close to retirement.
I've got all of 30 minutes of experience with https://www.cursor.sh, but this appears to be another major upgrade to my workflow. In-context editing by AI, without any of the copy/paste annoyances.
The PR I'm currently working on has about 1K lines of diff – I've literally touched NONE of that code directly. Freakin' wild!
I suppose I have adapted quite a bit, but it’s just a different path to the same style of code/modularity I’ve found most effective/productive.
I was primarily FE for a while, and used to agonize over the smallest details when building e.g. React components. After a few years, I concluded that the innards don’t really matter most of the time, at least not early on, so I limited my obsessions to the “public interface” (naming, types and prop design) and libraries of well-abstracted, composable, low-level building blocks. If and when things need to be rewritten or optimized, I replace the internals.
ChatGPT just gets me there faster, and more often than not, gives me acceptable production-level innards while I get to stay focused on perfecting the exterior and cohesiveness of the overall solution.
I’ve found this approach usually pays dividends when reworking parts of the system, too — GPT picks up on the flow of everything much better when the code is “self-documenting”. Same appears true if humans need to get directly involved.
Still blows my mind on a daily that we’re here already. I’m glad I’m not early in my career, I’d be very worried if I had another 40 years to go.
> The separation will very clearly be “legacy devs” vs “AI-powered devs”
I don't think there will be many people not using AI, if any at all. Just like everyone learned to use Google and Stack Overflow, everyone will learn to prompt a chatbot. And since prompting a chatbot isn't exactly rocket science, we might see programming become a cheap commodity (and after that, pretty much most knowledge work).
I agree, but purists can be their own worst enemy. Prompting is still a bit of an art, but I’m sure it’ll get better. Still, seems silly to not leverage one of the greatest tool upgrades we’ve seen in decades. There’s plenty of interesting problems to solve, why waste time on the boring parts anymore?
Don’t get me wrong — I think this’ll be the end of MANY jobs within almost every field in the next decades, ours included. But I sure as shit can’t beat it, so may as well join it and get something out of it before I become just a commodity.
Everybody here on HN knows how to write, but not everybody submits quality posts/comments that get thousands of upvotes. The same dynamics will apply to a society where everybody can code. The prize for being in the top % of coders will only increase.
Agree. I've been using ChatGPT4 occasionally to write separate, self-contained pieces of code. It's amazing how good it is. The limitation right now is that you can't load your whole repo into it, so it's limited to small, self-contained pieces of code. You can give it functions it can call in the input, but it's still very limited right now. Once someone figures out how to load whole repo into it, you will basically be able to assign tasks/bugs to it and it will create a PR for you. So you will only need to review PRs written by AI.
> Once someone figures out how to load whole repo into it
Yep, that's the crux of the issue, I agree. It's not just one repo; any well-established org has millions of lines of code - can ChatGPT handle that kind of context size without any human directing it? It's basically having an LLM trained specifically on your company data, and having it retrained constantly since the data keeps changing - what are the costs of that?
We'll see.
I get what you're saying about 10 lines of well-tested vs 10,000 lines of crud, but this hyperbole is also a false dichotomy.
GPT-4 and Copilot absolutely make me more productive as a programmer: wiring up all the tests, getting maybe 80% of them right, and then I fix them up. I am not producing 10,000 lines of crud, but I AM producing 1,000 lines of test-covered code; I'm doing the hard bits, and the AIs are doing the menial parts.
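To give a flavour of the "menial parts": the scaffolding I hand off tends to look like this (a hypothetical function and cases, not real project code; I still review every assertion and fix the ones it gets wrong):

    # Hypothetical example of AI-drafted boilerplate: a small pure function plus
    # parametrized pytest cases. The hard bit is deciding what to cover, not typing it out.
    import pytest

    def normalize_phone(raw: str) -> str:
        """Strip punctuation/whitespace and a leading US country code from a phone number."""
        digits = "".join(c for c in raw if c.isdigit())
        if len(digits) == 11 and digits.startswith("1"):
            digits = digits[1:]
        return digits

    @pytest.mark.parametrize("raw,expected", [
        ("(555) 123-4567", "5551234567"),
        ("+1 555 123 4567", "5551234567"),
        ("555.123.4567", "5551234567"),
        ("15551234567", "5551234567"),
    ])
    def test_normalize_phone(raw, expected):
        assert normalize_phone(raw) == expected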
I am by far the most productive programmer on the team however you measure it (we have about 60 different metrics available to management), and a large part of this is having enough energy to always be attacking the hard problems, because I am not wasting mind power and time writing the trivial bits that can be automated.
Does it get it right every time? Nope. Does it make hilariously wrong mistakes? Yep, about once or twice a day! Is it worth it? Absolutely.
It is great. I use them to self-improve all the time; I like things that are critical of me, or show me where I am weak.
Everyone (including management) is aware of all of the tropes and cliches about how metrics cease to be good when they become targets and how metrics can be gamed blah blah blah - we all know that - and have imbibed that, but the rationale is that more (but imperfect) visibility is better than no visibility or solely subjective interpretations.
As long as all metrics are assumed to be wrong until proven correct, with some digging, some thorough analysis and a judicious and abundant use of the benefit of the doubt, I find them really helpful.
Starting with FORTRAN, the last 70 years have involved continual improvements to make programming faster, easier, higher-level, easier to learn, and more accessible; and each one of these has just made programming an even more useful and important skill. I don't currently see any reason why ChatGPT should be any different.
This article is perfect. Basically it sums up to: "what's more likely, that every job disappears (never happened in human history), or that we get yet another expansion of products and innovations (always happens)?"
"but this time is different" - I still have not seen a single example that shows otherwise.
Remember the Luddites. Yeah, the group considered to be 'anti-technology', except that isn't what happened. At the time, they were kicked to the street to starve with nowhere to live.
AI does not have to replace every job to completely and totally disrupt society in ways like leading to world wars, or the rise again of fascist governments. Blind faith that the magic hand of technology always makes things better is just a form of historical ignorance of all the social changes that had to occur with it to ensure that we were not enslaved by those that owned technology.
Historically, efficiency increases from technology were driven by innovation from narrow technology or mechanisms that brought a decrease in the costs of transactions. This saw an explosion of the space of viable economic activity and with it new classes of jobs and a widespread growth in prosperity. Productivity and wages largely remained coupled up until recent decades. Modern automation has seen productivity and wages begin to decouple. Decoupling will only accelerate as the use of AI proliferates.
This time is different because AI has the potential to have a similar impact on efficiency across all work. In the past, efficiency gains created totally new spaces of economic activity that the innovation itself could not further impact. But AI is a ubiquitous force multiplier; there is no productive human activity that AI can't disrupt. There is no analogous new space of economic activity that humanity as a whole can move to in order to stay relevant to the world's economic activity.
I mean, everything goes through boom and bust cycles. If you want a relatively safe job, think about becoming a public servant. However, dev jobs are some of the best jobs in the world, as they allow one to work in many sectors and the pay is generally very good.
I invest and save aggressively. So a bust cycle lasting a few years won’t matter to me.
Additionally, keeping life expenses reasonable and not getting sucked into consumerism keeps the cost of living low.
The awkward truth is that every other job you can take will appear inefficient. After some time you'll crave a computer to come do aspects of that job too... and hey presto, you're back on a client, coding up some automation feature.
Your subtle observations about inventory management, scheduling and mathematical errors in planning, bugged management so much they took you off 'normal work'. Then you're tasked with automating easy coding tasks that are hard because of outdated company structures, with limited oversight and no fellow bro-grammers to get you out of a jam.
If you do manage to 'lose' a programming job, please tell me how. This career is a one-way ticket to infinity, because you know.
> Don't really know where to go next, I don't see that many alternative well-paying jobs out there.
Forget well-paying, I have no idea what I CAN do that's not programming. Sure, I can try working as a teacher or as a nurse - but that's all it will be - trying. I might despair, burn out, etc. Even people who are on paper better suited to these jobs burn out all the time.
I'm 39 so it's not going to be an easy switch.
I hope we all have 5-10 more years in the industry, that's enough time to save enough money.
I'm actually developing a product that writes database queries, SQL and Cypher, from natural language. ChatGPT makes "coding" more accessible for everyone. It will not replace real software developers, but it will help a large number of people with zero coding skills to perform simple data manipulation and data extraction tasks.
The challenge here is that regardless of how simple the input language can be, you'll still need the knowledge as a user to know what to ask in the first place! For example, I've worked with product managers (these "zero coding skills" folks you mention) who didn't even understand the basic concepts of HTTP - so how can we expect them, armed only with an LLM, no matter how powerful, to ask the right questions and actually build something useful, let alone entire applications? (Not insinuating this was your meaning, but this is what I see repeated in many places.) I think what's actually useful to no-code folks is a WYSIWYG editor, and even those always suffer pitfalls at some point, eventually needing custom development somewhere.
Languages like SQL are probably the killer app for this generation of LLMs like GPT 3.5 and 4. It can work very well, especially if you give it some application specific hints.
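A sketch of what those application specific hints can look like (using the pre-1.0 openai package; the schema and names here are invented for illustration):

    # Natural-language-to-SQL with the schema supplied as a hint (sketch only).
    import openai

    SCHEMA_HINT = """You translate questions into PostgreSQL.
    Tables:
      customers(id, name, country)
      orders(id, customer_id, total_cents, created_at)
    Reply with a single SELECT statement and nothing else."""

    def nl_to_sql(question: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": SCHEMA_HINT},
                {"role": "user", "content": question},
            ],
        )
        return resp["choices"][0]["message"]["content"].strip()

    print(nl_to_sql("Total revenue from German customers last month?"))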
Probably because the designers of SQL got us as far along towards natural language as they could, and LLMs were built for translation.
GPT-4/5 will change the nature of the job: instead of digging through a pile of smelly code to find the right place to fix something, you'll ask GPT for a list of suggestions, and the top suggestion will usually be correct. However, if it generates an arcane script to update your database, you can't trust it and can't verify the script without assistance from an expensive software dev. There are some similarities with lawyers: GPT-4 can write a lengthy contract for a few cents, but you still need to pay a lawyer thousands of dollars to verify it.
However, when GPT gets fluent in formal logic, the number of knowledge workers needed to run a business will shrink tenfold at least.
Agree. I haven't seen any examples of ChatGPT that might make me want to use it for my job. Yes if you are an inexperienced programmer (or just not very competent) then it might help you get something working I guess?
A couple of things I’ve found it useful for as a senior dev:
- Working with unfamiliar APIs which don’t have great documentation. It can be quite helpful for asking questions here, essentially taking the role of Google/StackOverflow, as long as the API was created before the knowledge cutoff and I guess as long as there’s enough content out there. It felt to me like it was able to do quite a good job of linking together the (rather terse) official docs with code from GitHub to show real world examples of how to do stuff. This was definitely net useful for me, but it does like to hallucinate APIs that should exist but don’t.
- Working with unfamiliar languages. I used it to help me write Rust recently (“how can I express this code more succinctly?”, “why won’t this compile?”) and it was quite useful, though I feel like you hit the limits of it if you get deeper into e.g. trying to find a workaround for a borrow checker situation and it’ll just make up code that doesn’t work.
Outside of that, I find it can be quite useful as a supplement to Google, and is good for writing things like regexes, but when I’ve tried doing more advanced coding tasks with GPT-4, I felt like I ended up spending more time trying to make the output compile and work (sometimes without success) than if I had just written it myself. It can be good for working out the broad outline of a solution though.
Overall, I find it a useful tool but I am sceptical of people claiming a 5x increase in productivity thanks to GPT.
Every advancement that simplifies coding potentially displaces a job. If tools like GPT-4 amplify my efficiency by even 1-5x, it's not just code— it implies a broader workload shift across the tech sector.
I'm also wondering if it'll impact compensation in any way, as the work becomes easier and more accessible, I'd expect the pay will lower, since more people will be able to do the job with the help of AI.
From my perspective, hailing from Germany, I don't anticipate a decline in developer compensation. Instead, I foresee a future where companies will be selective, opting for developers who are adept at integrating AI into their workflow.
As we begin to reshape our approach to project construction, the inclusion of AI systems will become a standard component in our calculations. This realization will inevitably lead us to understand that certain tasks no longer require as many developers as they once did. Consequently, there might be a reduction in hiring rates for developers.
Nevertheless, I remain optimistic that while the demand for developers might become more selective, their compensation could potentially increase due to the specialized skills required in an AI-integrated landscape.
Maybe, but LLMs are trivial to integrate; it does not take any special knowledge or experience. Integrating in-house-trained models is difficult, but LLMs really change that.
Programmers used to program in assembly. I'd argue coding skills in modern languages are akin to knowing assembly when C was invented; how programs are written will change.
Actually, Fortran and COBOL _did_ replace programmers, as far as I know. It was largely women at the beginning because the work was quite tedious and the first computer languages were assembly or machine code. They were so far from the application or the mathematical language, and often had to be programmed through difficult processes, like sometimes literally flipping switches, or input systems that were slow to set up.
But then came high-level assembly languages and even higher-level application languages, and easier-to-use input systems. "Programmer" used to literally be the person either translating some finished math into assembly language, or actually encoding code onto punch cards, or both.
All of that went away with better tools. So when this article claims that this stuff didn't take jobs, they are wrong. It literally did.
The other thing the article gets wrong is that people are concerned that ChatGPT and the current generation of LLMs will literally replace software developers today in equivalent fully general capacity. No one thinks that. No one whatsoever.
What is obvious, however, is that it is possible to use something like the existing ChatGPT API and models to do certain simple specific programming tasks. That means that _some_ things you might have previously hired a programmer for can now be done automatically.
The applications that I have got working are things like creating a chat interface for querying a specific database, where the system generates the SQL and then formats or summarizes the output. Or creating a web page from a template with just a little bit of custom functionality.
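The loop is roughly: generate the SQL from the question, run it, then hand the rows back to the model to summarize. A compressed sketch (sqlite3 standing in for the real database; all names here are illustrative, not my actual product code):

    # Sketch of the query-chat loop: natural-language question -> SQL -> execute -> summarize.
    # sqlite3 stands in for the real database; names here are illustrative only.
    import sqlite3
    import openai

    def ask_gpt(system: str, user: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp["choices"][0]["message"]["content"].strip()

    def answer(question: str, schema: str, db_path: str) -> str:
        sql = ask_gpt("Translate the question into a single SQLite SELECT.\nSchema:\n" + schema,
                      question)
        # In anything real you'd validate/whitelist the generated SQL before executing it.
        rows = sqlite3.connect(db_path).execute(sql).fetchall()
        return ask_gpt("Summarize these query results for a non-technical user.",
                       f"Question: {question}\nRows: {rows}")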
So the current version of GPT cannot replace the actual abilities of a programmer, but these AI-powered app or analysis generators are definitely chipping away at the number of basic but custom tasks that you need to hire a programmer for.
The main concept, though, which the author seems not to get, is that these systems will continue to improve year in and year out. And the pace we are on, even assuming it slows down a bit, means that we can expect the complexity of programming tasks that LLMs can handle to keep growing.
There is no reason to believe that it won't pretty shortly end up being better than any human. Especially when you can narrow down the task to a specific popular type of application.
As soon as I saw ChatGPT last year, I realized that this was eventually going to impact jobs for software development. To what degree and how soon was not determined, but it is obvious it will have a very large impact at some point.
So within two days I started working on my own code generation startup leveraging GPT. Within 1-3 years I expect the competition in this area will be very intense. Within five years it will be very obviously competing directly with freelance programmers for small contracts. You will literally see articles like "Use this ChatGPT Plugin instead of spending $10000 on a Freelance Programmer" and it will work.
Actually right now I know there is someone who always advertises on HN for building a GPT-powered chatbot for $10000. There are websites that will make the same thing for under $100 per month. At least one that can generate code for custom chatbots. And I am planning to add chatbot capability to my platform which my code generating AI system can easily hook into. So my system will be able to build custom chatbots also within a few months. As a ChatGPT plugin with hosting for $15/month. Directly competing with that guy charging $10000.
Of course ChatGPT isn't going to replace any programmers now. Is anyone that has actually tried it seriously worried about that?
The reason people are worried is because it provides strong evidence that AGI might actually be possible within a few decades. Before that we really had no idea.
How tediously derisive. Part of the reason ChatGPT is seminal is that it shows that a "next word predictor" can display extremely sophisticated capabilities; more than the sum of its parts. It might not end up in AGI, but it's clearly a huge step closer - taking us from "will it ever happen?" to "it actually might happen!".
It's not a huge step closer. Consciousness, desire and will are orthogonal to ChatGPT's predictive modeling. Tedious is continuing to reiterate this point every time someone tries to spring the AGI doom-trap.
Nope. AGI has a fairly well accepted definition - something with human level cognition. The ability to replace humans for most tasks.
Bringing up things like sentience and consciousness muddies the waters because nobody knows what those even are. HN is full of confident and unfounded assertions that ANNs fundamentally can't be conscious. Or even that they can't reason!
Well, I agree that there is a reasonable and more correct definition of AGI something like you say, but unfortunately the term is not being used with that connotation consistently. The majority of people use it in a very fuzzy way, often incorporating ideas like consciousness.
But also even in your statement, the word "level" is problematic. Because there is more than one dimension to the cognition. See Yann LeCun's recent criticisms, many of which are correct. Yet it's clear that LLMs have their own type of often useful and fairly general reasoning.
Can you? The burden of proof is on the AGI is nigh camp to prove they are not orthogonal otherwise it's just people handwaving and abusing the notion of "intelligence".
I know what consciousness, desire and will are because I experience them, and act on them, every day. People pointing at things outside themselves to prove intelligence have a much higher bar to clear.
The very idea of intelligence as a discrete quantity is foolhardy and doesn't match what we know to be true in the animal kingdom. Intelligence is deeply rooted in an organism's sensorium.
[0] https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-wi...