Having used Copilot for a while, I am quite certain it will replace me as a programmer.
It appears to me that when it comes to language models, intelligence = experience * context, where experience is what's encoded in the model's weights, and context is the prompt. The biggest limitation on Copilot currently is context. It behaves as an "advanced autocomplete" because all it has to go on is what regular autocomplete sees, i.e. the last few characters and lines of code.
So, you can write a function called createUserInDB() and it will attempt to complete it for you. But how does it know what DB technology you're using? Or what your user record looks like? It doesn't, and so you typically end up with a "generic"-looking function using the most common DB tech and naming conventions for your language of choice.
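For illustration, here's a sketch of the kind of generic completion you might get today. The DB tech and every column name are guesses on the model's part (I'm assuming Python and the stdlib sqlite3 module for the example), because nothing in the prompt says otherwise:

```python
import sqlite3

def createUserInDB(name, email, db_path="app.db"):
    # A Copilot-style generic guess: a "users" table with the most common
    # column names, since the model has seen no real schema.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    cur = conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
    )
    conn.commit()
    new_id = cur.lastrowid
    conn.close()
    return new_id
```

Plausible-looking code, but if your project actually uses Postgres, or your table is called `accounts`, every line is wrong.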
But now imagine a future version of Copilot that is automatically provided with a lot more context. It also gets fed a list of your dependencies, from which it can derive which DB library you're using. It gets any locatable SQL schema file, so it can determine the columns in the user table. It gets the text of the Jira ticket, so it can determine the requirements.
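A hypothetical sketch of what a schema-aware completion could look like, assuming the model has been fed a schema file declaring, say, `user_id`, `username`, `email_address`, and `created_at` columns (these names are invented for the example):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical: the model derives this from the project's own schema file
# instead of guessing generic column names.
SCHEMA = """
CREATE TABLE IF NOT EXISTS users (
    user_id INTEGER PRIMARY KEY,
    username TEXT NOT NULL UNIQUE,
    email_address TEXT NOT NULL,
    created_at TEXT NOT NULL
);
"""

def createUserInDB(username, email_address, db_path="app.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(SCHEMA)
    cur = conn.execute(
        "INSERT INTO users (username, email_address, created_at) "
        "VALUES (?, ?, ?)",
        (username, email_address, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    user_id = cur.lastrowid
    conn.close()
    return user_id
```

Same autocomplete mechanism, but now the output matches the project's actual conventions rather than a statistical average of everyone else's.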
As a programmer, you spend a great deal of time checking these different sources and synthesising them in your head into an approach, which you then code. But they are all just text, of one form or another, and language models can work with them just as easily as, and much faster than, you can.
And once the ML coding train gets rolling, it'll only get faster. Sooner or later GitHub will have a "Copilot bot" that can automatically take a stab at fixing issues, which you then approve, reject, or fix. And as thousands of these issues pile up, the training set will get bigger, and the model will get better. Sooner or later it'll be possible to create a repo, start filing issues, and rely on the bot to implement everything.
I don't find that reading code which is largely correct but still often wrong is a good experience, or that it adds any efficiency.
It does do a very good job of intelligently synthesizing boilerplate for you, but whether it's Copilot or AlphaCode, these models still don't understand the fundamentals of coding: causally, how a given instruction affects the space of program states.
Still, these are exciting technologies, but there's a big "if" around whether such a machine learning model will come about at all.
I'm skeptical it'll replace programmers in the sense of there being no more human programmers, but I agree in the sense of 100% human programmers -> 50%, 25%, 10% human programmers, with computers doing most of the writing of actual code.
I see it continuing to evolve and becoming a far superior auto-complete with full context, but, short of actual general AI, there will always be a step that takes a high-level description of a problem and turns it into something a computer can implement.
So while it will make the remaining programmers MUCH more productive, thereby reducing the needed number of programmers, I can't see it driving that number to zero.
It will probably change the types of things a programmer does, and what it looks like to be a programmer. The nitty-gritty of code writing will probably get more and more automated. But the architecture of the code, and establishing and selecting its purpose in the larger scheme of a business, will probably be more of what programmers do. Essentially, they might just become managers for automated code writers, similar to the military's idea of future fighter pilots relating to autonomous fighters/drones as described in this article:
Yup, I think that's it exactly. I just described this in another comment as the reverse of the evolution graphic design has undergone, which brought designers into programming front-ends.
I can't wait to see how far we're able to go down that path.
I have a feeling this is the correct read in terms of progression. But I'm skeptical it'll ever be able to synthesize a program entirely. I imagine that in the future we'll have some sort of computer language more like written language, used by some sort of AI to generate software to meet certain demands, but it might need some manual connections when requirements are hazy or need a more human touch in the UI/UX.
> But I'm skeptical if it'll ever be able to synthesize a program entirely.
Emotional skepticism carries a lot more weight in worlds where AI isn't constantly doing things that were meant to be infeasible, like placing in the 54th percentile in a competitive programming competition.
People need to remember that AlexNet is 10 years old. At no point in this span have neural networks stopped solving things they weren't meant to be able to solve.
I feel like you're taking that sentence a bit too literally. I read it as "I'm skeptical that AI will ever be able to take a vague human description from a product manager/etc. and solve it without an engineer-type person in the loop." The issue is that humans don't know what they want, and realistically programs require a lot of iteration to get right; no amount of AI can solve that.
I agree with you; it seems obvious to me that once you get to a well-specified solution a computer will be able to create entire programs that solve user requirements. And that they'll start small, but expand to larger and more complex solutions over time in the same way that no-code tools have done.