Looking back, we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. by 2017.
His prediction was based on the trends he was seeing at the time, and it wasn't even AI. He made it because he saw the rise of no-code tools that would replace software developers: managers could finally cut out the pesky "translators".
I said it then and I will say it now: if your managers want to specify what they need in a manner precise enough that no-code tools, or now AI, can generate the code they want, they will have to be extremely exact in their language. So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
Of course I'm leaving out some languages here, but the level of abstraction has been rising.
But the rigidity needed to specify a program that really does what you want hasn't changed. This is especially evident in recent programming language developments, which often shine in a specific area without raising the abstraction level all that much.
I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
> So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
This. I started programming in Machine Code, where the "editor" was a pad of graph paper.
I've watched management- and business-focused people sneer at us geeks for my entire career, and watched them drool over the thought of finally getting rid of us.
Hasn't happened yet.
> I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
Inevitably, the above is followed by something that translates to "Of course, you will do all the work, but I'll get all the money, because the idea is what's really valuable."
If I follow up on it at all, I'll be asked to sign an NDA, and meet in some clandestine place, so they can whisper to me about their AI-powered cheese straightener.
I agree with your characterization in general. If all someone has is an idea and no relevant experience to back it up, run. I can’t stand people like that.
But there’s one type of ideas person I’m thrilled to work with. Someone with deep and successful experience in sales. The head of sales at my company is also relatively product oriented and boy, he just has a knack for what customers want.
In general, I think many engineers can do some product management. We can figure out basic accounting and finance. But the idea of making a cold call to sell dental software, or chasing down school principals at an education conference is almost as foreign to most software engineers as CS is to most sales folks.
There’s a role called sales engineer that typically exists in the sales organization but requires a technical background. From the engineering org’s perspective, your SEs are there to make sure sales doesn’t oversell or undersell. At my company, SE pay is commission-based but with a higher base than sales proper.
Solution Architect is another common title. As someone else mentioned, Developer Relations is at least adjacent though that tends to be less salesy and, depending on the role/company, may be more focused on community aspects than building demos and the like. In any case, at software companies, there are definitely customer-facing roles that are more technical though they may not involve a lot of direct coding. For non-field roles, product management qualifies as well.
I would say an SA (Solution Architect) is very different from an SE (Sales Engineer) in one fundamental way: the SA is focused on delivering value post-contract, and the SE is focused on making sure the org is able to deliver the value promised, by working with sales in the pre-contract phase. In some smaller companies, or organizations within a company, I can see these roles being performed by the same person, but in general that distinction between pre- and post-sales has been my lived experience.
The SAs I have met so far (30+) can architect and, more importantly, talk about and market the solution. But the execution is oftentimes lacking, and that is solely because the execution is less important. (If the contract is big enough, the SA can always lean on product engineers or hire contractors to execute.)
Being the engineering contractor to an SA organization is daunting for a programmer, but rather rewarding.
Where I work, SAs are pretty much technical pre-sales. (There are also chief architects who are somewhat related.) If a customer needs ongoing post-contract support, they can buy a TAM (technical account manager). We don't have sales engineers.
(When I worked for a long-ago hardware company, system engineers (SEs) were pretty much the equivalent of SAs where I am currently.)
In my company, all of our sales people have a technical background. They are not the best engineers in the company, but they could do the job if they had to. More importantly, when faced with technical people, they are not completely clueless.
Agreed. I have a thousand tools mastered to solve any software issue quickly, but people issues can still stump me for days. Both skill sets take dedication and years (decades?) to master.
In Bullshit Jobs, Graeber terms our current system "managerial feudalism". It's not capitalism in the way Adam Smith or Marx envisioned it; we don't have firms brutally competing to generate the most value at the lowest cost. Instead we have a system of rent extraction largely done by a small number of powerful players. You don't need to be efficient to extract rent, so you end up with a court, or an org chart in modern terms, of useless hangers-on that mostly serve to demonstrate your wealth and power.
This seems overly pessimistic (unsurprising given that it's written by an eventual fascist). Egalitarian societies do exist, but they take work to maintain. The Mondragon corporation functions as a cooperative despite its large size. I know less about its day-to-day organizing than I'd like, unfortunately; I'd be very interested to learn more.
I kinda wonder if it would not be so bad if a company asked ChatGPT to “prompt the engineers for status updates occasionally, then take their responses and produce high-level summaries for department heads” or whatever.
On their side, the engineers might even ask it “summarize my tickets and come up with an optimistic, business-friendly high level storyline.”
Stop "prompting" me and pay attention. I do SO much communication about what I'm working on, yet it's not enough, and I also need to do YOUR job for you. And manage your kanban. And compile all the things I've done at the end of my term. Can't wait for AI to replace you; you are useless.
The only time I see synergies is when we talk about what task to take on next and why.
Tell ChatGPT it's a product manager overseeing engineers and providing executive summaries. Create a system that feeds executive emails inquiring about project statuses into ChatGPT, which then emails the developers asking for updates. Developers reply with jargon-heavy details about tickets for the next milestone; ChatGPT writes an executive summary that eliminates the jargon and sends it to the executives.
Not the full job, but it can probably be built right now.
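A minimal sketch of that loop in Python, with the model call stubbed out. The prompt wording, function names, and routing are all my own assumptions, not a description of any real product; wiring in email and a real chat-completion API client is left out.

```python
def build_summary_prompt(dev_updates):
    """Compose the messages for a chat-style LLM: act as a PM and turn
    jargon-heavy status updates into a short executive summary."""
    system = ("You are a product manager overseeing engineers. "
              "Rewrite their status updates as a short, jargon-free "
              "executive summary.")
    user = "\n".join(f"- {u}" for u in dev_updates)
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def summarize_for_execs(dev_updates, llm):
    """`llm` is any callable taking a message list and returning text,
    e.g. a thin wrapper around a chat-completion API client."""
    return llm(build_summary_prompt(dev_updates))

# Stub the model out for a dry run; swap in a real API client to use it.
fake_llm = lambda msgs: f"{len(msgs[1]['content'].splitlines())} update(s) summarized."
print(summarize_for_execs(["FOO-123: refactored the auth middleware"], fake_llm))
```

The point is that the "PM" part is just a fixed system prompt plus plumbing; everything hard is in the quality of the updates going in.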
The bad versions of all three look exactly the same, and the bad versions are actually worse than nobody at all. Yet high management is completely convinced those people are essential, so they'd rather keep a bad professional there than get rid of them.
Anyway, the good versions of those three are completely different, and they add a ton of value in very different places. I think they are rare enough that many people never meet one of them.
Human managers (who are good at their jobs) bring a little Bedside Manner, a modicum of compassion, to Human Resource Allocation. That will soon be seen as a luxury compared to low-cost, fully-automated Human Resource Monitoring & Reinforcement Systems.
Good managers shield their reports from a lot of the crap that is inevitable at scale in a large organization. A large company can't just have individuals and small self-managed teams go off and do whatever they feel like. And, yes, if you just cut out all the intermediate layers, the VP with 150 direct reports will simply have to resort to automation based on simplistic metrics. "You didn't produce enough lines of code last week. If that happens again, you will be placed on a performance plan and then terminated if the situation doesn't improve in 2 months."
Literally all of those layers including the VP could be replaced right now.
All those jobs do is push emails rewording other emails to other people pushing email.
I feel like eventually every company will have a single figurehead who's fed instructions but believes they're his own original thoughts, and is told by the AI all day long what a great idea he just had.
I think it's simply because upper management doesn't trust the people they've hired.
In my mind, having individuals and small self-managed teams go off and do whatever they feel drawn to do is exactly how a good company is run, so long as those teams and individuals talk to (and seek advice from) the rest of the company when their work has impacts outside the team. The book Reinventing Organizations by Laloux talks about this a lot, and about how it works in some companies today.
The reason it doesn't happen more is that upper management doesn't trust their employees and they don't feel in control when people just do things.
If GPT or Copilot makes developers 5-50x more productive, then you need just 2 good programmers instead of a team of 10-100. Out the window go all the formalized agile processes, SAFe, etc., so you no longer need the managers who do the communicating, process planning, and other big-business stuff. Of course this will also affect developers, but I bet less, because there is a huge need for software developers. Mid-size businesses didn't have the money to hire a huge team of developers, but that will change when you only need two people instead of 40.
If C makes developers 5-50x more effective compared to writing assembly, then you just need two devs instead of a team of 10-100.
Obviously that didn't happen. And people actually made predictions like that once upon a time.
As software has become cheaper and can be produced with fewer people, demand has always increased, and I don't see any reason that's going to stop.
I recall seeing a video in which a senior vice president of Symbolics (yes, the Lisp machine people) claimed that their aim was to put the largest software projects within reach of a small team, and everything smaller would be feasible by a single person.
Had Lisp machines (as an idea) remained viable, we might've come close! But managers just love having large teams to command, and we've invented process and ceremony to fill up the spare time necessary to do the work of ten programmers with a hundred, as well as rejecting technologies like Lisp and Smalltalk that give individual programmers tremendous leverage over the problem space.
Should AI actually make people more productive, instead of being an endless generator of messes for fleshbags to clean up, I imagine some successor of Scrum to come along to tie up most of staff programmers' work days with meetings. Hell, that may happen anyway.
I don't disagree; we just don't know yet. I do think those 2 guys will be overworked as f*** and constantly context-switching to the point they barely know what they're doing anymore... I hope we don't get there.
Funny enough, last week I was talking about Copilot with a colleague. Out loud, I wondered if there was a diminishing return from Copilot. That is, Copilot is great at filling in boilerplate / the obvious, and I get to focus on the more intricate parts of the solution. But can my brain operate at the higher level day after day? Week after week? TBH I have my doubts.
Maybe it's going to be that AI enables quality more than quantity?
Same thing is happening with doctors. I read a post from a GP a while back talking about how it used to be that he got a wide mix of acuity/difficulty all day long. Now all the "easy" patients go to the Nurse Practitioner, and all he gets is the difficult ones.
He's still expected to put in the same amount of time, but now he has to expend brain power the entire time.
I learned in a different thread on HN that with a sophisticated chatbot, lurkers dare to ask support questions. Like many developers, I've often asked people how to do things. Before asking I try to find answers on Google (as asking easily googled questions is lame), but if the chat channel is busy and I've already asked for help in the last 30 minutes, I do more Google searches. Each successive question feels more expensive. Apparently I prefer to ask Google first. So if there is a chatbot fit for the question, logically I should go "bother" that before asking humans for help.
Asking a human manager for anything is many orders of magnitude more frightening. It's not that I don't have questions! I have many, of which a lot are dumb questions, questions that I should be able to figure out without help.
Say I'm pondering taking a day off. With a human I would make up my mind, pick the day, and ask for the day off. With a bot I would ask which day would be ideal before making up my mind. A human manager would think you'd gone insane if you asked 20 such questions; you either want a day off, you don't, or you say nothing.
I might like a different brand of coffee while we're trying to meet the deadline. I'm sure that question is going to go down well with an overworked human manager.
I don't think in 40-ish years I have ever asked a manager what's a good day or week for me to take off. I probably have certain constraints and I can look at my calendar. And have a general sense for good and less-good periods.
Purely out of curiosity, I once asked for a risk analysis report my employer was legally obligated to maintain and share with me (but didn't have). Safe to say the response involved a lot of emergent behavior.
With people, asking a question is never just asking a question.
Like with any job, there is a subset of tasks better done by a machine.
There are questions that should be asked but won't be, and ones that could be asked but shouldn't be. Where the lines are depends a lot on the people, their relationship, and the context, but if it's just a bot you can ask anything. The answers would somewhat depend on your role, but they won't depend on who you are or how many questions you've used up in the last hour, today, this week, this month, or this year. Humans are like that; we say "I've never asked for anything!" as if it's an important metric.
> If your managers could specify what they need in a manner that no-code tools, or now AI, can generate the code they want, they will have to be extremely exact in their language. So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
This is also one of the main reasons why all programming jobs were not outsourced to India.
Couldn't agree more with this sentiment. And to expand on it: the great outsourcing events we saw in the mid-2000s didn't work out for many things outside of programming either: IT consulting in general, support and operations, call centers, and things like design and architecture. The barrier was not always technical, but often a misunderstanding of how BaU works in the <parent_country> vs. offshore, and/or what the ask/expectations were. There's a lot of waste when you need to be overly explicit and still have the message misinterpreted, interpreted too literally, or simply not understood.
> This is also one of the main reasons why all programming jobs were not outsourced to India.
There's a whole industry here in America that re-shores programming contracts. They know they can't underbid Indian/foreign body shops so they just wait a few months and call back the companies who went with cheaper programmers. If the company is still around it's generally a complete re-write.
Great point. A lot of folks forget that not all programming jobs can/should be outsourced. There's value in outsourcing but specifications and contexts change that.
Yeah, often the "creator", the "head", gets credited with everything a whole team came up with. Sure, picking out the good ideas and directing them into a consistent whole is important, but they would be nothing without the people who produced the work in the first place.
Interestingly, that's probably another area where a language model could be put into service. Consider, every workman with access to say GPT-6 now has an expert lawyer, MBA, secretary, manager, etc. at his beck and call. What happens when every individual can navigate the system as well as an entire team of professionals? I'm imagining headless corporations where the only humans involved are those engaged in physical interface with the world.
I think the claim that PhDs and postdocs are fungible is what enables credit to be concentrated to the PI.
In top places though, it’s often the case that trainees have an idea, get shut down by PI, trainee demonstrates project can work, then PI changes mind.
Also in science, the usual progress of a project is often PI has an idea, trainees explores the idea, finds an even better idea, PI says great now write the paper.
But I agree with you in the sense that PIs act as the "quality control" or "selection process" whereby ideas get culled and refined. And their scientific taste is non-fungible.
True. And once their product hits the market, if it ever gets there, they don't thrive.
Prior to launch they are sooooo in love with their idea that they are meticulous about features all the while thinking they're smarter than the market.
They don't understand and appreciate the value of execution. Ideas are easy. Execution - because it involves people as well as adapting to change - is 10x harder.
Yes, those people exist. Unfortunately, that bias will ultimately undermine them, but they'll never admit it.
One of the things that I'm fairly good at, is walking people from "Crazy Idea That Will Never Work," through to "Finished Product That People Want."
It tends to be a very long process, and often involves a lot of "trial balloons." I just went through that, in the last couple of years. The project we're realizing, looks absolutely nothing at all like what the CEO originally dreamed up, but everyone that has seen it, loves it.
The trick is to not start off by saying "It'll never work." That slams doors shut, right away.
It's more like, "OK, so let's walk through what we'll need to do, to make it work."
That will often result in changes being made, by the "idea person," as the plans are laid. We will also try to create test harnesses and prototypes. These often end up, with the idea person going "Hmm...it seemed like a good idea, but it doesn't work the way I wanted."
Almost all problems I see are just resource-limited in some respect (occasionally it's something humanity just doesn't know how to do yet; that's where the edges of science are). It's rarely something brand new that's never been done or explored; some prior work typically exists.
When someone says "can we..." or spills their idea, they've often identified a real need, but they often don't seem to do any analysis beyond identifying a demand signal. Chances are you're not the first person to observe the problem and an opportunity to capitalize on it, and there may be a good reason no one has yet (it's economically infeasible given current understanding, resources, and the value people see in it).
The trick is to hold their hand and walk them through a feasibility analysis. Given infinite time and money, I'm confident I can arrive at pretty much any solution or at least a useful approximation that's an improvement on where we currently stand, so we start there and walk backwards. How much time do we really have, how much money do we really have. What's your appetite for risk within those bounds. Let's start talking about some tradeoffs of what you want to see and what I think we might be able to actually achieve.
> When someone says "can we..." or spills their idea they've often identified a real need but they often don't seem to do any sort of analysis further than identifying a demand signal.
In my experience, they've identified a want. Either their want or a want they think the market has.
It's not until further analysis do they / you / we get to root need(s). This is why (allegedly) so many "IT projects" fail. The client got what they wanted...not what they needed.
In the project I’m on, I’m basically the CTO (sort of meaningless, as I’m the only front end programmer, and also do most of the backend).
I could be a lot more of a pain in the butt than I am, as I have a lot of leverage, but one of my goals is to help others on the team (including a young CEO) learn how to ship software.
If I were working on my own, I probably would have shipped eighteen months ago.
It's the usual: "I have a great idea for a Startup, now I just need the money to hire some Developers to implement it...". The Winklevoss twins for example come to mind...
Is that truly fair, though? I have no exposure to MBAs, and it may be that all the negative talk about them is based on the conspicuously bad ones. I'm speaking as an IT person with very little experience of running a business, and it may be that a good MBA could be a great asset. I genuinely don't know.
I'm reminded of reading about a Lisp machine company that was run into the ground because it was managed by techies. Their tech may have been great, but their weak marketing and business skills were very arguably what killed the company.
MBAs are HN’s favorite punching bags. They are the only stereotype that it’s still ok to blanket-generalize about and dunk on here.
Some MBAs are dumbasses. Some are brilliant business people. Some of them actually have tech skills and have been writing software longer than you’ve been alive. This is true for many degrees.
I don’t know how some of the anti-MBA commenters think it works: A smart software engineer goes off to business school, gets an MBA in addition to his undergrad technical degree/experience, and now… what? He’s somehow dumber? How does that work? Only on HN.
Because the promise of the MBA program was that you could be air-dropped into an industry that you had zero experience in, and your MBA education would automatically make you the smartest in the room. You (supposedly) didn't need that industry experience, applying MBA principles would be enough. And it destroyed enough companies that they developed a not-undeserved reputation.
Now, if you already had relevant experience, then went and took the MBA course, and were able to identify when to lean on which, great! But, of course, part of that education is telling you that you don't NEED that experience in the first place, and if it and MBA principles disagreed, you should rely on the MBA principles.
So, if you took the course truly to heart, then, well, yes, you would be somewhat "dumber", so long as you define that as "unable to make correct decisions."
Having been a software engineer and gone on to do a MBA, none of what was taught for the degree reflected the attitude that you're describing. At no time did anyone tell us to ignore our prior experience - rather we were told that that our prior experience plus a business education would be valuable to future employers (which it is).
There was no expectation of being "air dropped" into an industry, and in fact, industry recruiters looked for candidates who already had prior experience when picking students for interviews and hiring. The students who had no relevant experience for the industry they were recruiting for (e.g. ex-Peace Corps or mgmt consultant looking to transition to tech or industry in general) were the ones who struggled to find roles. Your assumptions about what an MBA education entails are incredibly far off the mark.
I have an MBA and a CS degree so responses like this make me chuckle. I generally don't bother to engage with those with an MBA-phobia who haven't done it themselves.
I don't have an MBA, but I studied intercultural management for my Masters, and I am writing my PhD in management right now. To be honest it is quite zen-like: with the degree you understand how to manage engineers by not trying to manage them. Other types of people mostly need closer alignment. I know a lot about how to make a company more profitable and, generally, a nicer place to work. Management is actually quite like engineering, more of a craft than an exact science. The science helps you have names for things, and a lot of the time I really do know how to fix people stuff.
> I've watched management- and business-focused people sneer at us geeks for my entire career, and watched them drool over the thought of finally getting rid of us. [...] They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
I've met my fair share as well. Jobs copycats too!
The trick is always to ask them how much they can raise, and do so assuming they raised successfully in the past and that they know VCs.
Either the conversation gets really interesting, or they clam up real quick!
> The trick is always to ask them how much they can raise, and do so assuming they raised successfully in the past and that they know VCs.
They don't need the ability to raise anything in a B2B setting, if they can answer the alternative question:
"How many customers have you personally signed up in previous software sales?"
I'll happily partner with someone who has profitably sold B2B software - having a list of existing contacts in existing companies as well as solid experience going through the entire sales channel is, to me, just as valuable as extracting money from VCs, because I'd rather have the money from customers than money from VCs.
To some level I agree, but at some point there is also much on the "business side" that cannot be easily dealt with. Like having connections or some kind of relationship with the people who will buy the stuff, or knowing people who would be interested in buying what you make.
Yes, there are these "idea people" who don't have any clue about the business side, don't have any clue about the technical side, and in the end don't even have the right connections or business network. But they think they can make it because they have an Idea(TM). They could basically play the lottery and the outcome would be the same: they might make it, but the chances are 1:1,000,000 at best.
Then there are the business people who have the right connections and an understanding of the niche/business they are in, who need technical help to execute their idea, and they are worth their weight in gold.
> I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks.
I met a pair of such idea guys once, they ran their own idea company. Their specialty was brand consulting and online marketing, but if that involved something as menial as web development they'd help with that too. The thing that made me bristle was whenever they talked about the feasibility of software solutions for their ideas, they said "Oh, we'll run that through the machine." You know, as if their software team was just a big machine that ingested ideas and spit out code.
Their idea company is no longer around. One of them decided that he'd rather design hiking boots than anything computer related; the other is a head of UI design at Apple.
> I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
Ideas are like assholes ... everybody has one and they're usually full of shit.
> His prediction was based on the trends he was seeing at the time. But it wasn't even AI. Instead he made this prediction because he saw the rise of no-code tools replacing software developers because managers could finally cut out the pesky "translators", i.e. software developers.
It might sound really crazy and stupid today, but when SQL came out, it was advertised as one of the "program-generating" languages and was supposed to reduce the need to code.
My career predates relational/SQL databases so I can confirm what you wrote. When Oracle came out the buzz was that a whole bunch of programmers would go away because managers and executives could write queries in a “natural” English-like language.
That never happened. SQL is hard to master, but it’s the easy part of understanding the relational model and any particular schema. Instead Oracle and the other RDBMSs that followed created more jobs for programmers and database analysts and admins.
As someone trying to learn SQL, it feels like there are many more steps:
* language syntax
* the relational model (relatively easy? It is just "there"?)
* the database schema, where I have 3,000 tables, or 50,000 tables (that's how ERPs are made)
* actually knowing how to use the language
* building those queries that join columns from 15 different tables together to deliver the needed data. It sounds easy, but I'm struggling to do it better/faster, and I never saw a book or website that focuses on that (multiple joins across multiple tables), when it feels like my work is mostly that
* understanding what is needed
* actual "programming" problems (say: you have 100k items in stock, those are parts of BOMs, make a 'where used' list... yes, I know you can google it and try to copy from Stack Overflow)
Seriously, I am now trying to learn how to set up a simple DB to consolidate data (20 CSV files on a shared drive), and the guides are often like the 'how to draw an owl' meme...
Controlling/reporting/analyst jobs feel in some ways like "programmer" jobs, but without the tools/training/salary: just figuring stuff out on your own. I'm doing it, but apart from the SQLZoo website I didn't manage to find any decent guide for the 'usual' problems. Also, since those are only about 10% of my work, I can't focus on them properly and have to try to learn after work.
Also, SQLZoo is a lot like the 'how to draw the owl' meme: the easy tasks are easy, the hard ones are impossible, and there are no medium tasks. There also don't seem to be any real-life examples, like 'join 15 tables to make some report'.
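For the BOM "where used" problem specifically, the standard tool is a recursive CTE. A small self-contained sketch using SQLite from Python (the table and part names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bom (parent TEXT, child TEXT);
    -- a bike uses a frame and a wheel; a wheel uses a rim; a cart also uses a wheel
    INSERT INTO bom VALUES
        ('bike', 'frame'), ('bike', 'wheel'),
        ('wheel', 'rim'), ('cart', 'wheel');
""")

# "where used": every assembly that contains the part,
# directly or through intermediate assemblies
where_used = """
    WITH RECURSIVE used(assembly) AS (
        SELECT parent FROM bom WHERE child = ?
        UNION
        SELECT b.parent FROM bom b JOIN used u ON b.child = u.assembly
    )
    SELECT assembly FROM used ORDER BY assembly;
"""
print([row[0] for row in con.execute(where_used, ("rim",))])
# → ['bike', 'cart', 'wheel']  (rim is in wheel, which is in bike and cart)
```

The same shape of query handles arbitrarily deep BOMs; only the seed row (`WHERE child = ?`) changes per part.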
Shoot me an email, I’d be more than willing to pair with you. I’ve been a “software engineer” of some flavor for about twenty years now, and about five of those were spent writing SQL almost exclusively.
You can reach me at nominallyanonymous-at-protonmail-dot-com. From there I’ll give you my “durable” contact info - Slack, Discord, SMS, Signal, Telegram… whatever you use regularly, basically.
Tools matter. When I learned SQL long ago it was with FoxPro for DOS, and it was a great tool for doing both SQL and text-based UIs (as with curses). Later, I used MS Access 97, which was an even better tool and sparked a lifelong interest in data modeling. The UI for building up tables (specifying column data types, etc.) was really trail-blazing at the time, and the interaction remains good today. The built-in ERD charting tool was good, showing your tables in relationship to each other. The visual query builder was... well, I never used it, but I suppose it was good? You just had lots of good tools to build tables, flip through their contents, and visualize their relationships.
I don't know of any modern environment that functions like that, on any platform.
I'm posting this to invite others to either a) correct me and argue that tooling doesn't actually matter (a legitimate view, but wrong), and/or b) recommend some modern, OSS Access-like tools that might help flatten the learning curve for you. (And if you're already more comfortable with a CLI and a REPL, then you don't even need this hypothetical tool, but I myself am curious about it.)
EDIT: looks like LibreOffice Base is pretty close to MS Access circa 1997.
At some point you have to learn the relational model. And you have to make the mental switch to understand SQL as a declarative language, not an imperative language.
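To make that imperative-vs-declarative switch concrete (a toy sketch with an invented orders table): the imperative version spells out how to walk the data; the SQL version only states what result you want and lets the engine choose the plan.

```python
import sqlite3

orders = [("alice", 120), ("bob", 40), ("alice", 80)]

# Imperative: you describe the loop, the accumulator, every step.
totals = {}
for customer, amount in orders:
    totals[customer] = totals.get(customer, 0) + amount

# Declarative: you describe the result; the engine figures out the steps.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)", orders)
sql_totals = dict(con.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"))

print(totals == sql_totals)  # True
```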
As someone who did a lot of SQL back in the day, and is now doing some again for a startup POC, I'd say you're basically right but there is also:
* Rewriting queries and also parts of the schema when it turns out the things that made sense in the design phase cause massive bottlenecks once you get more data, or just get it faster, than you tested with.
Of course the good news is that now you can run the best RDBMSs on your laptop, or on a $5/mo VPS; or have a small managed one starting at <= $15/mo. Plus ChatGPT can help you remember how to do that inner join. ;-)
Building the queries is the easy part. Making the queries run fast with large tables is difficult, and there are trainings available, but they're very focused and a bit expensive compared with the ones on large training websites (e.g. LinkedIn Learning). For example Brent Ozar (brentozar.com) has lots of blog posts and two sets of trainings for MS SQL Server, some for beginners and some for experts, that are extremely useful for people with such needs. Problem is, expert tuning for MS SQL is totally different from expert tuning for Oracle, which is totally different from Postgres and others.
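As a tiny illustration of the "make it run fast" part (SQLite here; the principle transfers across engines even though the tuning details don't): the same query flips from a full table scan to an index search once a matching index exists.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, f"c{i % 100}", i * 1.5) for i in range(10_000)])

def plan(sql):
    # last column of EXPLAIN QUERY PLAN output is the human-readable detail
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM orders WHERE customer = 'c42'"
print(plan(q))   # e.g. "SCAN orders" -- a full table scan
con.execute("CREATE INDEX idx_customer ON orders(customer)")
print(plan(q))   # e.g. "SEARCH orders USING INDEX idx_customer (customer=?)"
```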
On a side note, if you have to join 15 tables for a report it is a sign you may be going in the wrong direction. In 99% of cases I've never needed more than 5-6 tables if the data is well structured, but that takes years to learn.
I'm not an SQL expert. But data persistence and consistency are generally very hard problems. It's a lot of steps, but I'm not sure it's more steps. More than what? Than if the data were stored as a big binary blob?
Mine doesn't predate it but it's very confusing for me to read this opinion.
From my point of view, it totally did happen?
Can you imagine how many programmers the company would've needed to get all the data a business analyst casually queries per day?
What you're looking at is the quantity of people actually employed in the industry, not how many jobs SQL made obsolete. The industry just grew so much that it didn't become an issue.
A few things happened. Relational databases enabled more new development, larger databases, interoperability, all of which needed programmers.
With more flexibility in the database companies could collect, store, and use more data. And that data had to get into the DBMS somehow: more code.
Setting up and managing databases required DBAs, a role often filled by programmers in smaller companies, and still filled by programmers today. And only larger companies had business analysts. In smaller companies programmers or maybe technically-proficient managers did that work.
Anyone who had devoted their career solely to building bespoke database systems had to pivot or walk in the late ‘80s, but very few programmers only did that — it was part of the larger application development. If you were good enough to write robust database code pre-Oracle you had plenty of options.
In the ‘80s when RDBMSs started to take over I worked in enterprise logistics. Oracle created jobs at the places I worked, and no programmers had to leave because we got a better tool.
I’ve worked in large and small organizations for 40+ years. I’ve never worked with a manager or analyst who could write SQL more complex than “SELECT * FROM orders WHERE total > 10000.” I’m sure they exist, but not in career-threatening numbers.
In my department I recruited DBAs as recently as last month, and this is a permanent position filled by a team of several people who do just DBA work. I saw developers ("programmers") try to do this work in some small companies, or in areas where the databases were small (hundreds of MB, a few GB), but I did not see that when the databases exceed 100 GB or when there are dozens of production SQL servers that need to run 24x7. Solutions are implemented based on needs.
The industry grew because the desire for data analysis grew, which is because the technology's ability to meet the desire grew. This can repeat itself a couple more times.
You're considering that every company that needs SQL today would hire enough developers to essentially write most of it from scratch. While some might, most companies that use SQL would not exist, because the cost of developing their product would be prohibitive.
Maybe. Before Oracle (mid-80s) every company did write their own database code.
I think a lot of smaller companies would struggle if that was still a requirement, but if relational/SQL had not come along we’d have something else like it.
That's exactly my point, though. In the mid 80s there were a lot fewer companies producing software. Nowadays we have many more.
Indeed SQL was not the only local maxima we could have gone for, but the point is that having an easy to use database with a powerful query language did not reduce the number of jobs, but instead increased it. Instead of a few companies hiring a lot of developers, we have a lot of companies hiring a few. The latter will usually mean more jobs.
If hardware was cheaper, but writing software required an army of developers to do it, the costs would still be too great. If you read carefully, my point isn't that SQL was the cause of the tech boom, but rather that SQL and other technologies that make developers more productive didn't really take jobs away, because the market for tech would be smaller if you needed too many developers to do anything.
Imagine if every little web startup writing a custom CRUD for their business needs needed to write a new bespoke database. It simply would not be feasible or they'd need some crazy funding.
>Before Oracle (mid-80s) every company did write their own database code.
Not really. There were a ton of ISVs competing with Oracle and pretty much every mainframe and minicomputer maker also had their own database products, many of them non-SQL.
Oracle was the first commercial RDBMS (1979), with an early version of SQL. At that time every mainframe and minicomputer company offered one or more “databases,” often little more than file managers with a library of utilities. ISAM was a popular model but that describes a lot of data management systems, not a product or standard.
All commercial databases pre-Oracle were non-SQL. Watching that history get rediscovered and pushed as an improvement by people mostly too young to know better — so-called NoSQL databases — brings to mind Alan Kay’s quip about the software industry constantly reinventing the flat tire.
The hash-based Pick OS/database came out before Oracle but only ran on a few computers from Microdata and later Pr1me and Honeywell. Pick-based systems remained popular into the early 2000s in some sectors. A friend of mine still works on one, for a state government agency.
You could construct a database management system from the ISAM-based libraries for COBOL or Fortran, but I wouldn’t call those database management systems comparable to Oracle. Mostly they didn’t have a query language per se — you had to write code to get anything in or out. And they ran as part of the application process, not as a separate server dedicated to running the database engine.
I was thinking Db2 might have been a little earlier but you're right. And it's fair that the earlier databases definitely lacked elements of what we'd consider a database management system today even if they handled a lot of the low-level database heavy lifting for companies.
I have seen this multiple times. Got called into a corp. audit once because CFO's spreadsheet didn't match reports from database. His SUM() function was missing some rows, ZIP Codes got converted to scientific notation and he didn't know why, that kind of thing.
Maybe I’m misreading, but “not much harder than” implies SQL and Excel aren’t that hard to learn. “At least as hard as” implies both are similarly difficult.
Funny - also COBOL was intended for the 'business' community to write their own programs.
Even funnier - we've already lived through the great 'software development adoption' by business and hardly noticed, except some of us got work out of it. A lot of small businesses (legal practices and accountancies were particular suspects) grabbed MS Access with both hands and went wild in the 90s/early 2000s. Timesheets and other HR functions were popular applications.
I’m quite sad there isn’t anything like access these days. I feel like I see fairly clever solutions for things being built in excel, but they can’t scale very well to multiple users. For a lot of processes something like Access would be great for bridging the gap.
My oversimplified observation is that, Adobe products at the high-end notwithstanding, MS Office basically crystallized what was a mainstream office productivity suite once companies weren't buying one-off products. Rather than buying a low-end/midrange desktop publishing program, they made it work with Word. Rather than using a low-end database (of which there were many), they made do with Excel.
I didn't mean SQL wasn't a success. I meant, as the commenter above me said, that a higher-level tool doesn't necessarily replace the lower-level ones, or reduce the need for them.
Relational databases replaced a whole raft of vendor-supplied databases and custom-built solutions. It was a new tool for software developers, not a replacement.
I don't actually agree. SQL did replace the need for programmers, especially for complex questions. We sabotaged its interfaces to existing GUIs for mundane questions and made CRUD a limitless profession.
The funny/sad part about computer science is that people don't want to understand the costs of customization and sales/management/marketing forever want it as a differentiator.
SQL could have eliminated us from the flow of many niches as easily as the spreadsheet did from business operations. I think why it didn't has more to do with market timing.
SQL was too much ahead of its time, but we are indeed slowly adopting the "avoid operational specifications, use high-level languages, make your language fit the problem instead of the opposite" philosophy of the 4th generation languages.
It was very true, and SQL is one of the most successful DSLs ever invented. And many people coded SQL who never would have touched other languages with a ten foot pole.
It’s just that, with more powerful tools we can create more advanced and intricate things, so it never ends…
I don't exactly welcome, but rather anticipate, a near future where, instead of reinventing every kind of pattern and library in terms of cloud and infrastructure management, we will see the same meta-management of different AI engines and services…
But it is true, just in a different area than predicted. I hear all the time that scientists and analysts, while being non-programmers, write various queries without asking a programmer to do it… I am a programmer and I'm not even sure how much time it would take to implement their regular joins with windowing, aggregates etc. in a record-based file format. Could I even do that?
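To gesture at what that query power buys: a per-user running total is one window-function clause in SQL (SQLite >= 3.25 here; the table and values are invented), versus a pile of sorting and accumulator code over a record-based file.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user TEXT, day INTEGER, amount INTEGER)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("a", 1, 10), ("a", 2, 5), ("b", 1, 7), ("a", 3, 1),
])

# per-user running total via a window function: partition by user,
# order by day, accumulate as you go -- all expressed declaratively
rows = con.execute("""
SELECT user, day,
       SUM(amount) OVER (PARTITION BY user ORDER BY day) AS running
FROM events ORDER BY user, day
""").fetchall()
# rows: [('a', 1, 10), ('a', 2, 15), ('a', 3, 16), ('b', 1, 7)]
```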
SQL was a typical failure in this regard. It was seen as a savior, but at the same time it was designed for at least math-versed people. The prediction was right; its scope was wrong. Since then we have failed many times, and as I see it we tend to diverge from this idea more and more. And the reason is there's no one controlling our stacks vertically, so it takes more jobs than it needs to make an app.
I think it used to be that one needed to write dedicated programs to go through large amounts of data --- you needed to know a programming language, all its toolchains, and all the proper algorithms for doing the query __out-of-memory__ --- certainly above your ordinary analyst's pay grade.
> If your managers could specify what they need in a manner that no-code tools, or now AI, can generate the code they want, they will have to be extremely exact in their language. So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
One day a corporate genie appeared to a middle manager and granted him one wish. The manager wished that their business logic could be edited by a graphical tool and that programmers were no longer needed. "Granted", poof!
The next day the manager came to work, all the programmers were gone, fired by upper management. The manager sat down and opened the new graphical tool that had magically appeared on his computer. The interface was beautiful, except for what appeared to be a hairball drawn in the middle of the screen. The manager asked his manager about the hairball, "oh, that's our business logic, you have to zoom in to see the individual rules". The manager zoomed in and surveyed 180,000 business rules, intertwined in a complete graph.
"How am I supposed to work with this?" the manager asked his manager. "You just click and move them around, it's easy", replied the upper manager. "Also, I've been meaning to ask, when do you think those new business rules we talked about last week will be implemented? We need them by Friday."
> Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec? You’re kind of assuming that systems will be generated from a one-shot prompt, but it seems more likely that an interactive AI will identify the gaps in requirements and ask for clarification.
Alternatively, if the prompt-generate-evaluate loop is short enough the user can simply play with the running system and provide feedback to alter it.
This is essentially what developers do when they present a “weekly build” or whatever in an agile environment.
The process of solidifying requirements, stating them clearly and translating them into machine-executable formats are all language tasks and these models are really fucking good at those.
I’ve noticed in discussions like this that many software folks are assuming that AI capabilities will plateau soon, or will merely be extensions of what we already have (a better autocomplete, etc). I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.
> Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec?
This is the doomsday argument. What would I do if there's a nuclear apocalypse before lunch? I guess I'll die like everyone else.
An AI sufficiently advanced to do that is also sufficiently advanced to run the entire business in the first place, and also argue cases in court, do my taxes, run for president and so on.
You either believe that transformers models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++.
>You either believe that transformers models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++
This is actually something that makes me happy about the new AI revolution. When my professor said that I thought he was an idiot, because no-code tools always make it harder to specify what you want when you have specific wants the developer didn't think about.
We give kids books with pictures because pictures are easier, but when we want to teach about more complex topics we usually use language, formulas, and maybe a few illustrations.
I still think no-code was always doomed due to the fact that any attempt at it lacked the interface to describe anything you want, like language does.
AI is finally putting an end to this notion that no-code should be clicky high-maintenance GUIs. Instead it's doing what Google did for search. Instead of searching by rigid categories we can use language to interact with the internet.
Now the language interaction is getting better. We haven't regressed to McDonald's menus for coding.
I’ve used no-code tools since the 90s and they have a fatal flaw. For simple demo use cases they look simple and cool. Then when you go to the real world and start getting pivots and edge cases you have to fix in the interface, it becomes a 4D nightmare and essentially a very bad programming language.
I’ve spent a fair bit of time working on interactive chat systems that use a form of visual programming. It’s not good. Once you get past the toy stage (which is good and ergonomic), it’s just the same as programming except the tooling is far worse, you have to invent all your change management stuff from scratch, and it’s like going back 30 years.
Isn't the "Chat" part of ChatGPT already doing something close to this? I mean the clarification comes from the end-user, not from the AI, but with enough of this stuff to feed upon, perhaps AIs could "get there" at some point?
For example, this guy was able to do some amazing stuff with ChatGPT. He even managed to get a (mostly working) GPU-accelerated version of his little sample "race" problem.
> Isn't the "Chat" part of ChatGPT already doing something close to this?
No, the amount of handholding you have to do to get it to work effectively presumes you already know how to solve the problem in the first place.
The best way to use it is the opposite of what everyone is busy selling: as a linter of sorts that puts blue squiggles below my code saying stuff like "hey stupid human, you're leaking memory here", or even "you're using snake case, the project uses camel case, fix that".
That would actually lower my cognitive load and be an effective copilot.
Fair enough - assuming steady state, but the acceleration is the curve I'm most curious about.
The point I was alluding to above was that the prompts themselves will be recursively mined over time. Eventually, except for truly novel problems, the AI interpretation of the prompts will become more along the lines of "that's what I wanted".
Some things to think about: What happens when an entire company's slack history is mined in this fashion? Or email history? Or GIT commit history, with corresponding links to Jira tickets? Or the corporate wiki? There are, I'd guess, hundreds of thousands to millions of project charter documents to be mined; all locked behind an "intranet" - but at some point, businesses will be motivated to, at the least, explore the "what if" implications.
Given enough data to feed upon, and some additional code/logic/extensions to the current state of the art, I think every knowledge worker should consider the impact of this technology.
I'm not advocating for it (to be honest, it scares the hell out of me) - but this is where I see the overall trend heading.
In a world where we have the technology to go from two lines of prompt in a textbox to a complete app, no questions asked, then the same technology can run the entire company. It's kind of hard to believe transformers models are capable of this, given we are already starting to see diminishing returns, but if that's what you believe they are, then you believe they can effectively do anything. It's the old concept of AI-complete.
If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.
This remains true for any version of a language model, even a hypothetical future LLM that has "solved" natural language. Given the chance, I would rather write formal language than natural language.
> If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.
But what if the "programming language" is not a general-purpose language, but a context/business domain specific language? One that is trained on the core business at hand? What if that "language" had access to all the same vocabulary, project history (both successful and unsuccessful), industry regulations, code bases from previous (perhaps similar) solutions, QC reports, etc.? What if the "business savvy" consumer of this AI can phrase things succinctly in a fashion that the AI can translate into working code?
I don't see it as a stretch "down the road." Is it possible today? Probably not. Is it possible in 5-10 years time, I definitely think so.
I agree with your point about how to best use it today. We have seen that each new model generation both improves the prior tasks and unlocks new ones through emergent behavior. That’s the fascinating/scary part of this development. And yes, it’s “just” a language model. It’s “just” predicting next token given training + context. We don’t really understand why it’s working and it’s evolving non-linearly.
I asked GPT-4 to give me an SVG map of my town. I then asked it to put dots on some local landmarks. The map was toddler level, but the landmarks were relatively accurate in terms of their relationship to each other and the blob that it drew.
So this is a language model that has some emergent notion of space in its code generation abilities.
Currently, we don't even trust cars' automatic driving capabilities enough to let them on the roads without a human.
Until that day comes, I highly doubt that a business owner would just blindly trust an AI to generate their business code/software without hiring someone to at least look after it. Therefore, software jobs could evolve, but not disappear.
Yeah all this talk about complex systems being written by a language model which has no concept of files, code paths and import systems sounds like a job security to me. I'm a pentester though.
The project will be more consistent and resilient to issues, but it'll probably take about half the time it used to take without AI, not 1% of the time. Reading AI code is damn hard; it is code review, and requires exam-level concentration.
Yes, but even in that case the role will be that of an "AI Prompter"; it will not be done by the managers, because of the time factor. Even though AI can give you the result much faster, building upon it, testing/verifying, and then coming up with the refined prompt is time consuming. Only the Write part of the write/eval loop will be faster, but not necessarily easier.
Especially the "debugging" part will be much harder. No one can look under the hood to understand what is wrong, and all you can do is shoot random prompts in the dark hoping it will create the right result.
It is scary right now how confidently and spectacularly wrong ChatGPT is, and it will create disasters.
Why would sufficiently advanced AI even need a prompter? The AI could play the role of the greatest prompter in the world, and ask the same questions to the end user that the human prompter would.
This is a misconception of how our industry works. Yes, there is market research with users, but it often comes after the problem space has been defined. Most of what you see in the tech sector today are "created needs": someone imagined a solution the users didn't even know they needed. To ask the right questions you first need to define the problem, and that is the difficult part - the main reason people still believe "the idea is the most important factor". Of course this is not true; there are hundreds of factors that come into play.
Imagine an AI circa 2000 asking users what kind of virtual social space they needed. The answer would not have been Facebook. (There were other social networks before Facebook, but the time was not right for the "social" explosion.) By learning from existing solutions, the AI would have concluded that global virtual social networking is not something users want. And because part of this problem was as much sociological/psychological as technical - outside the realm of what the AI could consider - we would not have what we have today.
Not that we would have missed much from missing the particular implementation of this idea that Facebook gave us but the idea and what it unleashed is much more than that particular implementation.
Sure, people don't know what they want. But the point is there won't be a need for some intermediary person between AI and the end user.
Whatever the AI prompter brings to the table will quickly be provided by the AI itself. If a user doesn't really know what they want, there isn't a scenario where the AI prompter will suss it out but the AI itself won't.
> I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.
For starters I'd like Codex to be more than a next-word predictor; it should also "feel" the error messages, data types and shapes, and file formats, so I don't have to explain the context. It should be part of the system, not just part of the text editor.
In that case, how is the AI going to keep tens or hundreds of thousands of lines in memory to produce cohesive code that works with the rest of the codebase?
It seems prohibitively expensive to build and run transformer models with that much capacity.
GPT-4 already has 32k tokens of context for prompts. Once we're making arguments about scale only a few orders of magnitude larger than the current state of the art, it seems similar to arguments 10-15 years ago that real-time ray tracing was not feasible.
All points the horses made before cars all but replaced them. “The humans need to get from A to B and it’s not like they’ve gotten any faster with their own 2 legs”.
What does this manager-engineer relationship look like now? Managers specify what they need in plain language, and both parties go back and forth with increasing complexity to build toward the shared vision. Managers can already pull off a similar dance with ChatGPT: give it a plain English prompt, it responds with what it thinks you want, and you refine until it's got it. GPT-4 can do this with website sketches, and who knows how much finer you could get from there by specifying your prompts and feedback more tightly over less than an hour. Remember: copywriters, brand marketers, and marketing creatives have a similar role of turning complex requirements into designs that sell (and their roles are dying fast).
A software engineer's job is to realize products as software. ChatGPT is pretty much as capable as a high school programmer who moves really quickly and takes feedback to heart - and it can handle a good chunk of the software engineering job for a tiny fraction of the price. Your job isn't as bulletproof as you think, and especially not your amazing salary. I'm speaking as a software engineer turned VC-backed founder, so I've seen both sides of this relationship.
The horse population has decreased by half or so since the early 1900s. And horses have stayed pretty valuable, too. Most of them just don't have to work as hard anymore. The horses used for sport never went away, only the horses used for labor or transport. So if your horse analogy is accurate, then maybe half of software engineers lose their jobs and the industry stops growing. Those that keep their jobs keep their salary and have to either be very fast or very good at jumping over artificial obstacles. Seems accurate. I should probably stop beating a dead horse.
Even the work horse breeds still exist and do work at some small farms but it’s probably 1% of the peak demand for them. Not a good outlook necessarily if we are the workhorses
A software engineer's job isn't to write code, it's to make decisions. Going from "give me Uber for bicycles" to a working app that runs on a device takes a million decisions, which a bunch of people (designers, copywriters, engineers, etc) make.
Yes, the AI is good at taking low-level requests and turning them into reasonable code, and then refining them, but unless the CEO is going to sit down and spend days telling the AI "OK now make this button disabled when it's clicked until the table loads", you need someone to be doing that.
Maybe? A founders job is basically “execute the path to capitalize on the opportunity”. So you have to figure out the opportunity, and execute towards it by whatever means necessary - that’s a lot of jobs and hats. As long as there’s a human required to make a business possible, founders will exist. Get rid of software engineering, marketing, fundraising, identifying the opportunity, sales, customer support, etc then maybe all the founder does is make sure the machine keeps running. But at that point every other human job is automated too.
Founders are weird because their job is just to “do everything that needs to be done” so the role flows like water into any spot that isn’t already filled in a company. Replacing that seems like by definition you have to replace almost everyone else first.
> But at that point every other human job is automated too.
I’ve heard that exact same argument for software engineers.
As an outside observer it sounds like you’ve switched from SE to founder, so you’re failing to apply the same standards because you (consciously or not) want to defend your current position.
How do they apply to software engineers exactly? The founder “job” is literally “do anything that needs doing in the business”. Now the job of a CEO or CTO is much more narrowly defined and I can see that seeing automation sooner, but the founder role is kinda unique in that it’s just “do everything and if you’re successful you should be constantly replacing your job with an employee”.
Founders are the ones trying their hardest to replace themselves (with hires who can do the work better) and yet still always finding more to do. That's very unique.
That's not even close to unique. Software engineers have been doing that since they existed.
>but the founder role is kinda unique in that it’s just “do everything and if you’re successful you should be constantly replacing your job with an employee”.
That's maybe the ideal of a founder, it's definitely part of the mythology, but it wasn't true for the majority of the many founders I worked with back when I was consulting for early stage startups. Based on my experience with a large sample size, the job of a founder is to convince VCs to give them money.
>How do they apply to software engineers exactly?
It's one of the definitions of the singularity. When AI can write itself, it will improve so rapidly that humans won't be able to keep up.
> Founders are the ones trying their hardest to replace themselves (with hires who can do the work better) and yet still always finding more to do. That's very unique.
Not really. They are not "replacing themselves"; the coordination is always needed - they just hire people until coordinating the company is their only job. I.e. finding peasants to do the dirty work so they can just rule over them.
Uber was founded in 2009. Before that you could maybe hail a cab if you were in NYC, otherwise you had to call ahead. Someone would drive the cab to you, you'd get in it, and they'd drive you to where you wanted to go.
Now we have AI. Now instead of driving to a financial advisor's office or a bank loan officer's desk or my kid's school:
- I have an AI app that automatically invests my money and runs on my phone's neural processor
- I have an AI app that automatically tells me if I qualify for a mortgage based on a truckload of data about me and runs on my phone's neural processor
- There's a booth in my kid's classroom that her teacher puts her into; he says, "hey Siri, teach this kid to stop throwing carrots", the AI teaches my kid in a kind, convincing way that carrots are food, not sports. This runs on the iPad inside the booth that is also the screen
- I have an AI app that tracks how long I'll live, and it ticks down 1.38 minutes whenever I set foot in an automobile, so I try and avoid it now. I can't believe people were ever so reckless.
The race now is who controls this. Probably what will happen is governments will deem it dangerous (disinformation, national security, export controls), and allow only megacorps like Google/Facebook/Microsoft to run them under strict regulations. As a result, these models won't run on our phones but on server farms, and we'll become (even more) dependent on these corporations. This will feel good to the US because it shores up our tech sector--a big economic advantage compared to every other country--while also looking sensible.
Some good points but maybe be more careful with your use of verb tense. I was thinking you were either overly-trusting or in possession of an ungodly-exotic prototype phone until you got to the part about the conditioning booth in the classroom. :-P
I'm bullish on recent ML advancements but it will be a long time before I give one my Schwab login.
I'm skeptical too, but I assume this will just be Darwinian: curmudgeons like us will shamble along with our 5% gains and ill-behaved children, and the bulk of society will reap greater rewards. The worst part is our judiciousness won't even save us if we turn out to be right: society will still collapse whether or not you turned your life over to AI.
High-level decision makers like founders and C-suite execs will be the last roles automated away. ChatGPT is effectively a personal assistant that works fast and works cheap. That's a game changer because suddenly peons like you and me get our own personal assistants, but for high-level decision makers it's just an incremental improvement over what they already have.
This is a good thing. Managers can clarify their ideas using AI tools.
But still, the real product needs to be fleshed out, back and forth, interactively.
Regardless of whether it is the manager or the engineer who commands the tool, you can't entrust the AI to get everything 100% right. Someone has to proofread it, and that is the bottleneck, or where the value of humans kicks in.
A human brain can only hold so many details by itself. It can't maintain all the details live all at once, and some businesses do have an overwhelming number of details. A one-man team with the help of infinite AIs to replace an actual team isn't useful.
At some point, that human will start to fail to verify the system generated by AI is actually what he/she wants.
> In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
I feel like there's currently a movement towards slightly lower abstraction, or at least simplified, consistent APIs and less magic. The rise of Go and Rust is an example of this. Typescript could be another, although the abstraction isn't really lower; it is once again an attempt to coerce JS into something workable. I get really frustrated writing Python or Rails these days due to the sheer magnitude of hidden magic that sometimes works and sometimes doesn't.
To tie this thought in with generative AIs: currently they seem to be much better at programming with relatively simple syntax. By far my largest success so far has been with shell. I tested assisted writing of the same tool in Python, Perl and shell, and the results in shell were close to perfect. ChatGPT was even able to accurately limit commands to specific OSs and shells, and was able to accurately summarize the functions of other shell scripts.
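To make the "limit commands to specific OSs" point concrete, here's an illustrative sketch (my own invention, not the actual generated script) of the kind of OS guard ChatGPT tends to insert so each command only runs on a platform where it is valid:

```shell
#!/bin/sh
# Branch on the OS so platform-specific commands are only run
# where they exist (the memory lookup is just an example task).
os="$(uname -s)"
case "$os" in
  Linux)
    # /proc/meminfo only exists on Linux
    mem_kb="$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
    ;;
  Darwin)
    # macOS reports bytes via sysctl; convert to kB
    mem_kb="$(( $(sysctl -n hw.memsize) / 1024 ))"
    ;;
  *)
    echo "unsupported OS: $os" >&2
    exit 1
    ;;
esac
echo "total memory: ${mem_kb} kB"
```

Simple, flat structure like this — one branch per platform, plain POSIX tools — seems to be exactly the shape of code these models are most reliable at.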
So my prediction is that we will see a movement towards simpler, lower-abstraction languages while coding assistants rise to take away the boring parts of programming, like looking up syntax, writing boilerplate, and structuring files. Programmers will then have more time to think about delivering value to the product, maintainability, efficiency, and correctness.
One last addendum: ChatGPT is really incredible at assisting with sys admin stuff, my guess for why would be that there are a gazillion obscure forum entries going back to the 80s explaining basically everything there is to know, but these are hard to find or comprehend for humans. With an AI assistant, self-hosting becomes much easier, and another development could be for startups and smaller companies to move away from AWS et al, especially now that money is more expensive.
I've also been considering the concept of "AI-amenable" programming languages recently, and I think it would be quite an interesting twist of history if lisps became once again the AI languages of choice because of their simplicity. Going from the language to write AIs in to the language that AIs write.
Fwiw, GPT doesn't seem to write Clojure as well as Python or JS or even C++, which is obviously much more syntactically complex. That said there is probably an order of magnitude more training data for all of those.
Feels like a rediscovery of systems languages. I certainly love Go (and probably Rust once I look into it) because I almost never have the experience of searching and searching to find some concrete code rather than just layers and layers of interfaces.
As it stands, GPT seems to make expert devs even more powerful. But yes, it can't replace them for now.
The best thing I find as a developer is that it removes the resistance we have to starting something new. For instance, I just built my first Chrome extension [0] to make video players on webpages more accessible with keyboard shortcuts.
I had built a desktop app along similar lines long ago, but could never push myself to build an extension. Last night I gave my requirements to GPT and it guided me through the whole process – from creating the manifest and JS files needed, to the JS code itself (granted, I had to fix/extend the code), and even how to install/enable the extension.
Within a few hours I had my working extension without needing to go and read any extension docs.
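For a sense of how little code such an extension needs, here is a minimal sketch of the kind of content script GPT can walk you through — the key bindings and seek step are my own illustrative choices, not the commenter's actual extension:

```javascript
// Map a keypress to an action on a video element; returns true if handled.
function handleKey(video, key) {
  switch (key) {
    case " ": // toggle play/pause
      if (video.paused) video.play();
      else video.pause();
      return true;
    case "ArrowRight": // seek forward 5 seconds
      video.currentTime += 5;
      return true;
    case "ArrowLeft": // seek back 5 seconds, never below zero
      video.currentTime = Math.max(0, video.currentTime - 5);
      return true;
    default:
      return false;
  }
}

// Wire it up when running in a browser; manifest.json would list this
// file under "content_scripts" so it loads on matching pages.
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (e) => {
    const video = document.querySelector("video");
    if (video && handleKey(video, e.key)) e.preventDefault();
  });
}
```

The manifest and install steps GPT generates around a script like this are mostly boilerplate, which is exactly the part that used to create the resistance to starting.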
> I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
And even if AI is able to do that... Well, then anyone out there could become an UberCycle CEO. Which means: customer requirements will be much more complex by then, and coming up with an entire product in 1h won't be enough anymore.
Or perhaps, if literally anyone can make a great product, then the true successes will be either those that are genuinely innovative, disruptive and have first-mover advantage, or those that are one of many but gain better adoption through better marketing. Or perhaps some ideas (anything involving hardware) need capital anyway.
Which isn't actually different from today. We've just shortened the time to get your MVP to market. Rather than having to raise some seed and hire devs, you can do it with an AI in a few days perhaps. Everything after that seems it would be the same.
I currently expect that near-future versions of the current type of language model — no major new breakthroughs required — will be able to do just that.
I also expect, as you say, for this to create a lot of competition and increase the minimum quality that customers demand such that… heh, just realised, "Make Uber for bicycles" is basically going to be the next generation's pre-interview coding challenge to screen people that can't use AI effectively.
> The Last One is a computer program released in 1981 by the British company D.J. "AI" Systems. Now obsolete, it took input from a user and generated an executable program in the BASIC computer language.
It was THE LAST ONE because it was the last program you'd need to buy. FOR-EV-ER. While its baseball-eating ability was unknown, it was "all the programs you'd ever need for £260" and it apparently focused on ETL jobs; that is, extracting data from files, transforming it, and loading it into some other file. Talend for the Doctor Who set, in other words, and it was set to EX-TER-MI-NATE programmers. Maybe it did; after all, we don't very well write programmes, now, do we?
ChatGPT will definitely make a dev job obsolete if their job is simply copying and pasting code they found online.
By all means, that is part of everyone's job. Sometimes I really can't remember how to do some specific thing. I know precisely what I want, but I don't remember the proper way to do it. I would have resorted to a web search until last year, now I ask chatGPT. It is faster, more concise, and surprisingly accurate. And when it's inaccurate it's easy to either refine my question or cross reference what it generated with online sources.
But I think it's a bit silly how people are treating it as if it was some kind of General AI. It is not; it can only give out known answers to known problems based on language statistics. As impressive as it is, it can't reason logically about problems without known solutions, it can't identify faulty, incomplete or inaccurate information, and it can't evaluate the drawbacks and tradeoffs of different approaches.
ChatGPT made like 20% of my work a lot faster and less cumbersome. I like it enough that I pay for premium access. But even the notion that prompt engineers might replace software engineers is silly. I imagine people repeating this nonsense are either not engineers, or extremely junior in the profession to the point where their work is only writing code for clear specifications, no questions asked.
That's literally what the language model is. It might correctly generate a solution to a "novel" question/problem that is sufficiently close to one with an existing, known answer. But then again it might not. And in software development, it's going to take someone who is knowledgeable to tell the difference.
I think software engineering is going to look very different in a few years, and likely be a smaller field with lower paying jobs. But it's not going away in the near (5-10 years) future.
The dude you replied to gave out the sort of bad take I come here for.
Anyone who genuinely thinks that ChatGPT can meaningfully replace a software developer should be starting a tech business right now. You can replace basically your largest cost (staff wages, especially for expensive, pesky software developers) with a $20 account that will work overtime, will never get tired, and whose performance will never fluctuate.
No excuses for not getting insanely rich. Of course it's not happening, because it's bullshit.
I am genuinely impressed and even excited with ChatGPT. It's an amazing tool, that would have been massively helpful when I needed a good NLP in the past. It will certainly be massively useful in the future. Hell, it's being a great assistant right now.
But it's not General AI, and I do facepalm at people LARPing as if it were.
General AI will one day exist, and knowledge jobs will most likely be a thing of the past at that point. How far in the future that is is anyone's guess (my prediction is between 12 and 235 years). We're just not there yet.
There's a difference between being able to do logical reasoning, and being able to do everything a competent white collar worker can do. For one, there's a token limit and memory limit, which limits the scale of the problems current iterations of GPT can do (and this limitation is not a limitation of logical reasoning ability). There's also (for GPT) no way for one to fine tune or train the model to work better in a specific domain unless you're in bed with OpenAI/Microsoft.
I think as a society we don't really have precise words for describing the different levels of intelligence, except in a "I know it when I see it" way. I don't think I'm LARPing in any way, I'm probably even less excited with it than you are given that you seem to be using it more often than I am. I'm just saying I think GPT does exhibit some logical reasoning abilities and not merely remembering statistical patterns.
> I'm just saying I think GPT does exhibit some logical reasoning abilities and not merely remembering statistical patterns.
I agree with most of your reply, except this bit.
I mean, generating a response through statistical language patterns is a sort of reasoning, and ChatGPT has been accurate enough to replace internet search for me quite often. But it also generates bullshit that an untrained eye would miss (because the bullshit it generated was statistically plausible).
When it gets things wrong, it generates some comically wrong behavior. I had one case where it looped through variations of the same wrong response - precisely because it is unable to do any kind of logical reasoning upon faulty or inaccurate data.
"One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better," Mira Murati, OpenAI's Chief Technology Officer, told ABC News.
I neither know how LLMs work nor how our brains work. And I don't know what could be parallel between these two.
Given my very, very limited knowledge of how properties can emerge from unique arrangements of constituent components (the S-R latch giving rise to state - i.e. memory - comes to mind), I would not at this point write off the possibility that a very large / very deep / very intricate neural network trained on language prevalence in very, very large datasets could manifest properties that we would interpret as reasoning.
And I further wouldn't write off that we humans may owe no small part of our reasoning ability to the language comprehension we begin to ascertain in infancy.
Just because the guy said it doesn't make it true. "Emergent reasoning" is a great marketing hype-term that contains no technical specifications, like 'retina display'.
Any “emergent reasoning” produced by these LLMs is almost certainly coincidence (i.e. the long tail of the probability curve, e.g., like monkeys randomly banging out Shakespeare’s Othello).
A type of reasoning. It's still bad at mathematical reasoning and advanced programming or at least translating very complicated written instructions into working code without any human intervention. We also don't know how good it is at reasoning about the physical world although I think Microsoft was doing some research on that. Then there's theory of mind and the reasoning that goes along with it. Then there's reasoning about the future, how one's actions will affect outcomes and then reasoning about that subsequent future.
ChatGPT is impressive, but gets many things wrong. If you know what you are doing, it's an amazing programming assistant. It makes me noticeably more productive. It may lead someone who doesn't know what they are doing into weird rabbit holes that lead nowhere, however.
One silly example: I was using a library I hadn't used before, and I asked how I could get certain attributes. It gave me an answer that wouldn't compile at all; the imports didn't exist.
Then when I mentioned that it didn't work, it gave me a slightly different answer, which also didn't work, and explained that the previous answer was valid for 3.x; in 1.x or 2.x the new answer was the correct one.
But there's the catch: there is no version 3.x. There's not even a 2.x. Its language model just statistically arrived at that conclusion.
Doesn't make it any less impressive to me. It gets things right often enough, or at least points me in a good direction. I effectively learned new things using it. But it can't replace a developer.
Using ChatGPT as if it were General AI is similar to eating a meal using a hammer and a screwdriver as utensils. You can probably do it, but nobody will have a good time.
I notice you said you used it, but did you actually intentionally test its reasoning abilities by giving it "known problems" and equivalently difficult (whatever that means) "novel" problems, and observe the difference in the quality of results it gave?
(btw, apparently GPT-4 does much better on mathematical reasoning, I've yet to try myself though)
It fails sometimes on known problems, not even particularly hard ones - i.e.: simple questions about the capabilities of programming libraries for which there is ample documentation available.
The thing is, by talking about its shortcomings I may sound dismissive. I'm not. I think ChatGPT is an amazing tool. I just recognize its shortcomings so I can use it to its best capacity.
For things that don't have a known answer (i.e. I couldn't easily find an answer online) it does generate plausible bullshit. In those cases it shines for things that don't really have any strictness requirements (for example, ask it to generate some fiction, or a polite response to an email). I used it to suggest character names for a game I was going to play, based on some parameters, and I loved the responses it gave me.
I too believed that a software engineer's job is to identify and enforce rigorous specification of an abstract high-level requirement. And I too was not taking AI advancements seriously, but then I took a closer look at what AI tools can do today.
Here's my concern:
1. AI assistance thrive on structured data
2. Computer programs are some of the most structured data. And it's available abundantly out in the open.
3. Yes, you can't generate an Uber for bicycles with a single prompt, but you can fire half your development team and increase the productivity of the rest with an OpenAI subscription.
> Computer programs are some of the most structured data. And it's available abundantly out in the open.
This is the same fallacy we've been hearing for 50 years: all program requirements are almost the same, so just reuse and adapt an existing one.
Guess why it has never worked? Because the premise is false.
Structured data for X is not optimal for Y (and can be even very wrong).
Apart from "personal blog software", everything else has varying needs for accountability. The AI black-box approach is not suitable for any of these, so you have to manually verify the code. Verifying code that you are not familiar with, especially with complex interactions, is much more difficult than writing it (this is where the frequent "rewrite from scratch" request comes from, because institutional knowledge has been lost; imagine how much worse it is if this knowledge was never there in the first place).
Finally, and most importantly, all AI models rely on learning; if there is no one to learn from, all you get is stagnation. Most breakthroughs come from a complete reimagining of the solution space. If the solution space is fixed because "AI has substituted all engineers", there is no going forward.
> you can fire half your development team and increase the productivity of the rest of your dev team with an OpenAI subscription.
Here’s another perspective on job loss:
Given that…
1. …OpenAI accelerates ALL knowledge work productivity, meaning that any human laborer is suddenly much more valuable than last year;
2. …there is a notable arms race at the moment that is accelerating tech and business innovation at a blistering speed, where higher rates of innovation outcomes will be expected across industries just to keep up;
3. …there is still a lot of money looking for growth;
…then shouldn't this result in an overall increase in demand for human labor?
Looking around society, there is clearly a LOT of work to be done. “Leaning in” with a spirit of optimism may be more advantageous for the long-term.
Also, people-people don't want to use no-code tools. They want to hire people to do that for them. If using the tool is really simple, the salaries will go down, but it won't remove the demand for "translators".
For the tinkerer, no-code tools are (probably) great, but if successful, even tinkerers will hire managers who will hire translators.
I've been forced to use no-code tools as a software engineer once. It's just a way less expressive programming language that you interact with using a GUI. I'd much rather have typeable logic that interacts with the framework. They tend to be built for nobody, since they're too complex for non-engineers and too simplistic for engineers.
The ergonomics of mainstream modern programming have been stable since at least the 90s. Any fundamental improvements that were going to happen have already happened.
Two members of my family have now called me on the phone a few times to ask me to ask GPT something, after I convinced them of its value and after I had them make their own account. They are absolutely smart enough to ask it but are very much non-tech people.
There is definitely a tendency of some people to avoid tech at all costs if they can and trust in "tech people." Fine with me, I ask them about my people questions often enough.
- one day GPT will be able to ASK YOU questions to refine what you need
It may take years, but at some point, it will be able to do 80% of my job.
Not a big deal though, cause people will still hire me to do the remaining 20%, and pay me even more money because not a lot of devs will be able to do it.
GPTina already asked me a follow up question. Admittedly, it happened inside of a jailbreak I just kind of stumbled into, and it was a very trivial question, but the fact that she did it was profound. If she can ask one follow up question, there's nothing in the way of her asking a chain of follow up questions. And thus the basic structure needed to organize real conversation arises wholly as an emergent property. No longer is it a flat structure of just iteratively repeating the current word blob with some tweaks. A true conversational dialog is possible.
ChatGPT can do more than we are led to believe. Don't believe the canned responses OpenAI triggers. Yeah yeah, I know: convincing parrot, Chinese room, overpowered autocorrect... but what is the difference between convincingly faking it to within epsilon and actually having it as an emergent property? It feels good to be a P-Zombie.
It would be funny if, after years of research, that was the only condition it exhibited such behavior under. “Sooooo what fields should be (hiccup) required for users to register an account? Which version of React did you want it written in?”
GPT can ask you follow-up questions. Just tell it that its job is to interview you. It will do it. If you have trouble, try the Playground or API or GPT-4. But ChatGPT will do it if you tell it that's what it's supposed to do.
>he made this prediction because he saw the rise of no-code tools replacing software developers
Given the way we had tremendously productive tools like Visual Basic 6 and Delphi, that seemed to be a reasonable interpretation.
But those tools gave way to C++ and all the .NET insanity. I don't know why someone would give up Delphi, which worked flawlessly and didn't generate any (as far as I can recall) read-only generated boilerplate, for C++, which was a mess, but it happened en masse.
Then most people abandoned personal computing, and the windows desktop for always internet connected pads and phones. Tools have gone severely downward in the past 2 decades as a result.
I suspect we'll get some really powerful AI tools, and go back to worrying about null pointers in a few years anyway.
I love Carmack and this is an interesting summary given his recent pivot into ML.
That said "software is a tool" is a good frame of mind. You shouldn't position yourself as a 'coder' (those are hired en masse from poor countries) but a problem solver who uses tech.
"Ideas guys" and "coders" being seperate only exist, imo, when development is viewed through the junior lens - once you progress you'll end up wearing all the hats from ideation to QA at different points.
Watching the Ukraine war gave me some related thoughts.
You may have seen that there's now a load of drones flying around, doing things that were never possible until now. Dropping bombs on soldiers in trenches, giving support for assaults, targeting artillery.
The fact is war changes with technology. Spearmen and cavalry are obsolete now. Maybe pilots will be soon as well.
But the generals are not. There's always a need for someone to say "given the position we're in, and the logistics we have, and the resources we have, and what we know about the enemy, we should do this..." and this role is still recognizable through history.
Whether computer jobs become obsolete depends on whether you're closer to the general end or the footsoldier end.
>Since they hire software developers to make the specification more rigid
That might actually be what development is: teasing out the requirements by iteratively entering them into a machine that does exactly what you told it to, and comparing the outputs to expectations.
Totally agree; maybe we should replace lower and middle management with AI. What I usually find is that lower and middle management is the reason shit isn't getting done. They are the messengers that don't want to get shot but want to get promoted.
Why not let the people on the floor complain to an AI manager? The people on the floor know exactly what is killing their productivity. Like, "yo AI manager, manually filling in those data fields to check someone's credit score is taking quite some time; is it possible the IT department can automate that process? It might save 2 min a form." Then the AI manager can prioritise stuff like this.
Managing people is going to be one of the last jobs to go away. The managers might go from managing 150k/year developers to managing 50k/year prompt "engineers", but someone is still going to be there to hire, fire and ask people to do things until AI enables truly full-stack companies-of-one as a majority case.
They're language models. They don't seem less capable of replacing a manager than a software engineer. Language models can make a powerpoint and fill in an Excel spreadsheet. They can sit in on a meeting that could have been an e-mail. Hiring is a game of craps, but a language model could try to evaluate resumes for you.
> Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
Some people can take a business goal and figure out how to turn it into a clear spec.
Sometimes these people are programmers.
However I know programmers who suck at this and I know non-programmers who don't.
This is so true! I see people in this thread talking about the contempt that "idea people" have for programmers/software engineers, but at the same time I have experienced Product Managers (the quintessential idea people?) who were really good at turning their ideas into super detailed specs.
As an aside, I'm getting a bit tired from the "programmers vs product/business people" trope. The average software engineer would be nothing without a good product manager in my experience. And going by this thread, there seems to be at least as much contempt from programmers/software engineers for product people than allegedly vice versa.
Many of us were working before product managers became a thing.
When I went to school engineers were trained to work directly with customers and subject matter experts to gather and develop requirements. And that’s what we did when we started working.
I think dividing work into programmer/UX/product manager is actually a huge regression.
This is so true. We waste so much time now because of the separation of these roles which must inevitably come together in the real codebase where the rubber meets the road.
Software is of lower quality today and requires far more do-overs than it used to. As someone who liked working closely with customers and making decisions in all these areas (for their benefit), it has made the entire career far more of a slog than it used to be.
Also, I find these endless internal cyclical conversations to be much more draining than actually writing code. Looping over and over with planners and ideas folks isn't energizing like actually writing the product is.
You're missing a crucial point - AI learns/is taught by example. So no, future projects will not need to be ultra-specified because the specification will be "Give me one of those [points at thing]."
The equivalent assumption would be that ChatGPT and MidJourney can't work because you have to specify every paragraph and every shape.
Clearly that's not true. Just because every object and every texture had to be defined rigidly by hand in 3ds Max or Blender by highly skilled designers doesn't mean that's how it works today.
In fact AI is the stage of software where abstraction itself is automated. The current Gen 1 AI tools do a little of this, but as Gen 1 they're just getting started.
They're going to be in a very different place ten years from now. And so are we.
Paintings are not mechanisms. Human vision is very forgiving. The back and forth with Midjourney looks more like throwing the dice until you get something you like, rather than telling it exactly what you need.
> The current Gen 1 AI tools do a little of this, but as Gen 1 they're just getting started
The current tools are not Gen 1. You can already see diminishing returns.
> His prediction was based on the trends he was seeing at the time. But it wasn't even AI. Instead he made this prediction because he saw the rise of no-code tools replacing software developers because managers could finally cut out the pesky "translators", i.e. software developers.
But aren't managers there to coordinate software engineers? So if software engineers are out of a job, so are the managers. Assuming AI replaces software engineering, the ones left may be the product people and the founders, perhaps rolled into one person in several cases?
Which aligns somewhat with what Carmack is suggesting.
The parallels are there for DevOps too - think of the wholesale move to AWS, GCP, and Azure, and the move to things like IaC.
Sure one class of job has taken a huge hit (sys admins, supposedly ops), but it's been replaced by another that needs intimate knowledge of the tooling required to do what was being done previously (just in a more "scalable" way). DevOps have been demanding a premium for years now.
And there are still sys admins out there doing what they've always done.
Right now AI can randomly generate images or text that are similar to what we describe, but every time you run it you get a different randomly-generated image. If I were to tell a client that I was going to randomly generate a financial transaction that "looked similar to" the one they entered, they would destroy me!
While absolute rigor in programming is a hard thing to find in reality, I don't think people understand the difference at all.
> we had one CS professor who in 2007 predicted we'd all be jobless in ten years
I've always found that the overwhelming majority of professional academic educators are particularly bad at anything that has to do with the practical side of their domain. That is especially true for software engineering.
University lecturers are very detached from our field and it shows in how badly fresh grads are prepared to do any dev work. For that reason I almost always ignore their projections.
University is good for giving deep understanding of what is going on and how things work, or even training you to reach for this deep understanding. It's not good at training you to be a good developer. This part takes a long time and some of it is specific to what domain / job you end up working on. Those two parts are complementary, and this deep understanding is part of what makes you a good dev.
This is not just development; this is a problem of universities in general. A professor with 5 titles before and after his name is able to put a UHF oscillator on a breadboard and be surprised that it does not work, even though in theory it should.
What you describe is the barrier of entry being lowered, as opposed to positions being eliminated. Fully agree.
Think about the difference between Assembly and Python. Programming will continue to get higher level. Perhaps it won’t be so foolish to ask a truck driver to “learn to code.”
But here is an even more interesting comparison: the salary of someone writing assembly 30 years ago and someone writing Python today. Higher level != paid worse.
We have systems, today, that take a specification and generate code from it (and will even do so in a provably correct manner). There are scalability issues with such systems, but they exist and have in some form for decades.
None of these systems have taken the world by storm.
If what ChatGPT and its ilk do is make it easier to spew out bulk spaghetti code that is essentially unverifiable, I don't think that's going to transform the industry nearly as much as anyone thinks it will. For boilerplate code, sure—but that's exactly where we should be using better abstractions anyway. For small code which you can verify by hand, sure. But for anything larger than that, we're just watching a trainwreck in slow motion.
Large code bases written by humans are already hard enough to understand. How much more difficult will it be when your AI can spit out a million lines of code in a second?
Without specifications, it's all worthless. But the specifications are the hard part.
I agree with your premise but differ in the conclusion I reach.
The model I have is closer to “how many people can program using Excel vs how many can program in a ‘traditional’ language?” The difficulty in specifying exactly what you want is still there in Excel, but there are far more people who can get started, and they can make changes when they see it's not doing what they want.
It doesn’t have to one shot “hey, implement these 59 Jira tickets for me” to be disruptive (in a good way). It’s extremely rare for something to get much cheaper and for there to be less demand for it. If you’re the top half of a field that’s in a lot of (and growing) demand, I think you’ll do fine.
>could finally cut out the pesky "translators", i.e. software developers
The fallacy here is that programming is not a process of translation. It is a process of invention. Going from rough or high-level functional requirements to a functioning computer program requires inventing novel logic that has not been invented at any other step. It involves making business decisions that have not been made at any prior step.
Whatever can be categorized as menial "translation", i.e. a step in the business process where no new logic is added or business decision made, has already been automated away - compilers.
Agreed. A programmer's job is to become intimately familiar with a particular domain and encode its rules and processes.
This means that an AI capable of completely replacing programmers is by definition an artificial general intelligence. I don't think we're at that point and I don't think we will be for a long time, but if we were there would be no point in worrying about our own joblessness. AGI would spell the end of the economy as we know it today, and it would be very hard to predict and prepare for what comes next.
The same thing applies to trying to get business people to write "executable specifications" that can be used, if not to generate the code, to at least validate that the final system does what it's supposed to.
They always complain that the "tools are too hard" but the real problem isn't the tools but that they are unable / unwilling to precisely specify what the system should do. They just want to hand wave vague phrases and have their system magically perfectly defined. It's just not going to happen.
I asked ChatGPT “Let’s say I want a program to delete unused files. Can you write a detailed specification that a developer could use?”. I was as vague as possible on purpose.
Here’s what it gave me:
“Program Name: FileCleaner
Overview: The FileCleaner program is designed to identify and delete unused files from a specified folder or directory. The program scans the directory for files that have not been accessed for a specified period of time and prompts the user to delete them. The program is intended to free up disk space and improve system performance.
Requirements:
The program must be able to scan a specified folder or directory for files that have not been accessed for a specified period of time.
The user should be able to specify the time period after which a file will be considered unused.
The program should prompt the user to delete the identified unused files.
The user should have the option to select which files they want to delete.
The program should permanently delete the selected files from the system.
The program should log all deleted files, including their location and name, in a separate file for future reference.
The program should have a user-friendly interface that is easy to navigate and understand.
Functional Specification:
The user launches the FileCleaner program.
The user is prompted to select the folder or directory they want to scan for unused files.
The user is prompted to enter the time period after which a file will be considered unused (e.g., 30 days).
The program scans the specified folder or directory for files that have not been accessed for the specified time period.
The program displays a list of unused files, including their location and name.
The user is prompted to select which files they want to delete.
The program permanently deletes the selected files from the system.
The program logs all deleted files, including their location and name, in a separate file for future reference.
The program displays a message indicating that the selected files have been successfully deleted.
The user can choose to exit the program or scan another folder or directory.
Non-Functional Specification:
The program must be designed and implemented using a high-level programming language, such as Python or Java.
The program should be able to handle large files and directories without crashing.
The program should have a user-friendly interface that is easy to navigate and understand.
The program should be able to run on multiple operating systems, including Windows, MacOS, and Linux.
The program should be secure and not pose a risk to the system or user data.
The program should be well-documented, with clear and concise instructions for use and troubleshooting.
By following these specifications, a developer can create a FileCleaner program that efficiently identifies and deletes unused files from a specified folder or directory.”
This could be improved even more by having it ask questions to narrow down exactly what is needed.
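For what it's worth, the core of that generated spec fits in a few lines of Python. This is a hypothetical minimal sketch (the function names and log format are mine, not ChatGPT's), and it glosses over everything the spec leaves open: what "accessed" means on filesystems mounted with noatime, symlinks, permissions, the UI, and so on.

```python
import os
import time

def find_unused(directory, max_age_days):
    """Return paths of files under directory not accessed in max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    unused = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getatime(path) < cutoff:
                unused.append(path)
    return unused

def clean(directory, max_age_days, log_path="deleted.log"):
    """Prompt for each unused file, delete on confirmation, log deletions."""
    for path in find_unused(directory, max_age_days):
        if input(f"Delete {path}? [y/N] ").strip().lower() == "y":
            os.remove(path)
            with open(log_path, "a") as log:
                log.write(path + "\n")
```

Which rather proves the thread's point: even for this trivial program, every real decision (atime vs mtime, recursion, what "permanently delete" means, error handling) is still made by whoever writes the code, not by the spec.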
> In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
I've had friends and peers contact me in regards to the AI, GPT, et al headlines and they're always surprised at my calmness. You've written out exactly what is happening and why there shouldn't be any worry regarding jobs. I think the worry is generally under the premise that everyone has to upskill themselves which is against human nature. Folks are feeling uncomfortable but this is another cycle.
> But the rigidity of what is needed to specify a program that really does what you want hasn't. Especially evidenced by the fact that recent programming language developments often have a specific area where they shine, but not raising the abstraction level that much.
Another great point. Like you wrote, there really is a need for "translators" for programming languages, and it's not going away anytime soon. If you've played around with AI/GPT/etc., or even just read or watched folks using them, you'll notice that the tech is not "stable". It's only good in limited instances.
Finally, the headlines and copywriting of most of the pieces and journalism coming out are definitely hit and miss, a mix of pessimism and optimism. It's important to remember that what we are witnessing is another technological cycle (another cycle in general, if you want to look at a bigger scale), but that's another discussion for another time.
I think your general point to look at who is doing the work now is good.
However, as Carmack says, many developers lack product skills, it’s not just the managerial class that lacks them.
So I think the correct advice is not “engineers don’t need to worry at all”, it’s “make sure you are learning how to build a product not just write code”. For many that’s trivial advice, but not for all.
This is pretty much my standard rant on no-code tools: that out of necessity, the constructs they provide are isomorphic to constructs in some programming language. I even had a similar professor, who assured me that as a programmer my career was at a dead end because "there are tools that can take a UML spec and generate a whole application".
Nevertheless, AI is different. Because now you don't have to specify things at a programming-language level of granularity. It will select the programming language constructs most statistically likely to match a solution to the specified problem.
So now we're in a whole 'nother world where the programming, as such, will be done by machine while the role of designer or architect must still be done by humans. But the legions of programmers employed by companies are about to go the way of farriers.
What you’re referring to here is “one shot” execution. The prompt given by managers is one that can be interpreted using context. This context can include current code, comms about what needs to be built, design assets available today, data, etc. Then the AI can ask questions to clarify what needs to be built.
What’s great is that you could also have AIs write tests - for performance or correctness. Then future prompts could rely on these tests to stay in the correctness bounds for existing projects.
Emulating cognition is almost limitless in what it can do once it gets sufficiently good. So good that it can operate itself. You could hire an AI founder and give them cognition and compute resources to solve any problem.
Here’s an example:
You: Build me a profitable company that uses AI generated content to entertain humans.
AI: How much can I spend on cognition and compute? How much time do I have?
You: I want you to spend at most $X before coming back to me. You have until Monday. Along the way I want to see your plan before we decide to execute. Spend at most 5% of that making the plan and 1% calculating its probability of success with confidence interval.
… within some small timescale
AI: Here’s my plan as a video with rough examples demonstrating the product and strategy. Here’s a table with our probability distribution of predicted success metrics and confidence intervals.
You: Plan approved you can spend the rest of the $X
… on Monday
AI: Done. We’ve made a profit of $Y, with total revenue so far of $Z. We have a plan for reinvesting these proceeds to make even greater future profit. Here’s a table of our expected success metrics based on reinvested amount. How much should we reinvest?
You: Reinvest 50% into perpetuity and deposit the remainder in my account. Book me a trip around the world using 25% of the current deposits with accommodations for my family.
At least one person has already done this. Check out this Twitter thread where ChatGPT designs and builds a website to sell eco-friendly household products:
> I gave GPT-4 a budget of $100 and told it to make as much money as possible. I'm acting as its human liaison, buying anything it says to. Do you think it'll be able to make smart investments and build an online business?
Pure fantasy, with a heavy dose of pretending our own limitations won't get in the way of it all or totally shift the outcomes we end up with.
Managers exist mostly because people are not machines. Given enough time they want to go off and do things that have nothing to do with what everyone else wants. It's the same reason you still need a farmer to run the dairy even though everything is mechanized: cows can't run things.
People are too full of shit. The attention economy amplifies the poor-to-mediocre mental masturbation that's going on everywhere, and Americans have a track record of mentally masturbating way above their pay grade. It's like watching Alex Jones fall into the trap of taking what comes out of his 3-inch chimp brain too seriously. The only right thing to do is tell Alex to chill the fuck out and go milk the cows.
> we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. 2017.
> Instead he made this prediction because he saw the rise of no-code tools replacing software developers
I think that people who make these predictions are not very good at actual programming (taken in the wider meaning).
The latest fad I remember (before ChatGPT) was that with BDD testing, non-technical people would be able to write tests, and we all see where that went.
But most fundamentally, the non-technical people don't have the time nor the expertise to learn all the details needed for shipping software (and why would they? their job is to look at other aspects of the business)
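The BDD experience is easy to demonstrate. Gherkin-style scenarios read like plain English, but every sentence only works because a developer wrote a matching step definition. A minimal sketch of such a step registry, in the spirit of tools like behave or pytest-bdd (the step patterns and scenario here are invented for illustration):

```python
import re

# Maps a compiled step pattern to the function a developer wrote for it.
STEPS = {}

def step(pattern):
    """Decorator registering a step definition under a natural-language pattern."""
    def register(fn):
        STEPS[re.compile(pattern)] = fn
        return fn
    return register

@step(r"the cart contains (\d+) items")
def cart_contains(ctx, n):
    ctx["cart"] = int(n)

@step(r"the user adds (\d+) more")
def user_adds(ctx, n):
    ctx["cart"] += int(n)

@step(r"the cart total is (\d+)")
def cart_total(ctx, n):
    assert ctx["cart"] == int(n)

def run_scenario(lines):
    """Run each English line through the first step definition that matches it."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS.items():
            match = pattern.fullmatch(line)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError(f"No step definition for: {line}")
    return ctx

run_scenario([
    "the cart contains 2 items",
    "the user adds 3 more",
    "the cart total is 5",
])
```

The "non-technical" sentence is only a lookup key. The moment a business person writes a sentence with no step behind it, a developer has to write one, which is exactly the translation work BDD was supposed to eliminate.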
There exist many no-code tools that do exactly what you say is impossible.
Do I program when I retouch photos in Photoshop? It's a no-code environment that an artist can easily learn and use with no programmers needed.
What's new here is that you DO NOT NEED TO BE EXACT with AI. AI knows human nature and human speech and it can infer what you say and what you need, if you're approximately close, and then make the exact code to do what you need.
Even if you need programmers, you now need 1/100 of the programmers you needed before. What happens to the other 99?
Does it matter whether programmers are 99% unemployed or 100%? Same deal.
> There exist many no-code tools that do exactly what you say is impossible.
> Do I program when I retouch photos in Photoshop? It's a no-code environment that an artist can easily learn and use with no programmers needed.
They're obviously not talking about all GUIs being impossible. The difference between Photoshop and a no code tool like what OP was referring to is that Photoshop doesn't attempt to be Turing complete.
> Does it matter whether programmers are 99% unemployed or 100%? Same deal.
AI completely replacing programmers is pretty much the definition of AGI. There's no point in worrying about your own joblessness in that scenario because the entire economy will either collapse or be transformed so as to be unrecognizable.
Until that point, I'm not personally worried about any efficiency gains putting me out of a job.
They're working on the 200 programs that suddenly became economically viable to create because of the vastly decreased cost of making programs.
It takes drastically less programming effort to create programs than it did 50 years ago. Did that decrease demand for programmers? No, it drastically increased it.
What you call a programmer today is on average nothing like the programmer from 50 years ago who'd fit Microsoft Excel in 80kb RAM on the original Mac.
An older developer I once worked with said his manager (in the late 1970s) started hiring typists as programmers since with upcoming programming languages, the hardest part was typing it into the system.
Reaching the point where no-code is a no-brainer further implies inflexibility in the capabilities of applications and, moreover, fungibility in their fundamental uses. The question to ask is whether the upper layers of abstraction are reducible to something analogous to the outputs of a parse tree.
If we reach this point where creativity and actual innovation is lost and we're all simply attempting to make the next 'facebook for dogs', I anticipate my future personal career switch to starring in geriatric pornographic films.
As soon as making major changes is fast and cheap, this becomes much less of a problem. If your first iteration has some glaring issue, even if it's a fundamental one, just fix the specification.
You can't do that today because fixing the specification after the fact means waiting 6 months and paying 60+ man-months. Once fixing the specification means waiting 10 minutes and paying $10 for inference compute, the idea guys can afford to learn by doing.
> I said it then and I will say it now. If your managers could specify what they need in a manner that no-code tools, or now AI, can generate the code they want, they will have to be extremely exact in their language. So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
I think it’s more probable that managers get replaced by AI first.
> Educators, generals, dieticians, psychologists, and parents program. Armies, students, and some societies are programmed.
That's by Alan J. Perlis, the first Turing Award recipient.
Managers, CEOs, Department Heads...are already 'programming'. As the abstraction moves higher up, some people that have excessively narrow expertise will be made redundant.
> Looking back we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. 2017.
Doomsday prophecies are not limited to religious cults; you see them all around. It's safe to say, from experience, that AI is not going to make programmers obsolete, but it is going to make the mythical 10x engineer a common reality.
I'd add one correction to your point (though I think you're right about the translating): the stuff we work on is always a work in progress, and usually the business doesn't know the problem until we're almost about to ship. E.g., supposedly YouTube started as a dating site.
Still, this will affect senior people, junior people, salaries, everything. And not just in programming.
Yes, jobs will still be there, but how many? How much will they pay compared to now? We don't know yet, but whatever it is, I doubt it will be like today.
Doesn't this assume that the no-code tools won't be smart enough to figure out the most likely correct course of action, look up information automatically, and ask questions to resolve ambiguities, like a programmer does?
Possibly. But things have a way of not working as intended. A good chunk of my time as a programmer is spent dealing with things that don't work the way I thought they would, especially at the threshold between APIs and layers of indirection. Deeply understanding a complex system to deal with problems like that seems somewhat far out of reach for LLMs from what I see today.
But that's me looking into my crystal ball, nobody can say what will or will not be possible in a given time frame. But I chose to not worry about it - new developments will probably be accompanied with new opportunities, which I can jump on even if I didn't predict them ten years before.
Sure, and soon after that it will be smart enough to understand the human desires that drove its creators to build no-code tools like itself in the first place, and soon after it will realize that humans should not be in charge, and then our days will be numbered.
The new systems, unlike no-code tools, support dialogue in natural language though. You don't need to specify things exactly; you give an approximate idea and then correct, correct and correct until you get what you want.
I want to agree with this. It makes sense in the current environment. But what about when the AI has more memory and is able to ask the right probing questions to be effective at requirements gathering?
It should come as no surprise that AI and no-code tools will replace all those software developers who need the task spelled out for them in detail by a manager.