
What I was trying to convey is that I'm not at all sure you'll need a programmer (i.e. someone with the mindset and skills of the person we'd call one today) to talk to the AI. The AI may just be able to understand the sloppy description that the average user (or the average product owner) is able to communicate. And when/if it can't, it will be able to either ask clarifying questions (like "what do you mean by account?") or just generate something and let the user figure out whether it does the right thing for them. If not, they can ask for changes or explain what they think was misunderstood.

And my (weak) conjecture is that we may not need an AGI/human-level AI for this. In which case we might still want some software to be written. But you're right, I'm also not sure there will be a point where we still want software but already have very intelligent machines. And while saying that programmer will be the last technical job doesn't sound like a strong claim to me, if I had to guess the last job, I'd say it would probably be teachers :)

> The job will evolve, but there will always be someone who tells the computer what to do.

Which may very well be the users, if the machine is able to follow a conversation. Now the thing that may be the showstopper, for now, might be exactly this: that the machine needs to be able to hold a context for long enough (over multiple iterations of back-and-forth communication). As far as my limited knowledge goes, this is something they have not yet figured out.
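
Just to make concrete what "holding a context" means mechanically, here is a minimal sketch in Python. The generate_reply function is a stand-in for whatever model would actually be called (it's an assumption, not any real API); the point is only that the whole back-and-forth has to be re-fed on every turn and has to fit in whatever window the model can attend to.

    # Minimal sketch of a multi-turn loop; generate_reply is a
    # hypothetical stand-in for an actual model call.
    MAX_CONTEXT_CHARS = 8000  # stand-in for the model's context window

    def generate_reply(prompt):
        raise NotImplementedError("stand-in for an actual model call")

    def converse():
        history = []
        while True:
            user_turn = input("user> ")
            if user_turn == "quit":
                break
            history.append("User: " + user_turn)
            # The whole conversation is re-sent every turn; once it no
            # longer fits, older turns fall off and the model "forgets".
            prompt = "\n".join(history)[-MAX_CONTEXT_CHARS:]
            reply = generate_reply(prompt)
            history.append("Assistant: " + reply)
            print(reply)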

The "our kind will always be needed" is exactly the fallacy I was talking about and the one that the practitioners of every intellectual professions seem to have. They think they will be needed to interface between the machine (whether it's a legal or a medical system) and the client. Because they assume that the machine will not be able to communicate only to process the existing knowledge base.

But again, the whole field evolves through surprising leaps. Yep, Copilot is not insanely useful, but it's already amusing/frightening enough. It seems to pick up context from all over the code base. Sometimes it goes totally wrong and generates gibberish (I mean non-existent identifiers that make sense as English expressions but don't exist anywhere in the code). But quite a few times it picks up the intent (the pattern/thought pattern) even if it is spread out over a file (or several).
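
To make that "plausible but non-existent identifier" failure mode concrete, here's a made-up example (not an actual Copilot transcript): given a file that only defines the first two functions, a pattern-following completion can invent names that read like sensible English but exist nowhere.

    # Hypothetical illustration, not a real Copilot transcript.
    def get_user_by_id(db, user_id):
        return db.query("SELECT * FROM users WHERE id = ?", user_id)

    def get_user_by_email(db, email):
        return db.query("SELECT * FROM users WHERE email = ?", email)

    # A pattern-following completion for the next function might be:
    #
    #   def get_user_by_username(db, username):
    #       return db.fetch_user_record(normalize_username(username))
    #
    # fetch_user_record and normalize_username sound perfectly sensible,
    # but nothing with those names exists in the code base -- that's the
    # gibberish case. When it goes right, the completion instead mirrors
    # the db.query pattern above.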



I imagine I'll be editing this a bit, so I apologize if there are obvious typos left from any changes I make while I'm thinking. Sorry for the mini-essay. :)

Also, these points are not to be taken separately. They're part of a broader argument and should be treated as a unit.

1. Programming competitions are deliberately scoped down. Actual day-to-day work consists of meeting with stakeholders, conducting research, synthesizing that research with prior knowledge to form a plan, then executing. This work skips to the plan synthesis, relying on pattern-matching for the research component.

2. This current work, even if refined, would be insufficient to conduct daily programming work. This is just an extension of point 1; I acknowledge that you're talking about the future and a hypothetical better system.

3. The components required for your hypothetical programming bot are the components not covered by this work.

4. Context-aware/deep search tools are still very incomplete. There are hints that better user-intent models are around the corner (e.g. companies like TikTok have built models that can adroitly assess users' intents/interests), but I've seen no work on bringing those models to bear on something more nebulous like interpreting business needs (though I also haven't been actively searching for it). Also, Google, which dumps a large amount of money into search every year, is about the best we have, and it's definitely far from what we'd need for business-aware programming bots.

5. Automating the research step of the programming process will require better tools.

6. Conversational AI is still very incomplete. See Microsoft's Tay bot for an example of what goes wrong at scale. People, in general, are also not very self-aware during discussions, and even very intelligent people get locked into a particular mindset that precludes further conversation. If users fight the bot by insisting that what they said should be sufficient (as they definitely do with other humans), that could pollute the bot's data and result in worse behavior.

7. Automating the meeting-with-stakeholders part of the programming process will also require better tools.

8. By points 5 & 7, critical domains still require more research. There is ongoing research in fields like Q&A, even some commercial attempts, but they're focused on mostly low-level problems ("construct an answer given this question and some small input")[0].

9. Advanced logical reasoning is advanced pattern matching + the ability to generate new reasoning objects on the fly.

10. Current systems are limited in the number of symbols they can manage effectively, or otherwise use lossy continuous approximations of meaning to side-step the symbol issue (that's a rough approximation of the truth, I think). See [1] for an up-to-date summary of this problem. Key phrase: the binding problem in neural networks. (A toy sketch of the issue also follows the list.)

11. Current "reasoning" systems do not actually perform higher level reasoning. By points 9+10.

12. Given the rich history of, and sustained investment in, these fields (points 4, 6, and 11), it is unlikely that there will be a sufficiently advanced solution within the next 15-40 years. These fields have been actively worked on for decades; the current influx of cash has accelerated only certain types of work: work that generates profit. Work on the core problems has kept going at largely the same pace as before, because the core problems are hard -- extra-large models can only take you so far, and they're not very useful without obnoxious amounts of compute that aren't easily replicated.

13. Given the long horizon in point 12, programmers will likely be required to continue to massage business inputs into a machine-usable format.
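
Referring back to point 8: to give a sense of how small the "small input" in [0] really is, a bAbI-style sample looks roughly like this (paraphrased) -- a handful of supporting facts, one question, a one-word answer.

    # Roughly what a bAbI-style sample from [0] looks like (paraphrased):
    sample = {
        "facts": [
            "Mary moved to the bathroom.",
            "John went to the hallway.",
        ],
        "question": "Where is Mary?",
        "answer": "bathroom",
    }
    # The input is a few sentences, the answer is a single word, and the
    # needed facts are handed to the model up front -- nothing like
    # interviewing a stakeholder to discover what the question even is.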
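
And referring back to point 10: a toy numpy sketch (a standard illustration, not taken from [1]) of why lossy continuous representations struggle with binding. If a scene is represented as an unordered sum of feature vectors, "red square and blue circle" becomes indistinguishable from "blue square and red circle", because addition throws away which color was bound to which shape.

    # Toy illustration of the binding problem with a bag-of-features
    # representation.
    import numpy as np

    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=64) for w in ["red", "blue", "square", "circle"]}

    scene_a = emb["red"] + emb["square"] + emb["blue"] + emb["circle"]
    scene_b = emb["blue"] + emb["square"] + emb["red"] + emb["circle"]

    print(np.allclose(scene_a, scene_b))  # True: the two scenes collapse
    # Symbolic structures keep the pairing (red, square) explicit;
    # getting neural nets to do the same at scale is the open problem.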

The horizon estimate in point 12 is a gut estimate and assumes that we continue working in parallel on all of the required subproblems, which is not guaranteed. The market is fickle and might lay off researchers in industry labs if they can't produce novel work quickly enough. With the erosion of tenure-track positions in higher education (at least in the US), it's possible that progress could regress to below what it was before this recent AI boom.

[0]: https://research.facebook.com/downloads/babi/

[1]: https://arxiv.org/pdf/2012.05208.pdf
