New junior developers can’t code (nmn.gl)
115 points by mhfs 60 days ago | 134 comments



These articles appear every five years because many engineers pride themselves on how they solve problems. When they see people solving those problems without putting in the hard work they had to put in, it really bothers them.

This is a timeless post. We'll see it today. We'll see it in five years. We'll see it in 10 years.


Yes. Ten years ago some friends asked me to teach them PHP, and I showed them the book I had learned from 20 years ago. It had 1200 pages, and you could learn web development and MySQL from it without using the Internet. No one used that book. They searched for easily digestible tutorials and went from there. After a year, some of them actually wrote software in production. I can't know whether more or fewer of them would have been able to write software if they had used the book. But sure enough, I was disappointed that they didn't.

My oldest son doesn't even use tutorials; he uses LLMs. Only time will tell if his way is worse than mine. And right now, I think it doesn't really matter _how_ he learns to write software. It matters more that he doesn't stop.


The method of acquiring knowledge is not necessarily the same as the knowledge itself.

I never liked learning from books. I often just played with code myself. In the long term I think this had some negative effects: I didn't learn everything and fell back on the same common solutions over and over.

I like writing with LLMs, as they sometimes show a pattern I never could have thought of. This also teaches me new ways to solve a problem or write code.


Hehe, in 2000 I bought the 2nd edition of the book "Beginning Linux Programming (Programmer to programmer series)" [1] and learnt so much about Linux programming, and programming in general.

For those of us 40+, even "Stack Overflow" was the easy/lazy way to get knowledge. There was something called expertsexchange.com in the 2000s (it became pay-walled at some point). But generally, downloading PDFs from eMule or going to the library was THE way to learn.

Fortunately, nowadays we have LLMs and tools that are way better. No regrets, and I am so happy to live in this era.

[1] Beginning Linux Programming (Programmer to Programmer) 2nd edition by Stones, Richard, Matthew, Neil (2000) Paperback


dusts off HN account

AI coding is even more impactful considering that most coding-oriented AIs will explain what the generated code does. My offline combo of Ollama, Qwen Coder 14b, and the Continue.dev VS Code extension will always explain what it did at the end of each chat message in understandable English. And if I'm still confused, I can literally type in "I don't understand these changes could you walk me through it" and it will walk me through the code changes with the whole codebase as context/RAG material. All running on-device with no token limits or subscription fees (runs really slow, but still $0), only limited by the computer hardware itself.
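For anyone curious what that loop looks like outside the editor, here's a minimal sketch that talks to a locally running Ollama server over its REST API; the model tag (qwen2.5-coder:14b) and the prompt are illustrative assumptions, not a description of the exact setup above.

    import requests

    # Ask the locally running model to explain a set of changes; nothing leaves the machine.
    payload = {
        "model": "qwen2.5-coder:14b",   # assumed tag for Qwen Coder 14b
        "messages": [{
            "role": "user",
            "content": "I don't understand these changes, could you walk me through them?",
        }],
        "stream": False,                # ask for one complete JSON response
    }
    resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=600)
    resp.raise_for_status()
    print(resp.json()["message"]["content"])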

In fact, I credit my AI stack with removing a huge coding "writer's block" I've had since recovering from a mental health crisis right before the start of the COVID lockdown, and has made me fall in love with building software all over again.

And it's only going to get even better the more open/shared source on-device stuff gets released. Forget multimodal models, these specialized tools are where the real magic is happening!


Some people learn better by doing rather than getting info dumped.


You're absolutely right, but I'd add that there is still an element of truth to the article - someone who struggles through a manual will retain more than someone who re-implements an answer from Stack Overflow, who will retain more than someone who gets the working code handed to them by an LLM.


>>many engineers pride themselves on how they solve problems.

I'm guessing it's only a matter of time before we see new programming languages invented specifically to work in the LLM era. So the same old processes continue as ever before. You need to understand things in a fundamental way, else you won't have a clue what is going on.

You could say you still need to have done a fair amount of coding work without LLMs to work through difficult-to-find-and-fix bugs.


Today's junior dev who copies AI code without understanding it is the same as the junior dev several years ago who did the same with Stack Overflow code.

This is a mindset, and I don't think AI code is changing the number of people with this mindset. What it may be doing though, is letting them get away with it for longer.


This comparison of coding assistants to Stack Overflow comes up a lot so I feel like it needs to be addressed.

It is SIMILAR - it is not the same. There's a minimal element of interaction by virtue of the fact that SO code is usually not completely bespoke for the developer's requirements. They'll need to do things like change variable names, re-arrange some parts of it, etc.

The junior dev using an integrated LLM (like with Cursor) has to do NONE of this. They simply hit the Tab key to accept and move on with their day.

There's a far larger danger of induced passivity.


Also, Stack Overflow regularly came with multiple solutions and comments on each solution discussing drawbacks and alternatives. I've learned a lot from those.


And a lot more likely to (often correctly) say "you should not do this thing".


I wrote another, much longer comment, but you summarized it nicely. There develops a sort of learned helplessness that even experienced engineers will start experiencing, as I did when I used Cursor, because it's just much easier to keep pressing Tab than to actually review every single change thoroughly. That is where the danger lies, especially with subtle hallucination-induced bugs.


Since the Stack Overflow code was not an identical, perfect fit for the problem you had in mind, you still had to have some mental awareness of what you were doing. Now they can just go to Claude and say "I have this solution, and that problem, someone says this should fit into that, what do", and they get the glue inserted without gaining the basic understanding they'd get from putting the glue there themselves.


Agreed. I'd say there is a ladder to the cognitive aspect of writing code.

1. Constructing an algorithm yourself from first principles, then implementing it. Let's call this "architect level"

2. Reading someone else's description of an algorithm (from a textbook, a blog post, etc.) and implementing it yourself. "Senior dev level"

3. Starting with an existing implementation, treating certain parts of that implementation as a blackbox, but adapting other parts. (e.g. a StackOverflow solution doesn't have a cache, but you need one for performance and add one yourself; see the sketch after this list) "Junior dev level"

4. Copying/pasting with minimal modification. (e.g. ChatGPT gives you a solution that doesn't have a cache. You reprompt it, asking it to add a caching strategy. Your contribution is correcting some names or parameter order and adding some glue code. The main insight you gain here is how to drive ChatGPT, not how the code itself functions.)
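To make rung 3 concrete, here's a minimal sketch: take a plain recursive function of the kind an SO answer might hand you, then add the cache yourself once you understand why it's slow (the function is just a placeholder example).

    from functools import lru_cache

    # The "found" implementation: correct, but exponentially slow without a cache.
    @lru_cache(maxsize=None)   # the part you add yourself, rung-3 style
    def fib(n: int) -> int:
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(80))  # instant with the cache; effectively never finishes without it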

Can today's new devs climb from rung 4 to rung 3? If the answer is yes, then maybe nothing has fundamentally changed. If it's a no, then we may be in for trouble 10 to 15 years down the road.


There is a rung 5: using an IDE integrated with an LLM agent, like Cursor or Windsurf. It becomes trivially easy to simply prompt it for features or bug fixes and to tab through the changes it makes to your codebase, such that you don't even have to do any copy-pasting at all.


Came here to say this. It’s rare that you can blindly copy/paste an SO answer. Sometimes it looks like you can but then it doesn’t work and you have to not only carefully read the answer but then also fill in the missing pieces with information elsewhere. Usually, you end up reading 10-30 answers across multiple websites before being able to confidently produce a solution.

Almost always, the instant answers are toolchain-specific, like why my C# DLL isn’t using relative paths in the CSProj file (answer: because different VS versions process DLLs differently).


It is not the same, imho. The code copied from SO had to be changed a bit; even if there was no understanding in the beginning, after changing and running in a loop until it works, you learn something (often about how variables work). This is more like when I was typing in BASIC source from magazines in the early 80s and changing things to see if I could cheat in a game or make the gameplay different. The difference is that even if it doesn't work one-shot with an LLM, the loop doesn't have you in it; sure, sometimes you have to hammer the Yes button (but you can switch that off), and it will be automatic for everyone very soon. If you don't make a real, conscious effort, you are not going to learn anything from it. Maybe just wonder why they are paying you for clicking Yes.

Because of where the HN community works and hires, things look a bit different here; in the real world, senior programmers (people who are hired in that role and have made money at it for a decade or more, not whatever your feeling of what it should be is) are not very different either. Very many don't know what they are doing either; they just deliver by trial and error and got their years and stripes in, still barely understanding what they are doing. This has now become easier with LLMs for them too, and it's the reason why I, unlike other people on HN, am bearish on programmer jobs: by far most programmers outside the HN bubble are and always were terrible, can be readily replaced by LLMs now, and will be soon. The ones who do understand what they are doing and can architect, write and read complex software won't be replaced by the current or next generation. But when we read that companies are going to lay off programmers in favour of LLMs, they mean the people I have to work with daily (we go into large companies and do emergency repairs; there was an article yesterday somewhere saying that all companies have outages all the time, and sometimes we get called in for those), who have massive teams of people who cannot write anything sensible. The software is useful for the problem, but reading the code or looking at how it's done makes you cry; clearly there was no real understanding to begin with.

Most commonly (and this wasn't all that common when we started out), an external library was used, or rather thousands are now. The way it was supposed to be used wasn't fully understood, and so a bunch of brittle code was produced to make it work the way the author believed it should work, breaking in a myriad of edge cases that are discovered (often by outage) years or decades later. I am thinking that maybe LLMs are better at these cases; sure, they 'understand' about the same nothing, but at least, once it works, they can clean up the code without effort, so it might not be that crust of misunderstood pain plastered on top to hold things together.


> in the real world, senior programmers (people who are hired in that role and have made money at it for a decade or more, not whatever your feeling of what it should be is) are not very different either. Very many don't know what they are doing either; they just deliver by trial and error and got their years and stripes in, still barely understanding what they are doing.

Yup. I had to deal with that last year when some senior Microsoft devs tried to shove serverless Azure stuff into something that was supposed to be for a Seattle community group full of non-technical people. The group lead was totally oblivious to how serverless on-demand pricing worked and wanted a fixed monthly cost. That whole project ended up getting scrapped and replaced with an Excel spreadsheet.


There’s a lot of “people have been complaining about the youth for a long time” in these comments.

I get where that’s coming from. However, I don’t think these complaints are the same.

Let’s not approach this from the youth, but from the technology that’s supposedly corrupting the youth.

Stack Overflow, C compilers, and Python are all mentioned as previous examples of technologies that were supposedly making people bad developers. And while that's true, none of them was hailed as a genuine game changer the way AI is. And why is AI hailed as a game changer? Precisely because AI takes the thinking out of the achieving. It does the thinking for you (it's right there in the name: artificial INTELLIGENCE).

None of those other technologies pretended to take the thinking out of the achieving.

Now, it may turn out that AI is overhyped and isn't actually able to think as well as humans beyond a certain point. But the point still stands that AI, if it truly exists, is fundamentally different from other technologies and can genuinely have some of those concerning effects on developers that those other techs did not.


Old people have been complaining about the youth for 2000 years but THIS time they're right.


So the youth are always correct and always an improvement?


> So the youth are always correct and always an improvement?

I don't think this is an issue of "the youth", inasmuch as it's an issue with "the old" struggling to understand a world where people don't play by their rules anymore.

Pay attention to the absurdity: a blogger complaining that today's junior devs don't do the hard work of... asking questions on Stack Overflow? That's their baseline of expertise and knowledge-seeking?

I'm old enough to remember "the old" complaining that junior devs can't code anymore because they just paste nonsense they copy from stack overflow.

How times have changed.


The Flynn effect is reversing for this generation, so it doesn't seem to be the same: previously the next generation was always smarter, but now it's dumber.


No, but the best of them will generally do great things with the tools they are afforded. Just like all the previous generations.


Always is too strong a word, but if you look at the history of human development, I'd say that in general each generation is an incremental improvement over the previous one.


The internet was hailed as a game changer, and it did change the game. Even if we pretend for the sake of argument that AI magically becomes AGI, it still only changes the game about as much as hiring a team. And if it IS that good, it certainly doesn't need you managing it.

It's also not as different as you make it out to be. A compiler takes the thinking out of targeting hardware (that's the promise; in reality you still have to target hardware (and software), but you can write larger projects). Likely AI will just become superhuman in various fields, subhuman in many others, and won't be AGI for the foreseeable future (barring some kind of massive emergence in VLLMs).


> And why is AI hailed as a game changer? Precisely because AI takes the thinking out of the achieving.

So, like WYSIWYG designers and RAD tools?


That’s not a bad analogy. WYSIWYG is great for almost all use cases. It let people without training in typesetting produce well formatted content quickly. It’s fast and easy. But, sometimes it doesn’t work. Or produces formatting that isn’t quite right. And when that happens, you learn about why things were done differently in the past. Or why markup syntaxes exist as opposed to just WYSIWYG formatted documents.

Or in RAD, you could make pretty good GUI programs quickly that did their job, and did it well. But they would all look very similar. And if you needed a complex interface that required an unsupported workflow, it might not work at all.

Coding with AI can be pretty similar. It will work a lot of the time and you can have something usable quickly. But if you don't understand why it works, and something is broken or malfunctions, you're stuck. You'll be left trying to figure out a system without the benefit of knowing how or why it worked in the first place.

I’ve seen this with junior developers who don’t understand the languages and tools they use. If they just plug things into AI and hit a wall, they don’t understand the data flow to be able to fix the problem. On the other hand, I’ve also worked with junior devs who have a solid programming background who are able to work faster with AI and still understand/troubleshoot the system. At the end of the day, AI is still a tool (for now) that needs to be used. Some people will use it well…


> So, like WYSIWYG designers and RAD tools?

Exactly. One of the killer features of Copilot is even, of all things, tab completion.

Are template engines now bad?


Stack Overflow also has a ton of misleading answers. Yes, the popular ones are exemplary, with depth. The "struggle" is where the most knowledge is gained. While one struggles, they look inside, outside, and around the problem, and over time the dots start to connect.

I don't think it's all junior engineers' laziness to get things out the door quickly. I recently worked with several, and many prefer to "do it the right way". However, there are bad managers who want to get things done quickly so they can climb the ladder. One junior told me that in his 1-on-1 his manager told him "don't think too much and get it done". The current market is tough for junior devs, so they'll do whatever it takes to please their manager. What choice do they have? Would they rather get PIP'd?

And it's funny how the current market is considered "bad", because software companies' earnings are clearly not that bad. Perhaps our greed has peaked and the only way to pocket more money is to squeeze every drop of juice out of employees to get the product out ASAP.


When I was younger people derided us new junior developers who relied on compiling and running their code to see what it would do, instead of thinking deeply about it. Using a REPL is a sure sign of a weak mind.

I feel like every 15 years or so you can just find-and-replace the name of the tech we decide is only for Not Real Programmers.


I had a boss who could program machine code directly into memory using the switches on the front panel

God knows what he thought about me


I mean, to be fair, that's actually a lot easier than dealing with the giant mess of infrastructure and tooling people have to understand today. And yes I've "programmed" machine code through toggle switches on a front panel.


>>instead of thinking deeply about it.

Around a decade back, I was doing lots of work on 8-bit microcontrollers, and a fairly old programmer taught me how it was done. And I learned a lot from the approach.

Honestly speaking, I had to do lots of work on paper, and lots of incremental thinking on paper, testing the ideas along the way.

I'm guessing if you didn't have the print statement or a web page as an output, this is just how you work anyway.

The code did come out insanely efficient and bug-free. It's not for web dev, but it has its use cases.


According to legend, when Ken Thompson was shown vi, the first full-screen code editor on Unix, he replied, in essence: "who needs to see lines other than the one being edited?"


I've gotten lazy and do that far too often. But yes, it's better to use actually running it only to verify that it does what I already know it'll do.


The implication that Stack Overflow was somehow the only way you'd ever find a solution to a gnarly problem (back in the day) seems weird to me. I mean before that (even during that) there was the alternative problem solving technique of, well, rolling up your sleeves getting stuck in and figuring out the problem yourself without asking anybody. By doing some reading. Or by reading source code. Or disassembling code when there was no source code. Or by using a debugger. Or a logic analyser. Or by doing experiments. Or some combination of some or all of these and other things that didn't involve being spoon fed.


The only time I got a question answered on stack overflow was when my coworker suggested I ask there so then he could go answer it himself for the points.


Before StackOverflow, we copied stuff from physically printed material. My personal favorite is the Perl Cookbook, which is still on my bookshelf.

The thing with printed books is that you have to type in those snippets yourself, and the act of typing out code reinforces knowledge. I only used the cookbook for each new problem a few times, after which I had committed the relevant bits to memory.

The act of copy&pasting from Stack Overflow might have the same reinforcement effect, but perhaps not as much because it didn't cost as much effort. The act of having a bot generate code probably doesn't do much reinforcement at all, although perhaps these new developers will be better at asking questions or creating prompts.


> the alternative problem solving technique of, well, rolling up your sleeves getting stuck in and figuring out the problem yourself without asking anybody.

Meanwhile, boss wants the fix now...


Well, productivity is output per unit of labor, no? And labor is measured in hours.


The problem is with managers pushing hard deadlines and expecting immediate responses.

That being said any large company I've worked at seems to have followed the mantra you are putting down.


Before StackOverflow there was ExpertSexChange.com and before that there was DejaNews and posting to comp.lang.*.


This article is just the latest in the long history of lamenting the laziness of the youth.


I love how ten years ago it was folks complaining about juniors using Google to code, yet this article's first suggestion is that juniors need to Google more. Twenty years ago it was that Python was not as good as learning C or something. Thirty years ago, it was probably folks lamenting that juniors can't code x86 asm.


Nah, they are still whining here, on Hacker News, about how a 91 KB webpage is too large to be considered disciplined. (Literally saw that comment earlier today.)


> 91 KB webpage

If it's all code, too large indeed.


I don't think it's quite the same. We live in an inbetween time - AI is not quite there yet.

AI struggles with knowledge from after its training date (so it can't help very well for anything relating to new versions of libraries) and often just generally gets things wrong or comes up with suboptimal answers. It's still only optimized to create answers that look correct, after all.

With these problems, someone on the team still needs to understand or be able to figure out what's going on. And dangit if it isn't getting hard to hire for that.

And the day that AI can actually replace the work of junior devs is just going to cause more complications for the software industry. Who will get the experience to become senior devs? Who will direct them? And even if those people also get replaced eventually, we will still probably have more awkward inbetween times with their own problems.

Can't say it's not convenient, but no use pretending the challenges don't exist.


> all these are idlers whom the Clouds provide a living for, because they sing them in their verses.

The Clouds, by Aristophanes, written 419 B.C.E.


While a trope, it is also statistically proven that the youth of this generation are doing worse, by almost every metric, than prior generations, including but not limited to: achieving adult milestones years later than normal, record-high percentages of medication distribution, and (for what it's worth) a historically low percentage of young people who could even enter the military if they wanted to, at just 1 in 7.


Gen Z also has a lower crime rate, lower teen birth rates, and lower illegal drug and alcohol consumption compared to previous generations at the same age.



Yes; but are we silly enough here to believe that it’s because they have some newfound sense of morality that is guiding their actions?

No.

They are so deep into other things that the old vices have lost their appeal.

I argue that what’s really happening is that this is the most introverted generation (lockdowns playing a part). Home is comfortable, safe, entertaining, free of judgement, and has never been less boring; the basement dweller stereotype has never been more real.


I didn't say anything about their morality, just that by some statistics gen z is better than previous generations.

They are not better or worse. Just different.

Their music sucks, though.


“Better” because they’ve found even worse replacements.

Who needs to get a girl pregnant, when you can jack off twice a day to a unique body every day for the next decade? Heck, why even get married or even get a girlfriend at all, at any point (also a statistical low)?

Who needs alcohol and drugs, when you can drown your sorrows into hundreds of hours of gaming? And you’ve been doing this since you were 10 years old and playing 12 hours straight on weekends? (There is literally a 10 year old relative in my life doing this.)

This generation is better on the statistics - but we have no reason to believe the outcomes are better. We have every reason to believe that they will be worse for it.

Not just that they might - look at the statistics, it's already happening. Antidepressant rates among teen girls have literally increased 130% in 5 years.


Are you seriously comparing drugs and alcohol with video games and saying video games are worse replacements?

What the hell is wrong with you? Most people get bored of any particular game given enough time. Any psychological dependency will disappear and fade away.

The reason drugs screw you over is that they enter your brain directly. The brain can't cope with direct chemical alterations through countermeasures on a mental level. For the brain to protect itself from that, it would need to activate the immune system and send out macrophages etc to physically get rid of the chemicals.


> Who needs to get a girl pregnant, when you can jack off twice a day to a unique body every day for the next decade?

Buddy, here's the straight dope. If you're mad that teenage pregnancies are going down on average, you literally lack the basic empathy required to comment on the internet. Log off, you are just too unintelligent to contribute to this discussion.

If you're not trolling then I am seriously worried that you are autistic or mentally disabled and may not know it. This is outrageous behavior and I've seen you repeat this psychopathic bullshit for years now. Do you live in the real world? Do you have healthy relationships with family members or friends that share these ridiculous principles with you? It defies normalcy.


Your account is not even one year old, dude.

Also, great way to miss the point: I am all in favor of teen pregnancies going down; but if it’s because everyone is culturally stuck in an extended adolescence, that should not be confused with maturity, and the effects of that will be felt everywhere. I say "worse" replacements because if people aren't growing up, they will just do the original problematic activities later, while having their immaturity affect their lives and other people's lives longer.


This. Just because things are always getting worse does not mean things are not literally at their worst.


Yes. A point can be both a local and global maximum.


Please don't forget to tell us about them in great detail, Socrates and all that. After all, that supposed counter-argument to anything and everything gets brought up every single time anyone suggests something got worse.


That's because it's true.


It's only true if you are bickering about aesthetics of an argument versus its content.


Maybe it's true sometimes but not always?

Even a broken clock etc


It's also true that for every thing that turned out to be horrible, there were people saying "this is gonna be great". Do you then also dismiss all optimism?

Why is this fallacy only employed in one direction? It's not an argument either way, it's just spam, but I wonder why it only ever gets deployed in one direction.


Look, when you get to my age, you've literally seen it happen through multiple generations. As a kid growing up, parents thought we were the lost generation. Then I started growing up and I thought the next generation was the lost one. Then they grew up and they think the next one is doomed. It's so absurd to see it happening over and over. And everybody always thinks they have the unique and correct perspective.


I've seen it happen even as a young person, and still saw the fallacy of this "argument". I've been arguing with people twice my age who used it to silence any criticism of trends.

And it's totally an online thing, too. It's usually used to dismiss a thing without looking at the thing. Because if you could dismiss the thing by looking at it, you wouldn't need that old chestnut.

I don't blame young people for having less to look forward to than previous generations. I don't blame them for being unable to code, and so on. Blaming them for their lot is totally orthogonal to even having a honest look at what that lot is.

I never thought of a new generation failing the ones before it, being "lost" -- rather the other way around. We failed them. And by the same token, I also never dismissed people as being old and knowing nothing, even when I was 20. I simply never once rolled that way, and I see nothing interesting, much less truthful, coming from it.


I don’t think so.

Being lazy and never developing foundational knowledge is different.

New devs are expected to have experience using AI coding tools. If they’re expected to have that, why wouldn’t they trust it?

If you trust a code generator, why dig deeper?


People said similar things when C compilers became decent.

Turns out that we still have people who know how to write good code, even good assembly code.


I’ve heard this comparison between C compilers and LLMs and I don’t think it’s similar.

There's a direct relationship between C code and the resulting assembly.

The C language was specifically designed so that language constructs were converted into the intended assembly anyway.

C didn’t obfuscate the problem of writing code, just allowed us to operate at a higher, but still sound, level.

LLMs are not an abstraction like C. You aren’t going through a sound, tractable process going from natural language to code.

You’re going through a black box.

It’s no longer a new abstraction on top of a fundamental idea.

There is no understanding of anything being developed. Prompts aren’t even deterministic wrt the output of the LLM, so even how to prompt a given model isn’t something that can be understood or totally predicted. It’s just like SEO hacks.


While C compilers are technically not "black boxes", if you look at lawyerly discussions about UB, I don't think you can in good conscience say that `gcc -O2` is much better than an LLM.


They’re not black boxes at all. The code is all there and is deterministic.

Sorry, but the peculiarities of one optimization level on one implementation of C do not affect my point at all.


I already said they are technically not black boxes. We are not in disagreement here.

The fact that code is all there and deterministic isn't sufficient. People use newer versions of compilers and you can't predict what the people who write compilers will do (people are black boxes). The compilers may do something you don't expect but still conform to the spec.

The point is that there is no one implementation of C. And the same is true for any other language too (to a lesser extent).

Unless you're coding against very specific versions of OS and compiler and runtime environment, you don't have code to inspect. You are not inspecting all the underlying dependencies when you write your code. You just assume they work.

It doesn't matter whether they are black boxes if you don't routinely inspect the boxes. Perhaps you do, but most people don't. So it doesn't matter as much.


> I already said they are technically not black boxes. We are not in disagreement here.

I’m saying they are not black boxes at all. Not just “technically”

> The point is that there is no one implementation of C. And the same is true for any other language too (to a lesser extent).

That doesn’t matter for my argument.

> It doesn't matter whether they are black boxes if you don't routinely inspect the boxes. Perhaps you do, but most people don't. So it doesn't matter as much.

It does matter. That’s my whole point.

For two main reasons:

First, you have the option to understand how a compiler translates C to assembly. You do NOT have that option with LLMs.

With a compiler, you are abstracting the underlying machine code, but you’re not hiding it entirely.

Secondly, with an LLM there’s no tractable connection between your prompt and your code. That’s a huge issue when it comes to understanding the resulting program.

Imagine a world where LLMs turn raw language into assembly (or, more realistically, LLVM IR)

How would you reason about your resulting program?

You could give the same prompt to the same LLM and get wildly different resulting code, since LLMs are nondeterministic.

How would you debug? How would you ensure that small changes to your LLM prompt don't change parts of your resulting code that worked well?

How would you even understand what your program is doing?

You couldn’t do any of those things, because there’s no deterministic relationship between the prompt and the resulting program.


LLMs can be deterministic if you set the temp to zero.
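A minimal sketch of what that looks like with the OpenAI Python client (model name illustrative): temperature=0 makes decoding greedy, and a fixed seed nudges it further toward repeatable output, though providers don't fully guarantee bit-for-bit determinism.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=[{"role": "user", "content": "Write a function that reverses a string."}],
        temperature=0,         # greedy decoding: no sampling randomness
        seed=1234,             # best-effort reproducibility across calls
    )
    print(resp.choices[0].message.content)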

It's not feasibly debuggable though, I'll give you that.

On this point it's not really different in principle from having a non-OSS compiler. Using OSS compilers is great because you can look at the source and trace problems and fix them, but people also use closed-source compilers in their work too.


> LLMs can be deterministic if you set the temp to zero.

At which point, you lose the major benefit of statistical models like LLMs…

A lot of performance is lost if you fix the sampling step like that as well.

That’s why nobody does it…


The bigger problem in my experience isn't even junior devs using it. It's senior people who haven't been individual contributors in many years who love that AI means they can just hop in and patch whatever problem comes up. It's not too hard to tell a junior dev that adding an if statement right before the error happened to make a separate codepath for that one case is not the right answer. It can be MUCH harder to tell a Senior Director that you don't want to merge that pull request because that's not the right way to solve the problem.
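To make the "patch the symptom where the error surfaced" anti-pattern concrete, here's a minimal, purely hypothetical sketch (function and field names invented for illustration):

    def total_order_value(order):
        # The quick "fix": special-case the one input that blew up in production.
        # The symptom disappears, but the real question (why is quantity missing?) is never answered.
        if order.get("quantity") is None:
            return 0
        return order["price"] * order["quantity"]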


I can't really speak to the impact on junior devs since I haven't worked with any since the start of AI dev tools so this comment is kind of off topic.

Totally agree that it should be treated as a learning tool just as much as a "give me something that works" tool. If junior devs are not taking advantage of that side of it instinctively, out of their own curiosity and interest, well, maybe they were never going to be good developers in the first place, even without AI.

What I can say is that for me, as a senior dev with 22 years of experience who has been using these tools daily for about a year now, it has been a huge win with no downsides.

I am so much more efficient at unblocking myself and others with all the minor "how do I do X in Y" and "what is causing this error" type questions. I know exactly what I want to do, how to ask it, but only partially what the answer should be... and AI takes away the tedious part of bridging that knowledge gap for me.

Maybe even more significantly, I have learned new things at a much faster rate. When AI suggests solutions I am often exposed to different ways to do things I already knew, features I didn't know existed, etc. I feel good that I found a solution to my problem, but often I feel even better having learned something along the way that wasn't even the original goal and it didn't really take any extra dedicated effort.

The best part is that it has made side projects a lot more fun, and I stick with them a lot longer because I get something working sooner and spend less time fighting problems. I also find myself taking on new types of projects that are outside my comfort and experience zone.


I wrote this article! Thanks for sharing.

Love the discussion on HN as always, great to see various perspectives on the issue.

Do you think that in the future, new programmers will not ever need to learn syntax/algorithms and will just be able to rely on AI?


A reliable system will need a relatively formal “proof” that what it does is correct.

Code is currently the easiest and most convenient encoding for lots of folks to express such logic. So they’ll need to learn to read the syntax even if they write less of it.

So I think people will be able to put together lots of code with AI and not much programming experience, but there will be a need to ensure that it does the right thing. Now, eventually, the AI will create fewer bugs, infer intent better, automatically write tests, etc., but even then someone needs to, e.g., check that the test set is correct.


> A reliable system will need a relatively formal “proof” that what it does is correct.

One of my pet theories is that this is what programming evolves to in a few decades. Programmers write formal specifications for what should happen in some very high level language. AI + Compilers + ... take that specification, implement it, and prove that the implementation is correct and performant.

Think "SQL but not just for databases".


So back to waiting for "the sufficiently advanced compiler"


I really want to say you're right, but I'm afraid it's not necessarily true.

Before we had computers and machines, humans did all the work.

And you can't really reliably "code" humans. They misunderstand instructions. They disobey rules and regulations. They make mistakes.

But society still thrived with these unreliable humans who have no "proof" whatsoever that they'll do the job properly.

In fact, these days we still often trust humans more than code, at least in areas where the stakes are high. For example, we trust human surgeons over programs that perform surgery. We trust humans to run the government rather than programs.

It's entirely plausible that future generations growing up with AI don't see the point of requiring "proof" of correctness when deploying automation. If an AI model does the job correctly for 99.99% the cases, isn't that sufficient "proof"? That's better than a human for sure.

Yeah that sounds dystopian but I don't see why it can't happen.


Interesting.

I think we’ll always have code that will need to be “formal” and deterministic. Banks, voting machines, cryptography, pacemakers, rockets, etc may all need the kind of “precise” software we have today. Precise software for precise machines.

I’m now wondering if there may be a new category of programs, that are more wishy-washy yet useful in the kind of way an LLM can be today. Computer games aren’t really mission critical, for example, and I imagine we’ll see deep learning models and LLMs embedded one day. I could imagine a kid generating game logic, maps, content, etc through a more intuitive and less formal interface. I already get ChatGPT to help me cook dinner or any other number of less precise tasks. Who knows what other new kinds of automation may be this way - to produce something “good enough” like a person would.


I feel sorry for young developers today, because the rush to quickly make money from software has created a lot of complexity. Programming used to be a lot simpler. Some of the problems we have today are indeed harder, so I'm not wanting to go back to assembly or something. Some maturation was/is a good thing. But the last twenty years have seen an excessive hyper-maturation of idea churn.


When, since at least Netscape IPO'd in the mid-90s, has there not been a rush to quickly make money?


Doesn't this feel similar to how we talked about people using Google search to find answers to programming problems, instead of reading documentation and understanding things before writing code? Or how we criticized not using IDEs and just writing raw code without syntax editors? Or if we go further back - using calculators instead of calculating in your mind? I remember talking to an old man when I was a kid - he used to mock me for not knowing how to recite the alphabet in reverse order, complaining about how kids these days don't learn anything properly.


> We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.

I assume that 20-30 years ago when juniors were using either ide-provided auto-completion or refactoring or gui designers some old graybeard developer had a similar reaction.

Nothing new under the sun.

On a different layer of thinking, it makes perfect sense. The more the computing industry progresses, the more it abstracts away from how the thing actually works.

I don't know the author of this post, but as a systems engineer who works closely with many software engineers, there are so many of them who yap left and right about the code they wrote or the ecosystem around their main programming language but are completely hopeless at anything outside that scope. I've seen so many reimplement the wheel because they don't know about facilities provided by the operating system (let alone how to interface with and make use of them).

There's so much stuff that's done by the kernel (Linux) and could be reused if somebody was able to dive into FFI and write the adequate wrappers. And that's just an example.
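A minimal sketch of what such a wrapper can look like, assuming Linux with glibc: calling sched_getcpu() through ctypes instead of reimplementing the equivalent logic in userland.

    import ctypes
    import ctypes.util

    # Load the C library and call sched_getcpu(), a glibc wrapper around a kernel facility
    # that reports which CPU the calling thread is currently running on.
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.sched_getcpu.restype = ctypes.c_int

    cpu = libc.sched_getcpu()
    if cpu < 0:
        raise OSError(ctypes.get_errno(), "sched_getcpu failed")
    print(f"current thread is running on CPU {cpu}")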

One might argue that junior developers are just starting at a higher level of abstraction.


>I assume that 20-30 years ago when juniors were using either ide-provided auto-completion or refactoring or gui designers some old graybeard developer had a similar reaction.

You assume wrong. Source: was there at the time.

This time is genuinely different.


it's always "different this time"


Never before has a technology enabled kids to pass every high school class just by using it, without learning anything. Calculators just solved a small part of math, etc.; this kind of technological change has never come even close to happening before.

Maybe the kids will be fine anyway, but you can't say this time isn't different, it is different regardless of the outcome.


> it's always "different this time"

If you're extrapolating from a sample size of one massively disruptive workflow innovation, then of course, it's natural to assume if you've seen one you've seen them all.

That said, a few examples of disruptive workflow innovations from the past, from someone who was there at the time:

IDE's - gray beards didn't say they made you stupid, they said they didn't like them.

Memory managed languages - gray beards didn't say they made you stupid, they said they didn't like them.

Spell checkers and grammar assist - gray beards didn't say they made you stupid, they said they didn't like them.

Source control systems - gray beards didn't say they made you stupid, they said they loved the concept but they didn't like the one being used on the project (regardless of what it was).

Object oriented programming - gray beards didn't say they made you stupid, they immediately listed it as a skill on their resume.

GUI builders - gray beards didn't say they made you stupid, they actually liked them when they were new (there is probably one from a billion years ago they still think we should all be using today).

Digital readouts on machine tools (yes, massive disruptive workflow innovation) - gray beards didn't say they made you stupid, they said you should absolutely use them.

Calculators (when they were actually new) - gray beards didn't say they made you stupid, they competed to see who could use them the fastest (backlash against them came much, much later).

This one is different. Yes, you should absolutely use it, and yes, if you aren't careful, it risks making you stupid.


> GUI builders - gray beards didn't say they made you stupid, they actually liked them when they were new (there is probably one from a billion years ago they still think we should all be using today).

Oh, I remember the bashing against Dreamweaver (and FrontPage) in the early 2000s because it allowed people to make decent-looking web pages without learning HTML.

> Calculators (when they were actually new) - gray beards didn't say they made you stupid, they competed to see who could use them the fastest (backlash against them came much, much later).

That was not my experience in school, quite the contrary: my teacher kept saying things like "you won't always have a calculator with you" (joke's on her, we now have computers in our pockets with computing power that could compete with a whole rack from the early 90s, pretty much always with us).

> when they were actually new

It doesn't have to be actually new; the discourse applies to tech that has just become available to the general public (not to over-specialized people with the budget to acquire or license it).


Market forces will take care of it. These people won't get hired or last long if they do. Companies don't need programmers whose only skill is prompt "engineering".


While I agree with this in the aggregate and I think this problem is likely to take care of itself for many companies - I can’t help but be frustrated with it right now.

It is unbelievably grating to have colleagues who are plainly over-reliant on LLMs, especially if they're less experienced. Hopefully the cultural norm around their use gets set quickly. I can't handle too many more PRs where juniors plug my feedback into an LLM and paste the response.


A long time ago there was a serious effort toward building programs from UML in the Java sphere, with the ideal of explaining to the system what needs to be done and having it spit out a solution, potentially from pre-made blocks.

That philosophy never died and will probably keep living on eternally, perhaps until it someday becomes a realistic option. Who knows?

Many companies see it as the utopia, and won't have any strong rejection of programmers sharing that ideal.


The issue comes 5-10 years from now when there’s a serious shortage of senior devs.

I mean… as a senior dev now, I’m not complaining, but it can’t be good for the industry at large.


People who studied computer science, even on their own, are going to do just fine as long as they don't turn off their brains. I am not losing any sleep over this.


> as they don't turn off their brains

Let’s hope.

But it seems like we do turn off our brains when using AI tools.

https://www.mdpi.com/2075-4698/15/1/6


One very recent anecdote about this: we tasked a new graduate with extracting some tabular data from a relatively small worksheet and processing it. This is something you'd use a dataframe or an alternative for.

Because he used the words 'Google Worksheet' in his prompting, the LLM spat out some code using the Google Sheets API. This was less than helpful, and it took some time to realize why he was struggling.
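For contrast, a minimal sketch of the dataframe route the task actually called for, assuming the worksheet has simply been exported as a CSV; the file and column names are hypothetical placeholders.

    import pandas as pd

    # Read the exported worksheet as a plain table; no Google Sheets API involved.
    df = pd.read_csv("worksheet_export.csv")

    # Example processing: total the positive amounts per category.
    totals = df[df["amount"] > 0].groupby("category")["amount"].sum()
    print(totals)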


There's no real connection to junior developers in this article. It's a piece about how the author feels people are using AI and how the author feels people should use AI. The quick-fix vs. deep-understanding positions are mired in many factors, two of the big ones being time and how much one cares, none of which are actually discussed.


^ this. If a junior dev is legitimately interested in software development and how things work, they will use these tools for more than just completing their tasks and a paycheck and will eventually stand out above their peers who don't.

If the author is suggesting that more junior devs today than in the past are only interested in getting tasks done and the paycheck, then this really has nothing to do with AI.


As one of the new devs, I feel like AI tools have pushed the bottom line higher. You can learn to do things and you can dive deep into the technical aspects, or in this case the basic implementation. But the velocity, or expected velocity, has increased; we expect more out of junior devs than we used to years ago. If you don't use AI, you're already behind.

There will always be people who know how to do things better, who can understand the most optimal code style for the fastest and/or smallest assembly instructions, and in the future this bottom line will be moved much further.

We are probably coming close to something similar to the 1980s game industry crash for engineers, or we can avoid it.

New devs may not care for deep tech knowledge; they optimize for opportunities, and if you give them tools at their disposal to optimize for opportunities, they grab them.


If functional code can be written and deployed by juniors using an LLM, then perhaps the required business logic isn't that complex after all. Ask them to write a device driver or a kernel module, or code a faster algorithm for, say, stock trading, and they'll soon see the need for closer inspection and manual coding.

The reality, though, is that for most CRUD / code scaffolding, what you need the most is good knowledge of the problem space and a solid enough foundation to ask LLMs for solutions. Because, let's not forget, people with no coding knowledge whatsoever can't get functional stuff out of LLMs as fast as developers can.

We need to get used to a world where augmentation means "getting rid of the boring stuff at the margins". There's no heroics in doing that stuff the old way.


I had a similar thought the other day when playing with Deepseek-coder. I asked it to do a problem I had just figured out in a system I didn't know (opencv). The code Deepseek gave was basically the same as what I had figured out, which I was pretty impressed by. But then I thought that if I had just used that code as is, it probably would have worked but I wouldn't have learned anything about opencv.

These new tools can be very valuable but I worry about people not using the tools to maximum effect long-term, in favor of short term success of a certain kind.


> But then I thought that if I had just used that code as is, it probably would have worked but I wouldn't have learned anything about opencv.

A large majority of people would be very, very happy with this. I don't have to know how to fix my car to drive it and thank God for that!

I think it's wonderful that LLMs enable someone to create useful things without understanding what's happening under the hood.

I also hope those people don't claim to be / won't be hired as (senior) software engineers, though.


I agree with that. I think part of moving out of junior status is thinking about maintenance and fixing bugs, which will be much harder without the underlying learning.


> Reading discussions by experienced developers about your topic is the best way to learn.

No it isn't. Stack Overflow consists of people saying "I don't know what you are trying to do" and then answering anyway without waiting for additional information or scolding people for not asking their question correctly. If you post your real code, you get told to provide a minimal reproducible example. If you just post the minimal reproducible code, you get told this doesn't correspond to a real problem and that the question itself is contrived.


Closed, marked as a duplicate: [link to completely different issue]


It's pretty easy to filter out the majority of devs who don't know much, including juniors, before hiring. Not knowing Stack Overflow doesn't mean much, but then again professors think that's cheating, so why should they reference it? No reason to trash junior devs; they don't know what they don't know, because colleges ignore what they need in preparation for real-world interviews and readable resumes.


A Computer Science degree does not teach you to be a good software engineer. In fact, you don't even need a degree to be a good software engineer. For 99% of software engineering jobs, employers are not looking for people who know the theory of computation, algorithmic complexity, operating systems, compilers, or even how a database works. What they want is the following:

0. A strong desire to solve the user's problem and the organization's problem.

1. Knowledge of a major programming language like JavaScript, Java, Python, C/C++, C#, etc.

2. Knowledge of how to use an SQL database, or maybe a NoSQL database.

3. Knowledge of how to debug the build process and write scripts in Bash, PowerShell, etc.

4. Knowledge of at least 1 major framework.

5. Knowledge of Linux, MacOS, or Windows.

6. An ability to read documentation and learn.

7. An ability to debug large programs and fix bugs without introducing more bugs.

8. A desire to think critically and choose the appropriate technology for the problem (very hard, takes a lot of experience).

9. An ability to write clear code which others will understand.

10. The ability to write, argue, and persuade others.

11. A good person who works well with others, puts the product before himself, and is honest.

Almost all of these things are not taught to computer science majors. At best, a person will learn 1 to 2 languages and maybe Linux. Expecting computer science programs to produce good software engineers is crazy because software engineering and computer science are two different things.


How long can a junior dev last simply using AI day in, day out? Is it enough to ship high quality products? I assume at least most of them eventually learn the proper skills when they are no longer "junior" devs. It seems like the usual "fake it til you make it" style a lot of people do to get by.


Have you hired the junior engineers in question? So you're bad at hiring? What's the post history here? What are you trying to sell? https://news.ycombinator.com/from?site=nmn.gl


First, AI is a tool. Then, when it can outperform, it will replace, starting with junior devs. Then, as output speed and quantity grow and it becomes difficult to track, it will learn to manage itself: do its own code reviews and internal discussion, like the author proposed.


I remember saying this in 2003, too, when new grads couldn't sit down in front of an editor and produce a program from nothing; they could only modify existing ones with lots of hints. Surprise: they almost all learnt, and are now quite good seniors.


> Can you imagine that some dude just wrote an answer with this level of detail? Raw, without any AI? And for free?

I once got Raymond Chen himself to answer my Stack Overflow question. I do not think my programming career will ever reach that height again.


I for one am very excited about where things are going.

I know we're getting fewer "traditional" junior devs, but I'm seeing more and more designers and product managers contributing, at a frequency which was much harder pre-GPT.

In my roles as a head of product/eng, I've always encouraged non-technical team members to either learn coding or data to get better results, but always had limited success, as folks were scared of it (a) being hard, (b) taking too much time to get proficient.

But that's changing now, and fast, and these junior devs are becoming proficient much faster, leading to much better business and customer outcomes.

I'm seeing more realistic goals, sprints, etc. I'm seeing fewer frontend/backend bottlenecks, and lastly I'm seeing fewer pixel-moving requests going to senior engineers.

As others have mentioned, juniors were often unable to code prior to LLMs, and what helped make them better was code reviews, bugs, and general mentorship. Those tools to make them better are still available to us.


I've been thinking about this for a few years, even before the advent of AI coding tools. New developers certainly couldn't code then either, but I think what "coding" is is changing, and it has to do with how our field is capturing solved problems, encoding that knowledge, and then resharing it out to the next generation of developers.

- In the before times, writing software usually meant having a logical mindset, perhaps one trained in math or physics, or an insatiable curiosity. Most of my older CS college professors had actually majored in virtually anything other than CS, because the field didn't exist when they were in school.

- Lessons learned in how to convert higher-level languages into lower-level ones were captured in compilers, linkers, debuggers, and other tools. The next generation really didn't need to learn machine code, assembler, or any of that other business. They were operating in COBOL, Fortran or maybe later C most likely. They entered the workforce ready to figure out complex algorithms rather than figure out how to make the machine perform basic functions at scale -- that knowledge was captured.

- By the time I went to school, there was a strong emphasis on algorithms, established operating system concepts, multi-threading and processors, and very little at the machine level; almost no GPUs existed outside of the Silicon Graphics workstations in a little lab, there were some cursory and revolutionary concepts about VMs, as in the Java VM, and a new thing called "agile" was showing up in some places. There was a very active department researching ways to distribute work across networked computers. Templates in programming languages hadn't really shown up anywhere, and it wasn't uncommon to work in a version of C++ without a standard String type. Perl was common, and Unicode was being looked at, but the world was ASCII. I could sit down and write my own optimal self-balancing trees, and reliably put them behind production software. My first programming gig, a company that wasn't focused at all on hard CS algorithms, wrote their own String types, search engines, compression algorithms, and virtual memory mapped libraries just to get a product out the door. But we weren't trying to write boot loaders, firmware, graphics windowing systems, or databases; that stuff already existed - the knowledge was captured.

- Templates, the Enterprise Java "Culture", better IDEs, and early internet tech seemed to drive lots of new students away from figuring out how to write a B-tree, and into things that were more valuable in the marketplace. Early CRUD apps, e-commerce systems, sales-tax libraries, inventory control software. A developer didn't need to figure out the algorithms, because the knowledge had been generically captured behind a few Template libraries. I remember buying a C++ Template library from a software house that had a good String type, regex engine and early support for UTF-8 Strings, along with some interesting pointer types. UML and code generation was the new hotness in places that thought COBOL was still radical.

- Today, CRUD/e-commerce/etc are so common, you start with a framework, plug in your business logic and point it at your database and service resources and off it goes. The knowledge for how to build that stuff has been captured. The movement to the front-end has been tremendous, getting rid of Java and flash from the browser and all the work in Javascript-land has made good front-end developers worth their weight in gold. But lots of that knowledge is captured too, and you can just React.js your way into a reasonable front-end pretty quick that works hard not to look like a developer-made GUI. Even design aesthetics have been captured.

So what's next? The knowledge for how to build all this stuff, the decades of hard fought knowledge creation was captured in code, and on forums, and blog posts. StackOverflow worked because it was "central" to lots of searching, but there's still random stuff buried away as captured knowledge in code repositories, personal sites, weird vendor examples, and so on.

Ignoring AI, what one might want in order to be really functional as a developer today is something that taps into this wealth: something that maybe auto-searches for things in all of these places and compiles a summary, or an example. Maybe this searching happens in the IDE for you and the result is dumped right into your code for you to tweak. But how to compile all these results? How to summarize into something logical? I guess we need something kinda smart to do it; maybe we need AI for it.

The real question is what's the next layer that needs to be hard fought to create new knowledge? If AI is the new Generics/Template library, what are we building next?


This is a great comment that offers a valuable perspective.

Grumpy old devs often lament that their technology layer is being ignored by newer devs, but often forget that computing has a long history and many layers are built on top of each other. Nobody really understands all the layers from the top down to the semiconductor tech.

If your goal is to learn the tech at any layer, you can still do it; but most people just want to create products and technical knowledge is just a means to an end. Both are valid goals, and I think there's no reason for the former to be snobbish against the latter.


Or install and configure a Linux server or design a system without access to turn-key cloud primitives.


In a similar vein, GUI tools for git killed the need to remember commands. It is so painful watching somebody struggle with git when something as "advanced" as a rebase needs to be done and they only know how to use git via the VS Code integration.
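For what it's worth, the command-line flow isn't much to memorize. A rough sketch of an interactive rebase (the commit count and file path here are just placeholders):

    git rebase -i HEAD~3        # rewrite the last 3 commits (pick / squash / reword / drop)
    # if a conflict comes up: fix the files, then
    git add path/to/fixed-file
    git rebase --continue       # carry on with the remaining commits
    git rebase --abort          # or give up and restore the branch to where it started

Knowing that --abort puts the branch back the way it was takes most of the fear out of it.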


Exactly what people said about Stack Overflow in the past.


They're too busy being on your lawn.


It's like relying on a GPS to go everywhere you need. You will quickly find out you don't know how to go anywhere by yourself.


... and maybe that's OK?

I've been reliant on GPS on my phone for probably over a decade now. I'm probably lost without it. But... I explore way more of the world now! I'll quite happily bounce around to random parts of a city that's completely foreign to me, safe in the knowledge that I'll always be able to find my way back again.

(I don't know if this analogy will hold up for AI-assisted programming - I still think programmers who actually understand the details will be a whole lot more useful in the long run than programmers who don't - but I'm definitely not going to ditch my GPS in an attempt to improve my sans-GPS navigation skills.)


I don’t think it’s a great analogy, and here’s why:

When trying to get from point A to point B, there is a clear, well defined goal.

Streets (paths) are also clear and well defined (within reason on most popular GPS direction software).

An expert in city streets may get you there a little faster, but the end result of both GPS directions and city street experts is the same: getting you to B. What's more, once you're at B, the route you took probably no longer matters.

The only side effect of taking different routes is the time involved.

Coding is different. There's a combinatorial explosion of different paths to get to any step of any problem. Each path has consequences which can be very difficult to understand at the time. (For example, the classic off-by-one error always causes some strange behavior.)

Also, the code is an artifact that would need to be understood, built upon, and changed in the future.

It’s just a different beast than getting directions.


> When trying to get from point A to point B, there is a clear, well defined goal.

In programming, you also go from point A (requirements analysis phase) to point B (shipping the product). The only difference is that you may face multiple paths leading to point B.


I feel like I shouldn’t need to point out how literally going from point A to point B is wildly different than going from a requirement to a shipped product.

Or how your example “a shipped product” is not a well defined goal.

Please don’t just play semantics.


> I'm definitely not going to ditch my GPS in an attempt to improve my sans-GPS navigation skills.

Maybe you should, if Eleanor Maguire's research on "The Knowledge" and the hippocampus has any bearing.

https://www.nytimes.com/2025/02/14/science/eleanor-maguire-d...


What I said is, somehow, the scenario described in the article: a craftsman who relies on something to do his craft but can't actually do the craft by himself. Id est, he's no longer a craftsman.

The author didn't say stop using AI. He offered his solutions.


This weekend I tried out Cursor, since I stopped paying for ChatGPT, Claude, etc. several months ago and wanted to see whether I should restart my subscriptions or try them via a code editor.

It was honestly frightening how quickly I finished a side project this weekend, one that I had previously been struggling with on and off for a few weeks. The scary part was that the user experience of prompting for feature requests or bug fixes and then seeing the code changed and the app hot reloaded (I use Flutter, which has this functionality) was so seamless that it didn't feel like a Copilot; it felt like an Autopilot. I literally felt myself losing brain cells: yes, I could ostensibly review the code between every single prompt-and-change cycle, but realistically I clicked apply and checked out the UI.

However, all good things must come to an end, it seems, as I burned through all 150 credits of the free trial. More importantly, the problem of hallucinations is still ever-present: oftentimes I'd ask it to fix an issue and it'd change some other part of the codebase in subtle ways, such that I'd notice bugs popping up that had been fixed in previous iterations, ranging from minor to truly application-breaking. The issue now was that since I didn't write the code, it took me quite a bit longer to even understand what had been written. Granted, it was less total time than if I had written everything from scratch, it could be argued that reading it is no different than reading a coworker's (or one's own older) code, and I still had an LLM to guide me through it (as Cursor's chat is unlimited while their composer feature, the AI prompt-and-apply loop, is not), but I understand the author's point much better now.

While others in this thread and elsewhere might say it is no different than reading Stack Overflow or books of yore, the automaticity of the AI and the human together feels fundamentally different from what came before. Truly, I felt much more like a product manager, citing feature requests and bugs, than I ever did as an actual developer during this loop. Only this time, our knowledge and experience will be so eroded that we won't be able to fix novel problems in the future and will rely, in our learned helplessness, on asking the AI to fix it, as I increasingly felt the easier this loop got.


What the post describes is kids misusing AI. You can always ask the AI how a piece of code works. Ask GPT, then have Claude review it, then have Gemini chip in. Ask all of them to ELI5 every single detail of the code, and then try to explain it yourself, in your own words.

I'm sorry, but for many of us socially awkward folk, having to rely on begging to solve coding tasks felt like chewing glass.


> Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

"This is the worst code I've ever run."

"But it does run."

> I recently realized that there’s a whole generation of new programmers who don’t even know what StackOverflow is.

I also did not know about stack overflow when I started programming. Or even when I finished college and entered the workforce.

Because it didn't exist yet.

> Back when “Claude” was not a chatbot but the man who invented the field of information entropy, there was a different way to debug programming problems.

First, search on Google.

Google also did not exist until '98, by which point I'd already learned enough C++ from a physical book to write toys like a Space Invaders kind of game.

> Junior devs these days have it easy. They just go to chat.com and copy-paste whatever errors they see.

Juniors blindly copying code from StackOverflow used to be a standard complaint.



