
I'm loving the new programming. I don't know where it goes either, but I like it for now.

I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.

It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole stackoverflow search + think, recently it became a copilot hint, and now... Claude simply does it.

For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.
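
To give a flavour of what "a mock stock exchange" means here, the core matching logic is small enough to sketch. This is a toy illustration only (limit orders, price-time priority on a plain list, invented names), not the commenter's actual code:

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str    # "buy" or "sell"
    price: float
    qty: int

class MockExchange:
    """Toy mock exchange: limit orders only, best price matched first."""
    def __init__(self):
        self.bids = []    # resting buy orders, highest price first
        self.asks = []    # resting sell orders, lowest price first
        self.trades = []  # (price, qty) fills

    def submit(self, order):
        book, opposite = ((self.bids, self.asks) if order.side == "buy"
                          else (self.asks, self.bids))
        # Match against the opposite side while prices cross
        while order.qty > 0 and opposite:
            best = opposite[0]
            crosses = (order.price >= best.price if order.side == "buy"
                       else order.price <= best.price)
            if not crosses:
                break
            fill = min(order.qty, best.qty)
            self.trades.append((best.price, fill))
            order.qty -= fill
            best.qty -= fill
            if best.qty == 0:
                opposite.pop(0)
        # Rest any remainder on our own side of the book
        if order.qty > 0:
            book.append(order)
            book.sort(key=lambda o: -o.price if order.side == "buy" else o.price)
```

A real one would add order IDs, cancels, partial-fill reports and market data, but this is the skeleton strategies get tested against.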

Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.

Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.

So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.

What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.



> What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.

Exactly my thinking. I'm nearly 50, with more than 30 years of experience in nearly every kind of programming. Like you, I can easily architect/control/adjust the agent to help me produce great code with a very robust architecture. But I do that out of my experience, both in modelling (science) and programming. I wonder how junior devs will be able to build experience if everything comes pre-cooked from the agent. Time will tell.


I feel like we've been here before. There was a time when, if you were going to be an engineer, you needed to know the core equations, take a lot of derivatives, perform mathematical analysis on paper, get results into an understandable form, and come up with solutions. That process may be analogous to what we used to think of as beginning with core data structures and algorithms, design patterns, architecture and infrastructure patterns, and analyzing them all together to create something nice. Yet today, much of the lower-level mathematics that was previously required no longer is. People are still trained to know those tools exist and where they are used, but the tools now form the backbone of systems that automate the vast majority of the engineering process.

It might be as simple as creating awareness about how everything works underneath and creating graduates that understand how these things should work in a similar vein.


Exactly. Right now I am helping a big oil and gas company get their process simulation software to converge correctly on a big simulation. Full access to the source code; I need to improve the Newton method in use with the right line search, validate the derivatives, etc.
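
For readers unfamiliar with the technique being described: a damped Newton iteration with a backtracking line search, plus a finite-difference check to validate analytic derivatives, can be sketched roughly like this. Purely illustrative, not the solver in question:

```python
def newton_backtracking(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0, damped with a simple backtracking
    line search: halve the step until the residual actually shrinks."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = -fx / df(x)          # full Newton step
        t = 1.0
        while abs(f(x + t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5                # backtrack until we make progress
        x += t * step
    return x

def check_derivative(f, df, x, h=1e-6):
    """Validate an analytic derivative against a central finite
    difference; returns the relative error."""
    fd = (f(x + h) - f(x - h)) / (2 * h)
    return abs(fd - df(x)) / max(1.0, abs(df(x)))
```

Production codes use fancier merit functions and Armijo/Wolfe conditions, but the shape of the fix is the same: don't take the full Newton step when it makes things worse.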

I do think that for most people you are right, you do not need to know a lot. But my philosophy was always to understand how the tool you use works (one level deeper). Now, though, the tool is creating a new tool: how do you understand the tool that has been created by your Agent/AI tool?

I find this problem interesting; it is new to me, and I will happily watch how our society and the engineering community evolve with these new capabilities.


I don’t know how seniors will cope. You seem to have a solid enough understanding that you can make use of AI, but most seniors on HN struggle with basic tasks using it. Juniors are likely to outpace them quickly, though potentially without the experience or understanding.


They cope just fine. If you’re thinking about the stubborn ‘AI is just a stochastic parrot’ folks, IME it takes just one interaction where it clicks and nothing is the same afterwards. Perhaps we need GPT-5 or Gemini 3 for some of those people, but that’s what, a year out? Or a month?

Juniors need experience to know if the machine is going in the right direction or guide it. That experience is now nigh impossible to get, nobody has the time for apprentices now. It’ll take some brave management to pave a way forward, we don’t know what it’ll be exactly yet.


Really well said; it's a large amount of directing in addition to anything else.

To continue this thought: what could have been different in the last 10-15 years to encourage junior developers to listen more to those who were slightly ahead of them, in situations where they might not have?


I also am enjoying LLMs, but I get no joy out of just prompting them again and again. I get so incredibly bored, with a little side of anxiety that I don’t really know how my program works.

I’ll probably get over it, but I’ve been realizing how much fun I get out of building something, as opposed to just having it be built. I used to think all I cared about was results, and now I know that’s not true, so that’s fun!

Of course for the monotonous stuff that I’ve done before or don’t care a lick about, hell yeah I let em run wild. Boilerplate, crud, shell scripts, CSS. Had claude make me a terminal based version of snake. So sick


This is interesting. Maybe slow it down a bit? What I've found is I really need to be extremely involved. I approve every change (claude-code). I'm basically micromanaging an AI developer. I'm constantly reading and correcting. Sometimes I tell it to wait while I help it make some change it's hung up on.

There's no way I could hire someone who'd want me hovering over their shoulder like this.

This sounds tedious I guess, but it's actually quite zen, and faster than solo coding most of the time. It gives me a ton of confidence to try new things and new libraries, because I can ask it to explain why it's suggesting the changes or for an overview of an approach. At no point am I not aware of what it's doing. This isn't even close to what people think of as vibe coding. It's very involved.

I'm really looking forward to increasing context sizes. Sometimes it can spin its wheels during a refactor and want to start undoing changes it made earlier in the process, and I have to hard correct it. Even twice the context size will be a game changer for me.


I've always felt building something was close to artistry. You create something out of your thoughts, you shape it how you want, and you understand how it works down to the most minute detail. The number of times I've shown something seemingly simple to someone, gone "but wait, this is what is actually happening in the background!", and started explaining something I thought was cool or clever: those are great memories to me. AI is turning Renaissance paintings into mass-market printing. There's no pride, no joy, just productivity. It's precisely those repetitive, annoying tasks that lead you to create a faster alternative, or to think outside the box and find different ways. I just don't get the hype.


> There's no pride, no joy, just productivity.

This is exactly what bothers me about the present moment. Not that the pride of craftsmanship is everything, but dialing it down to zero with extreme pressure to stay that way is a bit sad.

But we’ve clearly gone through this with other mediums before, perhaps someday people will appreciate hand written code the way we appreciate hand carved wood. Or perhaps we were all wasting time in this weird middle ground in the march of progress. I guess we’ll find out in 5-15 years.


I think the audience who can appreciate handcrafted code will be vastly smaller than the audience who appreciates hand carved wood.


What about the audience which appreciates software that actually works without one billion subtle bugs and devastating security issues, and which also can be built upon and extended?


Maybe not possible with today's SOTA AI but I have no doubt it's within reach.


> There's no pride, no joy, just productivity.

I think it’s more nuanced than that.

Not every project one does will be or should be considered art or a source of joy and pride.

The boring CRUD apps that put the “bread on the table” are just that, a means to an end, they will not be your main source of pride or fulfillment. But somewhere in between there will be projects where you can put all your heart in and turn off that LLM.

Think of the countless boring weddings playlists a DJ has to do or the boring “give me the cheapest” single family homes an architect has to design.


Well, that's a good example. Why would you get a DJ when you can say "Siri, play Weddings Classics"? There's no humanity involved, no skills to read the room or cater to audiences. So you get a DJ; what if your DJ thinks his job or your event is boring and generates the same playlist you could have done yourself? You need passion, you need interest, you need to be involved. Otherwise every job becomes tedious, and humanity dies.


One thing that differentiates a (good) DJ from a playlist is that a DJ will react to the crowd. That'll influence song selection, mixing, live looping, and so on.

Which means clearly we need to feed video of the dancefloor to a vision model and output MIDI tokens!


My biggest problem when working with LLMs is that they don't understand negatives, and they somehow also fail to remember their previous instructions.

For example:

If I tell it to not use X, it will do X.

When I point it out, it fixes it.

Then a few prompts later, it will use X again.

Another issue is the hallucinations. Even when I provided it the entire schema (I did this for a toy app I was working on), it kept making up "columns" that don't exist. My Invoice model has no STATUS column; why do you keep assuming it's there in the code?
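
One workaround is to mechanically check generated code against the real schema before accepting it. A crude token-level sketch; the `SCHEMA` dict and all table/column names here are invented for illustration, and this is deliberately not a SQL parser:

```python
import re

# Hypothetical schema for a toy app; names are made up for illustration.
SCHEMA = {
    "invoices": {"id", "customer_id", "total", "issued_at"},  # no "status"!
}

SQL_KEYWORDS = {"select", "from", "where", "and", "or", "not", "null", "is"}

def undefined_columns(sql, table):
    """Flag identifiers in generated SQL that are neither real columns of
    `table` nor SQL keywords. A blunt token check, but enough to catch
    obviously hallucinated columns before they reach the codebase."""
    words = set(re.findall(r"[a-z_]+", sql.lower()))
    return words - SCHEMA[table] - SQL_KEYWORDS - {table}

undefined_columns("SELECT id, status FROM invoices WHERE status IS NOT NULL",
                  "invoices")   # -> {"status"}
```

Feeding the flagged names back into the prompt ("these columns do not exist") tends to work better than repeating the schema.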

I found them useful for generating the initial version of a new simple feature, but they are not very good at making changes to existing ones.

I've tried many models, Sonnet is the better one at coding, 3.7 at least, I am not impressed with 4.


> I've tried many models, Sonnet is the better one at coding, 3.7 at least, I am not impressed with 4.

If Sonnet 3.7 is the best you've found, then no, you haven't tried many models. At least not lately.

For coding, I'd suggest Gemini 2.5 Pro, o3-mini-high, or Opus 4. I've heard good things about Grok 4 as well, so if you're OK with that whole scene and the guy who runs it, maybe give it a shot.

If you have already done so and still think Sonnet 3.7 is better than any of them, then the most likely explanation is that you got incredibly lucky with Claude and incredibly unlucky with the others. LLMs aren't parrots, but they are definitely stochastic.


> Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.

Are people implementing stuff from start to finish in one go? For me, it's always been iterative. Start from scaffolding, get one thing right, then the next. It's like drawing. You start with a few shapes, then connect them. After that you sketch on top, then do the line art, and then you finish with values (that step is also iterative refinement). With each step, you become more certain of what you want to do, while also investing the minimum possible effort.

So for me coding is more about refactoring. I always type the minimal amount of code to get something to work, which usually means shortcuts that I annotate with a TODO comment. Then I iterate over it, making it more flexible and the code cleaner.


this is how I interact with the coding assistant.

one thing at a time. slowly adding features and fighting against bug regressions, same as when I was writing the code myself.


> What junior devs will do, I'm not so sure

I see it as a worrying extension of a pre-LLM problem: No employer wants to train, they just want to hire employees after someone else trains them.


So I guess that's a good argument for replacing employees with a bespoke LLM for your business: it will never leave after it's trained. And it never asks for a raise. And it doesn't need benefits or carry other human risks.


> I would normally just relax and do something else. Instead, I'm relaxing and coding.

So more work gets to penetrate a part of your life that it formerly wouldn't. What's the value of “productivity gains”, when they don't improve your quality of life?


> So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.

Wish I had your confidence in this. I can easily see how this nullifies my hard-earned experience and basically puts me on the same playing field as a more mid-level or even junior engineer.


Right, I’ve been using it recently for writing a message queue -> database bridge with checkpointing and all kinds of stuff (I work for a timeseries database company).

I saw this as a chance to embrace AI, after a while of exploring I found Claude Code, and ended up with a pretty solid workflow.

But I say this as someone who has worked with distributed systems / data engineering for almost 2 decades, and who spends most of their time reviewing PRs and writing specs anyway.

The trick is to embrace AI on all levels: learn how to use prompts. Learn how to use system prompts. Learn how to use AI to optimize those prompts. Learn how to first write a spec, and use a second AI (an “adversarial critic”) to poke holes in that plan and find incompletenesses. Delegate the implementation to a cheaper model. Learn how to teach the AI to debug problems properly, rather than trying to one-shot fixes in the hope that it fixes things. Etc.
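
The spec-plus-adversarial-critic loop described above is really just orchestration, and can be sketched in a few lines. Everything here is a placeholder assumption: `call_model`, the model names, and the "OK" pass convention are invented for illustration; wire `llm` to whatever API you actually use:

```python
def call_model(model, prompt):
    """Placeholder: wire this up to whatever LLM API you actually use."""
    raise NotImplementedError

def refine_spec(task, rounds=2, author="big-model", critic="critic-model",
                llm=call_model):
    """Spec-first loop: one model drafts a spec, a second model acts as an
    adversarial critic poking holes in it, and the draft is revised until
    the critic opens its reply with "OK" (a convention we invent here)."""
    draft = llm(author, "Write a detailed implementation spec for:\n" + task)
    for _ in range(rounds):
        critique = llm(critic,
                       "Find gaps, ambiguities and failure modes in this spec:\n"
                       + draft)
        if critique.strip().startswith("OK"):
            break                       # critic is satisfied
        draft = llm(author, "Revise the spec to address this critique:\n"
                    + critique + "\n\nSpec:\n" + draft)
    return draft
```

The implementation itself can then be delegated to a cheaper model with the finished spec as its prompt.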

It’s an entirely different way of working.

I think juniors can learn this as well, but they need to work within very well-defined frameworks, and it probably needs to be part of the college curriculum too.


This is exactly what makes me excited as well. It really does replace the tedious parts of coding I’ve done thousands of times at this point.


Have you had the realization that you could never go back to dealing with all the minutia again?

LLMs have changed me. I want to go outside while they are working and I am jealous of all the young engineers that won’t lose the years I did sitting in front of a screen for 12 hours a day while sometimes making no progress on connecting two black boxes.


Serious question: have you considered that dealing with all that minutiae and working through all that pain has made you capable to have the LLM write code?

Those young engineers, in 10 years, won't be able to fix what the LLM gave them, because they have not learned anything about programming.

They will all have learned how to micromanage an LLM instead.


> Those young engineers, in 10 years, won't be able to fix what the LLM gave them,because they have not learned anything about programming.

I have heard a version of this plenty of times, and it was never correct. In the early 90s it was the "electronics" people that were saying "I come from an electronics background, these young'uns will look at a computer and don't know what to do if it breaks". Well, bob, we did, the whole field moved to color coded anti-stupid design, and we figured it out.

Then I heard it about IDEs. Oh, you young people are so spoiled with your IDEs and whatnot, real men code in a text editor.

Then it was about frameworks. BBbbut what if your framework breaks, what do you do then, if you don't know the underlying whatever?

... same old, same old.


Have you also heard about calculators?

Every single finance person uses a calculator. How effective do you think a person in any aspect of finance would be if they had never learned what multiplication is? Would they perform their job adequately if they don't know that `X * Y` is `X repeated Y times`?

IOW, if you gave a finance person (accountant, asset manager, whatever) a non-deterministic calculator for multiplication, would you trust the person's output if they never learned what multiplication is?

This is the situation I am asking about; we aren't talking about whether deterministically automating something that the user already knows how to do is valuable, we're talking about whether non-deterministically generating something that the user is unable to do themselves, even if given all the time in the world, is valuable.

All those examples you give are examples of deterministic automation that the user could inspect for accuracy. I'm asking about a near-future where people managing your money have never learned multiplication because "Multiplication has been abstracted away to a tool that gets it right 90% of the time"
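
The thought experiment can be made concrete: a "calculator" that multiplies correctly only 90% of the time is safe only in the hands of someone who knows enough about multiplication to verify the result. A toy sketch, with all names invented:

```python
import random

def flaky_multiply(x, y, rng, p_correct=0.9):
    """A calculator that multiplies correctly only 90% of the time."""
    result = x * y
    if rng.random() < p_correct:
        return result
    return result + rng.randint(1, 100)   # silently wrong the other 10%

def checked_multiply(x, y, rng):
    """A user who knows what multiplication *is* can verify and retry:
    a correct product must divide back cleanly into its factors (y > 0)."""
    while True:
        r = flaky_multiply(x, y, rng)
        if r % y == 0 and r // y == x:    # cheap inverse check
            return r
```

The point being: the retry loop only exists because the user understands the operation well enough to write the inverse check.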


If I may play the devil's advocate, nothing is deterministic. A neutrino could cause a bit flip in your calculator. More commonly, the lower abstractions we build upon without knowing their innards can have bugs. Even the most popular compilers (say, g++) have bugs, for instance. I am personally incapable of fixing a bug within gcc, despite the tool being a vital requirement of my work.

IMO the dichotomy should not be deterministic/stochastic, but proved/unproved reliable. gcc has been shown reliable, for instance, so I don't need to know whether it was built by deterministic (clever engineers) or stochastic (typewriting monkeys) processes. I'm certain the former are more efficient, but this is ultimately not what makes the tool valuable.

As a bit of an artificial example, there are stochastic processes that can be proved to converge to a desired result (say, stochastic gradient descent, or Monte-Carlo integration), in the same way that deterministic methods can (say, a classic gradient descent or quadrature rules).
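
As a concrete instance of that point, Monte Carlo estimation of pi is a stochastic process with a proved convergence guarantee (the standard error shrinks like 1/sqrt(n)). A minimal sketch:

```python
import random

def monte_carlo_pi(n, rng):
    """Monte Carlo estimate of pi: throw n random points into the unit
    square and count how many land inside the quarter circle. Stochastic,
    yet provably convergent to a known deterministic value."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

Run it with larger n and the estimate tightens around pi, despite every individual sample being random.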

In practical cases, the only proof that matters is empirical. I write (deterministic) mathematical algorithms for a living, yet they very rarely come out correct on first iteration. The fact there is a mathematical proof that a certain algorithm yields certain results lets me arrive at a working program much faster than if I left it to typewriting monkeys, but it is ultimately not what guarantees a valid program. I could just as well, given enough time, let a random text file generator write the programs, and do the same testing I do currently, it would just be very inefficient (an understatement).


> Have you also heard about calculators?

Yup, my mom used to say "you need to be able to do it without a calculator, because in life you won't always have a calculator with you"... Well, guess what mom :)

But on a serious note, what I'm trying to say (perhaps poorly worded) is that this is a typical thing older generations say about younger ones. They'll be lost without x and y. They won't be able to do x because they haven't learned about y. They need to go through the tough stuff we went through, otherwise they'll be spoiled brats.

And that's always been wrong, on many levels. The younger generations always made it work. Just like we did. And just like the ones before us did.

There's this thing that parents often do, trying to prepare their children for the things they think will be relevant, from the parent's perspective. And that often backfires, because uhhh the times are achanging. Or something. You get what I'm trying to say. It's a fallacy to presuppose that you know what's coming, or that somehow an entire generation won't figure things out if they have a shortcut to x and y. They'll be fine. We're talking about millions / billions of people eventually. They'll figure it out.


You didn't even come close to addressing his points about non-deterministic outcomes, aka the crux of the issue...


Junior engineers will be lost if they don't take the time to read the code generated by the LLM and really understand it. This is an objective truth. It has nothing to do with boomer takes.


Funny, that's what I said, as an experienced assembly hacker, when somebody first showed me a C compiler.

People who "take the time to really understand the code" will rapidly be outcompeted by people who don't. You don't like that, I don't like that, but guess what: nobody cares.

I suppose we'll get over it, eventually, just like last time.


LLMs are not compilers. They can't be deterministic. A better comparison is an autocorrect on steroids.

And I don't think there's anything to get over about them. They are useful but people elevate their significance too much over what they actually are.


An unhealthy attachment to determinism will turn out to be a career-limiting hangup, I suspect. You already lack insight into how 100% of the code in your project works, unless you only work on trivial projects. Did you think that state of affairs was going to get better with time? As usual, TDD covers a multitude of sins.

As for "autocorrect," let us know when your "autocorrect" takes gold at the International Math Olympiad, with or without steroids.


Talk is cheap. Give your LLM/agent your badge and let it turn in 100% of your job.


That'd be awesome. Not going to happen this week or this year, but it will.


Enjoy being unemployed then, I guess?


Yeah, there are qualitative differences.

I might offload multiplying some numbers to a calculator, but Kids These Days™ are trying to offload executive function, like "what should I do next" or "is there anything I've forgotten".


I see a version of this every day.

Developers throwing huge amounts of money (in cloud resources) at performance problems that would’ve been prevented if they had some understanding of how their tech stack actually worked.


> In the early 90s it was the "electronics" people that were saying "I come from an electronics background, these young'uns will look at a computer and don't know what to do if it breaks".

...and today, Nvidia ships self-immolating graphics cards because nobody wanted to figure out how to design a safe electric connector.

> Oh, you young people are so spoiled with your IDEs and whatnot, real men code in a text editor.

...and today, a lot of so-called programmers are trapped in AbstractHellFactorySingletonFactories that they cannot and never will understand, because generations of code monkeys have abused IDE assistance to dig themselves deeper into their hole.

And as a user, you'll know, because the software they write is garbage and never works reliably.

> Then it was about frameworks. BBbbut what if your framework breaks, what do you do then, if you don't know the underlying whatever?

Going by software like Teams or Slack: they just ignore it, because consumers can't fight back against the enshittification of increasingly useless software nobody understands.


Losing first principles will have some kind of an unexpected result.

Like, this is how we've always done it.

Finding a better way to learn first principles than sitting in front of a screen for 12 hours a day is important.


> What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.

"That's OK, I found a jetpack."


Chess is probably a good analogue to how the juniors will learn. You will have to learn for the sake of it even though the space is solved.


That's a pretty good take. I was actually looking for a good analogy recently.

I think if I was just starting out learning to program, I would find something fun to build and pick a very correct, typed, and compiled language like Haskell or Purescript or Elm, and have the agent explaining what it's doing and why and go very slow.


Hot take: Junior devs are going to be the ones who "know how to build with AI" better than current seniors.

They are entering the job market with sensibilities for a higher-level of abstraction. They will be the first generation of devs that went through high-school + college building with AI.


Where did they learn sensibility for higher levels of abstraction? AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea. You'll have to learn for yourself, all the way down in the details, that the big picture it chose for you was faulty from the start. Convert some convoluted bash script to run on Windows because that's what the office people run? Get strapped in for the AI PowerShell ride of your life.


How is that different than how any self-taught programmer learns? Dive into a too-big idea, try to make it work and learn from that experience.

Repeat that a few hundred times and you'll have some strong intuitions and sensibilities.


The self-taught programmer's idea was coded by someone who is no smarter than they are. It will never confuse them, because they understand how it was written. They will develop along with the projects they attempt.

The junior dev who has agents write a program for them may not understand the code well enough to really touch it at all. They will make the wrong suggestions to fix problems caused by inexperienced assumptions, and will make the problems worse.

i.e. it's because they're junior and not qualified to manage anybody yet.

The LLMs are being thought of as something to replace juniors, not to assist them. It makes sense to me.


> Dive into a too-big idea, try to make it work and learn from that experience.

Or... just pick up that book, watch a couple of videos on Youtube and avoid all that trying.


> AI is the opposite, it will do what you prompt and never stop to tell you its a terrible idea

That's not true at all, and hasn't been for a while. When using LLMs to tackle an unfamiliar problem, I always start by asking for a comparative review of possible strategies.

In other words, I don't tell it, "Provide a C++ class that implements a 12-layer ABC model that does XYZ," I ask it, "What ML techniques are considered most effective for tasks similar to XYZ?" and drill down from there. I very frequently see answers like, "That's not a good fit for your requirements for reasons 1, 2, and 3. Consider UVW instead." Usually it's good advice.

At the same time I will typically carry on the same conversation with other competing models, and that can really help avoid wasting time on faulty assumptions and terrible ideas.


Do you think that kids growing up now will be better artists than people who spent time learning how to paint because they can prompt an LLM to create a painting for them?

Do you think humanity will be better off because we'll have humans who don't know how to do anything themselves, but they're really good at asking the magical AI to do it for them?

What a sad future we're going to have.


more reasons for humans not to birth more humans


Like the iPad babies and digital natives myth? I don’t think that really went anywhere. So a new generation of… native prompters? Ehhh.


This is the same generation that falls for online scams more than their grandparents do[1].

[1] https://www.vox.com/technology/23882304/gen-z-vs-boomers-sca...


It may be the same generation, but it's probably not the same people.


I think the argument is that growing up with something doesn't necessarily make you good at it. I think it rings especially true with higher level abstractions. The upcoming generation is bad with tech because tech has become more abstract, more of a product and less something to tinker with and learn about. Tech just works now and requires little in assistance from the user, so little is learned.


Yeah, I have a particular rant about this with respect to older generations believing "kids these days know computers." (In this context, probably people under 18.)

The short version is that they mistake confidence for competence, and the younger consumers are more confident poking around because they grew up with superior idiot-proofing. The better results are because they dare to fiddle until it works, not because they know what's wrong.


I think this disregards the costs associated with using AI.

It used to be you could learn to program with a cheap old computer a majority of families can afford. It might have run slower, but you still had all the same tooling that's found on a professional's computer.

To use LLMs for coding, you either have to pay a third party for compute power (and access to models), or you have to provide it yourself (and use freely available ones). Both are (and IMO will remain) expensive.

I'm afraid this builds a moat around programming that will make it less accessible as a discipline. Kids won't just tinker their way into a programming career as they used to, if it takes asking for mom's credit card from minute 0.

As for HS + college providing a CS education using LLMs, spare me. They already don't do that when all it takes is a computer room with free software on it. And I'm not advocating for public funds to be diverted to LLM providers either.


It is well known that team size and speed don't scale linearly.

Many times, adding a new junior to a team makes it slower.

So how does using LLMs as juniors make you more productive?


On one hand you get an insane productivity boost: something that could take days, weeks or months you can now do in a significantly shorter amount of time. But how much are you learning if you are at a junior level and not consciously careful about how you use it? It feels like it can be dangerous without a critical mindset: you eventually rely on it so much that you can't survive without it. Or maybe this is OK? Perhaps the way of programming in the future should be like this; since we have this technology now, why not use it?

There's a mindset where you just want to get the job done: OK, cool, just let the LLM do it for me (it's not perfect at the moment), and I'll stitch everything together and fix the small stuff it gets wrong. That saves a lot of time, and sure, I might learn something in the process as well. Then there's the traditional way of working: you google, look things up on Stack Overflow, read documentation, sit down to work out what you need and understand the problem, code a solution iteratively, and eventually get it right, with a learning experience to show for it. The downside is that this can take a hundred years, or at the very least much longer than using an LLM. And you could argue that if you prompt the LLM in a certain way, it's equivalent to doing all of this, just faster, without taking away from your learning.

For seniors it might be another story. They already have the critical thinking, experience and creativity, built through years of training, so they don't lose as much as a junior does. For them it is closer to treating this as a smarter tool than Google.

Personally, I look at it as now having a smarter, very different tool: if you use it wisely, you can definitely do better than traditional googling and Stack Overflow. It depends on what you are after, and you should adapt to that need. If you just want the job done, then who cares, let the LLM do it; if you want to learn, you can prompt it in a certain way to achieve that. But this way of working requires a conscious effort in how you use it, and an awareness of the downsides of each way of interacting with the LLM. In reality, I think most people don't go through the hoops of "limiting" the LLM to get a better learning experience.

But then, what is a better learning experience? Perhaps you could argue that seeing the solution, or a draft of it, speeds up learning, because you have a quicker starting point to build upon. I don't know. My only gripe is that deep thinking and creativity can take a dip. Back in the day, when you stumbled upon a really difficult problem, you had to sit with it for hours, days, weeks, months until you solved it. I feel there are steps in that process that are important to internalize, which LLMs nowadays let you skip.

What would also be interesting is to compare a senior who got their training prior to LLMs with a senior who gets their training in the new era of programming with AI, and see what differences one might find. I would guess the pre-LLM senior would be far better at coding by hand in general; but in critical thinking and creativity, given that both are good seniors, they maybe shouldn't differ too much. It just depends on how the LLM-era senior interacts with the tools.

Also, I don't like how an LLM can influence your approach to solving something: perhaps you would have thought of a better or different way of solving the problem if you hadn't asked the LLM first. I think this is true to a higher degree for juniors than seniors, due to the gap in experience. When you're senior, you've seen a lot of things already, so you're aware of a lot of ways to solve something, whereas for a junior that "capability" is more limited.


So you are relaxing and the AI is coding? Neat! Way to replace yourself; hope you won't miss your job once it's gone.


What you miss is the constant need to refine and understand the bigger picture. AI makes everyone a lead architect. A non-coder can't do this or will definitely get lost in the weeds eventually.


It doesn't make everyone a lead architect; it just makes everyone think they're a lead architect. What makes people a lead architect is a decade or two of experience in designing software and learning what works and what doesn't.


What makes people a lead architect in my experience is an abnormal amount of arrogance and no capability to admit mistakes.


That just gives the title. To be really successful, they need to let someone else, knowledgeable, actually make the architecture decisions.


Yeah that's the actual senior developers who just ignore everything the architect architects.


Right, but a lead architect can be a lead architect on multiple projects at the same time, and the world doesn't need as many lead architects as it has programmers.

This way of working is relaxing and enjoyable until capitalism discovers that it is, and then you have to do it on five projects simultaneously.


I'm using AI assistants as an interactive search and coding assistant. I'm still driving the development and implementing the code.

What I use it for is:

1. Remembering what something is called -- in my case the Bootstrap pills class -- so I could locate it in the Bootstrap docs. Google search didn't help because I couldn't recall the right name to enter. With the AI, I described what I wanted to do and it gave me the answer.

2. Working with a language/framework that I'm familiar with, but where I don't know the specifics of what I'm trying to do. For example:

- In C#/.NET 8.0 how do I parse a JSON string?

- I have a C# application where I'm using `JsonSerializer.Deserialize` to convert a JSON string to a `record` class. The issue is that the names of the variables are capitalized -- e.g. `record Lorem(int Ipsum)` -- but the fields in the JSON are lowercase -- e.g. `{"ipsum": 123}`. How do I map the JSON fields to record properties?

- In C# how do I convert a `JsonNode` to a `JsonElement`?

3. Understanding specific exceptions and how to solve them.

In each case I'm describing things in general terms, not "here's the code, please fix it" or "write the entire code for me". I'm doing the work of applying the answers to the code I'm working on.
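The lowercase-to-PascalCase mapping in point 2 has a couple of stock answers in System.Text.Json; here's a minimal sketch, using the hypothetical `Lorem`/`ipsum` names from the question above:

```csharp
using System;
using System.Text.Json;

// Hypothetical names from the question: PascalCase record property,
// lowercase field in the JSON payload.
record Lorem(int Ipsum);

class Demo
{
    static void Main()
    {
        // Option 1: match property names case-insensitively.
        var ci = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        Lorem? a = JsonSerializer.Deserialize<Lorem>("{\"ipsum\": 123}", ci);
        Console.WriteLine(a?.Ipsum); // prints 123

        // Option 2: a naming policy, so PascalCase properties map to camelCase JSON.
        var camel = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
        Lorem? b = JsonSerializer.Deserialize<Lorem>("{\"ipsum\": 123}", camel);
        Console.WriteLine(b?.Ipsum); // prints 123
    }
}
```

A third route is annotating individual properties with `[JsonPropertyName("ipsum")]`, which is handier when the JSON names don't follow a consistent convention.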


Why I don't bother with LLMs for the above:

1. I usually just pull up the docs for the CSS framework, give it a quick look over to know what it offers and the nomenclature and then keep it open for all the code examples.

2. I've serialized JSON in enough languages to know the pain points, so what I usually do is locate the module/library responsible for that in that language, then give the docs/code samples a quick look over to know where things are.

3. With nice IDEs, you launch the debugger and you have a nice stack frame to go through. In languages with not so great tooling, you hope for a trace.

It's not that your workflow won't yield results. But I prefer to be able to answer five successive whys about the code I'm working on. With PRs taking hours or days to be merged, it's not like I'm in a hurry.


For 1, I tried looking through the Bootstrap documentation but couldn't find it, because they call it "pills" and not what I was thinking. So I then tried searching Google for it, but that didn't work either.

For 3 -- sure, that can help. But sometimes it's difficult to follow what is going on, especially when it comes from a library/framework you're unfamiliar with, such as AWS.

I've also used it to help with build errors such as "Bar.csproj: Error NU1604 : Warning As Error: Project dependency Foo does not contain an inclusive lower bound. Include a lower bound in the dependency version to ensure consistent restore results." -- That was because it was pinning a fixed version of the module via the "[1.0]" syntax, but my version of NuGet and/or Rider didn't like that. Once I knew that, switching to the range syntax "[1.0, 1.0]" worked. I was able to work that out from the LLM's response after giving it the error message and the specific `<PackageReference>`.
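For reference, the two version spellings in question, as they'd appear in a .csproj (`Foo` and `1.0` being the commenter's placeholders):

```
<ItemGroup>
  <!-- Exact-version shorthand that triggered NU1604 in this setup: -->
  <PackageReference Include="Foo" Version="[1.0]" />
  <!-- Explicit inclusive range that satisfied the lower-bound check: -->
  <PackageReference Include="Foo" Version="[1.0, 1.0]" />
</ItemGroup>
```

Both are meant to pin exactly version 1.0; the bracketed-range form just states the lower and upper bounds explicitly.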


He's still telling the AI what to code. Prompting, i.e. deciding the right thing to build then clearly specifying and communicating it in English, is a skill in itself. People who spend time developing that skill are going to be more employable than people who just devote all their time to coding, the thing at which LLMs are more cost effective.

