I would argue that a good process always has a good self-correction mechanism built in. That way, the work done by a "low quality" software developer (which includes almost all of us at some point in time) is always taken into account by the process.
Right, but if everyone is low quality then there's no one to do that correction.
That may seem a bit hypothetical, but it can easily happen at a company that systematically underpays, which I'm sure many of us don't need to think hard to imagine. Such a company will systematically hire poor developers, because those are the only ones who ever applied.
Replace "hire poor developers" with "use LLM-driven development", and you have the rough outline of a perfect Software Engineering horror movie.
It used to be that the poor performers (dangerous hip-shootin', code-committin' cowpokes) were limited in the amount of code they could produce per unit of time, leaving enough time for others to correct course. Now the cowpokes are producing ridiculous amounts of code that you just can't keep up with.
This is why on every software project I've done in the past 15-odd years, steps were taken to prevent this in an automated and standardized fashion: code reviews, of course, but those are more for functionality. Unit test requirements, integration / end-to-end tests based on acceptance criteria, visual regression tests, linting, type systems, OTAP, CI/CD, an audit log via Git and standardized commit messages, etc etc etc.
My job hasn't significantly changed with AI, as AI-generated code still has to pass all the hurdles I put in place when setting up the project.
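A minimal sketch of what that kind of automated gate can look like. The specific tools (ruff, mypy, pytest) and directory names are illustrative assumptions, not a description of the commenter's actual pipeline; the point is only that nothing merges unless every check passes, regardless of who or what wrote the code.

```python
#!/usr/bin/env python3
"""Illustrative CI gate: fail the build unless every automated check passes.

The tools (ruff, mypy, pytest) and paths below are placeholders, not the
commenter's actual setup; swap in whatever your project enforces.
"""
import subprocess
import sys

CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("type check", ["mypy", "src"]),
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
]


def main() -> int:
    for name, cmd in CHECKS:
        print(f"== {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Human- or AI-written, code that fails a gate never reaches main.
            print(f"Gate failed: {name}")
            return result.returncode
    print("All gates passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```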
The sad truth is that the average dev is average, but it's not polite to say this out loud. This is particularly important at scale: when you are big tech, at some point you hit a wall where, no matter how much you pay, you can't attract any more good devs, simply because all the good devs are already hired. This means that corporate processes must be tailored to the average dev, and exceptional devs can only exist in start-ups (or hermetically closed departments). The side effect is that the whole job market promotes the skill of fitting into a corporate environment over the skill of programming. So as a junior dev, it makes much more sense for me to learn how to promote my visibility during useless meetings than to learn a new technology. And that's how the bar keeps getting lower.
But the average construction worker is also average, and so is the average doctor.
The world cannot run on the „best of the best” - just wrap your head around the fact that the whole economy and all human activity are run by average people doing average stuff.
Learning new technologies wasn't the issue with the Therac. In fact, as someone who has been coding and leading sw engineering teams for the past 28 yrs, I don't like "new technologies". When someone builds an awesome, complicated async state machine out of a large set of brittle components, alarm bells go off and I make it my life's mission to make it as simple as it needs to be.
A lot of the time, that means boring meetings to discuss the simplification.
I can extend the same analogy to all the gen ai bs that’s floating around right now as well.
This only works with enough good developers involved in the process. I've seen how the sausage is made, and code quality is often shockingly low in these applications, just in ways that don't set off the metrics (or they do, but they can bend the process to wave them away). Also, the process often makes it very hard to fix latent problems in the software, so it rarely gets better over time, either.
My takeaway from observing different teams over the years is that talent is, by a huge margin, the most important component. Throw a team of A-performers together and it really doesn't matter what process you make them jump through. This is how a waterfall team got mankind to the Moon with hand-woven core memory, while an agile team 10x the size can't fix the software in a family car.
You conflated, misrepresented and simply ignored so many things in your statement that I really don’t know where to start rebutting it. I’d say at least compare SpaceX to NASA with space exploration but, even then, I doubt you have anywhere near enough knowledge of both programmes to be able to properly analyse, compare and contrast to back up your claim. Hell, do you even know if SpaceX or Tesla are even using an agile methodology for their system development? I know I don’t.
That's not to say talent is unimportant; however, I'd need to see some real examples of high-talent, no-process teams compared to low-talent, high-process teams, then some mixture of the groups, to make a fair statement. Even then, how do you measure talent? I think I'm talented but I wouldn't be surprised to learn others think I'm an imbecile who only knows Python!
> Hell, do you even know if SpaceX or Tesla are even using an agile methodology for their system development?
What I've been saying is methodology is mostly irrelevant, not that waterfall is specifically better than agile. Talent wins over the process but I can see how this idea is controversial.
> I'd need to see some real examples of high-talent, no-process teams compared to low-talent, high-process teams, then some mixture of the groups, to make a fair statement. Even then, how do you measure talent?
Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
Except your only example was nonsensical on the face of it.
> Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
I didn't ask for this, I just asked for sensible examples, either from your experience or from publicly available information.
“This way, the work done by a "low quality" software developer (which includes almost all of us at some point in time) is always taken into account by the process”
That's a horrible take. There is no amount of reviews, guidelines and documentation that can compensate for low-quality devs. You can't throw garbage into the pipeline and then somehow process it into gold.
The process that would make this work would be onerous to create. Do you think you could devise one that lets a low-quality machinist build a high-quality technical part? What would that look like? Quite a lot like machine code, which doesn't really reduce the requirements, does it? It just shifts the onerous requirement somewhere else.
> (this includes almost all of us at some point in time)
I'd say this includes all of us all the time; a good developer never trusts their own work blindly, and spends more time gathering requirements and verifying their and others' work than writing code.
I had quite a ride myself with that topic. For years my opinion was that, as the author here also suggests, I don't want to go with TDD as long as I don't know exactly what I need. Then I switched over and used TDD for everything, with a more rigid (interface) design up front. Nowadays I use TDD from the integration side and only add unit tests later, or case by case when I think it's useful. A really good resource is "Growing Object-Oriented Software, Guided by Tests".
> tdd from the integration side and only add unit tests later
This is where I've landed as well. Unit tests are for locking down the interface, preventing regressions, and solidifying the contract - none of which is appropriate for the early stages of feature development. Integration tests are almost always closer to the actual business requirements and can prove direct value - i.e. only once the integration works do you lock it down with unit tests.
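A minimal pytest sketch of that ordering, purely for illustration; the OrderService and InMemoryPaymentGateway names are invented here so the example is self-contained, not taken from any real codebase.

```python
# Illustrative sketch only: the classes below are stand-ins, defined inline
# so the file runs as-is. The point is the ordering described above -- an
# integration-level test written first against the business requirement,
# and a unit test added later to lock down the settled contract.
import pytest


class InMemoryPaymentGateway:
    """Test double standing in for a real payment provider."""
    def __init__(self):
        self.charges = []

    def charge(self, customer_id: str, amount_cents: int) -> None:
        self.charges.append((customer_id, amount_cents))


class OrderService:
    def __init__(self, payment_gateway):
        self._gateway = payment_gateway

    def place_order(self, customer_id: str, amount_cents: int) -> None:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        self._gateway.charge(customer_id, amount_cents)


def test_placing_an_order_charges_the_customer():
    # Integration-style test, written first: exercises the flow against the
    # acceptance criterion ("placing an order charges the customer once").
    gateway = InMemoryPaymentGateway()
    OrderService(gateway).place_order(customer_id="c-1", amount_cents=1200)
    assert gateway.charges == [("c-1", 1200)]


def test_place_order_rejects_non_positive_amounts():
    # Unit-style test, added later: pins one edge of the now-stable contract
    # so it cannot silently regress.
    with pytest.raises(ValueError):
        OrderService(InMemoryPaymentGateway()).place_order("c-1", 0)
```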
I've also toyed with a more radical idea: don't check your tests into the git repo. Or at least keep the developer tests separate from the automated tests. Think about it: what rule says that the tests used in development should go directly into the CI test suite? One is designed to help you navigate the creative process, the other is designed to prevent regressions. I think we do a disservice by conflating the two scenarios into one "testing" umbrella. During TDD, I need far more flexibility to redefine the shape of the tests (maybe it requires manual setup or expert judgement or ...) and I don't want to be hampered by a (necessarily) rigid CI system. Dev tests vs CI serve two completely different purposes.
For me, dev testing is something that I use directly in the hot feedback loop of writing code. Typically, I'll run it after every change and then manually inspect the output for quality assurance. It could be as simple as refreshing the browser or re-running a CLI tool and spot-checking the output. Importantly, dev tests for me are not fully fleshed out - there are gaps in both the input and output specifications that preclude full automation (yet), which means my judgement is still in the loop.
Not so with CI tests. Input and output are 100% specified and no manual intervention is even possible.
There are some problems where "correct" can never be well defined. Think of any feature that has aesthetic values implied. You can't just unit test the code, brush off your hands and toss garbage over the wall for QA or your customers to pick up!
I use this technique mainly to avoid an over-reliance on automated testing. I've seen far too many painful situations where the unit tests pass but the core functionality is utterly broken. It's like people don't even bother to run the damn program they're writing! Unacceptable and embarrassing - if encouraging ad-hoc tests and QA-up-front helps solve this, it's a huge win IMO.
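One way to get that separation without throwing the dev tests away entirely, sketched here as an assumption rather than anything the comment prescribes: keep them in the repo but mark them so CI never runs them. The marker name and the reliance on the `CI` environment variable are illustrative choices.

```python
# conftest.py -- illustrative sketch: exploratory "dev tests" stay versioned
# alongside the code but are skipped whenever the CI environment variable
# (set by most CI systems) is present. The marker name is an arbitrary choice.
import os

import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "dev_only: exploratory test that needs manual inspection; not run in CI",
    )


def pytest_collection_modifyitems(config, items):
    if not os.environ.get("CI"):
        return  # local run: collect everything, including dev-only tests
    skip_dev = pytest.mark.skip(reason="dev-only test, skipped in CI")
    for item in items:
        if "dev_only" in item.keywords:
            item.add_marker(skip_dev)
```

A dev test then just carries `@pytest.mark.dev_only`, so the CI suite stays fully specified while the exploratory tests keep their flexibility.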
Today, if someone uses LLMs for code generation, they will probably question the generated code and put their own judgement above it. I am curious how fast that will change, especially for juniors. When will they start to question their own judgement and just go with the generated code because it's "more safe"?
At present, when juniors do this at my company, they usually get fired within the month. The onboarding docs now explicitly state that although code review is a joint-responsibility process, you as the submitter are responsible for understanding the code, ensuring it all works, and being aware of the broader scope and consequences. Maybe many companies have placed more responsibility on the reviewer to catch problems in the past?
I would go a step further and not let juniors use LLMs for code generation at all. Your purpose and role as a junior is not only to work but also to learn. When using generated code, you miss a lot of opportunities to do so. Of course you could learn some other methods or stuff from the frameworks you are using, but IMHO that's not that big of an advantage.
LLMs are very good at sounding right. I'm sure the code generated by them is rarely reviewed by junior developers. Even if they did question it, I bet they'd give the LLM the benefit of the doubt. "Well, the computer said this, so it must be right; otherwise it would be a bug, and I bet Anthropic has caught all the bugs..."
Yeah, kind of like that. What would need to be curated are the facts. Who could do it would be the author of the original article. Maybe the author just needs to flag the wrong facts. In my (obviously very well-meaning) view, an author has a big interest in, first, his scientific article being correct - you get a really bad reputation if your science is provably wrong - and, second, in the facts others take from it also being correct. Of course, a highly positive view :)
I love the idea ("Foundational Shared Truth"), but you don't need a blockchain, just a changelog that can be mirrored or consumed like certificate transparency. I suppose that's a blockchain of sorts?
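For what it's worth, the mirrorable changelog can be as primitive as a hash chain; here is a toy sketch (plain Python, deliberately ignoring the Merkle trees and signed tree heads a real transparency log would use) just to show why a mirror can detect rewritten history.

```python
# Toy sketch of an append-only, mirrorable changelog: each entry commits to
# the previous one by hash, so any mirror holding an old copy can detect
# rewritten history. A real transparency log adds Merkle proofs and
# signatures; the entry contents here are invented for illustration.
import hashlib
import json


def append_entry(log: list[dict], payload: dict) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    entry = {"prev": prev_hash, "payload": payload,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    log.append(entry)
    return entry


def verify(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev_hash, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True


log: list[dict] = []
append_entry(log, {"claim": "correction to figure 2", "author": "example"})
assert verify(log)
```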
You might be interested in something like Coq [0], but even with literal software it's very very hard to prove logical correctness, let alone wishy-washy real-world things like "when I mixed these two chemicals they turned greenish which probably suggests X."
Best I can see happening is a way to visualize relationships between research papers so that humans can argue over what it really means. Like a graph of edges where "this paper cites that one and one strongly depends on it being true for its own conclusions", or "this one claims it did/didn't disprove that other one", and retroactive additions like "an outside observer noticed that X and Y are probably either both correct or both incorrect."
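A rough sketch of what such a typed citation graph could look like in code; the paper IDs, relation names, and the "at risk" traversal are all invented for illustration, not drawn from any existing tool.

```python
# Rough sketch of a typed citation graph like the one described above:
# papers are nodes, and each edge carries the kind of relationship claimed.
# All paper IDs and relations below are made up for illustration.
from collections import defaultdict

edges: dict[str, list[tuple[str, str]]] = defaultdict(list)


def relate(src: str, relation: str, dst: str) -> None:
    edges[src].append((relation, dst))


relate("paper-B", "depends_on", "paper-A")             # B's conclusion needs A to hold
relate("paper-C", "disputes", "paper-A")               # C claims it failed to replicate A
relate("paper-D", "stands_or_falls_with", "paper-B")   # an outside observer's note


def downstream_at_risk(paper: str) -> set[str]:
    """Papers whose conclusions are threatened if `paper` turns out wrong."""
    at_risk, stack = set(), [paper]
    while stack:
        current = stack.pop()
        for src, links in edges.items():
            for relation, dst in links:
                if dst == current and relation in {"depends_on", "stands_or_falls_with"}:
                    if src not in at_risk:
                        at_risk.add(src)
                        stack.append(src)
    return at_risk


print(downstream_at_risk("paper-A"))  # e.g. {'paper-B', 'paper-D'}
```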
I kind of like your idea, but in my head I always bounce back to "Is that a problem technology can or should solve?" IMHO the underlying problem is that people don't care that much anymore for each other, especially strangers. And the reason for that is that, in our society, we are not at all dependent on our surroundings. You can be perfectly fine without knowing your neighbors, for example. If you don't have any sugar or salt to cook something, you can go to a nearby store, order it, or just order the cooked meal. You don't go to your neighbor and ask for a little bit of sugar or salt (if you ask for sugar, you'll probably get insulted because of how bad sugar is - little joke :D ). So I guess the only thing that truly works is to build communities that consist of interdependent people. Just my take on it :)
I definitely agree that real-life relationships and communities are the best way to go, but also that our increasingly isolated lives make that difficult, and some people get anxiety around social situations or just find themselves stuck in a rut of work followed by unwinding at home alone.
People increasingly also don't like being dependent on others, so while your take is a valid one, people who don't think it's true would need an alternate solution to that problem.
There are also some people who would like a better social life but are unsure how to get one, or don't have the skills or opportunity to do so.
The success criterion of such an app could even be that users only use it for a certain amount of time, after which the app should have encouraged and helped them to replace app interactions with real-life social interactions.
We did a lot of these assignments and no one assumed that they would be hired just for completing it. It's about how you communicate your intent. I always told the candidates that the goal of the task is 1. to see some code and check that some really basic stuff is on point, and 2. to see that you can argue with someone about their code.
If I have a public portfolio of existing projects on GitHub, couldn't that replace an assignment? Choose one of my projects (or let me choose one), and let's discuss it during the review interview.
>> We did a lot of these assignments and no one assumed that they would be hired just for completing it. It's about how you communicate your intent.
Be upfront that finishing the assignment doesn't guarantee a hire and very likely the very people you want to hire won't show up.
Please note that, as much as you want good people to participate in your processes, most good talent doesn't like to waste its time and effort. How would you feel if someone wasted your time and effort?
I am in Germany, so it's by far not the same situation as in other parts of the world. If I got such an assignment myself and had the feeling that it would help both the company and me to verify whether it's a fit, I would do that 1-to-3-hour task very happily.