Someone who's been programming for three years is still a beginner, even though based on what he wrote here, Kevin is almost certainly a lot better than I was when I'd been programming for only three years.
I do find that a lot of programming stuff that used to be hard is easier for me now that I've been programming for 38 years. But that doesn't mean I spend all my time doing things I can do without thinking, and it certainly doesn't mean I write code that doesn't need refactoring. Maybe my designs are better than they used to be — I think so — but often that's because I developed the design more incrementally through refactoring, rather than less incrementally by planning.
I do avoid a lot of errors by thinking through the consequences of a choice before taking action, in a way that I couldn't always do. But often that choice is specifically the choice to plan out a big design at the start of a project.
Also, though, what counts as "big" has changed for me. What I can hack together in an afternoon now might have taken me a week ten or fifteen years ago. So I can explore alternatives with less risk, in a way.
The author describes a lot of anxiety and guilt about doing things imperfectly. I think that's a big obstacle to improvement — other people can pick up on that and will be reluctant to give you feedback, and feedback from other people is a really fast way to improve. Also, it tends to shunt you into tasks that don't challenge you enough — it pushes you to avoid risk, and pushes other people to not ask you to do things that are at the limit of your abilities. This is precisely the dynamic Atwood was trying to combat by telling his story.
While 3 years isn't much from a whole career perspective, I do believe those may be the three most important years, where you learn the most etc; I'm at about 10 years now, and don't feel like my level has significantly improved since then. I mean sure, I'm more experienced now and know more languages etc etc, but it's not like I'm 10x more productive or smart or better than I was 10 years ago.
Being 10× more productive doesn't always feel like being 10× more productive. It can mean spending two days on a project on which, seven years ago, you would have spent five days planning, two days implementing, and eight days debugging; or you would have spent five days planning, eight days just beginning to implement it the wrong way before someone pointed out that there was a library that already did what you needed, and then two days implementing the actual solution. It can look like half-assing things that need to be half-assed and thoroughly solving things that need to be thoroughly solved, instead of thoroughly solving everything. It can look like writing unit tests for your code that find the stupid bugs right after you write them, when it's easy to fix them, instead of spending a bunch of time debugging the whole system when you do an integration test. Conversely, it can look like having fewer unit tests and therefore less code to change when you need to change something.
In the original "10×" study, some programmers were never able to finish the assignment at all (within the allotted time); probably they either couldn't figure out a workable attack on the problem (like Ron Jeffries on Sudoku: http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...) or they wrote their code as cleverly as possible so that debugging it was beyond their ability. The 0.1× programmers in the study were the ones who almost didn't finish in the time that the researchers gave them. Presumably you can think of problems that you know how to solve now that you didn't know how to solve in any finite time seven years ago; presumably also you've learned how to avoid introducing bugs that you would have introduced then, costing you, again, unbounded time to diagnose.
Or maybe not. Maybe I just learn slowly. I've certainly worked with programmers with only a few years of experience who were better than I was; maybe they'd already hit their performance ceilings. But I doubt it.
Is "10x more productive" even the right way to assess ourselves? I may be writing 1.5 times as much code as I was 2 years ago, but that code per line is much more sophisticated and gets a lot more work done, as I've learned more and more libraries and learned more tips/tricks/shortcuts.
I would say that your description is indeed of someone who's "more productive":
> code [...] is much more sophisticated and gets a lot more work done
> [familiar with] more and more libraries
> I've [...] learned more tips/tricks/shortcuts
I'd even throw something like "code is closer to correct on the first revision" into the productivity column.
I suspect it depends on what you're building. Some problems in computing are drastically harder than others. CRUD-like web applications, for example, are a well-trodden path now, but there are many applications of software that are not as straightforward.
> Also, though, what counts as "big" has changed for me. What I can hack together in an afternoon now might have taken me a week ten or fifteen years ago. So I can explore alternatives with less risk, in a way.
Could you elaborate on that? Maybe with an example of something that would fall into this "hack together in an afternoon" category?
I am still a beginner with less than 5 years of professional experience, but usually there seems to be a rather large overhead to putting together new projects/services and getting everything up and running.
I can imagine being faster to a proof of concept in some environments (like a very involved framework - RoR/Django...), but not in others.
Well, one night in 2007 I wrote a 3-D rendering engine in JS http://canonical.org/~kragen/sw/torus and one Friday night in 2013 I wrote a raytracer in C http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra.... It took me until almost 4 AM, at which point I had reflection, color, and Lambertian shading from a tabular ASCII scene file format; a few days later I added procedural textures, a K-V-pair scene file format, specular highlights, and some more textures.
One night last year I wrote http://canonical.org/~kragen/sw/dev3/usql.py, a toy SQL database supporting queries that join up to two tables. I would have implemented more of SQL, but I had set myself a two-hour time limit, and there isn't a parsing engine in the Python standard library.
Last month I wrote a Mandelbrot set renderer in Python, but it was slow, so I rewrote it in Lua using LuaJIT, which made it extremely fast. That took less than an hour, but I subsequently added some more features to it, so it represents maybe a few hours of work now: https://gitlab.com/kragen/bubbleos/blob/master/yeso/mand.lua
(That one will only run on Linux right now, or MacOS with X11, because it uses my GUI library Yeso, which I haven't ported to Quartz or GDI yet. The others are portable.)
aspmisc, dev3, and bubbleos are Git repos you can clone if you like. You may need to append /.git to the URL.
Those were all purely programming exercises, not drawing on frameworks and libraries and whatnot beyond the very basic stuff that comes with the languages in question (and, in the Mandelbrot case, the Yeso library), but of course I've also learned how to use a bunch of tools; I can spin up a new React project with an Express server and get something working in an hour or two, I can hack together a visualization with D3 in minutes, I can spin up a test server on Bithost in a few minutes, I can import stuff into SQLite or LevelDB in a few minutes, I can write a website scraper using Beautiful Soup and urllib2 in half an hour or so, and so on. So if there's something that's easy to do in SQLite, I don't waste my time doing it the hard way in Python, and vice versa.
I think there's enormous value in investing in tools like Docker to reduce the overhead in spinning up a new environment. There's also a lot to be said for improving low-level skills like typing, text editing, and writing and debugging simple code. It allows you to devote more attention, and more uninterrupted attention, to higher-level tasks like system design. But mostly programming more productively is not about writing code faster, although that sure helps; it's about writing less code.
> But that doesn't mean I spend all my time doing things I can do without thinking
But you do, or at least give the appearance of it:
> Also, though, what counts as "big" has changed for me. What I can hack together in an afternoon now might have taken me a week ten or fifteen years ago. So I can explore alternatives with less risk, in a way.
I'm pretty sure that's the core of the idea. What you're doing in an afternoon looks just like being able to code without thinking to someone who would still take a week on it.
Someone programming for 3 years is not a beginner. There are people programming for 3 years who are experts and will run circles around you and your 38 years. Some people really have the talent and put in the work to get good. In 2019, mentorship is available via books, blogs, MOOCs, YouTube videos, and conferences. What some folks can achieve in 3 years these days is really unbelievable. P.S. I have been hacking around on computers for 25 years and think I'm a pretty good programmer too. But I have been impressed by some people with only 2 or 3 years of experience.
I’m a little skeptical of 3 years making an expert, but I’ll stipulate it’s certainly possible for the reasons you mentioned.
I’m basically an old fart who has learned and forgotten a lot of stuff.
It really depends on what you need. I think the 3 year programmer will struggle when moving out of their experience.
The person with decades of practice has faced and failed at a dozen core CS problems. They’ll probably fail, but they can explain their state at any given moment.
The 3-year person will fail, or fail and self-destruct. It’s bad for everyone involved.
Every once in a while your lottery ticket wins. That has nothing to do with experience. A kid can win and good for them.
I guess my biased opinion is, you get a small advantage with people with our track records.
Newbs can do amazing stuff. Old farts, like us, can also do amazing stuff. You can’t know up front which to pick.
I dunno man, be good to your coworkers. They often end up being amazing.
> The person with decades of practice has faced and failed at a dozen core CS problems. They’ll probably fail, but they can explain their state at any given moment.
This is a pretty critical point in my opinion, but it also depends on the individual’s drive. I personally draw a distinction between programmer/scripter and computer scientist. One knows how to write code to get stuff done; the other knows some much deeper algorithmic and structural concepts that they can use to explain what they are doing and why they are doing it. And in today’s world it is much easier to be a programmer than a computer scientist.
It may be a broad oversimplification, but that’s my general purpose take on it. Currently, I know I’m a programmer/scripter but it works for what I do in my sysadmin and pipeline work. In the future I plan on revisiting some of the more core concepts to edge back into the CS world, but for now I need to do what I can.
I'm confident I've hit an NP-complete problem once. I don't remember the nitty-gritty details, but it had something to do with optimal rendering of boxes on a screen, kinda like bin packing. I've been asked a few times, but this one stands out.
Usually, my spidey sense kicks in, and I inform my manager that what they're asking for is hard, like multiple-people-for-years-with-no-guarantee-of-success hard. Mostly, they cut the feature. Rarely, they say keep poking at it for a while. I've probably messed that up a few times. But at least once, I felt that the problem was 3-SAT. I'm not a doctor. I'm not a moron. The problem seemed really hard, and I think I mapped it to 3-SAT. Perhaps I messed it up, and it wasn't really NP-complete. Either way, it's not like I could look up the answer in a book.
Nobody wants to pay for original research. Very few people should do original research. I'm not a person who should do original research. I think I saved the company a lot of money by cutting the feature.
And this. Your value to the organisation for recognising a problem, arguing for a low-cost solution, and having the reputation to be believed is way larger than even that of a PhD researcher in the area who could have delivered an improved solution.
By the way, packing problems get to roughly 70% of optimal by putting the largest item in first and continuing until the next largest won't fit. (It's like the old-fart description - I cannot quote the source, but the spider sense remembers the shape of the problem.)
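For concreteness, here is a minimal sketch of the greedy heuristic being described (first-fit decreasing); this is my own toy example rather than anything from the parent, and the names are made up:

    def first_fit_decreasing(items, bin_capacity):
        """Greedy bin packing: take items largest-first and drop each one into
        the first bin that still has room, opening a new bin when none does."""
        bins = []  # each bin is a list of item sizes
        for item in sorted(items, reverse=True):
            for b in bins:
                if sum(b) + item <= bin_capacity:
                    b.append(item)
                    break
            else:  # no existing bin had room
                bins.append([item])
        return bins

    # first_fit_decreasing([5, 7, 3, 2, 6, 4], bin_capacity=10)
    # -> [[7, 3], [6, 4], [5, 2]]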
I think it depends on how much time they spend programming and what they program.
1. There's a difference between 3 years of experience 4 hours a day 5 days a week and 9 hours a day 7 days a week. That's 20 hours vs 63 hours per week. Furthermore, the 63-hours-per-week person has all of that experience in the last 3 year context. A 20 hours-per-week person would take 9 years to get the same amount of experience in terms of time. Technology changes much more in 9 years than 3.
Now, 9 hours of actual programming per day is very unrealistic. It's also the case that the 9 year person has much more varied experience due to technology changing, so they end up with different skills.
2. If you work on one project/system then after a while what you often deal with is specific to that system. It might not translate well to general programming or other tasks.
I've done programming for at least 9 years, but most of it has been on and off. I would certainly consider somebody with 5 years experience to be better than me in most circumstances.
but it's a counter to the popular idea that "more work" is what it takes to become better at something. Two people in the same group will find that the one who tries more will get better results, but training for more time won't get them to a better group. Instead, qualitative changes are needed - probably many of them - people to work with, ways to think about work, habits of work, adopting the styles of those who are better, etc.
I think it does apply to programming, but quantitative increases in programming also come from the adoption of new tools, programming paradigms, programming languages etc.
I think you're using "quantitative increase" to mean "measurable improvement in output", but in the article's sense it means simply "more quantity of practice" without changing the kind of practice.
Quantitative would be "I write more lines of code", "I solve more challenge problems", "I write and release more programs", "I code for more minutes each day, more days each week, more weeks in a year", "I suffer and endure more". This might get you further up in your friend group or class ranking, but won't take you to a new level (so claimed).
Qualitative changes would be things you suggest, like "I use different tools", "I approach problem solving in a new way", "I lean towards hard problems instead of retrying easy things", "I work with different people to learn new ideas", "I use languages which let me do more with less code", etc. (so claimed) those can take you to better output, even if your quantity of practice overall decreases.
We always say "practice makes perfect", but "do what winning people do" seems better advice than "do more of what you are doing, when you aren't winning". Phrased like that it's almost tautological - training longer with bad form won't give you good form.
I agree with this. I know quite a few people who not only write code for work, but they spend massive amounts of their spare time building things or contributing to open source projects.
Although it's not guaranteed, it's likely these kind of people are significantly better than their peers.
Programming is no different than any other discipline. You aren't going to be an expert in anything in only 3 years. Expert physician? Won't even be out of residency yet. Expert car mechanic? No way. Expert troll on Hacker News? Maybe.
Expert car mechanic doesn't take 3 yrs. I went from knowing nothing about cars to building my first race car in 2 years in my late teen years and that's without the wealth of knowledge I can find online today. ... and not just jamming a bigger v8 engine, but slamming a turbo on a 4 banger and not blowing up the engine and shaving almost 3 seconds off the stock 1/4 mile times, with limited budget too.
Something I see with most IT folks is that they think they are so special and feel very threatened when others can learn what they can, and faster too. Being an expert in 3 years is possible in most fields, all you need is dedication and deliberate practice.
We are not special; let's get over it. The only crap getting in the way of youngsters today is filtering out the noise, since there's too much crap. But once they sort it out, they will move 10x faster than most of us old timers ever did.
> Being an expert in 3 years is possible in most fields, all you need is dedication and deliberate practice.
Do you have anything other than a personal anecdote to back that up?
> But once they sort it out, they will move 10x faster than most of us old timers ever did.
So instead of 36 months, youngsters today can become experts in most fields in 3.6 months? That sounds preposterous. I assume you think that because of the internet and modern tech enabling much faster learning. But then old timers have access to the same tech and knowledge. So why can't they move just as fast?
I think it is possible to be a React.js or Vue.js expert within 3 years (they've only been around for 5 years), but expert knowledge with these libraries also requires some fundamental knowledge of Javascript and the DOM that I believe can be learned in parallel with these libraries.
I think generally you are correct, but these might be the exceptions to the rule.
You could be a vue expert in only a few months if you already know programming and web development. In my experience vue only adds a few things on top of javascript which are incredibly helpful but not complex to understand.
I’ve been involved with hiring and I do part time work as an examiner for CS students, both those who go straight through their education, but also on diploma degrees where people upskill after having worked for a while. Kind of like a masters but at bachelor level.
I’ve seen one or two rockstar developers in my time, but even they would have had trouble keeping up with the silver foxes I know.
I think young programmers have an easier time picking up X framework because they have more time. That’s not really as valuable as knowing computation though, and I frankly think a lot of the YouTube and MOOCs you praise are to blame for the general lack of CS knowledge among a lot of young programmers. Some of them (the college ones) are great introductory courses, but the majority of MOOCs are amateurs teaching amateurs.
Maybe that works in software because software is in high demand. I mean, I built an RPA process in a few days by Google programming. It certainly works, sure, but I also know that it could have been built a lot smarter and more efficiently by someone who knew how. That’s the thing with software though, you can get by if you deliver something that works ok-ish. At least until you have to work in a field like medical software, where your code is quite literally never allowed to fail. Because someone will die if it does. At that point you’ll want the 25 years of experience, every time, and if that’s true for medical software, I don’t see why it wouldn’t be true everywhere.
I call BS. Of course there are extreme outliers, like savants who are composing sonatas at age 3, but programming is a different animal.
Sure, someone with 3 years experience could easily be better in some narrow way, but any programmer worth anything after 30 years experience has forgotten more than a 3 year coder could have possibly learned if he was literally reading white papers all day long for those 3 years and retaining 100% of it. There just isn't enough time in 3 years to cover the breadth of knowledge needed to be "better" than someone decent who has 30 years experience.
To adopt a standard set of terminology — while fully acknowledging the limitations of the model — consider the Dreyfus model of skill acquisition.
I've met or worked with programmers with 3 years of experience who demonstrated competence, but never proficiency.
Some programmers have a strong natural intuition, and a competent programmer with a strong natural intuition may appear proficient. It's a tricky distinction, but important to recognize (especially for the programmers themselves and anyone mentoring them).
However, I think most people with 3 years of professional experience programming are advanced beginners reaching competence in a few specific areas.
I can say from knowing Kragen that there are very few mortals who could run circles around him after 3 years of programming. Trying to imagine what they might be like, Christopher Strachey comes to mind -- it seems he impressed Alan Turing with his first program: https://history.computer.org/pioneers/strachey.html
While I wouldn't have described myself as a beginner 3 years into my dev career, I certainly wouldn't have described myself as "expert" either - perhaps "advanced beginner" or "competent" would have been more appropriate.
For me, part of being "expert" is having broad experience, and that takes time. I think I would have used "expert" to describe my abilities after somewhere between 5 and 10 years.
3 years of professional experience (and working on mobile applications as side gigs since 12, after their dad taught them how to do simple scripts with PHP at 8)
Interesting view. I'm learning, with about 8 months worth of knowledge and experience, and this is useful.
Did you notice any skills/traits in those people who were "unbelievable" in those 3 years? For instance, did they do "test driven development"? Also, did they keep good log habits?
I think OP has provided good context. But I will try to answer the question you _really_ seem to be asking: unfortunately there is no shortcut.
There are things you can do that will, like the author said, give you faster feedback and thus let you get better _if_ you use that feedback. You might write a lot of code and read a lot of code and internalize good patterns. Eventually you will have enough experience, and the confidence that comes with it, to take on bigger and bigger challenges. I don’t know what happens after that, for I am still in that stage. But I have observed more experienced programmers, and one thing they’re very good at is _really reading_ others’ code thoroughly and being able to spot better ways of doing things (which I imagine comes through experience), and also thinking a little bit into the future rather than simply about the assigned task. E.g., if assigned to create a new system, they won’t just follow the design specs blindly, but will question the design choices rigorously, helping improve the design a lot, and then implement something a tiny bit better than the eventual design.
Maybe I’m making it a bigger deal than it is, but I’ve worked closely with senior engineers and it’s _always_ a fascinating experience. They will question your design and code very very deeply but all of them will be good questions and will help you either improve your design or not add spurious code.
This is very helpful. Your points make me think of the need for immersing in the experience and intently considering not only the 'what' but also the 'why' of actions, as well as alternative approaches to solve problems.
It also makes me think of the importance of stepping back, questioning the scope of "design choices" available, and anticipating things other "than simply the assigned task", possibly for some long-term aim, as you write. This is neat, thanks.
For me the biggest learning experience was building a system for a company (a moderately complex CRUD app, at the end of the day) and maintaining it for 4+ years.
If I had to go back and modify the code, if I didn't understand it straight away then it was usually worth refactoring. It was my own code so I had no one else to blame if it was crap.
That probably has expanded into reading others code and seeing better ways of doing things like you say.
No, they didn't do the modern day ceremonies around programming such as TDD. What they did was code a lot. Code a lot of different things, knew where to draw the line and "finish" up their projects. Maybe out of their many projects, they really took one or two all the way and polished, but most were fast, done and the lessons learned. ... They also stayed deep in the language and didn't rely much on frameworks & libraries.
Here's a sample of things you can do to vary your programming knowledge fast.
Learn multiple languages: C, asm, Forth, Lisp, Prolog, any OOP language
Code a card game: blackjack, cribbage, poker
Code a board game: checkers
Code a puzzle game: Tetris
Code an adventure game (text)
Code your own text editor
Code your own interpreter (BASIC or your own language)
Code a network server & client (not REST; socket programming & threads; see the sketch below)
Code a basic CRUD app
They are deeply curious about a lot of things and soak up as much as they can, that's all I can say.
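For the network server & client item above, here is a minimal sketch of the kind of exercise meant (standard-library Python 3.8+ only; the port number and function names are arbitrary choices of mine, not anything from the parent):

    import socket
    import threading

    def handle(conn):
        """Echo whatever a single client sends back to it."""
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)

    def serve(host="127.0.0.1", port=9000):
        """Accept connections forever, one thread per client."""
        with socket.create_server((host, port)) as srv:
            while True:
                conn, _addr = srv.accept()
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

    def ask(msg, host="127.0.0.1", port=9000):
        """Connect, send one message, and return the echoed reply."""
        with socket.create_connection((host, port)) as s:
            s.sendall(msg.encode())
            return s.recv(1024).decode()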
I like your notes on varying programming languages yet staying deep in one. I think this is turning into a consensus among the folks here who've been kindly offering advice.
And will consider this: "Maybe out of their many projects, they really took one or two all the way and polished, but most were fast, done and the lessons learned. ... They also stayed deep in the language and didn't rely much on frameworks & libraries."
This is also useful: "Code your own text editor", "Code your own interpreter". I find these types of things intrinsically motivating, thank you.
These people have their experience in one field for all these years, therefore they know a lot about it. They also work with a limited number of technologies, which makes them good at them.
So, basically, they are good at certain tasks, so as long as they do these tasks you won't notice their lack of experience in other stuff.
> any skills/traits in those people
There are no patterns, to be honest, otherwise everybody would do it. There are best practices (some of them are debatable, like mentioned TDD) which you can read and try to incorporate, but don't treat them as dogmas.
Solve challenging tasks, reflect on your code, try different stuff, actively talk and discuss solutions with more experienced engineers, and you'll learn (relatively) quickly.
Very useful. Good notes on solving, reflecting, and actively talking.
And that point you write: "They also work with a limited number of technologies, which makes them good at them. So, basically, they are good at certain tasks". An interesting observation, thank you.
I think one term sheds light on such anecdotes: deliberate practice. Plenty of literature and research has pointed out that simply doing more does not bring much improvement; finding out what is missing, practicing it to perfection, and then moving on to the next weakness is the best way to achieve top performance, be it in sport, in music, or in other fields. Take someone who wants to learn chess: simply playing a lot every day will improve his Elo slowly, but it will stall after a while (say at the level of a club player) and he can almost certainly never attain mastery. By contrast, someone who seeks guidance, studies, and practices to fix his weaknesses and perfect his technique will improve vastly more in the same amount of time. Effective practice and coaching is also the only difference between merely very good players and grandmasters. I can’t imagine why it should be different in the field of programming and software engineering. Someone who writes thousands of sloppy games might have learned how to do them faster (and maybe better), but he has still learned nothing more about complex systems or secure systems. Only with intense study and practice can he master such things effectively.
The Talent Code by Daniel Coyle is a good book to start reading about deliberate practice.
Yes, exactly this. If you never really think about your coding seriously, and ponder what you should get better at and then do it, you'll never improve! Simply acknowledging your own mistakes and how you could fix them in the future lets you build up skill much better than just churning out code day in, day out without giving it a second thought.
I've seen both extremes: the programmers who are so serious about the style of the code that nothing really satisfies them (and consequently progress happens at a snail's pace), and then those who'll write the ugliest of hacks just to get the thing working and move on. I think there's a deliberate balance between the two, where you feel you're not wasting time on useless things but actually get something done, which you can then comfortably show to your peers without beginning to blush.
And the only way of getting there is practice and being mindful of what you are doing. Similar to sports, music - whatever. To know where and how you went wrong is key; often you'll need a very good teacher to show you that. Criticizing yourself works too, but often you either become too strict with yourself or too lenient.
Just starting to code without worrying whether this is the "best practice" or not will allow you to get into a flow, which is much better than over-thinking your approach. Because once you start to do it, like a good warm-up, it allows you to see the problem much more clearly as you become aware of the problems when you encounter them. Then, if you have time, you might want to refactor your solution or just move on to the next most important thing. Sometimes it's better to just write awful code to get to that MVP or other important milestone, and only after that do you start to review your code. Experience will tell you when it's the right time to move fast and when it's good to slow down and enforce a particular paradigm on your codebase.
I've been coding intensively for more than half of my life, from a young age - I'm approaching 16 years of experience now. I was writing games when I was still in school, before studying software engineering at university; then I worked for 14 different companies, some startups and some corporations, in about 10 different industries, and I've completed projects in at least 7 different programming languages. I've also always had a side project going during nights and weekends, mostly open source.
I have mixed feelings about the ceramics anecdote because I've met some people who have been coding a long time and producing large quantities of code, but it's not good quality. To write high quality code, you need to enjoy the process and be adaptable. You need to have been exposed to a lot of different kinds of projects and management cultures.
Also, the most frustrating thing is that other people who are not good coders don't recognize straight away who is a good coder. Often, it takes a whole year to prove yourself. However, good coders usually know straight away who a good coder is.
Because of this effect, our industry is currently in a bad state. Many of the big popular tech stacks are mediocre compared to some of the alternatives that are available. There is a lot of misinformation and misplaced hype.
Most people who have the power to make hiring decisions are not sufficiently good at coding to be making those decisions. So good coders tend to find themselves smothered in most work environments...
Can you elaborate on your opinions about frameworks? What are some tech stacks that are mediocre, and why are they mediocre? What makes an alternative better?
Please stop thinking there are 10x programmers - or perhaps stop thinking it means 10x better.
It is simply 10x more valuable. And that depends on the organisation you work for, the state of the code base and so on.
Look at it this way - sports stars are regularly 10x, 100x more valuable to their team management than A.N.Other professional player. Take football (soccer) - Ronaldo is a waaaay better player than I am, easily 100x, but take the newest signing in League 3 or whatever it is - can Ronaldo run 10x as much? Is he 10x as likely to put in a penalty or a free kick? No. It's probably not even a question of twice as likely - it's percentages better.
It's just that those percentages matter when the cup is on the line. I am sure baseball statistics probably show this - the spread from top of the league to bottom is unlikely to be 10x more runs scored.
So more and more it's worth remembering that the value provided to an organisation (and remember that's what you can charge for) is based not on your intrinsic qualities but what you can do for them in their current state.
I don't really agree, since I consider myself a 10x engineer.
I have countless examples of guys struggling for 2 months on some project. They get stuck, ask for help, and I look at it and build it from scratch in a week.
It's not that I'm typing faster; it's more about choosing the right architecture and libraries. You can save insane amounts of time by making the right decisions.
Besides the overall architecture decisions, there are a lot of small day-to-day choices to make. If this value is not valid, do I throw an exception, or silently log something?
In both the small and the big design decisions there is always a tradeoff. Talent and (years of) experience mean that these decisions come from intuition, which can make you a 10x developer.
> I have countless examples of guys struggling for 2 months on some project. They get stuck, ask for help, and I look at it and build it from scratch in a week.
Are you a 10x engineer or are they 0.1x engineers and how do you differentiate the two?
I have countless examples of guys struggling for 2 months on some project. They get stuck, ask for help, and I look at it and build it from scratch in a week.
Have you ever wondered why the engineers don't come and ask your advice before they waste a couple of months?
I'd much rather be known as an approachable and helpful engineer who juniors can ask for advice, even if it means I'll never be known as a 10X engineer.
Some do; they are curious, and I can teach them a lot. At this very moment I have a junior engineer on my team. He is eager to learn and we do a lot of pair programming, which benefits us both (having to explain why you chose some solution is even harder than making the choice 'by intuition').
But the ones who don't ask are more difficult for me. I think some might be shy or unaware that things can go better. The most trouble I have is with the really stubborn ones who don't want to learn and think they know it all.
I try to avoid these engineers as much as possible, by hiring the ones I think are eager and willing to learn.
> How often can a junior approach you before you grow tired of them constantly asking you for help/guidance?
Personally, it would depend on if they're learning from my guidance or not: if they keep coming back with the same problems I'd get frustrated that they (seemingly) weren't attempting to learn or improve. On the other hand, it they usually come to me with new problems they've encountered and gotten stuck on, I don't expect it would bother me nearly as much.
Of course, IRL it'll probably be somewhere in between.
Not the OP, but I give junior engineers a lot of slack. I'm happy to spend a lot of time working with them. And you know, if one keeps asking me lots of questions or looking for guidance, that's fine... maybe I need to do a better job explaining or send them off to look at something else that might be a better teaching aid.
The hardest problem with mentoring juniors is that most companies don't recognize the work put in. Still too many places judge purely on your direct work-in to work-out ratio.
To an extent, it really isn't hard to be a 10x programmer.
Take, for example, the median level 1x programmer. Perhaps a Java programmer working at a large tech corporation on a team of 1000+ in Indonesia or Brazil.
That guy isn't visiting HN. That guy doesn't read tech articles at home. He may join a few programmer groups on Reddit or Facebook. But his main concern is that he gets paid and feeds his family. He doesn't care so much about doing his work well, but he cares that he does it well enough to make a 10% salary raise each year.
He can probably reverse a linked list. But his searches are all O(n^2), and he can't do any better. The company that hires him doesn't really care - they're unaware that things can be better, and as far as they know, they'd rather hire ten $15k guys than a single $150k star programmer.
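(To make the O(n^2) point concrete, a toy illustration of the kind of gap being described - my example, not the parent's: a nested-scan lookup versus building a set first.)

    # Quadratic: scans all of b for every element of a.
    def common_slow(a, b):
        return [x for x in a if x in b]      # O(len(a) * len(b)) when b is a list

    # Roughly linear: one pass to build a set, one pass to test membership.
    def common_fast(a, b):
        b_set = set(b)
        return [x for x in a if x in b_set]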
Is it possible to code 3x faster than that guy? Certainly. Especially when it comes to harder parts of the work.
Is it possible to code 3x better? As in code that's more efficient and doesn't crash as much? Also very likely.
And someone who codes 3x faster also reduces the cost of the project drastically - you can pay a team one month wages instead of three.
Now you put these multipliers together, you get easily more than 10x, quantifiably.
Sports players may try to run 100m a second faster, but star programmers can easily build a project faster and cheaper.
I was going to comment on $15k being pretty far off, but apparently for a mediocre programmer in Brazil it's not that uncommon. Decent ones, though, can get far better salaries from what I've found. I've always thought of starting an outsourcing agency hiring programmers from underpaid countries, but the reality is that the good ones aren't as underpaid as you'd think, and the whole process is incredibly complicated, even if you have substantial ties+citizenship+connections in both the US and the country you're outsourcing to.
Also, if he gets a 10% raise every year, he'll be making 73x as much (21x as much with 3% inflation) at the end of a 45 year career. If he starts out at $15k, he'll end up at $1.1MM/year, $315k in 2019 dollars assuming a hefty 3% inflation. Not half bad for someone who doesn't care about doing his work well and can't google a sorting algorithm!
$15k is the norm where I live. It's usually around 5-10 years experience, not quite "senior". The 10% does cap at some point, lol.
If you do want to make an outsourcing agency, the good ones in a developing country make about $60k, which is enough to live in the top 10%. Companies like Accenture specialize in it, but they're not known for good treatment.
A considerable advantage would be knowing that you shouldn't use a linked list in most scenarios.
The next best would be understanding that there are many different ways to do "linked lists", and it's important to control the choice. Software dependencies have a huge cost.
The next best advantage would be recognizing that if you need to reverse a linked list in practice, you're probably doing it wrong.
Writing a linked list implementation isn't hard. There aren't many things that are hard to implement. What's hard is making good decisions what to implement. No library can help there.
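As an aside, "isn't hard" can be made concrete: a minimal singly linked list plus an iterative reversal fits in a dozen lines. A sketch of my own, not anyone's production code:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse(head):
        """Iteratively reverse a singly linked list; returns the new head."""
        prev = None
        while head is not None:
            head.next, prev, head = prev, head, head.next
        return prev

    # reverse(Node(1, Node(2, Node(3)))) -> the list 3 -> 2 -> 1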
Is Ronaldo 10x as likely to get a hat-trick to send his team forward in Champions League when they needed 3-0 to progress? He's not running 10x as far but he is so much better due to little things, such as being in the right spot at the right time.
Compare this to programming. A programmer who makes good decisions early on the project will likely save his company a lot of money down the line. A few poor decisions here will cost them a lot of money.
Yes! Though Ronaldo is also known for how hard he trains - he is not relying just on his (massive) talent.
Experience gives you exposure to other solutions and can give insights and alternative ways of approaching stuff which as you say can make a huge difference. Sometimes you do just have to grind things out though.
I'm actually arguing that some parts of being a good programmer are things you're born with and that other parts are hard work. I feel some people just understand programming more naturally and if you give that person experience and they work hard then you'll get an amazing programmer at the end.
I could never have been as good as Ronaldo. I could never have even been a professional soccer player, I'm simply not athletic enough.
How on earth is programming not competitive? Coming up with the wrong data model can result in millions of dollars in technical debt, a failure to fully realize a product, or certainly allow a competitor to win out. Bad programmers can structure a product so poorly that it is simply unpleasant to work on, and you won't be able to hire or retain engineers.
I think, more so than in most careers, a single software engineer is capable of a tremendous amount of good or harm.
Nobody cares if you wrote the best code in the world if the business model doesn't work out.
I once did some consulting work for a popular technology startup. I was appalled when I saw the hacks they used to get their stuff working.
But their marketing was perfect and all the devs raved about how performant and nice to work with their product was.
But when you looked at the internal code it was clear that a lot of it was written by clueless programmers who did whatever they wanted to get it somehow working. They used the worst hacks to get around the fact that the original product wasn't really built for the area they pivoted into.
That taught me a valuable lesson: everything I thought was important about building a good tech product is irrelevant. As long as it kinda works, you just need someone to sell the thing.
None of what you just described defines "competitive". A situation is competitive where one's gain is another's loss. There's no such dynamic at play when programming, except maybe career advancement in big organizations. When I write better code today than I did yesterday, you, my teammate, will be happier, not sadder.
All of the things mentioned are ways that bad programming can "allow a competitor to win out". Programming is competitive because markets are competitive.
Now, markets aren't always zero-sum. But they are often competitive.
> When I write better code today than I did yesterday, you, my teammate, will be happier, not sadder.
Yeah but no one's goal should be to overtake the current best programmer in the world in whatever ranking. Your goal should be to excel at solving the problems you're facing.
A spouse is capable of a tremendous amount of good or harm, but that doesn't mean marriage is a competition.
The field for programming jobs is at least highly competitive- and the OSS field is very winner takes all.
Bad analogies suggest different mental models being discussed - what would you suggest is a better analogy - I would be interested in your mental model.
Actually, this might exactly be why the industry pays so well.
Facebook's biggest threat is not dying on its own, but the possibility of future programmers disrupting it. They're fine with paying billions for companies like Instagram and WhatsApp, because of the possibility that these companies would target the same market and hurt their profit margins. They pay very high amounts to poach employees from other large companies as well.
> Three years later, I am still very much the apprentice.
Well, three years is really not a lot when it comes to developing an intuition. Just enough to grasp some basics.
> a writer is someone for whom writing is more difficult than it is for other people
Yeah, I seem to recall Douglas Adams saying that the easier it is to read a text, the harder it was to write it.
At the beginning of my career I was constantly being praised for how fast I work. Well, I did stuff that worked, the effects were quickly visible, everyone was happy.
Even though there were code reviews to weed out the ugliest stuff, I wouldn't want to go back and maintain that software today :P
The character who speaks that line (Polonius) says so in the midst of a rambling and contradictory monologue, though we can certainly give Shakespeare credit for coining it.
Most people talk about KISS, DRY, and YAGNI, but it's uncommon that I meet anyone who actually values replaceability. I don't even mind some repetition of code so long as groups of code aren't tightly-coupled and parts of an application can be easily replaced with rewritten versions.
I had a boss once who was dumbfounded that I actually wanted to refactor code. I guess a lot of people want to write things "perfect" from the very start, but that perfection seems to be a delusion most of the time. The best way to know perfection is to see it in hindsight.
This line of discussion reminds me of a great quote by Sandi Metz, who's pretty well known in the ruby community for harping on the topic. One thing I saw in a talk of hers that really stuck with me: Repetition is preferable to the wrong abstraction.
The first thing that's drilled into a new programmer is DRY. It's easy to understand and it works reasonably well. The next step up seems to be knowing when _not_ to roll stuff up, and how to tell when you're looking at a distinct piece of logic that needs to be reified into its own entity or function.
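A toy illustration of that point (my own hypothetical, not from her talk):

    # Two near-duplicates that are still easy to change independently:
    def invoice_total(prices):
        return sum(prices)

    def quote_total(prices):
        return sum(prices)

    # The "wrong abstraction": one function that sprouts flags as the two
    # call sites drift apart, until every caller has to understand every flag.
    def total(prices, *, is_quote=False, apply_tax=False, tax_rate=0.0):
        subtotal = sum(prices)
        if apply_tax and not is_quote:
            subtotal *= 1 + tax_rate
        return subtotal

The duplicated functions are trivially deletable or rewritable later; the flag-laden version tends to become the thing everyone is afraid to touch.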
I usually state it as "Optimise Code for Deletion" - the best code is code that can be removed completely because that means it can be rewritten cleanly.
Indeed. I love it when the amount of money I make is tied to the quality of my code somehow. Easy to modify, easy to repurpose, easy to replace. These often make it possible to serve customers better -> more $$$
It's why I can't really take a regular job. There is no relationship between the quality of my work and what I get paid.
In my experience, even programmers are poor judges of good quality of code. Usually the criteria they are using to evaluate quality is: "Can I do what I want to do quickly?" This tends to boil down pretty quickly to "Is the code similar to code I've worked on recently?" Understanding the long term consequences of your actions is pretty tough and the ground keeps shifting under your feet. In that environment, having someone else evaluate your performance is really difficult. A manager who isn't actually working on the same code you are working on has absolutely no chance.
Lately, however, I've been trying to shift my approach to design when doing paid work: I try to make things that seem like they should be easy, easy. This is quite challenging in itself, but it's subtly different than trying to write high-quality code. My goal is not so much highest possible throughput, or even programming ease. It's to make the process of programming less surprising to the paying customer. Of course, most paying customers have expectations far above what I could possibly achieve, but I see this as allowing myself room to improve.
I believe that if you are able to consistently achieve a result of "projects with this person tend to have fewer problems than projects without this person", it will translate into more $$$.
Yes, I'm always extremely wary of programmers who claim they advance so fast in the art that code they wrote in the last year or two is "garbage". It's more likely that they are just no longer familiar with that old code.
Yeah, being able to recognise the garbage I wrote yesterday is an increase in learning. It doesn't necessarily imply that I won't write different garbage today ;-)
> I believe that if you are able to consistently achieve a result of "projects with this person tend to have fewer problems than projects without this person", it will translate into more $$$.
What nonregular job do you do? What kind of code do you deal with, for what kinds of customers? If you're a freelancer or consultant, how often do you end up revisiting code that you write a year or more later?
It's very easy: most coders are undisciplined hackers, not engineers. Unfortunately the prevalent macho coder culture (agile, and the rest of that crap along the lines of "move fast break things") positively encourages hacking away without much planning, design, or forethought.
Real engineers spend most of their time learning, thinking, designing, and planning. Coding for them is mostly exercise for fingers, something which needs to be done but ultimately providing no challenge. They learned not only from textbooks but also from their own mistakes, and know what to watch for and where to double-check themselves.
The outcomes are strikingly different: code produced by real engineers usually simply works. No need to babysit it in production. It also solves the real problem rather than "improving" on something which was adequate in the first place (face it: most new software replacing the older one is worse - more bloat, more bugs, harder to use). The real engineer understands that complexity is THE enemy, and breeding (or dragging in) unnecessary complexity is a hallmark of a freaking amateur.
Oh, and academia doesn't teach engineering. Your C.S. degree means shit. Old codgers who remember punching cards and incantations like //GO.SYSIN DD * may be tired of learning the shiny new toys and aren't up to the speed on the latest jargon, but over the years of wrangling code they acquired wisdom, and you'd be very well advised to listen to them.
I think labelling a development methodology as macho or as not-engineering is disingenuous. Any methodology practiced without an engineering mindset looks like that. Agile methodologies practiced by engineers look like good engineering.
I've had this conversation a million times. I think when you're learning and working on your own to build a skill, do it fast and do it over and over. When you're working on a shared code base, you need to be more cognizant of your actions, of maintaining the established styles and conventions. I've known very few 10x or 100x programmers, and am not one myself. But I have had to deal with people who sacrificed quality for speed, and every time a code review for those kinds of people comes my way, I know I'm in for a ride that will take me away from the work I need to be doing.
> Three years later, I am still very much the apprentice.
I used to think I’d get to the point where I could just sit down and breathe out perfect code, but that doesn’t happen. As I’ve thought about the reasons why, I came up with the following reasons:
1. Writing code is an act of inventing. If what I’m trying to build already existed, I could just go buy it and save myself a lot of time and money. It doesn’t exist though. I’m being paid to create something new and unique. This requires thought, trial and error, and multiple iterations to get right.
2. The software development landscape is not a static one. I used to build commercial buildings with my dad. Once I learned that studs should be set at 16” on center, how to measure and cut complex angles, and how to finish cement, I never had to learn those things again. They were pretty much static and I got to where I could breathe out a plumbed wall without thinking about it. In the software industry, developers are constantly having to learn new things. New languages become popular, companies jump on things like Kubernetes because everyone else is, the JavaScript community cranks out new libraries and frameworks on almost a daily basis. While the fundamentals of recursion, looping, and conditionals remain the same, the syntax, idioms, and best practices are constantly shifting. After 30 years of programming, I don’t think I’ll ever stop being an apprentice in at least some aspects of my career.
I do not believe the ceramics story. It is either a nice fabrication to prove the point, or what that particular teacher valued was natural expression rather than perfection. Applying that principle to photography would mean that every one of us is now an artist because we've made thousands of vacation photos, and that photographers who used just film cameras are bound to produce lower-quality work. I've also seen many startups that generated a lot of low-quality code that became tech debt super fast, and all it needed was someone stopping the line and rethinking the whole thing.
I suspect there may be better ways to practice, rather than just doing it more? Some people put effort into memorization of commonly used idioms.
Here's another dubious analogy: when learning music, you do need to practice, but playing a song all the way through a bunch of times is a rather inefficient way to practice it.
Another way of doing it more is not just doing the same or similar thing again and again, but writing/reading a large amount of very different programs. One day you do a web app, another day you write a kernel driver, the third day you write a scraping app, the fourth day you write a tunnel driver to get networking over USB, the fifth day you reverse engineer some random piece of HW you have lying around and write alternative firmware for it to teach it new things, and so on. (It may not be days :))
Each of these things will branch out into different areas. You may need to write a text parser. You may need to design a USB protocol to relay packets. You may need to learn a processor architecture, calling convention, assembler mnemonics.
Over time you'll gain a lot of knowledge and you may be able to cross-apply plenty of it across domain boundaries.
I believe it’s almost identical to other fields: review, revision, and reflection are the best way to learn. If someone wants to improve his code quality, he should continuously review his code, ideally together with a “better” programmer. After finding the weak spots of his code, he should try to improve them by himself. To do so, he needs to learn from good examples, exploring the reasons behind them through the computing literature and experienced coders. After achieving a better version, he should reflect on his practice: how he might have come to the solution himself in the first place, and why he chose the first solution. Very soon he will start to recognize certain patterns.
Regarding technical expertise, he should find where the challenge lies and start studying and practicing to make the knowledge stick. He should go out of his comfort zone, challenge his ability, and make mistakes. That is the only way to mastery.
For example (personal mnemonic to cover my own blind spots) when designing my feature, consider:
* Flighting: especially for protocol changes, will this change cause clients to be unable to talk to services?
* Risk: if this change breaks its service, what else breaks? What is the recovery path?
* Security: does this unintentionally relax or circumvent existing security boundaries? How much damage can an abuse of any code that writes/modifies do?
This is not a complete list on purpose. I use it to shore up the things I tend to forget to think about.
I'm surprised the author didn't mention learning about programming. Programming did come out of the field of computer science, after all.
Not that I think one should never do any hands-on practice, but if you spend a large chunk of time learning the concepts of the different mathematical fields related to programming, as well as the teachings of other programmers, you'll become a much better programmer than you ever could otherwise.
I really disagree with this. There's a point on most projects where you've learned about all you're going to learn from it. Maybe it's still growing, but it might not be teaching you anything new. The best way to get good is diversity of experience, and the best way to get diversity of experience is to do a lot of different things. The more diversity you can pack into a smaller time space (without just cramming and rushing things), the faster you'll get good.
If you've only been programming 3 years, you're definitely going to make a lot of mistakes. That's good! That means you're pushing your boundaries and learning. Peer review feedback isn't a mark of shame, and being attached to your code is a _bad_ thing. At the end of the day, coders vastly overvalue code beauty and aesthetics, and overestimate how long their code is really going to live. (I've been a professional for about 11 years, and while I take pride in my work, a lot of the companies I worked for either folded, pivoted to a new product, replaced some of the things I wrote with open source solutions after the problem space had become less novel and a proprietary solution didn't make sense anymore, or had a billion other reasons why the code didn't need to live anymore. I'm not saying that's an excuse to write shabby code, but a lot of the time "it works, it solves the problem at hand, and the code isn't a disaster" is when you should stop working on it.)
I think coding is a lot more like creative writing than it is like engineering, and if you want to get good at creative writing you make a point of writing a lot of stories, even though a lot of them won't be good. Or if you're composing music, you write a lot of music; you don't focus on one piece forever. In anything where you make things, the more you make, the quicker you get better at it.
I think we’re conflating learning and producing when we discuss quantity and quality. During learning time, quantity is what you want to focus on. During work mode, you want to slow down and think carefully. Only when you are very sure about the selected solution should you start going quickly. Applying the story of James Joyce in a broad stroke might be misguided.
If I have to pick, I’d err on the side of doing more rather than trying to slow down and write perfect code. When you’re junior, your definition of “perfect” code might be very different from others’. You might be reinforcing bad habits rather than learning. I’ve seen junior SWEs insist on spending hours eliminating a couple of lines of duplicated code by pulling them out into functions to achieve their perfect code, while at the same time neglecting basic coding hygiene, putting defensive nullptr checks everywhere, and not designing for readability and testing.
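As a hedged sketch of that last point (in Python rather than C++, with invented names): the defensive-checks-everywhere style looks like the first function below, whereas validating once at the boundary keeps the core readable and testable.

    # Hypothetical illustration: defensive null checks sprinkled everywhere...
    def total_price_defensive(order):
        if order is None:
            return 0
        items = order.get("items")
        if items is None:
            return 0
        total = 0
        for item in items:
            if item is None:
                continue
            price = item.get("price")
            if price is None:
                continue
            total += price
        return total

    # ...versus validating once at the boundary and keeping the core simple.
    def total_price(order):
        if not order or "items" not in order:
            raise ValueError("order must contain an 'items' list")
        return sum(item["price"] for item in order["items"])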
The author has another blog post about learning to play the violin and how it taught him the importance of deliberate practice which I feel is probably what he is also trying to get across in this post.
I am experiencing something similar as I continue to study the classical guitar.
I agree that deliberate practice, over just coding and coding, really does matter. In my day job I will write many lines of code doing the same string processing etc. (I work in Data Science), but when I am trying to learn I really want to think about whether I am doing it in a Pythonic way, how I might make the code more reusable, and so on.
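For instance (a toy example of my own, not from the comment): the kind of cleanup that tends to get copy-pasted across Data Science scripts can be pulled into one small reusable helper and applied to a whole column at once.

    # Hypothetical example of making repetitive string processing reusable.
    import re

    def normalize_text(value):
        """Lowercase, trim, and collapse internal whitespace."""
        return re.sub(r"\s+", " ", value.strip().lower())

    def normalize_column(values):
        """Apply the same cleanup to an entire column of strings."""
        return [normalize_text(v) for v in values]

    print(normalize_column(["  Hello   World ", "FOO\tbar"]))
    # ['hello world', 'foo bar']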
I spent my first 4 years after the university at a company with very little quality control. We had to talk the bosses into having code reviews. When we began having code reviews my older colleagues never complained about anything - everything went through.
That truly was quantity over quality. Oftentimes I had to wade through piece-of-shit code that really made my soul hurt. Really. Bad.
But in hindsight that was good. It's good to have spent 4 years ONLY writing code 8 hours straight 5 days a week.
But I never, ever, want to go back to anything like that.
This mirrors my experience. I spent the first two years of my career working somewhere where pretty much everything went through.
There were some projects in the company led by developers that were some of the best I ever worked with. They upheld very high standards and the code was some of the best I've seen. Most projects were a total mess and nobody asked questions or reviewed code.
I worked on both types of projects. I think it really helped me. On some projects I just wrote code for hours and hours on end with little regard for quality, but later, on other projects within the company, I was exposed to people and code that were significantly better.
No, he had written himself into a corner at the end of "A Dance with Dragons", and then waited for the TV show screenwriters to help him figure out how to proceed from there - that's the most probable explanation I can come up with (hinted at by some statements from his interviews).
I think HBO just made it way easier. Once I heard the series would be a show, I knew we'd never get any more books because the lull between A Feast for Crows and Dance with Dragons was something like 5 years. At that rate, he'd need another 15 to be done while gallivanting on other fantasy titles/collections/anthologies/wtf and endorsing everything under the sun once the first three books gave him clout.
I'm definitely a little bitter about it, but I've accepted it. Parris needs the royalties.
/salt: Somehow, within that time, Robin Hobb managed to spit out a trilogy seemingly once a quarter. Thankfully, Steven Erikson came along and wrote The Malazan Book of the Fallen, which, if you like books, honestly puts ASoIaF to shame on numerous levels (and has the benefit of being done).
For me the best analogy is a carpenter tightening screws. When you start out solving a particular problem, part of the scope and required functionality is never 100% clear. So you start out with an implementation which favors flexibility over things such as DRY or performance. As you learn more about the problem you start to optimize.
One of the biggest mistakes (the biggest one is probably not writing tests) I see junior engineers make is trying to optimize before fully understanding the required functionality.
I'm surprised the author's takeaway from the Joyce quote was disillusionment. He's actually quite close to the secret sauce. I don't think writing more code and ending up with less code are at odds. In fact, I think you need to write more code to end up with less. You write more code in order to understand the problem space. Once you understand the problem space, you can then refactor everything that you wrote into more concise code that more accurately reflects the problem.
James Joyce wrote Ulysses at the rate of a hundred words per day if you only consider the finished product. However, I doubt every word that Joyce wrote ended up in Ulysses. I'm sure he did quite a lot of cutting and rewriting.
The takeaway for me from the two quoted stories was that quality and quantity are not at odds, but instead are the yin and yang of productivity. They reinforce each other. The more things you produce, the more patterns you are exposed to, which in turn leads to higher quality since experience leads to efficiency, giving you more time to get things right.
I think when you are starting out with some new concept that you are learning as you go, you just need to do your best at throwing stuff at the wall to get it to work, refactoring a lot, and grinding away. During this time you are building the "big picture" of what it is.
Once you get a good enough big picture, you will probably start to realize where and how to optimize the data and create better methods. But even there it never stops: you now know the system, and can figure out what you need from the technology to construct a better platform.
The databases I've worked with for over 20 years have evolved as I have added more functionality; I have been able to identify where the priorities were as well as how to make things function more dynamically. I didn't really get the big picture for a decade, as they were still doing part of it analog and only requested (revealed to me) more functionality as the system grew more capable.
You may think you know it all now, but I would think there's still a lot more road to be covered after only three years.
This was my first thought. I think beginners take better to volume but eventually we all hit a wall that takes more deliberate practice and deep reflection. Even then some concepts will be out of reach or take an exceptional teacher to break down. That doesn't even touch on the limits of individual abilities which we all face at some point.
As one of the million people who tried to read Ulysses and failed, I'm not sure I'd hold up Joyce as the example of what programmers should aspire to. There's plenty of popular and even good creative works which were written quickly or even under external time pressure.
This article is good, but these conversations here on HN always bug me.
Yes—write a ton of code.[0] Just do it. Make your favourite couple languages extensions of yourself.
Yes—think through your code really clearly. Spend a day on your migration or other data-structure altering changes. They're going to echo throughout your whole application and, ultimately, organization.
[0] The closer you get to the API, the faster you should code. Not the interface definition itself, that should be thought through, but the code that responds to a call to the interface. The serializer, the part of the controller that calls the render, these parts are easy. Just roll them out. The models and migrations are much more important. Spend time there. It's not an all or nothing thing.
I've been facing this in the past year after I started doubling my consulting rates. A part of me wanted to do 2x better work as well, and that just ended up in less work getting done, and lower-quality work at that.
Taking my sweet time is definitely not productive either.
I find the balance is to treat it like sketching. Instead of trying to "print" out well-designed code from scratch, top to bottom, it's better to sketch out the main "lines" of it. That involves a lot of erasing of past lines and old code, or even writing redundant code at times. You definitely need a lot of scaffolding and placeholders, especially early on.
This is why programming (as a profession) is engineering and is different from computer science (as it is today -- when I was a student 40 years ago it was still turning into a "science"). Engineering as a discipline is at its heart balancing tradeoffs of factors like time, cost, weight, strength, efficiency, etc. There is no single pole on any of these axes that can dominate, even though for illustrative purposes it's generally useful to pretend so when writing an essay.
(Also three years is a really short time, though the author seems to be using that time well in order to learn and meta-learn).
> “Refactoring code” would be something left to the apprentice, not something that I, the master who has churned out enough ceramic pots, would be bothered with.
The master potter does not churn out pots that need to be fixed. He focuses on quantity to get to quality faster, that is the whole point of that story in my opinion.
Also, "refactoring code" should not be seen as such a separate activity that it can (or should) be done by other people.
The point, as it applies to individuals, extends to projects as well:
> Programming system builders have also been exposed to this lesson, but it seems to have not yet been learned. Project after project designs a set of algorithms and then plunges into construction of customer-deliverable software on a schedule that demands delivery of the first thing built.
People used to heavily criticise Minecraft from a technical point of view. Their criticisms were often entirely reasonable - it was at one point a very inefficient and unstable piece of software. However, it's now the second best-selling game of all time. Sometimes it's better to churn out a release and fix the problems later than let quality worries block you from achieving anything.
Interesting. I've "evolved" in exactly the opposite way. I used to go for quality all the time. Nowadays I'm pretty confident getting things done swiftly (and thus enabling more code/design iterations) works better for me in both the short and the long term.
That being said, I definitely don't pretend to be the Thomas Mann of software engineering.
Doing it more can provide better results only when you are allowed to throw away the previous iterations and use the latest one.
This is not the case in industry: we cannot change everything in a product at each iteration. This is why at least a little care and architecture are needed before diving in, and why throwing things away must be done with care.
This is why there are so many vulnerable products in the market and engineers are tired of fixing them.
People do not think before they start and they are confident enough to say that they have learned from their faults when they are actually too tired to complete the fixing process.
One prolific programmer who comes to mind from this article is Nikolay Kim, author of the Actix projects in Rust, aiohttp in Python, and many other open source projects. The Actix Project has evolved so much over a short period of time. The author learns by doing, with haste.
Part of me feels like bootcamps are effective for some because you will inevitably start getting it when you do it for 15+ hours a day every day for an extended period of time (which is more than you’d likely do on the job or in a more theoretical CS degree)
Write more, care about your craft in general terms. Then it'll get better. It's a skill, skills get better with more practice. Premature optimization is the root of all evil.
People who want to write "beautiful code" are not role models, don't listen to them. They are vain and not productive. Listen to the people who finish correct programs on time, or mostly working prototypes in hours or days.
How to recognise the former group: they obsess about coding and other standards and processes, plan and discuss too long how to implement something (plan what to implement, do it, then improve it when it's done; inspiration comes from working on something, not from thinking about it) and never even meet their own estimates for how long it'll take them to deliver. They will tell you they are taking so long because they haven't decided yet how to best implement something, but they don't even have a straightforward implementation (there's always one). If they were truly concerned with "the best way", they'd have several solutions implemented already, together with metrics and benchmarks. They haven't, because they're vain impostors and not capable programmers.
I despise this kind of mentality, which tries to demonize someone's passion for actually writing beautiful code. This kind of mentality takes "the human factor" out of the working environment, turning IT jobs into delivery-driven factories. I, for one, need to enjoy my work to actually be productive. And don't get me wrong: I understand that product people need to deliver products and enjoy doing their diagrams and presentations, but I am an engineer, I like myself and my life, and as a result I just want to do, and be surrounded by, beautiful things. So I honestly don't give a shit if some product is not delivered on time because the product managers included only business requirements and ignored my engineering team's requirement that we be able to sign our names under the code we write and be surrounded by beautiful things in our work life. Sorry, but engineers are part of the team, part of the product; if you ignored that at the beginning, it is not our fault.
Also, the ability to write beautiful code implies the ability to deliver code much more than vice versa, because there is a good chance that someone who is actually capable of writing beautiful code got to that level by also delivering stuff to production in the past.
So yes, in my world, someone who writes beautiful code (meaning correct, performant, maintainable, understandable, clever, and so on) is a pretty good candidate to be a role model.
> meaning - correct, performant, maintainable, understandable, clever and so on
The only little nit-pick I have here is the use of the word "clever". Maybe you and I define clever differently in a software engineering context, but to me, "clever" is a dirty word in programming.
For me, "clever" means undefined behavior, one-off hack, difficult to parse, shortcut, etc. The difference between "clever" code and "bad" code is that "clever" code is written by someone with a lot of knowledge and experience. Their knowledge and experience has allowed them to work with the undefined/undocumented behavior of languages, libs, etc in order to come up with a solution that has the least lines of code/uses the least memory, etc. Not to say that those things are unworthy metrics, but "clever" code doesn't seem to consider maintainability or stability.
I think a better descriptor to strive for is "elegant", defined in my opinion by code that is beautiful in its own simplicity, succinctness, reliability, and correctness.
For me, clever is when someone facing some problem comes up with a clever solution: something that, for example, reduces complexity from O(n^2) to O(n) thanks to some smart idea. But I agree that elegant is another quality which can be added to the list :)
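A tiny made-up example of that kind of win (not from the comment above): checking a list for duplicates by comparing every pair is O(n^2), while remembering what you've already seen in a set makes it O(n).

    # Hypothetical example of an O(n^2) -> O(n) rewrite.
    def has_duplicate_quadratic(values):
        # Compares every pair: O(n^2) time.
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if values[i] == values[j]:
                    return True
        return False

    def has_duplicate_linear(values):
        # Single pass with a set: O(n) time, O(n) extra memory.
        seen = set()
        for v in values:
            if v in seen:
                return True
            seen.add(v)
        return False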
IME it's you (but /Pace/, no criticism intended, it may be that my area of databases + business dev allows this more easily than yours).
Part of it is using the tools others aren't aware of, so simple things become simple (a counterexample on codinghorror: someone not knowing that XML has libraries to parse it, and starting to parse it manually with regexps. From my own experience taking over a web-scraping project, I used XSLT where they had previously used regexps). I have plenty of other examples of people doing things the hard way because they didn't read the docs.
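As a toy illustration of the same point (my own snippet, not the codinghorror example): the standard library already knows how to parse XML, so there is rarely a reason to reach for regexps.

    # Toy example of using an XML library instead of regexps.
    import xml.etree.ElementTree as ET

    doc = "<catalog><book id='1'><title>Ulysses</title></book></catalog>"
    root = ET.fromstring(doc)
    for book in root.findall("book"):
        print(book.get("id"), book.findtext("title"))
    # prints: 1 Ulysses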
Part of understandability is recognising a simple solution exists instead of a tangled ball. The simple SQL solutions are often most efficient, with tweaking, and the most understandable (which does NOT get you out of writing comments, BTW!)
Performant? The easiest code to optimise is that which is well written and nominally 'less than efficient'. This separation of layers allows me to put in new layers easily as I can see what's going on. Example: for an SQL + pascal product I got a minimum 10X speedup on the GUI, which really made a huge difference to the users, by sliding in a 3rd layer between 2 existing layers. It was simple and quick to do (3 days).
Other example: if someone had thought for a moment about creative use of SQL indexes they'd not have written another complex - and slooooooow - feature in the product I just mentioned.
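A minimal sketch of that idea (invented table and column names; SQLite via Python just to keep it self-contained): adding an index lets the database do the lookup work instead of another hand-rolled, slow feature.

    # Hypothetical sketch: letting an index do the work.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                    [("acme", 10.0), ("acme", 25.0), ("globex", 5.0)])

    # Without this index, every lookup by customer scans the whole table.
    con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

    plan = con.execute(
        "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer = ?",
        ("acme",),
    ).fetchall()
    print(plan)  # the plan should mention idx_orders_customer rather than a full scan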
So IME understandable = clever = performant = maintainable surprisingly often.
(disclaimer: I'm no coding god, I make plenty of mistakes too).
Yeah, it was more a comment on the natural compromises that come about from trying to be all of the best things in your code. Perfect being the enemy of good, etc.
I agree entirely that they're all things that contribute to "good".
> People who want to write "beautiful code" are not role models, don't listen to them. They are vain and not productive.
Your thoughts seem to be very subjective. I hope you don't write your code as naively as you wrote this comment.
I have been coding for over 30 years now. It has always been my desire to write beautiful code that reads well, scales well, has no bugs, and performs well. Does this make me a vain impostor and not capable? At this very moment I am working on a total rewrite of an unmanageable codebase that was written by a very 'productive' guy.
Any paradigm can be formalized to the point of perversion. I think that's what the OP meant when they said that the hallmark of that group is how they obsess about standards and processes. The existence of cowboy-style code such as the project you're currently dealing with does not necessarily mean that the "process-over-results" people are in the right. There are many factors to consider, including the structure of the organization you're writing the code for. I don't think it's naive advice to suggest that people spend less time listening to vain role models and instead actively work on their own skills in order to make better judgements in the future, which brings us back to the point put forward in the article: the way to mastery is through practice.
I think you and the person you're replying to are closer in opinion about what it means to write good, professional code than your comments are letting on.
How I take the article, and what I think most in this entire comment thread agree with, are the following principles:
[1] code quality IS important. I define 'code quality' here as how many bugs are introduced to do a given feature, how resilient this code base is to requests for additional features/changes, and how easy it is for new hires to grok it / for you to grok it after a few months have passed working on something else and you've lost familiarity.
[2] Whilst it is important, it is not the only concern. In fact, a shipping product is MORE important. That's not to say that code quality isn't important. Any project that has deplorable code quality even if it ships is in trouble.
[3] So far the most effective strategy for producing high-quality code is experience (that's subjective; my opinion. But I bet you share it). The best way to get experience is to write lots of code. In fact, writing code of LOW quality may help even MORE: the 8th feature request comes in, it initially sounds like a job of a few hours, it turns into a week-long exercise in frustration with another week or two of chasing bugs stapled onto the end of that process - and maybe you learn something about where you went wrong in the code that powered the initial shipping product. This sounds like a much better way to learn these lessons than reading a bunch of blogs and listening to a bunch of presentations theorising about what 'beautiful code' means.
[4] extremes are bad. This more or less is already concluded by #1-#3, though. However, the 'beauty' extremists tend to present, blog, and in general act like they hold all the answers, more than the cowboy extremists do. That's entirely subjective opinion, of course. It's just my experience, and yours may well be different. But _IF_ it is indeed true that you're far more likely to run into a beauty extremist than a cowboy extremist, then it stands to reason programmers who are irritated by extremists of any colour will tend to exaggerate somewhat more on the _other side_ of the beauty extremists.
In the end it's a pendulum, isn't it? A presentation or blog post that comes across as authored by a beauty extremist was probably written by a capable, 'well adjusted' programmer (in the sense that they would agree that shipping code is at least as important, and experience is very important) who decided to exaggerate a tad on the beauty side to address some perceived notion that the audience's balance was too far towards the cowboy end of the spectrum.
Enough of those kinds of presentations, and a few are bound to perceive that the pendulum has now swung too far out towards the beauty end, and start exaggerating the value of the cowboy end.
What I'm missing in this discussion is that, from my personal experience, people who are very result-oriented hardly ever write correct code. They tend to leave a trail of subtle, hard-to-find bugs (because foreseeing edge cases is one of the hardest parts of programming and doesn't happen naturally while thinking about the happy path) and poorly designed persistence structures (refactoring code is easy, but refactoring data is painful-to-impossible - you just can't retrospectively collect what you forgot to collect initially).
Of course both extremes are bad. I lean towards the perfectionist camp, so I actively seek to work with someone result-oriented. I've found this teaming to bring great results, where both my partner and myself end up happy with the outcome and fulfilled in ways neither of us would be by ourselves (for me because I produce more given the same time, and for them because they feel much more confident in the resulting programs - corroborated later by the much lower number of bugs found in them).
I know this is the case because I've been told many times by many different people (and most of them actively seek to work with me again afterwards if the opportunity arises).
> People who want to write "beautiful code" are not role models, don't listen to them. They are vain and not productive.
People who push "mostly working" code are hell on earth. Sure you spend 5 hours less on it right now, but it's going to cost literal days or weeks down the line.
How many times have I seen developers push some "good enough" code that they didn't discuss with anyone because it looks good to have XXX commits for their quarterly feedback. They knew "what" to implement but not "how" to implement it or even "why" implement it in the first place, which ends up in hours of useless discussions that would have been avoided by a 1hr meeting before starting coding.
How many times have I seen people pick a DB and stick with it for years because "that's what google/facebook/whatever uses", and end up having to migrate everything to another DB because they went too fast and didn't spend 1/10th of the time on analysing their needs and writing specs.
Do you spend 10 years designing a plane and then 6 months actually building it, or do you spend 6 months planning it and 10 years fixing broken, hacky, "mostly working" parts? What works for school or hobby projects doesn't work in most professional environments, especially nowadays when developers stay 1-3 years in a company and move to the next job, leaving a pile of unmaintainable "mostly working" code.
Coding is the last stage of the process, it's the easy part, a monkey could implement the code if the specs and processes are solid.
People who want to deliver at all cost are not role models, don't listen to them. They are vain and not productive. They might be capable coders but they're not capable engineers for sure.
“Programs are meant to be read by humans and only incidentally for computers to execute.” ― Donald Knuth
> People who push "mostly working" code are hell on earth. Sure you spend 5 hours less on it right now, but it's going to cost literal days or weeks down the line.
Wildly exaggerated, but I'll bite: it's still better to save 5 hours while you are bleeding money (before your product works) and spend 1 week to improve the "beauty" of your code while it's already paying your salary.
> “Programs are meant to be read by humans and only incidentally for computers to execute.” ― Donald Knuth
That's a fine attitude for someone who wants to teach CS or explore programming as a hobby. For professionals, programs need to execute correctly and go into production ASAP. Emotional and mental cost to the programmer is of less importance.
> Emotional and mental cost to the programmer is of less importance.
Only if you want it to be like that.
It's as if an assembly worker from 1930 was telling his mates that working 90 hours a week and wasting his health for a few $ is ok because it's good for their bosses.
Might be ok if you're the #1 in a SV startup, sure, but everywhere else I doubt it'll take you very far.
I have seen way more damage done by the people with "mostly working prototypes in hours or days" than by people who want to write beautiful code. I understand that a fixation on producing something beautiful can lead to paralysis, but this is rarely the case. In most cases the prototype grows cancerously and becomes impossible to fix in quite a short time. My advice: listen to the experienced programmers who have screwed up more times than you, even if they seem to be obsessed with "beautiful code".
Most of the damage I've seen was from people who think they can write beautiful code, but instead write a slow overengineered framework and use clout to force everyone else's code into that framework. Sadly, these people are often smart and experienced. It's more of an attitude problem: they are not content with making a library that will sit at the bottom of the call stack, when they could instead make a framework that sits at the top.
I've worked at a couple companies now that have this problem. I know that one is now on their third try at a second gen rewrite of their aging core product. They can't ship it because they haven't found that magic formula to make it the most perfect program ever[0]. Sadly, this has been with multiple teams trying.
[0] I call this the Neo Architecture. They're looking for "the one". The architecture that will allow for any CR to be handled elegantly and beautifully, where all concerns are completely separated, where all data is perfectly abstracted. There's no such thing. It doesn't exist. Just ship already!
> I have seen way more damage done by the people "mostly working prototypes in hours or days" than people who want to write beautiful code
Because people who write "beautiful code" don't often achieve anything that can be seen. Successful companies are based on quickly written code that delivers.
I think it’s a balance. At the other end of the spectrum are the “cowboys” who are productive by writing sloppy code with bad architecture and are either oblivious of or just ignore the huge pile of technical debt they pass along to the people who get to maintain their code.
Moreover, I think there's an 80/20-ish sweet spot. With an attitude of moderation, you can keep a brisk pace and achieve a pretty clean codebase with little extra effort.
It seems best to defer architecting just long enough to see that some component is becoming problematic, but not much longer. Do it too early and you waste effort. Do it too late and you have a much larger mess to clean up.
Technical debt within a working codebase is a much, much better problem to have than not being able to launch or even demonstrate your product. It's really overrated. I left a huge pile of it in my previous company (amassed over 18 years of programming in Perl) and guess what: it's doing great, more profitable than ever before. The code has been slowly enhanced, some of it refactored. Code quality (as in "beauty", not correctness) is the least of your problems in a startup, although it will look bigger for those who have to work on it (pity them but don't overemphasize).
Yes, but the other side of the coin is having a codebase that is so old and such a mess that it can't be changed or upgraded without a tremendous cost.
I have worked for a company that eventually went down because there was absolutely no separation between business logic and user interface. When no one any longer wanted to pay for a system with an ASCII-interface (this was about 1995), they had to throw out about 5 million lines of code and never recovered.
I've worked with systems that had to be thrown out because no one any longer wanted to pay for a week of work for something that should be possible in two hours.
Either extreme is stupid. Of course you have to ship. And in early stages it often makes sense to accrue debt to get fast to the market. But of course you also have to be able to be nimble and move fast even after two years and five. You can't do that if all your resources are tied up in interest payments.
> When no one any longer wanted to pay for a system with an ASCII-interface (this was about 1995), they had to throw out about 5 million lines of code and never recovered.
Kinda obvious what their actual mistake was then...
> I've worked with systems that had to be thrown out because no one any longer wanted to pay for a week of work for something should be possible in two hours.
I see a pattern there.
> But of course you also have to be able to be nimble and move fast even after two years and five.
You can also go broke while chasing this illusion, twice. Whether as startup or established company with a "big ball of mud" codebase.
I'd rather tackle the "no one wants to pay" problem in a profitable company than a complete rewrite, or even trying to boot up a new company while, in addition to all the other problems involved, trying to write "beautiful code", thank you.
I don't know anything about you personally, but my experience with people who argue like you is that they don't understand or don't care about maintainable code at all. So every time someone tries to argue for something like best practices, they get shouted down as completely impractical and out of touch. The problem is that the people doing the shouting are often the ones producing code at high speed. They are productive while everyone else has to clean up their mess.
Maybe I wasn't clear enough, but "no one wants to pay" and profitability just don't go together very often.
Technical debt isn't an all-or-nothing thing. It can just slow you down _a bit_. But if you accumulate enough, now things that should take an hour take 2 hours. Or 4 hours. Or 3 days!
The non-refutable answer is that you should spend an adequate amount of time thinking about stuff before doing it. Anecdotally I've found that people are not very good at thinking well while simultaneously coding, so thinking a bit before opening the code browser works pretty well.
Being able to distinguish avoidable technical debt from reasonable tradeoffs at the time can save you a hell of a lot of time. And yeah, working in messy codebases sucks! But more importantly it can cause people to lose time they could spend working on more features and shipping stuff.
This may be hard to hear depending on your experience, but some companies need to develop code over decades. When the problem is not money but ability to maintain, delivering a poc one month earlier is not always a competitive edge.
It actually doesn't matter because you can always rewrite a working product. If you never have a product nothing helps.
In the end, all code is debt. The only thing is whether you earned anything from the debt. If you never earn anything it doesn't matter that you didn't take on that much debt.
> It actually doesn't matter because you can always rewrite a working product.
False. Sure, technically, this is true, but in practice there are lots of variables at play here. Who is going to pay for the rewrite? Is the company profitable enough to do that while maintaining legacy code for current users to use? I'm not saying the only way to develop a product is to get it right the first time, but ideally, organisations shouldn't have to pay for the same product twice.
> At the other end of the spectrum are the “cowboys” who are productive by writing sloppy code with bad architecture and are either oblivious of or just ignore the huge pile of technical debt they pass along to the people who get to maintain their code.
In my 20 years of programming I've never heard of a well-architected codebase that is cleanly written, problem-free, future-proof, and a joy to maintain.
Lots of open source projects are well architected and cleanly written[1]. Demanding that they also be problem free, or future proof, is a red herring. I've never heard any proponent of clean code claim that these are side effects of clean code.
[1] Examples I can think of off the top of my head: KHTML/Webkit, Hanami.rb, the Go standard library, sqlite...
Just as bad as the people who obsess over "beautiful code" and whatnot are the people who straight out reject the idea, then label those who observe them the "enemy" of this industry.
I think the key here really is mindfulness. Developers read code significantly more than they write it, and one of the primary goals of best practices and clean coding guidelines is to help teams communicate effectively through code. Writing code well is a skill, but that doesn't start and end at taking all the "rules" to heart; that's just the first step. The next is really to recognise that there aren't any rules, but rather heuristics based on collective experiences. Software development is not an exact science, and just like the Agile Manifesto, clean coding guidelines only provide a framework upon which to build any team's practices: a starting point from which to find what actually works for their project.
This, 1000x. Experience throwing code around does a lot, in my experience (~7 years programming, 4 professionally). Learn the best practices, and know when they break. Heck, purposely break them and see if/when/how they bite you back!
A very good rule of thumb to recognize the former group is to simply ask "when is this best practice not valid"? This will tell me whether they are consciously proposing it or mindlessly adding complexity. For every single "best practice" I can find you a case where it's not valid, and if you cannot then there's something very wrong.
Interesting view. In your experience, how did you learn about the best practices - and, then, consider asking when that practice is not valid?
It looks like you add purposeful breaks, experiment, and then see what happens. I wonder if you also talk with your colleagues, self-reflect, think about the big picture rather than just the task at hand when you're doing this?
I started coding for fun because I wanted to build things, so when I had a problem I searched how to fix it. At some point, I started noticing "larger" problems that have to do with code structure and data flows.
Since I am self-taught and everyone was talking about "best practices", I started to learn and follow those around 1-2 years into coding. I built beautiful abstract glass houses that didn't get me any closer to my objectives! So I was curious why these best practices were slowing me down and started digging deeper to be able to know how, when and why to apply them. Lots of conflicting info online, so had to start thinking about those by myself and not just follow random articles. I even started searching conflicting info to see the two sides.
But yeah, you are totally right, devs get stuck into that pixel and forget about the larger picture. I switched my thinking to a purely "ROI" for the business, and I am convinced it's been a strong win-win. But you have to learn about the real, implicit and explicit, business objective, which is something 99% of devs do not really care (or need to, in this market). I also started realizing how code was many times not taking me closer to my objectives, but that's a topic for another day.
This is very useful. I also agree that it's tough to capture insights because of the conflicting info in many places, from online to even in books and videos. Actively seeking out conflicting info and studying the sides is a good strategy.
Also like your view: "I switched my thinking to a purely "ROI" for the business, and I am convinced it's been a strong win-win. But you have to learn about the real, implicit and explicit, business objective, which is something 99% of devs do not really care (or need to, in this market)." I find this increasingly important, too, thank you.
"inspiration comes from working on something, not from thinking about it"
Not sure about that - the applications I'm most proud of actually came from extended periods where I was involved in maintaining an existing system, carefully thinking about how it could be improved, so when a chance came I could propose a new application with a well defined list of benefits. Note that by "extended period" I mean a couple of years!
It would be much easier if the next person were the creator of the code, but instead we have this machine cog mindset of "we can just throw any of our developers at any given problem whenever".
Also known as the difference between doing things right (beautiful code) vs doing the right thing (working software).
I like ideas from the systems thinkers like Russell Ackoff about this.
> Ackoff expands by suggesting that doing things right is about efficiency but doing the right thing is about effectiveness. He makes a strong case for the connection between wisdom and doing/identifying the right things. He notes further that when we try to do things right about the wrong thing, we actually make things worse. Such attempts at improvement actually take us further from both the recognition and accomplishment of the “right thing”.
I think you're right for some classes of problems.
For others, going slow and painstakingly seeking to simplify is the only way. Otherwise the complexity will eat you alive and the whole edifice will collapse on itself.
The issue here isn't beautiful code vs quick code.
I think it's between spending some time and thinking about the problem you are trying to solve vs brute forcing a quick prototype to get it done.
I think your code will look beautiful by itself if the engineer fully understands the problem at hand.
This process takes time and practice. I personally throw away my quick first solution and let the fog clear first.
Some things that have generally given me good results: drawing stuff, taking small breaks, listening to my tests (if my tests hurt, it means my design is bad).
I've "finish[ed] more correct programs on time" by planning all of the details in a Design Document before writing a line of code, than I ever have by planning at a high level only before implementing, but that's just my personal anecdote. Although, I do take the MVP-then-iterate approach if I don't have a tight deadline, because the early feedback loop can be invaluable in that type of development process.
> You can still write beautiful code and be productive. OpenBSD contains some examples.
Yes, by all means - if you can, do it. But please try to become as productive as John Carmack first before you aspire to write similarly beautiful code.
Adding to that: written code has no value, only running code has.
The quality of what running code is, or needs to be, depends on the stage of your product and who will be responsible for keeping the code running (either the developer himself or an ops team for instance).
Programming is simple. Domains and interfaces are difficult. People over engineer stuff. People like to think they found the key to hidden wisdom. I believe I just summed up 90% of the difficulty with programming.
Just for the record, procedural programming and functional programming are the only two styles of programming that make sense to me. Procedural because it's how the machine model works, and functional because it's how a mathematical description of computation works. WTF is OOP? (How the overactive imagination of a child works, I would reckon.)
OOP attempts to map to real objects, (and sometimes not so real objects). I know it's hated by functional purists, but it really does make it easier to think through problems. I personally don't hate it.
Functional purists are the only ones who hate it. OOP by its very nature encourages developers to make bold abstractions prematurely (where prematurely is interchangeable with unnecessarily) and subjectively. Meanwhile the procedural programmer (Golang) has already finished that segment of the program, having written one big function. If a piece of that big function is needed elsewhere, she factors it out. Most of her program is simple loops and conditionals. Programs are easy; why complicate something that is easy?
The machine should also guide the programmer as much as possible. I wish Golang had refinement types like Liquid Haskell or ATS. Some simple static analysis to cut down on crufty slow runtime test suites.
I'm more in the data oriented design camp where you use objects all the time, but also hate what seems to be flagrant abuse of OOP by many programmers.
Casey Muratori had a good quote with regards to it (paraphrased): "Having objects in your code is fine, it's natural. We've been doing it since before OOP was a thing. It's the whole phrase 'object-oriented' that's the problem. You're orienting your thinking around objects, not functions. It's the orientation that's bad about it, not whether you wind up with an object."
Over-engineering drives up LoC counts, so there's a (perhaps implicit) incentive to engage in it -- either you or your colleagues perceive greater productivity.
Re OOP, it provides a method to model what you term a "domain".
I love how people hate and spit on the idea of code generation, yet in their daily lives hit shortcuts on their keyboard, especially in OOP environments, to generate thousands of lines of cruft, and even seem to enjoy it because it looks like you did so much... File after file appears, each with 10+ lines in it, and checking into git looks like you did herculean work today! It really makes people feel better, so I guess that's why it is liked and popular. With more terse languages (usually some form of FP or APL or Forth or a mix) you can easily end up after a day with an empty screen and a blinking prompt; you might have done a lot in your head and on paper, but you have no proof for your boss, and even you yourself feel like it was a bit of a wasted day.