> there’s a big difference between having ten years of experience and having one year of experience repeated ten times
Love this thought! It's surprisingly common to meet engineers with 6-7 years of experience with incredibly bad habits that they picked up working essentially at a single company that they joined right out of college. Repeating the same questionable patterns for 6 years doesn't provide a lot of growth opportunity. In this context, FAANG's obsession with hiring new grads is a questionable practice: people get stuck.
I've heard that exact same adage used in reverse by those "6-7 years at the same company" engineers against engineers with careers of shorter stints. The logic being that if you never experience 5/6/7+ years of continuous ownership, you never get to see how your decisions pan out, and you miss out on other forms of longitudinal experience.
If one thing is a constant, it is that engineers will always come up with these silly little adages and cliches to discount the experience of their peers and continue telling themselves they are the smartest kid in the room.
What are these 7-year projects? Putting a man on the moon? Most large software projects take 2-3 years, or else they run a serious risk of getting cancelled completely.
Very much depends on what you are working on. What SaaS has ever stopped being developed and no longer needed ownership after 2-3 years? Products with embedded software might ship in 2-3 years, but many still need bug fixes and new features long after that, and therefore still need ownership.
If it’s a SaaS, the original MVP will take anywhere from 6 months to 2 years, then it either dies or scales, in which case it’ll be nearly rebuilt with a much larger team. Therefore you will not learn anything in year 4 that you didn’t already learn in year 3.
Is this an honest question or one of those HN social status signalling things where we all feign unfamiliarity with what software development looks like outside of the YC bubble? Not snark, seeking clarification.
My understanding is that they meant actual individual projects within large companies never take this long. So "projects" as in "features" that ICs work on. I agree with that point; I hardly ever get to go back and evaluate the decisions I made a year ago. Ownership changes happen all the time, plus refactors, stack upgrades, other code changes... No one I know gets to work on a single self-contained feature/area for 6-7 years straight.
If you're working in a company that large then presumably you have architects and leads who are making broader decisions that you can see the effects of.
the point about being around for longer to see the effects of your mistakes assumes you're not a cog in the wheel.
I don't think this article limits the definition of architecture exclusively to very broad systems, like designing a new graph DB service. Architecture choices happen at all levels, and true systems architects are seldom involved in that.
> the point about being around for longer to see the effects of your mistakes assumes you're not a cog in the wheel.
Ideally, yes. In practice, decisions that should take QARs (quality attribute requirements) into account, but don't, are made at almost any IC level. I have seen systems designed by interns. You could say it's a company culture problem, and I would partially agree. On the other hand, tech companies have a tendency to lean into empowering ICs, so what ends up happening is that inexperienced engineers design systems that are only reviewed by overworked (and maybe not particularly experienced and/or motivated) senior ICs.
I definitely know more about non-YC than the "YC bubble", and having talked to a bunch of YC founders I'm pretty sure there isn't a YC-specific way of building software - lots of different companies trying totally different things.
That's an interesting point. Can't say I agree based on my experience in both small startups and mega-corps.
Startups are so chaotic and fast-paced that one usually only needs 1-2 years (if not less!) to see how earlier decisions pan out. Very frequently the stack and the codebase undergo monumental changes in that short period of time due to the changes in business requirements and scale.
Mega-corps are vast engineering efforts with hundreds if not thousands of daily contributions. While I could technically go back and try to evaluate my choices from 6-7 years ago, it would be fairly hard to decouple my individual contributions from the changes that happened afterwards (functional/non-functional feature requirements changed since then, the codebase is unrecognizable, etc). 6-7 years is just too long of a time frame for certain eng areas (web/native product is a primary example). That said, I can imagine that there are slower-paced areas where this time frame is more relevant, e.g. database engine development.
I will always value someone who has worked at 5 companies for 2 years each more than someone who has worked at the same company for 10 years. Most of the learning in any job happens during the first year. Imagine someone who has onboarded 5 times into 5 different company cultures and architectural systems. Such a person is a walking software engineering textbook.
So you actually see “1 year of experience repeated 10 times” as a positive! I like that, but I do think there’s value in thinking about how a system should evolve over the long term and in seeing the long-term consequences of your own architectural decisions.
I’ve seen this play out so many times I’m tired of it. New team member joins. Super “productive”, rewrites enormous chunks of existing code and builds six new things. Great.
Then they leave, and you find out the bits they rewrote were the carefully crafted and well documented protobuf APIs, now replaced by ad hoc JSON where the parsing and spec are strewn across thirty places (roughly the contrast sketched below). The new projects, you quickly realize, make no sense whatsoever and don’t actually do the things they were supposed to do, but kind of look like it if you aren’t paying too close attention.
Now they’re a “senior engineer” at the next company.
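(Not from the original comment: a minimal Python sketch of the contrast being described. The `Order` payload and field names are hypothetical; the point is not JSON versus protobuf as such, but whether the spec and the parsing live in one place or are strewn across thirty.)

```python
import json
from dataclasses import dataclass

# Ad hoc style: every call site re-implements its own slice of the "spec".
# Multiply this across thirty files and nobody knows what a valid payload is.
def handler_a(raw: str) -> str:
    data = json.loads(raw)
    return data.get("customer_id") or data.get("customerId") or "unknown"

# Schema-first style: one place defines the contract, every caller reuses it.
@dataclass(frozen=True)
class Order:
    customer_id: str
    amount_cents: int

    @classmethod
    def parse(cls, raw: str) -> "Order":
        data = json.loads(raw)
        return cls(customer_id=data["customer_id"],
                   amount_cents=int(data["amount_cents"]))

print(handler_a('{"customerId": "c42"}'))
print(Order.parse('{"customer_id": "c42", "amount_cents": 1999}'))
```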
Very often development teams will value reuse and DRY to the point that they'll conclude they need to stuff most things into a common library and share it amongst all their codebases.
Anyone who has seen this done long term has seen this accrete until everyone is afraid to change the common library for fear of breaking _everything_.
IOW, the risk profile of something like this grows over time. It starts out small but each time a project adds it as a dependency the risk grows.
And yes, I understand everyone is going to post about versioning and change management and all the myriad ways one can try to mitigate this.
The point here is that when using a common library like this you must be circumspect about what you put into it. The bar for what's worth putting into that common library is going to be set much lower by someone who pushes for a common lib and then skips to the next job than by someone who was there over the next 3-5 years and personally experienced the pain and fear such approaches invoke longer term.
And to repeat myself, since I know this is HN:
No one is saying you can never have common libraries like this. What's being said is that the long-term risk is much more likely to be respected by someone with long-term experience, and that person is therefore much more likely to be able to design a system that can be worked on productively over the long term.
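(Not from the thread: a hedged caricature in Python of how that fear accretes. All names are hypothetical; each keyword flag stands for an option added for exactly one downstream consumer, which is why nobody ever dares remove any of them.)

```python
from datetime import datetime, timezone

def format_timestamp(
    epoch_seconds: int,
    *,
    legacy_format: bool = False,     # added years ago for the reporting job
    utc_suffix: bool = True,         # the billing service parses the "+00:00"
    minute_precision: bool = False,  # the mobile API only wants minutes
) -> str:
    """Shared by N codebases; touching any branch risks breaking all of them."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    if legacy_format:
        return dt.strftime("%Y-%m-%d %H:%M:%S")
    if minute_precision:
        return dt.strftime("%Y-%m-%dT%H:%M") + ("+00:00" if utc_suffix else "")
    text = dt.isoformat()
    return text if utc_suffix else text.replace("+00:00", "")

print(format_timestamp(1735689600))                      # the "default" caller
print(format_timestamp(1735689600, legacy_format=True))  # the reporting job
```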
I couldn't disagree more. The first year is when you learn how to work at that specific company, on that specific software. The subsequent years are when you refine your understanding and decision-making by building on top of your first year's learning. It's when you learn which of the ideas you learned actually work and how they might go wrong. Being a "walking textbook" doesn't make you a good programmer. Thinking about how to build things, building them, and then taking responsibility for them -- these are skills that take time and sustained effort to develop.
This sounds like it makes sense but in practice I've noticed two things:
a) Many people who have only ever worked at 1 or 2 companies have some really critical gaps in their technical breadth. Every time I've switched jobs, I've had multiple "I can't believe we didn't use this at my last job" moments.
b) I've seen many "lifers" in big companies who built some great big system at some point in the past and now do very little other than maintain this hairball. It's very rare that, for someone who has been in the same role for 10 years, their last 5 years were as productive as their first 5. Despite promos and all. You get older, the job gets easier, and you get lazier.
And on the flip side, people who hop around every year or two always seem to be able to build things quickly… and then they leave, and everyone else gets both the surprise of learning how badly everything was slapped together and the joy of maintaining that hairball for years to come.
Somebody who never spends an appreciable amount of time in one role never has to deal with the long-term consequences of their decisions and misses out on that opportunity to learn and grow.
a) This seems like a stronger argument for hiring multiple people with diverse experiences rather than favoring individual hires with lots of short gigs.
b) This seems like a problem you can screen for during the interview process rather than engaging in resume discrimination.
You both describe valid approaches, similar to a breadth-first or depth-first search. Either way, one will get into deep waters with "some great big system", regardless of the specifics.
As a farmer, my experience comes in years. I only see about one crop grow each year, so it really takes decades to start to see patterns in the conditions the crops come up in (e.g. years of drought, years with little sunshine, years that are cold, etc.), with little in my control to change that schedule. Good management decisions require an understanding of all of those patterns, and as such, a year is a significant boundary.
But in software, the most significant boundary is how fast you can work. The faster you can work, the more scenarios you can try. Time is not completely removed from the equation, but two developers with different performance characteristics can have wildly different experience levels after an equal amount of time has passed. In this case, a year is not a significant boundary.
Don't work at megacorps where people are incentivized to ship big, milestone-branded projects and "quantifiable" impact, because you will end up with massive codebases that are never refactored, over-engineered with such failures as code generation and abstract interfaces for everything, where everyone keeps layering on crap while tech-debt entropy maximizes and relative agility grinds to a halt.
Agreed. Big tech is not where you learn good coding, and there's no way they can recover from it. Most HR people there have no idea what they're doing, the devs are all politics and self-interest, and it's probably full of foreign intelligence operatives who are interested in complexity because it helps them hide backdoors all over the place.
I still think there ought to be formalized, international journeyman and apprenticeship programs for SWE, HWE, SRE, QA, product management, project management, and technical management independent of employer. The lack of generational knowledge and culture transfer leads to droves of novices learning bad habits.
Meta: I love this style of article and I wish it were more common and more detailed.
"How to build perfect software in Django" is incredibly hard to write, but "15 common Django architectural mistakes" is a lot easier to write is extremely useful.
IMO "15 common Django architectural mistakes" is a less useful focus for these types of posts. Architecture can't be taught by listing quick tips. Or at least not only by listing quick tips.
The unfortunate reality is that in many (most?) cases when inexperienced engineers face these challenges, it's too late to follow quick tips... their company's Django app has been set up 10 years prior, and now they have dozens upon dozens of layers of abstraction in the codebase.
Teaching how to think about architecture, how to evaluate options, and how to make decisions imparts a more reliable and widely applicable skill.
It definitely wouldn't be sufficient on its own, so it's no substitute for understanding architecture. I think it's more a useful learning technique that is often underexplored. Most posts focus on what to do, and few seem to focus on what not to do. Shifting the balance somewhat would be valuable IMO.
Have we gone full circle? "10 hot tips for the perfect app" was a style that was so over used and clickbaity that HN rewrites the title to remove the count.
I think my comment was unclear. I don't find the listicle to be the interesting part; rather, articles that focus on anti-patterns. A lot of architectural mistakes are because people make fairly straightforward mistakes that could be easily avoided with good resources focused on what not to do.
It's ironic that they use pictures of failing bridges, after describing software architecture practices which are never used to build bridges. Yes, let's experiment with this 300 million dollar bridge for a few years, and make sure the construction workers are part of defining the fill type and dimensions of the structural pillars.
The way I wish software were created is more like physical infrastructure. There's still huge problems with construction, to be sure. But it lasts longer and is more likely to succeed when completed. There's all kinds of requirements, analysis, and inspection to ensure it works correctly. The people putting it together don't need to be very skilled; they're working with off-the-shelf commodity parts, manufactured to a minimum specification, with specific dimensions and attributes, which loosely couple in many configurations with identical parts from different vendors all over the world. Combining them in specific ways has quantifiable, predetermined results. And you know that for the parts that require being designed correctly, the people designing them had very specific minimum qualifications that take years to attain.
Companies today, whenever they want to build a software product, think they need to build an entire factory first. But companies making physical products wouldn't do that, because building a factory requires factory-building skill, that has nothing to do with the widget they want to make. Instead they would find a factory and hire them to build their widget. Software may be "modern", but its production is antiquated.
I hear you, but the promise and curse of software is that it is malleable. We are not building bridges with extremely clear and obvious success and failure, we are creating user experiences that vary subjectively based on who is using them and what other digital or physical realities they interact with. It's an alluring fantasy that if we just let the seasoned experts design everything and gave them space to work, we could have better quality systems.
This is not how it plays out, though; you get burned from both sides. Inevitably some non-technical leadership stakeholders ask very fair questions about "why can't we just...", and they're not wrong about what's possible; it's just different from what came before, and it's not possible to rebuild entire digital systems as quickly as the good ideas come.
On the implementation side, details matter and abstractions leak; seemingly small requirement changes undermine assumptions behind major architectural decisions. The idea that you can have a handful of seasoned experts guiding an army of low-skill builders just doesn't work out with the same economics as physical construction. The details matter too much, and the implications of pure logic are too diverse to be covered by the equivalent of physical building codes.
> we are creating user experiences that vary subjectively based on who is using them and what other digital or physical realities they interact with
I don't believe that. When you create a coffee shop, you build it to have a specific, repeatable, singular user experience. Some may like it more than others, but if the experience is really good, people will keep coming. You don't keep changing your coffee shop week to week for years on end. The maintenance is in things like cleaning the floor, not changing the shape of the coffee counter, tables and chairs.
> seemingly small requirement changes undermine assumptions behind major architectural decisions
That's just poor design. Commercial buildings are built large and open so that the business renting them has the flexibility to change things around in the space without calling a general contractor to rebuild the walls or raise the ceilings. If you build your software with tiny rooms and load-bearing walls in the interior, yeah, you're gonna need to rearchitect to get more tables in there.
The fatal flaw in these software systems is that they're effectively a single business hiring a general contractor to build an entirely new building, with very specific use cases in mind. When their business finally grows, or they just want more natural light in there, they rebuild the building. It's just bad business sense.
In the very worst case we should be renting out a new building, not building or rebuilding one. Only the largest businesses should consider constructing new buildings, and even then, they should be building it to last for a decade or more. (And this isn't even about real estate, it's about their business requirements, wasting money on construction, and the risks of construction delays and failures)
I agree with your final sentence that businesses often undertake very ill-advised software projects. However I don't think the building analogy is very illuminating here; the thing about buildings is that while they have complexity, the costs, benefits and use cases are a lot more intuitive and can be explained to an FP&A professional who can use pretty standard methods to project potential outcomes across a wide range of projects.
Software is not like this, as there is no limit to the scope that can be built into a single app. You don't have the constraints of physical space, materials, human use cases, and locality. The "load-bearing" aspects of software are not just physical infrastructure with CPU/memory limits, but also fundamental choices in the data model (some more than others), and what starts out as a casual choice can become load-bearing if further construction builds on that assumption. Buildings don't have legacy data.
> When you create a coffee shop, you build it to have a specific, repeatable, singular user experience.
I can't tell if you're suggesting software should be built this way, but after 25 years in the consumer space in multiple startups, scaleups, and largish companies, I have never seen any software company succeed based on rigidly adhering to singular user experience in this way. Even when you can describe the experience simply, the details and tradeoffs are enormous, and companies that win are the ones that are able to iterate quickly without letting the quality degrade to the point that the wheels come off. Facebook vs Friendster vs MySpace is a pretty good example. In my own founding experience, I was able to bootstrap a streaming service to 6-figure subscribers over 8 years leaving behind a graveyard of failed competitors because they couldn't thread the needle between good UX, good engineering, and the right content investment.
I'm sure it's different in well established and standardized corners of B2B and industrial software where you are not subject to the fickle nature of consumers, but in B2C you are default-dead if you insist on some kind of fixed requirement and long-term vision that doesn't change.
>The people putting it together don't need to be very skilled; they're working with off-the-shelf commodity parts, manufactured to a minimum specification, with specific dimensions and attributes, which loosely couple in many configurations with identical parts from different vendors all over the world.
Isn't this, in some ephemeral way, exactly what most of us do? Sure, there are people in the world doing real Computer Science(TM), but most of us are assembling apps from existing parts. Sure, there's glue and there are domain-specific bits, but a lot of what's under the hood, in both paid and open source forms, isn't significantly different from going to a parts bin and pulling the right SKUs.
There are many failure modes, and by the time you show up... the entrenched incompetence will constrain the options within current project inertia and budgets.
Usually, it is better to branch a separate "new" team in another space, functionally deprecate the old project one feature at a time, and jettison the previous team including the manager after 6 months of uptime.
Trying to untangle a mess often takes 7 times longer than simply re-building a better version with well-defined use cases. =)
> by the time you show up... the entrenched incompetence will constrain the options within current project inertia and budgets.
This is so true, but sometimes the incompetence is so bad that it's not feasible to build a competent team (very, very difficult at least).
I'm actually in the middle of this now and I have to tell you, I've been doing this for 25+ years and I'm absolutely flabbergasted that people with this level of incompetence are gainfully employed in this industry. It's so bad we have contractor teams running circles around them and by circles I mean 2-3x their velocity with code designs that are as good or better.
> People who are not building the architecture should not make decisions about it.
Nope! Everyone can and does make mistakes. A good architect should accept and learn from suggestions and, ultimately, better technical decisions put forward by members of the team, whether they come from the senior with 70 years of experience or from a bright junior with 1 month of experience and a good idea.
I was in a team where the architect chose to use React without TypeScript, and a nasty .NET 4/React mix for the front end. This was 2-3 years ago. I suppose it’s what he knew and was comfortable with? This combination caused no end of issues and an unnecessarily difficult development process. Unfortunately I wasn’t around at the time, but I’d have spoken up, and probably been listened to, which is what I’d expect from any good team, regardless of your position. Better is better; it doesn’t matter whose mouth it comes out of.
A lot of the architects I’ve worked with have been rather snobby and quick to dismiss less experienced team members in my experience.
Unless your project complexity is on the order of blinking an LED, there isn't enough time in the day for a monolith, I'm afraid. You are going to have to build in coordination with other services (OSes, DBMSes, etc.)
Highlight all the code you have written / are responsible for. Have you highlighted just one single project that gets deployed as one unit? Then you have a monolith. Monolith doesn't mean you never talk to other services; it means you only own one.
My coworker highlighted just one single project that gets deployed as one unit. I have a second hobby project I work on at home of which I also take ownership. He is building a monolith, I am not? Even though we are working on the exact same project?
Um. Okay. So, just so I am clear, the advice here can be rephrased as: "Save for a matter of life or death, don't have hobbies"?
You can use Rails/Django/Next.js + Postgres/MySQL, and while I guess that's not technically a monolith, everybody calls it that anyway because the coordination is pretty much abstracted out. And that's gonna get you pretty far. If you really wanted it to be a monolith, you could use SQLite, and that would actually get you pretty far too, but why would you when Postgres/MySQL is just as easy and will scale so much further?
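(Not from the original comment: a minimal sketch of what that choice looks like in a stock Django `settings.py`, assuming placeholder names like `myapp` and a `USE_SQLITE` toggle invented here for illustration.)

```python
# settings.py (excerpt) -- illustrative values only
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

if os.environ.get("USE_SQLITE"):
    # The "really a monolith" option: one process, one file, no DB server.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": BASE_DIR / "db.sqlite3",
        }
    }
else:
    # The usual default: the app still owns all the code, it just talks to Postgres.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myapp",
            "USER": "myapp",
            "PASSWORD": os.environ.get("DB_PASSWORD", ""),
            "HOST": "127.0.0.1",
            "PORT": "5432",
        }
    }
```

Swapping between the two is a one-block change, which is part of why the "is it a monolith?" label doesn't change much about how you build it.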
> I have run into exactly zero engineers who don't know exactly what I'm talking about.
And I am afraid I haven't broken your streak. I know what Django is and what it usually entails.
I'm not sure what "monolith" adds, though. Simply saying "Django" would communicate the exact same thing, no? Is there a "Django non-monolith" that it could be confused with?
Per your definition above, it seems there is no software that is not a monolith.
I have no idea what you are trying to say, but my best guess is that you are referring to a case where one Django application is coordinating with 24 other services in some fashion?
Per fallingknife's definition, that's still a monolith. No different than a Django application coordinating with MySQL. If it is that you think 25 > 1 is somehow significant, throw in Stripe, GPT, etc. in addition to MySQL. He would still call that a monolith. It's all the same.
That would not be a monolith by my definition as I say that coordination removes software from being a monolith, but we've long moved past my definition. We're only here now to understand where fallingknife's definition allows for there to be anything other than monoliths.
The article does not talk about pitfalls of architecture; it proceeds to talk about hierarchical processes in organizations. I guess it's a failure to uphold and implement layers of abstraction. Good thing there is a software engineer to distribute knowledge on software development process engineering. The failure of the article is the article.
From the context I assumed these are the things I knew as “non-functional requirements”. Basically, random stuff which doesn’t have specific associated use cases. Depending on the project, they might specify environment, setup and deployment, throughput and latency, system requirements, security, reliability, compliance, etc.
In systems design, quality attributes are non-functional requirements used to evaluate the system itself rather than the intended behavior of the feature being implemented. E.g., extensibility and scalability are quality attributes.
I used to work with these so-called "software architects". They didn't ship any code but wrote articles like this at length that everyone pretended to read.
E.g., I have worked with many so-called 'developers'. They shipped lots of code, but most of it was solving the wrong problem, didn't fit any business requirement, added unnecessary complexity, had to be replaced almost immediately, etc. etc.
Shipping software means compromises, but compromising on everything will result in failure.
You surely wouldn't argue that companies should compromise on developer machine specs, for example? Too many organisations cheap out somewhere on the development process in the name of "efficiency", and this article is arguing (and I agree) that that's short-sighted and will cause more trouble than it saves in the long run.
> shipping software means compromise, most of these points are basically "don't compromise on X".
So you disagree with a lot of the article because it's telling you not to compromise on some things, and compromises are necessary for software development. That only logically works if you're also saying that compromises are necessary everywhere, doesn't it? Otherwise what's to disagree on? If you're saying compromise may be required on any of items 1 through 10, and I'm telling you not to compromise on 1-5, well then 6-10 are your zones of flexibility.
I certainly don't think the 12 points brought up in the article are everything and therefore any interpretation of my comment as applying to everything is unreasonable.
If your stance is "always do X" then you can replace yourself with a post-it note. Just write "always do X" on the post-it note, then when you have a decision point, refer to it. You're done.
Actual engineering is about weaving through constraints and goals and that often implies compromises. Someone who can do that well cannot be replaced by a post-it note.
And guess what: "always compromise" can also be placed on a post-it note, so very obviously that is not what I'm saying.