And if more systems engineers had more design knowledge, navigating the AWS console wouldn’t be like walking on hot coals. But it’s still a $X0 billion/year business!
We’re all good at different things, and it’s usually better to lean into your strengths than to paper over your weaknesses.
We can wish everyone were good at everything, or we can try to actually get things done.
> We can wish everyone were good at everything, or we can try to actually get things done.
False dichotomy. There's no reason we can't have both.
I want to be clear: there's no perfect code or a perfect understanding or any of that. But the complaint here about not knowing /enough/ fundamentals is valid. There is some threshold we should recognize as a minimum. The disagreement is about where that threshold is, and no one is calling for perfection. But there are certainly plenty who want the threshold not to exist at all, whether it's "AI will replace coders" or "a coding bootcamp gets you a big tech job." Zero to hero in a few months is bull.
It’s not a false dichotomy at all. You only have so many hours in a day. At a startup, it’s very unlikely (certainly not impossible!) that your differentiation will come from very cheap system orchestration - your time is likely better spent on building your product.
Minimum knowledge is one thing; minimum time to apply it is another.
If you had to spend time / VC money learning all of this stuff before you could begin to apply it, I absolutely agree, it's a waste of time. That's not my point. My point is people (by people, I mean "someone interested in tech and is likely to pursue it as a career") can and should learn these things earlier in life such that it's trivial once they're in the workforce.
I could go from servers sitting on the ground to racked, imaged, and ready to serve traffic in a few hours, because I've spent the time learning how to do it, and have built scripts and playbooks to do so. Even if I hadn't done the latter, many others have also done so and published them, so as long as you knew what you were looking for, you could do the same.
Yeah, this is what I meant as well. Though I'd also argue that on-the-job learning is essential too, and it should come through multiple avenues: mentorship from seniors to juniors, as well as allowing time to learn on the job. I can tell you from having been an aerospace engineer that you'd be given this time. And from what I hear, in the old days you'd naturally get this time whenever you hit compile.
There's a bunch of sayings from tradesmen that I think are relevant here, usually said by people who take pride in their work and won't do shoddy craftsmanship:
- measure twice, cut once
- there's never time to do it right, but there's always time to do it twice
- if you don't have time to do it right when will you have time to do it again?
I think the advantage these guys have is that when they do a shit job it's more noticeable, and not only to the builders. We, on the other hand, work behind high abstractions, even though high skill is the main reason we get the big bucks. Unfortunately I think this makes it harder for managers to differentiate high quality from shit, so they'd rather get shit fast than quality a tad slower, because all they can differentiate is time. But they don't see how costly this is when everything has to be done at least thrice.
> False dichotomy. There's no reason we can't have both.
I'd kinda want to argue with that - it is true, but we don't live in a vacuum. Most programmers (me included, don't worry) aren't that skilled, and after work not everyone wants to study more. This is something that could be resolved by changing the cultural focus, but like other things involving people, it's easier to change the system/procedures than habits.
Are you wanting to argue or discuss? You can agree in part and disagree with another part. Doesn't have to be an argument.
To your point, I agree. I would argue that employers should be giving employees time to better themselves. It's in the nature of any job where innovation takes place. It's common among engineers, physicists, chemists, biologists, lawyers, pilots, and others to have time to learn. Doctors seem to be in the same boat as us, with obvious negative consequences. The job requires continuous learning, and you're right, that learning is work. So guess who's supposed to pay for work?
Well, here's the choice: we do that and build good things, or we don't and build shit.
If you look around I think you'll notice it's mostly shit...
There's a flaw in markets, though, which allows shit to flourish: before purchasing, you can't tell the difference between products. So people generally make the choice based on price, and of course, you get what you pay for. In many markets people are screaming for something different that isn't currently being offered, but things are so entrenched that it's hard to even create that new market unless you're a huge player.
Here's a good example. Say you know your customers like fruit that is sweet, so all the farmers breed sweeter and sweeter strawberries. The customers are happy and sales go up. But at some point they don't want it any sweeter. Every farmer continues anyway, the customers have no choice but to buy too-sweet strawberries, and strawberry sales decline. The farmers, having little signal from customers other than price and orders, what do they do? Well... they double down, of course! It's what worked before.
The problem is that the people making decisions are so far removed from all this that they can't read the room. They don't know what the customer wants. Tbh, with tech, often the customer doesn't know what they want until they see it. (This is why so much innovation comes from open source: people fix things to make their own lives better, and then a company goes "that's a good idea, let's scale this.")
> there's no perfect code or a perfect understanding or any of that
I'm unsure what those terms mean. What are qualities that perfect code or perfect understanding would have?
Depending on your framing I may agree or disagree.
Just to lob a softball: I'm sure there are/were people who had a perfect understanding of an older CPU architecture, or an entire system architecture's worth of perfect understanding that gave us spacecraft with hardware and firmware that still work and can be updated (from outside the solar system?), or Linux.
These are softballs for framing because they're just what I could type off the cuff.
I'm making a not-so-subtle reference to "don't let perfection be the enemy of good", a line often aimed at people who are saying things need to be better.
To answer your softball: no, I doubt there was anyone who understood everything except pretty early on. Very few people understand the whole OS, let alone any specialized task like data analysis, HPC, programming languages, or encryption. But here's the thing: the extra knowledge never hurts. It almost always helps, though some knowledge is more generally useful than other knowledge. Especially where memory is concerned, things like caching, {S,M}I{S,M}D, some bash, and some assembly go A LONG way.
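To make that concrete, here's a minimal Python sketch (timings are machine-dependent; numpy is just the handiest way to show it) of why caching and SIMD knowledge pays off: the vectorized sum wins precisely because it walks contiguous memory and lets the hardware vectorize, while the interpreted loop touches one boxed object at a time.

```python
import time
import numpy as np

xs = list(range(10_000_000))
a = np.arange(10_000_000, dtype=np.int64)

t0 = time.perf_counter()
total = 0
for x in xs:          # interpreted loop: one boxed value at a time
    total += x
t1 = time.perf_counter()

vec_total = a.sum()   # contiguous memory, cache-friendly, SIMD under the hood
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.2f}s  numpy sum: {t2 - t1:.4f}s")
assert total == vec_total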
I'm a big fan of fundamental knowledge, but I disagree somewhat with your statement. The thing startups care most about is product-market fit, and finding that fit requires a lot of iteration and throw-away code. Once the dust settles and you have an initial user base, you can start looking into optimizations.
> Once the dust settles and you have an initial user base, you can start looking into optimizations.
But people never do. Instead they just scale up, get more funding, rinse and repeat. It isn't until the bill gets silly that anyone bothers to consider it, and they usually then discover that no one knows how to optimize anything other than code (and maybe not even that; I've worked with many devs who have no idea how to profile their code, which is horrifying).
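And profiling doesn't even need extra tooling. A minimal sketch with Python's built-in cProfile (the slow_sum function is just a stand-in for whatever your hot path is):

```python
import cProfile
import pstats

def slow_sum(n):
    # Deliberately naive: repeated string concatenation is roughly O(n^2)
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the five most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```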
> But people never do. Instead they just scale up, get more funding, rinse and repeat. It isn't until the bill gets silly that anyone bothers to consider it,
Yes, because usually the other option is to focus on those things you advocate for up front, and then go out of business before you get the chance to have the problems you're arguing against.
There is such a variety of work environments, and realistically most people learn on the job. Everyone has different skills and knowledge bases.
When I was at <FAANG> we didn’t control our infrastructure, there were teams that did it for us. Those guys knew a lot more about the internals of Linux than your average HNer. Getting access to the SSD of the host wasn’t a sys-call away, it was a ticket to an SRE and a library import. It wasn’t about limited knowledge, it was an intentional engineering tradeoff made at a multi-billion dollar infra level.
When I worked at <startup>, we spent an hour writing 50 LOC and throwing it at AWS Lambda just to see if it would work. No thought to long-term cost or scalability, because the company might not be there tomorrow, and this is the fastest way to prototype an API in the cloud. When it works, obviously management wants you to hit the “scale” button in that moment, and if it costs 50% more, well, that’s probably only a few hundred dollars a month. It wasn’t about limited knowledge, but an intentional engineering tradeoff when you’re focused on speed and costs are small.
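For reference, 50 LOC really is about this much; a hypothetical sketch of a proxy-integration handler (not the actual service, just the shape of such a prototype):

```python
import json

def handler(event, context):
    # Hypothetical prototype: parse the request body and echo it back.
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": body}),
    }
```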
And there is a whole bunch of companies that exist in between.
This is exactly my experience. Nearly every dev on my team can dive into the details and scale that service effectively, but it’s rarely worth it.
If an engineer costs $100/hour, paying an extra $100/month (or even an extra $1k/month) to scale is generally a no-brainer. That money is almost always better spent shipping product.
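Worked out with those figures (both assumed, from the sentence above), the arithmetic is stark:

```python
ENGINEER_RATE = 100      # $/hour, assumed loaded cost of an engineer
EXTRA_SCALING = 1_000    # $/month of "wasteful" extra scaling spend

# Engineering hours that a full year of the extra spend would buy:
hours = EXTRA_SCALING * 12 / ENGINEER_RATE
print(hours)  # 120.0 hours, i.e. about three weeks of one engineer per year
```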
Premature optimization may hit them hard. Overengineering is imo usually the bigger technical debt, and a huge upfront cost as well. Well-thought-out plans tend to become a sunk-cost fallacy. Making room for changes is hard enough even in XP-like ways of working. When you have to tell your manager that half a year of careful plans and engineering can be thrown away because of new requirements, which emerged from a late entry to market, you look like a clown. Plans and complexity usually introduce more risk than they remove.
Infra shouldn't require much redoing if it's done correctly. The configuration of foundational software, like an RDBMS schema, maybe, but I wouldn't classify that as infra per se.
Seriously, I'm struggling to figure out how "we have servers that run containers / applications" would need to be redone just because the application changed.
Some things that can happen:
- The product gets canned.
- Customers want it on premise, in their data center.
- Usage spikes are too extreme and serverless is simply the cheapest option.
I would always recommend a "serverless" monolith first, with the option to develop with mocks locally/offline. That's imo the best risk/effort ratio.
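A sketch of what I mean (hypothetical handler; the rawPath field assumes the API Gateway HTTP API event shape): one entry point, routed internally, runnable offline by feeding it a mocked event.

```python
import json

def handler(event, context):
    # One "monolith" entry point: route inside the function rather than
    # splitting every endpoint into its own lambda.
    path = event.get("rawPath", "/")
    if path == "/health":
        status, body = 200, {"ok": True}
    else:
        status, body = 404, {"error": "not found", "path": path}
    return {"statusCode": status, "body": json.dumps(body)}

if __name__ == "__main__":
    # Local/offline mode: no AWS required, just a mocked event.
    print(handler({"rawPath": "/health"}, context=None))
```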