I personally know and have (tangentially) worked with the guy and none of what you’ve said is true.
> Look at his CV. Tiny (but impactful) features *building on existing infrastructure which has already provably scaled to millions, and likely has never seen beneath what is a REST API and a React front end*
Off the top of my head, he wrote the socket monitoring infrastructure for Zendesk's Unicorn workers, for example.
I certainly don’t agree with everything Sean says and admit that “picking the most important work” is a naive thing to say in most scenarios.
But writing Python in production is trivial; why would anyone lie about that? C, on the other hand, is different. And just because you make a single config change and get paid for it doesn't mean that's true for everyone.
Also, a Staff role at GitHub requires a certain bar of excellence, so I wouldn't blindly dismiss everything just out of spite.
Such a familiar feeling. Articles similar to this one make lots of sense to me, and I do try to embrace simplicity and not optimize prematurely, but very often I have no idea whether it's the praised simplicity and pragmatism or just a lack of experience and skills.
I believe even at FAANG-like companies, only a lucky minority is involved at that level of scale. Most developers just use the available infrastructure and tools without working on the creation of S3 or BigTable.
This famous blog post [0] suggests that the default behaviour at Google, at least, is for everything to deal with massive scale. That doesn't mean everyone is involved in creating massive-scale infrastructure like S3 or BigTable, but it does mean using that kind of infrastructure from the start.
There's another reason for that. Deep in my heart, I would love to be part of a team that works on truly data-intensive applications (as Martin Kleppmann would call them) where all the complexity is justified.
For example, I am more of the "All you need is Postgres" kind of software engineer. But reading all those fancy blog posts on how some team at Discord works with 1 trillion messages with Cassandra and ScyllaDB makes me envious.
Also, it seems that to be hired by such employers you need to prove that you already have such experience, which is a bit of a catch-22 situation.
I feel like the phrase "all you need is Postgres" has the (often unspoken) continuation of "until you actually get to a trillion messages".
In other words, the developers you're envious of didn't start with Cassandra and ScyllaDB, they started with the problem of too many messages. That's not an architectural choice, that's product success.
Absolutely. To put it differently, unfortunately not everyone has a chance to be part of a product's organic evolution from "all we need is Postgres" to "holy crap, we're a success, what is Cassandra by the way?"
As a data point, I've been at two data-intensive startups where they eventually needed to pull some of their table-like data out of Postgres, and for both that was past a $100MM valuation.
This varies by domain of course, but non-postgres solutions are generally built for very specific problems – they're worse than postgres at everything except one or two cases.
Only places that are making good money can afford overengineering.
Overengineering is more prevalent the more money a company makes, and companies that overengineer will pay good money to keep the overengineering working.
Something I respected about my old CTO and VP of Eng is that they were still technical enough to call out this kind of thing. For as big as that company was, they really held down complexity and overengineering to a minimum.
Unfortunately, the rest of the executive team has leaned on them so hard about AI boosting productivity that they aren't able to keep that from becoming a mess.
It is a shame that so many companies try to scale by just hiring a lot of people; the more people you have on a single project, the more overengineering you will end up with.
Some of it is a consequence of managing so many individual contributors. I still believe a lot of companies use microservices more as a way to scale to more teams than for scalability/reliability/observability.
Some of it is just people coming up with clever solutions (and leaving after the fact), and a lot comes from resume-driven development.
This also happens because plenty of candidates learn the buzzwords and patterns without understanding the trade-offs and nuances. With a competent enough interviewer, the shallowness of knowledge can be revealed immediately.
Identifying candidates who repeat buzzwords without understanding tradeoffs is easy; probing for those tradeoffs is part of the questioning process.
The problem with the comment above is that it’s not discussing tradeoffs at all. It’s just jumping to conclusions and dodging any discussion of tradeoffs.
If you answer questions like that, it’s impossible to tell if the candidate is being wise or if they’re simply BSing their way around the topic and pretending to be smart about it, because both types of candidates sound the same.
It’s easy to avoid this problem by answering questions as asked and mentioning tradeoffs. Trying to dismiss questions never works in your favor.
Yes, I would probably phrase it like this: "Under the current load, I would go super simple and use X, which can work fine until it doesn't. And then we can think about horizontal scaling and use Y and Z." Then proceed with a deeper discussion of Y and Z, probably.
After all, interviewing and understanding what your interviewer expects to hear is also a valuable skill (same as with your boss or client).
Yes, and this is exactly why LinkedIn-driven development exists in the first place. Listing a million technologies looks much more impressive on paper to recruiters than describing how you managed to only use a modular monolith and a single Postgres instance to make everything work.