This seems to be written from the "engineers are monkeys" perspective. As if they spend their time flinging poo, and you really need "solid" boring technology that's already well designed so the poo doesn't mess it up.
You shouldn't choose node.js or MongoDB-- not because they are "innovative", but because they are poorly engineered. (Erlang did what node does but much better, and MongoDB was a poorly engineered global-write-lock mess that is probably better now, but whose hype far exceeded its quality for many years.)
The "engineers are monkeys" idea is that engineers can't tell the difference-- and it seems to be supported by the popularity of those two technologies.
But if you know what you're doing, you choose good technologies-- Elixir is less than a year old, but it's built on the boring 20 years of work that has been done in Erlang. Couchbase is very innovative, but it's built on nearly a decade of CouchDB and memcached work.
You choose the right technologies and they become silver bullets that really make your project much more productive.
Boring technologies often have a performance (in time to market terms) cost to them.
Really you can't apply rules of thumb like this and the "innovation tokens" idea is silly.
I say this having shipped a product in 6 months with 4 people that should have taken 12 people 12 months, using Elixir (well before Elixir 1.0) and Couchbase, and trying out some of my "wacky" ideas for how a web platform should be built-- yes, I was using cutting-edge new ideas in this thing that we took to production very quickly.
The difference?
Those four engineers were all good. Not all experienced-- one had been programming less than a year-- but all good.
Seems everyone talks about finding good talent and how important it is, but nobody seems to be able to do it. I don't know why.
What I do know is: don't use "engineers are monkeys" rules of thumb-- just hire human engineers.
Having come from Etsy and witnessed the success of this type of thinking first hand, I think you missed the point of the article and I think you are using a tiny engineering organization (4 people) in your thinking, instead of a medium to large one (120+ engineers).
The problem isn't "we are starting a new codebase with 4 engineers, are we qualified to choose the right technology?" It's "we are solving a new problem, within a massive org/codebase, that could probably be solved more directly with a different set of technologies than the existing ones the rest of the company is using. Is that worth the overhead?" And the answer is almost always no. I.e., is local optimization worth the overhead?
Local optimization is extremely tempting no matter who you are or where you are. It's always easy to reach a point of frustration and arrive at the line of reasoning of "I don't get why we are wasting so much time to ship this product using the 'old' stuff when we could just use 'newstuff' and get it out the door in the next week." This happens to engineers of all levels, especially in a continuous deployment, "Just Ship" culture. The point of the article is that local optimization gives you a tiny boost in the beginning for a long-term cost that eventually moves the organization in a direction of shipping less. It's not that innovative technologies are bad.
> But if you know what you're doing, you choose good technologies
No, if you know what you are doing you make good organizational decisions. It matters less what technology you use than that the entire organization uses the same technology. Etsy has a great engineering team and yet the entire site is written in PHP. I don't think there is a single engineer working at Etsy who thinks PHP is the best language out there, but the decision to be made at the time was "there is a site using PHP, some Python, some Ruby etc., how do we make this easier to work on?" Of those three, Python and Ruby are almost universally thought of as better languages than PHP, but in this case the correct decision was picking a worse technology-- because more of the site was written in it and the existing infrastructure supported it more completely, so as an organization and a business we could get back to shipping products more quickly by all agreeing to use PHP. Etsy certainly does not think of its engineers as monkeys, quite the opposite.
This optimization may well become global when it comes to hiring in the future. Quoting the post:
> [...] what it is about the current stack that makes solving the problem prohibitively expensive and difficult
Etsy, as a very successful PHP shop, surely understands that the PHP codebase itself presents an expense in the form of smart engineers it never hires-- the ones who pass on the company because they won't work with that language.
Plus, there are examples where local optimization (i.e. staying with whatever legacy stack because it's proven) leads to a global failure because of an unmaintainable "spaghetti blob" codebase with duct tape everywhere.
To me, that explains why Etsy has trouble with new tools and languages - they can't hire the developers it'd take to successfully run a project outside of the tools they are used to.
Analogously, if you promote by external hiring, you hemorrhage the kind of employee you'd want to promote internally. If you always stay in a particular sandbox, you lose the kind of employee that can work outside it.
If PHP is a deal breaker then that person isn't the type of person you want to hire, since they obviously care more about incidental issues like language choice than solving real problems. That's not to say people can't groan about it (like any other workplace annoyance) but I'll still go work somewhere doing amazing things even if the cafeteria food kind of sucks.
Anecdotally, it usually turns out that good engineering practices can be brought into any medium. PHP comes with a higher than average number of foot guns, and there is a lot of terrible PHP code out there that is unfortunate to find when you Google something, but it's self evident that a good engineering organization can build solid systems in PHP. (See also: the diligent use of a specific set of C++ features in shops building cutting edge graphics/game technology.)
It's not incidental -- choice of language has a massive impact on your developers' day-to-day experience as a human being, and many of the best programmers will find themselves incredibly frustrated by using a language like PHP (not because of ego but because PHP has properties which makes it frustrating to work with).
Of course, from a certain mindset I suppose anyone unwilling to sacrifice their happiness on the altar of your corporation's profit might be dubious... the question then becomes whether this affects recruitment and retention, and if so, whether you can still accomplish the things you want with mediocre talent and high turnover...
You see C++ used in graphics/games because its strength is the low-level memory/CPU control needed for cutting-edge work. PHP is optimized for time-to-market, not the cutting edge, so chances are the "real problems" you want to hire for are not that interesting. Heck, a big advantage of PHP is that most of the problems you encounter are already solved for you.
Most of the time, "use of PHP" correlates to "not really interesting problems", so is a good proxy to use when deciding where to work :)
The truth is that even at great companies there is plenty of "crud work" to do. It is important work and you are solving real problems but there is little personal/professional growth or learning.
One way to keep that kind of work interesting and to provide growth and learning is to use new/different tools and technology to do it.
I totally agree, that's why I said medium to large. My main point is that you are solving radically different problems with even 20+ engineers than you are with only 5 or so.
Both Facebook and Google (!) aren't really large-scale by the standards of tech companies gone by. Facebook has about 9,000 employees. Google has 50,000. By contrast, Microsoft has 128,000, HP has 300,000, IBM has almost 400,000, and DEC had about 140,000 at its peak.
In the startup world, we (rightly) focus on growth, but it's worth remembering that there are giant companies out there using really, really boring technology. In some segments IBM mainframes, DB2, and COBOL are still the technologies of choice.
To add to what you're saying: there are also government departments and giant companies out there-- the ones that process your taxes, pave the roads, handle your insurance and your banking-- where somebody 20 years ago chose a technology that wasn't boring.
These entities are now having huge problems trying to get off 1980s or 1990s non-boring non-standard technologies that are no longer supported.
There are even places that have had to buy the failing company that built their non-standard database or framework, just to keep it supported....
"Nobody ever got fired for buying IBM" had good reason behind it.
A lot are consultants, at least at HP and IBM. (Microsoft and DEC are much more engineering-heavy; I've heard that Microsoft has the structure of 1 engineer, 1 PM, and 1 tester per team.) Remember that out of Google's headcount, only about 20,000 are engineers. When I was on Search, Bing had more engineers working on it than Google Search.
> By your measure, Facebook is still not a large-scale engineering organization
"9,199 employees as of December 31, 2014" -- that's probably close enough to his metric to call it a large-scale engineering organization.
Of course, there's the real question of why Facebook needs to be a 10k+ engineer organization. For a minute it looked like they'd grow past their MySpace 2.0 roots. That becomes less convincing every day.
Ah nice catch, I misread the quote as 9k engineers. That said, after seeing their new offices, I'm not sure I can retract the general sentiment of my previous post.
My take tends to be not that 'innovation' is bad, but that there are a couple of risks:
- The weaknesses of new tech may not be fully understood. A lot of new tech solves existing problems, while re-surfacing problems that the old tech solved. Everyone thinks it's great until they've used it for a bit longer, and run into those issues.
- New tech runs a higher risk of disappearing/becoming unsupported. If you plan to support your product for a long time, that's a valid risk factor.
For myself, I'm wary of having very new tech as a fundamental underpinning of any piece of work I need to stick around. I'll likely adopt frameworks or database systems cautiously, unless their superiority is overwhelmingly obvious. On the other hand, I'd be a lot more willing to take risks on a simple library.
With a smaller, simpler piece of tech, it's easier to replace if something goes awry, and it's easier to evaluate in its totality prior to taking the risk.
It's not only that, it's also that if you have two languages in your codebase you now need two ways to deploy, two types of application servers, two types of testing frameworks/QA setups etc. If having the two languages means you can create a product only marginally faster/better then it is not worth all of that overhead. As mentioned in the article, there are places where the cost becomes worth it, for instance faceted searching is done in Java via Solr at Etsy. But for the most part fitting your problem into the existing infrastructure is a lot better for the organization than bringing in the perfect technology.
I agree with this. That said, the author of the post seems to knock 'the right tool for the job', but I recently built two scrapers: one that scrapes an API (a one-time thing) and one that scrapes some websites (it will probably be used once a month). The API one runs on PHP and auto-refreshes with a <meta> tag-- boring, but it works.
The one that scrapes websites I did with Node, since some sites are multi-step, and the latency of a single scrape plus the database latency could've turned this into a multi-week run with PHP.
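A rough sketch of why the event loop helped here, with invented URLs and timings: fetching pages one after another costs the sum of all the latencies, while firing the requests concurrently costs roughly the slowest single one.

```javascript
// Sketch: sequential vs concurrent fetching of N slow pages.
// fakeFetch stands in for a real HTTP request; each takes ~200ms.
const fakeFetch = (url) =>
  new Promise((resolve) => setTimeout(() => resolve(`body of ${url}`), 200));

const urls = Array.from({ length: 10 }, (_, i) => `https://example.com/page/${i}`);

async function sequential() {
  const bodies = [];
  for (const url of urls) bodies.push(await fakeFetch(url)); // ~10 * 200ms total
  return bodies;
}

async function concurrent() {
  return Promise.all(urls.map(fakeFetch)); // ~200ms total: all requests in flight at once
}

async function main() {
  let t = Date.now();
  await sequential();
  console.log(`sequential: ~${Date.now() - t}ms`);
  t = Date.now();
  await concurrent();
  console.log(`concurrent: ~${Date.now() - t}ms`);
}
main();
```

The same fan-out is possible in PHP with curl_multi or an async library, but it's the default idiom in Node, which is presumably why it won here.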
Individual humans are smart. Groups of humans are dumb. When you're hiring people that you will personally work with, you can filter for smart. When you have to work with another group of humans, it's safer to assume that they are stupid.
> Individual humans are smart. Groups of humans are dumb.
Actually, you have that exactly wrong[1].
"Behavioural economists and sociologists have gone beyond the anecdotic and systematically studied the issues, and have come up with surprising answers.
Capturing the ‘collective’ wisdom best solves cognitive problems. Four conditions apply. There must be: (a) true diversity of opinions; (b) independence of opinion (so there is no correlation between them); (c) decentralisation of experience; (d) suitable mechanisms of aggregation."
"Crowds" != "Groups". In a crowd, the individuals behave independently; each person makes their own judgment as to the best course of action and pursues it. In a group, the individuals are constrained to come to a collective decision and implement it.
That difference is crucial. Markets function based on the wisdom of crowds; they work because if one person has the right information but everybody else is dumb, the one iconoclast stands to make a lot of money and force out all the dumb people. Statistics function according to the wisdom of crowds; it works because errors contribute little to the mean, while most people, arriving independently at their conclusion, tend to be closer.
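That statistical point can be shown with a toy example (all numbers invented): independent noisy guesses scatter around the truth, but because the errors are uncorrelated they largely cancel in the mean.

```javascript
// Toy "wisdom of crowds": independent guesses of a jar's bean count.
// Each individual guess is noisy, but the errors are uncorrelated, so
// the mean lands much closer to the truth than a typical individual does.
const truth = 1000;
const guesses = [720, 1340, 950, 1180, 610, 1420, 1050, 880, 1270, 790];

const mean = guesses.reduce((a, b) => a + b, 0) / guesses.length;
const meanError = Math.abs(mean - truth);
const avgIndividualError =
  guesses.map((g) => Math.abs(g - truth)).reduce((a, b) => a + b, 0) /
  guesses.length;

console.log(meanError, avgIndividualError); // 21 231
```

The error of the crowd's mean is an order of magnitude smaller than the average individual error-- but only because the guesses were made independently, which is exactly the condition a group deliberating toward consensus destroys.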
Groups have to agree on a single conclusion. Under that constraint, the only conclusion they can reach is one that can be communicated to all members of the group, which is necessarily limited by the ability of the weakest group member to understand it.
Both markets and groups are much better at quantifying power differentials than in assessing information objectively and making useful predictions about the future.
This is why groups tend to be dumb. So much energy goes on hierarchical posturing and social signalling that there's relatively little left over for practical intelligence.
Orgs that can break through this can do astounding things. But the successes tend to be more rooted in the values of science and engineering as processes than in market processes.
Historically, every so often you get an org that works as intelligence amplifier and is more than the sum of individual talents.
But this configuration seems to be unstable, and so far as I know no org has ever made it stick as a permanent feature of a business culture.
Of course I was oversimplifying :) In any case, that study removes many of the reasons that groups of humans make bad decisions - which is unfortunately impossible to do in most real-world contexts.
If you want to be more precise: we often make assumptions about people belonging to a group that is not our own. The safest assumption to make is that all other groups are dumb. Ironically, this likely reinforces the problem: Why is this other group assuming our application doesn't have feature XYZ? Of course it does, because we're good at what we do. But obviously they must not be very bright to make such an assumption...
Groups are a low-pass filter on the abilities of the individuals that compose them. To teach something to a group, you have to communicate it to every member; this communication is naturally bound by the ability to understand of the person who is least familiar (or least enthusiastic) about the particular tech.
You can often cut the time needed for a complex project in half simply by cutting the team in half and telling each group to work on it independently. The problem is that now you have two problems - or rather, two solutions. If you try to integrate them together, you end up reintroducing all the communication hassles and more. If you throw one out, you'll alienate and probably lose all the developers who worked on it. If you bring both to market, you confuse your customers and lose brand equity.
I'll throw another one your way. An organization I worked with had about 5 million lines of COBOL in one system (they had several more, and this one system was only about 15% of their total transactional workload). It used a proprietary pre-relational database that allowed users both to do queries (of a sort) and to do things like read the data at the query result + 1500 bytes.
They tried re-writing pieces in Java, at a cost of tens of millions of dollars-- Java was the new hotness. In addition, they built out a Java hosting environment using expensive, proprietary Unix hardware to handle the same production volume as the mainframe. It was grossly under-utilized, though, because the Java code couldn't do much more than ask the COBOL code, over message queues, what the answer to a question was. More millions of dollars went to keep up licenses and support contracts on essentially idle hardware.
They tried moving it to Windows, using .NET and Micro Focus COBOL. But the problem was they would still be tied to COBOL, even though they (conceptually) had a path to introduce .NET components or to wrap the green-screen pieces in more modern UIs. That was a problem in itself, because all their people knew the green-screen UI so well it was all muscle memory. Several workers complained because the new GUI actually made them slower at their jobs.
They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone. For the most part they were tied to that COBOL code because no one understood everything that it did and there were only a handful of COBOL programmers left in their shop (I think 6) and they were busy making emergency fixes on that + several other millions of lines of code in other systems.
They were, however, looking for an argument to retire COBOL and retire the mainframes. The cheapest solution would have been to stick with COBOL. Hire programmers. Teach them COBOL (because it was painfully difficult to find any new COBOL people and for various reasons they could not off-shore the project). Continue to develop and fix in COBOL (especially before the last remaining COBOL programmers died or retired). If you cleaned up or fixed a module, maybe move it to Java when possible.
The long story short is that the decision to introduce a new technology, even in the face of an ancient, largely proprietary (since it's really about IBM COBOL on mainframes), overpriced solution, can actually lead to a worse outcome. Had they stayed with boring technology and in-sourced more of their COBOL workforce, they might not have felt happy, but they would have been in a much stronger position. Instead they were paying for a mainframe, and a proprietary Unix server farm, and software licenses on both Unix and z/OS.
When I last was there they were buying a new solution from Oracle which was supposed to arrive racked up and ready to go. Several weeks in they essentially said it would take months before the first of the new Oracle servers would be ready for an internal cloud deployment on which to try to re-host some software. I'm not even sure what they think they would be re-hosting but they talked about automatic translation of COBOL to Java.
> They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone.
Can you explain, for people who have never been close to such an environment, how this can happen, and why they still care about upholding requirements they don't know about?
Let's say you have a business process, like if a shipping manifest goes through any one of the following 3 cities, then you need to file form XYZ, unless the shipper is one of the following official government agencies and they've filed forms ABC and DEF. That was the original requirement in 1980. It was documented, put in a series of binders, and placed on a shelf.
1982, Another port is added to the list of special port cities, but only when shipping goods of type JKL or MNO. That change was documented in an inter-office memo and filed away. Except the only place the type-of-goods information exists is a different module-- so even though it pertains to the original business process, the change lives in the module that prints the ship's manifest to (physically) mail it to the insurer.
1989, the original requirements binders are moved to a storage facility.
1992, The memo is also sent to an archive facility. Original manuals have been destroyed because the records retention policy is 10 years.
1994, There's a change in the law and an emergency fix was put in, and the comments were put into the source code.
1995, The source code with the comments is lost, so an older version of the source code is recovered with the just the code change.
And so on and so on
Until 2015. You have 5,000 to 10,000 lines of code that deal with the original requirement. They're split into multiple modules. They reside in a source code base of 5,000,000 lines of code. The people that use your software have a combination of the software + a whole bunch of unwritten rules like: "If it's this country, and this port, and this port of origin - PF10 to get to the override screen and approve the shipment. Add 'As per J. Randal' in the comments."
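To make the accretion concrete, here is a hypothetical sketch of what the 1980 rule plus its 1982 amendment might look like in code after the split across modules-- every city name, agency, and function here is invented for illustration:

```javascript
// Hypothetical reconstruction of the shipping-manifest rule after decades
// of patches. All names and constants are invented.
const SPECIAL_CITIES = ['ROTTERDAM', 'SINGAPORE', 'HOUSTON'];
const EXEMPT_AGENCIES = ['GOV-AGRI', 'GOV-DEFENSE'];

// The original 1980 rule -- lives in the filing module.
function needsFormXYZ(manifest) {
  const viaSpecialCity = manifest.route.some((c) => SPECIAL_CITIES.includes(c));
  const exempt =
    EXEMPT_AGENCIES.includes(manifest.shipper) &&
    manifest.filedForms.includes('ABC') &&
    manifest.filedForms.includes('DEF');
  return viaSpecialCity && !exempt;
}

// The 1982 amendment -- lives in the *printing* module, because that's the
// only place the goods type is available. Nothing in the code links it back
// to the rule above; only the long-gone memo did.
function needsFormXYZAmendment(manifest) {
  return (
    manifest.route.includes('ANTWERP') &&
    ['JKL', 'MNO'].includes(manifest.goodsType)
  );
}

const manifest = {
  route: ['ANTWERP', 'OSLO'],
  shipper: 'ACME-SHIPPING',
  filedForms: [],
  goodsType: 'JKL',
};
console.log(needsFormXYZ(manifest) || needsFormXYZAmendment(manifest)); // true
```

Multiply this pattern by a few hundred rules and a few dozen modules, and "reverse engineering the requirements" means reading all 5 million lines and guessing which conditionals are law, which are workarounds, and which are dead.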
Yes, this "boring = good!" trope is frequently weaponized to shut down people's voices. Happened to me.
One thing I realized is these blogposts are consumerist. They talk about "Python" and "MongoDB". Very little about underlying ideas like "algorithms", "computational paradigms" or "expressive power".
And they have hypersimplified plans about "three innovation tokens". Instead of "risk analysis" or "evaluate tradeoffs".
One company shut me down with such blog posts... while it let devs run amok with an architecture that did n^2 (more?) network calls... where each call transferred one RDBMS row at a time. It dragged down the intelligence of everyone who really knew better; they spent "sprints" trying to find micro-optimizations, knowing exactly that the system was fundamentally ridiculous.
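That failure mode is the classic row-at-a-time chattiness problem. A toy sketch (the table, IDs, and call counter are all invented) of the round-trip difference between fetching row-by-row over the network and issuing one batched query:

```javascript
// Count network round-trips: per-row fetch vs one batched query.
// db.query is a stand-in; every call represents one network round-trip.
let roundTrips = 0;
const db = {
  query: (sql) => { roundTrips++; return [{}]; }, // pretend result rows
};

const ids = Array.from({ length: 100 }, (_, i) => i);

// Naive: one call per row. That's O(n) round-trips here, and it becomes
// O(n^2) once each handler re-fetches related rows the same way.
roundTrips = 0;
ids.forEach((id) => db.query(`SELECT * FROM orders WHERE id = ${id}`));
console.log(roundTrips); // 100

// Batched: one call moves the whole working set.
roundTrips = 0;
db.query(`SELECT * FROM orders WHERE id IN (${ids.join(',')})`);
console.log(roundTrips); // 1
```

No language choice, boring or scary, fixes this; it's a data-access design decision, which is the kind of tradeoff the "innovation tokens" framing never mentions.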
So I spent a weekend reimplementing it in the Scary Fun Language. Because it was my weekend dammit, and Embracing Boredom damaged my brain too much. Scary Fun was the only way to start mending it. And it succeeded.
So of course the first order of business was to rewrite it in the Embrace Boredom language.
I think the point the parent was making is that using terminology like "scary" or "boring" for technological choices is fundamentally an abdication of responsibility. It's cargo cult programming. Good programming, intentional engineering, is one where people are cognizant of the risks and other tradeoffs of different languages, frameworks, designs, etc.
It shouldn't be surprising that an organization that opted out of a discussion on technology stacks would also ultimately opt out of discussing algorithmic complexity; it speaks to a lack of sophistication and maturity at the institutional level.
I think it's more nuanced; certain technologies afford certain designs. There really are differences in languages, and those differences expose users to different misbehaviors -- or the dual, to different optimizations.
You're actually demonstrating the point of the blog post, which is that the Boring RDBMS had a well-understood failure mode that "everyone who really knew better" would've recognised.