Since at least the 70s, people have been trying to "componentise" software the way electronics is componentised: rather than assembling something out of a pile of transistors, build integrated circuits instead. The intent is to reduce cost, complexity and risk.
This has never yet quite worked out in software. Object-orientation was part of the resulting research effort, as are UNIX pipelines, COM components, microkernels and microservices. When it goes wrong you get "DLL Hell" or the "FactoryFactoryFactory" pattern.
It looks like the JavaScript world has forgotten about integration and instead decided to do the equivalent of assembling everything out of discrete transistors every time. The assembly process is automated, so it appears costless, until something goes wrong.
But really this is the fault of the closed source browser manufacturers, who prefer to attempt lockin over and over again through incompatible features rather than converge on common improvements.
I disagree. I think it has worked out quite well. Nowadays nobody has to write basic data structures or algorithms themselves. Unfortunately, the hard part of building software that is useful to today's businesses is not sorting lists or storing dictionaries.
But remember that things like computing a KWIC index used to be real problems back in the day, requiring serious programmer work. They have become trivial thanks to better libraries and better computers.
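To make the point concrete, here is a minimal KWIC sketch in Python; the function name and stopword list are just illustrative, but it shows how much of the once-hard work (circular shifts, sorting) the standard library now absorbs:

    # Minimal KWIC (Key Word In Context) index: the problem Parnas used
    # as a serious modularisation exercise in his 1972 paper.
    def kwic(titles, stopwords=frozenset({"a", "an", "the", "of"})):
        """Return all circular shifts of each title, sorted alphabetically."""
        index = []
        for title in titles:
            words = title.lower().split()
            for i, word in enumerate(words):
                if word not in stopwords:
                    # Rotate the title so the keyword leads.
                    index.append(" ".join(words[i:] + words[:i]))
        return sorted(index)  # the built-in sort does the "hard" part

    for line in kwic(["The Design of Everyday Things", "Code Complete"]):
        print(line)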
> But really this is the fault of the closed source browser manufacturers, who prefer to attempt lockin over and over again through incompatible features rather than converge on common improvements.
There's currently only one major browser engine that is still closed source, EdgeHTML (and with the current trend of open-sourcing things at Microsoft, that might change very soon).
Plus, the standards bodies were created to prevent exactly that. Since the WHATWG ended the significant stagnation of the mid-2000s, we've been getting amazing progress.
Steve McConnell's classic book "Code Complete: A Practical Handbook of Software Construction" references some ~1980s studies of code quality. IIRC, defects per KLOC was inversely correlated with function length, leveling off around 200–400 LOC per function. Software composed of many micro-functions (or, in the npm case, micro-libraries) is more difficult to grok because there is more context "off screen" to keep in your head.
An unrelated Cisco study of code reviews found that 200–400 LOC per hour is the limit of what can be effectively reviewed. Applying the findings of these studies suggests that neither functions nor patch/pull-request diffs should exceed roughly 200–400 LOC. FWIW, I have worked on commercial software that had functions many thousands of lines long! :)