It's not just the algorithm, but the frame of mind to consider an optimisation at all. I guess it's a rare sight in the age of Electron apps and cloud startups.
Electron apps aren't slow, bloated, and awkward because the new junior SWE on the team used an O(n^2) tree-walking algorithm for the app's search feature - they're like that because it's inherent in using a general-purpose web-browser engine for your desktop GUI.
Micro-optimizing application code by swapping in different algorithms is completely detached from the big engineering choices made at the very start of a project, when the application's substrate and platform are chosen - and those decisions are made not with a view towards computational efficiency, but primarily towards developer productivity. Thanks to Electron, someone who grew up making websites as a teenager, with little to no exposure to the horrendously unproductive and beginner-hostile world of MFC, GTK, and Qt, can make an engaging and appropriate cross-platform desktop UI in under a day.
-----
If we want to see the Electron "problem" fixed, then the best solution is for the Electron team to figure out how to cut down their build of Chromium, removing all of the features unnecessary for trusted desktop applications (no, we don't need process isolation!). I'd love to see a build of Electron+Chromium with all of the JavaScript removed, so that it's a bare-bones HTML+CSS layout and rendering system, wired up to some OOP application binary (be it Java, .NET, C/C++, etc.) which manipulates the DOM - I don't see why that should need more than a few dozen MB of RAM or run in more than a single process.
People in these arguments always talk about the algorithms, but I think that entirely misses the point. It's rarely about the algorithms themselves; rather, it's about the data structures.
One is obviously tied to the other, but what I mean is that many slow apps are slow because people simply used the wrong data structure. Sometimes it's as simple and silly as using a list and constantly iterating over it instead of using a dictionary/KV-map. I think the idea behind having people know about "algorithms" is to get them into a state of mind where they will automatically pick more appropriate data structures. I don't remember how to implement RB-trees or AVL-trees, nor do I really know their pros and cons against each other at this moment (I have a very faint idea), but I know they exist, and I certainly have a better idea of when to use a tree versus a list versus whatever.
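As a minimal sketch of the list-vs-dictionary point (the records and function names here are hypothetical, just for illustration):

```python
# Hypothetical user-lookup routine: same task, two data structures.
# With a list, every lookup scans the records: O(n) per query.
users_list = [("alice", 1), ("bob", 2), ("carol", 3)]

def find_in_list(name):
    for user, uid in users_list:   # linear scan on every call
        if user == name:
            return uid
    return None

# With a dict, each lookup is a single hash probe: O(1) on average.
users_dict = dict(users_list)

def find_in_dict(name):
    return users_dict.get(name)    # constant-time on average

assert find_in_list("carol") == find_in_dict("carol") == 3
```

Inside a loop over q queries the difference is O(n*q) versus O(q), which is exactly the kind of accidental quadratic behaviour that makes apps feel slow.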
Would I fail these interviews? Probably, unless I studied a bit. But do I think the concepts they ask about are pointless? No, not at all. I've looked at my fair share of legacy code bases built by subpar developers, and the one thing that always pops up is poorly chosen data structures. Once we fix that, usually everything else falls into place automatically.
EDIT: To be clear, what I mean is that in most cases, just picking the right data structure - among the most basic and elementary data structures the language gives you - is more than enough. Only in rare cases does one then have to go beyond that and carefully engineer a more precise algorithm. Data structures are way more than half the problem in nearly every application.
I think you hit the nail on the head. Data structures are so much more important than throwing more threads at the problem. Someone could write beautiful lock-free code but choose a ring buffer (lock-free queue) instead of a concurrent set, and it's all for naught.
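A tiny sketch of why the queue-vs-set choice matters (single-threaded and hypothetical for clarity; the concurrency machinery is beside the point):

```python
from collections import deque

# If the workload needs set semantics - "each pending item at most once" -
# then a queue, however cleverly made lock-free, is the wrong abstraction.
pending_queue = deque()
pending_set = set()

for item in ["job-a", "job-b", "job-a", "job-a"]:
    pending_queue.append(item)   # a queue happily stores duplicates
    pending_set.add(item)        # a set collapses them

assert len(pending_queue) == 4   # "job-a" would be processed three times
assert len(pending_set) == 2     # each distinct job processed once
```

No amount of lock-free elegance in the queue's implementation fixes the duplicate work; the abstraction itself was wrong.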
> Sometimes it's as simple and silly as using a list and constantly iterating over it instead of using a dictionary/KV-map.
It's pretty easy to walk away from an algorithms course with the very basic intuitive understanding that "dictionaries trump all other data structures." Certainly that misses out on all of the cases when hash maps are a liability, e.g. when dealing with sequential data, but most questions end up being pro-dictionaries anyway.
Hash maps, balanced search trees, skip lists, and others are all wildly different approaches to implementing a dictionary - each with its own Big-O characteristics.
If one is implementing low-level code, one has the freedom to pick good data structures. But when writing GUI apps there is simply no such freedom, as the data structures are already dictated by the libraries.
For example, a GUI framework typically uses the notion of a widget tree that is fundamental to the library's design. But the end-user UI does not look like an arbitrary tree with deep nesting. It is easy to see that using a tree for this leads to extreme denormalization of data. Normalizing it into a relational form would remove a lot of duplication, along with the (often hidden) code that synchronizes the duplicated state. But try that with a popular framework - it is not doable in practice. So one sticks with the tree architecture and its inefficiencies.
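A minimal sketch of the duplication being described (the widget names and shape of the state are hypothetical, not any real framework's API):

```python
# Denormalized, tree-style state: the same value lives in every widget
# that displays it, and all copies must be kept in sync by hand.
tree = {
    "header":  {"label": "Alice"},
    "sidebar": {"label": "Alice"},
    "footer":  {"label": "Alice"},
}

def rename_denormalized(new_name):
    for widget in tree.values():       # must touch every duplicate
        widget["label"] = new_name

# Normalized (relational) form: one table of facts, widgets hold only keys.
users = {42: {"name": "Alice"}}
widgets = {"header": 42, "sidebar": 42, "footer": 42}

def rename_normalized(new_name):
    users[42]["name"] = new_name       # single update, no sync code

def label(widget_id):
    return users[widgets[widget_id]]["name"]

rename_normalized("Bob")
assert label("header") == label("footer") == "Bob"
```

In the normalized form the synchronization code simply disappears, which is the point the comment is making about what widget-tree frameworks prevent you from doing.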
Chromium supports a single-process switch, at least for development of Chromium itself. While it does reduce memory consumption, it does not help much. All data structures in Chromium are tailored for the multi-process case with no memory sharing, and replacing processes with threads does not change that.
As for removing the V8 JS engine from Blink, I guess it is possible. But again, Blink is tailored for access from JS, and the layout and rendering code is huge, so one does not save much.
> Electron apps aren't slow, bloated, and awkward because the new junior SWE on the team used an O(n^2) tree-walking algorithm for the app's search feature - they're like that because it's inherent in using a general-purpose web-browser engine for your desktop GUI.
Why not both? If we're going to implement a desktop GUI via a general-purpose web browser, it should at least be as fast as a generic webpage.