Point is, most normal computer usage absolutely should NOT require a 10TB SSD and 256GB of RAM. It didn't require them to provide the same or more functionality a few years ago, so why does it suddenly require them these days?
Because a web-app is the only way you can monetize a desktop app like functionality in 2019.
Linux desktop toolkits and the dev environment utterly suck.
You'll have to develop 3 different codebases for Win, Mac and Linux.
Worse still, you'll have to reinvent app-update technology if you go with Qt, or you'll be tied into a different updating technology and still have to deal with piracy problems.
Probably some combination of things that break principle of least surprise along with not needing to think about that thing very frequently. A good IDE and good coding standards can smooth things over.
I'm ok with regex too, but if you asked me to do something with look-behind I'd have to experiment a little to figure out how it works again.
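In case anyone else needs the refresher: a lookbehind `(?<=...)` matches only when the pattern is preceded by the given text, without consuming that text. A tiny made-up example in Python's `re` syntax:

```python
import re

# (?<=\$) asserts "preceded by a dollar sign" but leaves the $ out
# of the match itself. The price-extraction scenario is invented.
text = "Total: $42.50, tax: $3.40, note 7.00"
prices = re.findall(r"(?<=\$)\d+\.\d{2}", text)
print(prices)  # ['42.50', '3.40'] -- the bare 7.00 is skipped
```

Note that Python requires fixed-width lookbehind patterns, which is exactly the sort of detail you end up re-experimenting with.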
Re: performant code, the developer writes the code once, it is read many times, and it is run many many many times. Several orders of magnitude more.
Yes, making code that is easy to read and less bug-prone is good. But at the end of the day the customers are going to be running your code millions of times a day, and if you need to make the code slightly harder to read to improve performance, then by all means do so.
If your code is only going to be run once and must be reliable, then you can make a different trade-off.
What is premature and what is not? Is choosing data structures fit for the job a premature optimization? I don't think so. But I've seen people argue against it.
First you make it work, then you benchmark it. Then you see if that particular part is a bottleneck and whether there is a business case for optimisation.
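The "benchmark it" step can be as small as a `timeit` comparison. A sketch with two hypothetical versions of the same function (names and workload invented for illustration):

```python
import timeit

# Two implementations of the same thing; measure before choosing.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

# Time each one on the same workload before deciding whether
# any "optimisation" is actually worth keeping.
for fn in (sum_squares_loop, sum_squares_builtin):
    t = timeit.timeit(lambda: fn(10_000), number=100)
    print(f"{fn.__name__}: {t:.4f}s")
```

If the numbers are within noise of each other, keep whichever reads better.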
I know it's fun and exciting to optimise a function to perform at maximum efficiency, but people tend to forget that someone has to read that piece of code in the future and understand it.
All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever. =)
Spectrum of performance:
LO |---*-------*--------*------------*-------| HI
       ^       ^        ^            ^
       |       |        |            |_root of all evil if premature
       |       |        |_you should be here
       |       |_you can be here if you don't do stupid things
       |_you are here
--
> All the fancy tricks might've given a 2% increase in performance, but made it 200% less understandable by anyone except the codegolfing optimizer trying to be clever.
This applies to hairy, last-ditch-effort optimizations. The kind your average programmer isn't even capable of doing. It's nothing like the optimizations most real-world code needs.
It's why I consider the "premature optimization" adage to be actively harmful, as it legitimizes lack of care and good craftsmanship.
From what I've seen, a lot of code can be trivially optimized with no net loss to readability (and sometimes a gain!), by simply removing dumb things, mostly around data structures. Fixes involve using vectors instead of lists or hash tables, depending on size and access and add/delete patterns. Using reference equality checks instead of string comparisons. Not recalculating the same value all over again inside a loop.
The kind of things above are ones that bleed performance all over your application, for no good reasons. I consider it a difference between a newbie and a decent programmer - whether or not they internalized how to code without stupid performance mistakes, so that the code they write is by default both readable and reasonably performant.
Then you go to actual optimizations, the kind that benefit from a benchmark - not because doing them elsewhere is wrong in principle, but because they take time and noticeably alter code structure. Using better algorithm, and/or using a better data structure, both come here. They don't have to impact readability, as long as you isolate them from the rest of the system behind a simple interface.
(Like, e.g., one day I achieved a 100x performance boost in an application component by replacing a school-level Dijkstra implementation with a two-step A*-based algorithm and a data structure designed specifically for the problem being solved, and easily managed to wrap it in an even simpler interface than the original. Since the component was user-facing, it pretty much single-handedly changed the perception of the application from sluggish to snappy. The speedup itself probably saved many people-hours for users, who were a captive audience anyway (this was an internal tool).)
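The original code isn't shown, so purely as a rough illustration of the "better algorithm behind a simple interface" point: a minimal textbook A* on a 4-connected grid using `heapq`. The grid representation and the `astar` name are my assumptions, not the component described above.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid; cells with 1 are walls.
    Returns a list of (row, col) from start to goal, or None."""
    def h(p):  # Manhattan distance: admissible on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # (f, g, position)
    best = {start: 0}
    came = {}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                 # walk back to reconstruct
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        if g > best.get(cur, float("inf")):
            continue                     # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    came[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Callers only see "grid and two points in, path out", which is the whole point: the algorithm can be swapped out again later without touching the rest of the system.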
Only then do you get to the "premature optimization is the root of all evil" part, which is hairy tricks and extreme levels of micromanagement. Making sure you don't cons anything, or at least no more than absolutely necessary. Counting cycles, exploiting cache-friendly data layouts, etc. This can have such a big impact on a system and surrounding code that it really does benefit from not being done until absolutely needed (except if you know you'll need it from the start, e.g. in some video games).
>changed the perception of the application from sluggish to snappy
.. so, you measured the performance (sluggish), saw the need for improvement, and improved it (snappy). That is not premature optimization. It would be premature optimization if it happened without measurement and without need.
I agree with your examples above. If you can choose the right data structures/algorithms/patterns without sacrificing readability or development speed, by all means do so. But don't spend hours improving something which doesn't need improvement.
That's true. What I personally advocate is: first, learn enough about programming and your language to not do stupid things, so your code is already somewhat performant by default, at zero cost to readability. Second, when you're designing, think a little bit about performance and, given two designs of similar complexity but different performance characteristics, pick the more performant one. Third, when refactoring, if you see something stupid performance-wise, fix it too. All these things cost you little time and make your application overall snappier and less likely to develop performance problems in the future.
Beyond that, measure before you optimize, as such interventions require a larger amount of effort, so it makes sense to do them in order of highest impact first.
(Also note that "performance", while usually synonymous to "execution speed", is really about overall resource management. It's worth keeping memory in mind too, in particular, and power usage if your application could be used on portable devices. Which is really most webapps nowadays.)
It should just be turned off for large channels. Pop an error message and direct users who need it to a support article; if they follow up with real support people, then see how many people complain.
I ran into this earlier this week. It turned out a macro that was implemented as a wrapper function in debug linked differently in ship. I don't see this first-hand very often, but I'd be surprised if this kind of thing isn't a common cause of ship/debug differences.
If you made an "Office Space" kind of game, and it was something you could actually call a game, I wouldn't be surprised to see a community grow up around it as people discussed path prioritization, how to speed-run the TPS reports level, and how the recent nerf to casual-dress Fridays has impacted the meta.
I use this - it's way more responsive and faster than the current gmail UI.
My only gripe with the basic html version is that the back button is broken when you're trying to go back to search results after clicking on an email (you need to click on "Go back to search results").
Thanks. I have Firefox 60.3.0esr (64-bit) in Debian jessie. So maybe it's the age, or that I block WebGL. Or maybe spoofing referrer. Or blocking local storage. Or something else about NoScript.
Once your memory usage climbs and your OS starts paging/swapping things to disk, seemingly trivial operations that page in new code will take forever.