> I still remember the words of my database optimization lecturer, who said that in his experience optimizations below 1 OOM aren't worth it and most "good ones" are 3+
Like everything, it depends. Is the query gating a lot of other things, especially things that can be done in parallel? Shaving 10 ms off might very well be meaningful. Is it a large OLAP query, and the owning team has SLAs that depend on it? Going from 60 --> 55 minutes might actually matter.
The two biggest performance-related issues with RDBMS that I deal with, aside from indexing choices, are over-selection (why on earth do ORMs default to SELECT * ?), and poor JOIN strategies. Admittedly the latter is often a result of poor schema design, but for example, the existence of semi/anti-joins seems to be uncommon knowledge.
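To make the semi/anti-join point concrete, here's a minimal sketch against a hypothetical customers/orders schema (all table and column names invented for illustration):

    -- Semi-join: customers with at least one order. EXISTS lets the planner
    -- stop probing orders after the first match, and never duplicates rows.
    SELECT c.id, c.name
    FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

    -- Anti-join: customers with no orders.
    SELECT c.id, c.name
    FROM customers c
    WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

    -- The common workaround, JOIN + DISTINCT, first multiplies rows per
    -- matching order and then deduplicates, which is usually more work.
    SELECT DISTINCT c.id, c.name
    FROM customers c
    JOIN orders o ON o.customer_id = c.id;

Postgres, for example, will plan the EXISTS/NOT EXISTS forms as semi/anti-join nodes directly, whereas the DISTINCT variant forces a dedup step over the inflated intermediate result.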
If SLAs are involved then I’d argue it’s not about optimization but about business goals, which unsurprisingly take precedence.
But there is another, very similar case: threshold passing (or, as I like to call it, waterfalling). Small inefficiencies add up, and at some point the accumulated slowdowns reach critical mass, some significant system crosses its threshold, and everything downstream breaks.
When a system was designed by competent engineers, huge optimizations aren't easy, so it's about shaving a couple of millis here and a couple there. But, as in the first case, I'd categorize it as a performance failure.
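To see why the threshold is so sharp, a toy queueing model helps. For a textbook M/M/1 queue (purely illustrative numbers), the mean time in system is

    W = \frac{1}{\mu - \lambda}

With capacity μ = 100 req/s and load λ = 90 req/s, W = 100 ms. Let accumulated small inefficiencies shave 5% off μ: W doubles to 200 ms. Shave 10% and λ ≥ μ, so the queue grows without bound; nothing about the workload changed, the system just crossed the threshold.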