I use lots of libraries and it works fine. Popular libraries are usually written by competent developers; sometimes the documentation will even tell you the time complexity of a function. Most of the time library code won't be a problem.
It really seems to me like you don't understand the advice about premature optimization. This is it, this is what Knuth is talking about. Profiling everything, optimizing everything. That's premature optimization.
Mature optimization is when you have an application and you see that part of it is slow. You investigate, for example with a profiler. Maybe you write benchmarks. Then you try a different approach and benchmark that as well (either with an actual benchmark library or just by running the code locally) and see if the new solution is faster.
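That "try a different approach and benchmark it" step can be as simple as a few lines with Python's standard `timeit` module. A minimal sketch (the two join functions are hypothetical stand-ins for an existing slow path and a candidate replacement):

```python
import timeit

def join_with_concat(parts):
    # suspected slow version: repeated string concatenation
    out = ""
    for p in parts:
        out += p
    return out

def join_builtin(parts):
    # candidate replacement using the built-in join
    return "".join(parts)

parts = ["x"] * 10_000
t_old = timeit.timeit(lambda: join_with_concat(parts), number=200)
t_new = timeit.timeit(lambda: join_builtin(parts), number=200)
print(f"concat: {t_old:.4f}s  join: {t_new:.4f}s")
```

Running the code locally like this is often enough to decide; a benchmark library just adds warmup and statistics on top.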
I like to refer to what I do as macro optimization. I don't spend significant time worrying about performance, I just write code and kind of just keep complexity, IO and such in mind, avoiding badly scaling algorithms if I can. I don't care if the badly scaling algorithm might be slightly better than the one that scales, because that difference is nearly always negligible.
Like you said in your other comment, sometimes the constant factor can make the O(n^2) algorithm faster than the O(n) one for small inputs. But that difference is nothing, it's gonna be like one runs in 5 nanoseconds and the other runs in 10. So you saved 5 nanoseconds and nobody noticed or cared. Then the input grows and now your badly scaling algorithm runs in 2 hours whereas the scaling one runs in a few milliseconds. That's why you use the best scaling algorithm by default. Even if you think the input will never grow you might be wrong, and the consequence of being wrong is bad. So you have a high risk and low reward.
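You can see that asymmetry directly with a toy duplicate check (my own example, not from the thread): at n=10 both versions are effectively instant, but the quadratic one falls off a cliff as n grows while the set-based one stays flat.

```python
import timeit

def has_duplicates_quadratic(items):
    # O(n^2): compare every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): hash-set membership test
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

small = list(range(10))      # both versions: microseconds
big = list(range(5_000))     # quadratic version: millions of comparisons
for fn in (has_duplicates_quadratic, has_duplicates_linear):
    t_small = timeit.timeit(lambda: fn(small), number=1000)
    t_big = timeit.timeit(lambda: fn(big), number=1)
    print(f"{fn.__name__}: n=10 -> {t_small:.4f}s total, n=5000 -> {t_big:.4f}s")
```

The "saved 5 nanoseconds" case is the small list; the "2 hours" case is what the quadratic loop turns into once n gets large enough.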
> It really seems to me like you don't understand the advice about premature optimization. This is it, this is what Knuth is talking about. Profiling everything, optimizing everything. That's premature optimization
You greatly misunderstand what I'm saying:
1) profile your code
2) optimize what needs to be optimized
3) there is no step 3
Why did you suddenly assume I said you should optimize everything? We don't have infinite time on our hands.
Because one does not just "profile their code". Code branches; different parts do different things. You need to run the code with sensible input, and with multiple different inputs to hit the different branches, etc.
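That's the part people gloss over: a single profiling run only covers the branches that one input happens to hit. A sketch with the standard `cProfile` module (the `process` function and its inputs are hypothetical, just to show two branches with very different costs):

```python
import cProfile
import io
import pstats

def expensive_routing(order):
    # stands in for a heavy code path
    return sum(i * i for i in range(50_000))

def cheap_routing(order):
    # stands in for a trivial code path
    return len(order)

def process(order):
    # branches: only inputs with "express" set hit the expensive path
    if order.get("express"):
        return expensive_routing(order)
    return cheap_routing(order)

# profile with several representative inputs so both branches actually run
profiler = cProfile.Profile()
profiler.enable()
for sample in [{"express": True}, {"express": False}, {"express": True, "rush": 1}]:
    process(sample)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Profile only with `{"express": False}`-style inputs and the expensive branch never shows up in the stats at all, which is why the choice of inputs matters as much as the profiler itself.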
If you have a good workflow that includes profiling your code, that's cool; it's probably a good way to improve the performance of your code. But personally I think it's enough to test the code and make sure it works and is reasonably fast. I'm not going to spend time eliminating nanoseconds and low-single-digit milliseconds.
If something actually keeps users waiting I'll look into it. As long as it's near instant it's good enough for me.