> Big-O is just a formal word for how much time you’re going to take, a way to figure out if you’re likely to take more time than available
This is a common misunderstanding about big-O. It's not about how much time you're gonna take; it's a measure of how the running time grows as the data grows.
"Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity."
From: https://en.wikipedia.org/wiki/Big_O_notation
I generally don't like wikipedia, but they got this one right.
The quote from Wikipedia is factually correct, but it seems you misunderstood my comment.
> It’s not about how much time you’re gonna take
It most definitely is about predicting running time, or memory consumption. Big-O is literally a straightforward way to write the formula for best/avg/worst case time or memory of a program, modulo a constant. The second part of your sentence contradicts the first part.
I’m not sure I buy that there’s any common misconception about Big-O either; I’ve never seen much evidence of that. Anyone who’s learned Big-O in school, or has used it at work, or read past the first sentence on Wikipedia, or commented about it on HN, knows what Big-O means and that for small n, an O(n) algorithm can be slower than an O(n^2) one. It’s just not that tricky.
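For instance, a quick sketch (Python, with a constant factor I invented purely for illustration) of an O(n) routine losing to an O(n^2) routine at small n:

```python
import timeit

def linear_heavy(xs):
    # O(n), but with a big constant factor per element
    # (stand-in for per-element overhead like hashing or allocation)
    total = 0
    for x in xs:
        for _ in range(1000):  # constant work, independent of n
            total += x
    return total

def quadratic_cheap(xs):
    # O(n^2), but each step is very cheap
    total = 0
    for a in xs:
        for b in xs:
            total += a * b
    return total

small = list(range(20))   # n = 20: 20,000 "heavy" steps vs 400 cheap ones
print("linear   :", timeit.timeit(lambda: linear_heavy(small), number=200))
print("quadratic:", timeit.timeit(lambda: quadratic_cheap(small), number=200))
# At n = 20 the quadratic version wins; crank n up to a few thousand and
# the ranking flips, which is all the O(...) was ever promising.
```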
I like Wikipedia a lot, and I agree this one’s right. Why do you dislike it, and why is that relevant?
I'm not the person you're replying to, but it seems to me that there may be an example of a common misconception about Big-O in your comment here:
> Big-O is literally a straightforward way to write the formula [describing] best/avg/worst case time or memory of a program
Technically, Big-O notation should only be used to provide an upper bound on the best/avg/worst case[0]. And typically people only use it to discuss the worst case or average case, just because talking about the upper bound on an algorithm when its inputs are restricted to its best possible inputs is typically much less useful.
But providing an upper bound is not quite "writing the formula" (giving a complete picture), because you haven't specified the lower bound (Big/Little-Omega) - very loosely speaking, if you only ever talk in terms of Big/Little[1]-O, you've only provided half the picture. Of course to be fair, Big-O of an algorithm is the side of the picture that is way more likely to bite you in production!
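A toy worked example of the "half the picture" point, with numbers I made up:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy example: upper bounds alone are not tight.
Let $f(n) = 3n^2 + 10n$. All of these are true, increasingly loose, upper bounds:
\[
  f(n) \in O(n^2), \qquad f(n) \in O(n^3), \qquad f(n) \in O(2^n).
\]
The picture is only complete once a matching lower bound is stated as well:
\[
  f(n) \in O(n^2) \text{ and } f(n) \in \Omega(n^2)
  \quad\Longleftrightarrow\quad f(n) \in \Theta(n^2).
\]
\end{document}
```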
Disclaimer: not a mathematician, so salt to taste :)
The “worst case” is the part referring to the upper bound. Big-O is referring to the “order” of the run time, meaning the largest term in the formula for large n.
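Concretely (an arbitrary formula I picked just to illustrate "largest term"):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Example of reading off the order: keep the dominant term, drop the rest.
Suppose the measured running time is
\[
  T(n) = 5n^2 + 100n + 64.
\]
For large $n$ the $5n^2$ term dominates, since
\[
  \lim_{n \to \infty} \frac{T(n)}{n^2} = 5,
\]
so we write $T(n) = O(n^2)$: the lower-order terms and the constant factor $5$
are folded into the notation.
\end{document}
```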
Again, everyone knows this if they know Big-O at all. I am intentionally not being picky and pedantic with my words, because the context of this thread was people complaining about unnecessary formality and unhelpful over-concern about academic correctness rather than practicality. There is no widespread misconception, but there are people who like to wax philosophical and try to demonstrate their superior knowledge...
My side intention was to detail why @rafiki6 might have been incorrect without knowing it, since they claimed correctness and complained about downvotes.
Not entirely sure why I am getting downvoted here. It's factually correct and relevant to the discussion. I'm starting to think someone is actively trying to downvote my comments.