Where M is the number of items per node and N is the total number of items, each node you descend skips a proportion (M-1)/M of the remaining data, as long as N is large enough to require multiple levels of nesting.
So for practical sizes of M, as in a few cache lines, and a sufficiently large dataset of size N, a binary tree eliminates 1/2 of the data at each node it checks, while a B-tree with M set to 8 skips 7/8ths of the data at each node. Counting comparisons they are both around log2(N), but counting node descents this hypothetical B-tree only needs log8(N).
So yeah, if N and M are close then they are similar, but those aren't interesting cases. Datasets that small are fast to iterate even if unsorted, and large unwieldy values of M are silly and impractical. Once you have thousands or millions of items in N, the savings of eliminating down-tree data from being checked at all becomes real. Or put another way, binary trees always average about log2(N) node checks, while B-trees do roughly log2(N) comparisons spread across only logM(N) node checks, which should be cheaper in most cases and so close in the edge cases as to not matter.
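To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the original comment) that counts node loads versus key comparisons for N = 1,000,000, under the idealized assumption of perfectly balanced trees and full nodes:

```python
import math

def binary_tree_costs(n):
    levels = math.ceil(math.log2(n + 1))    # one node visited per level
    return levels, levels                   # (node loads, comparisons)

def btree_costs(n, m):
    loads = math.ceil(math.log(n + 1, m))   # ~log_M(N) node loads
    comps = loads * math.ceil(math.log2(m)) # binary search inside each node
    return loads, comps

n = 1_000_000
print(binary_tree_costs(n))  # ~20 loads, ~20 comparisons
print(btree_costs(n, 8))     # ~7 loads,  ~21 comparisons
```

Roughly the same number of comparisons either way, but the B-tree touches about a third as many nodes.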
But again, I maintain that counting comparisons is silly. On any modern hardware (the past 20 years), load time dominates comparison time and B-trees will simply have fewer loads. Memory complexity is the "interesting" part of this problem unless you are on a C64 or at some zany future company with hyper-fast RAM.
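A crude back-of-envelope version of that point, using assumed (not measured) latencies of roughly 100 ns per out-of-cache node load and 1 ns per in-cache comparison:

```python
LOAD_NS = 100  # assumed cost of pulling a node that isn't in cache
CMP_NS = 1     # assumed cost of one comparison on already-loaded data

def est_ns(loads, comparisons):
    return loads * LOAD_NS + comparisons * CMP_NS

print(est_ns(20, 20))  # binary tree, N ~ 1e6: ~2020 ns
print(est_ns(7, 21))   # B-tree, M = 8:        ~721 ns
```

The extra in-node comparisons the B-tree does are noise next to the loads it avoids.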