Tail-recursions are the exact equivalent of loops.
We've lost sight of what "recursion" means. There's a difference between recursive functions and recursive algorithms: some algorithms need a stack by definition, and plenty of such problems can't be optimized to run in constant space. Whether that stack is the call stack or a manually managed stack (inside a loop or a tail recursion) doesn't matter; what matters is that the space requirements are definitely not constant.
It's nice that GCC can optimize that Fibonacci sample. However, you're not getting a guarantee that it can optimize every function that is optimizable. I'm also pretty sure that the compiler is extremely limited in what it can do here. And that is dangerous, as running in constant space is often a matter of correctness, not one of optimization. I would rather have the compiler raise an error when it sees non-tail recursive calls than try to be smart some of the time.
Scala handles this in quite a nice way: the compiler will optimize tail-recursive functions without needing any special effort, but if you want to ensure a particular function is tail-recursive as a matter of correctness, you annotate it with @tailrec and the compiler will error out if it can't optimize that function.
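A minimal sketch of what that looks like in Scala (function names here are illustrative, not from the original comment):

```scala
import scala.annotation.tailrec

object TailDemo {
  // Compiles: the recursive call is in tail position, so the compiler
  // rewrites it into a loop and @tailrec is satisfied.
  @tailrec
  def sum(n: Int, acc: Long = 0L): Long =
    if (n == 0) acc else sum(n - 1, acc + n)

  // Would NOT compile if annotated with @tailrec: the recursive call is
  // wrapped in a multiplication, so it is not in tail position.
  // @tailrec   // uncommenting this line turns the issue into a compile error
  def fact(n: Int): Long =
    if (n <= 1) 1L else n * fact(n - 1)
}
```

The annotation changes nothing about code generation; it only turns "the optimizer silently didn't fire" into a compile-time error, which is exactly the guarantee the parent comment is asking for.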
A recursive algorithm is one that computes a result by way of the same algorithm applied to a smaller input. (Smaller, or else the algorithm will diverge.)
The distinction between "stack" and "parameter" is a minor one. A simple factorial function has an ever-growing (accumulating) parameter.
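To illustrate the point about the accumulating parameter, here is a sketch of a tail-recursive factorial (names are illustrative): the stack stays constant, but the accumulator itself grows without bound, so total space is not constant.

```scala
import scala.annotation.tailrec

// Tail-recursive factorial: O(1) stack frames, but the BigInt accumulator
// grows with n — the "stack" has simply moved into a parameter.
@tailrec
def factorial(n: Int, acc: BigInt = 1): BigInt =
  if (n <= 1) acc else factorial(n - 1, acc * n)
```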
Tail-recursive algorithms can be translated into loop algorithms. They are still recursive.
f 1 = [ whatever ]
f x | x `mod` 2 == 0 = f (x `div` 2)
    | otherwise      = f (3*x - 1)
is perfectly well defined on positive integers `x`, although many recursive calls actually apply `f` to larger values.
(To be fair, you didn't actually say what 'smaller' meant. I think it's one of the standard term-rewriting theorems that, if given the freedom to define 'smaller' appropriately, then any total recursive function does make only smaller recursive calls.)
> I'm also pretty sure that the compiler is extremely limited in what it can do here. And that is dangerous, as running in constant space is often a matter of correctness, not one of optimization. I would rather have the compiler raise an error when it sees non-tail recursive calls than try to be smart some of the time.
I've sometimes wondered why OCaml has "let" and "let rec", but not "let tail rec". It seems that something like this would be good for documentation and avoiding errors.
> Tail-recursions are the exact equivalent of loops.
Not necessarily. In the common case of a procedure calling itself, a loop does wind up being equivalent. OTOH with mutually recursive procedures (two or more procedures which call each other), there is no straightforward way to convert the tail calls into a loop.
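A sketch of the mutual-recursion case (the even/odd pair is a stock example, not from the original comment). Every call below is a tail call, but none is a self-call, so Scala's @tailrec would reject both functions and each call consumes a stack frame; recovering a loop means merging the two procedures into one, with an extra variable recording which procedure is "running":

```scala
object Parity {
  // Mutually tail-recursive pair: tail calls, but not self-calls,
  // so no single-function loop rewrite applies directly.
  def isEven(n: Int): Boolean = if (n == 0) true else isOdd(n - 1)
  def isOdd(n: Int): Boolean  = if (n == 0) false else isEven(n - 1)

  // Hand-merged loop version: the `even` flag encodes which of the two
  // procedures we would currently be inside.
  def parity(n0: Int): Boolean = {
    var n = n0
    var even = true
    while (n != 0) { n -= 1; even = !even }
    even
  }
}
```

The merge is easy here because the two procedures are near-duplicates; with genuinely different mutually recursive procedures, the combined loop amounts to writing a small state machine by hand.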
Your example assumes that you can inline the calls to a and b. If you can't (for example if one of them calls itself) then your optimization won't work.
This is really only essential in languages which provide recursion as a primitive and not iteration. Scheme is the prototypical example and was, I believe, the first to require tail call elimination in its standard for this very reason. In C it really doesn't matter that much because you can just write the iteration explicitly and then you know you won't blow out the stack since there is no recursive call in the first place. As a bonus, many programmers find it easier to understand.