In general, a recursive function looks something like this:
Function xxx(parameters)
    list of instructions 1
    if condition then Call xxx
    list of instructions 2
    Return
The first list of instructions is obeyed on the way up the spiral, i.e. as the instances of the function are being built, and the second list is obeyed on the way back down the spiral as the instances are being destroyed. You can also see the sense in which the first list is executed in the order you would expect, but the second is executed in the reverse of the order in which the functions were called.
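To make the order of execution concrete, here is a minimal sketch in Python (my choice of language - the pseudocode above is language-neutral); the function name xxx and the fixed depth limit of three are just illustrative:

def xxx(depth):
    print("list of instructions 1 at depth", depth)   # obeyed on the way up
    if depth < 3:                                      # the condition
        xxx(depth + 1)                                 # the recursive call
    print("list of instructions 2 at depth", depth)   # obeyed on the way down

xxx(1)

Running this prints the "instructions 1" lines for depths 1, 2, 3 and then the "instructions 2" lines for depths 3, 2, 1 - the second list comes out in the reverse order.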
This is one of the reasons that recursion is more than just a simple loop. A loop just goes in one direction, but recursion goes up the chain of calls and then back down again.
Sometimes a recursive function has only the first or the second list of instructions, and these cases are easier to analyze and implement. For example, a recursive function that doesn't have a second list doesn't need the copies created on the way up the recursion to be kept, because they aren't made any use of on the way down! This is usually called tail recursion because the recursive call to the function is the very last instruction in the function, i.e. it is in the tail end of the function. Many languages can implement tail recursion more efficiently than general recursion because they can throw away the state of the function before making the recursive call, so saving stack space.
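As a small sketch of the idea, here is a tail recursive countdown in Python; the name countdown and the use of print are invented for this illustration:

def countdown(n):
    if n == 0:
        return          # nothing left to do on the way back down
    print(n)            # the "first list" of instructions
    countdown(n - 1)    # the recursive call is the very last instruction

countdown(3)   # prints 3, 2, 1

Note that Python itself does not perform tail-call elimination, so this only shows the shape of tail recursion; languages that do eliminate tail calls can reuse the same stack frame for every call.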
So tail recursion is good because it can be efficiently implemented, but you can see that limiting yourself to tail recursion is missing some of the power of recursion because it only has a forward order of execution like a simple loop.
In book but not in this extract:
What Use is Recursion?
A Case for Recursion - The Binary Tree
Nested Loops
The Paradox of Self-Reference
Summary
The simplest way of repeating anything is to use a loop and the shape of the flow of control is indeed just a loop.
There are conditional loops and enumeration loops but the only type of loop you need is the conditional loop.
Recursion, or self-reference, is another way of repeating things. It isn't a necessary construct as it can be emulated using loops, but it is very useful.
Infinite recursion is the equivalent of an infinite loop.
The most important idea in recursion is that a complete copy of a function is brought into existence with each recursive call.
You can think of the flow of control in recursion as a spiral moving up each time the function is called and then spiraling down as each function completes.
The block of code before the recursive call is executed on the way up the spiral and the block of code after the recursive call is executed on the way down the spiral.
If the block of code after the recursive call is eliminated we have tail recursion, which is efficient to implement.
Recursion can be useful when implementing an algorithm that has a common recursive definition. It is more necessary, however, when the data structure is itself recursive.
Recursion is the easiest way to implement a variable number of nested loops, and this is also why such recursive algorithms are exponential (see the sketch at the end of this summary).
Many paradoxes arise from contradictory self-reference, which can be thought of as infinite recursions. The result of an infinite recursion is undecidable and all of the paradoxes of computer science are infinite recursions.
Recursion is an example of self-reference, which lends itself easily to mysticism and is often called a strange loop. Human consciousness is the result of us observing ourselves, which in turn is an example of the universe observing itself, a strange loop that goes beyond the scope of this book.
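As promised above, here is a sketch of how recursion gives a variable number of nested loops, assuming we want every combination of depth digits in a given base; the names nested, depth, base and prefix are invented for this illustration:

def nested(depth, base, prefix=()):
    if depth == 0:
        print(prefix)            # the innermost "loop body"
        return
    for digit in range(base):    # one loop per level of recursion
        nested(depth - 1, base, prefix + (digit,))

nested(2, 3)   # prints the nine pairs (0, 0), (0, 1), ... (2, 2)

Each extra level of depth multiplies the amount of work by base, which is where the exponential behavior comes from.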