Functional And Dysfunctional Programming
Written by Mike James   
Thursday, 15 November 2018

So what does all this have to do with functional programming?

The simple answer is that functional programming aims to remove state and mutability from programming and make it more like static math. In other words, it attempts to freeze time out of programming in the same way that math does. 

This is the usual "standard" statement of functional programming, but it isn't quite as pure as it sounds. There is a lot of messiness in functional programming, but it is often wrapped up in so much jargon that you can't really see it for the clutter. 

In math, functions are stateless. If you evaluate sin(Pi) at 12 noon today, you will get the same answer when you evaluate it at 12 noon tomorrow. In math, functions don't change. 

In functional programming, functions aren't just ways of grouping things together in a neat package. Functions are first-class objects and can be used anywhere a number or an expression can be used. There are also "higher-order" functions, which can accept functions and spit out functions as their result. Most important of all, functions never change the state of the system - they don't have side effects. 

The no-side-effects rule is a bit difficult to stick to because, if the system is going to deliver a result or interact with a user, then side effects are essential and some state has to change. 

You will often hear the term immutable used in connection with functional programming, and exactly what it means is often confused with a simpler idea - object immutability.

Object immutability is an easy idea. For example, if a String object is immutable it can't be changed, but this doesn't mean you can't write:

string1=string1+"A";

What happens in this case is that the String object that the variable string1 references is discarded and a new object is created with "A" attached to its right-hand end. This is an almost trivial meaning of immutable and, if it wasn't pointed out, it would go unnoticed. 

The really important use of immutable in functional programming is the way variables don't - vary, that is.

For example, in F#, if you write:

let x=1

then that's it: x is 1 forever and ever. This is immutability of reference or of value, depending on the type of data involved. 

Hold on a moment - doesn't banning variables mean that you can't write things like:

let x=1
x=x+1

You can't increment a variable because there are no variables. Once a symbol is bound to a value it doesn't and can't change. 

OK, so how do functional programmers do things that "normal" dysfunctional programmers take for granted - like counting how many times something happens, or adding up the first ten integers?

The answer is that in functional programming no symbol is ever bound to a value until its final value is known. This is the "frozen time" approach of math. 

Take, for example, the problem of adding up the first ten integers. In most programming languages you would write something like:

let total=0;
for(let i=0;i<10;i++){
 total=total+i;
}

That is, you would use a for, or some other sort of, loop to form a running sum until you had the answer, i.e. 45. The symbol total would change its value all through the loop until it reached the final answer. It is a machine that whirs into life and loops round counting things.

If you want to make symbols immutable then you can't do things in this way. You have to use recursion to put off binding any intermediate results until you have the final answer.  

The way to do this is to suppose that we have a function, sumto9(x), that returns the sum of the integers from x to 9. It's obvious that sumto9(9) is 9; it is also obvious that sumto9(8) is 8+9, or 8+sumto9(9). You can also see that this generalizes to sumto9(x)=x+sumto9(x+1), with sumto9(9) defined as 9. Writing this as a recursive function gives:

function sumto9(x){
 if(x===9) return x; 
 return sumto9(x+1)+x;
}

Now you can write:

let total=sumto9(0)

total never has to be set to an intermediate value; it just gets bound to its final value in one go. 

You could argue that all of the "variables" of the process are now in the recursion, but notice that x doesn't change - it just gets created in a new context each time the function is called. That's the whole point of function parameters: they allow things to change by being re-created - see the object immutability discussed earlier. Function parameters do most of the work that variables do in dysfunctional programming.

As we don't have variables, we don't need loops: recursion plays the role of iteration and parameters play the role of loop variables.

OK, but what is this "tail recursion"?

Well, if you can arrange your recursive functions so that the recursive call occurs right at the end of the function, i.e. in the tail, then the function can be optimized by the compiler into a loop.

Yes that's right - you go to the trouble of thinking recursively and the compiler undoes it all and re-writes it as a loop.

The reason this can be done is that, as soon as the tail of the function is reached, the compiler can throw away all of the state information about the function, i.e. its stack frame, and treat the new call as part of the same function call. That is, it unwinds:

let total=sumto9(0)

into something like:

let total=0+1+2+3+4+5+6+7+8+9

With tail recursion optimization, functional languages can be as efficient as dysfunctional languages.

There are lots of other related mechanisms for putting off binding a value to a symbol until the value is completely determined, but they all have a similar feel and use the same sorts of techniques. 



Last Updated ( Monday, 26 November 2018 )