Programmer's Guide To Theory - Where Do The Big Os Come From
Written by Mike James   
Monday, 16 March 2020

You may know about big O notation, but what does it tell you about the algorithm in question? This is what this extract from Chapter 15 of my recent book is all about.

 

A Programmer's Guide To Theory

Now available as a paperback and ebook from Amazon.


Contents

  1. What Is Computer Science?
    Part I What Is Computable?
  2. What Is Computation?
  3. The Halting Problem
  4. Finite State Machines
    Extract 1: Finite State Machines
  5. Practical Grammar
  6. Numbers, Infinity and Computation
    Extract 1: Numbers 
    Extract 2: Aleph Zero The First Transfinite
    Extract 3: In Search Of Aleph-One
    Extract 4: Transcendental Numbers
  7. Kolmogorov Complexity and Randomness
    Extract 1: Kolmogorov Complexity
  8. The Algorithm of Choice
  9. Gödel’s Incompleteness Theorem
  10. Lambda Calculus
    Part II Bits, Codes and Logic
  11. Information Theory 
  12. Splitting the Bit 
  13. Error Correction 
  14. Boolean Logic
    Part III Computational Complexity
  15. How Hard Can It Be?
    Extract 1: Where Do The Big Os Come From
  16. Recursion
    Extract 1: What Is Recursion
    Extract 2: Why Recursion
  17. NP Versus P Algorithms
    Extract 1: NP & Co-NP
    Extract 2: NP Complete


Computational complexity is the study of how hard a problem is in terms of the resources it consumes. Notice that this is not the study of a single problem, but a set of similarly formulated problems. For example, you might be able to solve a particular quadratic equation, but what we are interested in is the solution to all quadratic equations.

You might think that how hard a problem is would be of practical interest, but not much theoretical interest. You might expect problems to show a smooth variation in how hard they are and how much computing resource they require. It turns out that there are distinct classes of difficulty that result from the very nature of algorithms. The easiest of these classes can still present us with problems that are effectively out of our reach, but the hardest present us with problems that almost certainly will always be beyond our computational powers. This in itself has practical consequences.

In this chapter, but not in this extract:

  • Orders
  • Problems and Instances
  • Polynomial Versus Exponential Time
  • A Long Wait

Where do the Big Os Come From?

It is interesting to ask what makes one algorithm O(n), another O(n^2), and so on. The answer lies in the loops. If you have a single loop that processes a list of n elements then it isn't difficult to see that it runs in time O(n). That is:

For i=1 to n
  do something
Next

takes a time proportional to n.

If you nest one loop inside another you get O(n^2):

For i=1 to n
  For j=1 to n
    do something
  Next
Next

You can continue this argument for O(n^x). In other words, the power gives you the maximum depth of nested loops in the program and this is the case no matter how well you try to hide the fact that the loops are nested!
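
For example, the inner loop can hide inside a helper function. Here is a rough Python sketch of the idea - the names total_of and running_totals are purely illustrative:

def total_of(values):
    # a loop tucked away inside a helper function
    total = 0
    for v in values:
        total += v
    return total

def running_totals(values):
    results = []
    for i in range(len(values)):                   # the only visible loop
        results.append(total_of(values[:i + 1]))   # but each call scans up to n items
    return results

# Only one loop is visible in running_totals, yet each call to total_of
# examines up to n items, so the overall running time is still O(n^2).
print(running_totals([1, 2, 3, 4]))   # [1, 3, 6, 10]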

You could say that an O(n^x) program is equivalent to x nested loops. This isn't a precise idea - you can always replace loops with the equivalent recursion, for example, but it sometimes helps to think of things in this way.

Another way to think of this is that an O(n) algorithm examines the list of possibilities or values just once. An O(n^2) algorithm examines the entire list once for each item in the list. An O(n^3) algorithm examines the entire list once for each pair of items in the list, and so on.
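
A minimal Python sketch of these three patterns, using an arbitrary list called values, might look like this:

values = [3, 1, 4, 1, 5]   # any list of n items

# O(n): examine each item once
for a in values:
    pass                   # examine a

# O(n^2): examine the whole list once for each item
for a in values:
    for b in values:
        pass               # examine the pair (a, b)

# O(n^3): examine the whole list once for each pair of items
for a in values:
    for b in values:
        for c in values:
            pass           # examine the triple (a, b, c)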

As well as simple powers of n, algorithms are often described as O(log n) and O(n log n). These two, and variations on them, crop up so often because of the way some algorithms work by repeatedly "halving" a task. Such algorithms are often referred to as "divide and conquer" algorithms. For example, if you want to search for a value in a sorted list you can repeatedly divide the list into two - a portion that certainly doesn't contain the value and a portion that does. This is the well-known binary search. If this is new to you then look it up because it is one of the few really fundamental algorithms.
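
For reference, here is one way binary search might be written in Python - the name binary_search and the use of a plain sorted list are just for illustration:

def binary_search(sorted_list, target):
    # return the index of target in sorted_list, or -1 if it isn't there
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2          # split the remaining portion in two
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1                # the value can't be in the lower half
        else:
            high = mid - 1               # the value can't be in the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3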

Where does log n come into this? Well, ask yourself how many repeated divisions of the list are necessary to arrive at a single value. If n is 2 then the answer is obviously one division, if n is 4 then there are two, if n is 8 then there are three, and so on. In general, if n = 2^m then it takes m divisions to reach a single element.

As the log to the base 2 of a number is just the power that you have to raise 2 by to get that number, i.e.

log₂ n = log₂ 2^m = m

you can see that log₂ n is just the number of times that you can divide n items before reaching a single item.
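
You can check this with a few lines of Python - the function name halvings is purely illustrative:

import math

def halvings(n):
    # count how many times n items can be halved before one item is left
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

for n in (2, 4, 8, 1024):
    print(n, halvings(n), math.log2(n))   # the counts agree when n is a power of 2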

Hence log₂ n and n log₂ n tend to crop up in the order of clever algorithms that work by dividing the data down. Again, you could say that an O(log₂ n) algorithm is equivalent to a repeated halving down to one value and O(n log₂ n) is equivalent to repeating the partitioning n times to get the solution.

As this sort of division is typical of recursion, you also find O(log₂ n) as the mark of a recursive algorithm. When a recursive algorithm runs in O(n^x) it is simply being used as a way of covering up the essentially iterative nature of the algorithm!

What about exponential-time algorithms? Consider for a moment O(a^n), where a^1 = a corresponds to the work needed to process a problem of size 1, a^2 corresponds to the work needed to process a problem of size 2, and so on. This looks like, first, a single loop doing a fixed amount of work and then a pair of nested loops doing the same amount of work. Similarly, O(a^n) looks like n nested loops doing some work.

Notice that in this case a is the average time that one of the loops takes. That is, you get an exponential-time algorithm when the algorithm involves a variable number of nested loops. The bigger n is, the more nested loops there are, and this is why it slows down so quickly with increasing n. A variable number of loops in an algorithm may be something you are not familiar with. In practice, the easiest approach to implementing a variable number of loops is to use recursion. As discussed in the next chapter, recursion is really all about variable numbers of loops and exponential-time performance.

What sort of problem needs a variable number of loops? Consider the simple problem of displaying all of the values in a list of length n:

For i=1 to n
  display ith value in list
Next 

Now consider the problem of displaying every pair of values, the first taken from one list and the second from another:

For i=1 to n
  For j= 1 to n
    display ith value in list1 and jth in list2
  Next 
Next 

Now consider the same problem for three lists - you need three nested loops. If I ask you to display all of the combinations of items from n lists, with one value taken from each list, you can see that this is going to need n nested for loops. How do you write this variable number of loops? The best answer is that you don't - you use recursion instead; see the next chapter.

Exponential time usually occurs with problems of this sort where you have to look at every combination of possible items and this needs a variable number of nested loops.
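
As a taste of the next chapter, here is a rough Python sketch of how recursion can stand in for a variable number of nested loops - the name display_combinations and the sample lists are just for illustration:

def display_combinations(lists, chosen=()):
    # each level of recursion plays the part of one nested loop,
    # so the nesting depth grows with the number of lists
    if not lists:
        print(chosen)          # one value has been chosen from every list
        return
    for value in lists[0]:     # the "loop" for the current list
        display_combinations(lists[1:], chosen + (value,))

# three lists of two items each give 2^3 = 8 combinations,
# which is where the exponential O(a^n) growth comes from
display_combinations([[1, 2], ["a", "b"], [True, False]])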


