The Trick Of The Mind - Top Down
Written by Mike James
Friday, 22 December 2023
Bottom Up

If there is a top-down you can be sure that there will be a bottom-up. The bottom-up approach is how things were done in the early days of computing. As its name suggests, it corresponds to creating subroutines that do the lowest-level things concerned with the problem. In the chess-playing example this would be things like defining the board, moving pieces and so on. Of course, in the case of pure bottom-up you are speculatively creating routines that might or might not be used in your final solution. You are creating a toolkit, or library, of potentially useful new commands. After you have done this you change into another mode and start to use your toolkit to do bigger tasks such as detecting check, taking another piece and so on.

Bottom-up is generally not a good approach because the creation of useful routines has no guiding direction. It can best be described as blundering around in the dark in the hope of encountering a solution. What bottom-up is useful for is providing a toolkit that a top-down approach can use to tackle the problem at a higher level. Embarking on the chess-playing program with a library of useful routines ready to use is a much better starting point than having just the commands of a general-purpose language.

A more practical approach is to work top-down but occasionally take time out to create some lower-level, bottom-up routines that simplify your work. In practice, creating complex systems is difficult and usually messy, and a rigidly constrained approach rarely works. Sometimes what you have to do is move between levels as the project progresses, but at least you have a framework within which to consider what you are doing. Bottom-up, interpreted as implementing a library of useful routines, is not a silly way to begin a project.
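To make the bottom-up idea concrete, here is a minimal sketch in Python (the article itself is language-neutral). Every name in it, new_board, move_piece and is_in_check, is hypothetical, invented for illustration; these are exactly the sort of speculative toolkit routines you might write before knowing which of them the final program will actually use.

def new_board():
    """Low-level routine: an 8x8 board, with '.' marking an empty square."""
    board = [["."] * 8 for _ in range(8)]
    board[0][4] = "K"  # a white king, just so the board isn't empty
    board[7][4] = "k"  # and a black king
    return board

def move_piece(board, src, dst):
    """Low-level routine: move whatever sits at src to dst."""
    (r1, c1), (r2, c2) = src, dst
    board[r2][c2] = board[r1][c1]
    board[r1][c1] = "."

def is_in_check(board, king_square):
    """A bigger task you would build later out of the toolkit (a stub here)."""
    ...

# The toolkit in use: this code already reads in terms of the new
# commands rather than raw lists and indices.
board = new_board()
move_piece(board, (0, 4), (1, 4))

Notice that the last two lines already read in terms of the new commands, which is the better starting point the toolkit gives a later top-down pass.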
How Big Should A Subroutine Be?

How do you know you have reached the bottom of the top-down hierarchy and it is time to write some instructions that get a job done rather than simply call yet more delegating subroutines? The answer is fairly simple, but first we need a small detour.

One of psychology's early phases (there have been many) was based on information, or communications, theory. It viewed the human processor as an information channel. The idea was that the human wasn't so much a data-processing system as a data-communications system. The inputs were the senses and the outputs were whatever the human did or perceived as a result of the inputs. This approach studied the channel capacity and the effects of noise on signal processing. There are theorems relating to things like reliability, bandwidth and noise, and these are applicable to humans just as much as to networks and radio channels. The exact details aren't particularly important, but what is important is the obvious conclusion that humans have limited data channels.

One of the nicest and simplest results in this area is the "magical number seven", or Miller's law after the psychologist who discovered it. (To know more, search the Internet for "The Magical Number Seven, Plus or Minus Two".) After many experiments it was found that, on average, humans can manage to keep about seven items of information in their short-term memories. The variation observed in the experiments was such that a more accurate statement of memory capacity is seven items plus or minus two – hence the title of the paper referenced above. You can argue about the fine detail and whether or not the number seven deserves its special position. You can also argue about the limits applying to different kinds of cognitive processing, but the important point is that there is a limit, not its actual value. Humans are bandwidth-limited information-processing machines and we have a channel capacity or data-rate limit.

Chunking

What has this got to do with programming? The answer is quite a lot. You are subject to the same information-processing limits as any lesser mortal. This is the reason why all programming methods endeavor to hide the complexity of a program in a layered structure such as top-down. You will often hear programmers talking about complexity reduction, but actually what is going on is complexity hiding, also known in the wider world as "chunking".

As we have discovered, programs are built up using small subroutines, functions or modules. Each of these should be small enough to meet the seven-plus-or-minus-two criterion. Each layer should equally be confined to the same size limits, so that every layer can be easily understood. There is also a subtle effect caused by the nesting of control instructions. One If instruction is easy to understand, but multiple, nested If instructions are much harder to untangle. The same is true of loops nested within loops, within Ifs and so on. Nested control structures make it difficult to chunk.

The idea behind chunking is that a human can deal with about seven things at a time, but can use different levels of abstraction to work with bigger systems. That is, you can remember seven letters, or seven words, or perhaps seven sentences, or the titles and plot summaries of seven very long books. The principle of chunking applied to programming is to build a hierarchy of code, each layer composed of small objects that can be understood in a single look.

I have occasionally encountered too-literal implementations of the "seven" part of the rule. The point is that seven is a guideline and the actual number depends on all sorts of factors, but "Keep It Small Stupid" is a much better statement of intent than "Keep It Simple Stupid", which is the usual interpretation of the KISS acronym. Now you know the origin of the magic number seven that crops up so often, and you also know that seven could just as well be five or nine. In fact, the situation is much vaguer than even these loose numbers suggest. The real confounding factor is what exactly constitutes an item that the "seven" applies to. Should it be seven characters, seven words, seven lines, seven subroutines...? Clearly, the unit used in the rule of seven matters. Basically, the unit should always be scaled to fit the situation so that it is one chunk of comprehension or memory. Easy to mandate, but usually much harder to do.
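As a sketch of what chunking looks like in code, here is a small Python example. The order-processing scenario and its names (valid_order, in_stock, ship) are hypothetical, chosen only to contrast a nest of If instructions with the same logic broken into routines that can each be understood in a single look.

# Hard to chunk: three nested Ifs have to be held in mind at once.
def process_order_nested(order, stock):
    if order is not None:
        if order["item"] in stock:
            if stock[order["item"]] >= order["qty"]:
                stock[order["item"]] -= order["qty"]
                return "shipped"
    return "rejected"

# Easier to chunk: each routine is a single idea with a descriptive name.
def valid_order(order):
    return order is not None

def in_stock(order, stock):
    return stock.get(order["item"], 0) >= order["qty"]

def ship(order, stock):
    stock[order["item"]] -= order["qty"]
    return "shipped"

def process_order(order, stock):
    if valid_order(order) and in_stock(order, stock):
        return ship(order, stock)
    return "rejected"

stock = {"widget": 5}
print(process_order({"item": "widget", "qty": 2}, stock))  # prints: shipped

In the second version the top-level routine holds about three ideas, comfortably inside the seven-plus-or-minus-two limit, and each helper is a separate chunk that can be checked on its own.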