The Heart Of A Compiler
Written by Mike James
Thursday, 01 April 2021
Page 2 of 3
As soon as programmers started to write macros for pieces of arithmetic like Z=X+Y, i.e. ADD3 X,Y,Z, they started to think about more ambitious facilities.
It seemed like a simple extension to allow any arithmetic expression to be written in a program. Why can’t we write something like Z=X+Y*W as part of a program and expect a macro to convert it into the machine code needed to move the data, add, multiply and so on?
In fact this simple facility is very difficult to implement, and it required significant steps forward in computer science to work out how to do it. The difficulty isn’t at all obvious when you first consider the problem. At first sight it looks as if translating a piece of arithmetic should be no harder than many of the complicated pieces of behavior captured by simple macros. In fact, you can quickly prove that it is beyond the capabilities of a simple macro, which simply doesn’t have the power to express the rules of arithmetic.
For example, you might think that all you have to do is notice when there is a “+” sign and arrange to add the two quantities on either side of it. That is, in the case of A+B the machine code that is needed is simply:
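The code itself is missing here; reconstructing it from the description that follows, using an accumulator-style MOV/ADD notation (the register name AL and the mnemonics are assumptions), it would plausibly read:

```asm
MOV AL,A    ; move the contents of A into the accumulator AL
ADD AL,B    ; add the contents of B; AL now holds A+B
```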
That is, move the contents of A to AL, then add the contents of B to AL, which now contains A+B.
This seems to work no matter what A and B are and this is indeed well within the capabilities of a macro. The problems start when you try to extend what the macro can do.
You very quickly discover that things are more complicated.
For example X+A+B would be translated to something like:
which is clearly wrong, because the result of adding A to X is lost when the contents of A are moved into AL a second time, before B is added, i.e. what is actually worked out is just A+B.
The reason for this failure of our simple rule is that an operator like “+” can have another arithmetic expression as one of its left- or right-hand “operands”.
That is, the most general form of an addition isn’t variable + variable, but expression + expression. You might smell a recursion coming on here, because an expression is now being defined in terms of expressions, and yes, it really is recursion that is the key method in translating high-level languages.
To see this you have to notice that what we have just written is that an expression can be an expression plus another expression and this is clearly recursive – that is, the thing being defined is on both sides of the definition.
Notice that the rules:

expression -> expression + expression
expression -> variable

(read the arrow as “is a”) fit A+B, A+B+C, A+B+C+D and so on, and if you add the rule:

expression -> expression * expression

then you also have A*B, A*B+C, A+B*C and so on.
You can use these rules, which are called production rules, to actually generate the code from an expression. All you have to do is scan the expression and see which rule fits.
For example, if we have:

A+B

then clearly it starts with a variable, A, which fits expression->variable. When we get to the + sign this fits the rule expression->expression+expression, and when we hit B we have expression->variable again, i.e. B.
Each rule that matches can be arranged to generate a suitable piece of code, but we have to be a little cleverer than our earlier method. We have to generate the code recursively. For example, we could use the following instructions (in pseudo code):
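The pseudo code itself is missing here; a sketch of what it plausibly looked like, assuming the accumulator-style MOV/ADD/MUL instructions used earlier, is:

```
translate(expression):
    if expression is a single variable V:
        generate "MOV AL,V"
    if expression fits expression1 + variable V:
        translate(expression1)           ; recursive call for the left part
        generate "ADD AL,V"
    if expression fits expression1 * variable V:
        translate(expression1)           ; recursive call for the left part
        generate "MUL AL,V"
```

Note that this scans strictly left to right, which is exactly what produces the order problem discussed next.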
The code that this small chunk of program generates can deal perfectly well with the arithmetic expressions described by the rules given above.
For example, A*B+C is translated to:
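The generated code is missing here; under the left-to-right scheme just described it would plausibly be:

```asm
MOV AL,A    ; AL = A
MUL AL,B    ; AL = A*B
ADD AL,C    ; AL = A*B+C - correct in this case
```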
The order problem
So we can do it!
We can translate expressions to machine code using an automatic system.
No we can’t!
What we have managed to do is find a way of translating a small subset of expressions correctly. There are lots of expressions, not covered by the rules given above, that lie just outside the scope of our translator.
You might think that conquering this problem was just a matter of adding rules and writing more programs like the one given above. However, our methods are very ad hoc, and as you add rules the translation procedures become increasingly complicated and messy, with lots of IFs to deal with exceptions.
This is how the problem was tackled in the early days of computing and a line of languages that eventually developed into Cobol slowly pushed forward the range of expressions that could be compiled into machine code.
Yes, that’s right – in the effort to translate arithmetic expressions, assemblers had moved on to become “compilers”, although along the way they were called all sorts of things, including “autocoders” and “translators”.
What might surprise you is that there is a deeper problem waiting to be discovered even in the small subset of expressions we have defined. Our small program does indeed translate A*B+C correctly, but it doesn’t work for the seemingly similar B+A*C.
How can such a small change break our rules?
The reason is that it reads and translates from left-to-right and so for B+A*C it generates the code:
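The generated code is missing here; following the same left-to-right scheme it would plausibly be:

```asm
MOV AL,B    ; AL = B
ADD AL,A    ; AL = B+A
MUL AL,C    ; AL = (B+A)*C - the wrong order of evaluation
```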
which, if you examine it, clearly works out (B+A)*C, which is not the same as B+(A*C).
Arithmetic expressions do not work from left to right: B+A*C means multiply A by C and then add B.
This non-left-to-rightness is something that many people have never really noticed! More surprisingly, it is something that some people simply don’t believe, and even argue about when it is first pointed out to them. I have even had people argue with me that (B+A)*C is always the same as B+(A*C)! (If you are in any doubt, or need to convince a doubter, just get them to try (1+2) times 3, i.e. 9, versus 2*3 plus 1, i.e. 7.)
Mathematicians say that multiplication has priority over addition and math teachers say “BODMAS” or something similar and this has caused as much trouble for compilers as for school pupils. It is the reason that the language of arithmetic and later algebra is hard and subtle.
An arbitrary expression can be as non-left-to-right as you care to make it, and the order of evaluation can jump backwards and forwards. For example, consider:

A*B+C*D
First you can work left-to-right and evaluate A*B but then you have to skip over the “+” and evaluate C*D only to jump backwards and finally evaluate the + by adding together the results of the multiplications.
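Correct code for this jumping-around order needs somewhere to park an intermediate result; a sketch of what it might look like, assuming a temporary location T in addition to the accumulator AL, is:

```asm
MOV AL,A    ; AL = A
MUL AL,B    ; AL = A*B
MOV T,AL    ; save A*B in a temporary T
MOV AL,C    ; AL = C
MUL AL,D    ; AL = C*D
ADD AL,T    ; AL = C*D plus the saved A*B - the deferred addition
```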
Arithmetic is truly difficult!
Last Updated (Thursday, 01 April 2021)