Binary Arithmetic
Written by Mike James   
Monday, 30 October 2023
Logic and switches

After George Boole had described his Boolean logic, and after various people had built logical machines based on it, the simplicity of the “true/false” algebra was beginning to be understood.

Charles Peirce sketched out how batteries and switches could be used to calculate logic, but this was ahead of its time. Some fifty years later, in the late 1930s, Claude Shannon looked at the ideas again and noticed that, using high and low voltages to represent true and false, he could build not just logic circuits but arithmetic circuits as well.

All you have to do is write down the addition table for binary arithmetic for a single “digit” or bit:

       A + B = R
       0   0   0
       0   1   1
       1   0   1
       1   1   0

You might recognise this as the truth table for the Exclusive OR and this is indeed all that is needed to add two bits together - as long as you are prepared to ignore the carry.

In other words, the addition table for binary symbols is the same as the Exclusive OR in logic.
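
To see this outside of hardware, here is a quick sketch in Python (just as an illustration - the article's circuits are electrical, not software) using the ^ operator, which is XOR:

    # One-bit addition, ignoring the carry, is just XOR
    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", a ^ b)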

If you want to generate the carry then you need to add another column to the logic table:

       A + B = R C
       0   0   0 0
       0   1   1 0
       1   0   1 0
       1   1   0 1

Now you can see that the carry is simply an And operation.
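
Extending the sketch, Python's & operator is And, and the two operators together reproduce the whole table:

    # The result is the XOR of the bits, the carry is the AND
    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", a ^ b, "carry", a & b)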

Looking back at the addition table for binary you can now see that it can be represented using nothing but an Exclusive OR and an And gate.

Put the two together and you can add two bits and generate a result and a carry - this is called a half adder because it doesn’t deal with the problem of taking account of anything that might have been generated by an earlier pair of bits being added.

 

A half adder takes two bits and adds them to give a result and a carry
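
In code a half adder is nothing more than the two operators packaged together. A minimal sketch (half_adder is just a name chosen for this illustration) that returns the result and the carry:

    def half_adder(a, b):
        # Result is the XOR of the two bits, carry is the AND
        return a ^ b, a & b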

 

To create a full adder all we need to do is combine two half adders. The first adds the two bits to produce a result and a carry, and the second adds that result to the incoming carry to produce the final result. The final carry out is simply the Or of the two half adders' carries - at most one of them can ever be set.

 


A full adder takes two bits and the carry from the previous addition and produces a result and a carry
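
Continuing the Python sketch, a full adder is two half adders plus an Or gate for the final carry, and chaining full adders - a ripple-carry adder - adds whole binary numbers (full_adder and ripple_add are names invented for this illustration):

    def full_adder(a, b, carry_in):
        # First half adder adds the two bits,
        # the second adds in the carry from the previous stage
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2   # at most one of the two carries can be 1

    def ripple_add(a_bits, b_bits):
        # Add two equal-length bit lists, least significant bit first,
        # passing the carry along the chain of full adders
        carry, result = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result + [carry]

    print(ripple_add([1, 1, 0], [1, 0, 1]))   # 3 + 5 -> [0, 0, 0, 1], i.e. 1000 binary = 8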

 

A little while after Shannon worked out that binary arithmetic was very easy to implement, the same idea was rediscovered by George Stibitz. He was musing on the problems of using relays in telephone circuits when he suddenly realised that he could use the open/closed positions as 0/1 and do arithmetic.

He quickly worked out a half adder and built it on his kitchen table, using light bulbs to show the results. In 1940 he demonstrated a simple binary relay computer and in the audience was John Mauchly, who went on to build, with J. Presper Eckert, the first electronic computer - ENIAC.

 


Shannon and his first adder

 

Decimal Not Yet Dead!

The big shock is that ENIAC, one of the first electronic computers, was a decimal computer, not a binary one.

Part of the reason was that the electro-mechanical computers that came before it were decimal, but it must also have been that the advantages of binary just weren't that obvious - incredible as that now seems.

The only exception to the rule that early computers were decimal was Konrad Zuse's Z1, built in 1936. It also seems that he realised as early as 1935 that binary was the way to build a computer, which gives him a strong claim to have thought of the idea before Shannon or Stibitz. Switching to binary was such a simplification that Zuse was able to build complete computers on his kitchen table.

Representing numbers in binary makes the connection not only with electro-mechanical switching circuits but with more advanced electronics. Once you settle on using binary it becomes very easy to find physical systems that have two states that can be used to represent 0 and 1. This makes creating storage and memory devices very simple.

You can use everything from holes punched in paper tape or cards to the direction of magnetisation to represent numbers in binary. However, there is a cost in efficiency. If you code a number using binary then each storage location stores a single bit, but if you can find a way to store more than two states at a particular location then you can store more data in the same space.

Put another way, a binary number has more bits in it than the equivalent decimal number has digits. For example, the decimal number 1234 is 10011010010 in binary. Binary numbers take more space and we trade space for simplicity. Some forms of Flash memory, for example, use three or more voltage levels to store data using non-binary bases.
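
You can check the conversion with a short Python sketch that works by repeated division by two - the remainders, read in reverse, are the binary digits:

    n, bits = 1234, []
    while n:
        n, r = divmod(n, 2)
        bits.append(str(r))
    print("".join(reversed(bits)))   # 10011010010 - 11 bits for a 4-digit decimal number
    print(bin(1234))                 # Python's built-in agrees: 0b10011010010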

So far no one has gone back to basics and built a decimal computer, but one day, when binary computers have run out of steam, perhaps Babbage will be proved right.

For more on binary numbers and two's complement representation in particular, see Binary - negative numbers

 


 

Related Articles

Floating point numbers

Boolean Logic

Charles Babbage

Claude Shannon

Eckert & Mauchly and ENIAC

  • Mike James, Chief Editor of I Programmer, is also a prolific author. In addition to his books on specific languages, he has written The Programmer’s Guide To Theory, which sets out to present the fundamental ideas of computer science in an informal and yet informative way, and The Trick Of The Mind: Programming and Computational Thought, aimed at programmers and non-programmers alike, which examines the nature of programming and reveals why it is a very special skill.

What Programmers Know


Contents

  1. The Computer - What's The Big Idea?*
  2. The Memory Principle - Computer Memory and Pigeonholes*
  3. Principles of Execution - The CPU
  4. The Essence Of Programming
  5. Variables - Scope, Lifetime And More*
  6. Binary Arithmetic
  7. Hexadecimal*
  8. Binary - Negative Numbers*
  9. Floating Point Numbers*
  10. Inside the Computer - Addressing
  11. The Mod Function
  12. Recursion
  13. The Lost Art Of The Storage Mapping Function *
  14. Hashing - The Greatest Idea In Programming
  15. Advanced Hashing
  16. XOR - The Magic Swap*
  17. Programmer's Introduction to XML
  18. From Data To Objects*
  19. What Exactly Is A First Class Function - And Why You Should Care*
  20. Stacks And Trees*
  21. The LIFO Stack - A Gentle Guide*
  22. Data Structures - Trees
  23. Inside Random Numbers
  24. The Monte Carlo Method
  25. Cache Memory And The Caching Principle
  26. Data Compression The Dictionary Way
  27. Dates Are Difficult*
  28. Sequential Storage*
  29. Magic of Merging*
  30. Power of Operators
  31. The Heart Of A Compiler*
  32. The Fundamentals of Pointers
  33. Functional And Dysfunctional Programming*

* Recently revised


 





 


