How Memory Works
Written by Harry Fairhead   
Friday, 03 August 2018

Core memory

The best form of memory before large-scale integrated circuits made RAM (Random Access Memory) chips commonplace was core memory. This is yet another simple idea, but one that took a long while to make work. Jay Forrester thought of using magnetic rings, or “cores”, threaded on wires. The bits were stored as the direction of magnetisation of the core. Each core stored one bit, and at first core memory was expensive and occupied a lot of space.

By the 1970s core memory had advanced to the point where 1MByte could be implemented in something the size of a small car! Later, core memory made mini-computers possible, with what were then huge main memories in the region of 16Kbytes or more.


A late core memory holding 1024 bits of data in a 32x32 matrix.


One of the interesting advantages of core memory was that it held its contents even when the power was removed. Early mini-computers, for example, could be switched off at night and you could come back in the morning and pick the program up exactly where you left off! Core memory is still sometimes used where this sort of persistence and robustness is required.

There were some other odd methods of storing data. Von Neumann, for example, tried using “Selectron” valves – gas-filled discharge tubes that stored bits as tiny neon lights. Von Neumann’s computer was going to use 40 x 4Kbit Selectrons, but these were expensive to make and, over time, unreliable.

Eventually core memory took over, and in its turn transistor memory took over from core in the form of static and dynamic RAM chips, which is exactly what we use today.

In your desktop machine the bulk of the memory is built using dynamic RAM, packaged in the form of SIMMs (Single Inline Memory Modules) or, in more recent machines, DIMMs (Dual Inline Memory Modules). A much smaller amount of memory, the cache memory, is implemented using static RAM chips because static RAM is faster than dynamic RAM.


So now we know how memory works. Well, only up to a point; we still don’t really know how the addressing and the data retrieval work. There are lots of different technologies for storing information, but how the data is addressed tends to work in the same general way.

In particular, it doesn’t matter whether you are using static or dynamic RAM, or even magnetic cores; the basic method of addressing is the same.

Each memory element, or cell, can store one bit and it has a data input line, a data output line, a Read/Write line and a select line. The select line activates the cell and the Read/Write line tells it either to output its contents or to store what is on its input line.



A generalized basic memory element
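The behaviour of a single cell can be sketched in software. This is an illustration only, not real hardware: the class and method names are invented here, and the convention used in this article – Read/Write low for a write, high for a read – is modelled by the read_write argument.

```python
# A sketch of the one-bit memory cell described above: a select line,
# a Read/Write line, a data input and a data output.
class MemoryCell:
    def __init__(self):
        self.bit = 0              # the stored bit

    def step(self, select, read_write, data_in):
        """Return what the cell drives onto its data output line.

        select     -- 1 activates the cell
        read_write -- 1 means read (output contents), 0 means write
        data_in    -- the bit on the data input line
        """
        if not select:
            return 0              # an unselected cell drives nothing
        if read_write:
            return self.bit       # read: put the stored bit on the output
        self.bit = data_in        # write: latch the input
        return 0
```

For example, `cell.step(1, 0, 1)` stores a 1, and a subsequent `cell.step(1, 1, 0)` returns it.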

The cells are then organised into a grid with horizontal and vertical selection wires – row and column selects.

There is one cell at the intersection of each row and column select. The cell is only selected when both its row and its column select are high.

The data outputs of all of the cells are connected together, and the same is true of all the inputs and of the Read/Write control lines. This means that the memory array has only a single common input, a single common output and one Read/Write line.



A memory array
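The grid wiring can also be sketched in software. Again this is an illustration, not real hardware: the names cells and cycle are invented, the 32x32 size is taken from the photo caption, and the selects are modelled as a single row number and column number.

```python
# A sketch of the memory array described above. Every cell shares one
# data-in line, one data-out line and one Read/Write line; a cell
# responds only when both its row select and its column select are high.
ROWS, COLS = 32, 32
cells = [[0] * COLS for _ in range(ROWS)]     # one stored bit per cell

def cycle(sel_row, sel_col, read_write, data_in):
    """One memory cycle; returns the bit on the common data-out line."""
    data_out = 0
    for r in range(ROWS):
        for c in range(COLS):
            selected = (r == sel_row) and (c == sel_col)
            if not selected:
                continue                      # unselected cells stay quiet
            if read_write:                    # Read/Write high: read
                data_out |= cells[r][c]       # only this cell drives the line
            else:                             # Read/Write low: write
                cells[r][c] = data_in
    return data_out
```

Looping over every cell mirrors the hardware, where the select wires reach every cell in the array but gate all except one of them off.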


Now suppose you want to store a 1 in the array at a particular location.

The data is placed on the data input line and the Read/Write line is set low to indicate that you want to write to the array. Finally, the appropriate row and column selects are set high to select the cell in question, and it, and only it, stores the data on the data line.



Storing a one


To read the data back from the array you do the same thing, only with the Read/Write line set high to indicate that you want to retrieve the bit in question.

Only the selected cell outputs anything to the common data out line.



Reading the array
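Because the grid in the photo is 32x32, each of its 1024 cells can be picked out by a row number and a column number from 0 to 31. One way to produce the two selects, shown here as a hypothetical illustration (the function name decode is invented), is to split a 10-bit address into five row bits and five column bits:

```python
# Hypothetical address decoder for a 32x32 (1024-cell) array: the top
# five bits of a 10-bit address drive the row select, the bottom five
# bits drive the column select.
def decode(address):
    row = address >> 5          # top 5 bits pick the row
    col = address & 0b11111     # bottom 5 bits pick the column
    return row, col
```

For example, decode(37) gives row 1, column 5, so cell 37 sits at the intersection of row select 1 and column select 5.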


