Fundamentals of Computers & Programming

This morning I’m tempted to describe the basic theory of how computers run programs, but first: what is a program, or a computer?

A program is a list of instructions built from at least four basic operations:

  1. Store arbitrary amounts of data to be read later
  2. Repeat instructions (to process more data or refine answers)
  3. Do different things for different data
  4. Communicate to, or ideally with, you (otherwise what’s the point of computation?)

More operations can be added, but they’re not strictly needed.
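
To make that concrete, here’s a minimal sketch of a complete program built from nothing but those four operations. It happens to be Haskell, which reappears at the end of this post, but any language would do.

    -- Read a number n, then sum 1..n.
    main :: IO ()
    main = do
      line <- getLine                             -- (4) communicate: read input
      let n = read line :: Int                    -- (1) store data to read later
          go total i
            | i > n     = total                   -- (3) branch on the data
            | otherwise = go (total + i) (i + 1)  -- (2) repeat instructions
      print (go 0 1)                              -- (4) communicate the answer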


We can, of course, build electrical circuits capable of performing these operations.

For (3) we can build upon electrically-controlled switches called “transistors”.
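
To make (3) concrete (the names here are illustrative), simulate each wire as a Haskell Bool: AND, OR, and NOT are each a handful of transistors, and composing them gives a “multiplexer” that picks between two inputs based on a third.

    -- Selects a when sel is True, b when sel is False.
    mux :: Bool -> Bool -> Bool -> Bool
    mux sel a b = (sel && a) || (not sel && b)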

For (1) we can use electrical feedback or, for more efficiency, capacitors, though the latter require circuitry to periodically refresh their values.

For (2) we can include a periodic “clock signal” amongst the inputs, driven by the (dis)charging of a capacitor.
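
Tying (1) and (2) together, here’s a sketch of storage-by-feedback, again with each wire simulated as a Bool: an “SR latch” built from two cross-coupled NAND gates, re-evaluated once per clock tick. (Updating one gate at a time, as below, is a simplification of how real feedback settles.)

    nand :: Bool -> Bool -> Bool
    nand a b = not (a && b)

    -- The s and r inputs are active-low; both True means "hold".
    tick :: Bool -> Bool -> (Bool, Bool) -> (Bool, Bool)
    tick s r (q, qBar) = (q', qBar')
      where
        q'    = nand s qBar   -- each gate's output feeds the other's input
        qBar' = nand r q'

    main :: IO ()
    main = do
      let stored = tick False True (False, True)    -- pulse S low: store a 1
          held   = iterate (tick True True) stored  -- inputs now inactive
      mapM_ print (take 4 held)                     -- q stays True, tick after tick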


From there we can abstract:

  1. Feedback is abstracted into fixed-size “registers”, and capacitors into “RAM”, which adds circuitry to select which capacitors are read or written given some “memory address”. This storage is all connected via special wires called “the bus”.
  2. The periodic clock signal drives a “control unit” to determine which instruction to run and how to run it.
  3. Transistors are abstracted into logic gates, adders, etc. (see the sketch after this list).
  4. The resulting machine can send signals (indirectly) to your monitor, etc.
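
For example, here’s item 3 in miniature, simulating each wire as a Bool once more: gates composed into a half adder, then a full adder, then a multi-bit “ripple-carry” adder of the sort an ALU contains. The names are illustrative, and real hardware does this in parallel rather than recursively.

    xorG, andG :: Bool -> Bool -> Bool
    xorG a b = a /= b
    andG a b = a && b

    -- One-bit sum plus a carry bit.
    halfAdder :: Bool -> Bool -> (Bool, Bool)
    halfAdder a b = (xorG a b, andG a b)

    -- Chain two half adders and merge their carries.
    fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)
    fullAdder cin a b = (s, c1 || c2)
      where
        (s1, c1) = halfAdder a b
        (s,  c2) = halfAdder s1 cin

    -- Add two equal-length, little-endian bit vectors, the way a CPU adds
    -- fixed-width registers; the final extra bit is the carry out.
    rippleAdd :: [Bool] -> [Bool] -> [Bool]
    rippleAdd = go False
      where
        go carry (a : as) (b : bs) = s : go c as bs
          where (s, c) = fullAdder carry a b
        go carry _ _ = [carry]

In GHCi, rippleAdd [True, False] [True, False] (1 + 1 on two bits) gives [False, True, False], i.e. 2 with a zero carry out.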

That gives us some “CPU” hardware to run arbitrary programs, though modern CPUs are a lot more complicated, largely to deal with how much faster than RAM they’ve become.

In the early days we loaded software into our computers using holes punched in paper, but today we rely entirely on software to do it. And we have software to translate from a more human-friendly language to what the machine natively understands.


A “programming language” tends to follow these general steps in order to do its translation (a toy version is sketched after this list):

  1. “Lex” the input text into a sequence of “tokens”.
  2. “Parse” the relationships between those tokens into a hierarchical “Abstract Syntax Tree”.
  3. “Analyze” that AST in order to extract more implicit information.
  4. “Optimize” the code to run slightly faster.
  5. Figure out where to put all the data.
  6. “Compile” it from the AST into machine code.
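
Here’s a toy version of that pipeline for arithmetic like “1 + 2 * 3”, covering steps 1, 2, and 6 in Haskell; the names and the little stack-machine “machine code” are made up for illustration.

    import Data.Char (isDigit)

    data Token = TNum Int | TPlus | TTimes deriving Show
    data AST   = Num Int | Add AST AST | Mul AST AST deriving Show
    data Instr = Push Int | IAdd | IMul deriving Show

    -- Step 1: lex the text into tokens.
    lexer :: String -> [Token]
    lexer []       = []
    lexer (c : cs)
      | c == '+'  = TPlus  : lexer cs
      | c == '*'  = TTimes : lexer cs
      | c == ' '  = lexer cs
      | isDigit c = let (ds, rest) = span isDigit (c : cs)
                    in TNum (read ds) : lexer rest
      | otherwise = error ("unexpected character: " ++ [c])

    -- Step 2: parse the tokens into an AST, with * binding tighter than +.
    expr, term :: [Token] -> (AST, [Token])
    expr ts = case term ts of
      (l, TPlus : rest) -> let (r, rest') = expr rest in (Add l r, rest')
      done              -> done
    term (TNum n : TTimes : ts) = let (r, rest) = term ts in (Mul (Num n) r, rest)
    term (TNum n : ts)          = (Num n, ts)
    term ts                     = error ("unexpected tokens: " ++ show ts)

    -- Step 6: compile the AST down to stack-machine instructions.
    compile :: AST -> [Instr]
    compile (Num n)   = [Push n]
    compile (Add a b) = compile a ++ compile b ++ [IAdd]
    compile (Mul a b) = compile a ++ compile b ++ [IMul]

    main :: IO ()
    main = mapM_ print (compile (fst (expr (lexer "1 + 2 * 3"))))

Running it prints Push 1, Push 2, Push 3, IMul, IAdd: the multiplication ends up nested under the addition, exactly as the precedence demands.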

Using these “compilers” you can write your code in C, etc. rather than machine code, with variables and pointers (for 1), while/for loops (for 2), and if statements (for 3).

I/O (4) tends to get a lot more complicated in order for the computer to output something you can understand, so it tends to get relegated to the standard library, where it calls down through “language bindings” and “system calls” into the operating system’s “kernel”, which then coordinates output to the hardware.
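
As a sketch of the bottom of that chain, assuming a POSIX system (a real standard library layers buffering, text encoding, locking, and error handling on top): we can bind straight through the C “language binding” to the write system call, handing the kernel some bytes for file descriptor 1, stdout.

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.String (withCStringLen)
    import Foreign.C.Types (CChar, CInt (..), CSize (..))
    import Foreign.Ptr (Ptr)
    import System.Posix.Types (CSsize (..))

    -- libc's thin wrapper around the write(2) system call.
    foreign import ccall unsafe "write"
      c_write :: CInt -> Ptr CChar -> CSize -> IO CSsize

    main :: IO ()
    main = withCStringLen "hello, kernel\n" $ \(buf, len) -> do
      _ <- c_write 1 buf (fromIntegral len)   -- fd 1 is stdout
      pure ()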


Though I find functional programming, as in Haskell, to be a clearer expression of the concept. There data is stored (1) by passing it into functions, loops are expressed exclusively through recursion (2), pattern matching (which is a lot more expressive!) and higher-order functions serve as the conditionals (3), and your function’s arguments and return value are its I/O (4).

Everything else is a standard library abstraction, or some syntactic sugar upon these constructs.
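
For instance, counting the even numbers in a list uses all four at once:

    countEvens :: [Int] -> Int         -- (4) argument in, result out
    countEvens []       = 0            -- (3) pattern matching branches on the data
    countEvens (x : xs)                -- (1) xs carries the data still to be read
      | even x    = 1 + countEvens xs  -- (2) recursion repeats
      | otherwise = countEvens xs

    -- Or, via the standard library's higher-order functions:
    countEvens' :: [Int] -> Int
    countEvens' = length . filter even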

Fin.