Daniel Wilson-Thomas

Combinatron: Introduction

Modern central processing units are marvelous feats of engineering. They enable us to create solutions to problems never thought possible. They do, however, have one deficiency: a single processing core can only do one thing at a time. We’ve attempted to remedy that by placing more processing cores on a single chip. This works up to a point, but programming and interacting with those cores without causing deadlocks, data races, or other problems is difficult and error-prone. The Combinatron project is an exploration of a different approach to the traditional, widespread von Neumann architecture.

You might be familiar with the concept of a Turing machine as a model of computation. There are many models of computation, all with different properties. Their most important shared feature is that they are all equivalent in capability: they can all compute the same programs. A Turing machine is a simple model consisting of a read head and a sequence of symbols; following a set of rules, the machine traverses and modifies the symbols as appropriate. You might also be familiar with the lambda calculus. This is a model of computation “based on function abstraction and application using variable binding and substitution”. Intuitively, this means you can write an expression according to the syntax rules of the lambda calculus, then apply reduction rules to the expression until it cannot be reduced any further. This process of reduction is where the work of computation happens.
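For a concrete flavor of what a reduction step looks like, here is a single application of the standard beta-reduction rule, substituting the argument \(y\) for the bound variable \(x\):

\[ (\lambda x.\, x\, x)\, y \to y\, y \]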

An expression in the lambda calculus can be represented as an abstract syntax tree, and the same reduction rules can then be applied to the tree, mutating it as the reduction steps proceed. Each reduction is entirely local to the particular sub-expression being reduced, which means reduction is also inherently parallel. Just as the arithmetic expression \((1 + 2) + (3 + 4)\) can be reduced to \((3) + (7)\) with both additions performed at the same time, so too can independent lambda expressions be reduced at the same time. The tree can also be pre-processed before work begins to share identical sub-expressions. This transforms the tree into a graph, with the useful property that work can be shared and duplicated effort avoided.
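To make the tree view concrete, here is a minimal sketch in Haskell of such a syntax tree. The `Expr` type and the `example` term are my own hypothetical illustrations, not the Combinatron’s actual representation:

```haskell
-- Hypothetical illustration: a bare-bones application tree,
-- not the Combinatron's actual in-memory format.
data Expr
    = Leaf String   -- a combinator, operator, or constant
    | App Expr Expr -- application of one expression to another
    deriving (Show, Eq)

-- (1 + 2) + (3 + 4) in curried form: (+) ((+) 1 2) ((+) 3 4).
-- The subtrees bound to inner1 and inner2 share nothing, so a
-- parallel reducer could work on both at once.
example :: Expr
example = App (App (Leaf "+") inner1) inner2
  where
    inner1 = App (App (Leaf "+") (Leaf "1")) (Leaf "2")
    inner2 = App (App (Leaf "+") (Leaf "3")) (Leaf "4")
```

Naming a repeated subtree once and referencing it from several places is exactly the sharing step that turns the tree into a graph.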

The lambda calculus includes both variable abstraction and application as part of its definition. However, abstraction isn’t strictly necessary: application alone is enough for a Turing-equivalent model. This is good news, because an algorithm for reducing lambda expressions runs into significant complexity when it comes time to substitute variables. In the lambda calculus, an expression is said to be a combinator if it acquires all of its data as arguments; that is, it cannot rely on any external or global state. It turns out that with a few clever combinators and the same reduction rules from the lambda calculus, an incredibly simple language can be constructed. A set of these combinators is called a basis, and the traditional basis is the SK basis. The S and K combinators have the following definitions:

\[ S x y z = x z (y z) \] \[ K x y = x \]
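As a quick check that these two rules really do compute, the identity function falls out of them directly: applying \(S\,K\,K\) to any argument \(x\) returns \(x\).

\[ S\,K\,K\,x \to K\,x\,(K\,x) \to x \]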

The Combinatron uses a different combinator basis, the BCKW basis, for reasons of efficiency that will be explained in a later article. The B, C, K, and W combinators have the following definitions:

\[ B x y z = x (y z) \] \[ C x y z = x z y \] \[ K x y = x \] \[ W x y = x y y \]
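The same trick works here; for example, the identity function can be recovered from W and K alone:

\[ W\,K\,x \to K\,x\,x \to x \]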

The Combinatron uses these four combinators as a basis for its instruction set. It processes these along with a handful of other instructions to perform reductions on a program until it cannot reduce any further.
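To give a rough sense of what “reduce until it cannot reduce any further” means for this basis, here is a toy normal-order reducer in Haskell. This is my own hypothetical sketch; the machine’s real instruction encoding and evaluation strategy are defined in the specification, not here.

```haskell
-- Hypothetical sketch of BCKW reduction, not the Combinatron's
-- actual instruction semantics.
data Term = B | C | K | W | Var String | App Term Term
    deriving (Show, Eq)

-- Attempt one leftmost-outermost reduction step.
step :: Term -> Maybe Term
step (App (App (App B x) y) z) = Just (App x (App y z))  -- B x y z = x (y z)
step (App (App (App C x) y) z) = Just (App (App x z) y)  -- C x y z = x z y
step (App (App K x) _)         = Just x                  -- K x y   = x
step (App (App W x) y)         = Just (App (App x y) y)  -- W x y   = x y y
step (App f a) = case step f of                          -- otherwise recurse,
    Just f' -> Just (App f' a)                           -- function side first
    Nothing -> App f <$> step a
step _ = Nothing

-- Repeatedly step until no rule applies. Like any normalizer,
-- this loops forever on divergent terms.
reduce :: Term -> Term
reduce t = maybe t reduce (step t)
```

For example, `reduce (App (App W K) (Var "x"))` steps through `K x x` and returns `Var "x"`, the identity derivation shown above.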

My roadmap for the Combinatron is as follows. First, I wanted a consistent formal semantics for how the machine works. This has undergone a few major revisions, but I’m confident that it is now mostly stable. Second, I wanted a software implementation of the machine for experimentation. This is also near completion. Finally, I plan to implement the machine in hardware using an FPGA. This will give me a working version in silicon that I’ll be able to benchmark and improve. I don’t have any experience working with FPGAs, so I anticipate this taking some time.

In its basic form the Combinatron is still sequential: it can only perform one reduction at a time. The end goal, then, is to eventually write an operating system for the Combinatron that can manage and distribute work among many individual processing cores, creating a truly massively parallel processing network.

I always appreciate questions and contributions to the work. Here are links to the specification and emulator. You can contact me at danielwilsonthomas@gmail.com.