You might have heard that functional programming is based on Lambda Calculus – and indeed this is true (LISP was greatly influenced by Alonzo Church's Lambda Calculus, and you might have heard of "lambda"s in other functional languages as well). But originally Church tried to reason about mathematics based not on sets but on functions – and created Lambda Calculus as a tool for this.

For now this series is going to explain only the so-called untyped lambda calculus, and I will not try to give you every single technical aspect (as this is rather involved and might look rather boring to people new to these ideas). For example I will give you some of the definitions but will not go into the nastier technicalities concerning beta-reduction that can easily arise if you accidentally bind a variable in the process.

I did write a small lambda-calculus interpreter you can download and play with that does handle those problems (or so I hope – in case you find a bug, please leave me a comment or send me a mail).

So let's get started by giving you the basic

## Definitions

Lambda Calculus deals with terms, so we should start by saying what a term is. This is indeed rather simple – a term can be:

- just an **Atom** or **Variable** with a given name – these are the basic blocks we won't change
- a so-called **Abstraction**, which gives the whole thing its name: if \(x\) is a variable and \(T\) a term (so you see the definition is of course recursive) then \(\lambda x . T\) is called an abstraction – we will think of this as a function taking a parameter \(x\) with body \(T\), but it's important to understand that lambda calculus works by simply following rules without attaching any semantic meaning to the symbols!
- a construct called an **Application**: if \(A\) and \(B\) are terms then we will write this as \( (A B) \) – we will think of this as applying the argument \(B\) to the function \(A\), but again: this is not important for the system, it just gives us a good intuition about what is happening behind the scenes.

That's it – you don't need any more syntax. But to make things a bit more readable I will allow introducing names or identifiers for terms like this $$ Id := \lambda x.x $$ and we will use a form of currying: instead of having to write something like $$ \lambda x . \left( \lambda y . (y x) \right)$$ I will allow writing $$ \lambda xy. (y x) $$. And we will treat application as left-associative, so instead of \( (A B)C \) we will simply write \( A B C \) – my interpreter allows you to do the same. By the way: as there is no \( \lambda \) on your common keyboard, the interpreter expects you to write abstractions as

```
Id := \x.x
```
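The currying shorthand can be mirrored in any language with first-class functions. Here is a tiny Python sketch (the names are mine, purely for illustration) showing that \(\lambda xy.(y x)\) is really two nested one-argument abstractions:

```python
# \xy.(y x) is shorthand for \x.(\y.(y x)): two nested one-argument functions.
pair_apply = lambda x: lambda y: y(x)

# Left-associative application: "pair_apply 3 succ" means "(pair_apply 3) succ".
succ = lambda n: n + 1
print(pair_apply(3)(succ))  # 4
```

Applying one argument at a time is exactly what the left-associativity convention expresses.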

## Magic calculations or the \( \beta \)-reduction

This is where things get interesting. Look at an application \( A B \): if \( A \) is an abstraction \( \lambda x.F \) then we can use our intuition and think of this as the argument \(B\) being passed to the function with body \(F\), and the \(\beta\)-reduction rule does just this: it will substitute every "free" occurrence of \(x\) in \(F\) by \(B\).

Here "free" is a bit of a technical term – to make things no more complicated than they have to be: every \(x\) in the body \(B\) of an abstraction \(\lambda x.B \) is said to be bound, and the free variables are just those not bound – so this rule makes sure that we only apply the argument in those places where we don't "override" the meaning of \(x\). There are other things you have to take care of when substituting terms (for example if you substitute \(y\) by \(x\) in \(\lambda x.y \) you end up getting \(\lambda x.x\), but the substitution should not change the meaning of the construct, so you have to rename the bound \(x\) first to make sure not to shoot yourself in the foot – take a look at the wiki article or consult a book if you are interested in these things – they are called \( \alpha \)-conversions and substitution rules).
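To make this concrete, here is a hedged Python sketch of free-variable computation and capture-avoiding substitution. It encodes terms as plain tuples – `("var", x)`, `("abs", x, T)`, `("app", A, B)` – an encoding I chose for brevity here, not something taken from the interpreter mentioned above:

```python
def free_vars(t):
    """Set of variable names occurring free in term t."""
    kind = t[0]
    if kind == "var":
        return {t[1]}
    if kind == "abs":
        return free_vars(t[2]) - {t[1]}      # the parameter is bound in the body
    return free_vars(t[1]) | free_vars(t[2]) # application: union of both sides

def subst(t, x, s, fresh=[0]):
    """Replace free occurrences of x in t by s, renaming binders to avoid capture."""
    kind = t[0]
    if kind == "var":
        return s if t[1] == x else t
    if kind == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                               # x is shadowed here: nothing to do
        return t
    if y in free_vars(s):                    # would capture: alpha-rename y first
        fresh[0] += 1
        z = y + str(fresh[0])
        body = subst(body, y, ("var", z))
        y = z
    return ("abs", y, subst(body, x, s))

# Substituting x for y in \x.y must not capture x: the binder gets renamed.
print(subst(("abs", "x", ("var", "y")), "y", ("var", "x")))
# ('abs', 'x1', ('var', 'x'))
```

Without the renaming step the result would be \(\lambda x.x\) – exactly the accidental capture described above.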

So as an example let's use this rule on $$(\lambda x.x) a$$: the rule tells us to substitute every occurrence of \(x\) with \(a\), so of course the result is just \(a\) (and your intuition that this is just a form of identity is surely right).
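In a language with first-class functions the same reduction happens whenever we apply a function – a trivial Python illustration:

```python
# Id := \x.x – applying it performs exactly the beta-reduction (\x.x) a -> a.
identity = lambda x: x
print(identity("a"))  # a
print(identity(42))   # 42
```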

The “magic” here is that this rule is basically all you need to do computations!

Don't believe it? Well, let's see how we can get lambda calculus to add natural numbers…

## Natural numbers: representation and simple operations

You might remember from school that we can represent the natural numbers just by defining **zero** and a **successor** function that yields the next natural number after its argument. We will use this here as well, so we first have to say what zero should be in our calculus.

There are a couple of good definitions, and you could start by saying that zero should be just a variable – but as we deal with functions, and as it will prove to be ingenious, we will say that zero is a function of two arguments: $$ Z0 := \lambda fx.x $$ (Hint: you can think of \(f\) as a function and of our number representation as doing something like $$n \rightarrow x \rightarrow f^{n}( x )$$).

Now we still need the **Succ** operator, and as the hint suggests this should just prepend one more \(f\) in front of the argument number: $$ Succ := \lambda n . \left( \lambda fx.f(nfx) \right) $$ – as you see this takes a number \(n\) and "returns" (using our intuition) a function that "calls" \(n\) with its arguments (\(f\) and \(x\)) and passes the result of this once more into \(f\).

So now that we have the representation of the natural numbers (for example \( 3 == Succ (Succ (Succ Z0)) \)) we can look for something like an addition operator – that is, a function taking two numbers \(n\) and \(m\) (represented by our abstractions) and returning the representation of \(n+m\).

This is not too complicated: $$ Add := \lambda nm . \left( \lambda fx.nf(mfx) \right) $$.
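As a quick sanity check, these definitions translate directly into Python lambdas (a sketch; `decode` is a helper of mine that applies "add one" \(n\) times to 0, reading a numeral back as an ordinary integer):

```python
# Church numerals as nested one-argument functions, mirroring the definitions:
# Z0 := \fx.x, Succ := \nfx.f(nfx), Add := \nmfx.nf(mfx)
Z0   = lambda f: lambda x: x
Succ = lambda n: lambda f: lambda x: f(n(f)(x))
Add  = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))

# Decode a numeral by applying "add one" n times to 0.
decode = lambda n: n(lambda i: i + 1)(0)

one = Succ(Z0)
print(decode(Add(one)(one)))          # 2
print(decode(Succ(Succ(Succ(Z0)))))  # 3
```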

So let's use this to double \(1 == Succ\ Z0 == \lambda fx.fx \):

$$ Add 1 1 == \left( \lambda nm . \left( \lambda fx.nf(mfx) \right) \right) \left(\lambda fx.fx\right) \left(\lambda fx.fx\right)$$

$$ =\beta= \left( \lambda m . \left( \lambda fx.\left(\lambda fx.fx\right)f(mfx) \right) \right) \left(\lambda fx.fx\right)$$

$$=\beta= \left( \lambda fx.\left(\lambda fx.fx\right)f\left(\left(\lambda fx.fx\right)fx\right) \right)$$

$$=\beta= \left( \lambda fx.\left(\lambda fx.fx\right)f\left(fx\right) \right)$$

$$=\beta= \lambda fx.f(fx) == 2$$

Or using my little interpreter:

```
LAMBDA CALCULUS INTERPRETER
===========================
>>> Z0 := \fx.x
\fx.x == [Z0]
>>> Succ := \nfx.f(nfx)
\nfx.(f(nfx)) == [Succ]
>>> Add := \nmfx.nf(mfx)
\nmfx.(nf(mfx)) == [Add]
>>> Z1 := Succ Z0
\fx.(fx) == [Z1]
>>> Z2 := Succ Z1
\fx.(f^{2} x) == [Z2]
>>> Add Z1 Z1
\fx.(f^{2} x) == [Z2]
```

## Something similar using F#

Just to emphasize that F# as a functional programming language can handle the same stuff: here is a simple F# script that demonstrates the same kind of definitions and operations for natural numbers – maybe you will find this more readable than the lambda-calculus notation.

```fsharp
module NaturalNumber =
    let Z0 = fun f x -> x
    let Succ n = fun f x -> f (n f x)
    let Add n m = fun f x -> n f (m f x)
    let Z1 = fun f x -> Succ Z0 f x
    let interprete n = n (fun i -> i+1) 0
    do printfn "1 + 1 = %d" (interprete (Add Z1 Z1))
```

As you can see this is more or less exactly what we did with our lambda-calculus terms (we have to use the "fun f x ->" part to make the compiler happy), and the "interprete" function just translates our intuition about symbols like Z0 and Succ into F#'s representation of numbers.

That's it for today – I hope you enjoyed this first overview of the lambda calculus – make sure to let me know what you think!