# Introduction to Minimax

This is the start of my new “Connect4” series, in which we will build the small game I showed you in the teaser post. So let’s get going.

## What is this all about?

The basic idea is this: for some kinds of games (like Connect-4) we can view the state of the game at any given moment as a node in a (possibly very big) tree, where an edge from a node A to a node B represents a valid move in state A leading to the new state B. We are only interested in games between two players where in any state S either player 1 or player 2 chooses an edge. In our game the two players alternate, taking turns one after the other until the game has ended (one player has won or it’s a draw). Our winning strategy is simply to choose an edge whose subtree guarantees us a favorable outcome in the end, no matter what the other player does.

The search algorithm for this (leading to Minimax) is not hard to understand. Give every leaf (a node without children, because the game has ended in that state) the value 1 if player 1 has won, the value -1 if player 2 has won, or 0 if it’s a draw. Now recursively give each internal node of the tree a value: the maximum of its children’s values if it’s player 1’s turn, or the minimum if player 2 has to choose. This way every child of the start node ends up with a value of 1, 0 or -1. If player 1 can choose an edge leading to a 1-node, he will win by always choosing one; he will get a draw if he cannot reach a 1 but can reach a 0; and he will lose for sure if he can only choose -1-nodes.
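To make the recursion concrete, here is a minimal sketch of the full-tree version over an explicit tree type. The `GameTree` type and the leaf values are made up for illustration; the generic, depth-limited F# version this series actually uses follows further down.

```fsharp
// A tiny hand-built game tree: leaves carry 1 (player 1 wins),
// -1 (player 2 wins) or 0 (draw); inner nodes just list their children.
type GameTree =
    | Leaf of int
    | Node of GameTree list

// Classic full-tree Minimax: take the maximum of the children's values
// on player 1's turns, the minimum on player 2's.
let rec minimax (maximizing : bool) (tree : GameTree) =
    match tree with
    | Leaf v  -> v
    | Node cs ->
        let values = cs |> List.map (minimax (not maximizing))
        if maximizing then List.max values else List.min values

// Player 1 to move: the left branch lets player 2 force a draw,
// the right branch wins no matter what player 2 does.
let example =
    Node [ Node [ Leaf 0; Leaf 1 ]      // player 2 will pick the 0 here
           Node [ Leaf 1; Leaf 1 ] ]    // player 2 can only pick a 1

printfn "%d" (minimax true example)     // prints 1
```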

You may ask: “So you are saying it’s stupid to play chess, because if it’s played perfectly it’s clear who wins?” Yes, I am. BUT: in that case the tree is so big that you won’t be able to calculate it with the memory and speed of current computers. It is easy for games like Tic-Tac-Toe though, and indeed it has been done for Connect-4.

But we won’t do this. Instead we will only descend the tree for a few levels, creating the child nodes as we go. Then of course we don’t always know whether the node we are looking at in the end is a winning node. If it is, fine: we give it a very big or very small value, depending on which player wins (player 1: very big, player 2: very small). If not, we have to give the node some rating based on its board state. This really is the hard part: for most games you need a very good method to calculate this value from the board state, and for games like Go this is very hard. In our case a very dumbed-down version of this rating (we will talk about it later) will do the job so well that the little program will beat novices like me (I haven’t played the game since I was a school kid, and wasn’t good at it either) with no sweat by only looking 4 moves ahead, a feat that takes almost no time on a normal PC nowadays.
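The “decided games get extreme values, everything else gets an estimate” idea can be sketched like this. Note this is a hypothetical stand-in: the `Player` and `Board` types and the stored heuristic are made up for illustration and are not the real Connect-4 code of this series.

```fsharp
// Hypothetical, dumbed-down board model - just enough to show the shape
// of a depth-limited rating function.
type Player = One | Two

type Board =
    { Winner    : Player option    // Some p if the game is already decided
      Heuristic : float }          // a rough estimate for undecided games

/// rate a state from the maximizing player's (player 1's) point of view:
/// decided games get extreme values, everything else the heuristic estimate
let rate (b : Board) : float =
    match b.Winner with
    | Some One -> System.Double.MaxValue   // player 1 has won: as good as it gets
    | Some Two -> System.Double.MinValue   // player 2 has won: as bad as it gets
    | None     -> b.Heuristic              // undecided: trust the estimate
```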

## Simple Sample of Minimax in Action

What I told you so far is called Minimax, and here is a small walkthrough of a simple use case:

We start with only the root node. We don’t know its value yet, but we want to know what the next state should be (the move we have to make is more or less the same as the next state, because in our game there is an obvious bijective mapping between them: doing the move). Let’s say it’s player 1’s turn, so we are in a MAX layer, and we want to search 2 layers deep:

- As we have not yet reached the search depth (we are only at 0 of 2), we generate the children of this state and recursively move on, looking at the left child first.
- Now we are in a MIN layer, but still only at 1 of 2, so we move on. Looking again at the left child, we are now at level 2 of 2, so we should not step any deeper. Instead we rate this child; let’s say we get 2.
- We do the same with its right sibling, getting 4, and so the parent (one level higher, in the MIN layer) gets the value 2.
- Then of course we do the same with the right child at the 1-of-2 level (MIN); let’s say its only child is rated 3, so it gets the value 3.
- As the first node was in a MAX layer we should choose the child with the bigger value, and indeed this is what we do, arriving at the decision “take the second child (having value 3)”.

## Implementing this in F#

We first need a couple of easy type definitions to help us read the code. The same ones will be used for the alpha-beta pruning algorithm we will look at next, so they are a sensible thing to have:

```fsharp
// what kind of layer - minimizing / maximizing
type LayerRateType =
    | Min
    | Max

/// definition of a search space for generic states
/// CreateChildStates will be called to get all children of a given state
/// RateState will be called to rate a state based on the maximizing player!
type SearchSpace<'state> =
    { CreateChildStates : 'state -> 'state list
      RateState         : 'state -> float }

/// a search is a curried function that, given
/// * the search depth as int (how many layers of the tree should be searched)
/// * the search space to use (see above)
/// * the initial state R and a flag telling the method whether to begin with a minimizing or maximizing strategy
/// tries to find R's best child together with its rating from the maximizing player's point of view
/// (the least-rated child if LayerRateType.Min, the highest-rated otherwise)
type Search<'state> = int * SearchSpace<'state> -> ('state * LayerRateType) -> ('state * float) option
```


With this and what we talked about above, the code for the algorithm should be rather easy to understand (or so I hope). To help you with this I chose not to combine the min/max processing into a single function but to duplicate the code into two sub-functions. Indeed the alpha-beta pruning will fit very nicely into this structure and we will only have to change a few lines, but more on that next time. So here it is:


```fsharp
/// Implements a search-function using Minimax
module MiniMaxAlgorithm =

    /// private implementation of the algorithm
    let private searchMinMax (depth : int) (space : SearchSpace<'a>) (root : 'a, lrt : LayerRateType) : ('a * float) option =

        /// the method used for a MIN-layer
        let rec findMin d (node : 'a) =
            // find all children of the current node
            let children = space.CreateChildStates node
            if List.isEmpty children then None          // if there are no children we found nothing
            elif d = depth                              // else if we reached the max. search depth
            then                                        // then rate the current state and return it
                (node, space.RateState node) |> Some
            else                                        // else recursively find the best child
                let rec findBestChild (best : ('a * float) option) cs =
                    match cs with
                    | []     -> best
                    | c::cs' ->
                        let minV =
                            match best with
                            | Some (_, minV) -> minV
                            | None           -> System.Double.MaxValue
                        // recursive descent into the other layer-type
                        let recur = findMax (d+1) c
                        // if the child had no children of its own, rate it directly
                        let cVal =
                            match recur with
                            | Some (_, v) -> v
                            | None        -> space.RateState c
                        // check if this child is better than the current best one
                        if cVal < minV then findBestChild (Some (c, cVal)) cs'
                        else                findBestChild best cs'
                findBestChild None children

        /// the method used for a MAX-layer - similar to findMin
        and findMax d (node : 'a) =
            let children = space.CreateChildStates node
            if List.isEmpty children then None
            elif d = depth
            then
                (node, space.RateState node) |> Some
            else
                let rec findBestChild (best : ('a * float) option) cs =
                    match cs with
                    | []     -> best
                    | c::cs' ->
                        let maxV =
                            match best with
                            | Some (_, v) -> v
                            | None        -> System.Double.MinValue
                        let recur = findMin (d+1) c
                        let cVal =
                            match recur with
                            | Some (_, v) -> v
                            | None        -> space.RateState c
                        if cVal > maxV then findBestChild (Some (c, cVal)) cs'
                        else                findBestChild best cs'
                findBestChild None children

        // start the algorithm with the right layer-type
        match lrt with
        | LayerRateType.Max -> findMax 0 root
        | LayerRateType.Min -> findMin 0 root

    /// the public search function - see the definition in AlgorithmDefinitions.fs
    let Search : Search<_> = fun (searchDepth, space) -> searchMinMax searchDepth space
```
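To close the loop with the walkthrough from above, here is how that toy tree could be fed into the module. This sketch assumes the `SearchSpace` and `MiniMaxAlgorithm` definitions from this post are in scope; the string labels for the states are hypothetical, and the leaf ratings are the ones from the walkthrough.

```fsharp
// toy states: "root" with two children "a" and "b";
// "a" has leaves rated 2 and 4, "b" has a single leaf rated 3
let children state =
    match state with
    | "root" -> [ "a"; "b" ]
    | "a"    -> [ "a1"; "a2" ]
    | "b"    -> [ "b1" ]
    | _      -> []                 // leaves have no children

let rating state =
    match state with
    | "a1" -> 2.0
    | "a2" -> 4.0
    | "b1" -> 3.0
    | _    -> 0.0                  // inner nodes are never rated at depth 2

let space = { CreateChildStates = children; RateState = rating }

// search 2 layers deep, starting in a MAX layer
let result = MiniMaxAlgorithm.Search (2, space) ("root", LayerRateType.Max)
// result is Some ("b", 3.0): take the second child, just like in the walkthrough
```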


As always, feel free to leave comments, critique or questions below. And stay tuned for the next post in the series, explaining how to extend the ideas presented here to make them more efficient…