Learn Alpha Beta Pruning in AI – Boost Your Game Development

Introduction

Alpha Beta Pruning in AI is a search optimization that reduces the number of nodes the minimax algorithm evaluates in game trees and decision-making problems. It’s commonly used in two-player games like chess and checkers, allowing the algorithm to disregard branches that cannot affect the final decision, thus speeding up the computation.

The principle behind alpha-beta pruning is to eliminate large parts of the search tree using a branch-and-bound technique. It works by maintaining two values, alpha and beta, which record the best values found so far for the maximizing and minimizing players, respectively.

Alpha Beta Pruning in AI

The number of game states is exponential in the depth of the tree. No algorithm can completely eliminate the exponent, but we can sometimes cut it in half, computing the correct minimax decision without examining every state by pruning large parts of the tree that make no difference to the outcome. The particular technique we examine is called Alpha Beta Pruning in AI.

Let’s go through the calculation of the optimal decision. The steps are explained in the figure below, where the minimax decision is identified without evaluating two of the leaf nodes.

[Figure: Alpha Beta Pruning in AI – stages in the calculation of the optimal decision for the game tree]

The figure above shows the stages in the calculation of the optimal decision for the game tree.

  • At each point, we show the range of possible values for each node.
  • The first leaf below B has the value 3. Hence B, which is a MIN node, has a value of at most 3.
  • The second leaf below B has a value of 12; MIN would avoid this move, so the value of B is still at most 3.
  • The third leaf below B has a value of 8; we have seen all of B’s successor states, so the value of B is exactly 3. Now we can infer that the value of the root is at least 3, because MAX has a choice worth 3 at the root.
  • The first leaf below C has the value 2. Hence C, which is a MIN node, has a value of at most 2. But we know that B is worth 3, so MAX would never choose C. Therefore, there is no point in looking at the other successor states of C. This is an example of Alpha Beta Pruning.
  • The first leaf below D has the value 14, so D is worth at most 14. This is still higher than MAX’s best alternative (i.e., 3), so we need to keep exploring D’s successor states.

Notice also that we now have bounds on all of the successors of the root, so the root’s value is also at most 14.

  • The second successor of D is worth 5, so again we need to keep exploring. The third successor is worth 2, so now D is worth exactly 2. MAX’s decision at the root is to move to B, giving a value of 3.

Another way to look at this is as a simplification of the formula for MINIMAX. Let the two unevaluated successors of node C have values x and y. Then the value of the root node is given by:

MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
              = max(3, min(2, x, y), 2)
              = max(3, z, 2)    where z = min(2, x, y) ≤ 2
              = 3

In other words, the value of the root and hence the minimax decision are independent of the values of the unevaluated leaves x and y; that is why they can be pruned.
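To make the walkthrough concrete, here is a small Python sketch (not part of the original article; the min_node and max_root helpers are invented for illustration) that reproduces the tree above and records which leaves are actually examined:

def min_node(leaves, alpha, beta, visited):
    # Evaluate a MIN node over its leaf values, applying alpha cut-offs.
    value = float("inf")
    for leaf in leaves:
        visited.append(leaf)        # record every leaf we actually examine
        value = min(value, leaf)
        beta = min(beta, value)
        if value <= alpha:          # alpha cut-off: MAX already has a better choice
            break
    return value

def max_root(children):
    # Evaluate the MAX root over its MIN children (each a list of leaf values).
    visited = []
    alpha, best = float("-inf"), float("-inf")
    for leaves in children:
        best = max(best, min_node(leaves, alpha, float("inf"), visited))
        alpha = max(alpha, best)
    return best, visited

# B = [3, 12, 8], C = [2, x, y], D = [14, 5, 2]; "x" and "y" stand for the
# two leaves that are never evaluated.
value, visited = max_root([[3, 12, 8], [2, "x", "y"], [14, 5, 2]])
print(value)    # 3
print(visited)  # [3, 12, 8, 2, 14, 5, 2]: x and y are pruned

Only seven of the nine leaves are examined, and the root value of 3 matches the hand calculation above.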

Alpha Beta Pruning can be applied to trees of any depth, and it is often possible to prune entire subtrees rather than just leaves. The general principle is: 

Consider a node n somewhere in the tree (see the image below), such that Player has a choice of moving to n. If Player has a better choice, either at the same level or at any point higher up in the tree, then Player will never move to n in actual play. So once we have found out enough about n to reach this conclusion, we can prune it.

[Figure: Alpha Beta Pruning – the general case]

The image displays the general case for Alpha Beta Pruning: if m or m’ is better than n for Player, we will never get to n in play.

Remember that minimax search is depth-first, so at any one time we just have to consider the nodes along a single path in the tree. Alpha Beta Pruning gets its name from the two extra parameters in MAX-VALUE (state, α, β) that describe bounds on the backed-up values that appear anywhere along the path:

α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX. Think: α = “at least.” 

β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN. Think: β = “at most.”

Pseudocode for Alpha Beta Pruning

Here is a basic pseudocode implementation of alpha beta pruning integrated into the minimax algorithm:

				
function alphaBeta(node, depth, alpha, beta, maximizingPlayer)
    if depth == 0 or node is a terminal node
        return the heuristic value of node

    if maximizingPlayer
        maxEval = -infinity
        for each child of node
            eval = alphaBeta(child, depth - 1, alpha, beta, false)
            maxEval = max(maxEval, eval)
            alpha = max(alpha, eval)
            if beta <= alpha
                break  // β cut-off
        return maxEval

    else
        minEval = +infinity
        for each child of node
            eval = alphaBeta(child, depth - 1, alpha, beta, true)
            minEval = min(minEval, eval)
            beta = min(beta, eval)
            if beta <= alpha
                break  // α cut-off
        return minEval
				
			

Explanation of the Pseudocode:

1. Base Case: The recursion ends if the desired depth is reached or the node is terminal (no further moves possible).

2. Maximizing Player:

  • Initialize maxEval to negative infinity.
  • Iterate through child nodes.
  • Recursively call alphaBeta for minimizing condition.
  • Update maxEval and alpha.
  • If alpha is greater than or equal to beta, break the loop (prune the rest of the subtree).

3. Minimizing Player:

  • Initialize minEval to positive infinity.
  • Iterate through child nodes.
  • Recursively call alphaBeta for maximizing condition.
  • Update minEval and beta.
  • If beta is less than or equal to alpha, break the loop (prune the rest of the subtree).
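Translated into runnable form, the pseudocode might look like the following Python sketch. The node interface (is_terminal(), children(), and evaluate()) is an assumption made for illustration, not a fixed API:

import math

def alpha_beta(node, depth, alpha, beta, maximizing_player):
    # Depth-limited minimax with alpha-beta pruning, mirroring the pseudocode above.
    if depth == 0 or node.is_terminal():
        return node.evaluate()               # heuristic value of the node

    if maximizing_player:
        best = -math.inf
        for child in node.children():
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:                # beta cut-off
                break
        return best
    else:
        best = math.inf
        for child in node.children():
            best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:                # alpha cut-off
                break
        return best

# A search from the root would typically start with the full window:
# alpha_beta(root, depth=4, alpha=-math.inf, beta=math.inf, maximizing_player=True)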

Alpha Beta Pruning Algorithm

				
function ALPHA-BETA-SEARCH(game, state) returns an action
    player ← game.TO-MOVE(state)
    value, move ← MAX-VALUE(game, state, −∞, +∞)
    return move

function MAX-VALUE(game, state, α, β) returns a (utility, move) pair
    if game.IS-TERMINAL(state) then return game.UTILITY(state, player), null
    v ← −∞
    for each a in game.ACTIONS(state) do
        v2, a2 ← MIN-VALUE(game, game.RESULT(state, a), α, β)
        if v2 > v then
            v, move ← v2, a
            α ← MAX(α, v)
        if v ≥ β then return v, move
    return v, move

function MIN-VALUE(game, state, α, β) returns a (utility, move) pair
    if game.IS-TERMINAL(state) then return game.UTILITY(state, player), null
    v ← +∞
    for each a in game.ACTIONS(state) do
        v2, a2 ← MAX-VALUE(game, game.RESULT(state, a), α, β)
        if v2 < v then
            v, move ← v2, a
            β ← MIN(β, v)
        if v ≤ α then return v, move
    return v, move

				
			

Notice that these functions are the same as the MINIMAX-SEARCH functions, except that we maintain bounds in the variables α and β, and use them to cut off search when a value is outside the bounds.

Alpha Beta search updates the values of α and β as it goes along and prunes the remaining branches at a node (i.e., terminates the recursive call) as soon as the value of the current node is known to be worse than the current α or β value for MAX or MIN, respectively. The complete algorithm is given above, and the earlier figure traces the progress of the algorithm on a game tree.
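For completeness, here is a Python rendering of the (utility, move) version above. The game interface (to_move, is_terminal, utility, actions, and result) mirrors the pseudocode but is an assumed naming, not a specific library:

import math

def alpha_beta_search(game, state):
    # Returns the best action for the player to move, as in ALPHA-BETA-SEARCH.
    player = game.to_move(state)

    def max_value(state, alpha, beta):
        if game.is_terminal(state):
            return game.utility(state, player), None
        v, move = -math.inf, None
        for a in game.actions(state):
            v2, _ = min_value(game.result(state, a), alpha, beta)
            if v2 > v:
                v, move = v2, a
                alpha = max(alpha, v)
            if v >= beta:                    # beta cut-off
                return v, move
        return v, move

    def min_value(state, alpha, beta):
        if game.is_terminal(state):
            return game.utility(state, player), None
        v, move = math.inf, None
        for a in game.actions(state):
            v2, _ = max_value(game.result(state, a), alpha, beta)
            if v2 < v:
                v, move = v2, a
                beta = min(beta, v)
            if v <= alpha:                   # alpha cut-off
                return v, move
        return v, move

    _, move = max_value(state, -math.inf, math.inf)
    return move

The nested helper functions keep player in scope, matching the way the pseudocode’s MAX-VALUE and MIN-VALUE refer to the player determined in ALPHA-BETA-SEARCH.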


Conclusion

Alpha Beta Pruning is a powerful optimization technique for the minimax algorithm, primarily used in game theory and AI for two-player games. By intelligently pruning branches of the search tree that will not affect the final decision, it allows for much deeper searches within the same time constraints, improving the efficiency and performance of game AI systems.

Implementing alpha-beta pruning can drastically reduce the computational burden, allowing for quicker decision-making processes while maintaining optimal gameplay strategies. This method not only speeds up the evaluation of possible moves but also enhances the AI’s ability to compete against human players by examining more potential outcomes in less time.

FAQs

What is Alpha Beta Pruning?

Alpha Beta Pruning is a search algorithm that aims to reduce the number of nodes the minimax algorithm evaluates in its search tree. It is an adversarial search technique, frequently employed for computer-controlled play of two-player combinatorial games such as Connect 4, Chess, and Tic Tac Toe.

What are alpha and beta cutoffs?

The maximizing player applies alpha cutoffs, which prune moves at the minimizing ply. Similarly, the minimizing player (the opponent) applies beta cutoffs, which prune moves at the maximizing ply. It’s also worth remembering that alpha and beta cutoffs are applied at different plies rather than simultaneously.

How does Alpha Beta search differ from minimax?

Minimax can be viewed as a two-pass search: the nodes at the depth limit are given heuristic values in the first pass, and those values are then propagated up the tree in the second pass. Alpha Beta search, by contrast, operates depth-first in a single pass. A MAX node’s initial or provisional value is known as its alpha value.

What do the alpha and beta parameters mean?

Alpha Beta Pruning can be applied to a tree at any depth, and in many cases entire subtrees, not just leaves, can be pruned. The two parameters are defined as follows:

  • Alpha: At every stage of the Maximizer’s path, this is the best (highest-value) option we have discovered thus far. Alpha has a starting value of −∞.
  • Beta: At every stage of the Minimizer’s path, this is the best (lowest-value) option we have discovered thus far. Beta has a starting value of +∞.