Can alpha-beta pruning be applied to expectiminimax for a 2048 AI without a minimizing player?

I am developing an AI for the 2048 puzzle game and originally planned to implement the minimax algorithm. However, I've realized that 2048's game tree is really an expectiminimax tree: there is no minimizing opponent. Only the AI makes moves, and random tiles appear afterward.

My question is, can alpha-beta pruning still be effectively utilized in this context where there is no Min player? Is it a requirement to have both Max and Min nodes for alpha-beta pruning to function?

If alpha-beta pruning is not suitable for this scenario, could you suggest alternative methods to reduce unproductive branches during the search? I want to enhance the performance of my AI while maintaining a good level of accuracy.

I appreciate any suggestions you might have!

You're correct - alpha-beta pruning struggles with expectiminimax because of the chance nodes. A cutoff is only safe when you know a branch cannot affect the final min/max choice, but at a chance node every child contributes to the expected value, so the averaging over random tile placements destroys the clean cutoff conditions that alpha-beta depends on.

There are algorithms like Star1 and Star2 that do prune in stochastic game trees, but they are quite complex, require known bounds on your evaluation function, and in practice provide little benefit here. In my experience developing a 2048 AI, focusing on intelligent heuristics rather than deep search yielded better results: implementing monotonicity and smoothness functions with a shallower lookahead proved far more effective than brute-force depth. Also consider Monte Carlo Tree Search, which handles randomness naturally without relying on the minimax framework at all. Ultimately, in 2048, strong position evaluation typically trumps deep searching.
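To make the heuristics concrete, here's a minimal sketch of monotonicity and smoothness terms for a 4x4 board (a tuple/list of rows of tile values, 0 = empty). The function names and the weights in `evaluate` are my own assumptions for illustration; you'd tune them for a real agent:

```python
import math

def monotonicity(board):
    """Reward rows/columns whose values consistently increase or decrease."""
    def line_score(line):
        vals = [math.log2(v) if v else 0 for v in line]
        inc = sum(vals[i + 1] - vals[i] for i in range(3) if vals[i + 1] > vals[i])
        dec = sum(vals[i] - vals[i + 1] for i in range(3) if vals[i] > vals[i + 1])
        return -min(inc, dec)  # penalize only the weaker direction
    return (sum(line_score(row) for row in board)
            + sum(line_score(col) for col in zip(*board)))

def smoothness(board):
    """Penalize large log-value differences between adjacent tiles."""
    penalty = 0
    for r in range(4):
        for c in range(4):
            if board[r][c]:
                v = math.log2(board[r][c])
                if c < 3 and board[r][c + 1]:
                    penalty -= abs(v - math.log2(board[r][c + 1]))
                if r < 3 and board[r + 1][c]:
                    penalty -= abs(v - math.log2(board[r + 1][c]))
    return penalty

def evaluate(board):
    # Weights are illustrative assumptions, not tuned values.
    empty = sum(1 for row in board for v in row if v == 0)
    return 1.0 * monotonicity(board) + 0.1 * smoothness(board) + 2.7 * empty
```

With these terms, an ordered board like `2 4 8 16` scores higher than the same tiles in a jumbled order, which is exactly the bias you want the search to follow.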

Alpha-beta pruning breaks down with expectiminimax because chance nodes break the pruning conditions. Pruning works when you've got that clean min-max alternation where you can safely ignore branches that can't change the outcome. But expectiminimax computes expected values at chance nodes, so every child matters and you usually end up evaluating everything anyway. For your 2048 AI, try depth limits with iterative deepening instead. Move ordering helps too - tackle score-boosting moves or merge setups first. A transposition table (caching evaluated board states) is huge, since 2048 revisits similar positions constantly. Honestly, I've had better luck combining a solid evaluation function with shallow search than trying to force pruning techniques that don't fit this type of problem.
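Here's a rough sketch of what that looks like: depth-limited expectimax alternating max nodes (your moves) with chance nodes (tile spawns: 2 with probability 0.9, 4 with 0.1), using `functools.lru_cache` as a simple transposition table. Boards are 4x4 tuples of tuples so they're hashable; the `evaluate` heuristic here is a deliberately trivial placeholder:

```python
from functools import lru_cache

def slide_row(row):
    """Slide and merge one row to the left (standard 2048 rule)."""
    tiles = [v for v in row if v]
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return tuple(out + [0] * (len(row) - len(out)))

def move(board, direction):
    """Apply a move: 0=left, 1=right, 2=up, 3=down."""
    if direction in (2, 3):
        board = tuple(zip(*board))             # transpose for vertical moves
    if direction in (1, 3):
        board = tuple(r[::-1] for r in board)  # reverse for right/down
    board = tuple(slide_row(r) for r in board)
    if direction in (1, 3):
        board = tuple(r[::-1] for r in board)
    if direction in (2, 3):
        board = tuple(zip(*board))
    return board

def evaluate(board):
    # Toy placeholder heuristic: empty-cell count plus largest tile.
    return sum(v == 0 for r in board for v in r) + max(v for r in board for v in r)

@lru_cache(maxsize=None)  # acts as a transposition table
def expectimax(board, depth, chance):
    if depth == 0:
        return evaluate(board)
    if chance:
        # Chance node: average over every empty cell spawning a 2 or a 4.
        empties = [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]
        if not empties:
            return evaluate(board)
        total = 0.0
        for r, c in empties:
            for tile, p in ((2, 0.9), (4, 0.1)):
                child = tuple(
                    tuple(tile if (i, j) == (r, c) else board[i][j] for j in range(4))
                    for i in range(4))
                total += p * expectimax(child, depth - 1, False)
        return total / len(empties)
    # Max node: try all four moves, skipping ones that change nothing.
    children = [move(board, d) for d in range(4)]
    scores = [expectimax(c, depth - 1, True) for c in children if c != board]
    return max(scores) if scores else evaluate(board)
```

For iterative deepening you'd just call `expectimax` with increasing depth until a time budget runs out; the cache also makes repeated shallow searches cheap.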
