I’m building an AI bot for the 2048 puzzle game using the expectiminimax approach. The catch is that in 2048 there’s no opponent trying to minimize my score, so I don’t have the typical Min nodes that you see in regular minimax trees.
Since alpha-beta pruning usually works by cutting off branches when we know the minimizing player won’t choose them, I’m confused about whether it can even be applied here. The game tree has Max nodes (my moves) and Chance nodes (random tile spawns), but no Min nodes.
If alpha-beta pruning isn’t suitable for this situation, what other techniques can I use to avoid exploring unnecessary branches and speed up the search? I want to make my AI more efficient without losing accuracy.
Any advice or alternative optimization methods would be really helpful. Thanks in advance!
Alpha-beta pruning doesn’t carry over directly to expectiminimax: a chance node’s value is a probability-weighted average over all its outcomes, so you can’t discard a branch the way you can when a minimizing player would never choose it. You can still prune on probability, though. Set a threshold and stop searching below random tile spawns whose cumulative probability is too small to meaningfully shift the expected value. I typically use the standard spawn odds (90% for 2-tiles, 10% for 4-tiles) and cut off branches contributing less than 1% of the total probability mass, which removes a large share of the tree with almost no effect on which move gets picked. Alternatively, Monte Carlo sampling lets you evaluate a random subset of tile spawns instead of every possible outcome; it trades some accuracy for a significant speed boost and often plays about as well.
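To make the threshold idea concrete, here’s a minimal Python sketch. Everything beyond the standard 90/10 spawn odds is my own assumption: the flat 16-cell board layout, the toy `evaluate()` heuristic, and the 1% `PRUNE_THRESHOLD` are placeholders to tune, not a finished engine. Branches whose cumulative probability falls below the threshold get a static evaluation instead of a deeper search:

```python
PROB_2, PROB_4 = 0.9, 0.1   # standard 2048 spawn odds
PRUNE_THRESHOLD = 0.01      # skip branches below 1% cumulative probability

def evaluate(board):
    """Toy heuristic: reward empty cells and the largest tile."""
    return 10 * board.count(0) + max(board)

def slide_row_left(row):
    """Slide and merge one row to the left, 2048-style."""
    tiles = [v for v in row if v]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (4 - len(merged))

def apply_move(board, move):
    """Board after a move (0=left, 1=right, 2=up, 3=down), or None if it's a no-op."""
    rows = [board[r * 4:(r + 1) * 4] for r in range(4)]
    if move in (2, 3):
        rows = [list(col) for col in zip(*rows)]   # transpose for vertical moves
    if move in (1, 3):
        rows = [row[::-1] for row in rows]          # mirror for right/down
    rows = [slide_row_left(row) for row in rows]
    if move in (1, 3):
        rows = [row[::-1] for row in rows]
    if move in (2, 3):
        rows = [list(col) for col in zip(*rows)]
    flat = [v for row in rows for v in row]
    return None if flat == board else flat

def max_node(board, depth, cum_prob):
    """Max node: pick the move with the best expected value."""
    children = [apply_move(board, m) for m in range(4)]
    children = [c for c in children if c is not None]
    if depth == 0 or not children:
        return evaluate(board)
    return max(chance_node(c, depth, cum_prob) for c in children)

def chance_node(board, depth, cum_prob):
    """Chance node: weighted average over spawns, pruning negligible ones."""
    cells = [i for i, v in enumerate(board) if v == 0]
    if not cells:
        return evaluate(board)
    total = 0.0
    for cell in cells:
        for value, p in ((2, PROB_2), (4, PROB_4)):
            weight = p / len(cells)
            child = board[:]
            child[cell] = value
            if cum_prob * weight < PRUNE_THRESHOLD:
                total += weight * evaluate(child)   # too unlikely: static eval only
            else:
                total += weight * max_node(child, depth - 1, cum_prob * weight)
    return total

def best_move(board, depth=3):
    """Return the move index with the highest expected value, or None."""
    scored = []
    for m in range(4):
        child = apply_move(board, m)
        if child is not None:
            scored.append((chance_node(child, depth, 1.0), m))
    return max(scored)[1] if scored else None
```

Swapping the threshold check for `random.sample()` over the empty cells gives you the Monte Carlo variant: average over a handful of sampled spawns instead of enumerating all of them.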
You can’t use regular alpha-beta pruning with expectiminimax, but there are other tricks. Transposition tables are perfect for 2048: you’ll see the same board positions over and over, so caching their values saves a lot of repeated search. Move ordering helps too, once you pair it with any kind of cutoff: just try the highest-scoring moves first. I’d also suggest staying shallow with a solid evaluation function instead of searching deep. It works way better than you’d expect for this game.
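For illustration, here’s a minimal transposition-table sketch. It reuses the hypothetical `apply_move`/`evaluate`/`chance_node` helpers from the sketch above (those names are assumptions, not a standard library) and acts as a drop-in replacement for that `max_node`:

```python
transposition = {}   # (board_key, depth) -> cached expected value

def max_node(board, depth, cum_prob):
    """Memoized max node; same interface as the earlier sketch."""
    # Cache on (position, depth). With threshold pruning the exact value can
    # vary slightly with cum_prob, so this trades a little accuracy for a
    # large speedup on repeated positions.
    key = (tuple(board), depth)
    if key in transposition:
        return transposition[key]

    children = [apply_move(board, m) for m in range(4)]
    children = [c for c in children if c is not None]
    if depth == 0 or not children:
        value = evaluate(board)
    else:
        value = max(chance_node(c, depth, cum_prob) for c in children)

    transposition[key] = value
    return value
```

In a long game you’d clear or size-limit the table between moves so it doesn’t grow without bound.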
You’re right that standard alpha-beta pruning doesn’t apply at expectiminimax chance nodes, but there are workarounds that work well for a 2048 AI. I’ve used a modified version where you can still prune a chance node’s remaining branches once its possible expected value can no longer affect the parent max node’s choice; this is essentially the idea behind Ballard’s *-minimax (Star1) pruning, and it requires your evaluation function to have known lower and upper bounds, plus carefully managed thresholds. Honestly, though, aggressive depth limiting with a solid heuristic beats trying to force alpha-beta variants. Iterative deepening helps too, since the shallow search results guide the deeper ones. For 2048 specifically, I get better performance by focusing compute on promising moves: order the moves well and limit the search to just the top few per position, rather than trying to prune the whole tree. It’s about balancing search depth against evaluation quality, not exploring everything.
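As a concrete illustration of the iterative deepening plus top-k idea, here’s a hedged sketch. It leans on the same hypothetical `apply_move`/`evaluate`/`chance_node` helpers as the sketches above, and `TOP_K` and the time budget are tunable assumptions, not settled values:

```python
import time

TOP_K = 2   # only the k most promising moves get the deeper search

def best_move_iterative(board, time_budget=0.05):
    """Deepen the search until the time budget runs out."""
    deadline = time.monotonic() + time_budget
    order = []
    for m in range(4):
        child = apply_move(board, m)
        if child is not None:
            order.append((m, child))
    if not order:
        return None
    # Initial move ordering from the static heuristic.
    order.sort(key=lambda mc: evaluate(mc[1]), reverse=True)
    best, depth = order[0][0], 1
    while time.monotonic() < deadline:
        # Only the current top-k candidates get the full chance-node search.
        results = [(chance_node(c, depth, 1.0), m, c) for m, c in order[:TOP_K]]
        results.sort(key=lambda r: r[0], reverse=True)
        best = results[0][1]
        # Re-order with the deeper scores: shallow results steer the next,
        # deeper iteration toward the most promising moves.
        order = [(m, c) for _, m, c in results] + order[TOP_K:]
        depth += 1
    return best
```

A production version would also check the clock inside the recursive search and abort the current depth cleanly, but the shape of the idea is the same.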