Can alpha-beta pruning be used in 2048 game AI without a minimizing player?

I’m working on building an AI bot for the 2048 puzzle game using a minimax approach. The thing is, 2048 doesn’t really have an opponent trying to minimize my score like in chess or checkers. It’s more like an expectiminimax situation where I’m maximizing and then there’s just random tile placement.

Since there’s no actual minimizing player in 2048, I’m not sure if alpha-beta pruning makes sense here. The game tree has my moves trying to get the highest score, then random 2 or 4 tiles appearing in empty spots.

If alpha-beta pruning won’t work for this type of game, what other techniques can I use to cut down on unnecessary branches in my search tree? I want to make my AI faster without checking every possible path.

Any advice on optimizing search for games with random elements would be really helpful.

You’re right - classic alpha-beta pruning doesn’t buy you anything in 2048, since there’s no adversarial player trying to minimize your score. The “opponent” is just random tile placement.

But you can still prune smartly. I built a similar bot last year and used probability thresholds: if a chance branch has under a 5% cumulative probability of occurring, skip it and fall back on a static evaluation. The speed boost is worth losing a tiny bit of accuracy.
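Here’s a minimal sketch of what that looks like in a depth-limited expectimax. Assumptions not in the post: the board is a 4x4 tuple of tuples with 0 for empty, and `evaluate` is a stand-in heuristic you’d replace with your own.

```python
PROB_CUTOFF = 0.05  # skip chance branches below 5% cumulative probability

def evaluate(board):
    return sum(t for row in board for t in row)  # placeholder heuristic

def slide_left(row):
    """Slide and merge one row to the left (standard 2048 rule)."""
    tiles = [t for t in row if t]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return tuple(merged + [0] * (4 - len(merged)))

def moves(board):
    """Yield the result of each of the four moves that changes the board."""
    rot = lambda b: tuple(zip(*b[::-1]))  # rotate 90 degrees clockwise
    b = board
    for k in range(4):
        candidate = tuple(slide_left(r) for r in b)
        for _ in range((4 - k) % 4):      # rotate back to the original frame
            candidate = rot(candidate)
        if candidate != board:
            yield candidate
        b = rot(b)

def max_node(board, depth, prob):
    options = list(moves(board))
    if depth == 0 or not options:
        return evaluate(board)
    return max(chance_node(b, depth, prob) for b in options)

def chance_node(board, depth, prob):
    cells = [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]
    if not cells:
        return evaluate(board)
    expected = 0.0
    for r, c in cells:
        for tile, p in ((2, 0.9), (4, 0.1)):  # standard 2048 spawn odds
            weight = p / len(cells)
            if prob * weight < PROB_CUTOFF:
                expected += weight * evaluate(board)  # cheap cutoff
                continue
            child = tuple(tuple(tile if (i, j) == (r, c) else board[i][j]
                                for j in range(4)) for i in range(4))
            expected += weight * max_node(child, depth - 1, prob * weight)
    return expected
```

Note the cutoff still adds a static evaluation weighted by the branch probability, so the expectation stays normalized instead of silently dropping mass.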

Cache your game states too. 2048 boards repeat constantly during search, so throw evaluations in a hash table. That cut my computation time by about 40%.
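A depth-aware version of that cache looks something like this. The names (`cached_search`, `search`) are illustrative; the one wrinkle worth copying is storing the depth alongside the value, so a result from an equal-or-deeper search can be reused safely.

```python
cache = {}  # board -> (depth, value)

def cached_search(board, depth, search):
    """Memoize `search(board, depth)`, reusing equal-or-deeper results."""
    hit = cache.get(board)
    if hit is not None and hit[0] >= depth:
        return hit[1]
    value = search(board, depth)
    cache[board] = (depth, value)
    return value
```

This requires an immutable (hashable) board representation, e.g. a tuple of tuples rather than nested lists.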

For random tiles, don’t expand every empty cell. Spawns are uniform over the empty cells, so no spot is literally more likely than another - but only a few placements actually matter to your strategy, so expand just those 3-4. If you’re doing corner play, only the spawns that threaten your stack are worth simulating.
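One way to pick those cells, as an illustrative heuristic I’m assuming rather than quoting from anywhere: rank empty cells by the sum of their adjacent tiles, on the theory that a spawn next to big tiles disrupts a corner stack the most.

```python
def top_chance_cells(board, k=4):
    """Return the k empty cells most likely to disrupt the position."""
    def danger(cell):
        r, c = cell
        # Sum of orthogonal neighbours: bigger neighbours = worse spawn spot.
        return sum(board[r + dr][c + dc]
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < 4 and 0 <= c + dc < 4)
    empties = [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]
    return sorted(empties, key=danger, reverse=True)[:k]
```

Expanding only these cells in your chance nodes trades a little expected-value accuracy for a big branching-factor cut.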

Depth limiting is huge. Go deep on promising branches, stay shallow on garbage ones. I used board smoothness and monotonicity to figure out which branches were worth exploring.
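The two heuristics mentioned above can look like this sketch. The log-scale weighting is a common but assumed choice, not the only one; higher scores mean a branch is more worth deepening.

```python
import math

def monotonicity(board):
    """Penalize rows/columns that aren't consistently ordered."""
    score = 0
    lines = list(board) + list(zip(*board))  # 4 rows + 4 columns
    for line in lines:
        vals = [math.log2(v) if v else 0 for v in line]
        inc = sum(max(vals[i + 1] - vals[i], 0) for i in range(3))
        dec = sum(max(vals[i] - vals[i + 1], 0) for i in range(3))
        score -= min(inc, dec)  # penalty for the wrong-direction steps
    return score

def smoothness(board):
    """Penalize large value gaps between orthogonal neighbours."""
    pen = 0
    for r in range(4):
        for c in range(4):
            if board[r][c]:
                v = math.log2(board[r][c])
                if c + 1 < 4 and board[r][c + 1]:
                    pen -= abs(v - math.log2(board[r][c + 1]))
                if r + 1 < 4 and board[r + 1][c]:
                    pen -= abs(v - math.log2(board[r + 1][c]))
    return pen
```

A simple scheme is to give an extra ply or two of depth to children whose `monotonicity + smoothness` beats some threshold.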

One more thing - precompute evaluation values when you can. Recalculating merge potential and position weights at every single node adds up fast.
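For example, you can score every possible 4-tile row once up front, after which a full board evaluation is just eight table lookups (4 rows + 4 columns). The per-row heuristic below is a placeholder and the tile alphabet tops out at 2048 purely for illustration.

```python
from itertools import product

TILES = (0, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048)

def row_score(row):
    # Placeholder: reward empty cells, plus a small bias toward the left.
    empties = sum(1 for t in row if t == 0)
    return 10 * empties + row[0]

# ~20k entries: every possible row over the tile alphabet, scored once.
ROW_TABLE = {row: row_score(row) for row in product(TILES, repeat=4)}

def evaluate(board):
    """Full-board evaluation via precomputed row lookups."""
    return (sum(ROW_TABLE[row] for row in board)
            + sum(ROW_TABLE[col] for col in zip(*board)))
```

Many fast 2048 bots push this further by packing each row into 16 bits and using a flat array instead of a dict, but the dict version shows the idea.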

Alpha-beta pruning won’t transfer directly - a chance node’s value is an expectation over all of its outcomes, so you can’t cut branches the way you would against a minimizing player (pruning variants for expectiminimax do exist, like Star1, but they need known bounds on your evaluation function). Branch and bound works great for 2048, though.

I built a 2048 AI with Monte Carlo Tree Search instead of straight minimax. Handles the randomness way better. You don’t need perfect tile placement probabilities either - sampling does the job.
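A stripped-down version of that idea is flat Monte Carlo (simpler than full MCTS/UCT): score each legal move by the average outcome of a few random playouts. The uniformly random playout policy, rollout length, and biggest-tile scoring below are all illustrative assumptions.

```python
import random

def slide_left(row):
    """Slide and merge one row to the left (standard 2048 rule)."""
    tiles = [t for t in row if t]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return tuple(merged + [0] * (4 - len(merged)))

def moves(board):
    """Yield the result of each of the four moves that changes the board."""
    rot = lambda b: tuple(zip(*b[::-1]))
    b = board
    for k in range(4):
        candidate = tuple(slide_left(r) for r in b)
        for _ in range((4 - k) % 4):
            candidate = rot(candidate)
        if candidate != board:
            yield candidate
        b = rot(b)

def spawn(board, rng):
    """Place a 2 (90%) or 4 (10%) in a random empty cell."""
    cells = [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]
    r, c = rng.choice(cells)
    tile = 2 if rng.random() < 0.9 else 4
    return tuple(tuple(tile if (i, j) == (r, c) else board[i][j]
                       for j in range(4)) for i in range(4))

def rollout(board, rng, steps=10):
    """Play random moves for a few steps; score = biggest tile reached."""
    for _ in range(steps):
        options = list(moves(board))
        if not options:
            break
        board = spawn(rng.choice(options), rng)
    return max(t for row in board for t in row)

def best_move(board, rng, playouts=20):
    """Pick the move whose random playouts score best (needs a legal move)."""
    scored = [(sum(rollout(spawn(m, rng), rng) for _ in range(playouts)), m)
              for m in moves(board)]
    return max(scored)[1]
```

Full MCTS adds a tree with UCB-based selection on top of these rollouts, but even this flat version copes with the spawn randomness without enumerating probabilities.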

For expectiminimax optimization, try beam search. Keep only the top N moves at each level. I stuck with the best 3-4 moves per position and barely lost any play quality while cutting the search space dramatically.
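A generic beam-search skeleton is tiny; `expand` and `score` are placeholders for your move generator and heuristic. Only the best `beam` nodes survive each ply.

```python
def beam_search(root, expand, score, depth, beam=3):
    """Breadth-first search keeping only the top `beam` nodes per level."""
    frontier = [root]
    for _ in range(depth):
        children = [c for node in frontier for c in expand(node)]
        if not children:
            break  # no node in the frontier has successors
        frontier = sorted(children, key=score, reverse=True)[:beam]
    return max(frontier, key=score)
```

The search space per level is capped at `beam * branching_factor` regardless of depth, which is where the dramatic cut comes from.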

Iterative deepening with time cutoffs works well too. Start at depth 4, then keep going deeper until you hit your time limit. You’ll always have a decent move ready even if search gets cut short.
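The driver loop for that is short. `search(board, depth)` stands in for a full expectimax at the given depth, and `clock` is injectable so the loop can be tested without real waiting - both names are my own, not from the post.

```python
import time

def iterative_deepening(board, search, budget, start_depth=4,
                        clock=time.monotonic):
    """Search ever deeper until the time budget runs out.

    Returns the result of the last depth that started before the deadline.
    """
    deadline = clock() + budget
    best, depth = None, start_depth
    while clock() < deadline:
        best = search(board, depth)  # result of a completed depth
        depth += 1
    return best
```

In a real engine you’d also pass the deadline into `search` itself, so a single deep iteration can bail out early instead of overshooting the budget.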

Transposition tables are huge though. Different move orders in 2048 constantly reach the same board, so memoizing evaluations gives massive speedups. Just make sure your hash function uses the actual tile values, not just which cells are occupied.
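A Zobrist-style hash does exactly that: one fixed random 64-bit key per (row, column, tile-exponent) triple, XOR-combined, so the hash depends on the values in the cells. The fixed seed and the table size (exponents 1-15, tiles up to 32768) are assumptions for the sketch.

```python
import random

_rng = random.Random(0)  # fixed seed so hashes are reproducible across runs
ZOBRIST = [[[_rng.getrandbits(64) for _ in range(16)]
            for _ in range(4)] for _ in range(4)]

def board_hash(board):
    """XOR together one key per occupied cell, indexed by tile exponent."""
    h = 0
    for r in range(4):
        for c in range(4):
            v = board[r][c]
            if v:
                h ^= ZOBRIST[r][c][v.bit_length() - 1]  # 2 -> 1, 4 -> 2, ...
    return h
```

A nice side effect of the XOR structure is that the hash can be updated incrementally when a single tile changes, instead of rescanning the whole board.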