I’m working with a MySQL database running inside a Docker container, and the database itself is really large. For debugging purposes I frequently need to restore it to its original state, but doing that with mysqldump takes forever at this size.
What I’m looking for is a way to run MySQL in some kind of temporary mode where all changes made during a session get wiped out when the server restarts. This would save me tons of time during development and testing.
I already tried creating backups of the Docker volume but that approach uses way too much disk space given how big my database is. Has anyone found a good solution for this kind of scenario?
I’ve hit this same problem with huge test databases. A few things that actually work:

MySQL’s binary logging with point-in-time recovery is solid. Take a physical snapshot at a clean state and note the binary log position; to reset, you restore that snapshot (and only replay the binlog if you want to roll forward to a later point), so no full logical dumps are needed.

Docker’s copy-on-write overlay storage is even better. Build a base image that already contains your clean data, then spin up throwaway containers from it. Each container starts from the same clean state and keeps its changes isolated in its own writable layer; kill the container and everything resets automatically (there’s a sketch of this at the end of this answer).

For really massive databases, I use filesystem snapshots (LVM or ZFS) underneath Docker. Snapshots are near-instant no matter how big your database is, and a rollback takes seconds instead of hours of restore pain.
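Here’s a rough sketch of the throwaway-container approach, assuming the official `mysql:8.0` image; the container/image names, the root password, and `clean_dump.sql` are placeholders for whatever you actually use. One caveat is baked in: the official image declares `/var/lib/mysql` as a VOLUME, so the clean data has to live at a different path (here `/var/lib/mysql-clean`) for `docker commit` to capture it in the image layers.

```sh
# --- One-time setup: bake the clean data into an image ---

# 1. Start a seed container with a datadir that is NOT a declared volume,
#    so the data ends up in the container's writable layer.
docker run -d --name seed \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0 --datadir=/var/lib/mysql-clean

# Wait until `docker logs seed` shows "ready for connections" (the entrypoint
# brings up a temporary server first), then load the clean state once.
docker exec -i seed mysql -uroot -psecret < clean_dump.sql

# 2. Stop cleanly so the InnoDB files are consistent, then freeze the state.
docker stop seed
docker commit seed mydb-clean
docker rm seed

# --- Per debugging session: throwaway containers from the clean image ---

# 3. All writes go to the container's copy-on-write layer, not the image.
docker run -d --name db-test --rm -p 3306:3306 \
  mydb-clean --datadir=/var/lib/mysql-clean

# ... break things as much as you like ...

# 4. Reset = remove the container; the next one starts clean again.
docker stop db-test
```

`docker commit` is just the quickest way to freeze the state; building the image from a Dockerfile that copies a prepared data directory into the same non-volume path amounts to the same idea and is more reproducible.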
You can try mounting the MySQL data directory on tmpfs, so it all lives in memory. Also set innodb_flush_log_at_trx_commit=0; with a tmpfs data directory nothing hits disk anyway, this just cuts the flush overhead so writes go faster. That way everything is wiped once the container stops or restarts, which is great for testing without persisting data.
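A minimal sketch of that setup, assuming the official `mysql:8.0` image; the root password, the `8g` tmpfs size, and `clean_dump.sql` are placeholders you’d adjust to your data:

```sh
# Run MySQL with its data directory on a RAM-backed tmpfs sized to fit the data.
# Everything under /var/lib/mysql disappears as soon as the container stops.
docker run -d --name db-mem \
  --tmpfs /var/lib/mysql:rw,size=8g \
  -e MYSQL_ROOT_PASSWORD=secret \
  -p 3306:3306 \
  mysql:8.0 \
    --innodb-flush-log-at-trx-commit=0 \
    --skip-log-bin

# Load the clean state once per session (fast, since it writes to RAM).
docker exec -i db-mem mysql -uroot -psecret < clean_dump.sql

# Reset: throw the container away and start over.
docker rm -f db-mem
```

The trade-off is that the data directory is empty after every stop, so you reload the clean dump at the start of each session (which runs much faster against RAM than against a disk-backed volume), and the host needs enough memory to hold the whole dataset.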