Best filesystem block size configuration for MySQL/MariaDB performance

I’m setting up a database server and need help with block size optimization. I have a MariaDB database running in a VM on Proxmox, with storage served from TrueNAS over NFS. Right now I’m using the default 128K record size (the ZFS `recordsize` property) that TrueNAS sets on new datasets. But I’ve been reading that smaller block sizes might work better for database workloads. Has anyone tested different block sizes with MySQL or MariaDB? I’m thinking about switching to 16K but want to make sure it’s actually worth doing before I rebuild everything.

Switched from 128K to 16K recordsize on my MariaDB setup six months ago - definitely worth it for transaction response times. InnoDB (the default engine in MySQL/MariaDB) uses 16K pages, so matching your filesystem block size avoids read amplification: with 128K records, every random 16K page read pulls in eight times the data you actually need. Yeah, CPU usage goes up a bit and you’ll see more frequent I/O, but for OLTP workloads the gains are worth it. Just tune your NFS mount options properly - I run sync writes with client-side attribute caching disabled, since database consistency matters more to me than raw throughput. Rebuilding took about 4 hours for my 200GB database, but the performance difference made it worthwhile.
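Roughly what that looks like as commands - the dataset name `tank/mariadb`, the NFS server hostname, and the mount point are placeholders for my setup, so adjust them to yours. Also note that `recordsize` only applies to newly written blocks, so existing data has to be rewritten (e.g. restored from a dump) to pick it up:

```shell
# On TrueNAS: set a 16K record size on the dataset holding the database.
# Existing files keep their old record size until rewritten.
zfs set recordsize=16K tank/mariadb

# On the DB VM: sync writes, attribute caching off (actimeo=0), and 16K
# transfer sizes to match the page/record size. actimeo=0 costs performance
# but was my trade-off for consistency.
mount -t nfs -o vers=4.1,hard,sync,actimeo=0,rsize=16384,wsize=16384 \
    truenas.example.lan:/mnt/tank/mariadb /var/lib/mysql
```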

depends on your workload, but 16K hits the sweet spot most of the time. ran benchmarks last year - saw ~15% better query times switching from the default 128K. just tune your NFS settings first and match rsize/wsize to your block size. back up everything though, NFS storage migrations can get messy.
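if you want numbers for your own hardware before committing, an fio run like this (the directory is a placeholder for a test path on your NFS mount) approximates the random-read profile a database generates - run it before and after changing the record size and compare:

```shell
# 16K random reads, direct I/O, moderate queue depth - roughly the I/O
# pattern InnoDB produces for uncached page reads.
fio --name=db-randread --directory=/mnt/nfs-test \
    --rw=randread --bs=16k --size=2G --ioengine=libaio \
    --direct=1 --iodepth=16 --runtime=60 --time_based
```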

Had the same performance nightmare with MariaDB until I figured out the block size mismatch was destroying my throughput. You’re spot on about 16K - dealt with this exact issue when queries crawled despite solid hardware. Here’s what everyone skips: NFS over network with 128K blocks means you’re yanking huge chunks across the wire for tiny transactions. Switched 8 months back and query response got way better, especially with multiple connections hitting it. Watch out though - your Proxmox VM disk allocation needs to match the new block size or you’ll just shift the bottleneck. Test under peak load too since more I/O ops can hammer your network differently than fewer big transfers.
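For the peak-load testing part, a sysbench OLTP run against a staging copy is an easy way to compare before/after under concurrency (host, credentials, and table counts here are placeholders - point it at a test instance, never production):

```shell
# Create test tables, then drive concurrent OLTP traffic for 5 minutes.
sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
    --mysql-password=secret --tables=8 --table-size=100000 prepare
sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
    --mysql-password=secret --tables=8 --table-size=100000 \
    --threads=32 --time=300 run
```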

Been running databases on NFS for years - block size absolutely matters. Your 128K setup creates unnecessary overhead since InnoDB reads and writes data in 16K pages anyway. I migrated our production system from the default record size to 16K and saw immediate improvements in random read performance, which is most of what a database does. The thing most people miss: larger blocks force the system to read a full record from disk even when you only need one small page. Database workloads are heavily random access, so that becomes a real bottleneck. Test it on a copy first before rebuilding though - NFS can be tricky, and you need to verify your network can handle the higher I/O op rate that comes with smaller blocks.
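The overhead is easy to quantify. A quick back-of-the-envelope sketch (pure arithmetic, no database involved) of the read amplification for a random 16K page read at different record sizes, assuming the record is the smallest unit the filesystem reads:

```python
def read_amplification(recordsize: int, page_size: int = 16 * 1024) -> float:
    """Bytes read from disk per useful byte, for one random page read.

    A record is the smallest unit ZFS reads (checksums are per-record),
    so fetching a 16K page pulls in at least one full record.
    """
    return max(recordsize, page_size) / page_size

for rs_kib in (16, 32, 64, 128):
    amp = read_amplification(rs_kib * 1024)
    print(f"recordsize={rs_kib:>3}K -> {amp:.0f}x read amplification")
# The default 128K recordsize means 8x more data read per random page.
```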