I have split my database across several MySQL instances for better performance. Sometimes I need to fetch data from different database shards at the same time instead of running queries one after another.
What’s the best approach to execute multiple MySQL queries simultaneously in PHP? I want to avoid waiting for each query to finish before starting the next one.
I found some tutorials about async cURL and using shell commands, but I’m wondering if there’s a cleaner solution built into PHP or MySQL extensions. Are there any native PHP functions or libraries that can handle concurrent database connections efficiently?
Here’s a simple example of what I’m trying to avoid:
$results = [];
$databases = ['shard_1', 'shard_2', 'shard_3'];

foreach ($databases as $dbName) {
    $connection = new PDO("mysql:host=localhost;dbname=$dbName", $username, $password);
    $stmt = $connection->prepare("SELECT * FROM users WHERE active = 1");
    $stmt->execute();
    $results[$dbName] = $stmt->fetchAll();
}
This runs each query sequentially which takes too long when I have many shards.
I’ve dealt with this same issue in distributed setups. ReactPHP with their MySQL connector works great here - you can spin up multiple non-blocking connections and run queries concurrently without the process management headache. Another option that’s worked well for me: MySQLi with async queries using mysqli_poll(). Just start your queries with the MYSQLI_ASYNC flag, then poll for results as they finish. Keeps everything in native PHP, no extra dependencies. For production, definitely check out ProxySQL for connection pooling. It handles concurrent connections across shards way better. Bottom line - avoid blocking operations and use PHP’s multi-socket capabilities.
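For the mysqli_poll() route, here's a minimal sketch of the pattern described above: fire every shard's query with MYSQLI_ASYNC, then reap results as each shard finishes. Credentials and shard names are placeholders taken from the question; error/reject handling is trimmed to keep the shape visible, and you'd want it in production.

```php
<?php
// Fire one non-blocking query per shard, then poll for whichever finishes first.
$username = 'app';      // placeholder credentials
$password = 'secret';
$shards   = ['shard_1', 'shard_2', 'shard_3'];
$links    = [];

foreach ($shards as $dbName) {
    $link = new mysqli('localhost', $username, $password, $dbName);
    // MYSQLI_ASYNC returns immediately; the query runs on the server meanwhile.
    $link->query("SELECT * FROM users WHERE active = 1", MYSQLI_ASYNC);
    $links[spl_object_id($link)] = [$link, $dbName];
}

$results   = [];
$remaining = count($links);

while ($remaining > 0) {
    // mysqli_poll() filters these arrays down to the links that are ready.
    $read = $error = $reject = array_map(fn($pair) => $pair[0], $links);
    if (mysqli_poll($read, $error, $reject, 1) < 1) {
        continue; // nothing ready within 1 second, poll again
    }
    foreach ($read as $link) {
        $dbName = $links[spl_object_id($link)][1];
        if ($result = $link->reap_async_query()) {
            $results[$dbName] = $result->fetch_all(MYSQLI_ASSOC);
            $result->free();
        }
        $remaining--;
    }
}
```

Total wall time ends up close to the slowest single shard instead of the sum of all of them.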
Gearman’s worth a look if you’re already running distributed stuff. I queue each shard query as a separate job, then workers hit them all at once across different processes or servers. You get real parallel execution plus job management and error handling baked in. Your PHP script dumps all the jobs at once and grabs results as they finish. Takes some setup but scales like crazy when you add more shards. We dropped from 12 seconds to 3 seconds with barely any code changes. Workers keep persistent DB connections too, so no connection overhead per request. Great option when you want reliability without wrestling with async connections.
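Roughly, the Gearman fan-out above looks like this from the client side. This is a sketch assuming a Gearman server on the default port and workers already registered for a function I'm calling "shard_query" — the function name and JSON payload shape are illustrative, not part of Gearman itself.

```php
<?php
// Submit one task per shard; connected workers run them all in parallel.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

$results = [];
$client->setCompleteCallback(function (GearmanTask $task) use (&$results) {
    // The unique id was set to the shard name below.
    $results[$task->unique()] = json_decode($task->data(), true);
});

foreach (['shard_1', 'shard_2', 'shard_3'] as $dbName) {
    $client->addTask('shard_query', json_encode([
        'db'  => $dbName,
        'sql' => 'SELECT * FROM users WHERE active = 1',
    ]), null, $dbName);
}

$client->runTasks(); // blocks until every shard task has completed
```

Each worker keeps its own persistent connection to one shard, which is where the per-request connection overhead disappears.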
Use a message queue like RabbitMQ or Redis to fan query jobs out to per-shard workers. PHP pushes query jobs to different queues (one per shard), then workers grab jobs and run them against their databases. Workers dump results into shared cache or another queue that your main PHP process checks for completion. Best part? If one shard dies, the rest keep running. Failed queries get retried without killing everything else. Takes work to set up initially, but adding new shards is just spinning up another worker. We’re running 20+ shards like this with sub-second response times.
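A rough sketch of that fan-out/gather using phpredis lists. The queue and key names, and the worker's result payload (`shard`/`rows` fields), are all made up for illustration — the assumption is that a worker per shard does BRPOP on its queue, runs the SQL, and pushes the rows back under the job id.

```php
<?php
// Fan out query jobs to per-shard Redis lists, then block on the results.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$jobId  = uniqid('job_', true);
$shards = ['shard_1', 'shard_2', 'shard_3'];

// Fan out: one job per shard queue; workers BRPOP these.
foreach ($shards as $dbName) {
    $redis->lPush("queries:$dbName", json_encode([
        'id'  => $jobId,
        'sql' => 'SELECT * FROM users WHERE active = 1',
    ]));
}

// Gather: block until every shard's worker has pushed its result back.
$results = [];
for ($i = 0; $i < count($shards); $i++) {
    $reply = $redis->brPop(["results:$jobId"], 30); // [key, payload], 30s timeout
    if (!$reply) {
        break; // a shard timed out; the ones already collected are still usable
    }
    $row = json_decode($reply[1], true);
    $results[$row['shard']] = $row['rows'];
}
```

The timeout handling is what buys the "one dead shard doesn't kill everything" property mentioned above.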
if your php setup supports it, check out the pthreads extension. you can create worker threads for each shard, running in true parallel. it’s kinda tricky to set up tho, and not all hosts allow thread-safe php. for an easier route, curl_multi handles are great - set up mysql api endpoints and fire them off together.
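the curl_multi version, for the record — assumes you've wrapped each shard in a small HTTP endpoint (the URLs here are hypothetical). all requests go out at once and the loop waits on all of them together:

```php
<?php
// Fire one HTTP request per shard endpoint, all concurrently.
$urls = [
    'shard_1' => 'https://api.internal/shard_1/active-users', // hypothetical endpoints
    'shard_2' => 'https://api.internal/shard_2/active-users',
    'shard_3' => 'https://api.internal/shard_3/active-users',
];

$mh      = curl_multi_init();
$handles = [];
foreach ($urls as $dbName => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($mh, $ch);
    $handles[$dbName] = $ch;
}

// Drive all transfers until every one has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for socket activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

$results = [];
foreach ($handles as $dbName => $ch) {
    $results[$dbName] = json_decode(curl_multi_getcontent($ch), true);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```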
These solutions work, but you’re overcomplicating things. I deal with this exact problem daily in our distributed systems.
Running parallel queries isn’t the real challenge. You need something that manages connections, handles failures, and processes results as they arrive. What about joining data from different shards or transforming it before it hits your frontend?
Workflow automation solves this. Skip the async connection management and polling in PHP. Set up a workflow that hits all shards at once, waits for responses, and combines the data however you want.
The workflow handles parallel execution, errors, and data processing. Your PHP triggers it and gets clean results back. Much cleaner than spawning processes or juggling coroutines.
I built this for a client last month with 12 database shards. Dropped from 8-second sequential queries to under 2 seconds, with automatic retries and result caching.
Latenode makes this dead simple. Build the whole parallel query workflow visually and trigger it from any PHP app: https://latenode.com
swoole’s your best bet here. it handles async mysql connections natively and runs multiple queries simultaneously without blocking. way cleaner than shell commands or curl hacks. just spin up coroutines for each shard connection - they’ll run concurrently. scales great with large datasets too.
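here's roughly what that looks like with swoole 4.x coroutines — one coroutine per shard, a WaitGroup to collect them. credentials are placeholders; the Swoole\Coroutine\MySQL client shown is the 4.x-era API (newer swoole versions prefer runtime hooks on PDO instead):

```php
<?php
// One coroutine per shard; query() yields while MySQL works, so all
// shards' queries are in flight at the same time.
$username = 'app';    // placeholder credentials
$password = 'secret';

Co\run(function () use ($username, $password) {
    $shards  = ['shard_1', 'shard_2', 'shard_3'];
    $results = [];
    $wg = new Swoole\Coroutine\WaitGroup();

    foreach ($shards as $dbName) {
        $wg->add();
        go(function () use ($dbName, $username, $password, &$results, $wg) {
            $db = new Swoole\Coroutine\MySQL();
            $db->connect([
                'host'     => '127.0.0.1',
                'user'     => $username,
                'password' => $password,
                'database' => $dbName,
            ]);
            $results[$dbName] = $db->query('SELECT * FROM users WHERE active = 1');
            $wg->done();
        });
    }

    $wg->wait(); // resumes once every shard has answered
});
```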
Been running sharded MySQL for years. Best approach I’ve found: use pcntl_fork() to spawn child processes for each shard query. Each child gets its own DB connection, runs independently, then passes results back via shared memory or temp files. Parent waits for all children with pcntl_waitpid(). Works great when you need real parallelism across multiple DB servers. Just watch out - some shared hosts disable pcntl functions, so check that first. Also keep an eye on MySQL connection limits since you’ll hit multiple servers at once. For lighter stuff, mysqli_poll() works fine, but heavy data processing across many shards? Process forking gives you actual concurrent execution.
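The fork-per-shard pattern described above, sketched with temp files as the results channel. queryShard() is stubbed with placeholder data so the shape is visible without a database — swap in the PDO call from the question, and note each child must open its own connection after the fork (a connection shared across a fork gets corrupted when both processes use it).

```php
<?php
// Fork one child per shard; each child writes its rows to a temp file,
// the parent reaps all children and merges the files.
function queryShard(string $dbName): array {
    // Real version: open a fresh PDO connection to $dbName here and fetch rows.
    return ["rows_from_$dbName"]; // placeholder result for the sketch
}

$shards = ['shard_1', 'shard_2', 'shard_3'];
$files  = [];

foreach ($shards as $dbName) {
    $files[$dbName] = tempnam(sys_get_temp_dir(), 'shard_');
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: run the query, serialize the rows, exit immediately.
        file_put_contents($files[$dbName], serialize(queryShard($dbName)));
        exit(0);
    }
}

// Parent: wait for every child, then collect the results.
while (pcntl_waitpid(-1, $status) > 0);

$results = [];
foreach ($files as $dbName => $file) {
    $results[$dbName] = unserialize(file_get_contents($file));
    unlink($file);
}
```

After the loop, $results holds one entry per shard, gathered in true parallel.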