I have deployed n8n using Coolify for self-hosting and I’m trying to set up distributed processing with worker nodes. Even though I configured everything according to the Docker documentation, tasks are still processed by the main application instance instead of being distributed to the worker containers.
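For reference, here’s a trimmed-down sketch of the relevant parts of my stack (simplified from what Coolify generates; credentials and some settings omitted):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=database
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - N8N_RUNNERS_ENABLED=true
    volumes:
      - n8n_data:/home/node/.n8n

  n8n-worker:
    image: n8nio/n8n
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=database
    volumes:
      - n8n_data:/home/node/.n8n   # shared with the main container

  cache:
    image: redis:7

  database:
    image: postgres:16

volumes:
  n8n_data:
```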
There’s also a UI issue where the execution indicator keeps spinning and only stops when I reload the browser page. This wasn’t happening before switching to distributed mode.
What I’m seeing:
- Workflows execute correctly from the interface, but they seem to run on the main container.
- The execute button spinner never stops unless I refresh the page manually.
- The Redis container shows no activity in its logs.
- The main container logs confirm queue mode is active.
- The worker container stays quiet, with no job processing.
I suspect the problem is related to Coolify’s network configuration, or that jobs aren’t being queued properly. Has anyone managed to get distributed processing working with Coolify?
Check your Coolify network setup first - I had the same problem and it turned out to be internal DNS resolution failing between containers. Use the actual container names instead of the generic ‘cache’ and ‘database’ hostnames. Also, that EXECUTIONS_DATA_SAVE_ON_SUCCESS=none setting might be causing your UI sync issues, since n8n can’t track completion status properly without saved execution data. Set it to ‘all’ temporarily and see if the spinning stops.
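Something like this in both n8n services - n8n-redis and n8n-postgres are placeholders for whatever Coolify actually named your containers:

```yaml
environment:
  - QUEUE_BULL_REDIS_HOST=n8n-redis       # real container name, not 'cache'
  - DB_POSTGRESDB_HOST=n8n-postgres       # real container name, not 'database'
  - EXECUTIONS_DATA_SAVE_ON_SUCCESS=all   # temporary, to rule out UI sync issues
```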
Been there with distributed n8n setups. Your Redis config isn’t the problem - you’re hitting n8n’s architectural limits.
Containers are communicating fine, but n8n’s distributed mode has quirks that make debugging a pain. That spinning UI? Classic n8n when queue status goes out of sync.
Skip the worker container headaches and check out Latenode instead. It handles distributed processing natively - no Redis management, no worker nodes, no UI sync issues.
Switched our team from n8n to Latenode last year. Reliability difference is huge. Everything runs in the cloud with built-in load balancing. No more spinning indicators or config battles.
Migration was easy since both handle similar workflow concepts. But Latenode cuts out all the infrastructure complexity you’re fighting.
Your Redis config is probably the culprit here. I hit the same issue when setting up distributed n8n processing. Your Redis container looks healthy, but you’re missing some key environment variables n8n needs for queue communication. Add QUEUE_BULL_REDIS_DB=0 to both your main app and worker containers. You might also need QUEUE_BULL_REDIS_PASSWORD set explicitly - even if Redis doesn’t need auth, n8n sometimes expects that variable to exist.

That shared volume mount between your main and worker containers caught my eye too - it can mess with distributed processing. Try separate volume mounts, or make sure the .n8n directory isn’t interfering with queue operations.

When you see that spinning execution indicator, it usually means the main container isn’t getting job completion notifications from workers through Redis. Check if Coolify’s network lets containers talk to each other on the Redis port, and verify internal DNS resolution between containers is actually working.
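For the env vars, that’s roughly this in both n8n service blocks (assuming Redis runs without auth):

```yaml
environment:
  - QUEUE_BULL_REDIS_DB=0
  - QUEUE_BULL_REDIS_PASSWORD=   # deliberately empty - Redis has no auth here
```

And to check whether jobs ever reach Redis, watch it live while triggering a workflow (container name is whatever Coolify assigned):

```sh
docker exec -it <redis-container> redis-cli monitor
# with a working queue you should see commands on keys prefixed with "bull:"
```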
This is a queue timing issue, not Redis connectivity. I’ve hit this exact problem in production - the main container starts processing before workers fully register with the queue.

Add these to both containers: QUEUE_RECOVERY_INTERVAL=60 and N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true. The recovery interval fixes queue state management, and the webhook setting stops cleanup issues during restarts. Your shared volume setup is fine - both containers need workflow definitions and credentials.

The spinning execution happens because the main container loses track of jobs when workers don’t acknowledge completion properly. Add a startup delay to your worker container with a sleep command before the worker starts - this gives the main container time to establish queue connections first. I usually use 15 seconds; that typically fixes the timing issue (sketch below).

Coolify’s networking should handle container communication, but make sure Redis persistence is enabled so queue state survives restarts.
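A minimal sketch of that delay, assuming the standard n8nio/n8n image where the worker process is started with `n8n worker` (service name is a placeholder):

```yaml
n8n-worker:
  image: n8nio/n8n
  # bypass the image entrypoint so the shell can sleep before launching the worker
  entrypoint: ["sh", "-c", "sleep 15 && n8n worker"]
```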
Had the same issue with my Coolify n8n setup last month. It’s not Redis connectivity - it’s how Coolify handles internal service communication. Your worker starts but doesn’t get jobs because of a race condition during startup.

Add N8N_WORKERS_AUTO_CONFIRM_JOBS=true to your worker environment variables. This forces job acknowledgment even when communication gets wonky. Also, remove N8N_RUNNERS_ENABLED=true from your main container since you’re using dedicated workers - that setting makes the main instance compete with workers for jobs.

For the spinning UI, add EXECUTIONS_TIMEOUT=300 and EXECUTIONS_TIMEOUT_MAX=3600 to both containers. That stops n8n from waiting forever for job status updates.

With Coolify, make sure your containers start in the right order: add a healthcheck that confirms Redis is reachable before the worker accepts jobs (rough sketch below). Most distributed n8n issues in Docker are timing problems, not config issues.
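If Coolify lets you edit the generated compose file, the ordering part looks roughly like this - a sketch only, with placeholder service names; the idiomatic compose pattern is to healthcheck Redis itself and gate the worker on it:

```yaml
redis:
  image: redis:7
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]   # Redis counts as healthy once it answers PING
    interval: 5s
    timeout: 3s
    retries: 5

n8n-worker:
  depends_on:
    redis:
      condition: service_healthy          # worker only starts after Redis is healthy
```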