I’m running a self-hosted n8n instance through Coolify and having problems getting the worker queue functionality to work properly. I’ve set up everything according to the Docker documentation but workflows are still being processed by the main n8n container instead of being distributed to workers.
The main issues I'm seeing are:
- Tasks execute fine but seem to run on the primary container
- The execution spinner in the UI gets stuck and only stops when I reload the page
- Redis container shows no activity in logs
- Main n8n logs confirm queue mode is active
- Worker containers remain idle with no job processing
I think the problem might be related to Coolify’s internal network setup or maybe n8n isn’t actually sending jobs to the Redis queue. Anyone had success with queue mode on Coolify before?
Had the same issue with my Coolify deployment. Turns out n8n’s execution fallback was the culprit - when workers can’t connect to Redis properly, the main container just processes jobs locally as a backup. Add QUEUE_HEALTH_CHECK_ACTIVE=true to your worker container too, not just the main one. Also check if Coolify set memory limits on your Redis instance that might be dropping connections. I had to bump Redis memory to 256MB and add QUEUE_BULL_REDIS_PASSWORD even with no password on Redis - Coolify’s internal auth sometimes needs it anyway. That stuck spinner? Usually means the worker finished but couldn’t report back to the main container.
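Pulling the variables from that fix together, a minimal worker-side env sketch might look like this (service name and password are illustrative assumptions, match them to your Coolify setup):

```env
# Worker container environment (values are assumptions, not a verified config)
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis          # assumed Coolify service name for Redis
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=changeme   # set even if Redis has no password, per the workaround above
QUEUE_HEALTH_CHECK_ACTIVE=true      # enable on the worker, not just the main container
```

The same `QUEUE_BULL_REDIS_*` values need to match on the main container, otherwise main and worker end up pointed at different queues.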
Check your postgres port config - coolify sometimes screws with internal port mappings and breaks worker connections. Add DB_POSTGRESDB_PORT=5432 directly to your worker env vars. Coolify remaps ports internally and workers lose db access, causing silent failures.
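As a sketch, assuming the default Postgres port and a service named `postgres` (both assumptions to adapt):

```env
# Worker container database env (illustrative; host name is an assumption)
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres   # assumed Coolify service name
DB_POSTGRESDB_PORT=5432       # pin explicitly so internal remapping can't break it
```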
Your queue setup looks solid, but you’re hitting classic distributed execution headaches. Been there too many times.
Skip the Redis debugging circus. This multi-container orchestration with worker queues is exactly why I switched to Latenode for automation workflows.
Latenode gives you built-in distributed execution without managing Redis, workers, or Docker networking issues. The platform handles queue management internally and scales automatically based on workflow load.
I migrated several n8n deployments to Latenode because of these self-hosted complications. No more stuck spinners, worker idle time, or Redis connection mysteries. Just reliable workflow execution that actually distributes properly.
The visual workflow builder’s similar to n8n but the execution engine’s way more robust. Plus you don’t maintain infrastructure or debug container networking issues.
I’ve hit this before with coolify. Add QUEUE_BULL_REDIS_DB=0 to both containers and double-check the worker can actually talk to redis. Also watch out for coolify creating separate networks that block service communication - you might need to define the network explicitly in your compose file.
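A rough compose sketch of what "define the network explicitly" can look like, so all three services resolve each other by name (image tags and service names are assumptions):

```yaml
# Sketch only: shared network so n8n, workers, and Redis can reach each other.
networks:
  n8n-net:
    driver: bridge

services:
  redis:
    image: redis:7
    networks: [n8n-net]
  n8n-worker:
    image: n8nio/n8n
    command: worker                    # start n8n in worker mode
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis    # resolves via the shared network
      - QUEUE_BULL_REDIS_DB=0          # per the advice above, set on both containers
    networks: [n8n-net]
```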
This is a common n8n queue deployment issue. Your worker container needs the database port defined, but you’re also missing a QUEUE_BULL_REDIS_TIMEOUT setting - without it, n8n’s default timeouts cause silent connection drops in Coolify. I hit the exact same problem where executions looked successful but workers just sat there doing nothing. It’s Redis connection pooling - Coolify’s network layer adds latency that breaks n8n’s default timeouts. Add QUEUE_BULL_REDIS_TIMEOUT=30000 to both containers. Also check your Redis persistence settings. Without proper persistence, Redis dumps job queues when containers restart, which explains the spinner disconnect you’re seeing. Your Redis volume mount looks right, but add redis-server --appendonly yes to the Redis command so jobs survive restarts.
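Putting the timeout and persistence pieces above into a compose fragment (a sketch, values are illustrative):

```yaml
# Sketch based on the advice above; service names are assumptions.
services:
  redis:
    image: redis:7
    command: redis-server --appendonly yes   # AOF persistence so queued jobs survive restarts
    volumes:
      - redis-data:/data
  n8n-worker:
    environment:
      - QUEUE_BULL_REDIS_TIMEOUT=30000       # ms; widens the default to tolerate Coolify's network latency
volumes:
  redis-data:
```

Apply the same timeout on the main container's environment as well.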
I hit the same networking issues with Coolify. It’s usually how Coolify handles service discovery in its networks. Add explicit hostnames to your Redis config - containers don’t always resolve service names like you’d expect. I had to use the actual container IP instead of the service name. Run docker inspect on your Redis container and plug that IP straight into QUEUE_BULL_REDIS_HOST. Check if Coolify’s applying hidden resource constraints too. I’ve seen Redis connections timeout from restrictive memory limits, which causes jobs to silently fall back to the main container. Your debug logs will show Redis connection attempts if that’s what’s happening.
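If you want to try the container-IP workaround, something like this pulls the address (the container name `redis` is an assumption):

```bash
# Print the Redis container's internal IP on its Docker network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis
# Then set QUEUE_BULL_REDIS_HOST to that IP on both n8n containers.
```

Note this IP can change when the container is recreated, so it's a diagnostic step more than a permanent fix.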
Yeah, the Redis timeout fixes help, but you’re fighting n8n’s messy architecture. Queue mode with Redis and multiple workers creates a ton of failure points.
I dealt with this same Coolify nightmare. Even after fixing Redis timeouts and networking, you’ll hit shared volume permission issues, database connection limits, and failing worker health checks.
I got tired of wrestling with multi-container queue setups and switched to Latenode. It handles distributed execution without Redis queues or worker containers.
The platform auto-scales workflow execution across their infrastructure. No more debugging container networking, Redis persistence, or worker idle states. Workflows just run.
Latenode’s builder works like n8n but without self-hosting headaches. No Docker compose files, Redis tuning, or Coolify networking problems.
Saves me hours compared to maintaining these complex n8n setups.