n8n worker queue setup issues with Coolify hosting platform

I have n8n deployed on Coolify and I’m having trouble getting the worker queue functionality to work properly. The main problem is that my workflows are still being processed by the primary n8n container instead of being distributed to worker instances.

Current behavior:

  • Workflows execute, but they appear to run on the main container
  • The Execute button shows a loading spinner that never completes unless I refresh the page
  • The Redis container shows no activity in its logs
  • The worker containers sit idle with no job processing
  • The main n8n logs confirm queue mode is enabled

My configuration:

services:
  automation-main:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - SERVICE_FQDN_AUTOMATION_5678
      - 'N8N_EDITOR_BASE_URL=${SERVICE_FQDN_AUTOMATION}'
      - 'WEBHOOK_URL=${SERVICE_FQDN_AUTOMATION}'
      - 'GENERIC_TIMEZONE=${TIMEZONE:-UTC}'
      - 'TZ=${TIMEZONE:-UTC}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${DB_NAME:-automation}'
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=$DB_USER
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=$DB_PASSWORD
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_MODE=redis
      - N8N_RUNNERS_ENABLED=true
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - N8N_LOG_LEVEL=verbose
    volumes:
      - 'app-data:/home/node/.n8n'
    command: start
    depends_on:
      postgres:
        condition: service_healthy
      cache:
        condition: service_healthy

  automation-worker:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - 'GENERIC_TIMEZONE=${TIMEZONE:-UTC}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${DB_NAME:-automation}'
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_USER=$DB_USER
      - DB_POSTGRESDB_PASSWORD=$DB_PASSWORD
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_MODE=redis
      - N8N_WORKER=true
      - N8N_LOG_LEVEL=verbose
    volumes:
      - 'app-data:/home/node/.n8n'
    command: worker
    depends_on:
      postgres:
        condition: service_healthy
      cache:
        condition: service_healthy

  postgres:
    image: 'postgres:15-alpine'
    volumes:
      - 'db-data:/var/lib/postgresql/data'
    environment:
      - POSTGRES_USER=$DB_USER
      - POSTGRES_PASSWORD=$DB_PASSWORD
      - 'POSTGRES_DB=${DB_NAME:-automation}'

  cache:
    image: redis:6
    volumes:
      - cache_storage:/data

Troubleshooting done:

  • Verified all environment variables are set correctly
  • Confirmed Redis and PostgreSQL connections work
  • Checked that both main and worker containers start without errors
  • Reviewed Docker networking within Coolify
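
A quick way to confirm whether executions ever reach the queue at all (the container name is a placeholder; the bull:jobs:* key pattern assumes n8n's default Bull queue name):

```shell
# Stream every command the Redis server receives; with queue mode working,
# triggering a production workflow should show LPUSH/BRPOPLPUSH traffic:
docker exec -it <cache-container> redis-cli MONITOR

# Bull (the queue library n8n uses) stores jobs under bull:<queue-name>:*;
# list the keys and check the waiting list:
docker exec -it <cache-container> redis-cli KEYS 'bull:*'
docker exec -it <cache-container> redis-cli LLEN bull:jobs:wait
```

If no bull:* keys ever appear, the main instance is not enqueueing jobs in the first place.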

I’m wondering if this could be related to Coolify’s internal container communication, or if there’s a specific configuration needed for queue mode to work in this environment. Has anyone had success with a similar setup?

Coolify is probably overriding your container commands during deployment. Try setting the worker command to n8n worker instead of just worker; I’ve seen Coolify ignore short commands like that. Also, add N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true to your main container so webhooks stay registered when the main container restarts.
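
One way to check what Coolify actually deployed (the container name is a placeholder; find yours with docker ps):

```shell
# Prints the entrypoint and command the container was really started with,
# so you can see whether "worker" survived the deployment:
docker inspect --format '{{.Config.Entrypoint}} {{json .Config.Cmd}}' <worker-container>
```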

Been dealing with n8n scaling issues for years. The queue setup complexity is exactly why I switched to Latenode for our automation workflows.

Your config looks mostly right, but n8n’s queue architecture just isn’t built for distributed processing. Even when you get it working, you’ll hit scaling bottlenecks and reliability issues.

I was spending way too much time debugging Redis connections, worker health checks, and container orchestration instead of actually building automations. That spinning loader issue you mentioned? Known problem that shows up randomly even with perfect configs.

Latenode handles all the scaling and reliability automatically. No Redis setup, no worker containers, no queue configuration headaches. Just works with proper load distribution.

We migrated last year and haven’t looked back. The time saved on infrastructure management alone was worth it, plus better performance and zero queue-related downtime.

If you want to keep fighting with n8n, try N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true on your main container. But honestly, save yourself the headache.

This looks like an execution-mode nuance, not networking. Your main container runs command: start, which is correct for queue mode, but manual executions triggered from the editor still run on the main instance by default; only production executions (webhook and schedule triggers) are pushed to the queue. So testing with the Execute button will always look like the main container is doing all the work. Once workers are running, the main container should only handle the web interface, API calls, and triggers. I hit this exact confusion, where my test runs executed on the main instance before anything ever reached the queue. Also check whether your Redis container has persistence enabled; some setups lose queue state on restart, which causes similar symptoms.
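
On the verification side, QUEUE_HEALTH_CHECK_ACTIVE=true (already set on your main container) should also work on the workers and expose a health endpoint you can poll. Port 5678 is n8n's default, and the container name is a placeholder:

```shell
# A worker that is up and connected to the queue should answer on /healthz:
docker exec -it <worker-container> wget -qO- http://localhost:5678/healthz
```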

I had the same issue with n8n queue mode in Docker. Check your worker container’s environment variables - you need DB_POSTGRESDB_PORT=5432 and DB_POSTGRESDB_SCHEMA=public to match your main container. Also add QUEUE_BULL_REDIS_DB=0 to both containers so they use the same Redis database. That spinning loader usually means the worker can’t connect to the queue. I’d also double-check Coolify’s networking settings to make sure the containers can actually talk to each other.
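
Sketching those additions as a fragment of the worker's environment block (values assumed to mirror the main container). Worth noting too: QUEUE_MODE and N8N_WORKER don't appear in n8n's documented environment variables, the worker role comes from command: worker, so both can likely be dropped.

```yaml
    environment:
      # ...existing worker variables...
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_SCHEMA=public
      # Main and worker must point at the same Redis logical database:
      - QUEUE_BULL_REDIS_DB=0
      # Workers decrypt stored credentials, so this must match the main
      # instance. Sharing the /home/node/.n8n volume covers it implicitly,
      # but an explicit key is safer (${ENCRYPTION_KEY} is a secret you
      # define yourself):
      - 'N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}'
```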