Issues with n8n Queue Mode on Coolify Deployment

I’m running n8n on Coolify and having trouble with the worker setup. Jobs meant for the worker node aren’t being distributed: they’re being processed by the main n8n instance instead of going through the queue. I’ve checked my configuration several times and everything looks correct.

There’s also a UI problem: the execution spinner gets stuck, and the only way to see updates is to refresh the browser. This didn’t happen before I enabled queue mode.

Here’s the docker-compose configuration I am using:

services:
  automation:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - SERVICE_FQDN_AUTOMATION_5678
      - 'N8N_EDITOR_BASE_URL=${SERVICE_FQDN_AUTOMATION}'
      - 'WEBHOOK_URL=${SERVICE_FQDN_AUTOMATION}'
      - 'N8N_HOST=${SERVICE_URL_AUTOMATION}'
      - 'GENERIC_TIMEZONE=${APP_TIMEZONE:-America/New_York}'
      - 'TZ=${TZ:-America/New_York}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${DB_NAME:-automation}'
      - DB_POSTGRESDB_HOST=database
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=$DB_USER
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=$DB_PASSWORD
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_MODE=redis
      - N8N_RUNNERS_ENABLED=true
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_SECURE_COOKIE=false
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - N8N_LOG_LEVEL=debug
    volumes:
      - 'app-data:/home/node/.n8n'
    command: start
    depends_on:
      database:
        condition: service_healthy
      cache:
        condition: service_healthy

  worker-node:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - 'GENERIC_TIMEZONE=${APP_TIMEZONE:-America/New_York}'
      - 'TZ=${TZ:-America/New_York}'
      - DB_TYPE=postgresdb
      - 'DB_POSTGRESDB_DATABASE=${DB_NAME:-automation}'
      - DB_POSTGRESDB_HOST=database
      - DB_POSTGRESDB_USER=$DB_USER
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_PASSWORD=$DB_PASSWORD
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=cache
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_MODE=redis
      - N8N_RUNNERS_ENABLED=true
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_SECURE_COOKIE=false
      - N8N_WORKER=true
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - N8N_LOG_LEVEL=debug
    volumes:
      - 'app-data:/home/node/.n8n'
    command: worker
    depends_on:
      database:
        condition: service_healthy
      cache:
        condition: service_healthy
  
  database:
    image: 'postgres:16-alpine'
    volumes:
      - 'db-data:/var/lib/postgresql/data'
    environment:
      - POSTGRES_USER=$DB_USER
      - POSTGRES_PASSWORD=$DB_PASSWORD
      - 'POSTGRES_DB=${DB_NAME:-automation}'

  cache:
    image: redis:7
    restart: always
    volumes:
      - cache_storage:/data

volumes:
  app-data:
  db-data:
  cache_storage:

The main application’s logs confirm it’s running in queue mode, but the worker container produces no output at all, and I see no activity in Redis. I’m starting to suspect either Coolify’s container networking or that jobs are never being enqueued in the first place.

Has anyone successfully run this setup on Coolify?

I encountered a similar problem with n8n’s queue mode on Coolify previously. It turned out the Redis connection between the containers wasn’t working correctly.

Your configuration appears mostly fine, but try setting N8N_WORKER=false on the primary automation service. Without it, the main instance attempts to process jobs itself instead of leaving them to the designated workers. Volume permission issues can also crop up since both containers share the same mount; make sure the worker can actually read the workflows. You could set N8N_USER_FOLDER=/shared on both services and adjust the volume mount accordingly.

The stuck UI spinner typically means the main instance isn’t receiving status updates from the workers. Check that Redis has enough memory and that both connections work. Adding QUEUE_BULL_REDIS_DB=1 can also help keep n8n’s queue data separate. Coolify’s networking generally allows container-to-container communication, but confirm that both containers reach Redis under the same hostname.
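For reference, those changes would look roughly like this as a fragment of your compose file (a sketch only: the /shared path is illustrative, and N8N_WORKER is the flag described above, not something I’ve confirmed against the current n8n docs):

  automation:
    environment:
      # ... keep your existing vars ...
      - N8N_WORKER=false          # keep the main instance out of job processing (as suggested above)
      - N8N_USER_FOLDER=/shared   # illustrative path; both services point at the same folder
      - QUEUE_BULL_REDIS_DB=1     # keep n8n's queue data in its own Redis database
    volumes:
      - 'app-data:/shared'        # mount must match N8N_USER_FOLDER

  worker-node:
    environment:
      # ... keep your existing vars ...
      - N8N_USER_FOLDER=/shared
      - QUEUE_BULL_REDIS_DB=1
    volumes:
      - 'app-data:/shared'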

No output from the worker container means it’s not connecting to the queue properly. I hit this same issue when I deployed n8n with queue mode last year. Your docker-compose looks fine, but the worker’s probably starting before Redis is ready. Add restart: unless-stopped to your worker service and set up Redis auth even if it’s just a basic password. When the UI spinner gets stuck, it means the main instance can’t get execution status from Redis. Check if Coolify has network isolation settings blocking container communication. Also make sure your Redis container has enough memory - I’ve seen silent failures where connections look good but queue operations don’t work. Run docker logs worker-node to see if it’s even trying to connect to Redis.
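Something like this is what I mean (a sketch; REDIS_PASSWORD is a variable you’d define yourself in Coolify, and the same QUEUE_BULL_REDIS_PASSWORD line belongs on the automation service too):

  worker-node:
    restart: unless-stopped
    environment:
      # ... keep your existing vars ...
      - 'QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}'

  cache:
    image: redis:7
    restart: always
    # require a password so a misconfigured client fails loudly instead of silently
    command: ['redis-server', '--requirepass', '${REDIS_PASSWORD}']
    volumes:
      - cache_storage:/data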

Had this exact problem last month! Check your Redis logs first - they’re probably empty. Add QUEUE_BULL_REDIS_PASSWORD= (leave it blank) to both services even without Redis auth. Your worker’s missing DB_POSTGRESDB_PORT=5432 too. Also note that your depends_on blocks use condition: service_healthy, but neither postgres:16-alpine nor redis:7 ships with a built-in healthcheck, so those conditions have nothing to check. Coolify gets weird with network timing, so throw in restart policies and explicit health checks for Redis/Postgres.
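A sketch of what those health checks might look like (the pg_isready flags lean on the POSTGRES_USER/POSTGRES_DB values already set in your file; the $$ stops compose from interpolating them before the container’s shell does):

  database:
    image: 'postgres:16-alpine'
    restart: unless-stopped
    volumes:
      - 'db-data:/var/lib/postgresql/data'
    environment:
      - POSTGRES_USER=$DB_USER
      - POSTGRES_PASSWORD=$DB_PASSWORD
      - 'POSTGRES_DB=${DB_NAME:-automation}'
    healthcheck:
      # pg_isready exits 0 once Postgres accepts connections
      test: ['CMD-SHELL', 'pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB']
      interval: 5s
      timeout: 5s
      retries: 10

  cache:
    image: redis:7
    restart: always
    volumes:
      - cache_storage:/data
    healthcheck:
      # a PONG from redis-cli means the server is up and answering
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 10

Then DB_POSTGRESDB_PORT=5432 and the blank QUEUE_BULL_REDIS_PASSWORD= just go in the worker’s environment list.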