Nginx Proxy Manager connection issues following Ubuntu system update

Background

I’m hosting several applications using Docker containers on my Ubuntu machine. I use Nginx Proxy Manager to handle domain routing for these services. Everything was working perfectly until I ran a system update.

What happened

After running system updates, all my containerized services went offline. Being inexperienced with server management, I got worried and tried to fix things before simply restarting the system. I executed these commands in sequence:

apt-get update && apt-get upgrade
cd ~/proxy-manager
docker compose up -d
sudo reboot

Current problem

Following the system restart, my containers are running but Nginx Proxy Manager won’t let me access the admin interface. When I try to log in, the page just spins for several seconds and then stops responding. All my domain redirects have also stopped functioning.

The application container is showing this database connection error:

[5/29/2025] [8:50:37 AM] [Global   ] › ✖  error     connect ETIMEDOUT Error: connect ETIMEDOUT
    at Connection._handleTimeoutError (/app/node_modules/mysql2/lib/connection.js:205:17)
    at listOnTimeout (node:internal/timers:581:17)
    at process.processTimers (node:internal/timers:519:7) {
  errorno: 'ETIMEDOUT',
  code: 'ETIMEDOUT',
  syscall: 'connect',
  fatal: true
}

The database container logs don’t show any errors though.

Docker setup

Both containers run on a shared network called webservices:

services:
  proxy:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'   # HTTP
      - '443:443' # HTTPS  
      - '81:81'   # Management interface
    environment:
      DB_MYSQL_HOST: "database"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "proxyuser"
      DB_MYSQL_PASSWORD: "secretpass"
      DB_MYSQL_NAME: "proxydb"
    volumes:
      - ./app-data:/data
      - ./certificates:/etc/letsencrypt
    depends_on:
      - database

  database:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'rootpass'
      MYSQL_DATABASE: 'proxydb'
      MYSQL_USER: 'proxyuser'
      MYSQL_PASSWORD: 'secretpass'
      MARIADB_AUTO_UPGRADE: '1'
    volumes:
      - ./db-storage:/var/lib/mysql

networks:
  default:
    external: true
    name: webservices

How can I get this working again?

Looks like your containers are starting in the wrong order after the update. Try stopping everything with docker compose down, then bring up the database first with docker compose up -d database, wait about 30 seconds, and then start the proxy container. Ubuntu updates sometimes mess with Docker's startup timing, and MariaDB needs extra time to get ready, especially after an unexpected shutdown.
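As a rough sketch of that sequence, assuming the compose file lives in ~/proxy-manager and the service names match the compose file in the question:

```shell
cd ~/proxy-manager

# Stop everything so the containers come up in a known order
docker compose down

# Start only the database and give MariaDB time to initialize
docker compose up -d database
sleep 30

# Now start the rest of the stack (the proxy)
docker compose up -d
```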

The ETIMEDOUT error suggests your database container isn't fully ready when the proxy container tries to connect. I've encountered this exact issue after system updates that change Docker's networking behavior. Try adding a healthcheck to your database service and changing the depends_on configuration so the proxy waits until the database is actually ready, not just started. Also check whether your external network 'webservices' still exists after the reboot; Docker networks can be lost or left in a bad state during updates. Run docker network ls to verify, and if needed recreate it with docker network create webservices. The spinning login page is caused by the failing database connection, so once you restore connectivity between the containers, everything should work normally again.
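For the healthcheck suggestion, here is a sketch of what the compose changes could look like, reusing the service names and credentials from the compose file in the question (it assumes mysqladmin is available inside the jc21/mariadb-aria image; adjust the intervals to taste):

```yaml
services:
  proxy:
    # ...rest of the proxy config unchanged...
    depends_on:
      database:
        condition: service_healthy   # wait for the healthcheck, not just container start

  database:
    # ...rest of the database config unchanged...
    healthcheck:
      # Assumes mysqladmin exists in the image; credentials match the
      # MYSQL_USER / MYSQL_PASSWORD values from the compose file above
      test: ['CMD', 'mysqladmin', 'ping', '-h', 'localhost', '-u', 'proxyuser', '-psecretpass']
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 30s
```

With condition: service_healthy, the proxy container only starts once the database passes its healthcheck, rather than as soon as the database container exists.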

I had a similar database timeout issue after an Ubuntu update that affected my Docker setup. The problem is likely container startup timing: MariaDB needs more time to initialize after system updates, especially if the filesystem or the Docker daemon changed. You could try raising the connection timeout by adding DB_MYSQL_TIMEOUT: 60000 to your proxy environment variables, though check that your Nginx Proxy Manager version actually supports that variable. Your database might also be stuck in recovery mode after the abrupt restart. Check the database container logs more thoroughly with docker logs [container_name] --tail 100 to see if there are any initialization messages you missed; the MariaDB container can appear healthy on the surface while still performing crash recovery in the background. If that doesn't work, you might need to temporarily stop both containers, back up your database volume, and restart them with a clean initialization sequence to get past the connection timeout.
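If you go the backup route, a sketch of the sequence, run from the directory that holds the compose file and the db-storage volume directory:

```shell
# Stop both containers so the database files are not being written
docker compose down

# Snapshot the database volume before restarting anything
tar czf db-backup-$(date +%F).tar.gz db-storage/

# Bring the stack back up and watch the database logs for recovery messages
docker compose up -d
docker compose logs -f database
```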