Docker Container Keeps Shutting Down at Same Time Daily

I’m experiencing a strange problem where my Docker container stops running at the same time every morning. The logs show this message:

[8/21/2025] [3:05:00 AM] [System] › ℹ notice Process ID 145 got SIGTERM signal
[8/21/2025] [3:05:00 AM] [System] › ℹ notice Shutting down service.

I initially thought this was due to my auto-update tool, but I’ve turned off the updates for this container. Even after I removed the update manager, the issue persists.

Here’s my docker-compose file:

# Reverse Proxy Service
proxy_service:
  image: jc21/nginx-proxy-manager:latest
  container_name: reverse_proxy
  environment:
    - TZ=America/Chicago
  ports:
    - "80:80"
    - "81:81"
    - "443:443"
  volumes:
    - /home/docker/nginx/config:/data
    - /home/docker/nginx/ssl:/etc/letsencrypt
    - /home/docker/nginx/custom:/snippets
  labels:
    - "com.centurylinklabs.watchtower.enable=false"
  restart: unless-stopped
  networks:
    - web_proxy

Does anyone have any suggestions on what could be causing this shutdown at the same time every day?

Could Watchtower still be running? Even with the label disabled, check whether it's active with docker ps | grep watchtower. Also, some systems have wonky logrotate configs that send signals to containers during rotation. I had the same issue once - turned out my backup script was pausing containers before running.
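To rule both of those out quickly, here's a short sketch of the checks (the names match this thread; adjust paths for your distro):

```shell
# Check whether a Watchtower container is still running despite the label
docker ps --filter "name=watchtower" --format '{{.Names}} {{.Status}}'

# Look for logrotate rules that signal or restart Docker-related services
grep -rniE 'docker|killall|systemctl' /etc/logrotate.d/ 2>/dev/null
```

If the grep turns up a postrotate script that restarts or signals the Docker daemon, that's a strong candidate for a same-time-every-day SIGTERM.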

That timing screams host swap pressure or the OOM killer. I had the exact same thing happen - containers getting killed during quiet hours when garbage collection kicked in. Check your host's memory usage and look at /var/log/syslog or dmesg for OOM messages around 3:05 AM. Could also be Docker's cleanup processes running - try docker system events to see what's actually happening. Does your host have network monitoring tools that restart services during maintenance? Failing health checks can trigger restarts too, and some setups pull image updates outside Watchtower, so make sure nothing else is updating your nginx-proxy-manager image even with the Watchtower exclusion.
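One nuance worth knowing before you go digging: the kernel OOM killer sends SIGKILL, not SIGTERM, so a clean SIGTERM in the logs usually points elsewhere - but it's cheap to rule out. A quick sketch of those searches:

```shell
# Human-readable kernel log; OOM kills appear as "Out of memory: Killed process ..."
dmesg -T 2>/dev/null | grep -iE 'out of memory|oom|killed process'

# Same search in syslog, keeping only the most recent hits
grep -i 'oom' /var/log/syslog 2>/dev/null | tail -n 20
```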

It seems like there’s a possibility that a scheduled task or cron job on your host system may be terminating your container. I encountered a similar issue previously, where my hosting provider had routines set up that would kill containers consuming excessive memory at specific times. I recommend you check the cron jobs on your host by running crontab -l and examining directories like /etc/cron.d/ and /etc/cron.daily/. Additionally, reviewing system logs around 3:05 AM with the command journalctl -S "3:00" -U "3:10" may provide more insights into what’s happening at that hour. Some monitoring or security tools might also terminate containers based on resource usage, so ensure there’s no such software configured for scheduled scans.
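Putting those suggestions in one place - note that your host's cron may run in a different timezone than the container's TZ=America/Chicago, so the 3:05 AM in the container logs may not be 3:05 on the host. The "5 3" grep pattern below is just the minute/hour pair for 03:05; adjust it if the timezones differ:

```shell
# Per-user and system-wide cron entries
crontab -l 2>/dev/null
ls -la /etc/cron.d/ /etc/cron.daily/ 2>/dev/null

# Search the common cron locations for jobs scheduled at minute 5, hour 3
grep -rnE '^\s*5\s+3\s' /etc/crontab /etc/cron.d/ 2>/dev/null

# systemd timers are the other common source of scheduled jobs
systemctl list-timers --all 2>/dev/null

# Journal entries in the window around the shutdown (today, 03:00-03:10)
journalctl -S "03:00" -U "03:10" --no-pager 2>/dev/null
```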

Classic container monitoring problem. I’ve seen this exact issue - containers getting killed by system processes or hitting resource limits.

SIGTERM usually means Docker daemon or the host system killed it. Check if you’ve got maintenance scripts running at 3:05 AM. Also check your host’s memory usage around that time.

Here’s what I’d actually do - set up automated monitoring that detects when your container dies and restarts it immediately. Plus configure alerts so you know what’s causing it.

I use Latenode for this. Create a workflow that checks your Docker container status every few minutes. If it's down, it restarts the container automatically and sends you a notification with the logs.

You can also grab system metrics right before shutdown happens, so you’ll finally see what’s killing your container. Way better than checking logs manually every morning.
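If you'd rather not depend on an external service, a plain cron-driven watchdog covers the restart-and-capture part. One relevant detail: your compose file sets restart: unless-stopped, and that policy does not restart a container after an explicit docker stop - so the fact that it stays down suggests something is issuing a stop, which a watchdog like this sketch would catch (container name taken from the compose file above; log paths are placeholders to adjust):

```shell
#!/bin/sh
# Watchdog sketch: if the container is not running, save its recent logs
# for diagnosis, restart it, and record the event. Run from cron, e.g.:
#   */5 * * * * /usr/local/bin/watchdog.sh
CONTAINER=reverse_proxy

# "missing" covers the case where the container does not exist at all
status=$(docker inspect -f '{{.State.Status}}' "$CONTAINER" 2>/dev/null || echo "missing")

if [ "$status" != "running" ]; then
    # Capture the last log lines from the dead container before restarting
    docker logs --tail 50 "$CONTAINER" > "/tmp/${CONTAINER}-crash-$(date +%s).log" 2>&1
    docker start "$CONTAINER" 2>/dev/null
    echo "$(date): $CONTAINER was $status, attempted restart" >> /tmp/container-watchdog.log
fi
```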

That 3:05 AM timing screams a scheduled system operation, not a random resource issue. I've seen this exact thing - containers getting killed by misconfigured logrotate jobs that restart services.

Check your cron jobs, but also dig into your Docker daemon config and run systemctl list-timers. Host systems often run cleanup tasks that target specific containers. Also worth checking: does your hosting provider have automated maintenance windows? Most VPS/cloud services run background work during off-peak hours that can mess with containers - look for system updates, disk maintenance, or security scans around that time.

The SIGTERM signal is the key clue here - that's a graceful shutdown request, not a crash, which really narrows things down. Try running docker events overnight to watch the Docker daemon in real time and see what correlates with the shutdown timing.
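For the overnight capture, a minimal sketch (streams from the moment you start it, so launch it before bed and check the file after 3:05 AM; the output path is just an example):

```shell
# Stream daemon events for the container from the compose file above,
# with a timestamp per event, into a log file in the background
docker events \
  --filter 'container=reverse_proxy' \
  --format '{{.Time}} {{.Type}} {{.Action}}' \
  > /tmp/docker-events.log 2>&1 &
```

Whatever stops the container will show up as a kill/stop/die sequence with an exact timestamp, which you can then line up against cron, systemd timers, or provider maintenance windows.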