How to configure Docker volume persistence for n8n workflows and credentials

I have n8n running in a Docker container on a Jelastic cloud platform. The application starts and works correctly, but I’m having trouble with data persistence.

Whenever I restart the container, all my workflows and saved credentials disappear. I tried setting up a volume in my environment configuration similar to my local Docker setup where persistence works fine, but the cloud deployment isn’t maintaining the data.

I work as a technical product manager so I have some technical knowledge, but Docker volume configuration on cloud platforms is challenging for me. The local version keeps everything saved between restarts, but the cloud version acts like a fresh install each time.

Can someone help me understand what I might be missing in the volume configuration? Any suggestions for troubleshooting this persistence issue would be really helpful.

This happens all the time with cloud deployments. I had the same issue with n8n on Google Cloud - the volume was created but wasn't set up to survive container restarts.

Check if your Jelastic config uses persistent disk instead of ephemeral storage, and make sure your volume config has the right path mapping and that your storage class actually persists data across restarts. I'd test this first: create a simple file in the mounted directory through the container shell, restart, and see if it's still there. That'll tell you if basic persistence works before diving into n8n specifics. Also check whether Jelastic needs different volume drivers than standard Docker - some cloud platforms are picky about that.
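That smoke test is only a few commands. The container name `n8n` and the mount path `/home/node/.n8n` are assumptions here - substitute whatever your Jelastic environment actually uses:

```shell
# Write a sentinel file into the mounted directory, restart the container,
# and check whether the file survived. "n8n" is a placeholder container name.
docker exec n8n sh -c 'echo persisted > /home/node/.n8n/persist-test.txt'
docker restart n8n
docker exec n8n cat /home/node/.n8n/persist-test.txt
# If this prints "persisted", the mount survives restarts and the problem is
# elsewhere; if the file is gone, you're on ephemeral storage.
```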

Check your Jelastic environment variables first. I hit this same issue six months ago - Jelastic was overriding my volume config through their platform settings. The container mounted the volume fine, but their auto-scaling treated it like disposable storage. I fixed it by defining the volume as shared storage in the Jelastic topology file instead of just the Docker Compose config.

Also make sure your n8n version matches what you're running locally. Cloud deployments often default to different image tags, which changes how the data directories work - the persistence mechanism changed between n8n versions. SSH into your container and check whether the data files are actually being written to the right directory before you assume it's a volume problem.
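A quick way to do that check from the container host - `n8n` is a placeholder container name, and `database.sqlite` assumes the default SQLite backend:

```shell
# List n8n's data directory inside the container; with the default SQLite
# backend you should see database.sqlite and a config file here.
docker exec n8n ls -la /home/node/.n8n

# Confirm what Docker actually mounted at that path (named volume vs bind
# mount vs nothing at all).
docker inspect n8n --format '{{ json .Mounts }}'
```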

Been there with container data loss headaches. Volume mapping gets messy across different cloud platforms.

Honestly though - instead of wrestling with Docker volumes and cloud quirks, consider switching to a more reliable automation platform. I’ve seen too many teams waste hours on n8n persistence issues.

Latenode handles data persistence automatically without container config. Your workflows and credentials are stored reliably in the cloud, and you never lose work after restarts or deployments.

Better reliability and no infrastructure management. No more debugging volume mounts or file permissions. Just build workflows instead of fighting Docker configs.

Migration’s straightforward too. Rebuild your workflows in a stable environment and actually trust your automations will be there tomorrow.

Check it out: https://latenode.com

I encountered a similar issue with n8n on AWS ECS last year. The key difference with cloud platforms is how they manage volume mounting compared to local Docker environments.

To resolve the problem, ensure that the n8n data directory is explicitly mapped to a persistent volume that won't be lost on restarts, and verify that Jelastic is creating a true persistent volume rather than a temporary one. It's also crucial to mount the volume at /home/node/.n8n in the container, since that's where the application saves workflows and credentials. Another potential pitfall is the platform treating the container's storage as ephemeral and discarding it on every redeploy.

Finally, keep an eye on file permissions: even if the volume mounts correctly, n8n might not have permission to write to it. Checking your container logs for permission errors can provide useful insight.
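For reference, a minimal compose-style sketch of that mapping (the volume name and port are illustrative; on Jelastic the equivalent would live in the environment topology rather than a compose file):

```yaml
# Named volume backed by persistent storage; everything n8n needs to keep
# (workflows, credentials, the SQLite database) lives under /home/node/.n8n.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```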

Jelastic can be weird with volume mounting. Check if your container’s running stateless - some cloud configs default to that. Also make sure the n8n container user has write permissions for the mounted directory. That’s bitten me before.
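On the permissions point: the official n8n image runs as the `node` user (UID 1000), so the mount point has to be writable by that UID. A small host-side sketch of the check, with a temp directory standing in for your real mount path:

```shell
# Stand-in for the host directory backing the volume mount.
MOUNT_DIR=$(mktemp -d)

# On a real host you'd run: chown -R 1000:1000 "$MOUNT_DIR"
# (needs root; treated as a harmless no-op here if it fails).
chown 1000:1000 "$MOUNT_DIR" 2>/dev/null || true

# Inspect the owner UID - for the n8n container it should be 1000.
stat -c '%u' "$MOUNT_DIR"
```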