n8n workflow hangs and writes corrupted files

I’m running n8n in a Docker container on my local machine. I have a basic workflow that receives data through a Webhook trigger and saves binary files with a Write Binary File node. The node writes to a folder that’s mapped from the container to my host system.

The workflow worked fine until I made some changes to install an image metadata library. Now when the webhook is triggered, the execution never completes and shows as running forever in the n8n interface, and the webhook call returns a 500 error. Files do get created in the target folder, but they’re corrupted and can’t be opened as images.

The logs show an error about moving files from temp storage to the final binary data location, even though the files appear to exist on disk. Here’s my workflow setup:

{
  "name": "save-photo-upload",
  "nodes": [
    {
      "parameters": {},
      "name": "Trigger",
      "type": "n8n-nodes-base.start",
      "typeVersion": 1,
      "position": [150, 300],
      "id": "start-node-123"
    },
    {
      "parameters": {
        "fileName": "=/uploads/images/{{ $json.query.imageName }}.png",
        "dataPropertyName": "=fileData",
        "options": {}
      },
      "name": "Save File",
      "type": "n8n-nodes-base.writeBinaryFile",
      "typeVersion": 1,
      "position": [800, 300],
      "id": "file-writer-456"
    },
    {
      "parameters": {
        "authentication": "headerAuth",
        "httpMethod": "POST",
        "path": "photo-upload-handler",
        "responseMode": "lastNode",
        "options": {}
      },
      "name": "HTTP Receiver",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [400, 300],
      "webhookId": "webhook-789",
      "id": "http-trigger-101"
    }
  ],
  "connections": {
    "HTTP Receiver": {
      "main": [[
        {
          "node": "Save File",
          "type": "main",
          "index": 0
        }
      ]]
    }
  },
  "active": true
}

The error shows: ENOENT: no such file or directory, rename '/home/node/.n8n/binaryData/temp/...' -> '/home/node/.n8n/binaryData/executions/...'

How can I debug this file handling issue? I don’t want to rebuild everything from scratch because I have custom configurations.

Had the same issue when my n8n binary data storage got corrupted after container updates. Hanging executions plus file corruption usually mean your binary data directory is in a bad state. Stop your n8n container and manually clear the binary data temp folder - it’s at /home/node/.n8n/binaryData/temp/ inside the container. Access it through your volume mount on the host, delete everything in the temp directory while n8n is stopped, then restart.

That image metadata library you installed might also be holding file handles or creating race conditions when temp files get moved to their final location. Check whether it spawns background processes that don’t clean up properly. And verify your disk space on both host and container - the rename operation fails without enough free space during the move.

Your workflow structure looks fine, so this is most likely an environment or dependency problem, not configuration.
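If you want to try that cleanup, here’s a minimal sketch. The container name n8n and the host mount path are assumptions - substitute whatever your setup actually uses:

```shell
CONTAINER="n8n"                         # assumption: your container's name
HOST_DATA="$HOME/.n8n"                  # assumption: host path mounted at /home/node/.n8n
TEMP_DIR="$HOST_DATA/binaryData/temp"

docker stop "$CONTAINER"

# Clear only the temp area; leave binaryData/executions and your configs untouched
if [ -d "$TEMP_DIR" ]; then
  rm -rf "$TEMP_DIR"/*
fi

docker start "$CONTAINER"
```

Run it on the host, not inside the container, so you don’t need n8n up to do the cleanup.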

Check if your Docker container’s running out of memory during file processing. I had similar hanging issues - it turned out the image metadata library was eating RAM during operations. Run docker stats while triggering the webhook to watch for memory spikes. That library might also need extra volume mounts or environment variables to work in containers. Did you update your Docker setup after installing it?
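A quick way to watch that while reproducing the hang - this is a sketch, and the container name, port, auth header, and test file are assumptions (the URL path comes from the workflow’s photo-upload-handler webhook):

```shell
CONTAINER="n8n"                                    # assumption: container name
URL="http://localhost:5678/webhook/photo-upload-handler?imageName=test"

# Stream container CPU/memory usage in the background while the webhook runs
docker stats "$CONTAINER" &
STATS_PID=$!

# Fire the webhook with a small binary payload (header name/value are placeholders
# for whatever your header auth credential is configured as)
curl -X POST "$URL" -H "X-Auth: my-secret" --data-binary @test.png

kill "$STATS_PID" 2>/dev/null
```

If memory climbs steadily until the execution hangs, the library (or the Node process) is the likely culprit and you may need to raise the container’s memory limit.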

This is a classic Docker volume permission issue I’ve seen with n8n containers before. The ENOENT error during file renames usually means the container can’t write to the mounted volume, or there’s a user ID mismatch between container and host.

First, check your Docker container’s user permissions. The n8n container runs as user ‘node’ (UID 1000), so your host directory needs to be writable by this user. Fix it with sudo chown -R 1000:1000 /path/to/your/mounted/directory on the host.

Also verify your Docker volume mapping. Make sure you’re mounting the binary data directory correctly in your docker-compose file or run command. The container needs write access to both the temp and final binary data locations.

Since this started after you installed the image metadata library, there might be a conflict with how binary data gets processed. Try removing that library temporarily to see if the issue goes away. The corruption suggests the file write gets interrupted mid-process, which fits the permission theory.
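To check and fix that, a sketch - the mount path is a placeholder for whatever host directory you map into the container:

```shell
MOUNT_DIR="/path/to/your/mounted/directory"   # placeholder: your host directory for /uploads/images

# UID the n8n container actually runs as (official image: user "node", UID 1000)
docker exec n8n id -u

# Numeric owner of the host directory - should match the UID above
ls -ldn "$MOUNT_DIR"

# If they differ, hand ownership to the container user
sudo chown -R 1000:1000 "$MOUNT_DIR"
```

Using ls -ldn (numeric IDs) avoids confusion when UID 1000 maps to a different username on the host than in the container.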