n8n workflow hangs indefinitely and writes corrupted files

I’m running n8n in a Docker container on my local machine. I have a basic workflow that takes data from a webhook trigger and saves it to disk with the Write Binary File node. The node writes into a container folder that’s bind-mounted to a directory on my local server.

The workflow is designed to receive image uploads from a web form and store them on my server. Everything was working fine until I made some changes to the Docker container (added an image metadata parsing library). Now I’m stuck and can’t figure out how to fix it.

When the workflow runs, it never finishes, and the API request that triggered it gets a 500 response. In the n8n interface, the execution stays in “running…” status forever. The weird part is that a file does get created in the target folder at the right size, but it’s corrupted and won’t open as an image.

The logs show an error about moving binary data from the temp folder to the final location, but the file is actually there. I’ve tried everything I can think of but I’m out of ideas. How can I debug this issue further? What might be causing this problem?

Here’s my workflow setup:

{
  "name": "save-photo-upload",
  "nodes": [
    {
      "parameters": {},
      "name": "Start",
      "type": "n8n-nodes-base.start",
      "typeVersion": 1,
      "position": [250, 400],
      "id": "start-node-123"
    },
    {
      "parameters": {
        "fileName": "=/uploads/images/{{ $json.query.imageName }}.png",
        "dataPropertyName": "=fileData",
        "options": {}
      },
      "name": "Save File",
      "type": "n8n-nodes-base.writeBinaryFile",
      "typeVersion": 1,
      "position": [800, 400],
      "id": "file-writer-456"
    },
    {
      "parameters": {
        "authentication": "headerAuth",
        "httpMethod": "POST",
        "path": "image-upload-endpoint",
        "responseMode": "lastNode",
        "options": {}
      },
      "name": "HTTP Trigger",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [500, 400],
      "webhookId": "upload-webhook-789",
      "id": "webhook-node-abc"
    }
  ],
  "connections": {
    "HTTP Trigger": {
      "main": [[
        {
          "node": "Save File",
          "type": "main",
          "index": 0
        }
      ]]
    }
  },
  "active": true
}

I hit the same issue when adding new dependencies to my n8n Docker setup. Hanging executions with corrupted files usually mean permissions or volume-mounting problems from your container changes.

First, check whether your new metadata parsing library is interfering with the file-writing process. Some image processing libraries lock files or change how binary data gets handled. Remove that library temporarily and see if the problem persists.

A file created at the correct size but corrupted means the write operation is getting interrupted mid-process. Run docker logs [container_name] during execution - you might catch memory issues or library conflicts that don’t show up in n8n’s interface.

Also double-check your volume mappings after the container rebuild. Docker sometimes doesn’t recreate mount points properly and you end up with permission mismatches. I had to completely nuke and recreate my container volumes to fix a similar hang.

Finally, try adding a Set node before the file writer so you can inspect the actual binary data structure coming through.
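To confirm the file really is mangled rather than just mislabeled, a quick signature check helps. This is plain Python with nothing n8n-specific; the throwaway temp file here just stands in for your written upload - point `looks_like_png` at the real path in your mounted folder:

```python
import os
import tempfile

# Every valid PNG begins with this fixed 8-byte signature. A file with
# the right size but a wrong signature was corrupted during the write,
# not merely renamed or served with the wrong MIME type.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Return True if the file starts with the PNG signature."""
    with open(path, "rb") as f:
        return f.read(8) == PNG_SIGNATURE

# Demo on a throwaway file standing in for the written upload
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"not a real image")  # simulate a corrupted write
    corrupted_path = f.name

print(looks_like_png(corrupted_path))  # prints False
os.unlink(corrupted_path)
```

If the check fails, diff the first bytes against the original upload - binary data that arrives base64-encoded but gets written as text is a common way to end up with an unopenable image.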

This screams a binary data handling issue from your Docker changes. When n8n workflows hang with corrupted files, it’s usually because the binary stream isn’t being closed or flushed properly. Your new metadata library is probably interfering with Node.js file descriptors or memory allocation.

First, check your container’s memory limits - image processing libraries are RAM hogs and cause weird timeouts. Run htop inside the container during workflow execution to see if you’re maxing out resources.

Try adding explicit encoding to your writeBinaryFile node - Docker environment changes can alter how binary data gets read. Also set dataPropertyName to ‘data’ instead of ‘fileData’ and see what happens.

That 500 error plus the infinite “running” status? Classic unhandled promise rejection. Your metadata library probably added an async operation that never resolves. Check whether it needs specific setup or cleanup steps in Docker.
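For what it’s worth, here’s roughly how the Save File node’s parameters would look with that change. Treat ‘data’ as a common default rather than a guarantee - the actual property name depends on what your web form posts, so check the webhook output first:

```json
{
  "parameters": {
    "fileName": "=/uploads/images/{{ $json.query.imageName }}.png",
    "dataPropertyName": "data",
    "options": {}
  },
  "name": "Save File",
  "type": "n8n-nodes-base.writeBinaryFile",
  "typeVersion": 1
}
```

Note the leading “=” is gone from dataPropertyName - it’s a plain property name here, not an expression.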

Your Docker container rebuild probably screwed up the temp folder permissions. n8n stages binary data in /tmp before moving it to its final location, and your metadata lib (or n8n itself) needs write access there but can’t get it.

Run docker exec -it [container] ls -la /tmp while the workflow is running - you’ll likely see ownership that doesn’t match the user n8n runs as. Quick fix: rebuild the container with the proper user, or mount /tmp as a volume. Files end up corrupted when the temp-to-final move fails halfway through.
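Concretely, the checks and the rebuild might look like the sketch below. The container name “n8n”, the image’s default “node” user, and the host path /srv/uploads are all assumptions - substitute your own values:

```shell
# Inspect temp-dir ownership and the user n8n actually runs as
docker exec -it n8n ls -la /tmp
docker exec -it n8n id

# Recreate the container with an explicit user and a dedicated /tmp volume
docker rm -f n8n
docker run -d --name n8n --user node \
  -v n8n_tmp:/tmp \
  -v /srv/uploads:/uploads/images \
  -p 5678:5678 \
  n8nio/n8n
```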