n8n workflow gets stuck and creates corrupted files

I’m running n8n in a Docker container on my local machine. I have a basic automation that uses a webhook to receive data from an API POST request and then saves it using a file output node. The file gets saved to a folder in the container that maps to a directory on my host system. This workflow is supposed to handle image uploads from a web form and store them on my server.

Everything was working fine until I made some changes to the Docker setup (added an image processing library). Now I can’t get it back to working order and I’m stuck. I could start over completely but I’ve customized the installation quite a bit and I’m worried I won’t remember all the changes.

When I trigger the workflow through the API, it never completes and gives a 500 error back to the client. In the n8n interface, the execution shows as “running…” forever. The weird thing is that a file does get created in the target folder and it’s about the right size, but the image is corrupted and won’t open.

The logs show an error about moving binary data from a temp folder, but I can see the execution folder and file are actually there. What could be causing this issue and how can I debug it further?

Here’s my workflow setup:

{
  "name": "save-form-image",
  "nodes": [
    {
      "parameters": {},
      "name": "Start",
      "type": "n8n-nodes-base.start",
      "typeVersion": 1,
      "position": [150, 300],
      "id": "start-node-id"
    },
    {
      "parameters": {
        "filePath": "=/uploads/gallery/{{ $json.query.imageName }}.png",
        "binaryPropertyName": "=fileData",
        "options": {}
      },
      "name": "Save File",
      "type": "n8n-nodes-base.writeBinaryFile",
      "typeVersion": 1,
      "position": [800, 280],
      "id": "file-writer-node"
    },
    {
      "parameters": {
        "authentication": "headerAuth",
        "httpMethod": "POST",
        "path": "image-upload-handler",
        "responseMode": "lastNode",
        "options": {}
      },
      "name": "API Receiver",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [400, 280],
      "webhookId": "webhook-uuid-here",
      "id": "webhook-receiver-node"
    }
  ],
  "connections": {
    "API Receiver": {
      "main": [
        [
          {
            "node": "Save File",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "active": true
}

The image processing lib you added is probably interfering with file writes. First, check whether your container has enough disk space - corrupted files often happen when a write fails halfway through. Also try testing with a smaller image; if that succeeds, you’re likely looking at a timeout rather than actual corruption.
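One way to act on both suggestions: check free space inside the container, then re-test with a tiny payload. The container name `n8n`, the webhook URL, and the form field name `fileData` below are all assumptions - adjust them to your setup.

```shell
# Check free space where n8n writes files ("n8n" is an assumed container name).
docker exec n8n df -h /uploads /tmp 2>/dev/null || echo "container not running"

# Create a small (1 KB) test payload - if even this arrives corrupted,
# the cause is unlikely to be a timeout or a partial write from disk pressure.
head -c 1024 /dev/urandom > /tmp/small-test.bin

# Post it to the webhook (URL and field name are assumptions).
curl -s -o /dev/null -w '%{http_code}\n' \
  -F "fileData=@/tmp/small-test.bin" \
  "http://localhost:5678/webhook/image-upload-handler" || echo "request failed"
```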

I see a potential issue with your binary data handling. The webhook receives multipart form data, but you’re referencing fileData as the binary property without checking whether the upload is actually exposed under that name. When a form sends the image as multipart/form-data, the binary property is usually not named fileData by default. Add a debug step, or inspect the webhook’s raw output, to see which binary properties are actually available.

Your file path expression might be wrong too. The {{ $json.query.imageName }} syntax assumes the image name arrives as a query parameter, but with POST file uploads this data usually sits in the body or form fields. Double-check that $json.query.imageName actually resolves to a valid filename.

Finally, since the corruption started after adding the image processing library, there might be a conflict in binary data handling. Try temporarily removing that library - if the problem goes away, you’ll know it’s a library conflict rather than a configuration problem.
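If inspection shows, for example, a binary property named data and the filename in the request body, the Save File node parameters might look like this - both names are assumptions until you’ve confirmed them against the webhook’s actual output:

```json
{
  "parameters": {
    "filePath": "=/uploads/gallery/{{ $json.body.imageName }}.png",
    "binaryPropertyName": "data",
    "options": {}
  },
  "name": "Save File",
  "type": "n8n-nodes-base.writeBinaryFile",
  "typeVersion": 1
}
```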

I’ve encountered similar issues following Docker configuration changes, particularly regarding volume mounts and file permissions. The fact that files are being created but are corrupted suggests a problem in the binary data transfer between the temporary storage and the final location.

Firstly, you should verify that your volume mounts remain correctly configured after the Docker adjustments. If the addition of the image processing library altered any volume mappings or the container’s working directory, that could contribute to the issue. The temp folder error indicates that n8n might be unable to move files from its internal temporary location to your mapped volume.
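A quick way to verify the mounts, assuming the container is named `n8n` (substitute your own name):

```shell
# Print every host-path -> container-path mapping on the running container;
# confirm that the destination your workflow writes to (/uploads) is listed.
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' n8n \
  2>/dev/null || echo "container not found"
```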

Consider running the container with the --privileged flag temporarily to rule out permission issues (don’t leave it enabled beyond testing). Additionally, ensure that the /uploads/gallery/ directory actually exists and has the necessary write permissions both inside the container and on your host system; Docker changes can reset ownership to root, which would block file writes.
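A sketch of those permission checks, again assuming a container named `n8n` running the official image (which runs as user node, UID 1000):

```shell
# Confirm the target directory exists and is writable from inside the container.
docker exec n8n sh -c 'ls -ld /uploads/gallery && touch /uploads/gallery/.write-test && rm /uploads/gallery/.write-test' \
  2>/dev/null || echo "directory missing or not writable"

# On the host, the mapped directory usually needs to be owned by UID 1000
# to match the container user (host path below is a placeholder):
# sudo chown -R 1000:1000 /path/to/host/uploads/gallery
```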

It’s also possible you’re facing memory constraints; image processing libraries can be resource-intensive, and if your container exhausts memory while processing binary data, this might result in file corruption. Check the logs with docker logs [container-name] for any memory-related errors that might not be visible in the n8n interface.
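For example (the container name `n8n` is an assumption):

```shell
# Check whether the kernel OOM-killed the container at any point.
docker inspect -f 'OOMKilled: {{ .State.OOMKilled }}' n8n 2>/dev/null || echo "container not found"

# Scan recent logs for memory-related errors (e.g. Node.js heap exhaustion).
docker logs n8n 2>&1 | grep -iE 'heap|out of memory|killed' | tail -n 20 || true

# Snapshot current memory usage; re-run the workflow and watch for spikes.
docker stats --no-stream n8n 2>/dev/null || true
```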