Unable to establish connection between n8n and local MCP server - MCP Client sub-node connection failure

I’m having trouble setting up my n8n automation workflow to communicate with a local MCP server I built using FastAPI MCP. No matter what configuration I try, n8n keeps failing to establish the connection.

I’ve already verified that my MCP server works properly by testing it with MCP-Inspector and also confirmed it functions correctly with Cursor IDE. Both tools can connect and use the server without any issues.
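For anyone trying to reproduce that verification step: MCP Inspector ships as an npm package, so launching it locally is a one-liner (the endpoint URL below is just an example — use whatever your server exposes):

```shell
# Launch the MCP Inspector UI, then connect it to the local
# endpoint (e.g. http://localhost:8000/mcp) from the browser
npx @modelcontextprotocol/inspector
```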

The error I keep getting from n8n is a connection failure in the MCP Client node. I’m running n8n in a Docker container using this command:

docker run -it --rm --name automation-node \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  docker.n8n.io/n8nio/n8n

I added the compression skip flag based on some forum posts suggesting that gzip compression might interfere with SSE transport, but this didn’t solve the problem.

I’m running n8n 1.94.1 in Docker with the SQLite database and an enterprise license. The MCP Client node is configured to point to my local server endpoint but consistently fails to connect.

Has anyone successfully connected n8n to a local MCP server? What configuration steps might I be missing?

Update: Fixed

The issue was resolved by changing the server URL from http://localhost:8000/mcp to http://host.docker.internal:8000/mcp. The problem was that localhost inside the Docker container refers to the container itself, not the host machine where my MCP server was running.

Running into Docker networking issues like this is frustrating, but it teaches you a lot about container isolation. I’ve dealt with similar headaches when connecting containerized apps to local services.

One thing that might help others facing the same problem: check your MCP server’s binding configuration. Make sure it’s listening on 0.0.0.0:8000 rather than 127.0.0.1:8000 — a server bound to 127.0.0.1 only accepts connections over the host’s loopback interface, so even with the correct Docker networking setup the container’s requests will be refused.

Also worth mentioning: if you’re planning to scale this setup or move it to production, consider putting both n8n and your MCP server in the same Docker network using docker-compose. That way they can communicate using service names instead of relying on host.docker.internal, which makes the whole setup more portable and robust.
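To make the docker-compose suggestion concrete, here’s a minimal sketch. The service names, build path, and the uvicorn entrypoint (`main:app`) are assumptions — adjust them to your project. Note the server is started with `--host 0.0.0.0` so it accepts connections from other containers:

```yaml
# Hypothetical docker-compose.yml - names and paths are placeholders
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - automation_data:/home/node/.n8n

  mcp-server:
    build: ./mcp-server                              # your FastAPI MCP project
    command: uvicorn main:app --host 0.0.0.0 --port 8000
    expose:
      - "8000"

volumes:
  automation_data:
```

With this layout, the MCP Client node’s URL becomes http://mcp-server:8000/mcp — no host.docker.internal needed.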

Good catch on the Docker networking issue. This is actually a pretty common stumbling block when working with containerized applications that need to communicate with services on the host machine. I ran into something similar when setting up my development environment with multiple Docker containers.

For future reference, if you’re on Linux and host.docker.internal doesn’t work out of the box, you can add --add-host=host.docker.internal:host-gateway to your docker run command to make it resolvable. On Linux you can also fall back to --network=host mode (it isn’t supported the same way on Docker Desktop for Mac/Windows), though this removes the container’s network isolation.
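Applied to the original docker run command from the question, the Linux variant would look something like this:

```shell
# Same n8n container as before, with host.docker.internal mapped
# to the host gateway so it resolves on Linux
docker run -it --rm --name automation-node \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  --add-host=host.docker.internal:host-gateway \
  docker.n8n.io/n8nio/n8n
```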

It’s worth noting that the compression flag you tried was actually addressing a different category of issues related to Server-Sent Events, so that was good troubleshooting instinct even though it wasn’t the root cause here.

yeah docker networking can be a real pain sometimes! i’ve been bitten by the localhost vs host.docker.internal thing more times than i care to admit lol. another quick tip - if you’re still having issues you can also try using your actual machine’s IP address instead of host.docker.internal, sometimes that works when the docker internal stuff acts up.
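If you go the machine-IP route, a quick way to look it up (interface names like en0 are assumptions — yours may differ):

```shell
# Linux: list globally routable IPv4 addresses
ip -4 addr show scope global | grep inet

# macOS: print the IP of the primary interface (often en0)
ipconfig getifaddr en0
```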