Local MCP server integration with n8n workflow fails - Connection error in MCP Client node

I’m having trouble getting my n8n automation workflow to connect to a local MCP server that I built with the FastAPI MCP framework. The connection keeps failing even though I’ve verified that the server works fine on its own.

I already tested my MCP server with MCP Inspector, and it shows all tools as available and working correctly. I also used it successfully with Cursor IDE, so I know the server itself is functioning properly.

When I try to use the MCP Client node in n8n, I get a connection error saying it cannot reach the MCP server. This happens no matter what configuration I try.

I’m running n8n in a Docker container with this setup:

docker run -it --rm --name n8n_instance \
  -p 5678:5678 \
  -v n8n_volume:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  docker.n8n.io/n8nio/n8n

I added the compression skip flag because I thought SSE transport might be having issues with gzip compression, but that didn’t help.

My n8n version is 1.94.1, running in Docker with a SQLite database. The MCP Client node is configured to point to my local server endpoint, but it just won’t connect.

Update: Fixed

The issue was that I was using http://localhost:8000/mcp which points to localhost inside the Docker container, not my host machine. Changing it to http://host.docker.internal:8000/mcp solved the problem and now the MCP Client connects successfully!
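
In case it helps anyone confirm the same problem: you can try both URLs from inside the container before touching the node config. This assumes the container is still named n8n_instance as in my docker run command above, and that wget is available in the n8n image (it’s Alpine-based, so the BusyBox wget should be there).

# Fails with "connection refused": localhost here is the n8n container itself
docker exec n8n_instance wget -O- http://localhost:8000/mcp

# Any HTTP response (even an error) means the server is reachable from the container
docker exec n8n_instance wget -O- http://host.docker.internal:8000/mcp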

Classic Docker networking gotcha! I hit the same thing connecting n8n to my local PostgreSQL - wasted hours debugging the database connection when it was just the localhost issue. For anyone else stuck on this: host.docker.internal exists precisely so containers can reach services running on the host machine. Linux users might need --add-host=host.docker.internal:host-gateway in the docker run command if it doesn’t resolve out of the box. You can also run n8n with --network=host, which makes the container share the host’s network stack directly; then localhost:8000 works as expected, but be aware that every port the container listens on is then exposed directly on the host.
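
Rough versions of both commands, reusing the names from the docker run in the original post (adjust the container and volume names to your own setup):

# Option 1: keep the bridge network and explicitly map host.docker.internal
# to the host gateway (mostly needed on Linux)
docker run -it --rm --name n8n_instance \
  --add-host=host.docker.internal:host-gateway \
  -p 5678:5678 \
  -v n8n_volume:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

# Option 2: share the host's network stack (Linux). localhost:8000 then works
# from inside the container, and -p is ignored because nothing is published.
docker run -it --rm --name n8n_instance \
  --network=host \
  -v n8n_volume:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n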

Nice catch on the Docker networking! Firewalls can also block connections even when the networking looks fine. If host.docker.internal still doesn’t work, make sure your MCP server is binding to 0.0.0.0:8000 rather than 127.0.0.1:8000 - containers can’t reach services that only listen on the loopback interface.
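
A quick way to check this on the host (ss is the Linux tool; lsof -i :8000 gives the same information on macOS), plus the CLI equivalent of restarting uvicorn on all interfaces - main:app is just a placeholder for your actual entry point:

# 127.0.0.1:8000 in the output means loopback only; 0.0.0.0:8000 means all interfaces
ss -ltnp | grep ':8000'

# Restart the server listening on all interfaces (placeholder module path)
uvicorn main:app --host 0.0.0.0 --port 8000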

Docker networking issues with n8n are super frustrating but pretty common. The host.docker.internal fix helps, but the MCP server config matters just as much. Make sure your FastAPI server listens on all interfaces with host="0.0.0.0" in uvicorn, not just on localhost. Also check whether your MCP server has CORS settings that block requests coming from the n8n container - I had similar issues with another service and had to add proper CORS headers for cross-origin requests from the Docker network. Finally, check the n8n container logs; they usually contain more detailed errors than what shows up in the node config screen.
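
For the logs, the container name from the original command works directly, e.g.:

# Follow the n8n logs while retrying the MCP Client node; connection errors here
# are usually more specific than what the node shows in the editor
docker logs -f n8n_instance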

This topic was automatically closed 4 days after the last reply. New replies are no longer allowed.