Cannot connect n8n MCP Client sub-node to a local MCP server

I’m having trouble getting my n8n automation to communicate with a local MCP server I built using FastAPI MCP. The connection keeps failing no matter what configuration I try.

I’ve already verified that my MCP server works properly by testing it with MCP Inspector, and I also confirmed it functions correctly with Cursor IDE. So I know the server itself is running fine.

The error I keep getting from n8n shows that it cannot connect to the MCP server. I’m running n8n in a Docker container using this command:

docker run -it --rm --name automation_engine \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  docker.n8n.io/n8nio/n8n

I added the compression skip flag because I thought gzip might be interfering with the SSE transport, but that didn’t help.

My workflow has an MCP Client node that’s supposed to connect to my local server, but it just won’t establish the connection. I’ve tried several different configurations and none of them work.

Here are my current system details:

  • n8n Version: 1.94.1
  • Platform: docker (self-hosted)
  • Node.js: 20.19.1
  • Database: sqlite
  • License: enterprise

Has anyone else run into this issue? I’m not sure if it’s a networking problem with Docker or something else entirely.

Update: Fixed!

Turns out the issue was with the URL I was using. I had http://localhost:8000/mcp, which was trying to connect to localhost inside the Docker container instead of my host machine. Changed it to http://host.docker.internal:8000/mcp and now it works perfectly!
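
For anyone hitting the same thing, a quick way to confirm it’s a networking problem before touching the workflow is to make a request from inside the n8n container (this assumes busybox wget is available in the n8n Alpine image, and uses my server’s URL). Any HTTP response, even an error page, proves the network path works; “Connection refused” or a timeout means it doesn’t.

docker exec -it automation_engine \
  wget -O- http://host.docker.internal:8000/mcp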

Good catch on the Docker networking issue. This happens all the time with containerized apps trying to talk to host services. host.docker.internal works great on Docker Desktop, but for Linux production you’ll want --network host or add --add-host=host.docker.internal:host-gateway to your docker run command. I hit the same thing with a different automation setup and wasted hours messing with MCP config when it was just container isolation. If you’d rather not depend on host.docker.internal at all, you can also containerize the MCP server and run both services on the same Docker network.
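
For reference, this is roughly what the Linux variant looks like (same command as the original post; the host-gateway keyword needs Docker 20.10 or newer):

docker run -it --rm --name automation_engine \
  -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v automation_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n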

Classic Docker gotcha! I just throw --network=host on my docker run command and skip the port mapping headache. Container uses the host network directly, so localhost actually works like you’d expect. Less secure, sure, but way easier for dev work than memorizing host.docker.internal nonsense.
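
Something like this, for what it’s worth (host networking is really a Linux feature; Docker Desktop runs containers in a VM, so results there vary):

docker run -it --rm --name automation_engine \
  --network=host \
  -v automation_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

Note that -p mappings are ignored in host mode; n8n sits on host port 5678 directly, and http://localhost:8000/mcp resolves to the host like you’d expect.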

Docker networking strikes again! Had the same headaches setting up my workflow automation. What worked for me: create a custom Docker network and run both n8n and the MCP server on it. Cuts out all the localhost confusion. If you’re scaling or going to production, use docker-compose to manage both services. Way cleaner networking and you won’t need the host.docker.internal hack. Heads up - some corporate firewalls block Docker’s internal networking, so you might need --network host in those setups.
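
Here’s the plain docker CLI version of that setup as a sketch (the MCP server image and container names are made up for illustration; only the n8n side matches the original post):

docker network create mcp_net

docker run -d --name mcp_server --network mcp_net \
  my-fastapi-mcp-image  # hypothetical image for the FastAPI MCP server

docker run -it --rm --name automation_engine \
  --network mcp_net \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

On a user-defined network, containers resolve each other by name, so the MCP Client URL becomes http://mcp_server:8000/mcp - no host.docker.internal needed.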