MCP Client connection issues with local server in n8n workflow - Unable to establish connection

MCP Server Connection Problem in n8n

I’m having trouble getting my n8n automation to talk to my local MCP server that I built using FastAPI MCP. No matter what I try, the connection just won’t work.

What I’ve verified

I've already double-checked that my MCP server works properly using the MCP Inspector tool, and I've also tested it with the Cursor IDE. Both work fine, so I know the server itself is good.

The error I keep getting

When I try to use the MCP Client node in n8n, it gives me an error saying it can’t connect to the MCP server.

My current setup

I’m running n8n in a Docker container with this command:

docker run -it --rm --name automation \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  docker.n8n.io/n8nio/n8n

I added that environment variable because I read somewhere that compression might mess up the SSE transport, but it didn’t seem to help.

What I’ve tried

I've looked through several forum posts and GitHub issues about similar problems, but none of the solutions worked for me. I'm running n8n 1.94.1 on Docker with a SQLite database.

The MCP Client node is configured to point at my SSE MCP server endpoint, but it just won't connect. I can see the tools are available when I test with other clients.

System details

  • n8n Version: 1.94.1
  • Platform: Docker self-hosted
  • Node.js: 20.19.1
  • Database: SQLite
  • License: Enterprise production

Any ideas what might be causing this connection problem?

Update: Fixed!

Turns out the issue was the URL I was using. I had http://localhost:8000/mcp, but inside the Docker container that points to the container itself, not my host machine. Changing it to http://host.docker.internal:8000/mcp made everything work perfectly!
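For anyone else hitting this, here's a quick sanity check from the host. It assumes the container name `automation` and the `/mcp` path from the setup above; the `--add-host` flag is only needed on Linux, where `host.docker.internal` isn't defined by default:

```shell
# Fetch the MCP endpoint from inside the running n8n container.
# "automation" is the container name from the docker run command above;
# the n8n image ships BusyBox wget, so this should work without curl.
docker exec automation wget -qO- http://host.docker.internal:8000/mcp

# On Linux, map host.docker.internal explicitly when starting the container:
#   docker run ... --add-host=host.docker.internal:host-gateway ...
```

If the `wget` succeeds, Docker networking is fine and the problem is in the node configuration instead.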

Classic Docker networking gotcha! Hit the same issue last month with postgres connections. localhost inside the container isn’t the same as localhost on your host machine. Good catch figuring that out - host.docker.internal saves tons of headaches.

Docker networking gets everyone when they start with MCP servers. Inside the container, localhost points to the container, not your machine. I wasted hours debugging this exact thing before figuring out it was networking. You can also use your machine’s actual IP instead of host.docker.internal - works better if you’re deploying to different environments. Quick heads up: firewalls can still block these connections even with the right URL, so check those rules if you’re still having issues.
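If you want to go the host-IP route mentioned above, here's a rough sketch of the lookup and a firewall check (the interface name `en0` and the `ufw` firewall are assumptions; your system may differ):

```shell
# Find the machine's LAN IP to use in place of host.docker.internal:
hostname -I | awk '{print $1}'   # Linux
ipconfig getifaddr en0           # macOS (en0 is typically the primary interface)

# Even with the right URL, a host firewall can still block the container;
# check whether port 8000 is allowed, e.g. with ufw on Linux:
sudo ufw status
```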

Same headaches here with MCP through Docker. Hardcoding host.docker.internal gets messy when you’re running multiple services or switching between hosts. I switched to Docker Compose with a custom network - way cleaner for service discovery. Also hit a wall where the MCP server was binding to 127.0.0.1 instead of 0.0.0.0. Even with Docker networking set up right, connections would fail. Make sure your FastAPI server listens on all interfaces, not just localhost. The binding config matters as much as getting Docker networking right.
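To illustrate the binding point, a minimal sketch assuming the server is started with uvicorn and the module path is `main:app` (both assumptions; adjust to your project):

```shell
# Listen on all interfaces so Docker containers can reach the server;
# the default --host 127.0.0.1 only accepts connections from the host itself.
uvicorn main:app --host 0.0.0.0 --port 8000
```

You can confirm the binding with `ss -tlnp | grep 8000`: a listener on 127.0.0.1:8000 is only reachable locally, while 0.0.0.0:8000 is reachable from containers.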

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.