Unable to establish connection between n8n and local MCP server - MCP Client sub-node connection failure

Issue Description

I’m having trouble getting my n8n automation workflow to connect with my locally running MCP server. The server was built using the FastAPI MCP framework and works perfectly when tested with other tools.

What I’ve Verified

  • MCP server functionality confirmed through MCP Inspector testing
  • Successfully tested server integration with Cursor IDE
  • Server responds correctly to all test queries

Current Setup

Running n8n in a Docker container with this configuration:

docker run -it --rm --name automation_server \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  docker.n8n.io/n8nio/n8n

Error Details

The MCP Client node consistently fails with a connection error whenever it attempts to reach my local MCP server endpoint. I’ve tried several troubleshooting approaches, including disabling response compression, with no change.
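One quick way to narrow this down is to probe the endpoint from inside the running container, which separates network reachability from any MCP Client node issue (a sketch reusing the container name from the setup above; it assumes the n8n image’s busybox wget is available):

```shell
# Open a shell in the running n8n container
docker exec -it automation_server sh

# From inside the container, probe the MCP endpoint.
# If this times out or is refused, the container simply cannot
# reach that address, and the MCP Client node is not at fault.
wget -qO- --timeout=5 http://localhost:8000/mcp
```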

Environment Information

System Details:

  • n8n Version: 1.94.1
  • Platform: Docker (self-hosted)
  • Node.js: 20.19.1
  • Database: SQLite
  • License: Enterprise (production)

Storage Config:

  • Success/Error logs: enabled
  • Binary mode: memory
  • Manual execution: enabled

Pruning Settings:

  • Enabled with 336 hour retention
  • Max count: 10000 executions

Solution Found

The issue was Docker networking. With http://localhost:8000/mcp as the server URL, n8n resolved localhost to the container itself rather than to the host machine running the MCP server.

Fix: Changed the MCP server URL from http://localhost:8000/mcp to http://host.docker.internal:8000/mcp

This allows the containerized n8n instance to properly reach the MCP server running on the host system. The connection now works perfectly and all MCP tools are accessible within the workflow.

I had the same issue with containerized apps trying to reach host services. The host.docker.internal fix you found works great for Docker Desktop on Windows and macOS. On Linux, though, you’ll need either the --network host flag or --add-host=host.docker.internal:host-gateway added to your docker run command. What worked better for me was using my host machine’s actual LAN IP instead of localhost - find it with ip addr (or hostname -I) on Linux, or ipconfig on Windows. That approach is more portable across different Docker setups and doesn’t depend on Docker Desktop features.
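For the Linux case, that flag slots straight into the original docker run command from the post (a sketch; everything except the added --add-host line is unchanged):

```shell
docker run -it --rm --name automation_server \
  -p 5678:5678 \
  -v automation_data:/home/node/.n8n \
  -e N8N_SKIP_RESPONSE_COMPRESSION=true \
  --add-host=host.docker.internal:host-gateway \
  docker.n8n.io/n8nio/n8n
```

The host-gateway keyword requires Docker Engine 20.10 or later; with it in place, http://host.docker.internal:8000/mcp resolves correctly on Linux as well.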

You could also use Docker Compose instead of plain docker run. Run n8n and the MCP server as services on the same Compose network and reference the server by its service name - no need to mess with host.docker.internal. Way cleaner, and it behaves identically across all platforms without platform-specific workarounds.
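A minimal sketch of that setup, assuming the FastAPI MCP server is also containerized (the mcp-server service name and build path are placeholders for your own build):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - automation_data:/home/node/.n8n

  # Hypothetical service for the FastAPI MCP server;
  # adjust the build context and port to match your project.
  mcp-server:
    build: ./mcp-server
    ports:
      - "8000:8000"

volumes:
  automation_data:
```

With both services on Compose’s default network, the MCP Client URL becomes http://mcp-server:8000/mcp - service names resolve through Docker’s embedded DNS on any platform.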

Here’s what saved me hours of debugging - on Linux, use the Docker bridge network’s gateway IP directly. Run docker network inspect bridge and grab the gateway IP (usually 172.17.0.1), then set your MCP server URL to http://172.17.0.1:8000/mcp. Since it doesn’t depend on Docker Desktop, it’s reliable on plain Linux hosts; on Docker Desktop for Windows and macOS the bridge gateway sits inside Docker’s VM, so host.docker.internal is still the safer choice there. I’ve run this setup in production for months with zero connectivity problems. Just make sure your MCP server binds to 0.0.0.0 instead of 127.0.0.1 so it accepts connections from the Docker bridge network.
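Two commands tie that approach together: looking up the bridge gateway, and starting the server on all interfaces (a sketch; app:app is a placeholder for your actual FastAPI MCP module:application path):

```shell
# Print the bridge network's gateway IP (typically 172.17.0.1)
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

# Start the FastAPI MCP server bound to all interfaces so
# containers on the Docker bridge network can reach it.
uvicorn app:app --host 0.0.0.0 --port 8000
```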