I’m having trouble getting my n8n workflow to connect to a local MCP server that I built using FastAPI MCP. The connection keeps failing even though I’ve verified the server works fine.
I already tested my MCP server with MCP Inspector and it shows all tools are available and working. I also tested it with Cursor IDE and it connects without any issues. So I know the server itself is running correctly.
But when I try to use the MCP Client node in n8n, I get an error saying it cannot connect to the MCP server. I’m running n8n in a Docker container with this command:
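Roughly this (simplified; SOME_COMPRESSION_FLAG below is just a stand-in for the compression-skip variable, I’m omitting the exact name):

```
# Standard n8n Docker run, persisting data in a named volume; SQLite is
# the default database, so it needs no extra configuration.
# SOME_COMPRESSION_FLAG is a placeholder, not a real n8n setting.
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e SOME_COMPRESSION_FLAG=true \
  docker.n8n.io/n8nio/n8n
```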
I added the compression skip flag because I read that gzip compression can interfere with SSE transport, but it didn’t help.
In my workflow, the MCP Client node is configured to point at my local server endpoint, but it just won’t connect. The error message shows up in the MCP Client sub-node.
I’m using n8n version 1.94.1 in Docker with a SQLite database and an enterprise license. The MCP server is running directly on my local machine, outside of Docker.
Update: The issue was solved by changing the URL from http://localhost:8000/mcp to http://host.docker.internal:8000/mcp. The problem was that localhost inside the Docker container refers to the container itself, not the host machine where my MCP server was running.
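You can see the difference from inside the container (assuming the container is named n8n and the image’s busybox wget is available):

```
# Fails: "localhost" here is the n8n container itself; nothing is
# listening on port 8000 inside it.
docker exec n8n wget -qO- http://localhost:8000/mcp

# Works on Docker Desktop: host.docker.internal resolves to the host
# machine, where the MCP server actually runs.
docker exec n8n wget -qO- http://host.docker.internal:8000/mcp
```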
Docker networking strikes again! This exact thing tripped me up when I first started containerizing my dev setup. The localhost resolution inside containers is one of those gotchas that seems obvious later but wastes hours of debugging. Another approach: use your host machine’s actual IP instead of host.docker.internal if you hit platform compatibility issues. Find it with ip route show default on Linux or ipconfig getifaddr en0 on macOS. I’ve found this more reliable in some CI/CD environments where host.docker.internal isn’t always available. Key lesson: always think about network boundaries when containerizing apps that need to talk to host services.
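For example (the IP values are just illustrative):

```
# Linux: the default route shows the interface; "ip route get" shows the
# host's outbound address in its "src" field.
ip route show default        # e.g. "default via 192.168.1.1 dev eth0 ..."
ip route get 1.1.1.1         # look for "... src 192.168.1.42 ..."

# macOS: ask the primary interface (usually en0) for its address.
ipconfig getifaddr en0       # e.g. 192.168.1.42

# Then point the MCP Client node at http://<that-ip>:8000/mcp
```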
Yeah, classic Docker issue! I just use --network=host when developing locally; it cuts through all the hostname mess. You lose some isolation, but for dev work it’s fine and everything works like it’s running natively. Obviously don’t do this in production, but for testing it’s way simpler than tracking internal hostnames.
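Something like this (Linux only; on Docker Desktop the "host" is a hidden VM, so it won’t reach your Mac/Windows machine):

```
# Share the host's network namespace: localhost inside the container is
# the real host, so http://localhost:8000/mcp just works.
# Note: -p mappings are ignored in this mode; n8n listens on host port 5678.
docker run -it --rm --network=host \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```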
Good catch on the Docker networking issue. I hit the same problem with containerized apps talking to host services. host.docker.internal works great on Docker Desktop for Mac/Windows, but Linux users need --add-host=host.docker.internal:host-gateway in their docker run command, since it isn’t available there by default. You could also try --network=host mode if you don’t need container isolation, though you’ll lose the benefits of port mapping. For production, I just run both the MCP server and n8n in the same Docker network with docker-compose, which kills these hostname headaches completely.
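Rough sketches of both, assuming the MCP server image/container is called mcp-server (a hypothetical name):

```
# Linux fix: make host.docker.internal resolve to the host gateway.
docker run -it --rm --name n8n \
  --add-host=host.docker.internal:host-gateway \
  -p 5678:5678 -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

# Shared-network alternative (CLI equivalent of the docker-compose setup).
docker network create mcp-net
docker run -d --name mcp-server --network mcp-net mcp-server:latest
docker run -it --rm --name n8n --network mcp-net \
  -p 5678:5678 docker.n8n.io/n8nio/n8n
# The MCP Client URL inside n8n then becomes http://mcp-server:8000/mcp
```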