I’m having trouble with an n8n workflow that keeps failing on the second HTTP request.
Setup: I built a Node.js service locally that uses the Steel browser (I switched over from Puppeteer). The service runs in Docker and works fine when used standalone.
The Problem: When I call this service through n8n HTTP Request nodes, the first request succeeds but the second one always fails with this error: “Name production mismatch issue”
What’s weird: The Node.js service stays up and keeps responding normally. I can test it manually with n8n’s “Execute Node” button or with Postman and make multiple calls without any problems. The failure only happens during an actual workflow execution, when n8n makes two consecutive HTTP requests to my service.
Already tried:
- Double checked all request parameters and data
- Added wait nodes between HTTP calls (tried 2-15 second delays)
- Nothing helps
This looks like an n8n workflow-execution issue rather than a problem with my service. Has anyone run into something similar? Any ideas what might cause this?
This sounds like n8n’s connection pooling getting confused. I’ve hit the same thing when n8n tries to reuse connections between HTTP Request nodes within a single workflow run: n8n keeps some internal connection state that breaks on the next call.

Try enabling ‘Ignore SSL Issues’ in your HTTP node settings, even if you’re not using HTTPS; in my experience this made n8n open fresh connections each time. Also check whether your Steel browser service actually cleans up resources between requests; Docker containers sometimes hold onto session data that confuses n8n.

What ultimately worked for me was splitting this into two separate workflows and chaining them with a webhook. That forces n8n to treat each HTTP call independently instead of trying to be clever about reusing connections.
had this exact issue last month! n8n’s workflow context gets corrupted between calls. set different timeout values for each HTTP node, even when they’re hitting the same service. also check whether your Steel browser is releasing memory properly between requests - docker sometimes keeps zombie processes running. quick fix that worked for me: add a simple GET health-check call between your main requests. resets n8n’s internal state.
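On the releasing-memory point: a pattern that helps regardless of browser backend (Steel, Puppeteer, …) is to wrap each request in an acquire/use/always-release helper, so a request that fails midway can’t leak a browser session. A generic sketch; `create`/`destroy`/`work` are illustrative names, not Steel’s actual API:

```javascript
// Run `work` with a freshly created session and guarantee cleanup,
// even if `work` throws. `create` and `destroy` would wrap your
// backend's real lifecycle calls (e.g. launching/closing a session).
async function withSession(create, destroy, work) {
  const session = await create();
  try {
    return await work(session);
  } finally {
    await destroy(session); // always runs: success or failure
  }
}
```

With Puppeteer the lifecycle calls would be `puppeteer.launch()` and `browser.close()`; Steel’s SDK has its own session calls, so adapt accordingly. If your service leaks a session on request one, request two inheriting that stale state could explain the consistent second-call failure.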
I ran into a similar situation with n8n when calling containerized services in quick succession. The ‘Name production mismatch’ error doesn’t come from your service; it indicates that n8n’s internal state is getting mixed up between the two HTTP requests. Try adding unique headers to each HTTP call, such as a timestamp or the workflow execution ID. Enabling the ‘Always Output Data’ option in the HTTP node settings can also help keep response formats consistent, since n8n can fail when it detects discrepancies. Given that your service behaves correctly in manual tests, this is most likely a workflow issue rather than a fault in your code.
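For the unique-header idea, no service-side change is needed: in the HTTP Request node you can set a header whose value is an expression. A sketch, assuming a recent n8n version where `$execution.id` is available in expressions (the header name is just an example):

```
Header name:  X-Request-Id
Header value: {{ $execution.id }}-{{ Date.now() }}
```

Each call then carries a distinct ID, which also makes it easy to correlate the failing second request in your service’s logs.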