Hey everyone!
I’m fairly new to working with n8n workflows but I’m really enjoying learning about it. I’ve managed to get some basic workflows running and now I want to create something more advanced.
What I’m trying to do:
I want to build a chatbot that can pull answers from my WordPress FAQ section before using ChatGPT as a backup option.
My current workflow setup:
- WordPress node is connected and authentication works fine
- Resource type: Page
- Operation: Get
- FAQ page ID is configured correctly
The problem I’m facing:
When someone asks a question, I can see the WordPress node is processing (it shows the loading spinner), but the chatbot jumps straight to ChatGPT responses instead of waiting for the FAQ data to load first.
What I need help with:
How can I make sure the workflow checks my FAQ content first and only uses ChatGPT if no relevant answer is found? Is there a way to set up the execution order properly?
Any suggestions for improving this workflow would be awesome. Thanks for helping out a newbie!
Yeah, this execution order problem happens all the time with these workflows. Your workflow isn’t waiting for WordPress to finish before hitting the next step.
What’s happening - both your WordPress lookup and ChatGPT call are running at the same time instead of one after the other. You need conditional logic between them.
Drop an IF node after your WordPress node. Set it to check if the FAQ content has relevant info (not just if the call worked). True path goes to format/return the FAQ answer, false path hits ChatGPT.
The hard part’s matching user questions to FAQ content. You’ll need text processing to hunt through the FAQ data for matching keywords.
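As a very rough sketch of that IF condition - the node name ('When chat message received'), the `chatInput` field, and the REST-style `content.rendered` field are all assumptions based on the default Chat Trigger and WordPress REST response, so swap in whatever your nodes actually output:

```
{{ $json.content.rendered.toLowerCase().includes($('When chat message received').item.json.chatInput.toLowerCase()) }}
```

That only catches exact substring matches, so in practice you'd do the keyword matching in a Code node right before the IF and just branch on a boolean flag it sets.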
But honestly? I’ve built these chatbots before and all this conditional stuff gets messy fast in n8n. Debugging becomes a nightmare with multiple decision points.
I switched to Latenode for this type of project. Way better at sequential execution, cleaner conditional logic, and better control over API call flow. Plus the debugging tools actually show you where your data flow breaks.
Check it out at https://latenode.com
This execution flow issue is super common and honestly pretty annoying to debug in n8n. The async nature means your workflow branches race each other.
Your WordPress node starts its request, but n8n immediately fires the ChatGPT path instead of waiting. You need proper sequential execution with conditional branching.
I’ve hit this exact scenario multiple times. The IF/Switch node solutions work, but they get complex fast. You end up with this messy tree checking FAQ content quality, parsing HTML responses, handling timeouts, and matching user intent.
Debugging WordPress API responses in n8n is painful too. When something breaks in that conditional logic chain, good luck figuring out where.
I moved all my chatbot workflows to Latenode after these same headaches. The execution model’s way more predictable for sequential decision making. You can actually see your data flow step by step, and the conditional logic doesn’t turn into spaghetti.
The WordPress integration handles HTML parsing better too - no extra nodes just to clean up FAQ content before matching user questions.
Check it out at https://latenode.com
Had this exact issue last month building a similar workflow. The problem is n8n doesn’t wait for data processing when you’ve got multiple execution paths running.

Add a Switch node right after your WordPress retrieval instead of running parallel branches. Set it up to check if the FAQ content actually has useful info for the user’s query - not just if the API call worked.

One thing I learned the hard way: WordPress usually returns the full page content as rendered HTML. You’ll need a text-extraction step (n8n’s HTML node works for this) before your Switch to grab just the FAQ text, otherwise your matching logic gets confused by all the markup.

Also check your WordPress node timeout settings. If it’s too low, the node might fail silently and pass empty data downstream. That’d explain why ChatGPT always triggers.
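If you’d rather do the cleanup in a Code node than a separate extraction node, here’s a minimal sketch (the `content.rendered` field is an assumption based on the standard WordPress REST response - check what your node actually returns):

```javascript
// n8n Code node, "Run Once for Each Item" mode: strip markup from the
// WordPress page body and flag empty responses before they reach the Switch.
// Assumption: the WordPress node nests the page body under content.rendered.
const item = $input.item;
const html = item.json.content?.rendered ?? '';

// Drop tags and common entities, then collapse whitespace to plain text.
const text = html
  .replace(/<[^>]+>/g, ' ')
  .replace(/&nbsp;|&amp;|&quot;|&#\d+;/g, ' ')
  .replace(/\s+/g, ' ')
  .trim();

item.json.faqText = text;
// If this is true, the page came back empty (or the call failed silently),
// which would explain why ChatGPT always triggers.
item.json.faqEmpty = text.length === 0;

return item;
```

The Switch can then route on `faqEmpty` and on keyword matches against `faqText` instead of raw HTML.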
Check your node connections first - sounds like both paths might be wired wrong. Make sure there’s no direct line from the trigger to ChatGPT that skips WordPress entirely. You could also try adding a small delay (a Wait node) after WordPress so it has time to process the data before moving to the next step.
It seems your workflow is designed to run both branches in parallel, which can lead to exactly the issue you’re facing. You need to ensure that the ChatGPT call only happens after the WordPress lookup has finished checking for FAQ answers, and only when it returns no relevant information.
Based on my experience with similar setups, I would suggest inserting a Code node between your WordPress and ChatGPT nodes. This node should be programmed to determine if there’s a relevant answer in the FAQ content through simple string matching or keyword detection. By linking it accordingly, you can ensure that ChatGPT is only triggered when the Code node concludes that there are no matches.
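A minimal sketch of that Code node - the matching is naive keyword overlap, and the `chatInput` field plus the 'When chat message received' trigger name are assumptions you’d adjust to your own workflow:

```javascript
// n8n Code node, "Run Once for Each Item" mode: decide whether the FAQ page
// already answers the question, so ChatGPT only runs when it doesn't.
const faqText = (($json.content && $json.content.rendered) || '')
  .replace(/<[^>]+>/g, ' ')
  .toLowerCase();

// Assumed: the chat trigger node is named 'When chat message received'
// and exposes the user's message as chatInput.
const question = $('When chat message received').item.json.chatInput || '';

// Naive keyword detection: keep words longer than 3 characters and count
// how many of them appear in the FAQ text.
const keywords = question.toLowerCase().split(/\W+/).filter(w => w.length > 3);
const hits = keywords.filter(w => faqText.includes(w));

return {
  json: {
    question,
    matchedKeywords: hits,
    // Threshold is arbitrary - here at least half the keywords must match
    // for the FAQ to count as a relevant answer.
    faqMatch: keywords.length > 0 && hits.length >= Math.ceil(keywords.length / 2),
  },
};
```

Downstream, an IF node checking `{{ $json.faqMatch }}` sends the true branch to your FAQ answer formatting and the false branch to the ChatGPT node.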
Additionally, be cautious about the format in which your WordPress node returns FAQ content (typically rendered HTML); it needs to be compatible with your matching logic, so it’s worth testing that step separately to avoid further complications. Structuring your flow as User Input → WordPress FAQ → Content Analysis → Return FAQ answer or Send to ChatGPT keeps execution strictly sequential and should give you a much more reliable chatbot.