Combining multiple RAG workflows in Flowise platform

I’ve built two separate chatflows in Flowise and need help combining them into one unified system.

My Current Setup:

  • First workflow: Vector-based RAG using QARetrievalChain that returns a runnable object
  • Second workflow: Graph RAG with Neo4j database using GraphCypherChain that also returns a runnable object

What I Want: Merge both workflows so I can use outputs from both systems to create a single combined response.

Attempts So Far:

  1. Tried connecting through an LLM Chain node with custom instructions in a Prompt Template, but couldn’t link it properly to the Conversational Retrieval Chain due to compatibility issues
  2. Attempted using custom tool integration but didn’t get the results I was looking for

Question: How can I successfully combine these two different RAG approaches in Flowise? Any suggestions on the best way to merge vector search results with graph database results?

I’ve been dealing with the same RAG combo headaches at work. Flowise gets messy when you’re mixing different retrieval methods.

Your problem is that Flowise wasn’t built for complex multi-RAG setups. Those compatibility issues with Conversational retrieval chains happen all the time when you’re jamming different runnable objects together.

I solved this by moving the orchestration completely outside Flowise. Built the logic with Latenode to handle workflow coordination.

Here’s my setup:

  1. Latenode hits both vector RAG and graph RAG endpoints at once
  2. Grabs responses from both systems
  3. Sends combined data to final LLM for synthesis
  4. Returns unified response

You get way more control over merging results. You can add custom logic for weighting responses, resolving conflicts, or deciding which RAG system to prioritize based on query type.

Best part? Keep your existing Flowise workflows as-is and just layer orchestration on top. Way cleaner than hacking everything together inside Flowise.
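The four steps above can be sketched as a single JavaScript orchestration step (e.g. a code node in Latenode). This is a minimal sketch, assuming both chatflows are exposed via Flowise’s prediction API; the base URL, the chatflow IDs, and the `{ text: ... }` response shape are placeholders you’d swap for your own:

```javascript
// Placeholders — replace with your Flowise host and real chatflow IDs
const FLOWISE = "http://localhost:3000/api/v1/prediction";
const VECTOR_FLOW_ID = "vector-chatflow-id"; // hypothetical
const GRAPH_FLOW_ID = "graph-chatflow-id";   // hypothetical
const SYNTH_FLOW_ID = "synthesis-chatflow-id"; // hypothetical

// Call one Flowise chatflow and return its answer text
async function ask(flowId, question) {
  const res = await fetch(`${FLOWISE}/${flowId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Flowise call failed: ${res.status}`);
  return (await res.json()).text;
}

// Pure helper: build the synthesis prompt from both RAG outputs
function buildSynthesisPrompt(question, vectorAnswer, graphAnswer) {
  return [
    `Vector search results:\n${vectorAnswer}`,
    `Graph query results:\n${graphAnswer}`,
    `Question: ${question}`,
    `Combine both sources into one answer, noting any conflicts.`,
  ].join("\n\n");
}

async function hybridAnswer(question) {
  // Steps 1–2: hit both RAG endpoints at once, grab both responses
  const [vectorAnswer, graphAnswer] = await Promise.all([
    ask(VECTOR_FLOW_ID, question),
    ask(GRAPH_FLOW_ID, question),
  ]);
  // Steps 3–4: send combined data to a final LLM flow for synthesis
  return ask(SYNTH_FLOW_ID, buildSynthesisPrompt(question, vectorAnswer, graphAnswer));
}
```

Because the prompt builder is a pure function, it’s also the natural place to add the weighting or conflict-handling logic mentioned above.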

Pretty quick setup at https://latenode.com

Try the Multi-Retrieval QA Chain in Flowise. I hit the same wall building a hybrid system last year. The trick is treating both RAG outputs as separate retrievers instead of trying to merge them at the runnable level.

Make a custom function node that works as a dispatcher - it takes the user query and scores relevance for both vector and graph retrievals. Then use a Document Combiner node to merge everything before it hits your final LLM.

The real magic happens in your prompt engineering. Structure your template to separate vector content from graph relationships clearly. Try something like ‘Based on document excerpts: {vector_context} and knowledge relationships: {graph_context}, give me a complete answer.’

You’ll get hybrid RAG functionality while staying native to Flowise. Performance barely takes a hit compared to external solutions.
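A minimal sketch of what the dispatcher function node could contain, assuming simple keyword scoring. The hint lists and the `dispatch` return values (`"vector"`, `"graph"`, `"both"`) are my own conventions, not anything Flowise prescribes - tune both to your domain:

```javascript
// Hypothetical keyword hints for routing queries between retrievers.
// Relationship-style queries suit graph RAG; definitional ones suit vector RAG.
const GRAPH_HINTS = ["related", "connection", "relationship", "linked", "who knows"];
const VECTOR_HINTS = ["what is", "describe", "summarize", "definition", "explain"];

// Count how many hints from each list appear in the query
function scoreRetrievers(query) {
  const q = query.toLowerCase();
  const count = (hints) => hints.filter((h) => q.includes(h)).length;
  return { vector: count(VECTOR_HINTS), graph: count(GRAPH_HINTS) };
}

// Decide which retriever(s) to run for this query
function dispatch(query) {
  const s = scoreRetrievers(query);
  if (s.graph > s.vector) return "graph";
  if (s.vector > s.graph) return "vector";
  return "both"; // tie (including 0-0): query both and let the combiner merge
}
```

Keyword matching is crude but cheap; you could later swap `scoreRetrievers` for an embedding-similarity or LLM-based router without changing the dispatch contract.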

I ran into the same issue and solved it with parallel processing. Don’t try forcing Flowise to handle both RAG systems directly - it’s a headache. Instead, use HTTP Request nodes to call external endpoints for your retrieval logic.

Set up your vector RAG and graph RAG as separate API services, then build a coordinator flow in Flowise that hits both endpoints at once with the same query. Here’s the trick: use a Function node with JavaScript to parse and merge the JSON responses before sending everything to your final LLM node. This keeps things clean and manageable.

The best part? You can tweak each RAG system without breaking the whole workflow, and you control exactly how to combine results based on confidence scores or relevance.
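A sketch of the merge logic that Function node could hold. The response shape (`{ text, confidence }`) and the 0.5 threshold are assumptions - adapt them to whatever your two RAG API services actually return:

```javascript
// Merge two RAG API responses by confidence before passing to the final LLM.
// Assumed response shape: { text: string, confidence: number } — adjust as needed.
function mergeResponses(vectorRes, graphRes, threshold = 0.5) {
  const parts = [];
  if ((vectorRes.confidence ?? 0) >= threshold) {
    parts.push(`Document context:\n${vectorRes.text}`);
  }
  if ((graphRes.confidence ?? 0) >= threshold) {
    parts.push(`Graph context:\n${graphRes.text}`);
  }
  // If neither source clears the bar, fall back to the higher-confidence one
  if (parts.length === 0) {
    const best =
      (vectorRes.confidence ?? 0) >= (graphRes.confidence ?? 0) ? vectorRes : graphRes;
    parts.push(best.text);
  }
  return parts.join("\n\n");
}
```

Labeling each section (“Document context” vs “Graph context”) also gives the final LLM prompt a clear structure to reason over.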