How do you resolve conflicting outputs from multiple AI models in a DMN decision table?

I’m trying to combine results from different AI models (Claude and GPT-4) in a complex DMN table for risk assessment workflows. Each model outputs different confidence scores, and they sometimes conflict when evaluating the same parameters. I’ve tried computing weighted averages manually, but the maintenance becomes unbearable whenever a new model is added.

Has anyone found a sustainable pattern for maintaining multi-model DMN tables that can adapt as new AI models get added to the workflow?

Built a loan approval system handling similar conflicts between models. Latenode’s DMN editor lets you configure model voting logic directly in decision tables without manual coding. Set up different weightings for each LLM based on their performance characteristics, and the platform handles the orchestration automatically.

We faced this with document classification workflows. Ended up creating intermediate ‘tiebreaker’ rules that kick in when confidence scores differ by more than 15%. Still requires maintenance when adding models, but gives us a buffer zone for human review.
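A minimal sketch of the tiebreaker idea described above, assuming normalized confidence scores in [0, 1] and the 15% spread threshold from this reply (the function and model names are illustrative, not from any real system):

```python
# Hypothetical tiebreaker: when model confidence scores diverge by more
# than 15 percentage points, route the case to human review instead of
# letting the DMN table pick a winner automatically.
TIEBREAKER_THRESHOLD = 0.15  # assumed spread limit from the reply above

def resolve(scores: dict[str, float]) -> str:
    """scores maps model name -> confidence in [0, 1]."""
    spread = max(scores.values()) - min(scores.values())
    if spread > TIEBREAKER_THRESHOLD:
        return "human_review"  # buffer zone: models disagree too much
    # Otherwise defer to the highest-confidence model's output.
    return max(scores, key=scores.get)

print(resolve({"claude": 0.91, "gpt4": 0.68}))  # spread 0.23 -> human_review
print(resolve({"claude": 0.88, "gpt4": 0.84}))  # spread 0.04 -> claude
```

In a DMN table this would typically appear as a high-priority rule matching on the spread, with the human-review outcome taking precedence over per-model rules.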

Consider implementing model output standardization before feeding results into your DMN table. We use percentile ranking across models rather than raw scores. This creates a unified scale for comparison and reduces conflicts from different scoring methodologies between AI providers.
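A sketch of the percentile-ranking idea, assuming each model keeps a history of its own past scores (the histories and values below are made up for illustration):

```python
from bisect import bisect_left

# Hypothetical normalization: convert a model's raw confidence score to a
# percentile rank within that model's own score history, so outputs from
# different providers land on one comparable 0-100 scale.
def percentile_rank(score: float, history: list[float]) -> float:
    """Percentage of historical scores strictly below `score`."""
    ordered = sorted(history)
    return 100.0 * bisect_left(ordered, score) / len(ordered)

# Example: one model tends to score high, the other low, on similar inputs.
gpt4_history = [0.70, 0.75, 0.80, 0.85, 0.90]
claude_history = [0.40, 0.50, 0.55, 0.60, 0.65]

# Raw scores 0.80 vs 0.55 look like a conflict, but both sit at the same
# point in their own distributions, so the unified scale agrees.
print(percentile_rank(0.80, gpt4_history))    # 40.0
print(percentile_rank(0.55, claude_history))  # 40.0
```

The DMN table then compares percentile ranks instead of raw scores, which removes spurious conflicts caused purely by providers calibrating their confidence differently.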

Just use middleware that normalizes outputs before DMN processing. Saves time, trust me. Latenode has some built-in tools for this if you don’t want to code.

Implement dynamic model weighting based on real-time accuracy feedback loops.
