Orchestrating ROI across departments: when finance and operations have different definitions of "savings", how does the actual workflow handle it?

I’m wrestling with a practical problem that I suspect others have hit too.

We’re trying to build an ROI model for workflow automation, but finance, operations, and sales define “cost” and “savings” completely differently. Finance cares about cash outflow and payback period. Operations cares about hours freed up and throughput increases. Sales cares about deal velocity and has little patience for the hours-saved argument.

If I’m building an automation workflow that needs to calculate ROI and report it to leadership, I can’t just average their definitions. Each department needs to see their metrics validated the way they think about them.

But here’s the challenge: if the workflow is orchestrating processes across all three departments, how do you actually attribute ROI without it turning into a blame session about whose metrics are “real”?

The question I’m actually asking: can you build a workflow that runs the same processes but attributes ROI differently depending on who’s looking at the results? Or do you end up with three separate workflows? And if you do end up with multiple workflows, how do you keep them from diverging in their underlying assumptions?

I’m curious if anyone’s actually solved this in a way that scaled.

We hit this exact problem. Finance wanted to see cost amortization over three years. Operations wanted to see daily throughput impact. Sales just wanted to know if deals were closing faster.

What actually worked was building a single source-of-truth workflow that logged everything granularly. Then we built different aggregation layers on top that reported different metrics to different audiences.

The same underlying data—execution time, cost incurred, throughput processed, errors prevented—gets aggregated three different ways. Finance sees it as payback period and NPV. Operations sees it as FTE hours freed up. Sales sees it as deals per day.

One workflow logging the raw events. Three different reporting views. The beauty is when someone challenges the numbers, you can trace back to the raw event log and show exactly what happened. There’s no ambiguity about the underlying data, only about how it’s interpreted.
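As an illustration, here’s a minimal sketch of that pattern in Python. The event fields, hourly rate, and license cost are invented for the example, not numbers from our system:

```python
# Hypothetical sketch: one raw event log, three department views.
# All field names and dollar figures here are illustrative assumptions.
from datetime import date

# The single source of truth: granular per-execution events.
events = [
    {"day": date(2024, 1, 2), "seconds_saved": 1800, "cost": 0.42, "deals_advanced": 1},
    {"day": date(2024, 1, 2), "seconds_saved": 2400, "cost": 0.55, "deals_advanced": 0},
    {"day": date(2024, 1, 3), "seconds_saved": 3600, "cost": 0.61, "deals_advanced": 2},
]

HOURLY_RATE = 40.0       # assumed loaded labor cost per hour
MONTHLY_LICENSE = 500.0  # assumed monthly automation cost

def finance_view(events):
    """Payback period in months: license cost vs. labor savings."""
    hours = sum(e["seconds_saved"] for e in events) / 3600
    monthly_savings = hours * HOURLY_RATE  # treats the sample as one month
    return round(MONTHLY_LICENSE / monthly_savings, 2)

def operations_view(events):
    """FTE hours freed up."""
    return round(sum(e["seconds_saved"] for e in events) / 3600, 2)

def sales_view(events):
    """Deals advanced per active day."""
    days = {e["day"] for e in events}
    return round(sum(e["deals_advanced"] for e in events) / len(days), 2)
```

The point is that all three views read the same `events` list; if someone challenges a number, you argue about the view function, not the data.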

Don’t try to make a workflow solve the political problem. The workflow’s job is to be accurate and transparent. The interpretation is a business conversation, not a technical problem.

What actually helped was having the workflow be so transparent that the three departments had to align on methodology rather than arguing about whose metrics are valid. We documented every calculation. When finance disagreed with operations about how to value time savings, the transparency forced them to have that conversation.

The workflow forced honesty because it wasn’t hiding anything. Once they aligned on methodology, the results were the same for everyone.

The operational approach is to separate measurement from interpretation. Build one workflow that measures everything consistently: time spent, cost incurred, throughput, errors. That’s the data layer.

Then build separate transformation layers for each department’s perspective. Finance transforms the raw measurements into NPV and payback. Operations transforms them into FTE hours and efficiency gains. Sales transforms them into deal velocity.

Same underlying facts. Different views. This works because you’re not trying to force agreement on what savings mean. You’re just agreeing on what the underlying facts are. The interpretation is each department’s job, not the workflow’s job.
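For the finance transform specifically, NPV and payback reduce to a few lines. The discount rate, horizon, and savings figures below are illustrative assumptions, not values from the thread:

```python
# Hypothetical finance-layer transforms; rate and horizon are assumed inputs.

def npv(annual_savings, annual_cost, years=3, rate=0.08):
    """Net present value of net savings, discounted over the horizon."""
    return sum((annual_savings - annual_cost) / (1 + rate) ** t
               for t in range(1, years + 1))

def payback_months(upfront_cost, monthly_net_savings):
    """Months until cumulative net savings cover the upfront cost."""
    return upfront_cost / monthly_net_savings
```

Operations and sales would consume the same raw measurements through their own transforms, so any dispute is about these formulas, never about the measurements underneath them.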

The most robust approach is attribute-based measurement with flexible aggregation. The workflow logs attributes of each transaction or process execution: start time, end time, cost components, quality metrics, throughput contribution.

Then you build department-specific aggregation logic that consumes those attributes and calculates ROI according to that department’s framework. Finance aggregates across cost and time horizons. Operations aggregates across capacity and utilization. Sales aggregates across outcome vectors.

The workflow doesn’t compromise by trying to be one thing to everyone. It’s accurate and detailed at the transaction level. Interpretation happens at the aggregation layer. That’s where departments can disagree, and it’s transparent why they do.
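A rough sketch of what attribute-based logging with flexible aggregation could look like; the attribute names and the `aggregate` helper are illustrative assumptions:

```python
# Sketch: log attributes per execution, let each department roll them up.
# Attribute names and sample values are invented for the example.
from collections import defaultdict

def log_execution(log, **attrs):
    """Append one execution's attributes to the shared log."""
    log.append(attrs)

def aggregate(log, group_by, metric, reducer=sum):
    """Generic roll-up: each department supplies its own grouping and metric."""
    groups = defaultdict(list)
    for event in log:
        groups[event[group_by]].append(event[metric])
    return {key: reducer(values) for key, values in groups.items()}

log = []
log_execution(log, dept="ops", process="invoice", seconds=120, cost=0.10, deals=0)
log_execution(log, dept="ops", process="invoice", seconds=90, cost=0.08, deals=0)
log_execution(log, dept="sales", process="quote", seconds=60, cost=0.05, deals=1)

# Operations: total seconds by process.
ops_rollup = aggregate(log, "process", "seconds")  # {'invoice': 210, 'quote': 60}
# Finance: total cost by department.
fin_rollup = aggregate(log, "dept", "cost")
```

Because the grouping key, metric, and reducer are parameters, each department’s framework is just a different call against the same log, which keeps the disagreement visible at the aggregation layer.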

One workflow logging raw data. Multiple reporting views per department. Same facts, different interpretations. Transparency prevents arguments.

Log raw data consistently. Let each department interpret differently. Transparency forces honest methodology conversations.

This is a case where the workflow architecture matters more than the specific tool, but Latenode’s flexibility helps here. We built a central workflow that orchestrates processes across departments and logs everything comprehensively. Then we built downstream workflows for each department that aggregate the same raw data differently.

Finance’s workflow calculates NPV and payback from the raw logs. Operations’ workflow counts FTE hours and throughput. Sales’ workflow tracks deal velocity. Same underlying events, three different roll-ups.

The key is that Latenode let us build this modular approach without duplicating logic. The central workflow does the accurate measurement once. Each department’s downstream workflow just reshapes the data according to their KPIs.

When stakeholders questioned our numbers, pointing to that shared source of truth made the conversation constructive. We could trace any disagreement to methodology, not measurement. That actually resolved a lot of the political friction, because the process is data-driven and transparent.