I’ve been trying to build a proper total cost of ownership (TCO) model to compare our platform options, and it’s getting messy. The variables are multiplying.
With Make or Zapier, you’re looking at per-operation pricing against your execution volume. But then you’ve got to layer in whether you’re handling AI-heavy workflows, and if you are, that’s additional model subscriptions on top.
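For concreteness, here’s roughly the shape of the calculation I keep rebuilding, in a quick Python sketch. Every price and volume here is a made-up placeholder, not a real vendor quote:

```python
# Back-of-the-envelope TCO shape. All prices and volumes are made-up
# placeholders, not real vendor quotes.

def monthly_tco(executions, ops_per_run, price_per_op,
                base_platform_fee, ai_subscriptions):
    """Monthly cost = platform base fee + operation charges + AI subs."""
    operation_cost = executions * ops_per_run * price_per_op
    ai_cost = sum(ai_subscriptions)  # one line item per AI provider
    return base_platform_fee + operation_cost + ai_cost

# Example: 10k runs/month, 8 ops per run, $0.001/op, $99 base fee,
# three separate AI subscriptions.
print(monthly_tco(10_000, 8, 0.001, 99, [20, 30, 25]))  # 254.0
```

Even this toy version has five inputs I have to guess at, which is the problem.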
When I try to project 12 months out with different volumes, different workflow complexity levels, and different AI model usage patterns, the calculation becomes so dependent on assumptions that I’m not sure the number means anything.
Part of the question is structural: if you consolidate AI model pricing into a single subscription, does that actually simplify the TCO calculation, or just move the complexity around?
I’m curious whether people have built TCO models that feel reliable, or whether everyone’s just making educated guesses. What variables actually matter when you’re trying to do an apples-to-apples comparison?
The consolidation definitely simplifies one part of the math. Instead of forecasting five separate AI subscription costs, you’re forecasting one. But you still need to model volume against platform pricing, and that’s where most teams get stuck.
What helped us was breaking it into three scenarios: baseline workload, peak expected volume, and worst-case expansion. Then we worked backward from each scenario to see which platform stayed most economical across all three. That forced us to think about where price structures actually broke on each option.
Make’s per-operation pricing scales fast. Zapier’s task-based pricing has quirks around how tasks are counted. When you add unified AI to the picture, the platform that gives you the most model flexibility tends to win, because you’re not forced onto cheaper models purely for cost reasons.
We built the model in a spreadsheet with input sliders so we could adjust volume assumptions and see the impact. Sounds tedious, but it made the uncertainty visible instead of hiding it in point estimates.
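If you’d rather do it in code than a spreadsheet, here’s a minimal sketch of the three-scenario version. Every rate below is an illustrative placeholder, not an actual Make or Zapier price:

```python
# Three-scenario TCO comparison. Every rate below is an illustrative
# placeholder, not an actual Make or Zapier price.

SCENARIOS = {"baseline": 5_000, "peak": 20_000, "worst_case": 60_000}

PLATFORMS = {
    # base monthly fee, price per operation, ops consumed per workflow run
    "platform_a": {"base": 99, "per_op": 0.0010, "ops_per_run": 8},
    "platform_b": {"base": 49, "per_op": 0.0018, "ops_per_run": 5},
}

AI_SUBSCRIPTION = 60  # flat unified-AI line item, same for both

def monthly_cost(plan, runs):
    ops = runs * plan["ops_per_run"]
    return plan["base"] + ops * plan["per_op"] + AI_SUBSCRIPTION

for scenario, runs in SCENARIOS.items():
    costs = {name: round(monthly_cost(plan, runs), 2)
             for name, plan in PLATFORMS.items()}
    print(f"{scenario:>10} ({runs:>6} runs/mo): {costs}")
```

With these made-up rates, platform_b wins the baseline and peak scenarios but loses the worst case, which is exactly the kind of break point the exercise is meant to surface.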
I’ve found that the AI pricing consolidation actually matters less than people think for TCO. What matters more is the base platform cost at your expected volume. We run maybe 10,000 workflows monthly. At that volume, the platform cost differences are bigger than the AI subscription differences. The consolidation is nice for accounting simplicity, but it’s not the deciding factor in the math.
The accuracy problem is real. You’re making assumptions about workflow complexity that will change. You’re guessing at what percentage of your workflows will actually use AI features. You’re estimating error rates and retries. The uncertainty in each assumption compounds. The best approach is sensitivity analysis: build your model, then stress-test it. What if error rates double? What if half your forecasted workflows never actually get built? Where does that leave you? That tells you which platform option is most robust to your forecasts being wrong.
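To make the stress test concrete, here’s a small sketch. The knobs (error rate, retry behavior, fraction of forecasted workflows that actually get built) are guesses about your own usage, not vendor figures:

```python
# Stress-testing the run-volume assumption. The knobs here (error
# rate, retries, fraction of forecasted workflows that actually get
# built) are guesses about our own usage, not vendor figures.

def effective_runs(forecast_runs, error_rate, built_fraction):
    """Billable runs after retries and after unbuilt workflows drop out."""
    retries = forecast_runs * error_rate  # assume each error costs one retry
    return (forecast_runs + retries) * built_fraction

print(effective_runs(20_000, error_rate=0.05, built_fraction=1.0))  # 21000.0
print(effective_runs(20_000, error_rate=0.10, built_fraction=1.0))  # 22000.0
print(effective_runs(20_000, error_rate=0.05, built_fraction=0.5))  # 10500.0
```

Feed each of those run counts back through every platform’s pricing and you can see which option degrades gracefully when your forecast is wrong.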
Build three scenarios: low, expected, high volume. Run the numbers for each platform. Pick the one that doesn’t explode in the high scenario. That’s more reliable than trying to forecast with perfect accuracy.
The consolidated AI pricing actually does help with TCO accuracy because you have one fewer variable to forecast. You know your model costs upfront—it’s baked into the subscription. What you’re really modeling is execution volume against the platform’s per-execution pricing.
We found that when you separate out the AI subscription variable, the comparison becomes clearer. Platform A costs X per thousand executions. Platform B costs Y per thousand executions. Throw in unified AI access and those numbers stay constant. There are fewer moving pieces to forecast.
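Here’s a quick sketch of why the flat line item is easier to forecast. The AI-usage fraction, tokens per run, and all prices are made-up assumptions, not anything published by a vendor:

```python
# Why a flat AI subscription removes a forecasting variable. The
# usage fraction, tokens per run, and prices are made-up assumptions.

def ai_cost_metered(runs, ai_fraction, tokens_per_run, price_per_1k_tokens):
    """Per-use AI spend: depends on three separate guesses."""
    ai_runs = runs * ai_fraction
    return ai_runs * tokens_per_run / 1_000 * price_per_1k_tokens

AI_FLAT = 60  # known upfront, regardless of the guesses below

for ai_fraction in (0.2, 0.5, 0.8):
    metered = ai_cost_metered(20_000, ai_fraction,
                              tokens_per_run=2_000, price_per_1k_tokens=0.01)
    print(f"AI share {ai_fraction:.0%}: metered=${metered:.0f} vs flat=${AI_FLAT}")
```

With these placeholder numbers, the metered estimate swings 4x on a single guess about AI usage share, while the flat line item doesn’t move.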
For accuracy, we built the model with three volume scenarios and ran the numbers for each. The platform that stayed most economical across all three was the safer bet. Unified AI pricing made that calculation more stable because we weren’t guessing at whether AI costs would spike.
The spreadsheet-with-volume-sliders approach actually works. You get to see where each platform breaks, and that informs your decision better than a single point estimate.