I’m deep in evaluation mode right now, comparing Make and Zapier for our enterprise automation needs, and I keep hitting the same wall: the pricing models don’t line up in ways I can meaningfully compare.
Here’s what’s tripping me up. Both Make and Zapier have their own licensing tiers, but then you layer on top of that the cost of actually running the AI models—whether that’s OpenAI, Claude, or whatever else you’re using. Some of these tools bundle model access, some don’t. Some charge per operation, some charge per month. It’s like comparing apples to oranges while the oranges keep changing price.
I’ve been trying to build a TCO model that accounts for:
Platform subscription costs (Make vs Zapier)
Per-operation or per-execution fees
Separate AI model subscriptions (we’re currently paying for OpenAI, Anthropic, and one other vendor separately)
Implementation time and engineering hours to actually build and maintain workflows
What happens when you scale from 10 workflows to 100
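The cost buckets above can be sketched as a single formula instead of a sprawling spreadsheet. A minimal sketch follows; every number, rate, and fee in it is an illustrative placeholder, not real Make or Zapier pricing:

```python
# Hedged sketch: one-year TCO across the cost buckets listed above.
# All figures are made-up placeholders -- substitute your own quotes.

def annual_tco(platform_sub_monthly, ops_per_month, fee_per_op,
               ai_subs_monthly, build_hours, maintain_hours_monthly,
               hourly_rate):
    """Total cost of ownership for one year, in dollars."""
    platform = platform_sub_monthly * 12
    operations = ops_per_month * fee_per_op * 12
    ai_models = ai_subs_monthly * 12
    engineering = (build_hours + maintain_hours_monthly * 12) * hourly_rate
    return platform + operations + ai_models + engineering

# Illustrative 10-workflow vs 100-workflow scenario: operations scale ~10x,
# but build/maintenance hours scale sub-linearly as patterns get reused.
small = annual_tco(99, 20_000, 0.002, 300, 120, 10, 150)
large = annual_tco(499, 200_000, 0.002, 300, 600, 50, 150)
print(f"10 workflows:  ${small:,.0f}/yr")
print(f"100 workflows: ${large:,.0f}/yr")
```

Even with placeholder numbers, structuring it this way makes the scaling question concrete: you change one argument and see which bucket dominates.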
The spreadsheet is becoming unwieldy. Every time I add a new assumption, the math shifts. And I keep wondering if I’m even measuring the right things.
Is anyone else doing this comparison and actually found a clean way to structure the financial model? Or am I overthinking the variables that matter?
I dealt with this exact problem about six months ago when we were deciding between Make and Zapier for a mid-market company. The spreadsheet approach gets messy fast.
What actually helped was to stop trying to predict every scenario and instead build the model around three concrete workflows we knew we'd actually build. We estimated the time each would take and the model calls each would make, then worked backward from there.
Turned out the platform costs were almost noise compared to engineering time and model API spend. That’s where the real money was bleeding. Once we stopped treating it as a theoretical exercise and made it specific to actual work we’d do, the comparison became way clearer.
The trick is not trying to cover every edge case. Just model what you know you’ll run.
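That working-backward step is just a few multiplications per workflow. A rough sketch, where the workflow names, run counts, and per-call prices are all hypothetical placeholders:

```python
# Sketch: work backward from a few concrete workflows to a monthly AI spend.
# Runs/month, calls/run, and per-call prices are illustrative placeholders.

workflows = [
    # (name, runs per month, model calls per run, avg cost per call in $)
    ("invoice triage",    2_000, 3, 0.004),
    ("lead enrichment",   5_000, 2, 0.010),
    ("support summaries",   800, 5, 0.006),
]

monthly_ai_spend = sum(runs * calls * price
                       for _, runs, calls, price in workflows)
print(f"Estimated AI spend: ${monthly_ai_spend:,.2f}/month")
```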
One thing I’d push back on: you might be separating the AI licensing from the platform decision when they shouldn’t be separate in your model at all.
Some platforms are starting to bundle model access differently now. If you’re looking at consolidating vendors anyway—which it sounds like you are—check whether the platform itself offers unified AI subscriptions. That could collapse half your variables right there.
We looked at three platforms to try to simplify this. One of them had a unified pricing model for 400+ AI models in one subscription. Suddenly our spreadsheet went from “what’s our total risk” to “okay, here are the actual differences between these environments.”
Might be worth evaluating whether that’s an option for your comparison.
The variable that kills most TCO models is underestimating customization work. Make and Zapier have different learning curves and different levels of “close enough.” What looks cheap on paper costs thousands in developer time when the workflow doesn’t quite work out of the box.
I’d suggest building your model with three tiers: base functionality, 80% of what you need, and fully customized. Calculate the engineering hours for each. That usually reveals which platform actually costs less once you factor in the labor.
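One way to lay out that three-tier labor comparison. The hours and rate below are made-up placeholders to show the structure, not real estimates for either platform:

```python
# Sketch of the three-tier labor model described above. Hours and rates are
# hypothetical; replace them with your own per-platform estimates.

def tier_costs(hours_by_tier, hourly_rate):
    """Map each tier name to (cost on platform A, cost on platform B)."""
    return {tier: (ha * hourly_rate, hb * hourly_rate)
            for tier, ha, hb in hours_by_tier}

# (tier, est. hours on platform A, est. hours on platform B) -- illustrative
tiers = [
    ("base functionality", 40, 30),
    ("80% of what we need", 120, 150),
    ("fully customized", 300, 260),
]

costs = tier_costs(tiers, 150)  # placeholder blended engineering rate
for tier, (a, b) in costs.items():
    print(f"{tier:22s} A=${a:,}  B=${b:,}")
```

The useful part is that the cheaper platform often flips between tiers, which is exactly the labor effect the paper numbers hide.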
Also, run a pilot on both platforms with one real workflow before you finalize your numbers. The financial difference between platforms usually shows up in implementation speed, not in per-operation costs.
Start with a pilot. Three workflows on each platform. Measure actual time spent and actual API costs. Your real numbers will be way better than any theoretical model. Theory never captures the implementation friction.
Build TCO model around actual workflows you’ll deploy. Separate platform costs from AI licensing costs. Measure engineering time on both. Pilot before finalizing.
Here’s where a lot of people get stuck: they’re modeling the AI licensing separately from the platform, which creates more variables than you need.
The real win comes when you look at platforms that consolidate the AI licensing into one unified subscription. We ended up going with a platform that gives us access to 400+ models through a single subscription—OpenAI, Claude, Deepseek, all of it—and that simplified our entire model.
Suddenly, instead of tracking five different vendor subscriptions plus platform costs, we had one AI line item. The spreadsheet went from complex to actually readable. The math comparing that platform against Zapier or Make became straightforward because we'd removed half the variables.
Not saying it’s the only way to approach this, but if you’re already consolidating anyway, looking at platforms with unified AI pricing first might save you weeks of modeling.