We’ve been running Make for about two years now, mostly handling basic integrations between Salesforce, Slack, and our CRM. The platform works fine for straightforward stuff, but lately we’ve been hitting a wall.
Here’s the problem: we’ve accumulated subscriptions to GPT-4, Claude, and a couple of smaller models across different departments. Each one has its own API key, its own billing cycle, its own maintenance overhead. Then we started looking at Zapier as an alternative, but the per-task pricing model made the situation even more complicated. We’d need to factor in the AI model costs on top of Zapier’s per-task charges.
I decided to actually map out what we’re spending. The spreadsheet got ugly fast. Between Make’s operation-based pricing, Zapier’s per-task model, and our scattered AI subscriptions, the total cost of ownership was basically impossible to calculate accurately. Every quarter, some new AI model gets added somewhere, a team realizes they need GPT-4 for a specific workflow, and the costs just scatter.
What surprised me most was realizing that consolidating everything into a single plan with 400+ AI models built in would actually simplify the math. Instead of tracking five different subscriptions plus platform costs, you’d have one number. Execution-based pricing means you pay for what you actually use: not per operation or per task, but for the time your workflows spend running.
We ran some numbers on a specific workflow we use frequently: generating 2000 emails with GPT and pushing them to Google Sheets. On Make, that scenario costs roughly 7.67 times more than it would on a platform with consolidated AI access. That’s not a small difference.
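For anyone who wants to run this kind of comparison themselves, here’s a minimal sketch of the math. Every rate in it (operations per email, cost per operation, cost per second, run duration) is an invented placeholder for illustration, not real Make, Zapier, or GPT pricing; substitute the numbers from your own invoices to get a ratio that means something for your setup.

```python
# Back-of-the-envelope comparison for the "2000 emails with GPT
# to Google Sheets" scenario. All rates below are assumptions.

EMAILS = 2000

# Per-operation model: each email needs several billed operations
# (trigger, AI prompt call, write to the sheet), plus a separately
# billed AI API call.
OPS_PER_EMAIL = 3          # assumed operations per email
COST_PER_OP = 0.001        # assumed $ per platform operation
AI_COST_PER_CALL = 0.002   # assumed $ per GPT call, billed separately

per_op_total = EMAILS * (OPS_PER_EMAIL * COST_PER_OP + AI_COST_PER_CALL)

# Execution-time model: one run, billed by how long it executes,
# with AI model access bundled into the plan.
RUN_SECONDS = 600          # assumed total execution time for the batch
COST_PER_SECOND = 0.0005   # assumed $ per second of execution

exec_total = RUN_SECONDS * COST_PER_SECOND

print(f"per-operation model:  ${per_op_total:.2f}")
print(f"execution-time model: ${exec_total:.2f}")
print(f"ratio: {per_op_total / exec_total:.1f}x")
```

The point isn’t the specific ratio (that depends entirely on the rates you plug in), it’s that the per-operation total multiplies with every step you add per email, while the execution-time total only grows with runtime.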
I’m curious—how are other teams actually calculating TCO when you’ve got multiple platforms and multiple AI subscriptions running in parallel? Are you consolidating everything into one place, or does the complexity of tracking different costs just become background noise?
We went through the same thing about six months ago. The real insight for us was that Make’s per-operation pricing becomes a killer once you start layering in AI tasks. Every prompt call, every API request, every data transformation is billed as a separate operation.
We actually pulled all our AI subscriptions and moved to a single plan. What changed wasn’t just the cost per workflow, but also the visibility. Instead of guessing how much we’re spending across departments, we now have one clear picture. The execution-time model means a complex workflow that processes 1000 records costs the same whether it runs with two AI models or five.
The tricky part was the migration itself. We had workflows built on Make that relied on specific model versions from separate subscriptions. Consolidating meant we had to rebuild a few of them, but after that point the cost tracking became actual math instead of educated guessing.
The fundamental issue here is that Make charges per operation and Zapier charges per task, but neither of them simplifies AI model access. You end up with two separate cost centers that don’t talk to each other. When I looked at our own setup, the biggest variable turned out to be run frequency. A workflow that executes 100 times a month costs ten times what it does at 10 runs, even though each run does identical work. Add AI model costs on top, and the per-operation bill multiplies again with every step inside the workflow. Consolidating to execution-time pricing keeps the cost proportional to how long your workflows actually run. It’s not just about reducing the number, it’s about making it predictable.
The challenge most teams face is that Make and Zapier were designed before AI became a primary workflow component. They layer AI access on top of their existing pricing models instead of building it in from the start. This creates the exact situation you’re describing—fragmented costs, fragmented visibility, and TCO calculations that depend on too many variables. When you consolidate to a platform where AI is native and pricing is based on execution time, several things change simultaneously. First, the math becomes transparent. Second, scaling up doesn’t multiply costs with every added step. Third, teams stop making architectural decisions based on trying to minimize operation counts. The real efficiency gain comes from the third point—you can design workflows for clarity and effectiveness rather than for squeezing down operation counts.
We tracked it too. Keep separate costs per platform plus AI subscriptions, then consolidate into one plan. A spreadsheet makes the TCO visible, and it’s way easier to present to exec leadership.
This is exactly the problem I see teams hitting. The operational overhead of managing multiple cost centers—Make operations, Zapier tasks, individual AI subscriptions—ends up becoming more expensive than the actual automation itself.
We consolidate everything into one execution-based pricing model. One subscription covers 400+ AI models, so no more scattered API keys or separate billing cycles. The workflows we build operate at baseline cost, whether they use one model or twenty.
For that email generation scenario you mentioned—2000 emails with GPT to Google Sheets—the cost difference is dramatic once everything’s in one place. You’re looking at actual execution time rather than operation counts, which fundamentally changes how you architect workflows. Teams stop making design decisions based on trying to minimize costs and start making them based on what actually works.
Start by calculating your current all-in costs using the exact workflow examples you run most frequently. Then model those same workflows under time-based pricing. The comparison usually makes the case for consolidation pretty clear.
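That comparison can be sketched as a few lines of code instead of a spreadsheet. Every figure below is a placeholder invented for illustration (the subscription amounts, the execution-time rate, the monthly runtime); swap in your own invoice numbers to model your actual situation.

```python
# Sketch of the "map out what we're spending" exercise.
# All dollar amounts and rates are illustrative assumptions.

# Current fragmented setup: platform bills plus scattered AI subscriptions.
current_costs = {
    "make_operations": 249.00,    # assumed monthly platform bill
    "zapier_tasks": 189.00,       # assumed monthly platform bill
    "gpt4_subscription": 120.00,  # assumed per-department AI spend
    "claude_subscription": 95.00,
    "misc_models": 60.00,
}
current_total = sum(current_costs.values())

# Modeled consolidated plan: one subscription, billed on execution time.
consolidated_plan = 99.00        # assumed base plan price
monthly_exec_seconds = 50_000    # assumed total runtime across workflows
cost_per_second = 0.0005         # assumed execution-time rate

consolidated_total = consolidated_plan + monthly_exec_seconds * cost_per_second

print(f"current all-in: ${current_total:.2f}/mo")
print(f"consolidated:   ${consolidated_total:.2f}/mo")
print(f"monthly delta:  ${current_total - consolidated_total:.2f}")
```

Whatever the numbers turn out to be, the exercise itself is the win: one total per model, side by side, instead of five line items that drift every quarter.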