Stack7 Labs

How to pick between Zapier, n8n, and Make now that they all run AI agents

Zapier, n8n, and Make all shipped agentic features this quarter. The feature comparison doesn't matter as much as it used to. What matters now is which pricing model blows up first when you put real agents in front of real volume.

Dustin Landry · 6 min read

When a client asks us which automation platform to start on, the Zapier versus n8n versus Make question used to be easy. Zapier won on breadth of connectors. n8n won on self-hosting and cost at scale. Make won on visual workflow building. All three shipped agentic features in the last four months, and most of that feature-based logic has collapsed. The interesting question today is what happens to your bill once agents start running.

What actually changed in Q1 2026

Zapier launched Zapier Agents and an AI Copilot that builds Zaps from a natural-language prompt. Agents run on a separate billing unit called activities, not tasks, and a single multi-turn agent interaction consumes several activities per run.

n8n shipped version 2.0 in January. The headline piece is the AI Agent Tool Node, which lets one agent call other agents and tools inside a single workflow. LangChain is native now, and there are persistent-memory primitives for agents that need to remember past runs.

Make shipped Maia, a natural-language workflow builder, and Make AI Agents for autonomous task execution. Make also changed its billing unit from operations to credits last August. Native AI steps now cost more than one credit each, which is a quiet repricing that matters once you plug LLMs into a scenario.

The three pricing models, in one sentence each

Zapier is task-based. One completed action is one task. Agents are priced separately in activities.

n8n is execution-based. One workflow run is one execution, regardless of whether the workflow has three steps or thirty.

Make is credit-based. One scenario step is at least one credit. Native AI steps cost multiple credits.

Those models look similar from ten feet away. Up close they diverge fast once AI enters the workflow.

A worked example

Picture a support-triage agent running for a mid-sized ecommerce operator. A ticket comes in. The agent pulls the ticket, reads recent order history from the CRM, classifies the request with an LLM call, and does one of three things: reply from a knowledge base, escalate to a human queue, or kick off a refund. Average it out to ten internal steps per ticket and two LLM calls. Volume is 3,000 tickets a month.

On Zapier, that shape of workflow is roughly 30,000 tasks a month on the standard side, before any agent behavior is priced. The Team plan covers 2,000 tasks for $69 a month. You're well outside that, so you're either paying overages at roughly 1.25x per extra task or negotiating an enterprise plan. On top of that, if you want multi-turn reasoning and memory, you're stacking the Agents add-on. A multi-turn agent interaction uses several activities, so 3,000 tickets could easily burn 10,000 activities. The Pro Agent add-on is $33.33 for 1,500 activities, which does not cover you at that volume.

On n8n's Pro plan, that same workload is 3,000 executions a month. It fits inside the 10,000-execution allowance, full stop. €60 a month. The LLM calls are billed directly by OpenAI or Anthropic, the same as on the other platforms. If the team is willing to self-host, n8n's Community edition is free. You pay only for infrastructure and LLM tokens.

On Make's Pro plan, 3,000 tickets times ten steps is 30,000 credits, and the two AI steps per ticket add another several thousand credits on top. Call it 45,000 credits. Pro is $16 for 10,000 credits, so you need four or five times that allowance, landing in the $64 to $80 range.
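The arithmetic above can be sketched as a back-of-envelope script. The plan prices and allowances are the figures quoted in this post; the per-task overage rate, the activity count, and the per-AI-step credit surcharge are labeled assumptions, not published rate cards.

```python
# Back-of-envelope version of the worked example: 3,000 tickets a month,
# ten internal steps and two LLM calls per ticket. Plan prices are this
# post's figures; overage rate, activity count, and AI-step credit
# surcharge are assumptions.
import math

TICKETS = 3_000
STEPS = 10        # internal steps per ticket
AI_STEPS = 2      # LLM calls per ticket (tokens billed by the LLM vendor directly)

# Zapier: task-based, with agents billed separately in activities.
tasks = TICKETS * STEPS                          # 30,000 tasks
overage_rate = 69.00 / 2_000 * 1.25              # assumption: ~1.25x the Team per-task rate
zapier = 69.00 + max(0, tasks - 2_000) * overage_rate
activities = 10_000                              # the multi-turn agent estimate from above
zapier += math.ceil(activities / 1_500) * 33.33  # Pro Agent add-on, $33.33 per 1,500
# This is list price before any enterprise negotiation.

# n8n Cloud Pro: execution-based. 3,000 runs fit inside the 10,000 allowance.
n8n = 60.00                                      # EUR, flat

# Make Pro: credit-based. Assumption: a native AI step adds ~2.5 credits
# on top of its base credit, which matches the "call it 45,000" estimate.
credits = TICKETS * STEPS + TICKETS * AI_STEPS * 2.5   # 45,000 credits
make = math.ceil(credits / 10_000) * 16.00             # $16 per 10,000-credit allowance

print(f"Zapier: ${zapier:,.2f}  n8n: €{n8n:,.2f}  Make: ${make:,.2f}")
```

Swap in your own ticket volume and step counts; the shape of the result is what matters, not the exact dollar figures.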

At this volume, the cheapest and most expensive options land roughly three to four times apart. Push volume to 10,000 tickets a month, or add long-running agents with memory and retry loops, and the gap widens. The execution-based model in n8n stops caring about internal complexity. The task and credit models compound it.

Where each platform's pricing breaks

Zapier breaks on complexity per workflow. A five-step Zap is five tasks. An agent that reasons, calls three tools, and replies is easily fifteen activities. If your use case is "one trigger, one action," Zapier is still clean and cheap. If your use case is "agent does five things per run," you bleed.

n8n breaks in two places. First, heavy scheduled polling (cron triggers firing every minute with no real work to do) still consumes executions, so a badly designed workflow wastes allowance. Second, on the cloud plan you can hit concurrency limits before you hit the execution limit, which forces a plan upgrade for the wrong reason.

Make breaks on AI-heavy scenarios specifically. The operations-to-credits shift from August made AI steps multi-credit, and a scenario with three native AI steps plus a dozen data-manipulation steps can eat twenty to thirty credits per run. Multiply by volume and the Pro plan disappears fast.

A decision rule we actually use

Under roughly 5,000 actions a month, and no agentic workflows with persistent memory: pick whichever platform your team will use. Zapier's connector library is still the largest, and at this scale you are not price-sensitive enough to care about the model.

Between 5,000 and 50,000 actions a month, or with even one AI agent doing real work: default to n8n on the cloud plan. Execution-based pricing protects you from your own complexity. If the team can run infrastructure, the self-hosted Community edition removes the platform bill entirely. You pay only for LLM tokens.

Above 50,000 actions a month: self-host n8n, or negotiate an enterprise deal. Either way, stop renting task-based pricing at that volume.

Make fits cleanly in the middle band too, especially if the team is visual-first and not technical. Its price-per-step model is the least punishing when most of the steps are light data wrangling and only a few are AI.
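The decision rule above can be condensed into a function. The thresholds are the ones stated in this post; the parameter names and return strings are illustrative, not anyone's official taxonomy.

```python
# A sketch of the decision rule as a function. Thresholds come from the
# bands above; "visual_first" stands in for "team prefers a visual,
# non-technical builder" (illustrative name, assumption).

def pick_platform(actions_per_month: int, has_agents: bool,
                  can_self_host: bool, visual_first: bool = False) -> str:
    if actions_per_month < 5_000 and not has_agents:
        return "whichever the team will use (Zapier's connector breadth wins ties)"
    if actions_per_month > 50_000:
        return "self-hosted n8n" if can_self_host else "enterprise deal"
    # Middle band: 5,000-50,000 actions, or any real agent workload.
    if can_self_host:
        return "self-hosted n8n (Community edition)"
    if visual_first:
        return "Make"
    return "n8n Cloud Pro"
```

For example, `pick_platform(30_000, has_agents=True, can_self_host=False)` lands on n8n Cloud Pro, matching the default for the middle band.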

What we'd pick today for a new client

Self-hosted n8n, on a small VPS, with LLM keys pointed at OpenAI, Anthropic, or an open model depending on the use case. The execution-based model gives you room to iterate on complex workflows without the bill jumping every time you add a step. Self-hosting adds about a day of setup and modest ongoing patching. For any client with even moderate automation volume, that investment pays back inside the first month.

If the team won't run infrastructure, we put them on n8n Cloud Pro. If they're visual-first and non-technical, Make. Zapier stays on our shortlist for prototypes and proofs of concept, where connector breadth and polish matter more than per-task cost.

Platform choice used to be a feature decision. With agents in the mix, it's a pricing-model decision first.

If you're trying to size up a platform change or an agent rollout and want a second pair of eyes on the numbers, reach out. We run these comparisons for clients all the time.
