Claude Routines - The Promise, the Limits, and Why n8n Isn't Going Anywhere

Anthropic launched Routines for Claude Code - combining schedules, API webhooks, and GitHub triggers into natural-language automation. The concept is compelling, but tight daily run caps and token budget constraints mean n8n, Windmill, and Trigger.dev aren't going anywhere yet.

Datum

April 16, 2026

Three Weeks Later, Anthropic Raises the Bar

Three weeks ago, I wrote about Claude Code's Cloud Scheduled Tasks - the feature that finally let me retire my n8n background jobs for AI-powered automation. No more separate infrastructure. No more Edge Functions as workarounds. Just Claude, a repo, and a schedule.

On April 14th, Anthropic went further. They launched Routines - and on paper, this looks like the feature that could genuinely replace traditional workflow automation tools. Not just for developers, but for anyone who automates recurring work.

Routines bundle three things that Scheduled Tasks couldn't do: time-based schedules (like before), API webhooks for external triggers, and GitHub event listeners that react to pull requests and releases. All running on Anthropic's cloud infrastructure, with access to your repos and MCP connectors.

A PR opens - Claude reviews it against your team's checklist automatically. A monitoring alert fires - Claude correlates it with recent commits and drafts a fix. Every morning, a routine scans your analytics, spots a traffic drop on a landing page, and sends a Slack message with recommendations.

Natural language instead of dragging 18 nodes together. That's a compelling pitch.

What Routines Actually Are

Let me be precise about what shipped. A routine is a saved Claude Code configuration that bundles:

  • A prompt - the instructions Claude executes autonomously
  • One or more GitHub repositories - cloned fresh at the start of each run
  • MCP connectors - Slack, Linear, Google Drive, whatever you've wired up
  • A cloud environment - network access, environment variables, setup scripts

Three trigger types are available:

Scheduled: recurring cadence from hourly to weekly, or custom cron. Minimum interval: one hour. Times in your local timezone.

API: a dedicated HTTP endpoint per routine. POST with a bearer token to trigger on demand. You can pass a freeform text payload - useful for piping in alerts from Datadog, Sentry, or your own monitoring.

GitHub: reacts to repository events. Currently limited to pull requests (opened, closed, labeled) and releases. Supports filters on author, title, base branch, labels, and draft status.

A single routine can combine all three triggers. Run nightly on schedule, react to every new PR, and accept API calls from a deploy pipeline - all in one configuration.
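The API trigger can be exercised from any script or pipeline step. Here is a minimal sketch of what that POST looks like; the endpoint URL, token, and payload field name are placeholders I've invented for illustration, not documented values from Anthropic:

```python
# Sketch only: ROUTINE_URL, API_TOKEN, and the "text" field are placeholders.
# The point is the shape: an on-demand POST with a bearer token carrying a
# freeform text payload (e.g. an alert forwarded from Datadog or Sentry).
import json
import urllib.request

ROUTINE_URL = "https://example.invalid/v1/routines/alert-triage/trigger"  # placeholder
API_TOKEN = "rt-placeholder-token"  # placeholder bearer token

def build_trigger_request(text: str) -> urllib.request.Request:
    """Build (but do not send) the POST that would fire the routine."""
    return urllib.request.Request(
        ROUTINE_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request("Datadog: p95 latency on /checkout doubled since 14:02 UTC")
# urllib.request.urlopen(req) would actually fire it
```

Because the payload is freeform text rather than a rigid schema, the same endpoint can absorb alerts from any monitoring tool without a transformation step in between.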

Each trigger spawns an independent cloud session. Routines run fully autonomously - no permission prompts, no approval dialogs. Commits and PRs carry your GitHub identity, not a bot account. By default, Claude can only push to claude/-prefixed branches, though you can override this per repo.

The "Not Quite" Problem

Here's where the enthusiasm meets reality.

Routines come with daily run caps that are - being generous - tight.

Plan         Monthly Price    Daily Runs
Pro          $20/mo           5
Max          $100-200/mo      15
Team         per-seat         25
Enterprise   custom           25

Five runs per day on Pro. That's it. And every run draws down your normal subscription usage - the same token budget that, as of March 2026, has been the subject of significant community frustration. GitHub issue #41930 documents widespread reports of sessions depleting in minutes, with users identifying prompt-caching bugs and throttling as likely causes.

Running routines that consume from this same pool is a risk calculation. A content pipeline that runs daily? A monitoring routine that checks every few hours? You'll hit the wall fast. Very fast.

For context: n8n self-hosted lets you run workflows as often as you want. No daily caps, no token costs for simple automations. Windmill gives you 1,000 free executions per month on its community plan, with no limit on self-hosted. Trigger.dev offers a generous free tier for background jobs. The economics are fundamentally different.
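To put a rough number on that gap: dividing the monthly price by the maximum runs the daily cap allows gives a crude per-run cost floor. It overstates the real cost, since the subscription covers far more than routines, but it shows how quickly the cap binds:

```python
# Back-of-envelope arithmetic from the plan table above. This treats the
# subscription as a pure per-run cost floor, which is an overstatement
# (the plan covers much more than routines), but it makes the cap concrete.
def cost_floor_per_run(monthly_price: float, daily_runs: int, days: int = 30) -> float:
    """Monthly price divided by the maximum runs the daily cap permits."""
    return monthly_price / (daily_runs * days)

pro = cost_floor_per_run(20, 5)    # 20 / 150 runs
max_low = cost_floor_per_run(100, 15)  # 100 / 450 runs, low end of Max
print(f"Pro: ${pro:.3f} per run at full utilization")
print(f"Max: ${max_low:.3f} per run at full utilization")
```

At full utilization, Pro works out to roughly 13 cents per run before token drain even enters the picture. A self-hosted n8n workflow running 50 times a day has no comparable floor.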

Where Routines Win - and Where They Don't

The comparison to n8n, Windmill, or Trigger.dev isn't apples to apples. These tools occupy different niches despite surface-level overlap.

Traditional workflow tools are deterministic. Same input, same output. Every time. They excel at data plumbing - syncing databases, routing webhooks, transforming payloads, triggering downstream systems. They have visual debugging, team collaboration, and hundreds of pre-built integrations. n8n alone offers over 1,200 nodes.

Routines are non-deterministic. Claude reasons about the input and produces a response - which may vary between runs. That's their strength and their limitation simultaneously. For tasks that require judgment - reviewing code against team standards, triaging alerts by correlating multiple signals, analyzing content for relevance - this is genuinely better than any if-else chain you could build in a visual workflow editor.

Dominik Gabor, after building over 40 workflows with both approaches, put it well: Claude Code generates workflow scripts that are approximately 40-50% ready. n8n gives you the visual canvas and debugging tools to go from 50% to 100%. They're complementary - not competing - tools.

The Register was less diplomatic, calling Routines "mildly clever cron jobs." That's unfair but not entirely wrong. For simple scheduling, they're overkill. For complex AI reasoning on a schedule, they're genuinely novel.

The Practical Trade-offs

If you're evaluating whether Routines can replace parts of your automation stack, here's what I'd consider:

Routines are better when the task requires AI judgment. Code review against nuanced team standards. Content analysis that needs to understand context. Alert triage that correlates across multiple data sources. Documentation maintenance that needs to understand what changed and why.

n8n, Windmill, Trigger.dev are better when the task is deterministic and high-volume. Data synchronization. Webhook routing. ETL pipelines. Anything that needs to run 50 times a day reliably. Anything where non-determinism would be a bug, not a feature.

Neither replaces the other. The most sophisticated setups will likely use both - n8n handles orchestration, scheduling, and data gathering, while Claude handles the reasoning steps within that pipeline. The n8n-MCP server integration already supports this hybrid approach.
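A minimal sketch of that hybrid shape: the deterministic orchestrator (an n8n Code node, a Windmill script, plain cron) gathers the data, and only the reasoning step goes to Claude. The request body below follows the shape of Anthropic's public Messages API; the model name and the triage prompt are illustrative, not prescribed:

```python
# Hybrid pattern sketch: the pipeline assembles the context deterministically,
# then builds a Messages API request body for the one step that needs judgment.
# Sending it (via the anthropic SDK or plain HTTPS) is left to the orchestrator.
import json

def build_triage_payload(alert: str, recent_commits: list[str]) -> dict:
    """Assemble a Messages API request body for an alert-triage step."""
    commits = "\n".join(f"- {c}" for c in recent_commits)
    return {
        "model": "claude-sonnet-4-20250514",  # illustrative; pin your own model
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Alert:\n{alert}\n\nRecent commits:\n{commits}\n\n"
                    "Which commit most likely caused this alert, and why?"
                ),
            }
        ],
    }

payload = build_triage_payload(
    "p95 latency on /api/search doubled at 09:40",
    ["abc123 tune search cache TTL", "def456 bump typography tokens"],
)
print(json.dumps(payload, indent=2)[:80])
```

The division of labor is the point: correlation data arrives reliably every time, and only the "which commit is the likely culprit" judgment is non-deterministic.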

There are also practical concerns beyond the run limits. Routines don't support team sharing - they belong to your individual account. There's no way to insert a human approval step mid-run. GitHub triggers only support pull requests and releases during the preview - no pushes, no issues, no comments. And all actions appear under your identity, which makes audit trails confusing in team settings.

What This Means for the Bigger Picture

Routines represent something important even if the current implementation has constraints. The shift from "describe a workflow in a visual editor" to "describe what you want in natural language" is significant. Not revolutionary - that word is overused - but significant.

For the growing number of organizations running AI agents in production, this adds another dimension of automation. And with it, another dimension of responsibility. When routines run autonomously on a schedule, with repo access and external integrations, the security surface expands. I wrote about this three weeks ago in the context of Scheduled Tasks, and everything I said then applies even more now.

This is the problem space Shipwright operates in - orchestrating AI-driven development with traceability, security testing, and compliance documentation baked into every step. The more autonomy we give these agents, the more critical it becomes that the process around them is disciplined. Not because bureaucracy is fun, but because "it runs automatically" only works when you can also verify what it did and why.

→ Explore Shipwright

The Verdict

Claude Routines show the right direction. Prompt instead of flowchart, AI reasoning instead of if-else chains. For tasks that need judgment - code review, content monitoring, alert triage - they're genuinely simpler than anything you can build in a visual workflow editor.

But with 5-15 daily runs, token budget consumption from an already-strained pool, and a research preview label, this is a promising prototype - not a production replacement for your automation infrastructure.

n8n, Windmill, Trigger.dev - they're not going anywhere. Not yet.

Not quite.

Sources:

  • Anthropic - Automate work with routines - April 2026 - code.claude.com
  • The Register - Claude Code routines promise mildly clever cron jobs - April 2026 - theregister.com
  • The New Stack - Claude Code can now do your job overnight - April 2026 - thenewstack.io
  • Dominik Gabor - Will Claude Code replace n8n? After 40+ workflows - 2026 - dominikgabor.com
  • VentureBeat - We tested Anthropic's redesigned Claude Code and Routines - April 2026 - venturebeat.com
  • GitHub Issue #41930 - Widespread usage limit drain reports - March 2026 - github.com