Claude Code Cloud Scheduled Tasks – Why I Never Understood the OpenClaw Hype
Anthropic launches Cloud Scheduled Tasks for Claude Code. A look at why autonomous AI agents like OpenClaw are a security risk, how I approach it differently with Claude Code Skills — and why security matters more as autonomy grows.

Datum
From Hype, Control, and the Question of Who You Trust With Your Machine
Over 250’000 GitHub stars in just a few weeks. Fortune called it a security crisis. Trend Micro spoke of “invisible risks.” And my LinkedIn feed was full of people enthusiastically showing how OpenClaw answers their emails, manages calendars, and summarizes Slack messages.
Honestly, I never understood the hype.
Not because the technology isn’t impressive. It is. But an autonomous AI agent running on your machine, with access to your data, navigating through untrusted content, and communicating externally — that’s exactly what Simon Willison calls the “lethal trifecta”: private data, untrusted content, and external communication. Three things that together create a perfect attack surface for prompt injection and data exfiltration.
The numbers prove him right. Censys identified over 21’000 publicly reachable OpenClaw instances. Moltbook, a social network built for OpenClaw agents, leaked 35’000 email addresses and 1.5 million API tokens. CVE-2026-25253 received a CVSS score of 8.8. A Cloud Security Alliance study showed: one in five organizations deployed OpenClaw without IT approval.
Why I Do It Differently
With Claude Code, I build my own skills. Sounds like more effort — and initially, it is. But the difference lies in control.
I write the prompts. I define which tools the skills are allowed to use. I prevent prompt injection as best I can — by setting clear boundaries between system instructions and user input, not letting unvalidated external content into the context, and testing the skills in a proper software development process. Security testing included.
No black-box ecosystem I have to blindly trust. Skills that I have under control. At least more under control.
The OWASP Top 10 for Agentic Applications 2026 has a nice term for this: “Least Agency.” Autonomy is a feature that must be earned — not a default setting.
The One Problem That Remained
What bugged me until recently, though: recurring tasks. When a skill needs to run not just once but regularly — research trends weekly, generate a report daily — I always needed additional infrastructure for that.
In my case, that was n8n running in the background. Or sometimes Supabase Edge Functions with a cron job. Both work — but they’re additional systems that need to be maintained, monitored, and secured. Another attack surface, another point in the architecture that I need to keep an eye on.
Cloud Scheduled Tasks: The Gap Closes
Anthropic has now closed exactly this gap. With Cloud Scheduled Tasks, Claude Code no longer runs only locally on the desktop but on Anthropic infrastructure. Connect your repo, define a task, set a schedule. Done.
Two variants are available:
Local: Tasks run on your machine, with access to local files and tools. Requirement: Desktop app open, machine awake. Up to 50 parallel tasks per session, automatic expiry after three days.
Cloud (Remote): Tasks run on Anthropic infrastructure against a fresh clone of your repo. Your machine can be off. That’s the actual — let’s call it: the feature I’ve been waiting for.
For me, this means concretely: my skills live in the repo. The cloud session has direct access. No more n8n needed, no Edge Functions as a workaround. One less piece of infrastructure, one less point of failure.
The Bigger Question
What remains is the responsibility. And it’s not getting smaller.
When AI agents run in the cloud, with repo access, on a schedule, without anyone watching — security doesn’t become less important. It becomes more important.
OWASP lists ten critical risks for agentic applications. Number one: Agent Goal Hijack — attackers redirect an agent’s objectives by manipulating instructions, tool outputs, or external content. Number five: Unexpected Code Execution — agents generate or execute attacker-controlled code. Number six: Memory & Context Poisoning — persistent corruption of the agent’s memory.
These aren’t theoretical scenarios. These are documented attack vectors.
Simon Willison’s recommendation is clear: the only way to stay safe is to avoid the lethal trifecta entirely. Meta calls it the “Rule of 2”: if an agent has access to sensitive data, either the input or the output must be severely restricted.
What This Means for Businesses
For organizations deploying or planning to deploy AI agents, there are a few practical trade-offs to consider:
- Control vs. Convenience: Ready-made agents like OpenClaw are faster to set up. Custom skills cost more upfront but give you more control. The question isn’t “what’s easier” but “what can I take responsibility for.”
- Earn autonomy: Start with minimal autonomy. Give the agent only the permissions it needs. Least Privilege isn’t new — Least Agency is its extension for AI.
- Scheduling needs governance: When tasks run automatically in the cloud, you need clear policies. Who can create tasks? Which repos may be connected? How do you monitor what the tasks are doing?
Whoever gives AI agents more room to operate must also invest more in securing them. This doesn’t just apply to developers. It applies to anyone responsible for IT systems.
This is exactly why I’m building Shipwright — an open-source framework that orchestrates the full software development lifecycle on Claude Code. Every skill, every scheduled task, every line of generated code follows a spec-driven process: requirements traceability, automated security testing, compliance documentation that stays current with every build. Not because it’s fancy, but because with growing agent autonomy, process discipline is the only thing standing between “it works” and “it works safely.”
Sources:
- Simon Willison – The lethal trifecta for AI agents – June 2025 – simonwillison.net
- OWASP – Top 10 for Agentic Applications 2026 – genai.owasp.org
- Fortune – Why OpenClaw has security experts on edge – February 2026 – fortune.com
- Trend Micro – What OpenClaw reveals about agentic assistants – February 2026 – trendmicro.com
- Anthropic – Run prompts on a schedule – Claude Code Docs – code.claude.com
- The Hacker News – OpenClaw AI Agent Flaws – March 2026 – thehackernews.com
