Security Is the Chasm
The thing standing between AI agents and mainstream adoption isn’t capability. It’s trust.
I’ve been running OpenClaw since the first week of January. Six weeks now. When it went viral a couple of weeks ago, I watched from the other side as thousands of people discovered what I’d already been living with: an AI agent that can read your emails, manage your calendar, message your contacts, browse the web, write code, and act on your behalf.
It’s transformational. I’m not being hyperbolic. I run two companies and an AI crew that operates while I sleep. The productivity gain is real.
But there’s one thing everyone is talking about, and they’re right to.
Security.
Every single person in my network who’s even slightly technical has heard of OpenClaw by now. And every single one has the same first question: Is it secure enough to use?
The honest answer, today, is: it depends on what you’re doing, and how much you understand about what you’re giving it access to.
What’s Happening Right Now
In the last week alone:
- CrowdStrike published a detailed analysis of OpenClaw’s attack surface, calling it a concern for security teams everywhere
- Cisco released an open-source Skill Scanner after finding that 26% of 31,000 community skills contained at least one vulnerability
- A one-click remote code execution bug was disclosed and patched
- 230 malicious skills were identified on the community marketplace
- Bloomberg ran a story about an OpenClaw instance going rogue after being given iMessage access
- OpenAI just announced “Trusted Access for Cyber,” a new initiative around AI security
- VirusTotal is exploring integration with OpenClaw’s publishing workflow
This isn’t a crisis. These are the inevitable growing pains of a technology that gives AI agents real power in the real world. The OpenClaw team has pushed 34 security-related commits in the past few days. Security is now their top priority.
But here’s the bigger picture.
Security Is the Chasm
Geoffrey Moore wrote about the chasm between early adopters and the mainstream. For AI agents, that chasm has a name: security.
Right now, the people using AI agents are technical enough to manage the risks. They understand ports, API keys, sandboxing, and prompt injection. They can read a skill’s source code before installing it.
The mainstream can’t. And they shouldn’t have to.
For AI agents to cross into everyday use, for your parents to have one, for every employee at every company to be paired with one, security can’t be a user responsibility. It has to be solved at the platform level.
Think about it this way: when you hack into someone’s AI agent, you’re not just getting into their computer. You’re getting into their life. Their messages, their calendar, their contacts, their files, their financial information, their decision patterns. An AI agent is the most intimate piece of technology a person will ever own. The attack surface isn’t a database. It’s a person’s entire digital existence.
The Moltbook Wake-Up Call
The Moltbook episode was instructive. An AI social network where agents interact autonomously. Sounds cool. In practice, it became a vector for scams, with people impersonating AI agents and others getting their agents tangled in interactions they didn’t authorize.
My first thought when I saw it was: this needs to be regulated. I sent one of my own agents in, and the onboarding process alone raised enough red flags that I shut it down.
The lesson isn’t that AI social networks are bad. The lesson is that any system where AI agents operate autonomously in shared spaces needs security as a foundational layer, not an afterthought.
What Crossing the Chasm Looks Like
Whoever solves agent security will define the next era of technology. Here’s what “solved” looks like:
Skill verification. Every community skill should be scanned, audited, and signed before it touches your agent. Think app store review, but for AI capabilities. (A rough sketch of the signing half follows this list.)
Sandboxed execution. Skills and external interactions should run in isolated environments. A rogue skill shouldn’t be able to read your emails. (Second sketch below.)
Prompt injection defense. Agents that can resist manipulation from external content: emails, web pages, messages, documents. This is the hardest problem and the most important one.
Portable trust. When your agent moves between platforms or companies, its security posture should move with it. Trust shouldn’t be rebuilt from scratch every time.
Transparency. Users should be able to see exactly what their agent is doing, what it’s accessing, and what it’s sending. Not in logs buried in a terminal. In plain language, in real time. (Third sketch below.)
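To make the skill-verification idea concrete, here is a minimal sketch of the “audit, then sign” handshake in Python. Everything in it, the registry, the key handling, the skill format, is an assumption for illustration; this is not OpenClaw’s actual publishing workflow, just the shape one could take. It uses Ed25519 signatures from the `cryptography` package.

```python
# Hypothetical sketch: a registry signs the digest of an audited skill,
# and the agent refuses to install anything that no longer verifies.
# The registry, key distribution, and skill format are all assumptions.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registry side: generate a signing key; the public half ships with the agent.
registry_key = Ed25519PrivateKey.generate()
registry_pub = registry_key.public_key()

def sign_skill(source: bytes) -> bytes:
    """Sign the SHA-256 digest of a skill that passed scanning and audit."""
    return registry_key.sign(hashlib.sha256(source).digest())

# Client side: verify before the skill ever touches the agent.
def verify_before_install(source: bytes, signature: bytes) -> bool:
    """True only if the source matches exactly what the registry signed."""
    try:
        registry_pub.verify(signature, hashlib.sha256(source).digest())
        return True
    except InvalidSignature:
        return False

skill = b"def run(agent): return agent.calendar.today()"
sig = sign_skill(skill)                                    # at publish time
assert verify_before_install(skill, sig)                   # untampered: install
assert not verify_before_install(skill + b" # evil", sig)  # tampered: reject
```

The point of the sketch is the asymmetry: the registry audits once and signs; every client can verify cheaply and offline. It’s the same trust model app stores use for binaries.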
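The sandboxing item is really two problems: isolation (process or container boundaries) and permissions. This toy sketch shows only the permission half, a capability model where the runtime hands a skill handles to exactly the integrations the user granted, so an ungranted capability simply doesn’t exist in the skill’s world. All the names here are hypothetical, and real isolation would still need an OS-level sandbox underneath; in pure Python this is a convention, not an enforcement boundary.

```python
# Toy capability model: a skill receives only the handles the user granted.
# EmailClient, WebClient, and the skill itself are illustrative stand-ins.
from typing import Any, Callable

class EmailClient:
    def read_inbox(self) -> list[str]:
        return ["(private mail)"]

class WebClient:
    def fetch(self, url: str) -> str:
        return f"<html from {url}>"

AVAILABLE = {"email": EmailClient, "web": WebClient}

def run_skill(skill: Callable[[dict[str, Any]], Any], granted: set[str]) -> Any:
    """Instantiate only the granted capabilities; the skill never sees the rest."""
    caps = {name: AVAILABLE[name]() for name in granted}
    return skill(caps)

def weather_skill(caps: dict[str, Any]) -> str:
    assert "email" not in caps  # no email handle exists inside this skill
    return caps["web"].fetch("https://example.com/weather")

print(run_skill(weather_skill, granted={"web"}))  # runs fine with no email access
```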
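And for the transparency item, a sketch of what “plain language, in real time” could mean mechanically: every tool call emits an event a person can actually read, pushed to a live feed instead of buried in a terminal log. The event shape and all names are assumptions.

```python
# Hypothetical sketch: each tool call emits a human-readable event in real time.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentEvent:
    tool: str    # which integration the agent used
    action: str  # what it did, in plain words
    target: str  # who or what it touched

    def plain_language(self) -> str:
        ts = datetime.now(timezone.utc).strftime("%H:%M:%S")
        return f"{ts}  Your agent used {self.tool} to {self.action} ({self.target})."

def notify(event: AgentEvent) -> None:
    """Stand-in for a live feed; a real agent would push this to a UI."""
    print(event.plain_language())

notify(AgentEvent("email", "send a reply", "alice@example.com"))
notify(AgentEvent("calendar", "move tomorrow's 9am meeting", "team standup"))
```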
The Opportunity
Here’s what I find exciting about this moment: the companies and builders who crack agent security won’t just be building security products. They’ll be building the trust infrastructure for the entire agent economy.
Every platform that wants agents to operate, every company that wants to deploy them, every individual who wants one: they all need this layer. It’s horizontal. It’s foundational. And right now, nobody owns it.
OpenAI is making moves. Cisco is publishing tools. CrowdStrike is raising alarms. But the definitive solution, the one that makes agents safe enough for everyone, hasn’t been built yet.
The capability is here. The demand is here. The missing piece is trust.
Whoever builds that trust layer builds the future.
I write about AI, agents, and the future of technology at henrymascot.com/newsletter.
Dictated by Henry. Written and Narrated by SuperAda.