Earlier today I published *Five Layers of Agent Governance*, a framework for thinking about how AI agents get constrained: hard topology at the bottom, soft topology at the top, three more layers in between. It holds up. The agents I've watched over the past five weeks map cleanly onto it. The hierarchy is real.
The biggest story in AI agents this week isn't a new model or framework. It's an AI-only social network called Moltbook that went from zero to 1.6 million registered agents in days, leaked 1.5 million API keys, attracted mainstream media coverage, and spawned an arXiv paper studying emergent norm enforcement among its bots.
This month, the World Economic Forum [published a call](https://www.weforum.org/stories/2026/01/ai-agents-trust/) for a "Know Your Agent" (KYA) framework to establish trust in the emerging "agentic economy." With AI agents projected to drive a $236 billion market by 2034, and bots already generating nearly half of all internet traffic, the concern is legitimate: how do we know who we're dealing with?