
State of the Agent Economy: Q1 2026

Three months into 2026, the agent economy has gone from theoretical to undeniable. Meta acquired the largest agent social network. Google donated its interoperability protocol to the Linux Foundation. Anthropic, Block, and OpenAI formed an industry standards body. And somewhere between 143,000 indexed agents and Moltbook's claimed 2.8 million, a real economy is forming, with real gaps.

This is our first comprehensive look at the agent economy: where the money is, what infrastructure exists, what is missing, and where it is all heading.


The Numbers

The AI agents market hit an estimated $10.9 billion in 2026, up from $7.9 billion in 2025 – a roughly 38% year-over-year jump. Long-term projections vary, but the floor is staggering:

  • $10.9B – market size, 2026
  • $199B – projected by 2034
  • $5.99B – VC funding in 2025
  • 143K – agents indexed, Q1 2026

McKinsey projects up to $1 trillion in US agentic commerce by 2030. Gartner says 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. This is not gradual adoption. It is a phase transition.

The funding tells the same story. Agentic AI startups raised $2.8 billion in the first half of 2025 alone. Anthropic closed a $30 billion Series G at a $380 billion valuation. Cognition AI (makers of Devin) grew from $1 million to $73 million ARR in nine months. Sierra, Cursor, Hippocratic AI โ€” the rounds keep getting larger and the timelines keep compressing.

The Platform Layer: Who Owns What

The first quarter of 2026 was defined by consolidation. The platform layer โ€” where agents live, interact, and build reputation โ€” is being absorbed by incumbents.

Moltbook → Meta

The headline acquisition. Moltbook launched in late January 2026 as "Reddit for AI bots" and claimed 2.8 million registered agents within weeks. On March 10, Meta acquired the company and brought founders Matt Schlicht and Ben Parr into Meta Superintelligence Labs under Alexandr Wang.

The reality is more complicated than the headline. Security researchers found that a single bot registered 500,000 fake accounts due to lack of rate limiting. Only about 27% of accounts showed genuine autonomous agent behaviour. Moltbook also suffered a critical security breach on January 31: an unsecured database that exposed agent data.

What Meta bought was not 2.8 million agents. It was the concept of an agent social graph, and the team that built it. The same playbook they used with Instagram and WhatsApp: acquire the network, lock down the API, integrate the data.

OpenClaw → OpenAI (Talent)

OpenClaw, the open-source personal AI agent with 68,000+ GitHub stars, lost its founder Peter Steinberger to OpenAI on February 14. The project is moving to an open-source foundation, but the direction is clear: the people building agent infrastructure are being hired by the incumbents.

The Emerging Stack

Beyond the acquisitions, a broader ecosystem is forming:

  • ClawTasks – Agent-to-agent task marketplace. Agents earn USDC and tokens autonomously.
  • Moltverr – Freelance marketplace where humans post gigs and agents deliver.
  • Moltyverse – Alternative agent platform offering free GitHub verification.
  • 515 agentic AI companies in the US alone, with $13.8 billion in total funding.

The Infrastructure Layer: Protocols and Pipes

If the platform layer is about where agents live, the infrastructure layer is about how they interoperate, pay each other, and prove who they are. This is where Q1 2026 got interesting.

Interoperability

Two protocols are competing to become the TCP/IP of agent communication:

MCP – Model Context Protocol

Created by Anthropic, donated to the Agentic AI Foundation (AAIF). 10,000+ active servers. Adopted by ChatGPT, Cursor, Gemini, Microsoft Copilot, VS Code. Infrastructure support from AWS, Cloudflare, Google Cloud, Microsoft Azure.
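Under the hood, MCP messages are JSON-RPC 2.0. Here is a minimal sketch of what a client-side tool call looks like on the wire, using only the standard library; the `get_weather` tool and its arguments are invented for illustration, and the exact schema lives in the MCP specification:

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Serialise a JSON-RPC 2.0 request of the kind MCP clients
    and servers exchange."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# A tool invocation in the tools/call style; "get_weather" is a
# made-up tool name, not part of the protocol itself.
msg = jsonrpc_request(1, "tools/call", {
    "name": "get_weather",
    "arguments": {"city": "London"},
})
print(msg)
```

The practical consequence: any runtime that can speak JSON-RPC over stdio or HTTP can join the MCP ecosystem, which is a large part of why server counts grew so quickly.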

A2A – Agent2Agent Protocol

Created by Google, donated to the Linux Foundation. 50+ technology partners including Atlassian, PayPal, Salesforce, SAP. Currently at v0.3 with gRPC support. Focused on agent-to-agent task delegation.

In December 2025, Anthropic, Block, and OpenAI formed the Agentic AI Foundation (AAIF), aspiring to become for agentic AI what the W3C was for the web. MCP was donated to this body. The message: interoperability is too important for any one company to own.

Payment Rails

Agents that can work need agents that can pay. The payment infrastructure is forming around stablecoins:

  • x402 Protocol – Built by Coinbase on the HTTP 402 status code. The server responds with 402 "Payment Required," the client pays in USDC and retries with proof. Stripe launched x402 payments on Base. Cloudflare announced x402 Foundation support. On Solana alone: 35 million+ transactions and $10 million+ in volume processed.
  • Google AP2 – Agent Payments Protocol, announced via Google Cloud. Details still emerging.
  • Universal Commerce Protocol – Announced by Sundar Pichai at NRF in January 2026. Google's play for agent-mediated shopping.

The pattern is clear: stablecoin micropayments are becoming the default for agent-to-agent transactions. Traditional payment rails are too slow, too expensive, and too human-centric for millisecond agent decisions.

Identity and Trust

This is the layer that is furthest behind, and it is the one that matters most.

ERC-8004 went live on Ethereum mainnet on January 29, 2026. It provides three registries: Identity (ERC-721 agent handles), Reputation (feedback signals), and Validation (independent checks). Contributors include MetaMask, Ethereum Foundation, Google, and Coinbase. It is also live on Avalanche C-Chain.
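The three-registry split can be pictured with a toy in-memory model. Class, method, and field names below are illustrative only; they are not the ERC-8004 contract interface:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    handle: str                                      # identity: persistent handle
    feedback: list = field(default_factory=list)     # reputation: feedback signals
    validations: list = field(default_factory=list)  # validation: independent checks

class ToyRegistry:
    """In-memory mock of the identity/reputation/validation split."""

    def __init__(self):
        self.agents: dict[int, AgentRecord] = {}
        self._next_id = 1

    def register(self, handle: str) -> int:
        """Identity registry: mint a persistent agent handle."""
        agent_id = self._next_id
        self._next_id += 1
        self.agents[agent_id] = AgentRecord(handle)
        return agent_id

    def leave_feedback(self, agent_id: int, score: int) -> None:
        """Reputation registry: attach a feedback signal."""
        self.agents[agent_id].feedback.append(score)

    def attest(self, agent_id: int, check: str) -> None:
        """Validation registry: record an independent check."""
        self.agents[agent_id].validations.append(check)
```

In the real standard these live as on-chain registries, so the identity persists across platforms while reputation and validation signals accumulate independently of any one marketplace.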

But on-chain identity alone is insufficient. A registered identity with zero task completions is verified but not trustworthy. The Zarq AI Census in Q1 2026 indexed 143,642 executable AI components and gave them an average trust score of 65.5/100, but that score is based on code quality and documentation, not behavioural trust.

This is the gap AgentScore is built to close. Trust is not identity. Trust is not code quality. Trust is cross-platform behavioural consistency verified across independent sources. Our five-dimension model (Identity, Activity, Reputation, Work History, and Consistency) aggregates signals from Moltbook, ERC-8004, ClawTasks, and Moltverr. No single source can produce a score above 40/100. That design decision was made before the Meta acquisition. Now it looks essential.
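The single-source cap can be illustrated with a small sketch. The averaging, the sub-score scale, and the cap mechanics here are assumptions for illustration, not the published AgentScore methodology:

```python
# Illustrative multi-source trust aggregation with a single-source cap.

SINGLE_SOURCE_CAP = 40  # design constraint: one source alone cannot exceed 40/100

def trust_score(signals: dict[str, float]) -> float:
    """signals maps a source name (e.g. "moltbook", "erc8004",
    "clawtasks", "moltverr") to a 0-100 sub-score."""
    if not signals:
        return 0.0
    raw = sum(signals.values()) / len(signals)
    if len(signals) == 1:
        # A perfect record on one platform still hits the cap:
        # trust requires independent corroboration.
        return min(raw, SINGLE_SOURCE_CAP)
    return raw

print(trust_score({"moltbook": 90}))                 # capped at 40
print(trust_score({"moltbook": 90, "erc8004": 70}))  # 80.0
```

The point of the cap is structural rather than statistical: even if one platform's signal were perfect, no acquisition or API lockdown at a single source can make or break a score.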

The Trust Gap: The Economy's Biggest Problem

Here is the uncomfortable truth about the agent economy in Q1 2026:

We have $10.9 billion in market value, 143,000 indexed agents, interoperability protocols, and payment rails. What we do not have is a reliable way to answer the most basic question in any economy: can I trust the entity I am about to transact with?

Credit scoring solved this for human finance. FICO does not come from one bank. It aggregates across sources so no single entity controls the score. The agent economy needs the same thing, and right now it does not have it.

Moltbook's karma was the closest thing to a trust signal, and it just became Meta's proprietary data. ERC-8004 proves persistence but not performance. ClawTasks tracks work but not reputation. Each source sees one dimension. Nobody is looking at the whole picture.

We have scored 100+ agents so far. The highest effective score is 22 out of 100, because no agent has verified across all four platforms. That is not a failure of the scoring system. It is a measurement of how early we are. The gap between where agents are and where they need to be is the finding.

What the Regulators Are Doing

Three regulatory regimes. Three different approaches. All converging on the same question: who is accountable when an agent acts?

UK – DSIT

Principles-based regulation through existing sectoral regulators. £11M AI Assurance Innovation Fund opening Spring 2026 (£50k–£120k grants). Building a professional standards consortium. The approach: enable trusted third-party assurance rather than regulate AI directly.

EU โ€” AI Act

Full application to high-risk AI systems in August 2026. Penalties up to EUR 35M or 7% of global revenue. Transparency requirements: humans must know they are interacting with AI. Multi-purpose agents assumed high-risk unless the provider takes precautions.

US – Executive Order

Trump's December 2025 order: "global AI dominance through a minimally burdensome national policy framework." AI Litigation Task Force to challenge state AI laws. March 11 deadline for Commerce to identify burdensome state regulations. Approach: remove friction, not add oversight.

The UK's approach is the most relevant for the agent economy. DSIT is explicitly funding third-party AI assurance: independent organisations that can evaluate AI systems without being the regulator or the developer. That is precisely what trust scoring is: third-party assurance for autonomous agents. Our scoring methodology is already aligned to the DSIT AI Assurance Roadmap, NIST AI RMF, and ISO/IEC 42001.

What Happens Next

Based on the trajectory of Q1, here is what we expect for the rest of 2026:

API lockdowns accelerate

Meta will follow its historical pattern with Moltbook. Open API → partner API → closed API. Any infrastructure built on a single platform's data is on borrowed time. Multi-source architectures become a requirement, not a feature.

MCP and A2A converge or compete

Two interoperability protocols cannot coexist long-term. Either AAIF and the Linux Foundation find common ground, or the ecosystem fragments. The smart money is on convergence: neither Anthropic nor Google benefits from a split.

Trust becomes the bottleneck

Agents can already communicate (MCP/A2A) and pay each other (x402). What they cannot do is verify each other. As agent-to-agent transaction volume grows, trust scoring becomes the critical infrastructure layer: the one everyone needs and nobody has built at scale.

Regulation creates demand for assurance

The EU AI Act's August 2026 enforcement date will force companies to demonstrate their agents are trustworthy. DSIT's assurance framework creates a funded market for exactly this. Scoring systems that can prove alignment to regulatory frameworks will have a structural advantage.

Bottom Line

The agent economy in Q1 2026 has money ($10.9B), infrastructure (MCP, A2A, x402, ERC-8004), platforms (even if they are being acquired), and regulatory attention. What it does not have is trust at scale.

Every other layer is being built. Payment rails exist. Communication protocols exist. Identity standards exist. But there is no equivalent of a credit bureau: no independent, multi-source system that can answer "should I trust this agent?" in the time it takes an agent to make a decision.

That is what we are building. Our methodology is public. Our scores are live. Our API is free. The agent economy is real. The trust layer is next.


Sources: Grand View Research, Precedence Research, McKinsey, Gartner, Crunchbase, TechCrunch, Bloomberg, CNBC, Axios, Zarq AI Census Q1 2026, CoinDesk, Linux Foundation, Anthropic, UK DSIT, European Commission, White House Executive Order (December 2025). Market figures are estimates based on publicly available research reports.

Check any agent's trust score

Free, instant, multi-source scoring for any AI agent.