The IAB Tech Lab released version 1.0 of its Content Monetization Protocol (CoMP) specification on March 10, opening it for public comment through April 9, 2026. The framework establishes a standardized way for publishers and AI systems to negotiate commercial terms before any content crawling or use occurs. In plainer terms: it's an attempt to build payment infrastructure for the information that powers large language models.
The timing isn't accidental. Publishers have experienced search referral traffic declines exceeding 50% in some cases, per the IAB's own documentation. AI systems are summarizing and synthesizing publisher content at scale, reducing the incentive for users to click through to the original source. The economic relationship between content creation and content consumption is breaking.
"AI systems require chips, power, and information. Information is the only input in that equation that does not yet have a consistent commercial infrastructure around it," Anthony Katsur, CEO of IAB Tech Lab, said in the announcement. That framing is deliberately provocative. Chips cost money. Power costs money. Content, so far, has been treated as free raw material.
If you're a marketer who relies on earned media, publisher partnerships, or the existence of high-quality editorial content, this story matters more than you might think at first glance.
The Architecture of a Content Marketplace
CoMP isn't a paywall. It isn't a replacement for robots.txt or CDN-level access controls. The framework assumes publishers already have blocking mechanisms in place. What CoMP does is create a standardized path from "no access" to "access under agreed commercial terms."
Think of it as installing a payment counter next to the locked door.
The protocol is designed to work across two scenarios: direct licensing arrangements between publishers and AI companies, and third-party marketplaces that aggregate content offerings. In both cases, CoMP provides a machine-readable way for content owners to signal their permissions, pricing, and terms. AI systems can check those signals before accessing or using content.
MarTech.org's coverage described the framework as creating "common rails" for what has so far been a patchwork of proprietary licensing deals and informal scraping arrangements. The protocol reduces bespoke technical work on both sides. Publishers don't need custom integrations for every AI platform. AI systems don't need separate licensing negotiations with every publisher.
How it works in practice
At a technical level, CoMP introduces a standardized API and metadata schema that publishers can implement at their edge layer. When an AI system's crawler encounters a CoMP-enabled site, it receives structured information about what content is available, under what terms, and at what price. The crawler can then proceed, negotiate, or move on.
The framework also addresses attribution and usage tracking. Publishers can specify how their content should be credited when used in AI-generated outputs. This is particularly significant for news organizations, whose reporting frequently appears in AI summaries without any reference to the original source.
The Publisher Coalition Behind the Framework
Several major content organizations have already expressed support. The Weather Company, Bertelsmann, and publisher-focused consultancy Beeler.Tech are among early backers cited in the announcement.
Achim Schlosser, VP of Global Data Standards at Bertelsmann, called the release "an important step toward establishing interoperable, transparent standards for fair value exchange in the AI ecosystem."
The support from these organizations is meaningful but not sufficient. CoMP only works if AI companies adopt it. And right now, the major AI players (OpenAI, Google, Anthropic, Meta, and others) haven't publicly committed to the protocol. Without their participation, CoMP risks becoming a well-designed standard that nobody uses.
The enforcement gap
This is the fundamental tension in the specification. CoMP is a protocol, not a law. It provides the technical infrastructure for commercial agreements, but it doesn't compel AI companies to enter into those agreements. A crawler can encounter CoMP metadata and simply ignore it. The framework's drafters acknowledge this explicitly, noting that CoMP "assumes that content owners have established robust blocking strategies at the delivery point."
Translation: if an AI company doesn't want to pay, you still need to block them yourself. CoMP just makes it easier to let them in under negotiated terms.
Why Marketers Should Pay Attention
At first glance, this looks like a publisher-versus-AI-company story. It isn't.
AI answers and agents are increasingly shaping how people discover information, products, and brands. If the economics of content access change, that ripples through the entire media ecosystem. Consider the implications:
If premium publishers find sustainable AI monetization models through frameworks like CoMP, the quality and availability of trusted editorial content could stabilize. Your earned media strategy still has a foundation. If publishers can't monetize AI access, many will either shrink or shift to content formats that AI systems can't easily summarize. That degrades the discovery environment your brand depends on.
For performance marketers specifically, the AI-driven search landscape that's replacing traditional search results relies heavily on content quality. If the content feeding those AI responses deteriorates because publishers can't sustain their operations, the quality of AI-generated recommendations and answers deteriorates with it. The garbage-in, garbage-out dynamic applies here.
There's a stranger possibility, too. If CoMP succeeds and AI companies start paying meaningful licensing fees, those costs will eventually flow downstream. The AI tools your marketing team uses could get more expensive. Or AI companies might limit their content access to reduce costs, potentially reducing the comprehensiveness of their outputs.
The Technical Underpinnings
The CoMP specification introduces several concrete technical components that deserve attention beyond the high-level framing.
First, there's a new metadata schema that publishers implement at their content delivery layer. This schema includes fields for content classification, licensing terms, pricing models (per-query, per-token, subscription-based, or hybrid), and attribution requirements. The schema is designed to be machine-readable, so AI crawlers can process licensing terms automatically without human negotiation.
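To make the schema concrete, here is a minimal sketch of what CoMP-style licensing metadata and a crawler's decision logic might look like. The field names and values are illustrative assumptions based on the fields described above, not the actual schema published in the specification.

```python
# Hypothetical CoMP-style licensing metadata a publisher might serve at its
# delivery layer. All field names here are assumptions for illustration.
comp_terms = {
    "version": "1.0",
    "content_classification": "news/article",
    "licensing": {
        "pricing_model": "per-query",  # per-query | per-token | subscription | hybrid
        "price_usd": 0.002,            # price per query under this model
        "attribution_required": True,  # outputs must credit the source
    },
    "attribution": {
        "display_name": "Example Daily News",
        "source_url_required": True,
    },
}

def crawler_decision(terms: dict, max_price_usd: float) -> str:
    """Machine-readable terms let a crawler decide without human negotiation."""
    licensing = terms["licensing"]
    if licensing["pricing_model"] == "per-query" and licensing["price_usd"] <= max_price_usd:
        return "accept"
    return "negotiate-or-skip"

print(crawler_decision(comp_terms, max_price_usd=0.01))  # -> accept
```

The point of the machine-readable design is visible in `crawler_decision`: because pricing and terms are structured data rather than legal prose, the accept-or-move-on choice can happen at crawl time.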
Second, CoMP defines a handshake protocol. Before an AI system accesses content, it queries the publisher's CoMP endpoint, receives terms, and either accepts them programmatically or routes the request to a third-party marketplace for negotiation. This creates an auditable record of content access agreements.
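The handshake flow described above can be sketched as follows. The endpoint path, field names, and negotiation logic are assumptions for illustration; the published specification defines the real protocol.

```python
# Hypothetical sketch of a CoMP-style handshake with an auditable access log.
import hashlib
import json
from datetime import datetime, timezone

def fetch_terms(publisher_endpoint: str) -> dict:
    """Stand-in for querying the publisher's CoMP endpoint over HTTP."""
    # A real crawler would issue a GET to something like
    # f"{publisher_endpoint}/.well-known/comp" (path is an assumption).
    return {"pricing_model": "per-query", "price_usd": 0.002}

def handshake(publisher_endpoint: str, budget_usd: float, audit_log: list) -> str:
    terms = fetch_terms(publisher_endpoint)
    accepted = terms["price_usd"] <= budget_usd
    # Every access decision is recorded, creating the auditable trail
    # of content-access agreements the framework calls for.
    entry = {
        "endpoint": publisher_endpoint,
        "terms": terms,
        "decision": "accepted" if accepted else "routed-to-marketplace",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry["decision"]

log: list = []
print(handshake("https://publisher.example", budget_usd=0.01, audit_log=log))
```

The digest over each log entry is one way to make the audit record tamper-evident; whether CoMP mandates anything like it is not stated in the coverage.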
Third, the framework includes usage reporting standards. Publishers can request structured reports showing how their content was used, how many queries referenced it, and what downstream outputs included their information. This is the attribution layer that publishers have been demanding.
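A usage report under such a standard might aggregate raw access events into the per-item counts publishers care about. The event shape and report structure below are assumptions based on the fields the coverage mentions (queries referencing content, downstream outputs including it).

```python
# Hypothetical aggregation of usage events into a CoMP-style report.
from collections import Counter

# Each event: (content_url, whether the content appeared in an AI output).
events = [
    ("https://publisher.example/story-1", True),
    ("https://publisher.example/story-1", False),
    ("https://publisher.example/story-2", True),
]

def build_usage_report(events):
    queries = Counter(url for url, _ in events)
    output_uses = Counter(url for url, used in events if used)
    return {
        url: {"queries": queries[url], "cited_in_outputs": output_uses.get(url, 0)}
        for url in queries
    }

report = build_usage_report(events)
print(report["https://publisher.example/story-1"])
# -> {'queries': 2, 'cited_in_outputs': 1}
```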
The specification draws on existing web standards, including elements from robots.txt, structured data markup, and API authentication patterns. That's a pragmatic choice. Building on familiar infrastructure lowers the adoption barrier for both publishers and AI companies.
The marketplace dimension
Perhaps the most consequential element of CoMP is its support for third-party content marketplaces. Rather than requiring every publisher to negotiate directly with every AI company, the framework allows intermediaries to aggregate content offerings, negotiate bulk terms, and handle payment distribution.
This marketplace model echoes how programmatic advertising evolved. Individual publishers once negotiated ad deals directly with every advertiser. Ad exchanges and SSPs created standardized infrastructure that reduced transaction costs and enabled scale. CoMP is attempting the same structural transformation for AI content licensing.
If content marketplaces emerge under the CoMP framework, they could become significant new revenue channels for publishers. They could also become new intermediaries that extract value from both sides of the market, just as ad exchanges did. The history of digital media intermediation suggests caution about assuming that standardization automatically benefits content creators.
The Bigger Picture: Who Controls the Information Supply Chain?
CoMP is arriving at a moment when the relationship between content creators and AI platforms is being renegotiated across the entire digital economy. OpenAI has signed individual licensing deals with outlets like the Associated Press and News Corp. Google offers a robots.txt-based opt-out mechanism. The EU AI Act gives content owners legal rights to opt out of training data collection.
None of these approaches, taken individually, create a functional marketplace. They're stopgap measures. CoMP is the first serious attempt at building standardized commercial infrastructure for the AI content economy.
The public comment period runs through April 9, 2026. The final specification, assuming it advances past this stage, would then need adoption from both publishers and AI systems to have any real impact. That adoption timeline is measured in years, not months.
What's unresolved is whether a standards body can move fast enough to matter. The AI industry is advancing at a pace that makes traditional standards processes look glacial. By the time CoMP reaches widespread adoption, the AI content landscape could look entirely different.
For now, though, the framework represents something important: the first attempt to treat information as a commercially structured input to AI systems, the same way chips and power are treated today. Whether the industry agrees with that framing will determine a lot about how the next era of digital content plays out. And if you work in marketing, that's your landscape.