
Comment
October 1, 2025

Publishers with AI licensing deals have seven times the clickthrough rate

But only a select group of publishers has deals with LLM developers.

By David Buttle

Publishers with OpenAI licensing deals benefit from a ChatGPT clickthrough rate almost seven times higher than those without agreements, according to the latest State of the Bots report from Tollbit.

This suggests that AI gatekeepers’ discretionary deal-making, which currently locks out all but large, premium, English-language titles, will be a key determinant of success as AI adoption and diffusion continue to accelerate. We should be concerned.

In private conversations over recent weeks, a number of publishers with agreements have described growth in referrals from ChatGPT.

Whilst it remains a fraction of the traffic delivered by Google, it is starting to reach almost meaningful levels for some.

Underpinning this is the greater visibility delivered by large feature boxes displayed when the chatbot responds to certain relevant prompts (pictured, top).

The OpenAI deals that secure ChatGPT’s access to real-time journalism also provide this additional prominence and traffic. After an 18-month period of deal-making that appeared to conclude earlier this year, OpenAI has penned around 40 such licensing partnerships with content businesses.

The current wave of deals skews heavily towards English-language premium journalism – mostly news – from the largest media businesses on either side of the Atlantic. Small, non-English-language and independent media have been largely shut out so far.

Similar, but far smaller or more nascent, AI licensing programmes are being pursued by Google (secretively, but reported in Bloomberg and elsewhere), Amazon and Microsoft.

Algorithms give way to opaque AI licensing deal decisions

Since the advent of digital news distribution, publishers have relied on third-party platforms to reach audiences. However, in the case of search and social media, it is algorithms that determine the level of prominence and promotion a particular piece of content receives.

Whilst these algorithms are often – and justifiably – criticised for their opacity and distorting incentives for media businesses and individual creators, they were at least applied in a uniform manner, generally without favour or exception.

This is not the case for content that feeds AI applications. Decisions about which content is amplified – in the form of which licensing agreements to strike – are now entirely opaque, the output of decision-making that takes place behind closed doors by anonymous and unaccountable tech executives.

And it doesn’t seem that regulators are likely to be in a position to help. The UK and EU’s new digital competition rules (the Digital Markets Unit regime in the UK and the Digital Markets Act in Europe) contain obligations on platforms to ensure fair treatment, but it is unclear how these would apply to the treatment of licensed versus unlicensed content. These new digital regulatory levers were designed for the previous era, not for a world in which bilateral commercial agreements underpin content distribution.

We are relatively early in the AI era, but on the current trajectory, LLM-powered technologies will likely replace conventional search for many use-cases in the coming years. As this comes to pass, the operators of these applications will sit in critical gatekeeping roles, determining which publishers can access both audiences at scale and now licensing revenues.

The upshot: get a deal and you’ll get paid and get (some) traffic. If you don’t have a deal, you’ll get neither.

AI systems need independent, specialist content too

If we play this forward and the deal-making pursued by big tech continues in this discretionary, bilateral form, the emerging distribution environment carries very substantial risks for the sustainability of many outlets and, more importantly, for the media and information ecosystem more widely.

For example, as AI developers will always prefer the efficiency of a smaller number of larger partnerships, will this lock in the advantage of the largest players?

There is also the very real potential of deal-making being influenced by political pressure. We know that big tech is in hock to Washington, so why would a firm risk a deal with a title that’s out of favour with the White House or its allies?

This dynamic will also likely raise the barriers to entry for new businesses – how can a new media brand get to the scale needed to secure an agreement, if it doesn’t have one to start with?

Clearly these risks carry real consequences for society more broadly. They could undermine freedom of expression, narrow the range of perspectives on offer and throttle the emergence or survival of media that serve niche and minority audiences.

There is no obvious solution to this emerging issue. Policy-makers need to be aware of it and to consider whether there are interventions that might mitigate its worst effects. Of course, though, it is not for governments to determine the terms and participants of freely-entered commercial agreements.

Media businesses themselves – both those with and without deals – can and should be working together, harnessing their collective power to establish AI licensing norms and to create a market that will allow them to extract fair value from their content.

Bilateral agreements will continue to exist, but independently produced, non-English-language, specialist and niche content is also needed to power AI systems. A marketplace in which all creators can get paid for the value they deliver is urgently needed.
