The AI Revolution: Why Publishers Are More Ready Than They Think

By Benjamin Lanfry, Chief Supply & Operations Officer, Ogury

Digital publishers have faced a slew of disruptions, from social media algorithms reshaping audience engagement to search updates upending traffic strategies. Generative AI is ushering in another paradigm shift, but one that many publishers have inadvertently prepared for through strategies already in place: diversifying revenue streams, prioritising audience loyalty, and taking a proactive stance on protecting intellectual property.

As a result, forward-thinking publishers will likely avoid the worst-case scenarios of AI’s impact. They may even benefit from the technology, using it to increase content output and syndication, while striking mutually beneficial agreements with AI companies hungry for training data. Meanwhile, publishers who feel unprepared must look towards their proactive peers and emulate their strategies, as being able to capitalise on AI’s benefits while mitigating its risks will determine who sinks or swims.

How publishers can face AI disruption from a position of strength

There’s no doubt that AI-powered chat interfaces and search engines — such as ChatGPT, Google Gemini, DeepSeek, and Perplexity — will further decrease search traffic to publishers, while the rollout of advertising inventory in AI search will create yet more competition for advertising budgets.

However, this merely accelerates changes in content discovery that have been years in the making. Before AI Overviews launched in May 2024, more than half (58.5%) of Google searches resulted in zero clicks. Savvy publishers saw that the writing was on the wall and switched away from an SEO-focused model to a more holistic off-platform strategy that encompasses all platforms where their current and potential audiences spend their time.

Though changing course is always daunting, breaking away from a reliance on search can be freeing from an editorial perspective. Instead of tailoring content to suit one particular (and ever-changing) algorithm on one platform, successful publishers are considering how their content can be developed or adapted for video, podcasts, social media, and so on.

This is an area where AI tools can do a lot of heavy lifting. Repurposing content for different platforms and media is exactly the sort of laborious busywork that can bog down human teams, but that AI can handle in a flash. Publishers can also use AI to dig into and categorise their archives, massively improving on-site search and opening avenues for their audiences to rediscover legacy content, a fascinating opportunity in an online ecosystem that is often solely focused on the “now”.

Beyond changes in technology and media, the most fundamental shift among digital publishers has been from trying to capture the attention of fly-by-night visitors from search engines to cultivating a loyal — and often paying — audience. This simultaneously reduces reliance on and improves the value of advertising revenues, as a trusting audience is a rich source of first- and zero-party data, the former being user data collected with content and the latter being data that is given directly through polls and surveys.

In short, the strategies that successful publishers have been pursuing to adapt to declining search traffic and fragmented audience attention will also mitigate any potential negative impact from AI. Those most at risk are the publishers who were already clinging to outdated audience growth and monetisation strategies, and who will now have to pivot with great urgency.

Publishers must recognise and push their leverage

The legal system moves at a slower pace than technology development, which led many LLM developers to play fast and loose with copyright law before concepts such as fair use had been tested in court. Intellectual property was scraped from across the open web (and even downloaded from torrents) for use in training data, without permission from rights holders — a practice that becomes contentious the moment such data is used for commercial gain.

This period of legal limbo is coming to an end. At the time of writing, court cases between rights holders — including many publishers — and AI developers are in progress across the globe, and a legal consensus will soon take shape. At the same time, regulators are inking legislation to clarify the situations in which rights holders need to be recompensed. Throughout this process and afterwards, publishers must be proactive in protecting their intellectual property.

Giving AI developers carte blanche to use scraped data from digital publishers would undermine the very foundations of copyright law, so the most likely regulatory outcome will be that publishers will be remunerated — perhaps retroactively, depending on the transparency demanded of training data — for the use of their content.

To get ahead of the issue, many publishers and AI developers have struck commercial agreements for the licensed use of publisher content in training data and AI products, with the most high-profile being News Corp’s estimated $250 million deal with OpenAI, though AI startups such as ProRata.ai have also made inroads on licensing deals.

The fact is, AI companies need publishers. If a chatbot can’t answer, “What’s happening today?” then it falls short of providing a vital utility that consumers expect. AI can’t perform news gathering, reporting, or investigative journalism; it can’t write product reviews, holiday guides, or lifestyle content. AI can’t create anything truly new or engage with the “offline” world. For an AI product to serve its most vital function as a human assistant, it needs to be kept abreast of current events and trends and to surface the human-created content that human audiences desire.

Publishers, with their editorial guidelines and codes of ethics, also have a significant advantage in accuracy. “Hallucinations” have proven a stubborn issue in generative AI output, and the longer the issue persists, the more it seems that it may be endemic to the technology, undermining its ability to serve as a reliable disseminator of information.

A recent study by the BBC found that, of 100 responses generated by mainstream AI chatbots using BBC content as a source, 51% were judged to have significant issues, 19% contained factual errors, and 13% included modified or invented quotes.

History has shown people will favour convenience over quality, but there’s only so long AI-generated search results can get away with their work-in-progress disclaimers before their utility comes under serious scrutiny. Such services will need to prioritise accuracy, which may mean directly quoting and attributing their sources rather than synthesising answers, especially when it comes to current events.

There is no denying that digital publishers will face disruption in the coming years, and not all of it will be positive. However, the publishing industry has weathered intense technological change time and time again and survived, though transformed along the way. If AI companies want to remain a relevant part of audiences’ engagement with media, they cannot afford to kill the golden goose by failing to support publishers — and creative industries as a whole. Cooperation is the only way forward.

Tags: AI