How Adland Can Defund the Business Model of Disinformation


By Csaba Szabo, Managing Director, EMEA, Integral Ad Science (IAS)

Anyone in any doubt about the real-world implications of online disinformation and fake news need look no further than the recent protests around vaccines and climate change.

Consider disinformation alongside Covid-19 and vaccines, and we are in dangerous territory: it is arguably why there was so much vaccine hesitancy, particularly among younger people. What’s more, a recent study investigated the extent, and potential reach, of posts on social platforms that downplay or deny the climate crisis.

Here we see how online content, often dressed up with an air of respectability and quality, and frequently ad-funded, could purposefully deceive a diverse range of people into believing things that ultimately endangered their health or habitat.

And it’s not just about anti-vaxxers or climate change; online disinformation is a tool for political extremists seeking to sow division, for hate groups, and for those who seek to make money from people’s fear and confusion on a range of subjects.

The Scale of the Problem

The world wide web has kept us better connected, broken down barriers and expanded our knowledge. However, it has also led to the rise of disinformation.

It’s a business model for society’s most malignant players, and it needs to be dismantled. Part of the problem in doing so is that many disinformation websites appear respectable and newsworthy, and it often requires expertise to prove otherwise.

And not only are users being deceived; the brands whose ads appear alongside this content are, too. Some brands, without the necessary protections in place, are unwittingly funding disinformation through programmatically traded ads, i.e. ads bought and sold automatically by a platform.
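For readers unfamiliar with programmatic trading, the minimal sketch below illustrates the automated bid decision; every name, price and rule here is hypothetical, purely to show that these purchases happen in software, with no human reviewing each site.

```python
# Minimal sketch of an automated programmatic bid decision.
# All names and values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class BidRequest:
    domain: str         # site where the ad slot lives
    slot_id: str        # identifier of the ad placement
    floor_price: float  # minimum price (CPM) the seller accepts

def decide_bid(request: BidRequest, max_cpm: float) -> float | None:
    """Return a CPM bid, or None to skip the auction.

    In real systems this decision runs automatically for every
    request, thousands of times per second, with no human review,
    which is how unvetted placements slip through.
    """
    if request.floor_price > max_cpm:
        return None  # too expensive, pass on this slot
    return min(max_cpm, request.floor_price * 1.1)  # simple bid rule

bid = decide_bid(BidRequest("example-news.site", "banner-1", 2.0), max_cpm=5.0)
print(bid)  # an ad is bought with no check on the site's content
```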

Indeed, the fact that premium brand-owners have their ads running alongside such content is part of what lends disinformation a sense of credibility in the first place.

So what do we know about the scale of the problem?

According to the Global Disinformation Index (GDI), a non-profit working with governments and businesses to tackle the problem, around $75m in ad spend ends up on the 20,000 disinformation sites identified worldwide.

Meanwhile, in July 2020, 480 English-language Covid-19 disinformation sites accrued $25m in mainstream ad revenue, helping to fund the spread of conspiracy theories and false information.

The GDI is now aiming to identify, down-rank and de-fund dangerous content, with the focus of its efforts on those who use disinformation for financial gain. Consequently, it has created a risk rating to provide brands and ad tech companies with a neutral assessment to help direct ad spend towards safe and trusted content.

In parallel, ad tech verification companies detect misinformation by scanning for clearly fake or hyperbolic terminology. More sophisticated solutions use AI to detect emerging threats by determining which sites correlate strongly with higher-profile sources of misinformation. These automated methods are complemented by the organic flagging of domains through human review, carried out by watchdogs and content reviewers such as the GDI.

Thus, it’s a concerted effort, relying on both human expertise and AI to identify ‘flags’ that can be combined into an accurate assessment of a web page or domain.
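As a rough illustration of how such flags might be combined, consider the toy scoring sketch below; the term list, weights and threshold are invented for illustration and do not reflect any vendor’s actual model.

```python
# Toy sketch: combining automated and human 'flags' into one
# page-level risk score. Terms, weights and thresholds are
# invented for illustration, not any vendor's actual model.
HYPERBOLIC_TERMS = {"hoax", "plandemic", "they don't want you to know"}

def keyword_flag(page_text: str) -> bool:
    """Cheap first pass: scan for clearly fake or hyperbolic wording."""
    text = page_text.lower()
    return any(term in text for term in HYPERBOLIC_TERMS)

def risk_score(page_text: str,
               link_similarity: float,  # 0-1: correlation with known misinformation sources
               human_flagged: bool) -> float:
    """Blend the three signal types described above into one score."""
    score = 0.0
    if keyword_flag(page_text):
        score += 0.3                 # hyperbolic-terminology flag
    score += 0.4 * link_similarity   # AI-derived association signal
    if human_flagged:
        score += 0.3                 # watchdog / reviewer flag
    return score                     # e.g. withhold ads above 0.5

print(risk_score("Experts call the pandemic a hoax...", 0.8, human_flagged=False))
```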

Expertise is the crucial factor here. Recognising disinformation can be hard — facts might need to be verified — and much of it is packaged to look like respectable journalism or purports to be hosted by a trustworthy domain.

In reality, these are often created by banned activist groups and tend to have URLs that appear overnight and masquerade as real news sites, such as Sky News, the BBC and the Guardian.

The Industry’s Responsibility

Combatting disinformation isn’t just an important ethical consideration for all involved; it is a commercial one too. The ultimate goal, of course, is to direct spend towards ad space that enhances rather than damages a brand’s image, an outcome that will fund quality journalism, protect users and defund bad actors.

We know from numerous studies that the majority of consumers would stop using a brand or product if they saw its ads next to false or inflammatory material. As brands weigh this heightened risk, they should ensure their protocols, partners and brokers are able to proactively avoid bidding for ads on sites that commercialise inaccuracies and distort facts.
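A simple way to picture such a protocol is a pre-bid check against a domain risk rating, such as the one the GDI publishes. In the hypothetical sketch below, the ratings table and threshold are stand-ins for a real, regularly updated feed.

```python
# Hypothetical pre-bid brand-safety check: consult a domain risk
# rating (such as the GDI's) before bidding. The ratings dict and
# threshold are stand-ins for a real, regularly updated feed.
DOMAIN_RISK = {
    "trusted-journal.example": 0.1,
    "overnight-fakenews.example": 0.9,
}

def should_bid(domain: str, max_acceptable_risk: float = 0.5) -> bool:
    """Skip any auction on a domain rated above the risk threshold.

    Unknown domains default to high risk, so spend only flows to
    vetted inventory rather than to whatever appeared overnight.
    """
    risk = DOMAIN_RISK.get(domain, 1.0)  # unrated -> treat as risky
    return risk <= max_acceptable_risk

for d in ("trusted-journal.example", "overnight-fakenews.example", "unknown.example"):
    print(d, "->", "bid" if should_bid(d) else "skip")
```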

Brands should also seek to track the accuracy of the content their advertising campaigns appear against, and make a conscious effort to use their ad spend to support media that displays quality, trustworthy journalism, as set out by Reporters Without Borders and the Journalism Trust Initiative.

The global ‘Covid-19 infodemic’, as recognised by the World Health Organisation, has driven trust in all news sources to record lows, according to the 2021 Edelman Trust Barometer. The same research found that an “overwhelming majority” of consumers expect brands to act and advocate on societal issues.

Tackling disinformation, and disrupting the incentives that create it, is a moral, reputational and commercial consideration. It also feeds into a much broader narrative about responsibility within our online ecosystem.

Industry stakeholders, including the World Federation of Advertisers (the global association of advertisers) and platforms such as Google, Facebook, Twitter, Microsoft and TikTok, have come together to create a Code of Practice. Endorsed by the European Commission, its signatories commit to fighting online disinformation.

Guidance from the European Commission includes measures for platforms to:

- share regular reports on disinformation activity;
- tackle fake accounts;
- empower consumers to report such content;
- grant research communities privacy-compliant access to platform data, so outside experts can help monitor online disinformation; and
- exchange information on disinformation ads refused by any one platform, enabling a more coordinated response to shut out bad actors.

It’s a bold list, and time will tell how binding the codes become and to what degree platforms step up to the plate.

There are already some positive steps taking place. Google has placed a blanket ban across its platforms, including YouTube, on ads appearing alongside climate change misinformation. It will cease to run ads against, or allow the monetisation of, “content referring to climate change as a hoax or a scam.” As Google explains, “Advertisers simply don’t want their ads to appear next to this content. And publishers and creators don’t want ads promoting these claims to appear on their pages or videos.”

Early results suggest that industry initiatives such as these codes of practice are working. A survey of advertising professionals found that only 8% saw fake news becoming a greater concern in 2021, down from 33% in 2020. Furthermore, a recent report measuring the share of ads landing on pages flagged as potentially unsafe for brands found that the share landing on fake news sites is decreasing. In fact, the risk of appearing near offensive or controversial content, such as disinformation, fell across all online environments (such as mobile and desktop), in some places by half, in H1 2021 compared with H1 2020.

The commercial internet as we know it is being rebuilt: entire systems and ways of working are being recalibrated in an effort to make the internet a better, safer, and more trustworthy place.

Tackling disinformation and other forms of poor quality and nefarious content is an important strand of the wider mission. So let’s tackle it together and move the dial on trust to where we want it to be.
