The AI Arms Race: How Tech Can Generate and Detect Fakes in the Age of Misinformation

By Emma Lacey, SVP EMEA, Zefr

During the COVID-19 pandemic, Nike and Amazon were among a number of big brands reported to have unknowingly advertised on conspiracy sites, while, more recently, ads have appeared alongside Russian propaganda. Beyond the ethical implications of appearing next to harmful content, brands risk tarnishing their reputations in the eyes of consumers while their budgets are wasted on irrelevant audiences.

Technology is a double-edged sword. While machine learning capabilities have evolved to detect misinformation, AI systems have also emerged that create and proliferate it across channels. Take deepfakes and AI image generators, which have made uncensored, false imagery more accessible to users. The nuances of misinformation also mean it is often difficult to identify, and once it has been shared across the internet its spread can be extremely hard to contain. It's paramount that the industry, and society as a whole, works together to combat misinformation and get ahead of the tech that's helping to produce it.

What is misinformation?

Since its launch in 2019, the Global Alliance for Responsible Media (GARM) has been working to improve brand and user safety by addressing harmful content on digital media platforms and its monetisation via advertising. GARM has recently expanded its guidelines to include guidance on misinformation, which it defines as “the presence of verifiably false or willfully misleading content that is directly connected to user or societal harm.”

False content has obvious ramifications for brands. If their advertising appears adjacent to misinformation, they can be perceived to be endorsing it, which damages trust with key customers. In addition, media spend takes a big hit if advertising campaigns wind up next to harmful content; precious inventory is wasted, crucial audiences are missed and ROI plummets.

The rise of deepfakes

There has been a disturbingly rapid rise in deepfake videos in recent years, with their numbers roughly doubling every six months since tracking began in December 2018.

Deepfakes are made using a form of AI called deep learning to create images of false events. They can take the form of videos and images and can even be applied to voices to mimic a person’s speech. Many deepfake videos of celebrities and public figures have appeared over the years, with Barack Obama among them and, more recently, Keanu Reeves.

Needless to say, these false videos can take a dark turn as they are often created with malevolent intentions. For example, a deepfake emerged of Ukrainian President Volodymyr Zelenskyy last year in which he appeared to call on Ukrainian citizens to surrender to Russia. Moreover, many women have been the subject of deepfakes featuring pornographic and nude content, which has a devastating impact on their lives and wellbeing.

While the evolution of technology has facilitated exciting new capabilities, malicious players are also capitalising on these opportunities for their own ends. As a result, digital media is facing an arms race between the AI that is helping to generate fake content, and the AI that is combating it.

What can brands do?

The dangers of misinformation, and its wide-reaching consequences for society and brands, prompted GARM to introduce it as a category within its Brand Safety Floor and Suitability Framework in 2022. This has helped brands become more aware of the importance of monitoring misinformation and has provided key, unified guidelines for identifying and demonetising it during the media planning, targeting and optimisation process.

This is especially vital for brands running advertising campaigns across social platforms. A vast amount of user-generated content (UGC) is created on these channels every day, making it difficult for brands to keep on top of misleading information spread by creators. Furthermore, influencers build strong connections with their followers over time, and because users increasingly trust what they say, the ability to monitor video content at scale becomes all the more urgent. As a result, brands need a quick and efficient way to detect untruthful or harmful information, so their advertising is placed next to suitable, safe content and they can make the best use of this scalable, trusted medium.

Machine learning systems already exist to identify fake content: data is extracted from smart fact-checking engines that can tell, for example, whether an ad thumbnail image contains nudity, violence, illegal substances and the like. These systems are also used to scrape online knowledge repositories for facts that can support or disprove a claim. Over time, this data reveals patterns that can be applied at scale to advertising campaigns.
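To make the idea concrete, here is a minimal, purely illustrative Python sketch of this kind of thumbnail screening. The category labels, the 0.80 block threshold and the classify_thumbnail stub are all assumptions for the sake of the example, not a description of Zefr's or any vendor's production system.

    # Illustrative sketch only: the model, labels and thresholds are hypothetical,
    # not a description of any vendor's production system.
    from dataclasses import dataclass
    from typing import Dict

    UNSAFE_LABELS = {"nudity", "violence", "illegal_substances"}  # assumed categories
    BLOCK_THRESHOLD = 0.80  # assumed confidence above which a placement is demonetised

    @dataclass
    class ScreeningResult:
        url: str
        scores: Dict[str, float]
        blocked: bool

    def classify_thumbnail(image_url: str) -> Dict[str, float]:
        """Stand-in for a vision model that scores an ad thumbnail against
        each unsafe category (values in [0, 1])."""
        # A production system would run a trained classifier here; this stub
        # returns neutral scores so the sketch stays self-contained.
        return {label: 0.0 for label in UNSAFE_LABELS}

    def screen_placement(image_url: str) -> ScreeningResult:
        scores = classify_thumbnail(image_url)
        blocked = any(score >= BLOCK_THRESHOLD for score in scores.values())
        return ScreeningResult(url=image_url, scores=scores, blocked=blocked)

    if __name__ == "__main__":
        print(screen_placement("https://example.com/ad_thumbnail.jpg"))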

However, it’s important to keep teams of human moderators involved at all times throughout the process. As efficient as AI is at learning to detect misinformation, there is still no replacement for the human mind. By checking the judgments made by AI systems, human moderators add an extra layer of validation and ensure the monitoring system is robust.
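In practice, that human layer often works as a triage step: content the model scores with high confidence is handled automatically, while uncertain cases are routed to moderators. The short sketch below illustrates the pattern; the 0.90 and 0.10 thresholds and the triage helper are assumptions for illustration, not an actual product workflow.

    # Illustrative human-in-the-loop triage; thresholds are assumed, not real settings.
    from typing import Dict, List, Tuple

    AUTO_BLOCK = 0.90   # assumed: high-confidence misinformation is demonetised automatically
    AUTO_ALLOW = 0.10   # assumed: very low scores pass without review

    def triage(items: List[Tuple[str, float]]) -> Dict[str, List[str]]:
        """Split content items (id, model_score) into automatic decisions and a human queue."""
        decisions = {"blocked": [], "allowed": [], "needs_human_review": []}
        for item_id, score in items:
            if score >= AUTO_BLOCK:
                decisions["blocked"].append(item_id)
            elif score <= AUTO_ALLOW:
                decisions["allowed"].append(item_id)
            else:
                # Uncertain cases go to human moderators, who validate or overturn
                # the model's judgment and feed the outcome back into training.
                decisions["needs_human_review"].append(item_id)
        return decisions

    print(triage([("video_1", 0.95), ("video_2", 0.05), ("video_3", 0.55)]))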

Looking ahead, it’s important that brands, platforms, regulators, users and technology vendors work together to stay vigilant about misinformation. In a world where trust matters and brands can be ‘cancelled’ due to one wrong move, industry players need to pull out all the stops to stay ahead of bad actors and ensure that good prevails in the AI arms race.
