Are Advertisers the New Content Moderators?

By Noor Naseer, VP of Media Innovations + Technology, Basis

Misinformation isn’t just a problem for users anymore — it’s a branding crisis. As media platforms scale back moderation, advertisers are being forced into the role of risk manager, whether they like it or not. What were once spaces for connection and discovery have devolved into digital battlegrounds, rife with misinformation, shadow publishers, and covert data mining, leaving users exposed and advertisers entangled in ethical risk.

While users may be disillusioned by the growing presence of misinformation, the stakes are even higher for advertisers. According to media measurement firm IAS, 65% of consumers report that they are unlikely to purchase a product or service from a brand that advertises near misinformation.

For ad campaigns to be fully effective, the onus is on brands and agencies to ensure they run within clear guardrails. Inaction on misinformation is not an option for advertisers.

How can advertisers take action?

Pulse

Agencies must go beyond check-the-box brand safety processes and instead actively guide clients in redefining safety in today’s turbulent media landscape. This begins with a rigorous audit of the current publishing ecosystem, especially platforms monetizing through ad placements. Are those spaces merely under-moderated or actively propagating misinformation? Most often, the reality falls into shades of gray.

Clients may still tolerate ad placements in loosely regulated or controversial environments, whether by design or by default. It falls to decision makers to question those comfort zones thoughtfully, recalibrate brand safety guidelines, and ensure every guideline ladders up to the client’s foundational values.

The stakes couldn’t be clearer. In Q1 2024, risky political content rose 29 percent compared to Q4 2023, and overall misinformation content increased by 25 percent in social feeds measured in the U.S. More than 75 percent of U.S. consumers say they feel less favorably toward a brand whose ads appear on sites that spread misinformation.

Transparency

If a program isn’t succeeding, or if ads have run on suspect inventory, upfront reporting is key to pivoting the campaign and surfacing any additional problem areas. That requires a regular cadence of reporting, both for clients and internally. Sharing that reporting across the agency keeps everyone working from the same information and anchors the systems and workflows built around it. Brands are more likely to trust agencies that approach uncertainty with a clear framework for decision-making, even when the answers aren’t immediate. Transparent reporting and open communication help foster that trust, creating a shared understanding of both the risks and opportunities in today’s complex media environment.

Audits

The landscape of potential targets and sites changes daily, so agencies must conduct regular audits of campaign data to confirm ads are not appearing on flagged sites or sources. Regular audits let agencies act with agility, proactively cutting off paid ads on flagged inventory and pivoting to sites and channels that meet their standards.

Intelligence

AI should not be the sole arbiter of content moderation, but it must play a role. Intelligence tools powered by AI can surface patterns, flag bad actors, and help distinguish between credible publishers and unsubstantiated, high-risk environments. For agencies, the ability to make that distinction is critical. It enables them to guide clients with confidence and ensure placements align with both performance goals and brand integrity. 

People

AI-powered tools are indispensable for detecting risky inventory, but their judgments are not without limitations. Agencies and advertisers must pair automated detection with human oversight, which provides the context, judgment, and nuance that algorithms often miss. Human review remains a necessary safeguard in the ongoing effort to keep misinformation out of the media plan.

Supply

Agencies must have a clear, end-to-end understanding of how and where their media partners are distributing content. That means demanding transparency not just in performance metrics, but in how content is classified, approved, and surfaced. It’s equally important to establish shared benchmarks for what constitutes a brand-safe and contextually appropriate environment. Without that alignment, accountability is impossible.

If trust is the currency of modern advertising, agencies and advertisers now know they hold the keys to the vault. These steps aren’t necessarily difficult to implement, but they require consistency, vigilance, and contingency planning for when safeguards fail. The effort lies in keeping the process always-on, monitored, and backed by safety nets. Out of habit or misjudgment, advertisers have traditionally relied on social platforms and inventory providers to police content and protect audiences. That era needs to end. In today’s relentless environment, agencies and advertisers must take direct responsibility, using their influence and ad dollars to set new standards and rebuild consumer confidence.