By Jason Gitlin, creative director and leader of creative initiatives for RAPP San Francisco
Artificial intelligence is not above the law, but it can sometimes seem that way. Tech companies have violated industry-specific regulations and data-privacy restrictions in their rush to feed data into their algorithms — as in 2018, when Facebook reportedly sought access to users’ banking data.
Unfortunately, violations often come to light only after the fact. The problem is that, beyond stretching our notion of data privacy, these kinds of violations risk embedding biased algorithms that persist within companies. And, making matters murkier, it’s very difficult to know what’s going on behind the curtain at a powerful tech company.
Given how closely guarded and valuable a platform’s algorithmic machinations are, biased algorithms can easily go undetected. But how do these biases affect consumers?
How bias in artificial intelligence impacts the audience
Even when marketers avoid targeting audiences in a way that could be discriminatory, there’s no guarantee that a platform’s built-in optimization process isn’t favoring one group over another.
This kind of discrimination often slides under the radar because marketers are so keen on social media’s massive reach and precise targeting capabilities. Of course they are: leveraging data to make a campaign sing a personal song to each platform user is the ultimate hook, and it’s what creates highly successful, individually relevant marketing campaigns.
But targeting goes beyond geographical location and user behavior; it extends to “lookalike audiences” and “multicultural affinities,” leading to a kind of selectivity that begins to shape how the whole platform looks and operates — and quite easily begins to stray from principles of diversity, equity, and inclusion.
This problem isn’t just ideological. Take the ACLU’s study of facial recognition software: in a test conducted by the ACLU’s Massachusetts division, the official headshots of 188 professional New England athletes were run against a database of 20,000 public arrest photos. Shockingly, nearly one in six athletes was wrongly matched to a mugshot.
Incidents like the Facebook banking-data episode and facial recognition bias should push marketers and algorithm developers to build a level of proactivity into their AI-development processes, so we can catch biased algorithms before they shape our digital world and directly harm the people involved.
Thankfully, these issues haven’t gone unnoticed by lawmakers. The Algorithmic Accountability Act of 2022, a bill introduced in Congress, would require companies to assess, monitor, and report the potential negative impacts of the automated systems they use — an effort to create a culture (and legal standard) of accountability and transparency.
What can marketers do to be more proactive about AI and bias?
Marketers need to take steps today to ensure that the solutions they are recommending or offering are high-quality and not reinforcing bias or inequity. But how can they extract bias — inherent or unintentional — from their processes?
Define the principles you’re prioritizing
It’s important to decide which principles you’re working with before you start creating. Define your values and how they relate to your use case: if the technology communicates, define language principles; if it targets, define targeting principles. Only by setting these principles up front can you take real ownership of how your algorithms behave.
Acknowledge how the technology could be abused
Nothing will change if we pretend everything is fine. Marketers need to accept that no one is free of inherent biases and that combating bias is everyone’s responsibility. Spend time at the beginning of a project thinking ahead to worst-case scenarios: What could happen? Where might your blind spots be? And how could you start shedding light on them now?
Keep learning and adjusting
Once the guiding principles are set, creative leads can keep them top of mind when directing and reviewing work. In a bias-conscious environment, it’s essential that every project or solution be audited through the lens of DEI. Developers should also build in the time to rigorously test the technology, ensuring it stays aligned with those principles as it learns.
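One lightweight way to make that testing concrete is to compare outcome rates across audience groups and flag large gaps. The sketch below is a minimal, hypothetical illustration, not a complete fairness audit: the function names, group labels, and data are assumptions, and the 0.8 cutoff borrows the common “four-fifths” rule of thumb used in employment-discrimination analysis.

```python
# Hypothetical bias check: compare positive-outcome rates across groups.
# All names, labels, and data here are illustrative assumptions.
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive outcomes per group.

    records: iterable of (group, outcome) pairs, where outcome is True/False
    (e.g., whether an ad-delivery model showed a user a job ad).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest.

    The 'four-fifths' rule of thumb flags ratios below 0.8 as a sign
    the system may be favoring one group over another.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Illustrative audit: group B sees the positive outcome far less often.
records = ([("A", True)] * 40 + [("A", False)] * 60 +
           [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(records)
print(rates)                           # per-group selection rates
print(disparate_impact_ratio(rates))   # below 0.8 here, so worth a review
```

A check like this won’t prove an algorithm is fair, but running it on every release makes skewed outcomes visible early, while they’re still cheap to fix.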
Bias in artificial intelligence is a complex topic without a simple solution, but the most important thing is simply to get started: consider diversity and bias from the start of every project, assess and address potential future issues, and stay aware of what you create or recommend.
About the Author
With 20 years of customer-centric, action-driving marketing experience, Jason Gitlin leads creative initiatives for RAPP San Francisco as a Creative Director.