‘Who Will Watch the Watchmen’: The Blurred Lines of Using AI for Customer Experience Marketing

By Mark Walker, Planning Director at MRM UK

“Quis custodiet ipsos custodes?” Some 2,000 years later, the words still ring true. From governance to security, from Juvenal through Plato to Alan Moore, and now, confronted with the rise of artificial intelligence, we still need to ask: “Who will watch the watchmen?”

Admittedly, Juvenal actually wrote the line about one man’s battle with marital fidelity, but as AI’s power and potential continue to unfold, there’s no doubt it’s a relevant question to ask. Who will watch over the companies building AI and using it to peer ever further into people’s everyday lives?

This isn’t an article bashing AI; the technology undeniably offers remarkable marketing and customer experience capabilities. Netflix’s use of it to personalise its recommendations engine is a great example of a brand using AI in a helpful and engaging way that benefits customers and the business alike. A lot of these tools have been used behind the scenes for a long time – so in 2024, the average consumer seamlessly interacts with AI on a daily basis.

However, progress has undoubtedly accelerated in recent years, and that has brought some serious ethical conundrums. Some are best left to philosophers to ponder and engineers to solve, but there are nuanced concerns for marketers to consider, too.

With roughly 80% of CMOs planning to up their investment in AI and data this year, that conversation needs to happen today. What principles do brands want to stand by? Where does regulation need to come into play?

The big data question

It’s common knowledge that most AI tools are taught with reams of online data. The question is how far the likes of ChatGPT should go to faithfully replicate humanity.

Google has been furiously attacked for the “wokeness” of its own chatbot, Gemini. Some feel the company has been abusing its power by steering it to be more “politically correct” in its responses. But let’s be honest – the internet can be a horrible place. Do we really want platforms filled with racism, homophobia and misogyny to become the basis of our AI tools without any checks or moderation in place?

Brands face a similar problem when developing customer service chatbots. If the AI learns from negative customer feedback, the responses it generates can absorb that negativity and become harsh and brutal – a disaster from a customer experience point of view.

Another important ethical consideration is how data is used to personalise ads and target consumers. We know how much tracking and predictive technologies can scare consumers – who hasn’t been alarmed to see an ad on Instagram that directly relates to a conversation they’ve just had? Consumers don’t want to feel spied upon in their private lives, and they don’t want to feel like an ethical boundary has been crossed.

Things can really get out of hand when AI is allowed to gather and use data unchecked. Target once accidentally exposed a teenager’s pregnancy when the retailer analysed her buying history and sent her pregnancy-related coupons before her family knew. AI is agnostic about data; it can’t distinguish between public and private.

So, data ownership is inevitably going to become an even bigger issue than it is today. Will consumers have to be asked to volunteer their data for AI use, much like first-party cookie permissions? And will that scare them as much as cookies first did? We all want to feel that we own our data, especially when it’s highly personal.

Generative AI’s ethical quandary

Generative AI has its own ethical problems, from copyright issues to replacing human jobs. Big, profitable companies are using AI to further cut the cost of human specialists, and at a time when the cost of living crisis is still an issue for many people, that can leave a sour taste. The other side of the coin is that AI can become a tool that only supports the experienced, knowledgeable marketeer. Be it in design or development, AI can seemingly remove the need for junior members of staff. Fewer junior roles and more senior ones may sound good in principle for a business, but it limits the opportunities for the marketers of the future to enter the industry.

Every brand wants to be more effective and engaging, and generative AI certainly has a role to play. But branding is about finding that nugget of truth that explains your business’s reason to exist. It’s about making human connections. Neither can come from AI, so the more we allow it to do the heavy lifting, the more dangerous it will become.

Finding the balance

Eventually, AI developers like OpenAI will be reined in by legislation. Until then, they will continue to push the boundaries – and this, again, is a good thing; innovation drives change – but one day someone, be it regulators, consumers or brands, will tell them they’ve gone too far.

That means brands have to make their own decisions about what they are willing to do with AI. Some are already taking a stand – Bike Shed Motorcycle Company recently posted the #noAI hashtag on Instagram, promising that it “won’t cross the line, where content is entirely created by text prompts, delivered by avatars or is designed by algorithms, replacing our direct human connection with our human community.” Brands that thrive on authenticity will increasingly take up the #noAI call.

Again, AI has a lot to offer both inside and outside of marketing. It can vastly improve the pitch process, for example, and its capabilities in cancer diagnosis are phenomenal.

Ethics must not be forgotten. Regulation is coming, but in the meantime, brands need to regulate themselves, watch themselves and never betray consumers’ trust in how this tool is used. Because when no one watches the watchmen, they become the authority themselves.

Tags: AI