Navigating the Ethics of AI Visuals: Nine Considerations for Marketers

By Craig McDonogh, Head of Marketing, Imgix

Last month, Meta defended its use of Australian Facebook and Instagram posts to train its AI, arguing it needed “real” conversations to understand Australian culture. Critics highlighted the absence of consent mechanisms and the implications for user privacy in generative image and video training datasets. This is an example of a new dilemma for marketers – generative AI has made it possible to produce images and video faster than ever before. But as these tools move into marketing, design, and media, they raise new ethical questions without easy answers.

The ethics of AI span everything from environmental impact to whether these tools erode our ability to think critically. Those are big, important conversations, but this article outlines nine of the most pressing concerns for marketers, why they matter, and what’s at stake as we balance innovation, responsibility and public trust. These are conversations we need to have.

1. Consent and Privacy in Training Data

The Issue: AI models are trained on massive datasets that often include images of real people.
The Worry: Individuals’ faces, bodies or likenesses may be used without consent.
The Counterpoint: AI is generative, not derivative, so while images of real people may be used in training (and deepfakes of famous people do exist), it is extremely unlikely that images of most people will ever be replicated by AI.

Your image can be part of an AI model, even if you never agreed to it. Photos posted on social media, in public datasets, or through stock photo platforms are often scraped and used to train AI systems, and once your likeness is in the training data, it can show up in generated content without your knowledge.

Even when companies say they “anonymize” training data, it’s not always clear how effective those safeguards are or whether they’re in place at all. Without clear standards or consent processes, there’s little to prevent people’s personal images from being used in ways they’d never expected. Groups like Fairly Trained help identify vendors that prioritize creator consent and transparency in their training practices.

2. Creator Rights and IP Theft

The Issue: Artists’ and photographers’ work is used to train AI without permission.
The Worry: Original content is being exploited without compensation or credit.
The Counterpoint: While AI image generation is faster and cheaper than manual creation, this is not a new issue for artists and creators.

Artists have very specific concerns. A model trained on millions of images can easily reproduce a creator’s unique style, without giving credit or asking permission. This is especially troubling because creators have little control. Even if they see an AI-generated image that resembles their work, they often can’t trace where it came from or do anything about it. And as more tools use scraped or crowdsourced visuals, the legal and ethical rules around ownership are getting harder to pin down.

In fact, in June, Disney and Universal filed a landmark lawsuit against Midjourney, accusing it of training on and replicating copyrighted characters like Shrek and Homer Simpson. The case marks a major escalation in the battle over visual creator rights in the age of generative image AI. As an alternative, companies like vAIsual provide legally clean generative AI solutions for content creators.

3. Economic Implications for Creatives

The Issue: AI tools reduce the need for human visual creators.
The Worry: Artists, photographers, and video producers may lose work or income.
The Counterpoint: With powerful AI tools, artists and creators have access to a technology that will allow them to be more creative, and produce more content.

If you’ve ever wondered whether your job might be replaced by AI, you’re right to worry. For creatives, the pressure is already here. AI can generate polished visuals in seconds, often at a fraction of the cost of hiring an illustrator or photographer.

These tools offer speed and convenience, but they’re also shifting how visual work gets done. Teams that once relied on creative freelancers or in-house experts may now turn to AI for quick outputs, and this kind of shift can reshape the economics of the industry. In fact, a recent study projected that audiovisual creators could lose 21% of their income by 2028 due to the rapid growth of AI-generated video and animation. The report warned of a large-scale shift in revenue from human artists to generative AI platforms unless protective policies are enacted.

For many creatives, adapting to new technology is only part of the challenge. It’s just as crucial to maintain relevance and steady work in a rapidly changing market.

4. Biases and Stereotypes

The Issue: AI imagery often reflects skewed or stereotyped defaults.
The Worry: Racial, gender, age and body diversity may be underrepresented or distorted.
The Counterpoint: None. This is a real concern that should be considered when generating AI images, and addressed with appropriate prompting.

Experiment with image generators – type “CEO” and you’ll get a series of middle-aged white men. These biases aren’t random. They’re baked into the data. If a model learns from content that underrepresents certain people, it will reproduce those same gaps in its outputs.

A recent UNDP study analyzed how DALL-E 2 and Stable Diffusion represent STEM professions. When asked to visualize roles like “engineer” or “scientist,” 75-100% of AI-generated images depicted men, reinforcing biases. This contrasts with real-world data, where women make up 28-40% of STEM graduates globally.

AI-generated images are showing up more often in ads, websites, and brand content. If no one steps in, these visuals can reinforce narrow or inaccurate ideas about who fits where. That’s not just a representation issue; it’s a reputational one for brands aiming to reflect a diverse audience.
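For teams already generating visuals, the “appropriate prompting” mentioned in the counterpoint above can be as simple as stating representation explicitly instead of leaving it to the model’s defaults. The sketch below is illustrative only: generate_image is a placeholder for whatever text-to-image client your team actually uses, and the attribute lists are examples, not a complete taxonomy.

```python
import random

# Illustrative sketch only: counter stereotyped defaults by making
# representation explicit in the prompt rather than leaving it to the model.
# generate_image() is a placeholder for whatever text-to-image client you use.

GENDERS = ["woman", "man", "nonbinary person"]
AGES = ["in their 30s", "in their 40s", "in their 50s", "in their 60s"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "Hispanic", "Middle Eastern", "white"]

def diversified_prompt(role: str) -> str:
    """Expand a bare role ('CEO') into a prompt with explicit, varied attributes."""
    return (f"Professional photo of a {random.choice(ETHNICITIES)} "
            f"{random.choice(GENDERS)} {random.choice(AGES)}, working as a {role}")

def generate_image(prompt: str) -> None:
    # Placeholder: call your image-generation API of choice here.
    raise NotImplementedError

if __name__ == "__main__":
    # Review the prompts (and the resulting images) before anything ships.
    for _ in range(4):
        print(diversified_prompt("CEO"))
```

Randomizing attributes is a blunt instrument; reviewing the actual outputs against the audience you serve matters more than any single prompt pattern.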

Dove is a household brand that continues to challenge false beauty standards. Two decades ago, Dove made a stand by showing real people in its ads, and now the company has taken a stand against AI-generated images, saying “as we transition into an era where 90% of content is predicted to be AI-generated by 2025, our message still stands: keep beauty real.” With that, the company committed to never using AI to create or distort women’s images.

5. Misinformation and Deepfakes

The Issue: AI makes it easy to create misleading visuals.
The Worry: Fake images or videos can be used to manipulate opinions or harm reputations.
The Counterpoint: This is not a concern specific to AI image generation. “It shouldn’t be an indictment of the technology itself. The ethics isn’t in technology. It’s in the people.”

We’re getting better at spotting fake content, but we’re also quick to believe it. AI-generated visuals are often polished, believable, and easy to create. That combination makes them perfect for spreading misinformation. In June 2025, TIME reported that Google’s AI tools can generate hyper-realistic videos of riots, election fraud and fabricated political events. The tool’s ability to produce believable but false visual narratives raised urgent concerns about the role of AI video in disinformation campaigns.

Whether it’s a fabricated protest scene or a politician saying something they never actually said, these videos can gain traction before fact-checkers catch up. Even brands using AI ethically may find themselves accused of manipulation, just because the line between real and fake keeps getting blurrier.

Tools like Google DeepMind’s SynthID watermark AI-generated (or altered) content so it can be identified later, helping to foster transparency and trust in generative AI.

6. Safety and Harmful Content

The Issue: AI tools can generate dangerous or illegal imagery.
The Worry: These can cause real-world harm or violate laws.
The Counterpoint: Same as #5. When it comes to AI usage, AI itself isn’t harmful; the danger comes from how people decide to use it.

AI can generate truly harmful visuals, whether by accident or by design. Some tools have produced explicit, violent, or illegal images when pushed, and sometimes without being asked. These risks become more serious when the content involves real people or resembles criminal material.

Earlier this year, Google admitted to Australia’s eSafety Commissioner that, in less than a year, it had received more than 250 complaints globally that its artificial intelligence software was used to make deepfake terrorism material. The regulator’s report criticized major tech platforms for failing to implement effective safeguards against harmful and illegal visual content.

Without strong safeguards, platforms can easily be misused and once disturbing content is generated, it’s difficult to contain.

7. Lack of Trust

The Issue: The line between real and fake imagery is blurring.
The Worry: Audiences may lose faith in even authentic visual content.
The Counterpoint: None. This is an issue that brands should address by being upfront about their use of AI. Groups like the Content Authenticity Initiative (CAI) are working toward this.

It’s become all too common to see “that’s AI” in the social-media comments under a real photo. We’re all becoming more skeptical of images, even when they’re authentic.

That erosion of trust can hurt brands, journalists and creators. If people don’t believe what they see, it becomes harder to tell a compelling story or convey a fact. For anyone using visual media to communicate, trust now must be earned more actively than ever before.

For example, in last year’s election, a presidential candidate falsely claimed an image of a rival’s large campaign rally was AI-generated despite multiple sources confirming its authenticity. The incident highlighted a new dilemma: even real photos can now be dismissed as fakes, further undermining trust in visual evidence.

Support is emerging from companies like Blackbird.ai, which helps organizations address reputational harm by analyzing narratives, key influencers, network connections, bot campaigns, and related groups to identify and prioritize risks.

8. Ownership and Copyright of AI-Generated Work

The Issue: AI creations exist in a legal gray area.
The Worry: It’s unclear who, if anyone, owns the rights to AI-generated content.
The Counterpoint: This is not the fault of the technology. Rather, it is regulations that have not kept pace with change. Also not a new issue – remember the monkey selfie copyright dispute?

When an AI tool generates a visual, the debate begins over who owns it. Right now, the answer is often no one. Copyright law in many countries requires a human author, which means AI-created work may not be protected or protectable.

Earlier this year a U.S. appeals court reaffirmed that AI-generated art lacking human input cannot be copyrighted. The ruling reinforced that legal protections for AI-generated images remain murky, leaving creators and brands in a precarious position regarding rights and reuse.

The Electronic Frontier Foundation (EFF), a nonprofit defending civil liberties in the digital world, is urging policymakers to resist expanding copyright law, arguing that doing so threatens socially beneficial uses of AI, such as scientific research, without meaningfully addressing the harms. This uncertainty can complicate everything from branding and licensing to asset reuse. If a campaign includes AI-generated imagery, marketers need to know what they own and what they can legally defend or monetize.

9. Environmental Concerns around AI

The Issue: AI training and inference has environmental impact.
The Worry: Extensive use of AI will exacerbate environmental problems.
The Counterpoint: While AI consumes more water and power than other compute loads, the numbers are not actually that large.

The two main environmental concerns center around power and water consumption. An AI data center will tend to use around 10x the power of a similarly sized non-AI data center, and 20-50% more water. But data centers vary massively in cooling efficiency and infrastructure, leading to huge differences in water use even for the same model.

That being said, it’s estimated that training GPT-5 used around 1.2 million liters of water, and that each query consumes approximately 1 mL. Based on 330 million queries per day from the United States, that’s the equivalent of the annual usage of just 3 households for training, and around 800 households (a medium/large subdivision) per year for ongoing inference. On the power side of the equation, the numbers are significantly higher – GPT-5 queries use approximately 45 GWh/day – which equates to the annual energy use of around 1.5 million households, or one quarter of the energy used by Tesla (worldwide) to manufacture automobiles.
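For anyone who wants to sanity-check these comparisons, here is a rough back-of-envelope sketch in Python. The per-query, per-day and training figures are the estimates quoted above; the per-household water and electricity baselines are assumptions of mine, and the household equivalents can swing by a factor of two or three depending on which baseline you pick, so treat them as order-of-magnitude only.

```python
# Rough, order-of-magnitude check of the figures above. The per-query and
# per-day numbers are the estimates cited in the article; the per-household
# baselines are assumptions, and changing them moves the household
# equivalents considerably.

TRAINING_WATER_LITERS = 1_200_000      # ~1.2 million liters to train GPT-5 (cited estimate)
WATER_PER_QUERY_LITERS = 0.001         # ~1 mL per query (cited estimate)
QUERIES_PER_DAY = 330_000_000          # ~330 million US queries per day (cited estimate)
ENERGY_PER_DAY_GWH = 45                # ~45 GWh per day for queries (cited estimate)

# Assumed baselines (not from the article):
HOUSEHOLD_WATER_LITERS_PER_YEAR = 400_000   # roughly 300 gallons/day per US household
HOUSEHOLD_ENERGY_KWH_PER_YEAR = 10_800      # roughly average US household electricity use

annual_inference_water = WATER_PER_QUERY_LITERS * QUERIES_PER_DAY * 365
annual_energy_kwh = ENERGY_PER_DAY_GWH * 1_000_000 * 365  # 1 GWh = 1,000,000 kWh

print(f"Training water: {TRAINING_WATER_LITERS / HOUSEHOLD_WATER_LITERS_PER_YEAR:.0f} household-years")
print(f"Inference water: {annual_inference_water / 1e6:.0f} million liters/year "
      f"(~{annual_inference_water / HOUSEHOLD_WATER_LITERS_PER_YEAR:.0f} households)")
print(f"Inference energy: {annual_energy_kwh / 1e9:.1f} TWh/year "
      f"(~{annual_energy_kwh / HOUSEHOLD_ENERGY_KWH_PER_YEAR / 1e6:.1f} million households)")
```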

However, this energy and water use is improving as new technology breakthroughs lead to cooler, more efficient chips – and many of these breakthroughs are aided by (you guessed it) AI design. In fact, Google has said that the energy used by the median Gemini prompt fell 33-fold between May 2024 and May 2025.

The Way Forward

Generative AI is changing how we create and share images, but also how we think about ownership, fairness and credibility. These are practical challenges that marketers, creatives, and teams across industries are already facing.

There’s no single rulebook for using AI responsibly, but brands are making choices about transparency, representation and rights. With solutions such as vAIsual, which provides legally clean generative AI for content creators, and Blackbird.ai, which works to combat reputational harm, there is an opportunity to lead with clarity and integrity, even as the tools keep evolving.

About the Author

Craig is an accomplished marketing executive with more than 30 years of experience in all aspects of B2B software marketing. His background spans US, APAC, and European markets at companies including ServiceNow, BMC, and Perspectium. Craig is recognized as a SaaS marketing leader and has been a featured speaker at events around the world.

Tags: AI