By Jon Morra, Chief AI Officer, Zefr
Who appears first in Of Mice and Men?
Ask an AI model and it might say “George.” Provide the opening passage and it will switch to “Lennie.” The contradiction isn’t really the problem; the tone is. Even when the answer is uncertain, the model delivers it with complete confidence.
That confidence comes from us. Humans dislike doubt. We crave certainty, even when the truth is complicated. We reward people who sound sure of themselves, not those who pause to reconsider. So it’s no surprise that we trained our machines to do the same.
Large language models learn by ingesting vast numbers of tokens (small chunks of text, images, or video) and predicting which token should come next. They are then refined through reinforcement learning that rewards fluency and polish. The result is technology that speaks with authority even when its foundation is shaky.
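At decoding time, that authority is baked in: the model picks the single most likely next token and says it plainly, even when the underlying probabilities are nearly tied. A minimal sketch (the scores below are invented for illustration, not real model outputs):

```python
import math

# Toy next-token scores for the answer to "Who appears first in
# Of Mice and Men?" -- hypothetical values chosen to show a near-tie.
logits = {"George": 2.1, "Lennie": 1.9, "Candy": -1.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
answer = max(probs, key=probs.get)

# Greedy decoding emits one token with no hint of the near-tie behind it.
print(answer)                    # George
print(round(probs[answer], 2))   # about 0.54 -- barely more likely than the alternative
```

The model’s internal distribution knows the answer is a coin flip; the single-token output hides that entirely.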
When AI is helping you summarize an article or draft an email, that’s harmless. But when it’s deciding what content is “safe,” which ad should run next to which video, or which audiences to target, misplaced confidence can have real consequences.
For marketers, this is not a theoretical concern. AI already influences brand perception, ad placement, and messaging at scale. If the systems you use cannot recognize uncertainty, they can misjudge intent, context, or cultural nuance and take your brand with them.
Consider a few familiar pitfalls:
- A harmless gaming clip is flagged as violent, blocking valuable inventory.
- A sarcastic meme is classified as explicit, disrupting a campaign.
- A phrase that is benign in one market is interpreted as offensive in another.
Each decision is made confidently, and each one is wrong.
As marketers increasingly rely on automation, the challenge is not to eliminate AI’s doubt but to design for it. Build systems that show when they are uncertain. Look for partners that can quantify confidence levels or flag ambiguous results for human review. Treat transparency and explainability as brand-safety metrics, not technical nice-to-haves.
Ask the right questions before deploying AI tools:
- Can this system express uncertainty or abstain when unsure?
- Does it provide the reasoning behind its classifications?
- How is human judgment integrated into the workflow?
- Can outputs be audited and adjusted over time?
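The first of those questions, abstaining when unsure, can be sketched as a simple confidence threshold that routes low-confidence classifications to human review instead of acting on them automatically. The threshold value and labels below are illustrative, not any particular vendor’s API:

```python
# A minimal "abstain when unsure" sketch: act on high-confidence labels,
# queue everything else for a human. The 0.85 cutoff is an assumed
# example value a real system would tune against its own error costs.
REVIEW_THRESHOLD = 0.85

def route(label: str, confidence: float) -> str:
    """Return an auto-apply decision or flag the item for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{label}"
    return f"human_review:{label}"

print(route("violent", 0.97))  # auto:violent
print(route("violent", 0.55))  # human_review:violent
```

The point is not the threshold itself but that the workflow has a third outcome besides “yes” and “no”: ask a person.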
Real intelligence is not about knowing everything. It is about knowing when to pause, question, and ask for help. The same principle applies to marketing. The best results come from blending machine efficiency with human empathy and context.
We have spent years teaching AI to sound human. The next step is teaching it something far more valuable: humility.
Confidence may sound smart, but in marketing, as in life, doubt is what keeps you honest.

