And That’s the Way It Is: AI Wants to Be Your Walter Cronkite

By Colin Moffett, Co-founder and CEO, Artemis Ward, Washington, DC

In the pursuit of knowledge, humans face three fundamental challenges: accessing information, digesting it, and determining whether it’s trustworthy. In earlier times, when information was scarce, our primary challenge was access. Today, the opposite is true: we’re inundated with information and struggling to make sense of it all, assuming we even trust it. That work of making sense, weighing what something says, where it comes from, and how it’s presented, is less about verifying reliability than about assigning value.

Historically, we trusted a select group to handle the task of curation. In the twentieth century, nightly news anchors like Walter Cronkite, Peter Jennings, and Tom Brokaw didn’t just deliver information; they filtered the news of the day, ensuring that only the most significant stories made it to our screens. In doing so, they became not only our primary mediators of information but also fixtures in our day-to-day lives. While this system maintained a relatively informed and balanced flow of information, it wasn’t without drawbacks: important stories, such as the struggles of the civil rights movement, often took too long to break through these human filters.

Then came the rise of social media, which fundamentally disrupted this model. As communication tools became democratized, there was no longer a select group of editors and news anchors in control. Anyone could weigh in on anything, and everyone did, enabling a deluge of content creators to position themselves as the new arbiters of news and culture.

The shift from professional to peer was refreshing at first, painting a fuller picture of the world while reflecting our tendency to trust information coming from those closest to us. But, over time, these content creators stopped digesting information and started increasing its volume, leading to echo chambers, misinformation, and a craving for a central, reliable voice to help us discern fact from fiction.

Enter artificial intelligence, capable of processing vast amounts of information in seconds and presenting it with an alluring (albeit perceived) neutrality. From ChatGPT’s authoritative tone and assured hallucinations to Perplexity AI’s personalized news recaps and NotebookLM’s smooth-talking synthetic podcast hosts, AI is becoming our new information intermediary. Tools like NotebookLM, with their human-sounding voices and speech patterns, can feel comforting in their apparent objectivity, almost like the nightly news anchors of yesteryear. It’s easy to forget that behind these confident, human-like outputs there’s no emotional intelligence, lived experience, or broader human context guiding the presentation.

As our reliance on AI to make sense of the chaos increases, we must remember that each era of information consumption has presented its own trade-offs. When information was scarce, it was carefully filtered but hard to come by; when it became abundant, trust became harder to establish. Now, as AI helps us process the growing volume of information, we face a new challenge: preserving our uniquely human capacity for judgment.

Unlike AI, we don’t just process information; we interpret it, assign meaning, and make decisions based on emotional, cultural, and ethical contexts. In an age of algorithmic curation, the role once filled by our favorite nightly news anchors — who didn’t just relay information but helped us understand its value — must become ours. How we determine what’s truly important will shape the future of how we interact with information — and each other.

Tags: AI