By Ian Maxwell, CEO, Converge
Every time software has taken responsibility for work once done by people, it has had to earn trust. With AI agents it’s a bit like teaching them to drive: they learn through structured help and a regulated process before they finally pass their test and the car keys are handed over. It’s a process, accumulated over time, of lessons learned and responsibility conferred.
Likewise, today’s autonomous, agent-driven advertising is steering its way through the stages of building accuracy and trust. And it’s getting there. Witness CES 2026, where tech toys graduated to agentic AI and pilots evolved into real media transactions, with media institutions like NBCUniversal lifting the curtain on agent-driven buying tools.
But building trust in agentic AI has to mean expanding its availability and influence beyond the existing media and ad tech powers to the wider ecosystem, especially to the indie agencies and quality publishers who stand to gain the most from this democratisation and the tech-stack-trimming power it brings. Trust is less a matter of the technology itself and more a question of who is pulling the strings.
Trust in agents begins with how they are trained
Trust in autonomous machines is determined by how an agent’s underlying model is developed and tested. As with our learner driver above, knowing whether an agent can be trusted is much like knowing whether a student has learned a syllabus: you test them on questions they’ve never seen, under controlled conditions, then measure how correct they were. Repeat this process enough times and you get a sense of their capability.
It all starts with the raw material used to train an agent: data. Before any modelling begins, a ground truth must be established as the benchmark for what “correct” means. Training data must come from reliable sources and represent the full range of real-world situations the agent is expected to operate in. If the data is biased, incomplete, or inconsistent, no amount of modelling can fix that. As always, garbage in means garbage out.
Agent performance is validated by splitting data into two sets: practice data, used to build and improve the model; and final test data, which is kept hidden until the test starts. This latter dataset functions like a final exam the system cannot study for. During development, the model is repeatedly trained on one subset of the practice data and checked against another to see how well it performs in new contexts. These “mock exams” ensure the agent is learning generalisable patterns rather than a set of rehearsed answers.
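To make that concrete, here is a minimal sketch of the practice/final-exam split and the “mock exams” described above, written in Python with scikit-learn. The synthetic dataset and logistic regression model are illustrative stand-ins, not how any particular advertising agent is actually built.

```python
# A minimal sketch of the practice/final-exam split. The dataset and model
# are illustrative stand-ins, not a real agent's training pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic stand-in for labelled data with an agreed ground truth.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold back a final test set the model never sees during development.
X_practice, X_final, y_practice, y_final = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000)

# "Mock exams": repeatedly train on one slice of the practice data and
# check against another, to see whether the learning generalises.
mock_scores = cross_val_score(model, X_practice, y_practice, cv=5)
print(f"Mock exam accuracy: {mock_scores.mean():.3f} ± {mock_scores.std():.3f}")
```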
Only once development is complete is the model tested on the untouched data. Because this data has never been used during training or tuning, the results provide a trustworthy yardstick for whether an agent can hold up in real-world conditions. Reliability is then established by measuring and reporting its performance, and this can’t be gauged with a single metric. Depending on the task, it may hinge on how often the model is correct, how often it misses important cases, how often it raises false alarms, and how confident it is when it makes a decision.
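Sitting the final exam, in the same sketch, means fitting the model once more on the practice data and scoring it on the untouched hold-out across several measures at once. The setup lines repeat the previous snippet so this runs on its own, and the printed labels map directly onto the questions just listed.

```python
# The "final exam": score the held-out data across several measures, not
# one headline number. Setup repeats the previous sketch so this runs alone.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_practice, X_final, y_practice, y_final = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_practice, y_practice)
y_pred = model.predict(X_final)                        # decisions on unseen data
confidence = model.predict_proba(X_final).max(axis=1)  # how sure each decision is

tn, fp, fn, tp = confusion_matrix(y_final, y_pred).ravel()
print(f"How often correct (accuracy):    {accuracy_score(y_final, y_pred):.3f}")
print(f"Important cases caught (recall): {recall_score(y_final, y_pred):.3f}")
print(f"False alarm rate:                {fp / (fp + tn):.3f}")
print(f"Mean decision confidence:        {confidence.mean():.3f}")
```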
Only through repeated, reliable outcomes can agents earn broader responsibility. Just as a student’s consistent exam performance builds confidence in their abilities, agents consistently following instructions and producing the expected outputs is what ultimately makes trust possible.
Can we trust digital advertising’s current power brokers with AI agents?
Digital advertising has, unfortunately, earned a level of mistrust. On the walled garden side, a deliberately obscured supply chain has allowed Meta to get away with as much as 10% of its ad revenue coming from scams and banned goods. On the open web, the share of unattributable spend may have shrunk since the ANA and ISBA’s damning programmatic transparency audits, but the channel remains beset by tech taxes, fraud, and bots.
The complexity of the supply chain simply creates too many places for bad actors to hide and for negligence to go unnoticed. Meanwhile, dominant ad tech intermediaries and Big Tech have used this complexity to maintain their black boxes. The implication has always been, ‘You can’t look at our inner workings and, even if you could, they would be too complex for you to understand.’
This same model is being replicated in many of the “agentic” solutions appearing in product announcements that feel angled more towards exciting investors than towards delivering innovation for buyers and sellers.
Wrapping a natural language interface over existing programmatic infrastructure or deploying agents over which the client has minimal actual oversight or control doesn’t meaningfully change current dynamics. Intermediaries remain the gatekeepers, giving their customers a few extra levers to pull as a treat.
Meanwhile, I can sit down with an agency in the morning and have them running their own agents by the afternoon. These agents connect directly to seller agents on the other end, with both sides having full visibility into what their own and their counterparts’ agents are doing. It’s remarkable to watch the typically tangled and overwrought supply chain reduced to two components, which is especially valuable for independent agencies that cannot afford to stand up towering tech stacks just to make a programmatic transaction.
Agentic advertising is delightfully simple to explain, demonstrate, and control. Nothing builds trust like understanding, and the more agencies and publishers get hands-on with agents, the less tolerant they will be of black boxes and intermediaries more interested in extracting value from transactions than adding it. Ultimately, that leaves more money in the pockets of the indie agencies and publishers who have watched their margins being squeezed.
Whether standards establish trust depends on who’s behind them
There need to be standards for how agents communicate if their inputs and outputs are to be trusted, but we should have reservations about AdCP, the brainchild of ad tech intermediaries who may be seeking to entrench their existing influence rather than establish a level playing field. Far more interesting are efforts developed or endorsed by neutral industry bodies such as the IAB, but ultimately the market will decide the de facto standard.
A shared language also does not guarantee shared understanding. If two agents can communicate with one another but hold, for example, differing audience definitions, the targeting will misfire. While using agentic AI to cart off digital advertising’s dead weight is a good thing, we shouldn’t throw the baby out with the bathwater. There are existing and widely used standards for audiences, transactions, management, delivery, and measurement that we can teach agents to abide by.
What is needed is for these tried and tested standards to expand agentic functions beyond buying and selling ads into the other vital functions currently squatted on by unaccountable ad tech intermediaries. For example, advertisers need brand safety controls, but current solutions can’t be trusted not to overzealously block perfectly suitable placements, while ad verification and measurement vendors keep getting caught using their bots for purposes beyond the scope of their agreed terms.
Agents, with buyers and sellers as their masters, provide an opportunity to reshape all facets of the digital advertising market in the interests of its participants, not its gatekeepers. Agentic advertising has the potential not just to streamline transactions, but to mend the cracks that have weakened trust in digital advertising for years and lay the foundations of a more open and accessible ecosystem.

