How Brands Can Shape the Platforms That Shape Them

By Greg Shickle, Head of Growth, Making Science

Digital advertising has entered a phase in which a new breed of ecosystems is redefining where power sits. These platforms run on vast learning models that interpret creative signals, commercial inputs, and behavioural data at a scale no individual team can match. Brands now interact with these systems more directly than ever, through self-serve media tools, automated bidding frameworks, product feeds, and creative optimisation layers that sit close to the algorithm itself. Yet many are discovering that proximity does not automatically translate into influence. The systems that determine visibility and performance learn from what they are given, and when inputs lack definition or coherence, the outcomes reflect that limitation.

Large Language Models (LLMs) sit beneath much of this change, acting as the connective tissue that interprets text, images, and structured data before a result is ever surfaced. Rather than ranking isolated pages, these systems assemble answers from multiple sources, selecting assets based on how well they satisfy intent in context. This shift underpins the wider move toward conversational discovery that is now reshaping search and media environments.

The rise of conversational discovery

Search has changed more in the past two years than in the previous decade. The growth of AI-generated responses has accelerated zero-click behaviour, meaning visibility increasingly depends on whether a brand’s information is incorporated into an answer rather than where a page ranks. Models decide which product attributes matter, which visuals add context, and which sources can be trusted to support a given line of enquiry.

This is especially evident in multimodal search. Here, people may start queries with images, continue with text, and refine their questions as they move. Google Lens and Circle to Search have made visual discovery far more common, and the models respond by matching assets to intent. A single product may need a library of visuals to account for every angle or context the model encounters. Video follows the same pattern. Key details must be present early and clearly signposted so the model can isolate them quickly. Google’s recently launched Universal Commerce Protocol (UCP), which is designed to interpret multimodal queries as well as complete transactions on a user’s behalf through AI agents, is an example of how this richer understanding of user intent is gaining momentum.

These shifts move the industry toward a topic-based approach to visibility. Rather than building pages around narrow keyword intent, brands now need content that covers the wider constellation of related questions. Effectively, brands must prepare content for LLMs as well as for people. The aim is to ensure the model can trace a complete and consistent understanding of the brand within its own knowledge graph, increasing the likelihood that the brand is referenced as users explore adjacent ideas.

Learning from what brands teach the system

The influence of these models is growing across all channels. LLMs interpret product feeds, metadata, and creative assets long before a user sees anything on a results page. They form a picture of a brand’s offering and its relevance to a specific query. The quality of that picture depends entirely on the content the model can parse.
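To make the point concrete, structured product data is one of the clearest inputs a model can parse. The sketch below uses schema.org's Product vocabulary (the property names follow schema.org; the brand, product details, and URLs are invented for illustration) to show the kind of well-defined attributes, descriptive titles, and multi-angle imagery the article describes:

```python
import json

# A sketch of machine-readable product data using schema.org's
# Product vocabulary. Property names follow schema.org; the
# product, brand, and URLs below are hypothetical examples.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Lightweight Trail Running Shoe",  # descriptive, attribute-rich title
    "description": "Trail shoe with a grippy outsole and breathable "
                   "upper, designed for wet and rocky terrain.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "image": [  # multiple angles and contexts, as multimodal search rewards
        "https://example.com/img/shoe-side.jpg",
        "https://example.com/img/shoe-sole.jpg",
        "https://example.com/img/shoe-trail-context.jpg",
    ],
    "offers": {
        "@type": "Offer",
        "price": "120.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

# Emit as JSON-LD, the form in which this markup is typically embedded.
print(json.dumps(product, indent=2))
```

The specific fields a given platform weights will vary, but the principle holds: explicit, consistent attributes give a model something unambiguous to learn from, where a bare title and a single image leave it guessing.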

As a result, many brands are reassessing how content functions inside algorithmic environments. Product titles, image composition, and supporting text all carry more weight than before. AI models favour assets that answer questions directly: a lifestyle shot, a close-up, or a short tutorial can each function as an answer, and the systems reward assets that add something distinct to the conversation.

This is why integrated creative systems are becoming essential. The models draw from multiple pools of information at once, so content built in separate silos often sends mixed signals. When teams build modular assets that work across conversation types, user intents, and visual prompts, the model has clearer material to learn from.

A more deliberate relationship between brands and machines

Agencies are playing a renewed role in this environment. As platforms automate more decision-making, value increasingly comes from shaping the inputs that automation relies on. This includes defining clearer commercial intent within feeds, refining creative assets so they support both exploration and action, and aligning measurement frameworks with how AI-driven systems actually surface and evaluate information.

Agencies that understand how LLMs interpret content are able to help brands train these systems more effectively. Every asset contributes to the model’s learning process. Narrative coherence, factual accuracy, and consistency across formats all make it easier for the system to identify a brand as a dependable source. Over time, this familiarity supports a stronger presence across search, commerce, and emerging conversational interfaces without relying on short-term optimisation tactics.

Adapting to the AI-first landscape

The direction of travel is clear. Discovery is becoming conversational, multimodal, and shaped by real-time synthesis. Algorithms interpret signals continuously, and brands that organise their content with these learning processes in mind are better equipped to navigate that shift. This requires creative and operational foundations that prioritise intelligibility as much as reach. It means aligning teams around purposeful asset creation rather than sheer output. It also means approaching platforms with a clearer view of what the brand wants the system to understand.

LLMs are already influencing search, recommendations, and the informational layers that sit across major platforms. Brands that build assets with these models in mind, and supply them with consistent, well-defined signals, are more likely to appear accurately and consistently as AI-first experiences expand. The platforms themselves are responsive. They evolve based on what they are taught. Brands that approach that process deliberately will find they have greater influence over how they are represented within the systems shaping modern discovery.