Get Ready for Generative UI: Why the Future of AI Lies Behind Intelligent Interfaces

By Leon Gauhman, chief strategy officer of digital product consultancy Elsewhen. 

Two years after ChatGPT’s pioneering debut, could the exponential advancements seen in Generative AI (GenAI) be starting to slow down? Anticipation around OpenAI’s next major model release — dubbed Orion — has been tempered by reports suggesting that its performance gains might be less substantial than initially projected. More widely, the pace of foundational improvements appears to be plateauing across the board.

While new AI models tend to capture headlines and spark debate, those debates often overlook the bigger picture. It’s not about pitting one model against another — proprietary and open-source models are increasingly converging in their capabilities, after all. Instead, the future of AI lies in designing solutions atop these models that leverage the best tools for each task, efficiently and effectively.

As the original chatbot-based interface, ChatGPT achieved the largest consumer app launch in history. However, OpenAI’s success wasn’t just about accelerating AI’s entry into daily interactions; it was about abstracting the complexity of AI and large language models (LLMs) behind straightforward, intuitive interfaces — transforming the user experience (UX) along the way.

By providing AI with a user interface (UI), ChatGPT and its early competitors allowed users with no technical knowledge to interact with sophisticated AI systems through natural language or simple prompts, without needing to understand the intricate workings of the underlying models themselves.

In other words, ChatGPT’s real breakthrough — transforming AI from a specialist technology into an everyday tool — marked a revolution in UI and UX.

This revolution is far from over, and chatbots were only the starting point. Multimodal GenAI models — like Google’s Gemini — are already paving the way for a new era in human-machine interaction, where models can see and interact with their surroundings in increasingly sophisticated ways. Looking ahead, the true differentiator will lie in how enterprises build and design tools to harness this intelligence. Here’s why:

GenUI is an evolution.

Current AI interfaces are predominantly tailored for general-purpose interactions, such as casual inquiries or broad information retrieval, which constrains their utility in specialised domains or intricate workflows. As a result, users frequently invest significant time refining prompts to achieve the desired outcomes — a process that can be both time-consuming and frustrating.

As we move from chatbots to more sophisticated GenAI systems, the future will be defined by Generative UI (GenUI): interfaces that dynamically adapt to user needs, hiding complex AI behind seamless experiences.

We’ve been here before. In the early days of computing, users had to know specific commands and syntax to interact with machines. The advent of graphical user interfaces (GUIs) made computers accessible to a broader audience, allowing users to focus on their tasks rather than on remembering commands. GenUI is the next step in that evolution: human-machine interaction defined by increasingly intelligent, adaptive interfaces.

GenUI will create new UI components on demand.

GenUI represents a fundamental shift in how we create and interact with user interfaces. At its core, it leverages AI models’ ability to generate code in real time, allowing entirely new UI elements and interactions to be created on demand.

Going forward, everything from sound to gestures, text to images, becomes not just an input but a catalyst for generating tailored interface components. The AI doesn’t simply respond to these inputs; it uses them to craft bespoke UI elements by writing new code that best serves the user’s current needs.

For instance, a video editing AI assistant might offer a timeline-based interface rather than a chat window, aligning with editors’ existing mental models and workflows. Similarly, an AI-powered architectural design tool could present a 3D modelling environment, allowing architects to manipulate structures using familiar gestures and commands. For medical professionals, an AI diagnostic aid could integrate seamlessly with existing imaging software, overlaying insights directly onto scans and patient records.
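To make this concrete, here is a minimal sketch in TypeScript of how such a loop might work. Every name in it is an illustrative assumption rather than a real product API: callModel stands in for a call to an LLM (stubbed here so the sketch runs standalone), UISpec is a hypothetical schema constraining what the model may generate, and the renderer maps that spec onto plain DOM elements, so the video-editing intent above yields a timeline rather than a chat window.

```typescript
// Minimal GenUI loop (illustrative): the model returns a declarative
// component spec rather than raw HTML, and the client renders it.
// Assumes a browser page containing <div id="app"></div>.

type UISpec =
  | { kind: "timeline"; tracks: string[] }
  | { kind: "chat"; placeholder: string };

// Hypothetical model call: in practice this would send the user's
// intent, context and the UISpec schema to an LLM endpoint. It is
// stubbed with a keyword rule so the sketch runs on its own.
async function callModel(userIntent: string): Promise<UISpec> {
  if (/edit|trim|video/i.test(userIntent)) {
    return { kind: "timeline", tracks: ["video", "audio", "captions"] };
  }
  return { kind: "chat", placeholder: "How can I help?" };
}

// Map the generated spec onto DOM elements; a real app might target
// React or another component framework instead.
function render(spec: UISpec, root: HTMLElement): void {
  root.innerHTML = "";
  if (spec.kind === "timeline") {
    for (const track of spec.tracks) {
      const row = document.createElement("div");
      row.textContent = track; // one row per editable track
      root.appendChild(row);
    }
  } else {
    const input = document.createElement("input");
    input.placeholder = spec.placeholder;
    root.appendChild(input);
  }
}

async function main() {
  const spec = await callModel("edit my video: trim the intro");
  render(spec, document.getElementById("app")!);
}
main();
```

Constraining the model to a declarative spec, rather than letting it emit arbitrary HTML or JavaScript, is one way to keep generated interfaces predictable and safe to render.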

For successful GenUI, combining LLMs is key.

By combining various types of foundation models, an LLM can act as a sophisticated, intelligent UI generator. It can analyse user inputs, context and historical data to not only draw on specific AI models as needed but also render the UI in real time in response to the task at hand — transforming the interface from a static, pre-designed construct into a fluid, code-generating entity.
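As a rough illustration of that routing role, the TypeScript sketch below shows the pattern under stated assumptions: every model, task and function name is hypothetical, and the orchestrating LLM call is stubbed with keyword rules so the example runs standalone. The LLM turns a request into a plan, a specialist model executes the task, and the same plan determines which interface is generated.

```typescript
// Illustrative orchestrator: an LLM routes each request to a
// specialist model and picks the UI to generate for the result.

type Task = "transcribe" | "generate_image" | "answer";

interface Plan {
  task: Task;                          // which specialist to invoke
  ui: "waveform" | "canvas" | "chat";  // which interface to render
}

// The orchestrating LLM would classify the request and return a plan;
// stubbed here with keyword rules so the sketch is self-contained.
async function planRequest(input: string): Promise<Plan> {
  if (/audio|meeting|recording/i.test(input)) {
    return { task: "transcribe", ui: "waveform" };
  }
  if (/draw|image|logo/i.test(input)) {
    return { task: "generate_image", ui: "canvas" };
  }
  return { task: "answer", ui: "chat" };
}

// Each entry stands in for a call to a dedicated speech, image or
// text model.
const specialists: Record<Task, (input: string) => Promise<string>> = {
  transcribe: async (i) => `transcript of: ${i}`,
  generate_image: async (i) => `image for: ${i}`,
  answer: async (i) => `answer to: ${i}`,
};

async function handle(input: string): Promise<void> {
  const plan = await planRequest(input);               // LLM decides
  const result = await specialists[plan.task](input);  // specialist runs
  console.log(`render "${plan.ui}" UI with:`, result); // GenUI step
}

handle("summarise this meeting recording");
```

The point of the pattern is that no single model does everything: the LLM’s job is to understand intent, delegate, and decide what kind of interface the result deserves.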

Organisational cultures will be transformed.

Instead of rigid, uninspiring, one-size-fits-all enterprise tools, companies can deploy GenUI to create imaginative, intelligent, intuitive tools that merge seamlessly with their way of working and with individual employees’ preferred working styles, strengths and weaknesses.

This can include making workplaces more inclusive by tailoring interfaces to employees with particular needs across neurodiversity, hearing, sight, physical challenges and mental health.

Final Thought

In the short term, the increasing context windows and capabilities of larger models are undoubtedly impressive. Looking ahead, though, wrangling information from massive maelstroms of data is unlikely to be the best approach for many AI applications. Instead, smaller, more tailored models will emerge, dedicated to the task at hand and sitting behind bespoke interfaces that adapt to the unique needs of the user at the time.

Ultimately, as LLMs advance, intelligence itself is becoming a commodity. For businesses seeking competitive advantage, the key differentiator will be using GenUI to shape that commodity to solve the right problem in the right way.
