Are Voice Assistants the Next UX Paradigm for Mobile Apps?


By Eric Turkington, Vice President of Growth, RAIN Agency

When’s the last time a mobile app surprised you? Perhaps it was a jarring new home screen layout. Or a completely new icon set and color palette. Or maybe the removal of a key feature from the tab bar.

How did it feel? Probably somewhere between confusion, frustration and downright betrayal.

Our intense desire for the familiar makes it difficult, initially, to accept material changes to interfaces we’ve come to know. But as Facebook’s bold introduction of the News Feed famously proved in product lore, even changes that seem radical and abhorrent to an existing user base can be swiftly and roundly embraced, in ways that fundamentally reshape the user experience and value proposition of the product itself.

One such app UI evolution–one that carries this potential for both high risk and high reward–is fundamentally not about the visual interface of the app. Rather, it’s about broadening the app’s interface beyond touch and type, and into voice and audio.

Brand-created voice assistants – in contrast to big tech assistants – are among the most under-utilized, yet surprisingly in-demand, technologies for elevating the mobile app experience today. The user behavior of voice on mobile is already there: in the US, of the 140 million users of native voice assistants on mobile devices, 27% are already daily active users. And among consumers who have used brand-owned voice assistants inside mobile apps, roughly 69% are favorable, compared to only about 4% who are negative (Voicebot.ai study, 2022).

That’s an overwhelming margin of support, and when you consider the benefits of in-app voice assistants, it’s not hard to imagine why. The affordances of voice technology provide an elegant solution to the reality that app screen real estate is sharply limited. For apps that offer dozens of features, only so many can be made available on any given screen. In contrast, the flexibility of natural language results in first-order retrievability of information, meaning no content or feature is more than a single utterance away from being accessed.

If apps were giant office buildings, menus and tabs would be the stairs and elevators, and voice would be a teleportation device, capable of dropping in anywhere or retrieving anything. This affordance mitigates the tradeoffs of ruthless feature prioritization in the visual UI, where less-frequently-used features–which can still be vitally important at times, such as customer support and FAQs–are effectively buried. When you pair this retrievability affordance with voice’s other key benefits–being roughly three times faster than typing as an input, and enabling hands-free, eyes-up productivity–you start to see how entirely new app interaction paradigms emerge that let users do things faster and more easily. At RAIN, we’ve been seeing this play out both in existing apps to which voice is added as a functionality-augmenting layer, and in net-new apps we’re building where the voice assistant is the app’s nucleus.

But amidst this potential, there’s a puzzling reality: most major app publishers haven’t yet taken the plunge and brought voice to their apps. There are a number of probable reasons for this. The first is risk aversion, and a desire to “do no harm” to apps that either are a company’s business or support a large part of it. Good voice experiences are tricky to get right, and need to work reliably well for consumers to trust and embrace them. A second reason is likely that in-app voice assistance isn’t yet a broad-scale consumer expectation, but rather a delightful exception where it does exist.

Despite this relative nascency, a number of high-profile companies have invested significantly in these capabilities, including Bank of America, US Bank, Spotify, Pandora, Snap, Domino’s, and a handful of others. As these companies set the pace and more consumers begin to enjoy these features, more companies will introduce in-app voice controls and voice assistants, and those that waited too long will find themselves scrambling.

For mobile app publishers looking to step-change their UX without risking alienating users, here are some strategic approaches to consider:

Deliver an invisible Swiss Army knife of functionality.

For apps that provide a broad range of utilities, such as finance, eCommerce, health and fitness, voice can reduce the cognitive burden on users to remember where various functions are located, and the time it takes to get there. Navigational shortcuts (“show me pending transactions”), one-off data requests (e.g., “what’s May’s balance”), or even task execution (“remind me to transfer money Tuesday”) are the types of commands that are both more intuitive and more efficient than navigating the visual UI. While the most frequent use cases of an app may be one or two taps away, for everything else, voice can provide great utility.
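To make this concrete, here is a minimal sketch of how a recognized utterance might be routed to functionality an app already has. The intent names, the AppRouter type, and its stubbed methods are hypothetical placeholders rather than any particular voice SDK; the point is that voice commands can reuse the same navigation, data, and task logic the visual UI calls.

    import Foundation

    // Hypothetical voice-intent routing layer; intent names and stubs are illustrative.
    enum VoiceIntent {
        case showPendingTransactions              // "show me pending transactions"
        case monthlyBalance(month: String)        // "what's May's balance"
        case scheduleTransfer(day: String)        // "remind me to transfer money Tuesday"
        case unknown(utterance: String)
    }

    struct AppRouter {
        func handle(_ intent: VoiceIntent) {
            switch intent {
            case .showPendingTransactions:
                // Navigational shortcut: jump straight to a screen that may be buried in menus.
                navigate(to: "transactions/pending")
            case .monthlyBalance(let month):
                // One-off data request: answer in place, no navigation required.
                speakAndDisplay("Fetching your balance for \(month).")
            case .scheduleTransfer(let day):
                // Task execution: reuse the same business logic the visual UI calls.
                createReminder(title: "Transfer money", day: day)
            case .unknown(let utterance):
                speakAndDisplay("Sorry, I didn't catch that: \(utterance)")
            }
        }

        // Stubs standing in for the app's existing navigation, TTS/UI, and task layers.
        private func navigate(to route: String) { print("Navigating to \(route)") }
        private func speakAndDisplay(_ message: String) { print(message) }
        private func createReminder(title: String, day: String) { print("Reminder '\(title)' set for \(day)") }
    }

    // Usage: whatever speech/NLU service the assistant uses feeds the router.
    AppRouter().handle(.monthlyBalance(month: "May"))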

Let voice, not thumbs, do the heavy search lifting. 

For content-rich apps that have worked to optimize their search engines, why not elevate search even further by voice-enabling it? This is different from simply allowing microphone dictation of a text query in a narrow search bar. Much as Spotify has done in its native app, users anywhere in the app can simply say a wake word followed by a question, triggering natural language processing to extract the meaning and deliver the answer visually, auditorily, or both.
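As a rough sketch of that flow, the pipeline below wires a wake-word listener, a transcriber, and an NLU step onto an app’s existing search index. Every protocol and type name here is a hypothetical placeholder, not a reference to Spotify’s implementation or any specific speech SDK.

    import Foundation

    // Hypothetical building blocks; a real app would back these with a speech SDK and search backend.
    protocol WakeWordDetector  { func onWakeWord(_ handler: @escaping () -> Void) }
    protocol SpeechTranscriber { func transcribeNextUtterance(_ handler: @escaping (String) -> Void) }
    protocol NLUEngine         { func extractQuery(from utterance: String) -> String }
    protocol SearchIndex       { func search(_ query: String) -> [String] }

    final class VoiceSearch {
        private let wake: WakeWordDetector
        private let transcriber: SpeechTranscriber
        private let nlu: NLUEngine
        private let index: SearchIndex

        init(wake: WakeWordDetector, transcriber: SpeechTranscriber, nlu: NLUEngine, index: SearchIndex) {
            self.wake = wake
            self.transcriber = transcriber
            self.nlu = nlu
            self.index = index
        }

        // Listens on any screen, so search is never more than one utterance away.
        func start(onResults: @escaping ([String]) -> Void) {
            wake.onWakeWord { [weak self] in
                guard let self = self else { return }
                self.transcriber.transcribeNextUtterance { utterance in
                    let query = self.nlu.extractQuery(from: utterance)
                    onResults(self.index.search(query))   // render visually, read aloud, or both
                }
            }
        }
    }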

Transform your app into an always-on service portal.

Brands can use in-app voice assistants as a call center triage layer – and as a seamless escalation mechanism when issues need to be resolved by a human agent. A multi-modal (voice and/or text) agent inside an app that’s just one question away at all times is a powerful proposition, sparing customers the need to read through lengthy FAQs or seek support elsewhere via agents or chatbots living on other channels.
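A minimal sketch of that triage-then-escalate logic, assuming the assistant’s backend reports a confidence score with each answer; the types and the 0.7 threshold below are invented purely for illustration.

    import Foundation

    // Hypothetical types; the confidence score would come from the assistant's NLU/QA backend.
    struct AssistantReply {
        let text: String
        let confidence: Double   // 0.0 ... 1.0
    }

    struct SupportSession {
        private(set) var transcript: [String] = []

        mutating func respond(to question: String,
                              assistant: (String) -> AssistantReply,
                              escalateToAgent: ([String]) -> Void) {
            transcript.append("Customer: \(question)")
            let reply = assistant(question)

            if reply.confidence >= 0.7 {
                // Self-service path: the in-app assistant answers by voice, text, or both.
                transcript.append("Assistant: \(reply.text)")
            } else {
                // Escalation path: hand the full transcript to a live agent so the
                // customer never has to repeat themselves on another channel.
                escalateToAgent(transcript)
            }
        }
    }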

Improve accessibility to bring new customers in.

Brands that design more accessible solutions open their tent to serve more customers and serve them better. For customers who are less tech-savvy, older, or visually impaired, bringing the simplicity of a natural voice interface to an app environment can remove significant usability barriers. It’s no surprise that a recent Voicebot study showed that 48% of people 60 years or older were interested in adding voice capabilities to the apps they use, the highest of any age bracket.

Leverage big tech assistants as a “front door” to your app. 

Developing an in-app assistant doesn’t preclude considering the role that big tech assistants can play with your app. Through Siri Shortcuts, Google App Actions, and Alexa Skills’ Send-to-Phone functionality, app publishers can enable users to jump seamlessly from a mainstream assistant into specific app destinations to complete an action. These linkages provide hands-free ways to launch an app directly into a specific workflow, without users having to manually find and open the app.
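To illustrate one such linkage, the sketch below donates a Siri Shortcut via NSUserActivity that deep-links back into a specific screen of an iOS app. The activity type string, the userInfo key, and the routing call are illustrative assumptions, not a complete integration.

    import UIKit
    import Intents

    // Donate a shortcut when the user completes the action in-app, so Siri can suggest it later.
    // The activity type and userInfo key are hypothetical.
    func donateReorderShortcut(productID: String) {
        let activity = NSUserActivity(activityType: "com.example.myapp.reorder")
        activity.title = "Reorder my usual"
        activity.userInfo = ["productID": productID]
        activity.isEligibleForSearch = true
        activity.isEligibleForPrediction = true
        activity.suggestedInvocationPhrase = "Reorder my usual"
        activity.becomeCurrent()   // often assigned to the hosting view controller's userActivity instead
    }

    // This handler belongs on the UIApplicationDelegate: when Siri invokes the shortcut,
    // route the user straight to the matching screen rather than the app's home view.
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == "com.example.myapp.reorder",
              let productID = userActivity.userInfo?["productID"] as? String else { return false }
        print("Deep-linking to the reorder flow for product \(productID)")   // hypothetical router call
        return true
    }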

About the Author

Eric Turkington is VP of Growth at RAIN, where he leads the company’s growth strategy, new business, and industry partnerships, and advises RAIN’s clients on their voice strategies and applications.

About RAIN

RAIN is a leader in voice technology and conversational AI. RAIN builds voice-first software products and supports the world’s leading brands on voice strategy and experience development. Backed by Stanley Ventures, RAIN is commercializing a proprietary, productivity-focused voice assistant and continues its 8-year track record of partnering with industry-leading brands to develop bespoke voice strategies, experiences and products across B2C, B2B and employee-facing contexts.
