How New Developments in Applied AI Can Be Used More Widely in Business

A computer-rendered image of two fingers touching: one human, one virtual

By Lucas Galan, Head of Data Science Product, CODE Worldwide

AI times are a-changing…

This video from 1992 is an advert for the then cutting edge of tech: Microsoft Excel. Today, Excel is ubiquitous and used daily across the globe. One day we’ll feel the same way about AI.

AI is entering a new era. Its growing accessibility, capabilities and versatility are integrating it seamlessly into everyday business tasks.

AI has advanced to a stage where it is no longer a specialised field but is capable of enhancing almost every component of modern business. This shift is primarily driven by three developments: foundational, explainable and combinable AI.

No longer from scratch: Foundational models for all

In the mid-2010s, building AI meant defining a task and then training a neural network on a hyper-specific dataset. The resulting model would be effective at tackling that task but would struggle with anything beyond the most textbook examples. While impressive, it was only usable within the narrow parameters of that specific task. These systems were costly (requiring many hours to create the training set) and prone to becoming obsolete if the base conditions changed.

Foundational models represent a step-change in how AI is trained and the kinds of tasks it can undertake. By training on larger, more generalised datasets, models become adaptable to a wide range of tasks and don’t need to start from scratch. Foundational models act as a cornerstone for AI, providing an out-of-the-box experience for just about any task. Need to understand large volumes of language, or the context and content of images at scale? No problem. These models work by training a single huge system on large amounts of general data, then adapting it to new problems.

Due to their size, these models develop in novel and unexpected ways, becoming far more adept, often to the surprise of their creators. This is unlike traditional AI models, which tended to require fresh training data for each new problem.

The term ‘foundation models’, coined by Stanford’s Percy Liang, underlines the central importance of these systems within a whole ecosystem of software applications. Liang argues that such dominant foundations could be dangerous and introduce deep flaws into AI development, but the name has stuck for these colossally influential models. Regardless of Liang’s argument, it is hard to deny how quickly and effectively these models can be leveraged.

More and more of these models are becoming available, from GPT-3 and BERT, which can tackle most language tasks, to CLIP, which connects language and images. A recent example of a novel foundational AI is the viral DALL-E Mini, which generates images from text prompts without revealing what is going on behind the scenes.
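To make that concrete, here is a minimal sketch of what ‘no training from scratch’ looks like in practice, using the open-source Hugging Face transformers library. The support-ticket text and candidate labels are invented for illustration; this is not the stack behind any specific model named above.

```python
# A minimal sketch: using a pretrained foundation model "out of the box"
# via Hugging Face transformers. No task-specific training is performed.
from transformers import pipeline

# Downloads a general-purpose pretrained model the first time it runs.
classifier = pipeline("zero-shot-classification")

# Hypothetical customer ticket and business categories (illustrative only).
ticket = "My invoice shows the wrong billing address for last month."
labels = ["billing", "technical support", "sales enquiry"]

# The model scores each candidate label despite never being trained on tickets.
result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))
```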

Cracking open the black box

Foundational AI reminds us of Arthur C. Clarke’s adage, “Any sufficiently advanced technology is indistinguishable from magic.”

But just because certain kinds of AI have become ubiquitous, that doesn’t make them easier to understand, and this can lead to controversial outcomes. When AI makes a mistake, it can be very difficult to explain what has happened or why the network made a particular decision. AI is limited by its relative opacity; the black-box approach to neural networks has left it open to error, bias and over-fitting, and has created distrust.

The Twitter image-cropping algorithm controversy of 2021 found that the platform ‘preferred slimmer, younger, light-skinned faces’, an issue which quickly went viral and caused a lot of heat for Twitter bods.

Increasingly, we are getting better at understanding, documenting and explaining just how these machines make choices. As explainable AI develops, these mishaps become less likely to happen and can be rapidly fixed if they do. Explainable AI can shed light on why your AI is making its decisions and what underlying elements it is considering.
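As a small illustration, the sketch below uses permutation importance from scikit-learn, one of many explainability techniques, on a made-up dataset to show which input features actually drive a model’s decisions; the feature names and data are hypothetical.

```python
# A sketch of one explainability technique: permutation importance, which
# measures how much each input feature drives a model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # hypothetical features: income, age, postcode
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome mostly driven by income

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much model accuracy drops.
scores = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "age", "postcode"], scores.importances_mean):
    print(f"{name}: {score:.3f}")
```

A near-zero score for a feature like postcode is evidence it is not silently driving decisions; a high score would be a prompt to investigate for bias.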

Take, for instance, a driverless car. If it crashed and the cause could not be explained, that mysteriousness would foster mistrust and fear. Even though human error is statistically more likely to cause an accident, the lack of accountability would make AI a tough sell. Explainable AI bridges that gap, developing our relationship with AI and demystifying the seemingly ‘random’ things it does.

Think of AI in combinations, not singularities

Modern systems combine AIs to optimise output: using multiple networks creates a result that is greater than the sum of its parts. This has big implications for businesses. The development of foundational and explainable models has had a demonstrable impact on AI’s commercial application. Viewed as components in a wider system, there are very few business tasks against which AI cannot be leveraged.
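As a hedged sketch of what combining models can look like, the snippet below chains two off-the-shelf pretrained models from Hugging Face transformers so that the output of one feeds the next; the call-centre notes are invented for illustration.

```python
# A sketch of combining AIs: one network condenses long text, a second
# classifies its tone. Both are pretrained models used off the shelf.
from transformers import pipeline

summariser = pipeline("summarization")        # network 1: condense long text
sentiment = pipeline("sentiment-analysis")    # network 2: classify tone

# Hypothetical call-centre notes, purely for illustration.
call_notes = (
    "The customer phoned three times about a delayed order. "
    "The courier lost the parcel, a replacement was sent, and the customer "
    "said the support agent was patient and kept them informed throughout."
)

# Network 1's output becomes network 2's input.
summary = summariser(call_notes, max_length=40, min_length=10)[0]["summary_text"]
tone = sentiment(summary)[0]

print(summary)
print(tone["label"], round(tone["score"], 2))
```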

Simultaneously, advances in our ability to understand AI mean we know when (and when not!) to apply it.

Additionally, many of these models are extremely cost-effective, requiring only staff with the know-how and machines that can be hired from cloud providers.

There are very few existing reasons not to be utilising AI – and the list grows smaller by the day.

A note on human obsolescence

Much has been made of the ‘dangers’ of new technologies and how they will render humans obsolete by automating jobs. AI is a catch-all term for powerful technology which will require physical and ethical safeguarding. Like all technology, it has a dark side.

But AI driving human obsolescence is an exaggerated premise. Just as Microsoft Excel did 30 years ago, these tools make certain tasks easier and remove the need for others altogether. However, the need for human agency is here to stay.

Ultimately, there is an opportunity for this technology to enhance and empower people in their day-to-day work and lives. The advancement of AI puts tools at our disposal, not machines competing for our jobs; AI is more of a companion than a rival.
