By Ji Hye Chang, Associate Data Analytics Director at RAPP UK
The concept of digital reincarnation is not a new one in the world of AI. Over the past few years, we have seen Steve Jobs posthumously appear on a podcast with Joe Rogan, Kanye West gift Kim Kardashian a hologram of her late father, and Carrie Fisher digitally recreated to play Princess Leia in recent Star Wars films, to name but a few examples.
Deepfake technology remains mired in controversy. From altered political speeches that fuel fake news to non-consensual deepfakes of celebrities used in pornography, many have questioned the ethical implications of how the technology is used.
Most would agree that these developments do need to be challenged and that there needs to be an ongoing conversation around the ethical use of deepfakes. But these debates aren’t dissimilar to those that occur whenever a new technology enters the mass market – photography and film are prime examples of advancements which were lambasted in their early forms because they had the potential to be used maliciously. As AI tech – and public awareness of it – evolves, so must investment in developing adequate safeguards and regulations.
One thing is clear. When discussing using AI to reimagine the dead, ethical guidelines and an airtight consent process need to be established before the tech hits critical mass. It might sound obvious, but the people building these tools must make it clear from the outset that deepfakes are not real, in order to flush out bad actors and murky ethics in the space. Transparency is key.
Alive after death
There are many reasons why people would want to ‘bring someone back’, even in a transient way. Consider parents who want their children to meet their grandparents, entertainers reprising their greatest hits for fans, or grieving families who want to turn to a loved one for comfort during difficult times. There are swathes of other sectors, outside of entertainment or domestic settings, which could benefit from this technology, too. It could have a significant impact on the work of historians, educators, social care practitioners and mental health providers, to name but a few.
Using artificial intelligence to ‘resurrect’ the dead could provide a game-changing tool which allows people to process grief, helps the sick and provides reassurance to many. Deepfakes of the deceased could be programmed with their past dialogues – text, voice and video – to enable loved ones to feel as though they are having a two-way conversation.
These reconstructions are built using large language models (LLMs), which interpret prompts and respond with text. Google’s LaMDA and OpenAI’s GPT-3 are two of the most widely used, principally because they can be customised to mimic an individual’s unique conversational patterns.
When paired with voice cloning tools, a person’s speech could be deployed in the form of voice notes, videos or even a face-to-face interaction, and feature the individual’s quirks, inflections, or most-used phrases.
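As an illustrative sketch of how an LLM might be conditioned on someone’s past dialogues, one common technique is few-shot prompting: archived exchanges are prepended to a new message so the model completes the reply in the person’s own voice. Everything here is hypothetical – the function name, the sample messages, and the person’s name are invented for illustration, and in practice the assembled prompt would be sent to a text-completion model such as GPT-3 (or, with enough archived material, the model would be fine-tuned instead).

```python
def build_style_prompt(archived_messages, new_question, person_name="Alex"):
    """Assemble a few-shot prompt from past (question, reply) pairs so a
    text-completion model continues in the person's conversational style."""
    lines = [
        f"The following are real exchanges with {person_name}. "
        f"Reply to the final message exactly as {person_name} would."
    ]
    for question, reply in archived_messages:
        lines.append(f"Friend: {question}")
        lines.append(f"{person_name}: {reply}")
    lines.append(f"Friend: {new_question}")
    lines.append(f"{person_name}:")  # the model completes from here
    return "\n".join(lines)

# Hypothetical archive of a person's past text messages
archive = [
    ("How was your weekend?", "Grand, love. Pottered about the garden."),
    ("Any plans tonight?", "A cuppa and the crossword, same as ever."),
]
prompt = build_style_prompt(archive, "What should I cook for dinner?")
```

The idea is simply that the model, having seen a handful of genuine exchanges, continues the pattern – quirks, phrasing and all – rather than answering in a generic voice.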
In the end
AI technology has permeated our daily lives and will continue to do so and, with the right protections and limitations in place, it can provide numerous benefits to those who need it most. And while there might be instances where the technology is misused, we need to keep working at pace – actively having conversations about ethics and best practice whilst maintaining progress. Some academics are already putting their best foot forward on this front, developing tools which identify deepfakes before they can do any damage, ultimately preventing them from being misused.
We have a long-held tradition of using photos, letters, and videos to keep the memories of deceased loved ones alive. AI tech is just the evolution of that – making these precious moments interactive and more tangible.