Elon X Rishi. An Interesting, yet Disappointing Discussion on AI

By Alex Hamilton, isobel

“AI will bring a future of abundance, positive change and could even end working as we know it” — these are the latest words from Elon Musk on the fastest technological revolution the world has ever seen.

But alongside this positivity, there’s a real chance of something dangerous being created. AI is a gateway to some serious doom and gloom — on a scale not imagined since the nuclear arms race began.

In response, the UK government has been hosting a conference on AI safety — where Elon and Rishi took centre stage. Here are some of the highlights and a few points that stood out more than others.

The Magic Genie Problem.

To understand what’s happening here with a simple analogy, Elon refers to the Magic Genie Problem. If there’s such a thing as a magic genie that can make any wish you ask come true, ultimately you have to be careful what you wish for — and sometimes you might think you’re wishing for something good that turns… well… very, very bad.

We don’t fully understand the societal effects of a world where people no longer have to work. Sure, we’d have an abundance of wealth, because the cost of labour would be practically zero and the products of labour would be overflowing. People would likely end up on UBI (universal basic income) and have to rebuild their lives around hedonism or artistic expression. But who knows whether that world would be better than this one? Might we see an influx of mental health issues as almost everyone on the planet goes through a simultaneous identity crisis? If that happens, at least in this imagined future of abundance everyone could have a Ferrari.

Bad Actors.

These could be states, terrorists, or other rogue outfits. Basically, anyone with the intention to use AI for harm. Before the machines rise up like they do in James Cameron’s Terminator, we’re likely to see a world where cybercrime becomes unrestrained.

From phishing scams to mass manipulation through social media and misinformation, we have only seen the tip of the iceberg of these threats.

What’s more worrying is that in an open-source environment, AI tools are becoming more and more readily available on the Internet. How long before a downloadable tool can write a computer program to hack a bank and steal millions?

But these threats didn’t come up as much as they should have. Instead, Elon fantasised about robots chasing you upstairs. Yeah. Sure. That’s pretty scary. But I wouldn’t argue it’s the immediate threat.

The China Situation.

We have to make sure the nations of the world are aligned on AI safety. There’s a real push to slow things down and introduce regulations, licences and government ‘refereeing’ of new models. But the argument against this is that if Europe and the US decide to slow down AI advancement and China and Russia don’t, it puts the West at an enormous disadvantage, one which frankly could be unrecoverable.

Elon and Rishi agreed that China needs to be at the table — even in light of the recent Chinese spying allegations. Of course, it’s the sensible thing to do. It’s the perfect politician’s response. Let’s all just work together.

But I firmly believe that even if the US and the UK say they are slowing AI development, they will continue to accelerate it behind closed doors. Like it or not, this is another space race, and this time the stakes couldn’t be higher.

How should governments regulate?

This is a major problem. The AI models being created are a whole new breed of programming, a type where you cannot reliably predict the outputs from the inputs. So really, there’s no way of knowing what you’ve created and what it’s capable of until it’s too late.

This is going to make regulating AI extremely difficult. Musk and Rishi spoke about governments acting as referees, but that sounds like just ‘talk’, and the more open source these models become, the harder it is to put the cat back in the bag. Let’s just hope that cat isn’t called Pandora.

Open-source models.

Talking of open source, what is it? It’s when code is made freely available online for anyone to work with and update. It means anyone can access the basic building blocks of powerful AI models and adapt them to their own needs. Going back to the genie analogy, it’s essentially like giving everyone the blueprint for a magic lamp, or, you might say, a 3D printer and the blueprint for a gun.

Now, AI experts claim that in order to run something as complex as ChatGPT, you need serious computational power and server space. This is not something the nerdy basement hacker has to hand. But as the tech improves, it becomes easier and easier to do more with less.
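To make “more with less” concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 model (my choice of library and model for illustration, not something named at the conference), showing roughly how little code it takes to run an open model on an ordinary laptop:

# A minimal sketch: generating text with a small open model locally.
# Assumes `pip install transformers torch`; the model choice (gpt2) is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # downloads the open weights
result = generator("The future of AI safety is", max_new_tokens=40)
print(result[0]["generated_text"])

Anything ChatGPT-sized still needs far more hardware, but the point stands: the barrier to entry keeps dropping.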

A side effect on social media.

Probably the most interesting thing to come out of Elon’s mouth during the hour-long talk was when he mentioned that all social media will inevitably be paid for.

This is off the back of X recently adding a paid tier to the platform.

Now hold on. I know what you’re thinking. We’ve heard this before and it doesn’t work. But there’s logic to his madness. He claims that AI models will become so abundant and powerful that they will be able to fake human accounts far better than current bots can. ‘Bad Actors’ will be able to create millions of bots in a click, and those bots will flood social networks, creating a constant problem of authenticating real people, spreading disinformation, or even affecting local elections.

In a simple case, imagine a planning application getting rejected because local residents create a storm on social media. But really it was one person, using an AI bot creator to generate thousands of realistic complaints.

The only solution to this at the moment is to increase the cost of bot creation. If every account costs money to create and run, then suddenly the maths doesn’t work in the ‘Bad Actor’s’ favour. Creating a million bots at a dollar a bot is a very expensive way to kill a planning application.
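The arithmetic is simple enough to sketch; the prices below are purely illustrative assumptions, not figures from the talk:

# Back-of-the-envelope bot economics; all numbers are illustrative assumptions.
bots_needed = 1_000_000        # accounts needed to fake a grassroots storm
cost_per_free_account = 0.001  # rough compute cost per bot when accounts are free
cost_per_paid_account = 1.00   # cost per bot once the platform charges per account

print(f"Free accounts: ${bots_needed * cost_per_free_account:,.0f}")   # $1,000
print(f"Paid accounts: ${bots_needed * cost_per_paid_account:,.0f}")   # $1,000,000

Whatever the exact numbers, a per-account fee turns a cheap automated stunt into a seven-figure campaign.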

The real questions.

There are no answers. Most of the conversation was about robots, sci-fi dreams of dystopian futures, or government trumpet blowing. Anyone clued up on AI may have watched the conference and cringed a little. I did.

Because the big questions aren’t being asked. And the answers to the questions we do have are kind of wishy-washy.

The truth is that AI is here to stay. It cannot be regulated as other things can. It is an amazing, game changing tool that will transform the lives of every human being on the planet. But it is also a dangerous weapon.

If you believe that AI cannot be stopped, and you buy into the Pandora’s box analogy, then you should really advocate for more AI. Because ultimately the only thing that will beat bad AI is better AI. AI trained to identify fake humans. AI trained to look for fake imagery and malicious code.

It sounds ridiculous. But if AI is going to be a problem, then more AI is the solution.

Tags: AI