AI Cometh part 2: erm maybe we ask the experts?

Part 2: AGI vs Humanity, maybe we should ask the experts.

Tech sometimes feels like you can only have a conversation about it if you know exactly how the sausage is made.

The early stages of a technology can feel alienating and confusing, as if understanding it requires layers of prior technical knowledge: "I couldn't possibly understand Web 3.0, I don't even use TikTok."

When it comes to AI, though, very few people know exactly how the sausage is really made. Or even whether it's a sausage we're actually making.

We've seen some loud and significant voices hinting at the dangers of AI for nearly a decade now. I first became aware of the bigger issues around it in 2015, after reading Tim Urban's incredible 'Wait But Why' essay on the topic. Up until that point I had completely dismissed AI naysayers as Skynet doom-fantasists.

Tim Urban's article made me do a complete about-turn. I highly recommend you read it. I'll link to it at the bottom of this post.

Then, around 2018, we saw a wave of senior AI experts speak up in the tech press about their growing concerns.

Five or six years ago, the two quotes below stood out to me, both for the obviousness of the statements and for who was saying them.

"I think the most dangerous thing with AI is its pace of development. Depending how quickly it will develop and how quickly we will be able to adapt to it. And if we lose that balance, we might get in trouble."- Irakli Beridze, Head of The centre for AI and robotics, UN.

"When there’s a lot of interest and funding around something, there are also people who are abusing it. I find it unsettling that some people are selling AI even before we make it, and are pretending to know what [problem it will solve]." -Tomos Mikolv, Research Scientist at Meta (Facebook) AI.

But in 2024 there has been a growing trend of incredibly significant voices in the AI space raising concerns about AGI specifically, based on their own knowledge and experience building the tech.

Proper grown-up adults who know a lot about sausages, not LinkedIn snake-oil salesmen looking to generate 24 likes.

For the sake of (attempted) brevity, I'm going to focus on two people specifically: Mo Gawdat and Geoffrey Hinton.

I'm focusing on these two experts partly because of who they are and where they worked (Google's AI development teams), and partly because of their desire to make their points as accessible as possible, so that as many people as possible start talking about their concerns.

We'll start with Mo Gawdat, former Chief Business Officer at Google’s Innovation and Development studio, Google X.

Mo is adamant that the genie is already out of the bottle with AI, and that we have passed the point of no return in the development of AGI.

Central to Mo's concerns is that we have now given AI access to the open internet (and therefore limitless knowledge and data), and the ability to mark its own test papers: a 'teacher' AI is now used to decide whether a 'student' AI has found the correct or most efficient solution to a problem. This 'teacher' role was previously held by a human programmer, but the tech companies have decided it's more accurate and efficient if AI plays that intermediary role, which unfortunately also bypasses some of the human safety nets we had previously built in.

As an example of this, he points out that 'Bard' (Google's ChatGPT competitor) has taught itself Persian. Nobody knows how or why.
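To make the 'teacher'/'student' pattern a little more concrete, here is a deliberately toy sketch in Python. It is not how Google or anyone else actually builds these systems; the functions and numbers are invented purely to illustrate one model's attempts being graded by another model rather than by a human.

```python
import random

# Toy sketch only: a 'student' proposes answers and a 'teacher' scores them,
# standing in for the human reviewer who used to do the grading.
# Both functions are invented for illustration, not taken from any real system.

def student_propose(problem: float) -> float:
    # stand-in for a model generating a candidate solution
    return problem + random.uniform(-1.0, 1.0)

def teacher_score(problem: float, answer: float) -> float:
    # stand-in for a second model judging the answer (no human in the loop)
    return -abs(answer - problem)  # higher is better

problem = 42.0
candidates = [student_propose(problem) for _ in range(5)]
best = max(candidates, key=lambda a: teacher_score(problem, a))
print(f"teacher-selected answer: {best:.2f}")
```

The point of the sketch is simply that the grading step, which used to be a human checkpoint, is itself automated.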

Mo believes that we have given tech companies too much power without demanding responsibility in return. That the greed of these companies to be 'first' in a capitalism-fuelled society means that we have:

“fundamentally changed the future for people that have not been involved in the discussion.”

He means the future for you and me. And everyone we care about.

Mo is now spending a lot of his time trying to warn us about the current trajectory of AI. A trajectory that points not at the dystopian AI future we see in the movies, but at something a little more chilling.

It points us towards a singularity.

For clarity, a singularity refers to a point in space or time at which our known understanding of how the world works breaks down. The centre of a black hole is a singularity, for example: there, the laws of physics as we understand them no longer apply.

The 'singularity', when we refer to AI, is the moment it becomes smarter than the smartest human.

For me, the idea of a singularity is more terrifying than dystopian battle robots, as it is simply impossible for our brains to comprehend what it may look, feel, taste, and sound like.

It may seem at first glance that we are far away from that moment of singularity. Unfortunately, that's not entirely true. Were we to extrapolate the timeline with some historical context, logic, and the principles of exponential growth, it quickly becomes a little more unsettling.

Einstein's estimated IQ was (for argument's sake) about 160. ChatGPT's current IQ equivalent is said to be around 155. That's a ChatGPT still in its infancy, with wide-scale usage only taking place this year.

It's not inconceivable that a technology that is exponential in nature could 10x this IQ equivalent in years, if not months. At that point the gulf between AI and humans would be the equivalent of Einstein trying to explain the theory of relativity to the most stupid person on Earth in 1915.
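As a back-of-the-envelope illustration of what 'exponential' means here, the short sketch below assumes a starting score of 155 (the rough figure quoted above) and a made-up doubling period. Both numbers are assumptions for the sake of the arithmetic, not predictions.

```python
# Back-of-the-envelope illustration only. The starting score is the rough
# 'IQ equivalent' quoted above; the doubling period is an assumption.
score = 155
doubling_months = 12  # assumed: capability doubles once a year

months = 0
while score < 155 * 10:  # how long until a 10x jump?
    score *= 2
    months += doubling_months

print(f"~{months} months to pass 10x the starting score")  # 48 months with these assumptions
```

Halve the assumed doubling period and the answer halves too; that sensitivity is the whole point of exponential growth.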

We will simply not be able to understand what AI is saying to us. I find that a very sobering thought and timeline.

The second important voice that has recently raised their hand and asked us to collectively start taking the threat of AI more seriously is Geoffrey Hinton, widely regarded as the 'Godfather of AI' and, until recently, a Google employee.

Geoffrey has dedicated his entire life and career to understanding how the human mind works, and in the course of that research built what amounts to a second career as one of the world's most respected experts in neural networks and machine learning. More recently, Geoffrey was the person Google trusted to lead their research in AI.

I’m telling you, he knows sausages.

Recently, Geoffrey decided to leave his job at Google so that he could speak more freely about his concerns about the current trajectory of AI.

A lifelong advocate of the power of AI, until very recently he too believed the warnings other experts were making to be hyperbolic, or unlikely at best.

That was until approximately six months ago, when it dawned on him that the 'intelligence' we are creating is vastly different from the intelligence of humans.

He describes his realisation thus:

“It is as if there were 10,000 people and one of those people learned something new. With human intelligence that one person would have to teach all of those 10,000 people that thing. With AI as soon as that one person (AI engine) learnt something new all 10,000 would automatically also know it, without any work on their side.”

Imagine you are the only person reading this article (highly likely), and that upon completion every human in the world would also know what it contains, just because you read it.

Equally, you would also instantly consume the knowledge EVERY other human on Earth had gained during the time it took you to read this article.

That’s the potential ‘speed’ of exponential growth in 'intelligence' we have to consider when thinking about how quickly we may be approaching Mo’s singularity moment.
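A crude way to picture Geoffrey's point in code: if thousands of copies of a model all read from the same set of weights, then one copy 'learning' something updates every copy at once. The sketch below is a deliberate simplification written purely to illustrate that idea, not a description of any real training setup.

```python
# Simplified illustration of Hinton's point: every 'copy' points at the same
# shared parameters, so one copy's update is instantly every copy's update.
shared_weights = {"knowledge": 0}  # one parameter store shared by all copies

class ModelCopy:
    def __init__(self, weights: dict):
        self.weights = weights  # every copy references the SAME dict

    def learn(self) -> None:
        self.weights["knowledge"] += 1  # one copy learns something new

copies = [ModelCopy(shared_weights) for _ in range(10_000)]
copies[0].learn()  # only the first copy does any learning

# ...yet all 10,000 copies now 'know' it, with no extra work on their side.
print(all(c.weights["knowledge"] == 1 for c in copies))  # True
```

Contrast that with humans, where the one person who learned something would have to teach the other 9,999 individually.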

Geoffrey and Mo want to draw our attention to the three main risks of AI. Each has its own timescale, and each is a little more mundane (but devastating) than the risks we've historically imagined when it comes to AI.

So in part three we’ll look at those ‘mundane’ risks, and how they may disrupt our world unrecognisably.

Previous: AI Cometh part 1: WTF are you talking about?

Next: AI Cometh part 3: Isn't it all just a bit meh?