AI Cometh Part 4: Actually, are humans just awful?
Part 4: AI isn't the problem; we are.
I write this hyper-aware that it sounds pretty pessimistic, and perhaps even anti-innovation or anti-progress. I'm not really either. I do know humans, though.
I’m equally aware that, as someone who has benefited massively in my career from taking a gamble on new tech and being one of the first to lean into new things professionally, I may now sound like old man Abe Simpson shouting at a cloud.
Here are some things I remind myself of when I start to feel like that. Things we all know to be true from recent history:
Bad things will always happen. They always have, for as long as there has been life on this planet.
In 2023/24 (and arguably at any point in history), countries and governments do not have the same interests at heart. What the US wants and what China wants are not mutually beneficial. Putin’s vision for the future is likely not one aligned with your own, or your local Labour councillor’s.
Money wins. AI will happen; we will not stop its progress now. It is too valuable to those who can monetise it first.
We will abuse tech for money/short-term gain. We always do.
At its current rate of improvement, AI will likely be smarter than the smartest human by 2025/26 (remember, that is the singularity moment).
Real human connection (especially face to face) is universally central to what makes us feel safe, happy, and valued. AI will not replace human connection, but will make it either a) more difficult to achieve or b) easier to opt out of.
Humans are guaranteed to prioritise short-term concerns over solving for longer-term risks.
Until now, with perhaps the exception of social media, the damage these human truths could realistically do when met with tech disruption was never enough to fundamentally change the human experience, so it wasn’t really a valid concern.
Really, Tom from MySpace just wanted to teach us all HTML.
However, AI is different (even from social media) in that its development at scale has been driven purely by the desire for financial and political power, or one-upmanship.
Google’s CEO Sundar Pichai has admitted that he is not comfortable with AI, but that Google simply cannot afford not to be working on it.
I’ve also read some people compare AI to the development of the nuclear bomb. A millennial Oppenheimer moment. And although there are parallels, there are also crucial differences, chiefly that the bleeding edge of AI development is being led by commercial entities with a proud (and proven) ‘Fail Fast’ approach to innovation, not by governments with some sense of moral responsibility.
Or, in the words of Valérie Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute:
“The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later.’”
The Millennium Bug is a great example of why that fail-fast approach isn't always the best, because when it doesn't work the results can be catastrophic.
The Millennium Bug is remembered almost with a smirk now. Planes didn't fall out of the sky, banks didn't lose all our accounts (until 2008), and the power grid didn't shut down.
None of those things happened, however, not because the risk wasn't valid.
They didn’t happen because the problem was identified, discussed in the public domain, and met with a successful, internationally coordinated effort to solve it. Things weren't allowed to break first with a fix developed later, yet that is exactly what we’re doing with the risks now being identified around AI.
To take a similar approach to AI in 2024, we would need to convince every government, business, and criminal digital enterprise across the world to pause, reflect, and collaboratively agree to build only AI that learns, not acts, until we know more about its potential for harm, or good.
Y2K was 25 years ago, in a society much less fragmented than ours is today. Unfortunately, that level of global collaboration and regulation seems far less likely in 2024 than it did in 1999.
Cool. So far, so bleak (maybe). But what does this mean for me and my daily coffee trip? Is it really the end of the world, or my job, as we know it?
In part 5 (the final instalment, for now) of this series, we’ll ask ourselves whether this is maybe all just a touch hyperbolic.