AI Cometh part 5: End of Happiness or Exaggerated Hyperbole?
Part 5: It could all be hyperbole. But what if it’s not?
Technological momentum is nothing new.
For several decades, 'tech' has changed our lives in some fairly large and fundamental ways.
Rapid advances in tech have historically unsettled people. The unknown potential impact can reverberate alarmingly through society and business.
Why is AI any different?
Maybe the experts speaking out now are playing the game of disaster hype for engagement and public speaking opportunities. Someone suggests something, it gets amplified, it finds an audience, podcast invites ensue.
Equally, maybe AI is actually ‘stupid’. Maybe it will always just be narrow AI, only capable of solving a specific task predetermined by the humans who create it. Chess-playing computers in the '90s were supposed to be the end of chess. In reality, they just changed how chess was played and opened up new strategies for human players.
But, even if that is the case and AI is stupid or the doom mongers are just spouting hyperbole, that still doesn't make me feel any better about where we’re at.
The laser focus on AI development as the next frontier still has massive social impacts, namely:
How, what, and when we choose to fund in terms of infrastructure and societal improvements. “Oh no, we don’t even need better active travel infrastructure, the cars will just drive themselves.”
The massive environmental impact of the server power (i.e. old-school, resource-hungry energy) needed to power our AI and web 3.0 futures (a topic for another article another day, but the MIT study I'll link to at the end is a great place to start).
There's something different about AI though, and I'm becoming increasingly uncomfortable with it the more I learn. In fact, this discomfort has caused me to take a step back from my career in tech and innovation.
It's not the advancement of the tech itself per se, it's the intentions (or lack of intention) of those who are currently its loudest advocates.
I believe we've now reached a point where the personal gain (be that for an individual, brand, or corporation) of being ‘first' has outweighed the shared moral understanding that we need a collective societal conversation. Do we want or need this? What are the pros and cons? What are the risks? Do we universally accept those risks?
It should be crystal clear to all of us that technological innovation is not moving at the same pace as our societal safety nets and governance.
We've all witnessed first-hand the negative impact that imbalance has had with the rise of social media (and specifically its connection to our mental health, and that of younger generations).
AI adoption is the genesis of a widening divide between digital haves and have-nots at a time when we already have massive and debilitating financial chasms across society.
The pace of development has increased exponentially (by design). The desire for it to succeed is more explicitly driven by financial gain. The potential societal impact is more transformative (for good and bad) than anything we've seen before, including the Spinning Jenny.
Put more bluntly, the rising AI tide is not lifting all boats. In fact, that tide is turning some boats into super yachts and the majority into leaky buckets.
I take it as fact, from listening to those with more knowledge than I, that the current (rapid) evolution of AI has us on the path to a singularity: the point past which we surpass our known understanding of how the world works.
The only real variable is time. How long will it take AI to get there? Years? Months? Weeks? Tomorrow?
With most topics of this weight, ‘opinions’ are considered the enemy. Uninformed shouting. Think Brexit and being ‘sick of experts’.
However, it is my fundamental, erm, opinion that with AI this is not the case.
Because we are at the edge of our understanding, there really are no experts who can guide us with absolute knowledge and certainty.
There are, of course, those with more knowledge, and largely what those experts are saying is that this needs to be the number one topic on the media agenda today.
They want us all, globally, to start talking about it with them.
When we do engage, and have a much wider, inclusive conversation about our wants, needs, fears, and hopes, we also foster a better chance of landing the positive scenarios.
The flip side: a black-box, invite-only approach guarantees that bad actors and those with vested interests will dictate the trajectory towards the singularity.
For so many people to have their lives impacted so much without having a say feels morally wrong.
We all need to learn, and by starting the difficult conversations now, we can learn before things break.
I'd love to hear your thoughts on anything I’ve raised in this series, or get in touch to discuss what AI may mean for your teams, brand, or customers.