AI Cometh Part 1: WTF are you talking about?
I was recently asked to write a short paid article on the risk of AI.
It's my belief that we’re at a precipice with AI that is being shepherded by the few but will impact the many.
It's also my belief that we, gen pop, are not only entitled to an opinion but should, in fact, have one (even if you're not from a tech background).
The article was supposed to be 1000 words. I did a lot of research, and therefore wrote a lot of words (which are below, unedited).
Not only did the research generate a lot of characters on a gDoc, it also put me in an increasingly dark head space. Some of the things I read and tried to mentally process left me feeling empty and sad.
So it was during a particularly long and beautiful bike ride (try it) that I made the decision to leave the article as is. Do no further research. Do no further editing.
It is still, however, a topic I want to share and ignite conversation on.
So I'm posting it here to get it out of my system, and move on from it. There will be typos. Soz.
It’s a huge topic that I’ve tried to do justice to while keeping it digestible, but there’s a lot to cover in context, so I’ll be breaking it into 5 parts here:
Where are we today?
What’s the ‘AI Experts’ pov?
What could disruption look like?
Are humans actually the problem?
Is doomsday hyperbole?
Below is part one.
I hope you find some of it interesting. Equally I hope you find some of it alarming.
Each one teach one.
Part 1: Definitions and the different types of AI.
Probably one of the first things to wrap our heads around when trying to understand more about AI and its impact is that not all AIs are created equal.
For the sake of this article we’re going to focus on the differences between Narrow AI (sometimes called weak AI) and Strong AI.
Narrow AI is designed to focus on a specific task. Examples of these types of tasks could be answering questions based on a user's specific input or playing chess. It can specialise in either one of these tasks, but not both.
Generative AI (GAI) is probably the most prominent type of narrow AI that we’re familiar with today: think ChatGPT and Midjourney. Yes, these systems can recall past answers to ‘learn’ and ‘improve’ their output, but they pull from a limited knowledge base, have a specific set of parameters within which they can operate, and are focused on one specific type of task. Quality of user input is key. Midjourney is not going to learn to play chess purely of its own accord. As such, Narrow AI has the potential to outperform humans, but only in the specific areas it is designed to excel in.
Narrow AI is the type of AI we are currently seeing the most explosive growth in. It is also the type generating the most discussion, sometimes unhelpfully. You’ve all seen the memes about Midjourney replacing human creatives (spoiler: it won’t, it will just disrupt the creative arts in both good and bad ways).
Generative AI just presents the opportunity for truly creative people to experiment and innovate with their process, and for mediocre creatives to produce mediocre work more quickly and cheaply. It can also be argued it offers new routes into the creative world for those who can express their ideas via tech but not ‘traditional’ materials. There has always been poor creative work, great creative work, and the associated ‘value’ of that work. Tools change: we’re not still exclusively using charcoal and calcite to create art, or etching advertising campaigns onto the innards of a cave.
Strong AI is different.
Strong AI can perform a variety of unrelated tasks and processes, teaching itself to solve new problems as it ‘learns’ from its previous actions. Strong AI is also referred to as Artificial General Intelligence (AGI).
AGI is able to learn, reason, compare, and infer. It can detect patterns, improve its own analytical skills, classify data on its own, and make its own decisions (i.e. what to focus its next set of tasks on).
In theory AGI could help make the world a much better place.
It could help us find new ways to accelerate our sustainability goals (although at a huge energy cost). It could help us improve healthcare and fight cancer. It could free us from the menial tasks that dominate our lives, giving us the time to focus on the fun stuff and the creating we all crave.
The development of Strong AI aims to create machine-based intelligence that is indistinguishable from the human mind. But, like a child, AGI would have to learn through experience to advance its abilities over ‘time’.
It also doesn’t actually exist as far as we know.
These two things (its apparent non-existence and our understanding of the time it takes humans to learn) mean that the concerns around AGI are often overlooked or minimised in favour of discussion around the more immediate concerns of GAI.
However, IMO, focusing on a narrow set of risks because they are in front of us right now, rather than an infinitely bigger set of risks because they seem too far away, is itself the biggest risk with the current trajectory of AI development and discourse.
In fact, some very well-regarded ‘experts’ are starting to be very vocal about this, unfortunately falling largely on deaf ears.
So let’s take a deeper look at that in part two…