It’s remotely possible that an AI already exists, burbling away quite happily and innocently somewhere on the Internet, ignorant of the miracle of its own existence and no more self-aware than a box of tissues.
The possibility that it’s plotting the destruction of humanity, let alone all life on Earth, seems remarkably unlikely. It may not even realise in any logical sense that there is such a thing as biological life.
And if its intelligence were to develop self-awareness (a big ask) and subsequently an awareness that other life exists (another big ask), so what? Why would it divert resources away from sorting through key words for the NSA (or whatever task it was created to fulfil) to devise some method of eliminating humanity? What broken thread of logic would set it on such an absurd course?
But what if an AI accidentally sets off Armageddon?
In an article for the New York Times titled “Artificial Intelligence as a Threat”[i], technology writer Nick Bilton raises the possibility of a “rogue computer” derailing the stock market, or a robot programmed to fight cancer concluding “that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
Well, hell, why not go the whole hog and create an AI that can both derail the stock market and exterminate humanity?
In both of these scenarios, surely AI is at best an option? I can imagine a plain old stupid robot making both of these mistakes because of crappy programming or intentional sabotage. Neither case is a genuine argument for us old-fashioned biological intelligences to be afraid of AI. (They may, however, be arguments for us to be afraid of important decision-making being taken from human hands and put into the silicon hands of machines that don’t give a damn, whether their processors amount to real intelligence or just a hill of beans.)
Bilton then raises the possibility of self-replicating nanobots being programmed by someone of “malicious intent” to extinguish humanity. But again, the nanobots don’t have to possess AI for this nightmare to become a reality.
Bilton concludes with two possible problems with AI, put forward by futurists like Elon Musk: first, that AIs created to make decisions like humans will not have a sense of morality; and second, that intelligent machines will one day go on to build even more intelligent machines that ultimately will lord it over the planet.
While it is true that AIs are unlikely to have a sense of morality, they are just as unlikely to experience murderous paranoia or indulge in sociopathic tendencies. A human without any morality may want to kill other humans, but how does that translate to an AI without morality wanting to kill humans, or for that matter kill other AIs?
As for AIs creating super-AIs that end up ruling the earth, why would they want to? I love playing Civilization V on my computer, and using my tanks and political clout to take over the whole game map, but if I had a brain the size of the Empire State Building there’s about a zillion other things I could do that would keep me entertained and fulfilled, and none of them involve bending creatures of lesser intelligence to my will. There’s a whole universe out there to explore, and telling the Simon Browns of this world what to do or how to do it simply wouldn’t float my chip.
[i] And three days later picked up by the Sydney Morning Herald for its weekend edition of 8-9 November 2014, which is where I came across it.