Robots vs humans

Suppose you enter a dark room in an unknown building. You may panic about some potential monsters lurking in the dark. Or just turn on the light, to avoid painfully bumping into the furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, there are people who believe that, as we step into the room, we may run into some evil, ultra-intelligent machines. Fear of some kind of ogre, such as a Golem or a Frankenstein’s monster, is as old as human memory. The computerised version of such fear dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

“Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.”

Once ultra-intelligent machines become a reality, they may not be docile at all but enslave us as a subspecies, ignore our rights and pursue their own ends, regardless of the effects this has on our lives. If this sounds too incredible to be taken seriously, fast-forward half a century: the amazing developments in our digital technologies have led many people to believe that Good’s “intelligence explosion”, sometimes known as the Singularity, may be a serious risk, and that the end of our species may be near if we are not careful.

Stephen Hawking, for example, has stated: “I think the development of full artificial intelligence could spell the end of the human race.” Yet this is as correct as the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble. The problem is with the premise. Bill Gates, the co-founder of Microsoft, is equally concerned:

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

And this is what Elon Musk, CEO of Tesla, a US carmaker, said:

“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Just in case you thought predictions by experts were a reliable guide, think again. There are many staggeringly wrong technological forecasts by great experts. For example, in 2004 Gates predicted: “Two years from now, spam will be solved.” And Musk speculates that “the chance that we are not living in a computer simulation is one in billions”. That is, you are not real; you are reading this within the Matrix. Literally.

The reality is more trivial. Current and foreseeable smart technologies have the intelligence of an abacus: that is, zero. The trouble is always human stupidity or evil nature. On March 23rd 2016 Microsoft introduced Tay, an AI-based chat robot, to Twitter. The company had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil, Hitler-loving, Holocaust-denying, incestuous-sex-promoting, “Bush did 9/11”-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the tricky and nasty messages sent to it. Microsoft had to apologise.

This is the state of AI today, and for any realistically foreseeable future. Computers still fail to find printers that are right there, next to them. Yet the fact that full AI is science fiction is not a reason to be complacent. On the contrary, after so much distracting and irresponsible speculation about the fanciful risks of ultra-intelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual and serious challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.