Should We Be Afraid of AI?

Were HAL, the space-age AI assistant from the 1968 film 2001: A Space Odyssey, human, he would have celebrated his 50th birthday this year. That's really old in digital years. Yet five decades later, HAL still epitomizes the computer that might someday take over the world. Could it happen?

HELLO DAVE

To explore this question, it's important to note that HAL stands for Heuristically programmed ALgorithmic computer -- the key word being "heuristically." A heuristic is a general rule, drawn from past experience, that can be applied to learn or understand something new. We use heuristics to discover or learn things for ourselves. In the film, the computer itself comes up with the idea of killing the crew members as a way to resolve the conflict between its general mission to relay accurate information, on the one hand, and Mission Control's specific directive to conceal the true purpose of the mission from the crew, on the other. HAL reckoned killing the crew would kill two birds with one stone. Pretty smart.

But today's cognitive computers don't work like HAL -- in fact, just the opposite. Unlike HAL, today's AI assistants aren't heuristically programmed. Instead, the basic AI model used today is based on a "deep learning" technique called backpropagation that was first published more than 30 years ago. Backpropagation works backward mathematically through a network of simple calculations, measuring how much each connection contributed to the error in the system's output and nudging it to shrink that error, over and over, until the system converges on the best answer it can find. If you're thinking this takes a lot of computing power, you're getting the picture. At the time the backpropagation technique was introduced, a computer powerful enough to run the algorithm at scale had yet to be built. That changed eventually, and thanks to the tremendous processing power behind systems like IBM's Watson and Google DeepMind's AlphaGo, deep learning machines eventually went mainstream.
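To make that concrete, here is a minimal sketch of backpropagation in Python, training a tiny two-layer network to reproduce the XOR function. Everything in it -- the network size, the learning rate, the iteration count -- is an illustrative choice of mine, not anything taken from Watson or DeepMind; the point is simply to show the "work backward from the error, then nudge the weights" loop at the heart of deep learning.

```python
import numpy as np

# A toy two-layer network trained with backpropagation to learn XOR.
# All sizes and settings here are illustrative, not taken from any real system.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(scale=1.0, size=(2, 8))   # input-to-hidden weights
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 2.0
for step in range(10000):
    # Forward pass: compute the network's current guesses.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: starting from the error, work backward layer by layer
    # to measure how much each weight contributed to that error.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight slightly in the direction that reduces the error.
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

# Typically prints values close to 0, 1, 1, 0 -- the XOR pattern.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Scale that same error-correcting loop up to billions of weights and you have the engines behind today's AI systems.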

So, mathematically speaking, AI may as well be frozen in time. But computationally, thanks to supercomputers, it has moved light years ahead. IBM's Watson supercomputer, for example, runs 2,880 "processor threads" and has 16 terabytes of RAM, allowing it to sift through the equivalent of about 200 million pages of content in a matter of seconds. Not to downplay AI's mathematical sophistication, but as a practical matter cognitive computing is more about brute computational strength.

AI supercomputers run a suite of cognitive software programs that allow them to see, hear, read, talk, taste, understand, interpret, learn and recommend, which is what makes them seem, well, intelligent. But because AI still relies on the mathematical technique of backpropagation, some predict that, absent innovation in the underlying math, progress in AI will level out and stall well short of the fictional intelligence of HAL. While AI computers may get better and faster at processing data, they will forever be capable of just that -- processing data. In fact, AI purists argue that what we now call AI is merely machine learning: the ability of a computer to learn from its mistakes and thus improve its future accuracy. If they're correct, it's fair to say the computing industry has decided to declare victory on AI by anointing machine learning as AI.
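As a concrete picture of what "learning from its mistakes" means, here is a sketch -- with invented data and parameters of my own, not anything from a real product -- of one of the oldest machine learning algorithms, the perceptron. It adjusts its internal weights only when it gets an example wrong, and its accuracy improves as a result.

```python
import numpy as np

# A perceptron "learning from its mistakes": it adjusts its weights only
# when it misclassifies a point. The two clusters of data are invented
# purely for illustration.

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)),   # cluster labeled -1
               rng.normal(+2, 1, size=(50, 2))])  # cluster labeled +1
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)   # weights
b = 0.0           # bias

for epoch in range(10):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # the current guess is wrong
            w += yi * xi             # correct the weights toward the right answer
            b += yi
            mistakes += 1
    accuracy = np.mean(np.sign(X @ w + b) == y)
    print(f"epoch {epoch}: {mistakes} mistakes, accuracy {accuracy:.2f}")
```

Notice that nothing in the loop resembles reasoning; the "learning" is arithmetic driven by errors, which is exactly the purists' point.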

But even if machine learning falls short of true artificial intelligence, that shouldn't diminish its great potential. To illustrate the point, consider the evolution of human flight. Early attempts were comical, and often tragic, affairs in which men bordering on insanity flapped mechanical wings to imitate the flight of birds. Only later, thanks to a mathematical understanding of aerodynamics, was human flight finally achieved. But beyond obeying the same universal aerodynamic principles, humans don't actually fly like birds and never will. Nevertheless, human flying machines can do certain things far better than birds can. Likewise, while cognitive computers may never work like the human brain, they can do certain things far better. Simply put, planes are like birds, but they're not birds, and computers are like brains, but they're not brains. Thus, unless a fundamentally different mathematical model for AI is discovered, AI can merely supplement, but not replace, human thought.

This is not to say that jobs aren't threatened by AI. Indeed, some tasks are just as prone to being taken over by bots as comparable factory-floor tasks are to being automated. Clearly, people who perform such tasks should be fearful of AI's impact, because those jobs will go away. Fortunately, the losses will be at least partially offset by new jobs in the rapidly expanding AI industry. Whether displaced workers will be able to migrate to these new jobs remains to be seen, though history suggests it's doubtful.

In my next blog I'll discuss a framework for assessing future opportunities for AI and the impact, negative and positive, on your workforce.