When Will AI REALLY Take Off?


There's a saying about English ivy -- first year sleeps, second year creeps, third year leaps. I can testify to its accuracy, but it's incomplete. Here in Atlanta, where the weather alternates between desert and rain forest conditions, there should be a fourth part that goes: and then completely takes over. I've seen unchecked English ivy take down giant oak trees. I think artificial intelligence is like English ivy.

In HR, we're straddling the sleeping and creeping stages of AI. A recently completed survey of HR delivery practices, conducted as a research partnership between HRSSI and Aon's HR Effectiveness practice, made this evident:

  • While only about 15 percent of HR organizations currently use AI assistants, otherwise known as chatbots, the remaining 85 percent expect to begin using them in the next couple of years.
  • Even more revealing, and the justification for the claim I'm about to make, is that the most pressing concerns about further adoption of bots are: a) the cost of initial configuration and ongoing maintenance; and b) the cost to implement an employee experience platform that enables chatbot functionality. Much lower on the list, at the very bottom in fact, was the concern that bots might not function as promised.
  • Confirming this confidence in bot capabilities, the data also showed that the ability of bots to deliver information faster and more accurately than human beings ranked higher on expected benefits than cost savings through FTE reductions.  

In other words, the chief barrier to increased AI adoption is making the business case. To illustrate, I'll refer back to the early days of human flight. We're all familiar with the old footage of early failed attempts at human flight, with brave, if slightly crazed, "pilots" launching themselves from cliffs in contraptions built to imitate the flight behavior of birds, inevitably crashing to the ground. But eventually, after enough bruises and broken bones, these painful experiments produced the first true flying machine. The rest is history. 

The mistake made by those earliest bruised adventurers was trying to fly like a bird. Human flying machines don't fly like birds, in that they don't flap their wings. They do, however, employ the same aerodynamic principles as birds' wings. When experimenters realized that humans could fly by using the same aerodynamic principles as birds but not the same behaviors, human flight, well, took off.

Airplanes still can't do many of the things birds can, but birds can't do many of the things aircraft can either. Once we realized that flight was not about imitating birds but about applying their principles, we were able to create machines that in certain ways could fly much better than birds. The "birds" humans built can carry hundreds of people across vast oceans, protect our shores from invasion and fly into outer space, things no bird can do.

Now I arrive at my main point.

In these early days of AI, we're still making the mistake the early flight explorers did in trying to make AI do what human beings do. Instead, like the later successful inventors, we need to make AI do what human beings cannot. 

As long as we think of AI as a machine that can do what humans do, business cases for AI will fly about as well as the first flying machines. It's not that bots can't do some things humans can; that's been proven already. But who really cares? Do we truly care that chatbots can answer routine questions and retrieve standard information like human agents? When you consider the volume of such inquiries and the amount of time (FTEs) spent resolving them, do you really think Wall Street analysts will pay attention? I do believe that chatbots can pay for themselves, and then some, by doing this type of work. But that's not what will make AI take flight. Instead, we must focus on bots doing what human agents cannot, which means providing services that currently are not possible through human channels.

This raises the central question: if bots aren't doing what human agents do, and thus enabling headcount reductions, how can we measure their ROI? 

Back to English ivy...

The first generation of bots, which do what human agents do, will pay for themselves, but barely more; in other words, they will creep. But the next generation of bots, built to do what human agents cannot, will deliver far more. What that might be, I frankly can't say (I don't think any of us really can). But trust me, they will. That's just how technology works.

In the interim, I see organizations deploying bots to do what human agents otherwise would do, basically breaking even or a little better. Over time, however, bots will be used in ways we cannot imagine today, doing not what human agents can do but what they cannot. Consequently, business cases will no longer be based on headcount savings but on things far more powerful. I can speculate about what those more powerful things might be: more engaged employees, better-informed talent decisions, etc. But that's not the point. Don't ask your consultant what these future bot capabilities are, because we don't know. Nobody does.

Despite those first attempts at human flight being as comical as they were painful, they eventually took us to the moon and back again, because we didn't give up. Like English ivy, human flight slept, then crept, then leaped...and then completely took over. Bots will do the same.

Be patient, have faith, and, most importantly, take part in the experiment.