Five Barrier Questions to AI Adoption


Some use cases are now well established, especially where large data volumes would overwhelm a human and the cost of a software mistake is small: in particular, classifying and segmenting customers and recommending next-best actions and next-best offers. But many other use cases have not yet been penetrated by the technology. Partly this is just a matter of technological maturity, but it also stems from concerns that managers have about AI software systems. I see five key questions from managers that are getting in the way of successful implementation of AI software.
 
1. Will the software go crazy and cause me major problems? We all know that AI software systems do not have common sense the way humans do. Even in the impressive demonstration of IBM's Watson question-answering system on the game show Jeopardy!, where it beat the expert human contestants, it made one egregious mistake that no human ever would.[1]
 
2. Will I be responsible if the software screws up? Operations managers are held accountable for business results. If an employee makes a mistake, he or she can be blamed, then either be fired or receive more training. But what do you do when the software makes a bad mistake? I guess you can try to blame the IT department. Or perhaps you taught the AI wrong?[2]

3. Can I trust that the answer is the best one (or at least a good one)? A manager can ask a person, “How did you come to that conclusion?” and probably get a reasonable answer. But AI systems do not have such “explanation facilities” readily available.[3] So the manager just has to accept the answer or not.
 
4. Will my employees accept it, or will they re-do much of the work to check the answers? Employees have these same concerns. I have seen many cases where employees who care about the quality of their organization's work will double-check a recommendation from the machine. This destroys the economic value of the AI. Of course, they may come to trust it over time, but evaluations of the technology usually happen just a few short months after implementation.
 
5. Is it really worth doing? Given these concerns, is it really worth putting in all this technology? Of course, we are in the early days of artificial intelligence, and many more advances will make the answers better and, eventually, provide explanations of what is behind them. And we will all become more familiar and comfortable with our machine brethren.
 
 
1. From https://www.aol.com/2011/02/17/the-watson-supercomputer-isnt-always-perfect-you-say-tomato/: On Day 2, Watson missed one clue by a country mile -- better make that an entire country. During a Final Jeopardy! segment that included the "U.S. Cities" category, the clue was: "Its largest airport was named for a World War II hero; its second-largest, for a World War II battle." Watson responded, "What is Toronto???," while contestants Jennings and Rutter correctly answered Chicago, for the city's O'Hare and Midway airports.
 
2. See “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day” at https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
 
3. The US Defense Advanced Research Projects Agency initiated its Explainable Artificial Intelligence (XAI) project to advance the state of the art, stating, “Machine learning models are opaque, non-intuitive, and difficult for people to understand.” See https://www.darpa.mil/program/explainable-artificial-intelligence.
