Thoughts on AI—from Consumer, to Enterprise, and Back

1Q 2017 | IN-4410


Is AI too Artificial?

NEWS


ABI Research attended AI World in November 2016, and as related research ramps up, it is worth considering where and how some of our markets will be impacted by AI. At the conference, one common question was "when will AI become less artificial?" Chatbots, in particular, draw the most disparate opinions, with some claiming these solutions fall short of AI while others believe they are critical elements of the transition to true intelligence. It is easy to see why such a wide breadth of opinions exists around chatbots, particularly if one's definition is comprehensive and includes any form of automated customer service/assistance.

At its most basic, a chatbot, like any virtual assistant, maps intents (what the user wants to accomplish or have solved) to the entities or actions that the AI outputs in response. While some services/platforms do employ self-learning to help populate these intents and entities, human intervention, or oversight, is still necessary to make key decisions about links and suggested solutions. For instance, the platform might suggest a solution with a 90% confidence level, but it is up to a human overseer to make the final decision on whether that answer becomes part of the chatbot's core answers (e.g., in an FAQ). With training, an AI chatbot might move from 60% to 75% accuracy at the start to the upper 90% range, which is rather spectacular and could greatly decrease costs at customer support centers and in other Q&A-type scenarios.
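To make that oversight loop concrete, the following is a minimal sketch in Python. The names here (`Suggestion`, `CONFIDENCE_THRESHOLD`, the review queue) are illustrative assumptions, not the API of any particular chatbot platform; the point is that even a high-confidence suggestion is only queued for a human to approve before it joins the bot's core answers.

```python
# Illustrative sketch of a chatbot's intent/entity flow with human oversight.
# All names are hypothetical, not drawn from any specific platform.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.90  # e.g., the 90% confidence level mentioned above


@dataclass
class Suggestion:
    intent: str        # what the user wants to accomplish or have solved
    answer: str        # the entity/action the AI proposes in response
    confidence: float  # the model's confidence in this mapping


@dataclass
class ChatbotKnowledgeBase:
    approved: dict = field(default_factory=dict)     # core answers (the FAQ)
    review_queue: list = field(default_factory=list)  # awaiting human review

    def handle(self, suggestion: Suggestion) -> str:
        """Self-learning proposes; a human still makes the final call."""
        if suggestion.confidence >= CONFIDENCE_THRESHOLD:
            # High confidence is not enough on its own: the platform only
            # *suggests* the mapping, and a person decides whether it
            # becomes a core answer.
            self.review_queue.append(suggestion)
            return "queued for human approval"
        return "discarded (low confidence)"

    def approve(self, suggestion: Suggestion) -> None:
        """Human oversight promotes a suggestion into the core answers."""
        self.approved[suggestion.intent] = suggestion.answer
```

The design choice worth noting is that the threshold gates only what reaches the review queue, never what the bot is allowed to say; promotion into the FAQ always passes through `approve`.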

However, what is shown and discussed at events like AI World does not always translate elegantly to the real world. In other words, we are constantly reminded that AI is still simulated intelligence, and even though some claim the "speech" component is completely solved, there are still lingering issues when it comes to using virtual assistants and chatbots in the home. Machine learning, or training, can also run into issues if the process is "crowdsourced" or expected to develop through user input; Microsoft's chatbot Tay was an experiment that ended rather disastrously (depending on one's perspective) after the Internet had its way with teaching it to communicate like a Millennial. So, where will these systems gain the most traction in the coming months and years?

Forget the Turing Test, Just Turn on My TV

IMPACT


Most individuals in the developed world have interfaced with some form of AI or chatbot; it is important to remember that chatbots extend beyond voice communications and work over instant messaging services as well. For example, Facebook is home to thousands of chatbots used by various companies to communicate with their users; in addition, a growing number of solutions help companies integrate chatbots into their services across a range of platforms/ecosystems. Banks, pay TV services, credit card companies, etc., have all implemented front-end chatbots to help reduce the load on human representatives. For simple tasks like checking balances or bank hours, these solutions are adequate, but when complexities mount, too often we find ourselves immediately asking to speak to a "representative."

We are clearly far from passing the Turing Test with most mainstream solutions, and as complexity increases, these systems often see rising failure rates. Take, for example, the virtual assistants that are quickly becoming fixtures in our mobile devices and homes. Alexa, Google Assistant, Cortana, and Siri (and perhaps Samsung's Viv) are all vying to become your personal assistant in the home and potentially in the workplace as well. As in the past, these technologies will find purchase and develop at the consumer level before pushing their way into the commercial realm; in other words, these solutions are not quite sophisticated enough to bridge the gap between home and work, but that time is coming (Microsoft's acquisition of LinkedIn and Facebook at Work are clear examples of this coming crossover).

Despite all the claims of "natural language," these systems are still too rigid when it comes to processing intents, and the ecosystems are still too rudimentary and rough to truly hit the mainstream. Logitech's Harmony, for instance, has two Alexa Skills, one for the smart home and one geared more toward AV, but the commands differ (one requires you to say "Alexa, tell Harmony to…," while the other accepts "Alexa, turn on the TV"). In addition, the user must still bounce between multiple apps to get all the pieces working together, which again is not an ideal scenario from the user's perspective.

Making the Jump to the Enterprise

COMMENTARY


Eventually these kinks in the consumer space will work themselves out, and chatbots and virtual assistants will certainly find a place in some, if not most, consumers' lives. Google has had mixed results with its hardware, but its Amazon Echo/Alexa competitor might have some staying power, particularly if Google's Pixel phone has legs. Regardless, the initial investments have already been made, and the working world will not shift away from finding additional efficiencies through automation; AI will therefore continue to find applications within the commercial space. While AI will not replace humans, it will allow workers to operate more efficiently, increasing productivity without extending the workday or workload. As with the smartphone, users of virtual assistants will eventually want these artificial intelligences to follow them throughout the day; our penchant for using personal mobile devices in the workplace engendered the BYOD movement, and while I doubt we will call it BYOAI, the principle might be the same. What might these virtual assistants do for us in the workplace?

At the most rudimentary level, there is calendar maintenance, but as more applications plug into these assistants, they will find new ways to bridge the gap between the home and the user's workplace. For instance, if the AI knows the user works in the technology industry, it might highlight news updates pertinent to that user's work interests. The AI could also send quick messages to contacts without requiring additional input from the user; for example, it could check the traffic and, if the user is likely to be late to a meeting, automatically inform the participants of the traffic problems and subsequent delay. Again, these are compelling reasons for social networks like Facebook to look more intently at the professional space, not only for growth but to get a head start on the next push from the consumer to the enterprise. Once this transition happens, it will help fuel the consumer market as well, pushing it increasingly into the mainstream; again, not too dissimilar from the smartphone market.
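The traffic-delay scenario can be sketched as a simple rule. Everything in this sketch is a hypothetical stand-in: `check_traffic_delay` substitutes for whatever mapping/traffic service a real assistant would query, and the outbox list stands in for its messaging channel.

```python
# Hypothetical sketch of the "late to a meeting" automation described above.
# check_traffic_delay() and the outbox are placeholder stubs, not real APIs.
from datetime import timedelta


def check_traffic_delay(route: str) -> timedelta:
    # Stub: a real assistant would query a live traffic/mapping service here.
    return timedelta(minutes=15)


def notify_if_late(meeting: dict, route: str, outbox: list) -> bool:
    """If traffic exceeds the meeting's slack, message every participant
    automatically, with no additional input from the user."""
    delay = check_traffic_delay(route)
    if delay > meeting["buffer"]:
        minutes = int(delay.total_seconds() // 60)
        for person in meeting["participants"]:
            outbox.append((person, f"Running ~{minutes} min late due to traffic."))
        return True
    return False
```

The assistant acts only when the predicted delay exceeds the meeting's buffer, so routine commutes generate no messages at all.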

According to some participants at the AI World conference, we will eventually shed the "artificial" moniker and perhaps reach a "Super AI," at which point the other common question, about Skynet (the AI in "The Terminator" movies that becomes self-aware and tries to wipe out the human race), comes into play; but we still have some time before that happens.
