What Westworld Got Right About the Future of AI
The promise of artificial intelligence, and the concern about it, lies in its ability to seemingly understand and mimic human emotions.
Artificial intelligence has quickly become part of our daily routine -- it powers everything from simple features like automatic image tagging on Facebook and purchase prediction on Amazon to more complex systems like smart cars and connected homes. We can even thank AI for HBO’s hit TV series, Westworld, a sci-fi thriller about an amusement park populated with AI-powered robots called “hosts.” The show dominated the cultural zeitgeist this past fall.
While the AI technology we interact with today gives us a hint at what’s to come, it’s actually Westworld that spells out the future most clearly. Before I get ahead of myself, let me clarify: I don’t foresee an apocalyptic robot takeover happening any time soon (or ever, for that matter). But Westworld’s narrative begins in a way that is quite similar to the true beginning of AI -- it may not look the same (you don’t see many human-like “hosts” wandering the streets), but the technology behaves the same.
In the beginning, Westworld “hosts” engage with humans by spitting out canned responses that are personalized based on the interaction. Sure, it sounds futuristic at first, but that’s exactly what happens when you interact with Apple’s Siri or Amazon’s Alexa. Things get interesting when the “hosts” start to learn emotion, and they remember and reflect on not just facts and figures, but desires, doubts, opinions and passions. This is indicative of where the true potential lies for AI in the real world, too.
Right now, AI is largely clinical, canned and limited. But just as Westworld’s “hosts” were able to learn emotion from human interactions, our own technologies can become more interesting and more advanced if they learn from the right set of emotional data. That’s the future -- plain and simple. But how do we get there? And who will lead the charge?
Data is in the driver’s seat.
Artificial intelligence has hit the point of maximum awareness and minimum productivity. That should come as no surprise, as it happens with most technology waves. The harsh reality is that the majority of AI-focused companies will fail, and many AI projects at well-established companies will die just as quickly. There are thousands of open source AI algorithms out there, which means technology isn’t the limiting factor. Ultimately, companies with access to rich (and large) data sets will win out over those that must rely solely on algorithms to differentiate.
Even companies that have access to a wealth of computing power and data may fail to implement successful AI solutions unless they undergo a significant cultural shift. Take IBM, for example. Despite nearly 10 years of developing and marketing Watson, IBM has so far failed to define a compelling “human” use for the technology. Watson has instead been relegated to narrow use cases involving pattern recognition and prediction (some of which are very valuable, such as improving cancer detection, identifying financial risk and fraud, and other high-performance computing applications), but it has not developed a general “understanding” of human interactions, human emotions, speech patterns and human responses to information.
Success is somewhere at the intersection of humanity and technology.
To add another layer, that data must be used to train, or “teach,” emotional intelligence. Most AI solutions are trained on left-brain information, but the real value (just as Westworld predicted) is in teaching your technology how to navigate the right brain. That’s how AI will begin to learn emotion, better understand human sentiment and solve more complex problems. For instance, when a customer calls a service center with a thorny product or service problem, a good agent offers not just diagnosis and a fix, but subjective experiences: listening, letting the customer vent, showing empathy for the customer’s problem, and conversation that calms the customer and focuses energy on solving the problem. Ultimately, this makes the customer feel heard, understood and affirmed. An interactive voice response (IVR) “chatbot” in today’s call centers offers very little of that human interaction, and customers are more likely to hang up frustrated after an efficient but inhuman interaction than after a less successful, empathetic one.
So how can you train a computer to listen not just to your words, but to your emotions and mental state? You need to train a computer to develop empathy. It needs to understand all the markers of human interaction -- tone, emotion, sentiment and timing -- not just words. Today’s AI systems are largely blind to those critical pieces of human interaction, and thus can’t learn from them.
Left-brain AI learns from objective, structured data: customer interactions, clicks, purchases and transactional information like question/answer pairs or problem/solution diagnoses. Right-brain AI requires much more subjective, emotional human data, like conversations and other human interactions. To make sense of this data, the designers of those solutions must know how to take human interactions (emails, chats, phone calls, social media threads) and tag them, identifying emotion, sentiment and other markers so the computer “understands” humans better. Once the computer understands how to process human interaction data, it can learn from it.
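To make the tagging step concrete, here is a minimal, purely illustrative sketch in Python -- not Clarabridge’s actual pipeline. It attaches emotion and sentiment labels to a raw customer message using toy keyword lists; a production system would use trained classifiers over large labeled corpora, but the structure of the output (text plus emotion and sentiment markers) is the kind of training data the paragraph above describes.

```python
# Illustrative only: toy keyword lexicons standing in for a real
# emotion classifier. All labels and keywords here are hypothetical.
EMOTION_KEYWORDS = {
    "anger": {"furious", "angry", "outraged", "unacceptable"},
    "fear": {"worried", "afraid", "anxious"},
    "happiness": {"thanks", "great", "love", "wonderful"},
}

def tag_interaction(text: str) -> dict:
    """Attach emotion and sentiment markers to one raw interaction."""
    # Normalize: lowercase words with trailing punctuation stripped.
    words = {w.strip(".,!?").lower() for w in text.split()}
    emotions = sorted(
        label for label, keys in EMOTION_KEYWORDS.items() if words & keys
    )
    # Crude sentiment roll-up: negative emotions vs. positive ones.
    negative = {"anger", "fear"} & set(emotions)
    positive = {"happiness"} & set(emotions)
    if negative and not positive:
        sentiment = "negative"
    elif positive and not negative:
        sentiment = "positive"
    else:
        sentiment = "neutral"
    return {"text": text, "emotions": emotions, "sentiment": sentiment}

record = tag_interaction("I am furious -- this is unacceptable!")
print(record["emotions"], record["sentiment"])  # ['anger'] negative
```

Each tagged record pairs the original text with its emotional markers, which is what lets a downstream model learn from the subjective side of the interaction rather than the words alone.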
When AI is able to grasp key emotions -- like anger, fear, happiness, desire and love -- it’s better suited to listen, learn and understand. And we’re not that far away from technology that can better understand and better serve us. Soon, companies will be able to automate a large chunk of data analysis, decision making and customer service, allowing employees to tackle the most complex challenges rather than get bogged down in the details. What’s most exciting is that customers will still receive human-like service that recognizes emotion and can respond accordingly, just as a person would. In the future, an intelligent, AI-powered chatbot might greet you with a friendly opening, engage in chit-chat, truly listen to your words, pause naturally, react to emotional cues and not just diagnose your problem, but connect with you as it works to solve it efficiently and intelligently.
Westworld got it right -- the most impactful AI technologies will tap into emotion. These personal assistants won’t feel like we do, but they will be able to understand what we’re feeling. That spells huge opportunity for streamlined services, innovative products, improved workflow and beyond.
Sid Banerjee is executive chairman and co-founder of Clarabridge. A founding employee at MicroStrategy, Banerjee held VP-level positions in both product marketing and worldwide services. During his tenure leading MicroStrategy’s worldwide services division, he grew the organization to 500+ employees supporting enterprise deployments of BI solutions. Before joining MicroStrategy, Banerjee held management positions at Ernst & Young and Sprint International.