
Technologists Are Creating Artificial Intelligence to Help Us Tap Into Our Humanity. Here's How (and Why).

New AI tools like Cogito aim to remind us how to be 'human,' issuing reminders and alerts for empathy and compassion.

By Hayden Field


When being empathetic is your full-time job, burning out is only human.

Few people are more aware of this than customer service representatives, who are tasked with approaching each conversation with energy and compassion — whether it's their first call of the day or their 60th. It's their job to make even the most difficult customer feel understood and respected while still providing them accurate information. Oftentimes that's a tall order, resulting in frustration on both ends of the call.

But over the last few years, an unlikely aide has come forward: artificial intelligence tools designed to help people tap into and maintain "human" characteristics like empathy and compassion.

One of these tools is Cogito, named for Descartes's famous proposition Cogito, ergo sum ("I think, therefore I am"). The AI platform monitors sales and service calls for large corporations (among them, MetLife and Humana) and offers employees real-time feedback on customer interactions.

During a call, an employee may see Cogito pop-up alerts on their screen encouraging them to display more empathy, increase their vocal energy, speak more slowly or respond more quickly. Interactions are scored and tracked on internal company dashboards, and managers can gauge, instantly, what different members of their team may need to work on.
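
To make the mechanics concrete, here is a rough sketch (in Python) of how rule-based, real-time cues like these could be wired up. The metrics, thresholds and cue wording are illustrative assumptions, not Cogito's actual model or tuning.

```python
from dataclasses import dataclass

@dataclass
class CallMetrics:
    """Rolling measurements for the current call; every field is an illustrative stand-in."""
    words_per_minute: float     # speaking pace
    response_latency_s: float   # seconds of silence before the rep replies
    vocal_energy: float         # 0-1 proxy derived from the audio signal
    empathy_score: float        # 0-1 output of a hypothetical acoustic/lexical model

def realtime_cues(m: CallMetrics) -> list[str]:
    """Map call metrics to on-screen cues. Thresholds are made-up placeholders."""
    cues = []
    if m.words_per_minute > 170:
        cues.append("Speak more slowly")
    if m.response_latency_s > 4.0:
        cues.append("Respond more quickly")
    if m.vocal_energy < 0.3:
        cues.append("Increase vocal energy")
    if m.empathy_score < 0.4:
        cues.append("Show more empathy")
    return cues

# A tired-sounding rep who paused too long before answering:
print(realtime_cues(CallMetrics(150, 6.2, 0.25, 0.55)))
# ['Respond more quickly', 'Increase vocal energy']
```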

As a call center representative in MetLife's disability insurance department, Conor Sprouls uses Cogito constantly. On a typical day, he takes anywhere from 30 to 50 calls. Each one lasts between five and 45 minutes, depending on the complexity of the issue.

Sprouls's first caller on the morning of Sept. 12, 2019, was someone with an anxiety disorder, and Cogito pinged Sprouls once with a reminder to be empathetic and a few times for being slow to respond (not uncommon when looking for documentation on someone's claim, explains Sprouls).

When Cogito first rolled out, some employees were concerned about constant supervisor oversight and notification overload. They were getting pinged too often about the empathy cue, for example, and at one point, the tool thought a representative and a customer were talking over each other when they were in fact sharing a laugh. But Sprouls says that the system gets more intuitive with every call. As for over-supervision, call center conversations are always recorded and sent to supervisors, so it's not much of a change.

In fact, Cogito may even offer a more realistic reflection of performance, says Sprouls. "A supervisor can't be expected to listen to every single call for each of their associates, so sometimes when we're just choosing calls at random, it could be luck of the draw — one associate could be monitored on an easy call, and another could be monitored on a hard one," he says. "Cogito is going to give you the end result: who needs to work on what. I think the way a lot of us really look at Cogito is as a personal job coach."

MetLife has been using Cogito for about two years, though it was first introduced in a pilot capacity.

Emily Baker, a MetLife supervisor with a team of about 17, says that her associates all benefited from Cogito's cues during the pilot process. She says one associate's favorite was the energy cue; he'd start slouching in his seat at the end of the day, and the posture meant he didn't project his voice as much. When the energy cue appeared (a coffee cup icon), he sat up straight and spoke more energetically so that he appeared more engaged in the call.

"I like the fact that I can see overall, on my particular supervisor dashboard, how we're doing as a team, if there are any trends," Baker says. "Is everybody speaking over the caller? Is everybody having trouble with dead air? You can drill down into each person, and it's really good for coaching one-on-one."

Now, MetLife is in the process of rolling out Cogito across even more of its customer-facing departments — claims, direct sales, customer growth. The company also plans to more than double the number of employees using the platform (from 1,200 to over 3,000).

"It's a little bit of a strange dynamic," says Kristine Poznanski, head of global customer solutions at MetLife. "We're using technology and artificial intelligence to help our associates demonstrate more human behavior. It's something you don't intuitively think about."

A growing trend

At his consulting job in the New Zealand Department of Child and Family, Josh Feast, co-founder and CEO of Cogito, says he learned that social workers could experience burnout in as few as three to five years. He was shocked by the irony — that a profession designed to care for people wasn't conducive to caring for the people in that profession.

An idea began to form, and it took further shape after a course at MIT's Media Lab, during which Feast had a key revelation: Big organizations understand data well, so if he wanted to help people inside a large organization, he needed to present his idea in a language the corporate team could understand. "It was almost like being hit by a lightning strike," he says.

And so Cogito was born. In the R&D phase, Feast and his co-founder worked with DARPA, the U.S. government's Defense Advanced Research Projects Agency. With soldiers struggling with PTSD in mind, DARPA provided the Cogito team with funding to research aid for psychological distress, and Feast began studying how nurses interacted with patients.

"There was a real "aha' moment where we discovered that if you could use that technology to understand the conversation — and to measure the conversational dance between nurse and patient — you could start getting a read of the degree of empathy and compassion they displayed ... and the resulting attitude the patient had to that interaction," says Feast.

He built dashboards to display measures of compassion and empathy, and he found something noteworthy: When people were given real-time feedback while speaking with someone, levels of compassion and empathy during the conversation improved. That realization was the key to Cogito's future.

But Cogito isn't the only AI-powered tool aiming to help us tap into our humanity.

Butterfly

There's Butterfly, an AI tool that aims to help managers empathize with their employees and increase workplace happiness. Once embedded in a workplace messaging system, it functions as a chatbot, coaching managers in real time based on employee surveys and feedback. Butterfly analyzes those responses to measure levels of stress, collaboration, conflict, purpose, creativity and the like, then provides managers with calls to action and reading materials to help them address problems on their team. For example, an executive with a highly stressed team might receive an article on how to create a more compassionate work environment.
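
A rough sketch of how that last step might work: roll recent survey scores up to the team level, find the weakest dimension and suggest a resource. The dimensions, numbers and article titles below are placeholders, not Butterfly's actual logic or content library.

```python
# Hypothetical team-level rollup: find the weakest dimension and suggest a resource.
TEAM_SCORES = {                 # 0-1 "health" scores from recent surveys; higher is better
    "stress management": 0.35,
    "collaboration": 0.72,
    "conflict resolution": 0.64,
    "sense of purpose": 0.80,
}

READING_LIST = {                # placeholder titles, not a real content library
    "stress management": "How to create a more compassionate work environment",
    "collaboration": "Running team retrospectives people actually use",
    "conflict resolution": "De-escalating disagreements on distributed teams",
    "sense of purpose": "Connecting day-to-day work to team goals",
}

weakest = min(TEAM_SCORES, key=TEAM_SCORES.get)   # lowest-scoring dimension
print(f"Call to action: your team scores lowest on '{weakest}'.")
print(f"Suggested reading: {READING_LIST[weakest]}")
```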

"In a nutshell, Butterfly was created in order to help managers to be on point when it comes to their… team's level of engagement and overall happiness," says co-founder and CEO David Mendlewicz. "Think about an AI-driven happiness assistant or AI-driven leadership coach."

Supportiv

Another AI-powered empathy tool is Supportiv, a peer counseling platform aiming to use natural language processing to take on daily mental health struggles such as work stress, anxiety, loneliness and conflicts with loved ones. Seconds after a user answers Supportiv's primary question — "What's your struggle?" — they're sorted into an anonymous, topic-specific peer support group.

Each group has a trained live moderator (who is also equipped to refer users to specialized or emergency services as needed), and an AI algorithm scans conversations to detect mood, tone, engagement and interaction patterns. On the moderator's side, prompts pop up — user X hasn't contributed to the conversation in a while, or user Y shared a thought above that hasn't been addressed. Co-founder Helena Plater-Zyberk's vision for Supportiv's next iteration: additional AI advances that could help identify isolated users in chats and alert moderators with suggestions on how to be more empathetic towards those individuals.
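
For a flavor of how the first step, routing a "What's your struggle?" answer to a topic group, could be done with basic natural language processing, here is a minimal sketch using TF-IDF similarity (via scikit-learn). The groups and keyword descriptions are invented; Supportiv's real matching pipeline isn't public.

```python
# Requires scikit-learn. Topic groups and descriptions are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

GROUPS = {
    "work stress": "deadlines overtime burnout boss workload pressure",
    "loneliness": "alone isolated no friends lonely disconnected",
    "relationship conflict": "argument partner spouse fight breakup family",
}

def match_group(struggle: str) -> str:
    """Route a 'What's your struggle?' answer to the closest topic group."""
    names = list(GROUPS)
    vec = TfidfVectorizer().fit(list(GROUPS.values()) + [struggle])
    group_vecs = vec.transform(GROUPS.values())
    user_vec = vec.transform([struggle])
    scores = cosine_similarity(user_vec, group_vecs)[0]
    return names[scores.argmax()]

print(match_group("My boss keeps piling on deadlines and I can't sleep"))
# -> 'work stress'
```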

The aim, says Plater-Zyberk, is to create "superhuman moderators" — using compassion, empathy and hyper-alertness to facilitate a group chat better than any normally equipped human.

IBM's Project Debater

Finally, when it comes to the theory "I think, therefore I am," IBM's Project Debater fits the bill. Introduced by the tech giant in January, it's billed as the first AI system that can debate complex ideas with humans. At its core, Debater is about rational thinking and empathy — considering opposing points of view and understanding an opponent enough to be able to address their argument piece-by-piece and ultimately win them over.

Dr. Aya Soffer, vice president of AI tech at IBM Research, envisions a variety of real-world applications for Debater. A policymaker might want to understand the range of implications of a law they're considering — in the case of banning phones from schools (a law the French government passed in 2018), what are the precedents, the pros and cons, the arguments on both sides? A financial analyst or investment advisor might use Debater to make smart projections about what a new type of technology may or may not mean for the market.

We typically look for supporting arguments in order to convince ourselves, or someone else, of something. But Soffer says that taking counterarguments into account could be even more powerful, whether to change a mind or strengthen a pre-existing view. That kind of empathy and higher-level logical thinking is something IBM Debater aims to help with.

Pitfalls and privacy

As is the case with all new technology, this type has some concerning use cases.

First, there's the potential for system bias in the data used to train the algorithm. If it's taught using cases of predominantly white men expressing empathy, for example, that could yield a system that scores women and minorities lower. A call center representative with a medical condition might display less energy than the perceived norm while doing their best to make up for it in other ways, yet still be flagged as falling short.

That's why it's a good idea for individuals to be provided this data before it's shared with their supervisors, says Rosalind Picard, founder and director of the Affective Computing Research Group at the MIT Media Lab. She believes it's a breach of ethics to share data on an employee's interactions, such as levels of compassion, empathy and energy, with a manager first.

And then there's the temptation for this type of technology to go beyond its intended use case — a helpful reminder to facilitate a genuine connection — and instead serve as a driver for insincere interactions fueled by fear. After all, similar tech tools are part of the foundation of social ratings systems (think Black Mirror's "Nosedive" episode). In 2020, China plans to debut publicly available social credit scores for every citizen. That score will help determine an individual's eligibility for an apartment, which travel deals they're offered, which schools they may enroll their children in and even whether they can see a hospital doctor without lining up to pay first.

Within the next five years, experts predict we'll make great strides in "sentiment analysis" — a type of natural language processing that identifies human emotions in text, along with related techniques that read facial expressions and body language.
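
On the text side, this is already easy to try. The sketch below runs an off-the-shelf sentiment classifier from the Hugging Face transformers library on two invented customer remarks; reading emotion from faces and body language is a separate computer-vision problem and isn't covered here.

```python
# Requires the transformers library; the default English model is downloaded on first run.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for line in [
    "Thanks so much, you've been incredibly helpful today.",
    "I've been on hold for an hour and nobody can answer my question.",
]:
    print(line, "->", classifier(line)[0])
# Each result is a dict like {'label': 'POSITIVE', 'score': 0.99...}
```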

But for Noah Goodman, associate professor at Stanford University's Computation and Cognition Lab, there's a moral dilemma involved: What's the right thing to do with the information these systems learn? Should they have goals — prompt us, adjust our environments or send us tools to make us feel happier, more compassionate, more empathetic? What should the technology do with data on our feelings towards someone else, our performance in any given interaction? And who should it make that information available to? "This is a place where the creepiness boundary is always close," says Goodman.

Another problem? AI simply can't replicate, or fully comprehend, human emotion. Take Cogito, for example. Let's say you're a customer service representative on the phone with customers all day, and you receive an alert that you're sounding low-energy and tired instead of high-energy and alert. That doesn't mean you're actually feeling tired, says Picard, and that's an important distinction to make.

"It doesn't know how I feel," says Picard. "It has no consciousness — it's simply saying that to this system listening to your vocal quality, compared to your usual vocal quality and compared to other people on the phone at this company's vocal quality, here is how you might sound, according to the data we've collected… It's not to say you are that way."

There's a misunderstanding that we're already at the point where AI effectively understands human feelings, rather than just being able to analyze data and recognize patterns related to them. The phrase "artificial intelligence" itself may propagate that misunderstanding, says Picard, so to avoid fueling public fear about the future of AI, she recommends calling it software instead.

"As soon as we call the software "AI,' a lot of people think it's doing more than it is," she says. "When we say the machine "learns' and that it's "learned something' what we mean is that we've trained a big chunk of mathematics to take a bunch of inputs and make a mathematical function that produces a set of outputs with them. It doesn't "learn' or "know' or "feel' or "think' anything like any of us do. It's not alive."

Implications and regulations

Some experts believe there will come a day when technology will be able to understand and replicate "uniquely human" characteristics. The idea falls under the "computational theory of the mind" — that the brain is a dedicated tool for processing information, and even complex emotions like compassion and empathy can be charted as data. But even if that's true, there's a difference between experiencing emotion and understanding it — and in Goodman's view, it'll one day be entirely possible to build AI systems that have a good understanding of people's emotions without actually experiencing emotions themselves.

There's also the idea that throughout the course of history, fear has often accompanied the release of new technology. "We're always afraid of something new coming out, specifically if it has a large technological component," says Mendlewicz. "Exactly the same fear rose up when the first telegraph came… and when the telegraph was replaced by the phone, people were also expressing fear… about [it] making us less human — having to communicate to a machine."

One of the most important questions to ask: How do we keep this kind of technology from being used to alienate people or to create more distance between human beings?

One prime example is social media platforms, which were introduced to augment human connectivity but paradoxically ended up as tools of polarization. "What we've learned from that is that human connectivity and the humanity of technology should not be assumed; it needs to be cultivated," says Rumman Chowdhury, who leads Accenture's Responsible AI initiative. "Instead of figuring out how we fit around technology, we need to figure out how technology fits around us."

That also means watching out for red flags, including the tech "solutionism" fallacy — the idea that technology can solve any and all of humanity's problems. Although it can't do that, technology can point out things we need to focus on in order to work towards more overarching solutions.

"We as human beings have to be willing to do the hard work," says Chowdhury. "Empathy doesn't just happen because an AI told you to be more empathetic ... [Let's say] I create an AI to read through your emails and tell you if you sound kind enough and, if not, fix your emails for you so that you sound kind. That doesn't make your a nicer person; it doesn't make you more empathetic... The creation of any of this AI that involves improving human beings needs to be designed very thoughtfully, so that human beings are doing the work."

Some of that work involves building systems to regulate this type of AI before it's widespread, and experts have already begun floating ideas.

For any AI tool, Chris Sciacca, communications manager for IBM Research, would like to see an "AI Fact Sheet" that functions like a nutrition label on a loaf of bread, including data such as who trained the algorithm, when and which data they used. It's a way to look "under the hood" — or even inside the black box — of an AI tool, understand why it might have come to a certain conclusion and remember to take its results with a grain of salt. He says IBM is working on standardizing and promoting such a practice.
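
As a rough illustration of what such a fact sheet might contain, here is a minimal sketch. The fields and entries are assumptions drawn from the nutrition-label analogy, not IBM's actual FactSheets specification.

```python
from dataclasses import dataclass, field

@dataclass
class AIFactSheet:
    """Nutrition-label-style metadata for a deployed model.
    Fields are assumptions based on the analogy above, not IBM's FactSheets spec."""
    model_name: str
    intended_use: str
    trained_by: str
    training_date: str
    training_data: list[str]
    known_limitations: list[str] = field(default_factory=list)

sheet = AIFactSheet(
    model_name="call-empathy-scorer-v2",            # hypothetical model
    intended_use="Real-time coaching cues for call-center agents",
    trained_by="Vendor ML team",
    training_date="2019-06",
    training_data=["Consented, anonymized service-call audio (US English)"],
    known_limitations=["Not validated for non-native accents or noisy phone lines"],
)
print(sheet)
```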

Picard suggests regulations akin to those for lie detection tests, such as the federal Employee Polygraph Protection Act, passed in 1988. Under a similar law, it stands to reason that employers would be unable to require the use of AI communication-monitoring tools, with few exceptions — and that even in those cases, they couldn't monitor someone without informing them about the technology and their rights.

Spencer Gerrol, CEO of Spark Neuro — a neuroanalytics company that aims to measure emotion and attention for advertisers — says the potential implications for this kind of empathetic AI keep him up at night. Facebook may have created "amazing" tech, he says, but it also contributed to meddling in the U.S. elections. And when it comes to devices that can read emotions based on your brain activity, consequences could be even more dire, especially since much of emotion is subconscious. That means that one day, a device could feasibly be more "aware" of your emotions than you yourself are. "The ethics of that will become complex," says Gerrol, especially once advertisers attempt to persuade individuals to take action by leveraging what's known about their emotions.

As for the founder of Cogito himself? Feast believes that over the next five to 10 years, AI tools will split into two categories:

  1. Virtual agents that complete tasks on our behalf.

  2. Intelligent augmentation, or services built around reinforcing or extending our own human capabilities.

Feast envisions more of a meld between man and machine, tools that we'll deem necessary to help us perform the way we want to in particular settings. These types of tools, he says, will "extend and reinforce our humanness."


Hayden Field is an associate editor at Entrepreneur. She covers technology, business and science. Her work has also appeared in Fortune Magazine, Mashable, Refinery29 and others. 

