Why Are Some Bots Racist? Look at the Humans Who Taught Them. Here are a few things we can do about human biases in machine learning.

By Jordi Torras

Opinions expressed by Entrepreneur contributors are their own.


Because they run on scientific algorithms, we trusted machines to deliver neutral, fair, impartial answers. Supposedly free from unjust human biases and the filters of past experience, they were expected to be as black and white as 2 + 2 = 4. Instead, we have found that the data scientists who create these algorithms carry their own unconscious biases, which subtly filter into the algorithms and shape their behavior. More revealing still, even when no bias is introduced during the development phase, machines learn from the discriminatory undertones they perceive in our society.


We can see this bias at play when we use Google to search for images of CEOs: significantly more white males appear than women or minorities, and search results for higher-paying jobs are shown more often to men than to women. An algorithm designed to predict the likelihood of a person committing a future crime turned out to be unfairly racist. Researchers at Boston University and Microsoft Research New England found that machines associated the word "programmer" with "man" rather than "woman," and that the word most similar to "woman" was "homemaker."
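That "programmer"/"homemaker" finding comes from probing word embeddings with vector analogies. The sketch below is a minimal illustration of what such a probe looks like, assuming the gensim library and its downloadable "word2vec-google-news-300" vectors (a large download, and the exact neighbors it returns may differ from the published study).

```python
# Minimal sketch: probing a pretrained word embedding for gendered associations.
# Assumes gensim is installed; api.load() fetches the large Google News vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Analogy query: "man is to programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=5))

# Compare how close "homemaker" sits to "woman" versus "man" in the vector space.
print(vectors.similarity("woman", "homemaker"),
      vectors.similarity("man", "homemaker"))
```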

AI's predictive power creates solutions.

Artificial intelligence (AI) is arriving at a time of great necessity. Its importance is reflected in research by McKinsey, which found that total annual external investment in AI was between $8 billion and $12 billion in 2016. With its increasing ability to collect vast amounts of data, AI quickly turns that information into actionable insights, providing critical strategic advantages. Chatbots can answer customer queries much faster than humans, diagnose diseases and make accurate predictions that drive product innovation. Predictive data can be used to detect fraud, match the unemployed with jobs that use their skills and solve complex traffic problems.


Are chatbots reflecting a biased world?

The most straightforward solution, it seems, would be to train programmers so that they do not unintentionally write code or use data that is biased. But neutrality can be difficult to discern during the development phase, as Beauty.AI learned when its first machine-judged beauty pageant produced winners who all had fair skin. Developers realized after the fact that the machine had not been taught to recognize people with dark skin.

Enlightened developers are not enough.

The creation of fair and balanced algorithms is critical and necessary, yet it is only part of the solution. Machines can harbor hidden biases whether or not those biases originated with the designer, because machines continuously learn from outside data in order to do their tasks better. We are designing machines to think and learn. The extreme example of continuous improvement is Elon Musk's scenario in which bots trained to eliminate spam eventually learn to wipe out the humans who create spam. While we are far from that extreme, it's important to understand that the algorithms used by AI depend on deep learning, which uses neural networks. However pure the original code, the machines will always be vulnerable to replicating the bias they see as they engage with the world.


This type of learning was evident when Microsoft's Tay chatbot imitated what it read on Twitter, regardless of how vicious the behavior. Tay lived less than 24 hours before it was corrupted by racist tweets: as it engaged in conversation and dialogue, it very quickly learned to spin those statements into racist tweets of its own.

AI can perpetuate and reinforce a bias that already exists. For example, if an organization has traditionally hired male CEOs, a bot trained to find future CEOs would look to the past for likely candidates, based on real data indicating that previous CEOs were male. The machine would learn to treat being male as a predictor of being qualified for the job, as the sketch below illustrates.
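Here is a minimal sketch of that mechanism, using made-up synthetic data and scikit-learn; the dataset and features are hypothetical, purely for illustration. A model trained on historical hires that skewed male ends up scoring two otherwise identical resumes differently based on gender alone.

```python
# Minimal sketch, with synthetic (made-up) data: a model trained on biased
# historical hiring decisions learns gender as a proxy for "qualified."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: years of experience (job-relevant) and gender (0 = female, 1 = male).
experience = rng.normal(15, 5, n)
gender = rng.integers(0, 2, n)

# Hypothetical historical label: past CEO picks favored men regardless of experience.
past_hired = (0.2 * experience + 3.0 * gender + rng.normal(0, 1, n)) > 6

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, past_hired)

# Two resumes that differ only in gender receive different scores.
same_resume = np.array([[15.0, 0.0],   # female candidate
                        [15.0, 1.0]])  # male candidate
print(model.predict_proba(same_resume)[:, 1])  # the male candidate scores higher
```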

Algorithms that only return results "similar to" previous data create a bubble of their own, as we have experienced with news feeds that are free of conflicting viewpoints. Without opposing viewpoints, we lack the insights needed for the significant decisions that foster creativity and innovation.


Value alignment is a teachable skill.

Mark O. Riedl, an associate professor in the School of Interactive Computing within Georgia Tech's College of Computing, proposes an attractive solution, called Quixote, that involves immersing bots in literature. In a paper presented at the AAAI Conference on Artificial Intelligence, he explains that strong values can be learned and "that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate." Stories, he explains, encode implicit and explicit sociocultural knowledge that could build a moral foundation in a bot and teach it how to solve a problem without harming humans.

Literature can be ambiguous and abstruse, and some of the greatest books of all time were once deemed nefarious by school boards that called for their ban. The more significant question is how we, as a society, will come to a consensus on what counts as socially responsible reading material for our bots.

AI is not a replacement for people.

The goal has never been for AI to replace humans, but rather to support, amplify and enlighten. At a minimum, AI developers can take a more active role in ensuring that biases are not inadvertently created. Human oversight is still needed at every level of development, along with a healthy mix of opposing views to encourage diversity. This must also be supported by equal opportunity for everyone to develop these systems.

Jordi Torras

CEO and Founder of Inbenta

Jordi Torras founded Inbenta in 2005 to help clients improve online relationships with their customers using revolutionary technologies like artificial intelligence and natural language processing.
