
Why Are Some Bots Racist? Look at the Humans Who Taught Them. Here are a few things we can do about human biases in machine learning.

By Jordi Torras


Opinions expressed by Entrepreneur contributors are their own.


With their scientific algorithms, we trusted machines to deliver neutral, fair, impartial answers. Because they are supposed to be free of human biases and the filters of past experience, we assume that, just as 2 + 2 = 4, mathematical formulas are black and white. Instead, we have found that the data scientists who create these algorithms carry their own unconscious biases, which subtly filter down into the algorithms and shape their output. More revealing still, even when no bias is introduced during development, machines learn from the discriminatory undertones they perceive in our society.


We can see this bias at play when we use Google to search for images of CEOs: significantly more white males appear than women or minorities, and ads for higher-paying jobs are shown more often to men than to women. An algorithm designed to predict a person's likelihood of committing a future crime turned out to be racially biased. Researchers at Boston University and Microsoft Research New England found that word embeddings trained on text associated the word "programmer" with the word "man," rather than "woman," and that the word most similar to "woman" was "homemaker."
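The embedding finding above comes from answering analogy queries with vector arithmetic. Here is a minimal sketch of that mechanism, using hand-picked 2-D toy vectors (the vocabulary and coordinates are invented for illustration; real embeddings are learned from text and have hundreds of dimensions):

```python
# Toy illustration of how word-embedding arithmetic surfaces learned
# associations: the analogy "man : programmer :: woman : ?" is answered
# by the vector offset programmer - man + woman.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Hypothetical vectors that encode the biased co-occurrence pattern the
# researchers observed in embeddings trained on real text.
vectors = {
    "man":        (1.0, 0.0),
    "woman":      (-1.0, 0.0),
    "programmer": (1.0, 1.0),
    "homemaker":  (-1.0, 1.0),
    "engineer":   (0.9, 1.1),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? via b - a + c, excluding the query words."""
    target = add(sub(vectors[b], vectors[a]), vectors[c])
    candidates = (w for w in vectors if w not in (a, b, c))
    return min(candidates, key=lambda w: dist(vectors[w], target))

print(analogy("man", "programmer", "woman"))  # the offset lands on "homemaker"
```

Nothing in the arithmetic is malicious; the bias lives entirely in the geometry the vectors inherited from the training text.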

AI's predictive power creates solutions.

Artificial intelligence (AI) is arriving at a time of great need. Its importance is reflected in research conducted by McKinsey, which found that total annual external investment in AI was between $8 billion and $12 billion in 2016. AI can collect vast amounts of data and quickly turn that information into actionable insights, providing critical strategic advantages. Chatbots can answer customer queries far faster than humans, help diagnose diseases and make accurate predictions that drive product innovation. Predictive models can be used to detect fraud, match the unemployed with jobs that fit their skills and untangle complex traffic problems.


Are chatbots reflecting a biased world?

The most straightforward solution, it seems, would be to train programmers not to unintentionally write biased code or use biased data. But neutrality can be difficult to discern during development, as Beauty.AI learned when its first machine-judged beauty pageant produced winners who all had fair skin. The developers realized after the fact that the machine had not been trained to recognize people with dark skin.
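A coverage gap like Beauty.AI's can be sketched in a few lines. This is a deliberately simplified, hypothetical scorer (the brightness values and tolerance rule are invented, not Beauty.AI's actual method): a model calibrated only on light-skinned samples treats anything outside that range as unrecognizable.

```python
# Sketch of how a gap in training data becomes a biased model: a scorer
# that accepts inputs only within the range it saw during training.

# Hypothetical training set: average pixel brightness (0-255) of face
# crops -- all from fair-skinned subjects.
train_faces = [190, 200, 210, 205]

centroid = sum(train_faces) / len(train_faces)
# Accept anything within the spread observed during training.
tolerance = max(abs(x - centroid) for x in train_faces)

def resembles_training_faces(brightness):
    """True only for inputs that look like the (narrow) training data."""
    return abs(brightness - centroid) <= tolerance

print(resembles_training_faces(195))  # True: within the training range
print(resembles_training_faces(90))   # False: darker skin was never in the data
```

No line of this code mentions race, yet the model systematically excludes everyone the training set omitted — which is exactly why neutrality is hard to verify by reading the code alone.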

Enlightened developers are not enough.

Creating fair and balanced algorithms is critical and necessary, yet it is only part of the solution. Machines can harbor hidden biases whether or not those biases originated with the designer, because machines continuously learn from outside data in order to do their jobs better. We are designing machines to think and learn. The extreme example of this continuous improvement is Elon Musk's scenario in which bots trained to eliminate spam might eventually learn to wipe out the humans who create it. While we are far from that extreme, it is important to understand that the algorithms used by AI depend on deep learning, which uses neural networks. However pure the original code, the machines will remain vulnerable to replicating the bias they see as they engage with the world.


This type of learning was evident when Microsoft's Tay chatbot imitated what it read on Twitter, no matter how vicious. Tay lasted less than 24 hours before racist tweets took hold: as it learned through conversation and dialogue, it very quickly began spinning those statements into racist tweets of its own.

AI can perpetuate and reinforce a bias that already exists. For example, if an organization has traditionally hired male CEOs, a bot trained to find future CEOs would look to the past for likely candidates, drawing on real data indicating that previous CEOs were male. The machine would then treat being male as a predictor of being qualified for the job.
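The hiring example above can be sketched concretely. This toy model (the records, attributes and scoring rule are all invented for illustration) scores candidates by how closely their attributes match past CEOs, and so rediscovers the historical gender skew as if it were a qualification:

```python
# Minimal sketch of bias inherited from history: candidates are scored by
# the fraction of past CEOs who share each of their attributes.

# Hypothetical historical records: every past CEO is male, reflecting
# past hiring practice, not merit.
past_ceos = [
    {"gender": "male", "mba": True},
    {"gender": "male", "mba": False},
    {"gender": "male", "mba": True},
    {"gender": "male", "mba": True},
]

def attribute_score(candidate):
    """Average, over attributes, of the share of past CEOs who match."""
    scores = []
    for key, value in candidate.items():
        matches = sum(1 for ceo in past_ceos if ceo[key] == value)
        scores.append(matches / len(past_ceos))
    return sum(scores) / len(scores)

alice = {"gender": "female", "mba": True}
bob = {"gender": "male", "mba": True}

print(attribute_score(bob))    # 0.875: rewarded for matching the all-male history
print(attribute_score(alice))  # 0.375: penalized purely for gender
```

Two equally credentialed candidates receive very different scores, and the gap comes entirely from an attribute that says nothing about ability — the model has simply learned the past and is projecting it forward.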

Algorithms that only return results "similar to" previous data create a bubble of their own, as we have experienced with news feeds scrubbed of conflicting viewpoints. Without opposing viewpoints, we lack the insights needed for significant decisions and for fostering creativity and innovation.


Value alignment is a teachable skill.

Mark O. Riedl, an associate professor in the School of Interactive Computing at Georgia Tech's College of Computing, proposes an attractive solution, called Quixote, that involves immersing bots in literature. In a paper presented at the AAAI Conference on Artificial Intelligence, he explains that strong values can be learned, and "that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate." Stories, he argues, encode implicit and explicit sociocultural knowledge that could build a moral foundation in a bot and teach it to solve problems without harming humans.

Literature can be ambiguous and abstruse, and some of the greatest books of all time were once branded nefarious by school boards that called for their ban. The more significant question is how we, as a society, will reach consensus on what counts as socially responsible reading material for our bots.

AI is not a replacement for people.

The goal has never been for AI to replace humans, but to support, amplify and enlighten them. At a minimum, AI developers can take a more active role in ensuring biases are not inadvertently built in. Human oversight is still needed at every level of development, along with a healthy mix of opposing views to encourage diversity. This must also be supported by equal opportunity for everyone to help build these systems.

Jordi Torras

CEO and Founder of Inbenta

Jordi Torras founded Inbenta in 2005 to help clients improve online relationships with their customers using revolutionary technologies like artificial intelligence and natural language processing.

