
Artificial Intelligence May Reflect the Unfair World We Live In

Our dilemma: Do we "adjust" the neural networks we're creating to make them more fair in an unfair world, or address bias and prejudice in real life?

By Sophia Arakelyan Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.


We've all heard Elon Musk speak with foreboding about the danger artificial intelligence (AI) poses -- something he says could even bring forth a third world war.

Related: 5 Major Artificial Intelligence Hurdles We're on Track to Overcome by 2020

But let's put aside for a moment Musk's claims about the threat of human extinction and look instead at the present-day risk AI poses.

This risk, which may already be commonplace in the technology business, is bias in the learning process of artificial neural networks.

This notion of bias may not be as alarming as that of "killer" artificial intelligence -- something Hollywood has conditioned us to fear. But, in fact, a plethora of evidence suggests that AI systems have developed biases against racial minorities and women.

The proof? Consider the racial discrimination practiced against people of color who apply for loans. One reason may be that financial institutions are applying machine-learning algorithms to the data they collect about applicants, looking for patterns that classify a borrower as a good or bad credit risk.
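To see how that pattern-finding can go wrong, consider a deliberately simplified sketch (all data and names here are invented for illustration): a "model" that merely learns each group's historical approval rate will faithfully reproduce whatever discrimination shaped that history.

```python
# Hypothetical illustration: a naive model trained on historically biased
# loan decisions simply reproduces the bias present in its training data.
from collections import defaultdict

# Invented (group, approved) pairs from a biased lending history:
# group "A" was approved 80% of the time, group "B" only 40%.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 4 + [("B", False)] * 6

def train_approval_rates(records):
    """Learn each group's historical approval rate (a trivial 'model')."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

model = train_approval_rates(history)

# Predict approval whenever the learned rate exceeds 0.5 -- applicants
# from group B are now rejected purely because of past discrimination.
predictions = {g: rate > 0.5 for g, rate in model.items()}
print(model)        # {'A': 0.8, 'B': 0.4}
print(predictions)  # {'A': True, 'B': False}
```

Real credit models are far more sophisticated, but the failure mode is the same: if the historical labels encode bias, the learned patterns encode it too.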

Think, too, about those AI-powered advertisements that portray the best jobs being performed by men, not women. Research by Carnegie Mellon University showed that in certain settings, Google online ads promising applicants help getting jobs paying more than $200,000 were shown to significantly fewer women than men.

That raised questions about the fairness of targeting ads online.

And, how about Amazon's refusal to provide same-day delivery service to certain zip codes whose populations were predominantly black?

Cases like these suggest that human bias against race and gender has been transferred by AI professionals into machine intelligence. The result is that AI systems are being trained to reflect the general opinions, prejudices and assumptions of their creators, particularly in the fields of lending and finance.

Because of these biases, experts are already striving to implement a greater degree of fairness within AI. "Fairness" in this context means an effort to find, in the data, representations of the real world. These, in turn, can help model predictions which will follow a more diverse global belief system that doesn't discriminate with regard to race, gender or ethnicity.
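One concrete way practitioners quantify this kind of fairness is "demographic parity": comparing the rate of favorable outcomes across groups. The sketch below uses invented data; the metric itself is standard, but the numbers and group labels are hypothetical.

```python
# Minimal sketch of a demographic-parity check: a large gap between
# groups' favorable-outcome rates flags a potential disparity.

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, favorable) pairs.
    Returns (max rate gap across groups, per-group rates)."""
    totals, favorable = {}, {}
    for group, fav in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(fav)
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented decisions: men favored 70% of the time, women 50%.
decisions = [("men", True)] * 7 + [("men", False)] * 3 + \
            [("women", True)] * 5 + [("women", False)] * 5

gap, rates = demographic_parity_gap(decisions)
print(rates)  # {'men': 0.7, 'women': 0.5}
print(gap)    # ~0.2 -- a nonzero gap signals a potential disparity
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and they cannot all be satisfied at once, which is part of why this remains an active research area.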


There is research backing up the threat: In 2014, the Journal of Personality and Social Psychology published the results of an experiment conducted by Justin Friesen, Troy Campbell and Aaron Kay. The experiment demonstrated that people have the tendency to strongly adhere to their beliefs. Even in the face of contradictory logic and scientific evidence, test subjects in the experiment steadfastly refused to change their opinions, and instead claimed moral superiority.

During the experiment, test subjects made several unsubstantiated statements with no grounding in common sense, which nonetheless allowed them to maintain their version of the truth. More than anything, apparently, they -- and people in general -- have the desire to always be correct.

The issue here may originate with the data scientists responsible for training the artificial "neural networks" involved. Neural networks are interconnected groups of nodes, loosely modeled on the vast network of neurons in a human or animal brain. Data scientists are "training" them for organizations with tremendous social impact and presence.

So, the implication is that these scientists may unconsciously be transferring to the AI they develop their personal core-belief structures and opinions regarding minorities, gender, and ethnicity.

In addition, the technological leaps being made aren't always clear to us to begin with: Neural networks remain a black box. It's hard to comprehend why they make the predictions they do -- and this remains a major area of ongoing research.

What can be done

One method being used to gain more insight into the problem is attention-based neural-network architectures, which help shed light on what the network actually "sees" and focuses on when it makes predictions.
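The core idea behind attention can be sketched in a few lines: raw relevance scores are passed through a softmax to produce weights that sum to one, showing which inputs the model "looks at." The feature names and scores below are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch of attention weights as an inspection tool:
# a softmax turns raw relevance scores into a probability distribution
# over input features, revealing which inputs drive a prediction.
import math

def softmax(scores):
    """Convert raw scores into attention weights summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = ["income", "zip_code", "payment_history"]
scores = [2.0, 0.1, 3.0]  # invented relevance scores from a model

weights = softmax(scores)
for name, w in zip(features, weights):
    print(f"{name}: {w:.2f}")

# If "zip_code" carried a large weight in a lending model, it might be
# acting as a proxy for race -- exactly what such inspection can reveal.
```

This is a toy rendering of the idea, not a faithful implementation of any particular attention architecture, but it captures why attention weights are useful for auditing: they are directly interpretable as "where the model looked."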

Another potential solution might be to legally require across-the-board transparency from financially and socially significant organizations that regularly employ machine learning in their decision-making procedures.

Under such a scenario, organizations would be forced to reveal their modus operandi for data manipulation and results-generation, in order to deter other organizations from following in their footsteps and further negatively impacting society.

Still, even if full transparency were to be applied and data scientists sincerely tried to feed neural networks with correct data, injustice would still exist in the world at large. In business, for example, women hold fewer C-level positions in companies than men, and African Americans earn less on average than whites. This is the reality in our culture, and unfair neural nets are simply its byproduct.

Related: Making Machine Learning Accessible: 3 Ways Entrepreneurs Can Apply It Today

So, our dilemma is whether to "adjust" neural networks to make them more fair in an unfair world, or address the prime causes of bias and prejudice in real life and give more opportunities to women and minorities. In this way we would (hopefully) see data naturally improve, to reflect those positive trends over time.

A truly progressive society should opt for the second option. Meanwhile, entrepreneurs already in, or contemplating getting into, the AI field might want to develop AI products and deal with the data collected from millions of users in a way that's cognizant of the potential biases it might contain. In other words, they need to ask if bias exists in their businesses and be more conscious of such scenarios.

The issue of bias carries reminders of the era when home computer use became the norm, and when many hacking attacks occurred. Back then, ethical hackers appeared, and exposed vulnerabilities in systems -- and they're still on the case.

Similarly, in this age of AI development, ethical groups of AI experts can and should step in to expose biases to save us from those "monster" scenarios of societal damage such biases could bring.

Sophia Arakelyan

Writer and Founder, BuZZrobot

Sophia Arakelyan is a writer and founder of a publication about AI: buZZrobot. She is an award-winning journalist and has worked with BusinessWeek Russia, where she covered finance/banking and small business topics. An article she wrote highlighting issues in the Russian small business ecosystem was recognized as an Article of the Year in 2008 by the Russian National Press Award, "PressCalling." She founded buZZrobot to deliver a clear explanation of what AI technology is and to cover practical aspects of the field.

