
The 3 Principles of Building Anti-Bias AI

Why your company needs to apply best practices to eliminate discriminatory bias in your artificial-intelligence systems -- and the key principles for applying them.

By Salil Pande

Opinions expressed by Entrepreneur contributors are their own.

In April 2021, the U.S. Federal Trade Commission — in its guidance "Aiming for truth, fairness, and equity in your company's use of AI" — issued a clear warning to tech industry players employing artificial intelligence: "Hold yourself accountable, or be ready for the FTC to do it for you." Likewise, the European Commission has proposed new AI rules to protect citizens from AI-based discrimination. These warnings, and the impending regulations, are warranted.

Machine learning (ML), a common type of AI, mimics patterns, attitudes and behaviors that exist in our imperfect world, and as a result, it often codifies inherent biases and systemic racism. Unconscious biases are particularly difficult to overcome, because they, by definition, exist without human awareness. However, AI also has the power to do precisely the opposite: remove inherent human bias and introduce greater fairness, equity and economic opportunity to individuals on a global scale. Put simply, AI has the potential to truly democratize the world.

AI's reputation for reflecting human bias

Just as a child observes its surroundings and mimics the patterns and behaviors it sees, AI is susceptible to mirroring human biases. So tech companies, like parents, carry the weighty responsibility of ensuring that racist, sexist and otherwise prejudiced thinking isn't perpetuated through AI applications.

Unfortunately, AI's unsavory reputation in that respect has been rightly earned. For example, in January of 2021, the entire Dutch government resigned after it was revealed that tax authorities had used a biased algorithm to predict which citizens were most likely to wrongly claim child benefits. Roughly 26,000 parents, many flagged because of their dual nationality, were forced to pay back benefits to the tax authority without the right to appeal.

Other research, conducted by the Gender Shades Project (an audit of bias in commercial facial-analysis systems), found that facial recognition machine learning is notoriously bad at accurately identifying people of color. Some of the consequences have been catastrophic, such as an innocent person being identified as a criminal in a virtual line-up. In the summer of 2020, Detroit's police chief acknowledged that the facial recognition technology his department used misidentified suspects roughly 96% of the time, leading to innocent people being flagged as potential criminals. More recently, Amazon extended its moratorium on police use of its facial recognition software due to concerns about racial misidentification.

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

Principles of building anti-bias AI

But it doesn't have to be this way. AI can be built not only to be unbiased, but also to actively counter inequities relating to race, gender and other characteristics. Ironically, the only true solution to bias in AI is human intervention. To inform this practice, a constant feedback loop of critiques and comments from users with diverse backgrounds, experiences and demographics is key. That feedback loop creates a network effect, allowing developers to continually update their algorithms and practices. Developers that apply diligent principles and practices can ensure their technology is impartial and can be applied to a diverse range of scenarios.

Some guiding values towards that end:

1. Design processes for developing AI/ML systems with the removal of bias in mind

Companies and teams developing AI/ML systems need to consider bias throughout the process of developing and testing algorithms, to ensure that it is minimized or eliminated. There are multiple stages in developing AI/ML systems, and bias should not be an afterthought: teams must start by thinking through every bias that could exist in what they are building, then work out how each will be addressed at each stage of the process. Critical to this is assembling teams that are diverse in background and thought, so that the resulting AI reflects a collaboration of different perspectives.

Related: The Case for Transparent AI

2. Ensure data sets used to teach algorithms reflect true (global) diversity and don't unintentionally introduce bias

Just as a diverse team matters, ensuring that the data and inputs feeding the AI truly reflect our diverse world guards against potential biases toward individual groups. AI is designed to follow the rules laid out for it, so it must be trained with unbiased data. Without proper care during the data-collection and preparation stages, unintentional biases can creep into algorithms and later become expensive to remove, in both time and cost. Pressure-testing data and reviewing the patterns within it will help teams see both the apparent and the unintended consequences of their data sets.
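As a simple illustration of pressure-testing a data set (a sketch of my own, not a prescription from the article), a team might compare how well each group is represented in its training data against a reference benchmark before any model is trained. The file name, column name and benchmark shares below are hypothetical.

```python
import pandas as pd

# Hypothetical training data; the file and the 'gender' column are assumptions for this sketch.
df = pd.read_csv("training_data.csv")

# Assumed benchmark shares for the population the model will serve.
reference = {"female": 0.50, "male": 0.50}

observed = df["gender"].value_counts(normalize=True)

# Report the gap between observed and expected representation for each group.
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    print(f"{group}: observed {share:.1%}, expected {expected:.1%}, gap {share - expected:+.1%}")

# Flag groups underrepresented by more than 5 percentage points so the data
# can be re-sampled or supplemented before training begins.
underrepresented = [g for g, e in reference.items() if observed.get(g, 0.0) < e - 0.05]
print("Underrepresented groups:", underrepresented or "none")
```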

3. Ensure a rigorous approach to eliminating biases across the development lifecycle

No matter how careful or diverse your team or data is, human biases can still slip through the cracks, so the next critical task is using anti-bias principles to prevent human biases from entering the technology. This includes actively monitoring the ML cycle (pre-training, training and post-training) to spot them, which can be done by raising alarms on sensitive parameters and by repeatedly excluding and re-including those parameters to test how they influence results. Another important aspect of minimizing bias through such processes is defining relevant fairness metrics: there is no universal definition of fairness, and each candidate metric offers a different tradeoff between fairness and other objectives. The sketch below illustrates two such metrics.
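To make "fairness metric" concrete, here is a minimal sketch (my own illustration, not code from the article) computing two commonly used metrics for a binary classifier: demographic parity difference and equal-opportunity difference. The toy arrays are invented purely for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates between group 0 and group 1."""
    tpr = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tpr.append(y_pred[positives].mean())
    return abs(tpr[0] - tpr[1])

# Toy predictions and group labels, invented for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

A team would pick whichever metric matches the tradeoffs of its application, then raise an alarm whenever the chosen gap exceeds an agreed threshold during pre-training, training or post-training checks.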

Related: Learn How Machine Learning Can Help Your Business

Final frontier

Finally, ongoing research into explainability in AI systems, which addresses the closely related proxy problem of bias, may ultimately lead the AI/ML community to build bias-free systems, or at least systems that can be probed and held accountable for the decisions they make. Creating a more equitable world isn't just about AI or technological innovation. While they play a role, true change can and must start with each one of us.

Salil Pande

CEO and Founder of VMock

Salil Pande, CEO and founder of VMock, strives to empower students and alumni to own their career development. Pande has global experience in marketing, sales and management consulting. He is an alumnus of Chicago Booth and IIT Kanpur.

