The 3 Principles of Building Anti-Bias AI

Why your company needs to apply best practices in eliminating discriminatory bias in your artificial-intelligence systems -- and key principles in applying them.

By Salil Pande

Opinions expressed by Entrepreneur contributors are their own.

In April of 2021, the U.S. Federal Trade Commission — in its "Aiming for truth, fairness, and equity in your company's use of AI" report — issued a clear warning to tech industry players employing artificial intelligence: "Hold yourself accountable, or be ready for the FTC to do it for you." Likewise, the European Commission has proposed new AI rules to protect citizens from AI-based discrimination. These warnings, and impending regulations, are warranted.

Machine learning (ML), a common type of AI, mimics patterns, attitudes and behaviors that exist in our imperfect world, and as a result, it often codifies inherent biases and systemic racism. Unconscious biases are particularly difficult to overcome, because they, by definition, exist without human awareness. However, AI also has the power to do precisely the opposite: remove inherent human bias and introduce greater fairness, equity and economic opportunity to individuals on a global scale. Put simply, AI has the potential to truly democratize the world.

AI's reputation for reflecting human bias

Just as a child observes surroundings, sees patterns and behaviors and mimics them, AI is susceptible to mirroring human biases. So, tech companies, like parents, carry the weighty responsibility of ensuring that racist, sexist and otherwise prejudiced thinking isn't perpetuated through AI applications.

Unfortunately, AI's unsavory reputation in that respect has been rightly earned. In January of 2021, for example, the entire Dutch government resigned after revelations that it had used a biased algorithm to predict which citizens were most likely to wrongly claim child benefits. Some 26,000 parents, many singled out because of their dual nationality, were forced to repay benefits to the tax authority with no right of appeal.

Other research, conducted by the Gender Shades Project (which audits facial-analysis systems for accuracy across gender and skin type), found that facial recognition machine learning is notoriously bad at accurately identifying people of color. Some of the impacts of associated applications have been catastrophic, such as identifying an innocent person as a criminal in a virtual line-up. In the summer of 2020, Detroit's police chief confessed that the facial recognition technology his department used misidentified roughly 96% of suspects, leading to innocent people being flagged as potential criminals. More recently, Amazon extended its moratorium on police use of its facial recognition software due to concerns around racial misidentification.


Principles of building anti-bias AI

But it doesn't have to be this way. AI can be created not only to be unbiased, but also to actively counter inequities of race, gender and beyond, and ironically, the only true corrective for bias in AI is human intervention. To help inform this practice, a constant feedback loop of critiques and comments from users with diverse backgrounds, experiences and demographics is key. This feedback creates a network effect, allowing developers to continually update algorithms and practices. Developers who apply diligent principles and practices can ensure their technology is impartial and can be applied to a diverse range of scenarios.

Some guiding values towards that end:

1. Design processes for developing AI/ML systems with the removal of bias in mind

Companies and teams developing AI/ML systems need to consider bias throughout the process of developing and testing algorithms, so that it is minimized or eliminated. There are multiple stages in developing AI/ML systems, and bias should not be an afterthought at any of them: teams must begin by mapping every bias that could exist in what they are building, then plan how each will be addressed at every stage of the process. Critical to this is assembling teams that are diverse in background and thinking, so the resulting AI reflects a collaboration of different perspectives.


2. Ensure data sets used to teach algorithms reflect true (global) diversity and don't unintentionally introduce bias

Just as a diverse team matters, ensuring that the data and inputs for the AI truly reflect our diverse world quells potential biases against individual groups. AI is designed to follow the rules laid out for it, so it must be trained with unbiased data. Without proper safeguards during the data collection and preparation stages, unintentional biases can creep into algorithms and later become expensive to remove, in both time and cost. Pressure-testing data and reviewing the patterns within it will help teams see both the apparent and the unintended consequences of their data sets.
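One simple form of pressure-testing is checking whether any demographic group is badly underrepresented in the training data before a model ever sees it. The sketch below is illustrative only: the `group` field and the 5% threshold are assumptions for the example, not standards, and real audits would look at many attributes and intersections.

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.05):
    """Report each group's share of the data and flag groups whose
    share falls below `threshold` (an illustrative 5% cutoff)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "share": n / total,
            "underrepresented": n / total < threshold,
        }
        for group, n in counts.items()
    }

# Toy data set skewed toward one group; group "C" is nearly absent.
sample = [{"group": "A"}] * 90 + [{"group": "B"}] * 8 + [{"group": "C"}] * 2
report = representation_report(sample, "group")
```

Here `report` would flag group "C" (a 2% share) as underrepresented, a signal to collect more data or reweight before training.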

3. Ensure a rigorous approach to eliminating biases across the development lifecycle

No matter how careful or diverse your team or data is, human biases can still slip through the cracks, so the next critical task is using anti-bias principles to prevent human biases from entering the technology. This includes actively monitoring the full ML cycle (pre-training, training and post-training) to spot them. That can be done by setting alerts on sensitive parameters and by repeatedly excluding and re-including them to compare results. Another important aspect of minimizing bias through such processes is defining relevant fairness metrics: there is no single universal definition of fairness, and each of the many candidate definitions offers a different tradeoff between fairness and other objectives.
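As an illustration of such a metric, demographic parity asks whether a model's positive-prediction rate is the same across groups. This is one definition among many (equalized odds, predictive parity and others make different tradeoffs), and the sketch below is a minimal, assumed example rather than any standard library's implementation.

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups; 0.0 means perfect demographic parity."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    positive_rate = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(positive_rate.values()) - min(positive_rate.values())

# A model that favors group "x": 75% positive rate vs. 25% for "y".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A monitoring pipeline could compute a metric like this after each retraining run and raise an alert when the gap exceeds an agreed threshold.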


Final frontier

Finally, ongoing research into the proxy problem of bias and into explainability in AI systems may ultimately lead the AI/ML community to build bias-free systems, or at least systems that can be probed and held accountable for the decisions they make. Creating a more equitable world isn't just about AI or technological innovation. While those play a role, true change can and must start with each one of us.

Salil Pande

Entrepreneur Leadership Network Contributor

CEO and Founder of VMock

Salil Pande, CEO and founder of VMock, strives to empower students and alumni to own their career development. Pande has global experience in marketing, sales and management consulting. He is an alumnus of Chicago Booth and IIT Kanpur.
