5 Steps to Help Tech Companies Reduce Bias in AI
This five-step framework can help tech entrepreneurs prevent new biases from developing in AI and reverse biases already built into their systems.
Children inevitably adapt to the culture in which they are raised. Parents or guardians shape the lens through which children view the world, largely through the examples they set. Many parents experience amused horror when a child picks up an inappropriate word, likely from an overheard adult conversation, and begins to use it in everyday speech. It does not matter whether a parent crafts that lens intentionally or unintentionally; the child will still absorb the parent's viewpoints and habits.
We are witnessing this same progression in the tech world. Artificial intelligence systems adopt the worldview of their creators, just as children adopt the worldview of the adults who raised them. This is problematic because artificial intelligence cannot develop its own worldview, learn from its life experiences or challenge its creator's worldview the way a child might while growing up.
Here are five initiatives that will help prevent bias in tech.
1. Make tech education accessible
Artificial intelligence systems are biased, and the technology usually follows the viewpoints of its creators. While society has changed considerably in the last half-century, corporations still have underlying biases (whether they realize them or not). It’s essential that we take active steps to reverse our biases so that we can prevent further biases from developing in artificial intelligence, and the best way to do this is to make the tech industry more accessible to a wider range of people.
Initiatives like Girls Who Code, AI4ALL and other educational programs make it possible for children to develop an interest in technology. To reduce bias and make the tech industry more diverse, leaders must invest in the education of young people so that they can develop an interest in the field and build the skills necessary to pursue a career. Tech companies should invest in a range of students early on, knowing that investments in education yield long-term results.
2. Hire and promote with diversity in mind
Despite numerous call-outs of major industry leaders, the tech world still lacks diversity. The Harvard Business Review reported that leading companies like Google have only crawled ahead toward more diversity among staff members. Even nearly seven years after tech companies started reporting diversity efforts, most leading tech organizations are still falling short, with minorities making up only single-digit percentages of the overall workforce.
It goes without saying, but companies must actively hire and promote with diversity in mind. This is the most effective way to eliminate biases in artificial intelligence, because a more diverse workforce builds AI with multifaceted viewpoints and experiences in mind. And this can't stop at the entry level: diversity must extend to the top of technological leadership so that those who sign off on projects can see the blind spots in those systems.
3. Evaluate data sets
Bias is already in your data sets, and you shouldn’t ignore it. To counter biases, every AI technology developer should devote time to evaluating the data sets with which the system was created. This evaluation should take place at every stage of development, from the initial design to the final proofs.
The best way to evaluate AI for biases is to ask specific questions. The FTC provides guidelines to determine whether artificial intelligence is on the right trajectory and to clarify what is allowed (or prohibited) by law. Developers must question themselves and the technology they are creating. It is imperative that developers understand their own biases, especially the unconscious ones, and evaluate their work for them. Working to eliminate biases is not a linear process; it will take multiple back-and-forth steps.
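A data-set evaluation of the kind described above can start with something as simple as comparing outcome rates across groups. The sketch below is a minimal illustration in plain Python; the record format, the field names and the 10% tolerance are assumptions for this example, not a standard:

```python
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """Return the positive-outcome rate for each group in the data set."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def flag_divergent_groups(records, group_key, outcome_key, tolerance=0.1):
    """Flag groups whose rate differs from the overall rate by more than tolerance."""
    overall = sum(1 for r in records if r[outcome_key]) / len(records)
    rates = outcome_rates_by_group(records, group_key, outcome_key)
    return {g: rate for g, rate in rates.items() if abs(rate - overall) > tolerance}

# Hypothetical loan-approval records; group labels and fields are illustrative only.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
print(flag_divergent_groups(data, "group", "approved"))
```

A check like this won't catch every bias, but it turns "evaluate your data sets" into a repeatable, reviewable step rather than a one-time judgment call.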
4. Regularly reevaluate systems to detect bias
Rigorous evaluations can't stop at data sets. Technology is growing and changing at such a rapid pace that strategies, systems and even outcomes should be reevaluated at each step. To reverse the biases already in artificial intelligence and prevent further biases from developing, companies must check their work over and over again.
Artificial intelligence can learn new data, and thus develop new biases as the system grows and changes. Companies must understand the impact that this could have on their systems and the people using them. Artificial intelligence software should be regularly evaluated, especially from the consumer-facing interface, so that biases are accounted for and eliminated.
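One way to make this reevaluation routine is to re-run the same fairness check on each new batch of outcomes and compare it with the last accepted snapshot. This is a minimal sketch under assumptions: it uses a simple demographic-parity-style gap as the metric, and the thresholds are made up for illustration; production systems would track richer fairness metrics.

```python
def rate(outcomes):
    """Fraction of positive outcomes in a list of booleans."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def check_for_drift(baseline, current, max_gap=0.1, max_drift=0.05):
    """Re-run the parity check and compare against the last accepted snapshot."""
    gap_now = parity_gap(current)
    alerts = []
    if gap_now > max_gap:
        alerts.append(f"parity gap {gap_now:.2f} exceeds {max_gap:.2f}")
    if gap_now - parity_gap(baseline) > max_drift:
        alerts.append("gap widened since last review")
    return alerts

# Hypothetical outcomes from two review periods (illustrative data only).
baseline = {"A": [True, False], "B": [True, True, False, False]}
current = {"A": [True, True, True, False], "B": [True, False, False, False]}
for alert in check_for_drift(baseline, current):
    print("BIAS ALERT:", alert)
```

Scheduling a check like this alongside each retraining or data refresh is one way to catch the new biases a system can pick up as it grows and changes.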
5. Adjust and repeat the process
Technology has never developed linearly. The same applies to artificial intelligence: Data, processes, systems and even the bots themselves must be adjusted over time. The best avenue forward is to take a preventative approach. That means that these five steps to reduce bias in AI should be adjusted and repeated multiple times on any given system.
Human compassion is at the core of this mission toward equality in artificial intelligence. In order to create technology that serves humanity instead of harming it, we must build it with people in mind. It is a pivotal time in technology, and it’s essential that companies take a more human approach to artificial intelligence by focusing on creating systems free of bias.