How Can You Tell If AI Is Being Used Ethically? Here Are 3 Things to Look for. What should we look for to know if a company or a team has built an ethical technology?
By Karim Nurani Edited by Chelsea Brown
Opinions expressed by Entrepreneur contributors are their own.
AI has been a topic of great interest. We're all amazed by its potential and the impact it may have on our lives, largely because AI is the first tool in history that can make decisions on its own. Take ChatGPT as an example: it draws on more information than any single person could ever hold. This tool can be a force for enormous good. Imagine what AI could do in healthcare, with its enormous databases of genes, medicines, disease symptoms and drug interactions. It could literally save lives. But that's also a huge responsibility we're placing on a technology we haven't even begun to fully understand.
As investors, entrepreneurs and users, we directly shape where this technology goes, and we are setting the stage for where it will end up.
Related: What Are Some of the Ethical Concerns of Artificial Intelligence?
The hype around AI
AI is a technology that has been in the making for many years. We've seen ups and downs along the way, but lately the hype has surged. AI is having its moment now for two simple reasons:
There's more data available now than ever before, and AI feeds and grows from data
Computing and data storage have become exponentially cheaper, so the technology required to train AI is now powerful and affordable enough to be broadly accessible
We've seen AI in every aspect of our lives: the way we shop, search for information and pay, and we'll continue to see it enter our lives and our decision-making in more and more ways. The applications are endless, from conversational agents (like Apple's Siri or Amazon's Alexa) to new ways of doing specific tasks, like creating the graphics for a marketing campaign or an ad.
In healthcare, for example, AI is enhancing diagnoses, treatments and patient care. Machine learning algorithms can analyze medical data, detect diseases at early stages and predict patient outcomes. AI-powered systems have improved medical imaging, enabling more accurate and faster diagnoses.
In finance, AI is optimizing processes from fraud detection to customer experience personalization. AI algorithms can analyze vast volumes of financial data in real time, helping banks and financial institutions detect anomalies and prevent fraud.
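To make the anomaly detection idea above a little more concrete, here is a minimal sketch of one common approach, an isolation forest trained on transaction features. It is an illustration only, not any particular bank's system; the feature names, the synthetic data and the contamination rate are assumptions made for the example.

```python
# Minimal sketch of transaction anomaly detection with an isolation forest.
# The features (amount, hour, merchant_risk), the synthetic data and the
# contamination rate are illustrative assumptions, not a production setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
transactions = pd.DataFrame({
    "amount": rng.lognormal(mean=3.0, sigma=1.0, size=1000),
    "hour": rng.integers(0, 24, size=1000),
    "merchant_risk": rng.random(size=1000),
})

# An isolation forest flags points that are unusually easy to isolate.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

In practice, flagged transactions would go to human reviewers rather than being blocked automatically, which is itself an ethical design choice.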
AI has revolutionized the manufacturing sector through automation, predictive maintenance and quality control. AI-powered robots and machines automate repetitive tasks, improving efficiency and reducing errors.
But how can we make this technology, which is already part of our lives in a very intimate way, actually supportive? As creators of this technology, it's our responsibility to give it use cases that support human potential rather than diminish it, which is why ethics matters.
Related: Emerging Ethical Concerns In the Age of Artificial Intelligence
3 things to look for to find ethical AI
1. AI's general parameters:
Responsible and ethical AI starts with knowing how to assess the technology, the company and the team developing it. It can be hard to establish where to draw the line. However, there are general parameters that set the stage: human rights, fundamental freedoms and human dignity are the cornerstones for assessing whether a technology is augmenting our capabilities or whether it's going to hurt us or others (especially minority groups).
For example, gender has historically been excluded from many areas of scientific research, and AI learns from data and past examples. If AI learns from examples that carry the bias and discrimination embedded in our society, we end up reinforcing those patterns instead of breaking them. AI holds the promise of solving more problems than it creates, but we can't ignore challenges like equitable outcomes and personal privacy. It's important to question the team and the company to understand how they are working to break bias. The goal is the most thoughtful application of the knowledge AI is building.
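One simple question to ask a team is whether they measure outcomes across groups at all. Below is a minimal sketch of one such check, a demographic parity gap comparing positive-prediction rates between groups; the data, group labels and the 0.1 tolerance are hypothetical assumptions for illustration, not a standard.

```python
# Minimal sketch of a demographic parity check on hypothetical predictions.
# The data, group labels and the 0.1 tolerance are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 0, 1, 0, 0, 1, 0, 1],  # model's predicted outcome
})

# Positive-prediction rate per group, and the gap between groups.
rates = results.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # assumed tolerance for this illustration
    print("Large gap between groups: investigate the training data and features.")
```

A check like this doesn't prove a system is fair, but a team that can't show anything of the kind probably isn't asking the question.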
2. The team:
It's key to have people on the board and in the organization who are watching closely. Companies always have highly trained financial professionals who oversee their financial health, and the same goes for regulatory compliance. Today, there should also be experts who assess the intended and unintended consequences of AI and its impact.
Diversity in the team fosters a more ethical company. When you have different cultures, ages and personalities, you're challenged to see more perspectives. Bringing those perspectives into the development journey is essential for an inclusive final product.
3. Trusting AI:
Marketing and sales experts know that relationships and trust with customers are fundamental drivers of business outcomes; trust is key for business. As users and customers of AI, we feel that impact too. How can creators enable a trusting relationship between the technology and its customers? We live in a world where distrust of institutions and governments has become more common. In other words, we are subject to manipulation because there is no longer a single source of truth, and that destabilizes society. Today, completely trustworthy AI tools are rare, but we may reach a point where they are no longer a nice-to-have but a must.
Related: AI Isn't Evil — But Entrepreneurs Need to Keep Ethics in Mind As They Implement It
Unfortunately, there are no regulations or certifications today that show whether a company is building an AI model ethically. But if we question the development journey, we're more likely to uncover the principles upon which the technology was built.
The important part is that we are all integral to the ecosystem. The incredible thing about AI is that we are all relevant enough to have an impact, even if we are not experts working in the industry. We are already involved: the data comes from us, from our decisions and actions, and our participation is likely to keep growing. So, after all, ethics might be up to us.