
Emerging Ethical Concerns in the Age of Artificial Intelligence

Do electric sheep need shepherds?

By Bethany Quinn

Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.


My husband and I have a running joke where we have our Amazon Echo "compete" with our iPhones to see which does a better (i.e., more human-like) job of interacting with us. There's no clear winner: Siri seems to have the edge in casual conversation, but Alexa can sing.

I've noticed something else, too. We don't usually thank Siri or Alexa the way we would a clerk at a supermarket or an employee at an information kiosk, even though they're providing us with identical services. And why would we? Siri and Alexa aren't people; they're anthropomorphized computer programs. They don't care if we thank them, because they don't have feelings.

At least, we're pretty sure they don't.

Related: Good, Bad & Ugly! Artificial Intelligence for Humans is All of This & More.

Science fiction novels have long delighted readers by grappling with futuristic challenges like the possibility of artificial intelligence so difficult to distinguish from human beings that people naturally ask, "Should these sophisticated computer programs be considered human? Should 'they' be granted human rights?" These are interesting philosophical questions, to be sure, but equally important, and more immediately pressing, is the question of what human-like artificial intelligence means for the rights of those whose humanity is not a philosophical question.

If artificial intelligence affects the way we do business, the way we obtain information, and even the way we converse and think about the world, then do we need to evaluate our existing definition(s) of human rights as well?

What are "human rights"?

Of course, what constitutes a human right is far from universally agreed. It goes without saying that not all countries guarantee the same rights to their citizens and nationals. Likewise, political support for the existing scope of rights within each country waxes and wanes -- sometimes tracking and sometimes running counter to those countries' economic fortunes and shifting cultural mores.

Historically, technological improvements and economic prosperity -- as measured by per capita GDP -- have tended to lead to an expanded view of basic human rights. The notion of universal health care as a basic right, for instance, is a relatively modern development. It did not exist -- and could not have existed -- without a robust administrative infrastructure and tax base to support it, and without sufficiently advanced medical technology to assure the population of its effectiveness.

Work to live? Live to work?

Technological advancement has always, understandably, been met with skepticism, particularly from those whose livelihoods are most likely to be affected by a technological shift. Productivity-enhancing technology is a double-edged sword: it makes the humans using it more productive, but it also raises productivity expectations and reduces the number of humans required for any given level of output. In theory, this does not have to lead to job loss, so long as demand for output continues to outpace the technologically abetted output itself.

Related: 5 Major Artificial Intelligence Hurdles We're on Track to Overcome By 2020

Do human beings have a right to earn a livelihood? And, if they do, how far does that right extend? How much discomfort is acceptable before the effort required to find gainful employment moves from reasonable to potentially rights-infringing? If technology renders human labor largely obsolete, do humans have a right to a livelihood even if they cannot earn it?

Tech industry luminaries such as Tesla CEO Elon Musk have recently endorsed concepts like guaranteed minimum income or universal basic income. A handful of experiments with this concept have been undertaken, announced or proposed in Canada, the Netherlands and elsewhere. Bill Gates recently made headlines with a proposal to impose a "robot tax" -- essentially, a tax on automated solutions to account for the social costs of job displacement. While people may differ on the effectiveness or necessity of these and other proposals, it's clear that discussion on these points will be a part of the broader AI conversation in the years to come.

Whose datum is it, anyway?

Technology challenges our conception of human rights in other ways, as well. Some of the most fascinating applications of improved artificial intelligence relate to the ability to quickly and efficiently analyze large quantities of data, finding and testing correlations and connections and translating them into usable information. "Big data" has dominated industry headlines in recent years, including speculation that a data analytics solution may have played a role in the 2016 US presidential election.

Typically, concerns around access to and use of personal data have centered on privacy. Many countries have enacted strict laws prohibiting the collection and sharing of personal data without first providing specific, detailed information about the planned use of that data and obtaining consent. Businesses safeguard their confidential information through an assortment of contractual arrangements and trade secret protection laws.

Less legal attention has been paid, however, to the anonymized use of personal or proprietary data -- that is, data that has been stripped of identifying information and aggregated alongside other data. This is partly because the question itself is inchoate: who, if anyone, has a right to impose use limitations on aggregated datasets? And on what basis might such limitations be imposed? Some data is relatively easy to obtain and has traditionally been part of a formal public record or, at a minimum, considered fair game for anyone obtaining it lawfully. This approach essentially mirrors the privacy-rights approach, in that it focuses on data at the point of collection rather than at the point of use. And yet it is clear that independent ethical concerns arise from the use of such data, standing alone.
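To make the distinction between collection and use concrete, here is a minimal sketch of what "anonymize and aggregate" typically means in practice. The records, field names and purchase figures are hypothetical, invented purely for illustration; the point is that once the identifying fields are dropped, no single person is named, yet the aggregate can still drive decisions that affect the people behind it.

```python
from collections import Counter

# Hypothetical customer records; field names and values are illustrative only.
records = [
    {"name": "A. Smith", "email": "a@example.com", "zip": "10001", "purchase": 42.50},
    {"name": "B. Jones", "email": "b@example.com", "zip": "10001", "purchase": 18.00},
    {"name": "C. Lee",   "email": "c@example.com", "zip": "94103", "purchase": 99.99},
]

IDENTIFYING_FIELDS = {"name", "email"}

def anonymize(record):
    """Drop directly identifying fields, keeping only aggregate-friendly attributes."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

anonymized = [anonymize(r) for r in records]

# Aggregate: total purchases per ZIP code. No individual is named here,
# yet the result can still be used to target, price or exclude.
totals = Counter()
for r in anonymized:
    totals[r["zip"]] += r["purchase"]

print(totals)  # e.g., Counter({'94103': 99.99, '10001': 60.5})
```

The point-of-collection rules described above would typically be satisfied before this code ever runs; the open question is what, if anything, governs what happens after it does.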

For example, consider the case of an international beauty competition that was "judged" by an AI algorithm. The algorithm was given criteria thought to be unbiased and objective, and yet the selection of winners revealed an unexpected characteristic lurking in the algorithm's operation -- racial bias. As we increasingly rely on data aggregation software not only to provide us with organized information, but to influence or direct actions, we may increasingly find ourselves asking the question -- should we have the right to ensure data is used fairly?
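The article does not describe how that contest's algorithm worked, but the kind of check that might have surfaced the problem earlier is simple to express. The following is a rough sketch, using entirely hypothetical entrant counts and the common "four-fifths" rule of thumb, of comparing selection rates across groups to flag a possible disparate impact.

```python
# Hypothetical outcomes from an automated "judge": for each group,
# how many entrants were considered and how many were selected as winners.
outcomes = {
    "group_a": {"entrants": 400, "selected": 36},
    "group_b": {"entrants": 300, "selected": 9},
}

def selection_rate(stats):
    return stats["selected"] / stats["entrants"]

rates = {group: selection_rate(stats) for group, stats in outcomes.items()}

# Four-fifths rule of thumb: if a group's selection rate falls below 80% of
# the most-favored group's rate, the outcome deserves scrutiny.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.1%}, ratio vs. best={ratio:.2f} [{flag}]")
```

A check like this does not explain why a model behaves as it does, but it at least makes the question of fair use something that can be asked of the output rather than assumed from the input.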

Related: Artificial Intelligence: A Friend or Foe for Humans?

Where do we go from here?

Of course, technological innovation likely cannot be halted, and our ability to meaningfully hinder it is questionable, even leaving aside whether it would be desirable to try. Industry groups have already formed to consider the ethical ramifications of increasingly sophisticated artificial intelligence. And while clear answers are unlikely to emerge any time soon, it will be just as important to ensure that we, collectively as a society, are asking the right questions, so that technological innovation translates into genuine progress.

Bethany Quinn

In-house counsel at Infosys

