
What's Behind the Employee Revolts at Amazon, Microsoft and Google?

Tech employees at Amazon, Microsoft and Google have been in open revolt. Here's why -- and how they're using their voices to shape company policy on weapons, surveillance and more.

By Hayden Field


What do you get when you cross advanced technology with war and government surveillance? Hundreds of unsettled employees, hundreds of thousands of distressed individuals and an incalculable amount of bad PR.

At Amazon, employees are up in arms about the company's decision to sell its Rekognition facial recognition software to police departments and government agencies. The technology uses artificial intelligence (AI) to identify, track and analyze faces in real time, and Amazon claims it can recognize up to 100 people in one image and identify "people of interest" for purposes like government surveillance. In May, an investigation by the American Civil Liberties Union (ACLU) showed Amazon was actively marketing and selling the facial recognition software to government agencies.
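For readers curious what invoking such a system actually looks like, here is a minimal sketch of a face-detection call against Rekognition using AWS's public boto3 Python SDK. It is an illustration only: the bucket and image names are hypothetical placeholders, and it says nothing about how any police department or agency deploys the service.

```python
# Minimal sketch: detecting faces with AWS Rekognition via the boto3 SDK.
# The S3 bucket and object names below are hypothetical placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

response = client.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "crowd-photo.jpg"}},
    Attributes=["ALL"],  # also returns estimated age range, emotions, pose
)

for face in response["FaceDetails"]:
    box = face["BoundingBox"]  # coordinates expressed as ratios of image size
    print(f"Face at ({box['Left']:.2f}, {box['Top']:.2f}), "
          f"confidence {face['Confidence']:.1f}%")
```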

Amazon workers weren't having it. In an internal letter to CEO Jeff Bezos last week, employees mentioned the ACLU report and their fears that the software will be used to harm the most marginalized.

"Technology like ours is playing an increasingly critical role across many sectors of society," the letter says. "What is clear to us is that our development and sales practices have yet to acknowledge the obligation that comes with this. Focusing solely on shareholder value is a race to the bottom and one that we will not participate in. We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used. We learn from history, and we understand how IBM's systems were employed in the 1940s to help Hitler. IBM did not take responsibility then, and by the time their role was understood, it was too late. We will not let that happen again."

Employees called on Amazon to stop selling facial recognition services to law enforcement agencies, stop partnering with Palantir and other companies that work with U.S. Immigration and Customs Enforcement (ICE), leave the surveillance business altogether and implement strong transparency measures about which companies and agencies are using Amazon services and how. In speaking out, Amazon employees add their voices to those of many others. On Monday, civil rights, religious and community organizations visited Amazon's headquarters in Seattle, delivering more than 150,000 petition signatures, a coalition letter signed by 70 community organizations across the U.S. and a letter from company shareholders.

In a June blog post, Amazon Web Services (AWS) said the company's facial recognition software has also been used to prevent human trafficking, child exploitation and package theft.

"We believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future," wrote Dr. Matt Wood, general manager of AI at AWS. "The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm."

The company did not respond to a request for further comment and has not publicly responded to employees' open letter.

What's happening at Amazon points to a much larger tech industry trend in recent months. Employees are realizing -- and acting on -- their ability to unite to shape company policy and, with it, the trajectories of the causes they care about.

"As this particular debate is being led by employees from within the companies, it's being played out in a very public forum, which is unusual," says Alan Smeaton, professor of computing at Dublin City University. "It does point to a power struggle from within."

In a January blog post, Microsoft said it was proud to support ICE's homeland security work with its cloud services. The Trump administration's controversial "zero tolerance" policy for people who cross the border illegally made headlines in recent weeks for separating more than 2,300 children from their families. (After nationwide uproar, President Trump has since retreated on the policy, signing an executive order to keep families together.) Last week, more than 100 Microsoft employees signed an open letter to CEO Satya Nadella protesting the company's $19.4 million contract with ICE.

"We believe that Microsoft must take an ethical stand, and put children and families above profits," the letter says. "As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm."

In an internal memo to employees, Nadella denounced the family separation policy at the border as "cruel and abusive," but he also downplayed the company's involvement with ICE. "I want to be clear: Microsoft is not working with the U.S. government on any projects related to separating children from their families at the border," he wrote, noting that the cloud services provided are for "legacy mail, calendar, messaging and document management workloads." He did not lay out specific transparency guidelines for these contracts, and in response to a request for comment, a representative told Entrepreneur that Microsoft had nothing further to share.


Employees at Amazon and Microsoft aren't the only tech workers sounding an alarm.

In March, tech news site Gizmodo first reported Google's decision to employ AI to support a controversial military pilot program called Project Maven. The initiative aims to improve drone footage analysis by auto-classifying images of people and objects and could be used to make drone strikes more accurate. Google's involvement prompted about a dozen employees to resign.
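To make "auto-classifying" concrete: at its core this is ordinary image classification, the task any pretrained vision model performs. The sketch below uses an off-the-shelf torchvision ResNet as a stand-in; it illustrates the generic technique only, since the details of the Project Maven system itself are not public, and the video frame filename is a hypothetical placeholder.

```python
# Generic image-classification sketch with a pretrained torchvision model.
# A stand-in illustration of auto-classifying a frame; not Project Maven's code.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for ResNet-family models
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.jpg")  # hypothetical video frame
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"Predicted class index {top_class.item()}, "
      f"probability {top_prob.item():.2f}")
```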

"At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew," one resigning Google employee told Gizmodo in May. "I realized if I can't recommend people join here, then why am I still here?" (In related news, the company quietly removed most mentions of its longtime "don't be evil" motto from its company-wide code of conduct in late April or early May.)

Google's involvement in Project Maven led to widespread public outcry, and AI researchers across the country signed an open letter calling on the company to commit to never weaponizing its technology. Google announced in June that it would not renew its contract with the Pentagon when it expires next year and would step back altogether from partnering with the military on AI.

"We recognize that such powerful technology raises equally powerful questions about its use," Google published in a June blog post after the announcement that the company wouldn't renew its Pentagon contract. The company laid out seven principles for its future work in AI, clarifying they wouldn't be treated as theoretical concepts but rather as "concrete standards that will actively govern our research and product development." The principles themselves: Be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence and be made available for uses that accord with these principles.

Google also laid out clear guidelines for AI uses it won't pursue, although the language arguably allows for some wiggle room. As for technologies that cause or are likely to cause overall harm, the company clarified: "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints." The company says it won't collaborate on weapons or "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," as well as "technologies that gather or use information for surveillance violating internationally accepted norms" and "technologies whose purpose contravenes widely accepted principles of international law and human rights." Google declined to comment further to Entrepreneur.

When it comes to tech workers' collective power to shape company policy, the implications across industries, politics and international relations are far-reaching.

Those implications haven't been adequately explored by researchers, policymakers or tech companies, according to the 26 authors behind a recent report called "The Malicious Use of Artificial Intelligence." The authors' backgrounds span academia, civil society and industry, from the University of Cambridge to the Future of Humanity Institute. When it comes to political security, the report highlights AI's potential use for targeted propaganda and deception, such as manipulated video or synthesized human speech.

"AI researchers and the organizations that employ them are in a unique position to shape the security landscape of the AI-enabled world," says the report, which highlights the need for education, ethical standards and expectations.


The recent wave of unrest isn't the first time AI has been implicated in controversy. In 2015, Google Photos tagged African American users as gorillas, and in 2017, FaceApp's Russian developers "beautified" faces by lightening skin tones. But "what we're seeing now is different," says Smeaton. That's because those earlier controversies stemmed from skewed data used to train the technology, not from the AI technology itself.

AI researchers are growing more conscious of how work they intended for one use could be repurposed for something quite different -- perhaps even with malicious intent -- by others. Smeaton points to Cambridge University researcher Aleksandr Kogan, whose academic personality-profiling work ended up fueling Cambridge Analytica's political targeting, as a prime example. "Kogan is not the first to find that once his work is out of the box, others then take control of how it is used," he says, noting that after seeing the devastation caused by the Hiroshima and Nagasaki atomic bombs, Robert Oppenheimer opposed further development of nuclear weapons and resigned his post. "In a way, we're seeing history repeating itself," says Smeaton.

"Tech needs its talent, and its talent knows better than most of us the seismic changes technology… will unleash," says American civil rights activist Maya Wiley. "It's not all good, and we have to be people before profit-makers."

After news broke of Google's work on war technology for the Pentagon, more than 300 tech industry employees signed a petition addressed to Google, Amazon, Microsoft and IBM with one premise: Tech should not be in the business of war.

"Many of us signing this petition are faced with ethical decisions in the design and development of technology on a daily basis," it says. "We cannot ignore the moral responsibility of our work… We represent a growing network of tech workers who commit to never "just follow orders,' but to hold ourselves, each other, and the industry accountable."

