Get All Access for $5/mo

Industry's Top CEOs Say AI Has The Potential to Destroy Humanity in 5 to 10 Years. Here's Why We Need to Act – Now. 42% of the CEOs indicated that artificial intelligence (AI) could spell the end of humanity within the next decade. Here's what we know.

By Gleb Tsipursky Edited by Maria Bailey

Opinions expressed by Entrepreneur contributors are their own.

At a CEO summit in the hallowed halls of Yale University, 42% of the CEOs indicated that artificial intelligence (AI) could spell the end of humanity within the next decade. These aren't the leaders of small business: this is 119 CEOs from a cross-section of top companies, including Walmart CEO Doug McMillion, Coca-Cola CEO James Quincy, the leaders of IT companies like Xerox and Zoom as well as CEOs from pharmaceutical, media and manufacturing.

This isn't a plot from a dystopian novel or a Hollywood blockbuster. It's a stark warning from the titans of industry who are shaping our future.

The AI extinction risk: A laughing matter?

It's easy to dismiss these concerns as the stuff of science fiction. After all, AI is just a tool, right? It's like a hammer. It can build a house or it can smash a window. It all depends on who's wielding it. But what if the hammer starts swinging itself?

The findings come just weeks after dozens of AI industry leaders, academics, and even some celebrities signed a statement warning of an "extinction" risk from AI. That statement, signed by OpenAI CEO Sam Altman, Geoffrey Hinton, the "godfather of AI," and top executives from Google and Microsoft, called for society to take steps to guard against the dangers of AI.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. This isn't a call to arms. It's a call to awareness. It's a call to responsibility.

It's time to take AI risk seriously

The AI revolution is here, and it's transforming everything from how we shop to how we work. But as we embrace the convenience and efficiency that AI brings, we must also grapple with its potential dangers. We must ask ourselves: Are we ready for a world where AI has the potential to outthink, outperform, and outlast us?

Business leaders have a responsibility to not only drive profits but also safeguard the future. The risk of AI extinction isn't just a tech issue. It's a business issue. It's a human issue. And it's an issue that requires our immediate attention.

The CEOs who participated in the Yale survey are not alarmists. They are realists. They understand that AI, like any powerful tool, can be both a boon and a bane. And they are calling for a balanced approach to AI — one that embraces its potential while mitigating its risks.

Related: Read This Terrifying One-Sentence Statement About AI's Threat to Humanity Issued by Global Tech Leaders

The tipping point: AI's existential threat

The existential threat of AI isn't a distant possibility. It's a present reality. Every day, AI is becoming more sophisticated, more powerful and more autonomous. It's not just about robots taking our jobs. It's about AI systems making decisions that could have far-reaching implications for our society, our economy and our planet.

Consider the potential of autonomous weapons, for example. These are AI systems designed to kill without human intervention. What happens if they fall into the wrong hands? Or what about AI systems that control our critical infrastructure? A single malfunction or cyberattack could have catastrophic consequences.

AI represents a paradox. On one hand, it promises unprecedented progress. It could revolutionize healthcare, education, transportation and countless other sectors. It could solve some of our most pressing problems, from climate change to poverty.

On the other hand, AI poses a peril like no other. It could lead to mass unemployment, social unrest and even global conflict. And in the worst-case scenario, it could lead to human extinction.

This is the paradox we must confront. We must harness the power of AI while avoiding its pitfalls. We must ensure that AI serves us, not the other way around.

The AI alignment problem: Bridging the gap between machine and human values

The AI alignment problem, the challenge of ensuring AI systems behave in ways that align with human values, is not just a philosophical conundrum. It's a potential existential threat. If not addressed properly, it could set us on a path toward self-destruction.

Consider an AI system designed to optimize a certain goal, such as maximizing the production of a particular resource. If this AI is not perfectly aligned with human values, it might pursue its goal at all costs, disregarding any potential negative impacts on humanity. For instance, it might over-exploit resources, leading to environmental devastation, or it might decide that humans themselves are obstacles to its goal and act against us.

This is known as the "instrumental convergence" thesis. Essentially, it suggests that most AI systems, unless explicitly programmed otherwise, will converge on similar strategies to achieve their goals, such as self-preservation, resource acquisition and resistance to being shut down. If an AI becomes superintelligent, these strategies could pose a serious threat to humanity.

The alignment problem becomes even more concerning when we consider the possibility of an "intelligence explosion" — a scenario in which an AI becomes capable of recursive self-improvement, rapidly surpassing human intelligence. In this case, even a small misalignment between the AI's values and ours could have catastrophic consequences. If we lose control of such an AI, it could result in human extinction.

Furthermore, the alignment problem is complicated by the diversity and dynamism of human values. Values vary greatly among different individuals, cultures and societies, and they can change over time. Programming an AI to respect these diverse and evolving values is a monumental challenge.

Addressing the AI alignment problem is therefore crucial for our survival. It requires a multidisciplinary approach, combining insights from computer science, ethics, psychology, sociology, and other fields. It also requires the involvement of diverse stakeholders, including AI developers, policymakers, ethicists and the public.

As we stand on the brink of the AI revolution, the alignment problem presents us with a stark choice. If we get it right, AI could usher in a new era of prosperity and progress. If we get it wrong, it could lead to our downfall. The stakes couldn't be higher. Let's make sure we choose wisely.

Related: As Machines Take Over — What Will It Mean to Be Human? Here's What We Know.

The way forward: Responsible AI

So, what's the way forward? How do we navigate this brave new world of AI?

First, we need to foster a culture of responsible AI. This means developing AI in a way that respects our values, our laws, and our safety. It means ensuring that AI systems are transparent, accountable and fair.

Second, we need to invest in AI safety research. We need to understand the risks of AI and how to mitigate them. We need to develop techniques for controlling AI and for aligning it with our interests.

Third, we need to engage in a global dialogue on AI. We need to involve all stakeholders — governments, businesses, civil society and the public — in the decision-making process. We need to build a global consensus on the rules and norms for AI.

The choice is ours

In the end, the question isn't whether AI will destroy humanity. The question is: Will we let it?

The time to act is now. Let's take the risk of AI extinction seriously — as do nearly half of the top business leaders. Because the future of our businesses — and our very existence — may depend on it. We have the power to shape the future of AI. We have the power to turn the tide. But we must act with wisdom, with courage, and with urgency. Because the stakes couldn't be higher. The AI revolution is upon us. The choice is ours. Let's make the right one.

Gleb Tsipursky

CEO of Disaster Avoidance Experts

Dr. Gleb Tsipursky, CEO of Disaster Avoidance Experts, is a behavioral scientist who helps executives make the wisest decisions and manage risks in the future of work. He wrote the best-sellers “Never Go With Your Gut,” “The Blindspots Between Us,” and "Leading Hybrid and Remote Teams."

Want to be an Entrepreneur Leadership Network contributor? Apply now to join.

Side Hustle

At Age 15, He Used Facebook Marketplace to Start a Side Hustle — Then It Became Something Much Bigger: 'Raised Over $1.6 Million'

Dylan Zajac, now a 21-year-old senior at Babson College, wanted to bridge the digital divide.

Innovation

These Entrepreneurs Created a League That Turns Gamers Into Pro Race Car Drivers: 'We're Giving Drivers a Sustainable Career Path'

Racing Prodigy's innovative E2Real sports league is lowering the high-cost barrier to entry for drivers to take their passion to the track.

Business News

OpenAI Just Released Its Text-to-Video Generator, Sora. Here's How the New AI Could Impact Small Businesses and Creators.

Sora has a variety of use cases for businesses, from social media campaigns to video creation.

Business Ideas

63 Small Business Ideas to Start in 2024

We put together a list of the best, most profitable small business ideas for entrepreneurs to pursue in 2024.

Business News

'Faster, Smarter, and More Relevant': Reddit Tests AI That Combs the Site For You

The AI is like a blend of Google and ChatGPT, tailored specifically for Reddit.

Growing a Business

This Cozy Coffee and Garden Shop Has Become a Staple in Its Community By Following 5 Smart Strategies

Maypop is a combination coffee and garden shop where a blend of community building and customer service creates an unforgettable experience.