
Now that OpenAI's Superalignment Team Has Been Disbanded, Who's Preventing AI from Going Rogue?

We spoke to an AI expert who says safety and innovation are not separate things that must be balanced; they go hand in hand.

By Sherin Shibu Edited by Melissa Malamut

Key Takeaways

  • Former OpenAI research lead Jan Leike and chief scientist Ilya Sutskever resigned last week.
  • Leike said he left because he felt safety took a backseat to new products at OpenAI.
  • One AI expert tells "Entrepreneur" that safety and innovation are not separate things that need to be balanced — they should go hand in hand.

How do we prevent AI from going rogue?

OpenAI, the $80 billion AI company behind ChatGPT, just dissolved the team tackling that question — after the two executives in charge of the effort left the company.

The AI safety controversy comes less than a week after OpenAI announced a new AI model, GPT-4o, with more functionality — and a voice eerily similar to Scarlett Johansson's. The company paused the rollout of that particular voice on Monday.

Related: Scarlett Johansson 'Shocked' That OpenAI Used a Voice 'So Eerily Similar' to Hers After Already Telling the Company 'No'

Sahil Agarwal, a Yale PhD in applied mathematics who co-founded and currently runs Enkrypt AI, a startup focused on making AI less of a risky bet for businesses, told Entrepreneur that innovation and safety are not separate things that need to be balanced, but rather two things that go hand in hand as a company grows.

"You're not stopping innovation from happening when you're trying to make these systems more safe and secure for society," Agarwal said.

OpenAI Exec Raises Safety Concerns

Last week, the former OpenAI chief scientist and co-founder Ilya Sutskever and former OpenAI research lead Jan Leike both resigned from the AI giant. The two were tasked with leading the superalignment team, which ensures that AI is under human control, even as its capabilities grow.

Related: OpenAI Chief Scientist, Cofounder Ilya Sutskever Resigns

While Sutskever stated he was "confident" that OpenAI would build "safe and beneficial" AI under CEO Sam Altman's leadership in his parting statement, Leike said he left because he felt OpenAI did not prioritize AI safety.

"Over the past few months my team has been sailing against the wind," Leike wrote. "Building smarter-than-human machines is an inherently dangerous endeavor."

Leike also said that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI and called for the ChatGPT-maker to put safety first.

OpenAI dissolved the superalignment team that Leike and Sutskever led, the company confirmed to Wired on Friday.

Sam Altman, chief executive officer of OpenAI. Photographer: Dustin Chambers/Bloomberg via Getty Images

Altman and OpenAI president and co-founder Greg Brockman released a statement in response to Leike on Saturday, pointing out that OpenAI has raised awareness about the risks of AI so that the world can prepare for them, and that the company has been deploying its systems safely.

How Do We Prevent AI from Going Rogue?

Agarwal says that as OpenAI tries to make ChatGPT more human-like, the danger is not necessarily a super-intelligent being.

"Even systems like ChatGPT, they are not implicitly reasoning by any means," Agarwal told Entrepreneur. "So I don't view the risk as from a super-intelligent artificial being perspective."

The problem is that as AI becomes more powerful and multifaceted, the possibility of implicit bias and toxic content increases and the AI becomes riskier to implement, he explained. By adding more ways to interact with ChatGPT, from images to video, OpenAI has to think about safety from more angles.

Related: OpenAI Launches New AI Chatbot, GPT-4o

Agarwal's company released a safety leaderboard earlier this month that ranks the safety and security of AI models from Google, Anthropic, Cohere, OpenAI, and more.

They found that the new GPT-4o model potentially contains more bias and can possibly produce more toxic content than the previous model.

"What ChatGPT did is it made AI real for everyone," Agarwal said.

Sherin Shibu

Entrepreneur Staff

News Reporter

Sherin Shibu is a business news reporter at Entrepreneur. She previously worked for PCMag, Business Insider, The Messenger, and ZDNET as a reporter and copyeditor. Her areas of coverage encompass tech, business, strategy, finance, and even space. She is a Columbia University graduate.

