
Now that OpenAI's Superalignment Team Has Been Disbanded, Who's Preventing AI from Going Rogue?

We spoke to an AI expert who says safety and innovation are not separate things that must be balanced; they go hand in hand.

By Sherin Shibu Edited by Melissa Malamut

Key Takeaways

  • Former OpenAI research lead Jan Leike and chief scientist Ilya Sutskever resigned last week.
  • Leike said he left because he felt safety took a backseat to new products at OpenAI.
  • One AI expert tells "Entrepreneur" that safety and innovation are not separate things that need to be balanced — they should go hand in hand.

How do we prevent AI from going rogue?

OpenAI, the $80 billion AI company behind ChatGPT, just dissolved the team tackling that question — after the two executives in charge of the effort left the company.

The AI safety controversy comes less than a week after OpenAI announced a new AI model, GPT-4o, with more functionality — and a voice eerily similar to Scarlett Johansson's. The company paused the rollout of that particular voice on Monday.

Related: Scarlett Johansson 'Shocked' That OpenAI Used a Voice 'So Eerily Similar' to Hers After Already Telling the Company 'No'

Sahil Agarwal, a Yale PhD in applied mathematics who co-founded and currently runs Enkrypt AI, a startup focused on making AI less of a risky bet for businesses, told Entrepreneur that innovation and safety are not separate things that need to be balanced, but rather two things that go hand in hand as a company grows.

"You're not stopping innovation from happening when you're trying to make these systems more safe and secure for society," Agarwal said.

OpenAI Exec Raises Safety Concerns

Last week, former OpenAI chief scientist and co-founder Ilya Sutskever and former OpenAI research lead Jan Leike both resigned from the AI giant. The two led the superalignment team, which was tasked with ensuring that AI stays under human control even as its capabilities grow.

Related: OpenAI Chief Scientist, Cofounder Ilya Sutskever Resigns

While Sutskever stated he was "confident" that OpenAI would build "safe and beneficial" AI under CEO Sam Altman's leadership in his parting statement, Leike said he left because he felt OpenAI did not prioritize AI safety.

"Over the past few months my team has been sailing against the wind," Leike wrote. "Building smarter-than-human machines is an inherently dangerous endeavor."

Leike also said that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI and called for the ChatGPT-maker to put safety first.

OpenAI dissolved the superalignment team that Leike and Sutskever led, the company confirmed to Wired on Friday.

Sam Altman, chief executive officer of OpenAI.

Altman and OpenAI president and co-founder Greg Brockman released a statement in response to Leike on Saturday, pointing out that OpenAI has raised awareness about the risks of AI so that the world can prepare for them, and that the company has been deploying its systems safely.

How Do We Prevent AI from Going Rogue?

Agarwal says that as OpenAI tries to make ChatGPT more human-like, the danger is not necessarily a super-intelligent being.

"Even systems like ChatGPT, they are not implicitly reasoning by any means," Agarwal told Entrepreneur. "So I don't view the risk as from a super-intelligent artificial being perspective."

The problem is that as AI becomes more powerful and multifaceted, the possibility of more implicit bias and toxic content increases and the AI becomes riskier to implement, he explained. By adding more ways to interact with ChatGPT, from image to video, OpenAI has to think about safety from more angles.

Related: OpenAI Launches New AI Chatbot, GPT-4o

Agarwal's company released a safety leaderboard earlier this month that ranks the safety and security of AI models from Google, Anthropic, Cohere, OpenAI, and more.

The leaderboard found that the new GPT-4o model potentially contains more bias, and can produce more toxic content, than its predecessor.

"What ChatGPT did is it made AI real for everyone," Agarwal said.

Sherin Shibu, Entrepreneur Staff News Reporter

Sherin Shibu is a business news reporter at Entrepreneur.com. She previously worked for PCMag, Business Insider, The Messenger, and ZDNET as a reporter and copyeditor. Her areas of coverage encompass tech, business, strategy, finance, and even space. She is a Columbia University graduate.

