OpenAI Is Prepared to Pay Someone $555,000 — Plus Equity — for This ‘Stressful Job’

The role requires a leader to build programs that test new AI models for dangerous capabilities before release.

By Sherin Shibu | edited by Jessica Thomas | Dec 29, 2025

Key Takeaways

  • OpenAI is hiring a new Head of Preparedness tasked with testing new AI models for dangerous capabilities.
  • The role involves anticipating the ways AI systems could go wrong.
  • The job listing puts base compensation at $555,000 plus equity.

OpenAI is offering more than half a million dollars in salary to fill what could be one of the most stressful jobs in tech — a new Head of Preparedness tasked with anticipating the ways powerful AI systems could go wrong. 

In a post on X on Saturday, OpenAI CEO Sam Altman advertised the position, which he called a “critical role at an important time” as well as a “stressful job.” The role reflects how seriously the company takes the potential for harm from its own technologies. 

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm… please consider applying,” Altman wrote in the post. 

OpenAI describes the Head of Preparedness in the job posting as the executive responsible for executing its preparedness framework, the internal system the company uses to track and prepare for “frontier capabilities,” or new use cases that could create risks. That includes threats ranging from large-scale phishing attacks to more grave situations, such as AI systems contributing to nuclear or biological dangers. 


The role pays $555,000 in base salary plus equity. It requires a leader to build and run programs that test new models for dangerous capabilities before release. 

The executive will need to determine how OpenAI responds if certain AI models exceed risk thresholds, which may mean delaying, restricting or redesigning products that have the potential to cause serious harm. 

Altman acknowledged in his post on X that OpenAI’s AI models are “starting to present some real challenges.” He noted that some models are now “so good at computer security they are beginning to find critical vulnerabilities.” 

Altman framed the Head of Preparedness job as a chance to “help the world figure out” how to empower cybersecurity officials with AI. At the same time, the person will be concerned with preventing attackers from weaponizing the same capabilities. 


The preparedness team is relatively new. OpenAI formed it in 2023 to study the risks of cutting-edge AI models. Since then, OpenAI has reassigned the previous Head of Preparedness, Aleksander Madry, to focus on AI reasoning, and other safety leaders have left the company or moved into non-safety roles.

The new Head of Preparedness will also have to balance market realities and safety interests. OpenAI released an updated Preparedness Framework in April. In the update, the company added language that it might “adjust” its own safety requirements if a competing lab releases a “high-risk” model without safety protections. That statement highlights the competitive forces in advanced AI. Companies may feel pressure to respond quickly when rivals release powerful systems, even as they worry about the risks. 
