OpenAI to Build a Specialized Team in the Next 4 Years to Prevent Superintelligence from Going Rogue
ChatGPT creator OpenAI is putting into motion its plan to form a specialized team to take on "superintelligence."
In a blog post titled "Introducing Superalignment", published on Wednesday, the AI research firm noted that superintelligence would be the most impactful technology for humanity and that it could arrive by 2030.
"Currently, we don't have solution for steering or controlling a potentially superintelligent Al, and preventing it from going rogue," the company said. For the uninitiated, superintelligence Al is referred to as a hypothetical form of AI which will possess intelligence far surpassing that of human minds. Superintelligence or ASI will come after Artificial General Intelligence (AGI), a stage where a machine can mimic humans and understand and perform intellectual tasks and carry out a wide range of activities.
OpenAI's current alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on humans' ability to supervise AI. However, the company says this will not work in the long run: AI is improving rapidly and will eventually surpass the level at which humans can reliably supervise it.
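As a rough illustration of what RLHF involves, one core step is training a reward model on human preference comparisons between pairs of responses. The sketch below is a minimal, hypothetical version of that step (toy random tensors and a toy model, not OpenAI's code), showing the standard pairwise preference loss:

```python
# Minimal sketch of the reward-model step in RLHF (hypothetical stand-ins,
# not OpenAI's implementation): train a scorer to rank the response a human
# labeler preferred above the rejected one, via a pairwise log-sigmoid loss.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; higher means more preferred by humans."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: embeddings of a human-preferred and a rejected response.
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

for _ in range(100):
    # Push the chosen response's score above the rejected response's score.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trained reward model then stands in for the human labeler when fine-tuning the main model, which is exactly the dependence on human judgment that OpenAI says will not scale to superintelligent systems.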
Sam Altman's firm currently directs its alignment research towards aligning artificial general intelligence (AGI) with human values and intent. With the new team, it aims to build a roughly human-level automated alignment researcher.
To do this, the team will need to develop a scalable training method, validate the resulting model, and stress test the entire alignment pipeline:
- To provide a training signal on tasks that are difficult for humans to evaluate, we can leverage AI systems to assist evaluation of other AI systems (scalable oversight; see the sketch after this list). In addition, we want to understand and control how our models generalize our oversight to tasks we can't supervise (generalization).
- To validate the alignment of our systems, we automate search for problematic behavior (robustness) and problematic internals (automated interpretability).
- Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).
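To make the first of these concrete, here is a purely illustrative sketch of scalable oversight: one AI system checks another's answers and escalates anything it cannot verify to a human. Every name and rule here is a hypothetical stand-in (real evaluators would themselves be trained models), not OpenAI's pipeline:

```python
# Illustrative sketch of scalable oversight (all logic hypothetical): an AI
# "judge" assists evaluation of another AI's answers on tasks too numerous
# for humans to check directly, flagging unverifiable cases for human review.
from dataclasses import dataclass

@dataclass
class Evaluation:
    task: str
    answer: str
    verdict: str
    rationale: str

def assistant(task: str) -> str:
    """Stand-in for the model being supervised."""
    return "42" if "6 x 7" in task else "unknown"

def judge(task: str, answer: str) -> Evaluation:
    """Stand-in for the AI evaluator checking the assistant's work."""
    ok = task == "What is 6 x 7?" and answer == "42"
    return Evaluation(
        task, answer,
        "pass" if ok else "flag for human review",
        "answer matches independent check" if ok else "could not verify",
    )

tasks = ["What is 6 x 7?", "Summarize this 400-page report."]
for t in tasks:
    e = judge(t, assistant(t))
    print(f"{e.task!r} -> {e.answer!r}: {e.verdict} ({e.rationale})")
```

The point of the design is leverage: humans only review the small fraction of cases the judge cannot verify, rather than every output the supervised model produces.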
The AI Squad
The AI firm announced that the new team will be co-led by Ilya Sutskever (co-founder and Chief Scientist of OpenAI) and Jan Leike (Head of Alignment), with the goal of solving superalignment within the next four years.
About 20 per cent of the compute OpenAI has secured to date will be dedicated to this effort. While the project is ambitious, the company concedes that success is not guaranteed: "We are optimistic that a focused, concerted effort can solve this problem." It is also inviting researchers and engineers to join the team.
"We're also looking for outstanding new researchers and engineers to join this effort. Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts—even if they're not already working on alignment—will be critical to solving it," read the blog.
OpenAI is looking to fill roles such as research engineer, research scientist, and research manager.