Need for Robust AGI Regulatory Frameworks Is High, Say Global AI Experts

Establishing regulatory frameworks is fundamental to ensuring that current and future Artificial General Intelligence (AGI) systems are developed and operate responsibly and ethically, experts said on the opening day of 'Ai Everything Global', held in Abu Dhabi.
Among the key topics on the agenda was AGI, a goal that has been actively pursued since the inception of AI research, even as AI itself has evolved into a widely adopted tool in everyday life. Speaking at the event, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said that governments and companies must work together to map out resilient frameworks and strict regulations to minimize risks.
"Whether making the next generation of large language models bigger is going to produce AGI is purely speculation at the moment," said Russell, who is the author of 'Modern Handbook on AI Development' used by more than 1,300 universities across 116 countries. Russel is also the Vice Chair of the WEF Council on AI and Robotics.
"It's just an accident that by making language models bigger and bigger, all of a sudden, they pass intuition tests and are indistinguishable from conversations with intelligent human beings. But the technology is extremely unreliable. We – as experts in AI – do not understand how this technology works. And when it doesn't work, nobody has a way of fixing it, and I strongly believe that a different approach is needed," said Russel.
The UC Berkeley professor also stressed the importance of regulations requiring AGI providers to secure their systems and prevent the dissemination of dangerous information, such as hacking instructions, particularly given the current lack of understanding of how these systems work.
Sharing the stage with Stuart Russell was Kate Darling, Research Scientist at the MIT Media Lab, who was named one of Robohub's '25 Women in Robotics You Need to Know About' and is an award-winning intellectual property expert whose work influences technology design and policy direction.
In her session, Darling emphasized that AI and robotics should enhance human connection rather than replace it, highlighting that AI tools like ChatGPT are best used as productivity aids that complement human skills.
Darling also raised questions about the environmental impact of emerging technologies, urging the industry to balance innovation with sustainable practices to ensure progress supports human and planetary well-being.
"The really interesting thing is what happens when you take AI and robotics and you put them together with people. Because as people start to encounter these technologies in their daily lives, we see some really interesting reactions," said Darling.