Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: 'There Has To Be Consequences'

In an interview with "60 Minutes," Google CEO Sundar Pichai said AI is the most "profound technology humanity is working on — more profound than fire or electricity."
The capabilities of artificial intelligence — and the speed at which the technology is being released to the public — are garnering a mix of reactions from tech enthusiasts, CEOs, and experts.
For Google CEO Sundar Pichai, AI is an increasingly important aspect of Google's business — the company released its AI chatbot, Bard, in February and has other projects on the horizon, like a prototype called "Project Starline," which aims to enhance video conferencing by simulating a more life-like experience.
In an interview with "60 Minutes" on Sunday, Pichai said AI is one of the most significant discoveries of our time.
"I have always thought of AI as the most profound technology humanity is working on — more profound than fire or electricity," Pichai said in the interview. "We are developing technology that will be far more capable than anything we have ever seen before."
Pichai told the program that there should be government regulation of AI, especially with the emergence of deep fakes, saying the approach to the technology would be "no different" from the way the company tackled spam in Gmail.
"We are constantly developing better algorithms to detect spam," Pichai said. "We would need to do the same thing with deep fakes, audio, and video. Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society."
In March, tech leaders and CEOs — notably Elon Musk and Apple co-founder Steve Wozniak — signed an open letter calling for a six-month pause on AI development to manage and assess potential risks. To date, the letter has over 26,000 signatures.