
Towards a Responsible AI

Alongside revolutionizing different fields, AI carries with it several challenges and concerns that have given rise to the need for its regulation

By Priya Kapoor

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.


AI, particularly Generative AI, has garnered a lot of buzz in recent times. The latter is an advanced form of conventional AI in which computer algorithms are used to generate outputs that resemble human-created content such as text, images, graphics, music or computer code.

One of the generative AI applications being talked about the most in recent times is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year and which became an overnight sensation. It is powered by a large language model: it takes in a text prompt and generates a humanlike response. GPT-4, a newer model that OpenAI has announced, is "multimodal" because it can process not only text but images as well.

"With AI, the speed at which we get things done has improved. We can analyze across large datasets very quickly. Extract requisite information from a large number of documents without reading them one by one. The GPT when used in business context can refer to organisation's / domain specific data to analyase and provide contextual response for business and it also ensures traceability of the source of information thus avoiding plagiarism," says Prashant Garg, Partner, EY India.

Besides ChatGPT, several Gen-AI tools have been released, particularly by tech giants in direct competition with it, for example Google's Bard and Microsoft's Bing, and several more are in the pipeline. Microsoft Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. From content creation to customer support to medicine and finance, the technology has found usage in various applications.

"At Writesonic, we're leveraging Generative AI to revolutionize the field of content creation as well as conversational AI. Our tools can generate high-quality, unique, and SEO-optimized content in a matter of minutes, allowing businesses and individuals to express their ideas more efficiently and effectively. Additionally, we are helping businesses simplify and streamline their customer support and engagement through our state-of-the-art Generative AI models," says Samanyou Garg, founder, Writesonic.

The technology has also found its way into advertising, where advertisers are using it to craft personalized campaigns and adapt content to consumers' preferences. In education, generative AI models are being used to develop customized learning materials. It is also being used to predict weather patterns and simulate the effects of climate change.

But while generative AI is revolutionizing various fields, it has also raised some ethical concerns.

Copyright violations: One of the major concerns around Generative AI is copyright violation. The material used to train AI models is drawn from sources that the developers do not hold rights to, which amounts to infringement of intellectual property. At present, there are no laws governing copyright and royalties for AI-generated content. In April 2023, however, the European Union proposed new copyright rules for generative AI that would require companies to disclose any copyrighted material used to develop these tools.

Misinformation: Generative AI is also giving rise to misinformation and malicious or sensitive content, and has caused damage to people's reputations and businesses. Take, for instance, the case of the startup ElevenLabs, which admitted its voice creation platform had been misused to create deepfake audio versions of celebrities like Emma Watson and Joe Rogan spouting abuse and other unacceptable material.

"AI-generated images, voices or videos that can be near-indistinguishable from authentic ones, opens up avenues for misuse such as spreading misinformation or committing fraud. In the US, we are getting scammers impersonating voices for kids and asking for money from grandparents," Sanjay Parihar, a US-based Technologist.

Errors: AI is not foolproof. It is only as good or as accurate as the information it is given. If there are errors or biases in the data on which AI platforms are trained, those can be reflected in the results. The models also work on old information: ChatGPT, for instance, may not have up-to-date information beyond its knowledge cutoff in 2021. Generative AI chatbots like ChatGPT do not search the internet the way Google does; instead, they generate responses to queries by predicting likely word combinations from a massive amalgam of available online information. ChatGPT, Bing AI and Google Bard have all drawn controversy for producing incorrect or harmful outputs since their launch.

Displacement from jobs: The automation of tasks by generative AI means job disruptions across the globe, and the technology is posing a direct threat to employment. According to Goldman Sachs, as many as 300 million jobs could be lost or impacted by Gen-AI worldwide. The impacted people would be required to reskill or upskill. Technologists feel that Gen AI especially has the capability of rewriting some existing business models.

"In the coming few years, the job requirement across the board will change. Companies relying on knowledge as a commodity will surely need to innovate or adopt the change," adds Parihar.

Thankfully, these concerns have not gone unnoticed. On 22nd March, more than 1,800 signatories, including Elon Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak, called for a six-month pause on the development of systems "more powerful" than GPT-4. In the first week of May, OpenAI's Sam Altman, Google's Sundar Pichai and Microsoft's Satya Nadella discussed AI with the US government. Even Musk, who co-founded OpenAI, has expressed concerns about the future of AI and has batted for a regulatory authority to oversee the development of the technology.

"The concerns surrounding the misuse of AI, specifically ChatGPT or Large Language Models, are valid and require thoughtful consideration. Algorithms can introduce biases, errors, and poor decision-making, eroding trust among the people AI intends to assist. The problem often arises from biased training data, where AI can acquire and amplify existing both implicit and explicit prejudices," says Sunil Gopinath, CEO, Rakuten India.

OpenAI CEO Sam Altman, during his recent India visit, also talked about the possibility of AI taking away human jobs, but said that newer jobs will be created as well.

Regulation: The way forward?

Companies in the AI space feel that, like any powerful technology, Generative AI carries with it a mix of opportunities and challenges, and they are recognizing the risks as well as the need for regulation. Writesonic's Garg says, "To mitigate potential risks, we're committed to integrating robust guidelines and safeguards into our systems. We're also dedicated to maintaining an ongoing dialogue with various stakeholders, including users, policymakers, and the wider public, to ensure that the development and use of AI is guided by shared values and principles."

"While AI, and Generative AI specifically, have indeed shown remarkable capabilities and found applications across various sectors, alongside them it's critical to acknowledge the growing concerns. Regulation could serve to prevent misuse of AI technologies, and help ensure AI systems are designed and used ethically and fairly. For instance, regulation could limit the creation of deepfakes used for deceptive purposes, or ensure AI doesn't perpetuate societal biases," says Parihar.


Says Aravind Chandramouli, Head of Data Science, Tredence Inc, "In a bid to promote responsible AI practices, regulating AI is important. The latter can safeguard user privacy, establish transparency and accountability, and foster fair competition."

However, experts do not rule out the potential drawbacks of implementing strict regulation in this area.

"This could slow down the pace of innovation in this field, and put a country's global competitiveness at risk. Every country has access to it and it will take the decision based on the current situation," adds Parihar.

Need for a balanced approach

People in the AI industry feel the best way will be to take a balanced approach. "Overregulation can stifle creativity and progress, whereas insufficient regulation can leave room for misuse. In my view, some level of regulation is necessary for Generative AI. It can help establish a clear set of guidelines for its ethical use, protect users' rights, and prevent malicious applications. It's crucial that any regulation developed is based on a clear understanding of the technology and its potential implications. Policymakers should engage with technologists, ethicists, and other stakeholders to ensure informed and effective policy development," says Garg from Writesonic.

"The effort should be towards improving the interpretability and predictability of AI systems, ensuring they're fair and unbiased, and developing frameworks for their responsible use," says Parihar.

Chandramouli too believes that in order to maximize AI's potential, innovation and ethical considerations must be balanced while ensuring the benefit of individuals and society as a whole.

"The conversation around regulating AI and ensuring responsible practices is an important one. While companies are currently expected to selfregulate, it is clear that some level of external regulation is necessary to protect consumers and maintain trust in AI. However, the approach to regulation must be carefully considered to avoid stifling innovation and impeding the potential benefits that AI can bring," says Gopinath.

According to Garg from EY India, regulation for AI will evolve over time. "We should apply our wisdom to overcome issues related to privacy, security, data sovereignty, copyright, plagiarism and others," adds Garg.

Priya Kapoor

Former Feature Editor

Priya holds more than a decade of experience in journalism. She has worked on various beats and was chosen as a Road Safety Fellow in 2018, wherein she produced many in-depth & insightful features on road crashes in India. She writes on startups, personal finance and Web3. Outside of work, she likes gardening, driving and reading.