
Rein in the AI Revolution Through the Power of Legal Liability

As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address potential legal and ethical implications.

By Gleb Tsipursky


Opinions expressed by Entrepreneur contributors are their own.

In an era where technological advancements are accelerating at breakneck speed, it is crucial to ensure that artificial intelligence (AI) development remains in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address potential legal and ethical implications.

And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution of a permanent global ban and international sanctions on any country pursuing AI research.

However, the problem with these proposals is that they require the coordination of numerous stakeholders from a wide variety of companies and government figures. Let me share a more modest proposal that's much more in line with our existing methods of reining in potentially threatening developments: legal liability.

By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can ensure that AI companies themselves promote safety and innovate in ways that minimize the threat they pose to society. We can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.


Legal liability: A vital tool for regulating AI development

Section 230 of the Communications Decency Act has long shielded internet platforms from liability for content created by users. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.

The introduction of legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties. A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.

To curb the rapid, unchecked development of AI, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize the refinement of AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.

For example, an AI chatbot that perpetuates hate speech or misinformation could lead to significant social harm. A more advanced AI given the task of improving a company's stock price might, if not bound by ethical constraints, sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.

Legal liability, moreover, is far more feasible than a six-month pause, let alone a permanent ban. It aligns with how we do things in America: rather than having the government regulate business up front, we permit innovation but punish the negative consequences of harmful business activity.

The benefits of slowing down AI development

Ensuring ethical AI: By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.

Avoiding technological unemployment: The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.

Strengthening regulations: Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for the establishment of robust regulatory frameworks that address the challenges posed by AI effectively.

Fostering public trust: Introducing legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.


Concrete steps to implement legal liability in AI development

Clarify Section 230: Section 230 does not appear to cover AI-generated content. The law defines an "information content provider" as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service." The meaning of "development" of content "in part" remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies "pre-populated answers" so that it is "much more than a passive transmitter of information provided by others." It is therefore highly likely that courts would find AI-generated content falls outside Section 230's protections, and it would be helpful for those who want a slowdown of AI development to bring legal cases that enable courts to clarify this matter. By establishing that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

Establish AI governance bodies: In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.

Encourage collaboration: Fostering collaboration between AI developers, regulators and ethicists is vital for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.

Educate the public: Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.

Develop liability insurance for AI developers: Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.


Conclusion

The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators and the public come together to chart a responsible course for AI development that safeguards humanity's best interests and promotes a sustainable, equitable future.

Gleb Tsipursky

CEO of Disaster Avoidance Experts

Dr. Gleb Tsipursky, CEO of Disaster Avoidance Experts, is a behavioral scientist who helps executives make the wisest decisions and manage risks in the future of work. He wrote the best-sellers “Never Go With Your Gut,” “The Blindspots Between Us,” and "Leading Hybrid and Remote Teams."

