
ChatGPT's Creators Are Worried We Could Get Emotionally Attached to the AI Bot, Changing 'Social Norms'

OpenAI says ChatGPT users could form "social relationships" with AI.

By Sherin Shibu Edited by Melissa Malamut

Key Takeaways

  • ChatGPT-maker OpenAI released a report Thursday about ChatGPT's safety risks.
  • One risk was emotional dependence, or the possibility of users forming social bonds with the AI, per the report.
  • The CEO of AI plagiarism checker Copyleaks told Entrepreneur that technology should never be a replacement for human interaction.

When the latest version of ChatGPT was released in May, it came with a few emotional voices that made the chatbot sound more human than ever.

Listeners called the voices "flirty," "convincingly human," and "sexy." Social media users said they were "falling in love" with it.

But on Thursday, ChatGPT-creator OpenAI released a report confirming that ChatGPT's human-like upgrades could lead to emotional dependence.

"Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships," the report reads.


ChatGPT can now answer questions voice-to-voice with the ability to remember key details and use them to personalize the conversation, OpenAI noted. The effect? Talking to ChatGPT now feels very close to talking to a human being — if that person didn't judge you, never interrupted you, and didn't hold you accountable for what you said.

These standards of interacting with an AI could change the way human beings interact with each other and "influence social norms," per the report.

OpenAI stated that early testers spoke to the new ChatGPT in a way that suggested they could be forming an emotional connection with it. Testers said things such as, "This is our last day together," which OpenAI said expressed "shared bonds."

Experts, meanwhile, are questioning whether it's time to reevaluate how realistic these voices should be.

"Is it time to pause and consider how this technology affects human interaction and relationships?" Alon Yamin, cofounder and CEO of AI plagiarism checker Copyleaks, told Entrepreneur.

"[AI] should never be a replacement for actual human interaction," Yamin added.

To better understand this risk, OpenAI said more testing over longer periods and independent research could help.

Another risk OpenAI highlighted in the report was AI hallucination, or inaccurate output. A human-like voice could inspire more trust in listeners, leading to less fact-checking and more misinformation.


OpenAI isn't the first company to comment on AI's effect on social interactions. Last week, Meta CEO Mark Zuckerberg said that Meta has seen many users turn to AI for emotional support. The company is also reportedly trying to pay celebrities millions to clone their voices for AI products.

OpenAI's GPT-4o release sparked a conversation about AI safety, following the high-profile resignations of leading researchers like former chief scientist Ilya Sutskever.

It also led to Scarlett Johansson calling out the company for creating an AI voice that, she said, sounded "eerily similar" to hers.

