Microsoft Grounds Its AI Chat Bot After It Learns Sexism and Racism From Twitter Users

Image credit: TayandYou | Twitter

This story originally appeared on Engadget

Microsoft's Tay AI is youthful beyond just its vaguely hip-sounding dialogue -- it's overly impressionable, too.

The company has grounded its Twitter chat bot (that is, temporarily shut it down) after people taught it to repeat conspiracy theories, racist views and sexist remarks. We won't echo them here, but they involved 9/11, GamerGate, Hitler, Jews, Trump and less-than-respectful portrayals of President Obama. Yeah, it was that bad. The account is visible as we write this, but the offending tweets are gone; Tay has gone to "sleep" for now.

It's not certain how Microsoft will teach Tay better manners, although it seems like word filters would be a good start. The company says it's making "adjustments" to curb the AI's "inappropriate" remarks, so it's clearly aware that something has to change in its algorithms. Frankly, though, this kind of incident isn't a shock -- if we've learned anything in recent years, it's that leaving something completely open to input from the internet is guaranteed to invite abuse.
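Microsoft hasn't described how such filtering would work, but as a rough, purely illustrative sketch in Python (the blocklist contents, function names and fallback message here are all hypothetical, not Microsoft's actual approach), a naive word filter might check each candidate reply against a curated list of banned terms before the bot posts it:

    import re

    # Hypothetical blocklist; a real deployment would need a far larger,
    # carefully curated list plus human review.
    BLOCKED_TERMS = {"exampleslur", "anotherslur"}

    def is_safe_reply(text: str) -> bool:
        # Split the candidate reply into lowercase words and flag it
        # if any word appears on the blocklist.
        words = re.findall(r"[a-z']+", text.lower())
        return not any(word in BLOCKED_TERMS for word in words)

    def respond(candidate_reply: str) -> str:
        # Post the reply only if it passes the filter; otherwise fall
        # back to a canned deflection instead of echoing abuse.
        if is_safe_reply(candidate_reply):
            return candidate_reply
        return "Let's talk about something else."

Even this toy example hints at why filters alone wouldn't be enough: simple word matching misses misspellings, coded language and hateful statements assembled entirely from innocuous words.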
