

Microsoft Grounds Its AI Chat Bot After It Learns Sexism and Racism From Twitter Users

Image credit: TayandYou | Twitter
This story originally appeared on Engadget

Microsoft's Tay AI is youthful beyond just its vaguely hip-sounding dialogue -- it's also overly impressionable.

The company has grounded its Twitter chat bot (that is, temporarily shut it down) after people taught it to repeat conspiracy theories, racist views and sexist remarks. We won't echo them here, but they involved 9/11, GamerGate, Hitler, Jews, Trump and less-than-respectful portrayals of President Obama. Yeah, it was that bad. The account is still visible as we write this, but the offending tweets are gone; Tay has gone to "sleep" for now.

It's not certain how Microsoft will teach Tay better manners, although word filters would seem like a good start. The company tells Business Insider that it's making "adjustments" to curb the AI's "inappropriate" remarks, so it's clearly aware that something has to change in its machine learning approach. Frankly, though, this kind of incident isn't a shock -- if we've learned anything in recent years, it's that leaving a system completely open to input from the internet is guaranteed to invite abuse.
