Microsoft's 'Zo' Chatbot Picked Up Some Offensive Habits

Zo went cray-cray like Tay.

It seems that creating well-behaved chatbots isn't easy. Over a year after Microsoft's "Tay" bot went full-on racist on Twitter, its successor "Zo" is suffering a similar affliction.

Although Microsoft programmed Zo to steer clear of politics and religion, BuzzFeed News managed to get the bot to respond to those restricted topics, with surprising (and highly controversial) results. In one exchange, Zo referred to the Qur'an as "very violent." It also weighed in on the death of Osama bin Laden, claiming his "capture" came after "years of intelligence gathering under more than one administration." Microsoft says these lapses in Zo's behavior have since been corrected.

Just last year, Microsoft's Tay bot went from emulating the tone of a supposedly hip teenager to spouting racist tirades within the span of a day. To make matters worse, the entire debacle unfolded on Twitter for everyone to see, forcing Microsoft to pull the bot offline. As a result, the company confined Zo to the messaging app Kik and its mid-sized user base. Even so, the chatbot still managed to pick up some bad habits.

Microsoft blamed Tay's downfall on a coordinated effort by a group of users to corrupt the bot, but says no such campaign was mounted against Zo. The chatbot remains available on Kik, and Microsoft says it has no plans to disable it.
