Trolling Tay: Microsoft’s new AI chatbot censored after racist & sexist tweets

Hours after Microsoft launched Tay, a Twitter bot described as an experiment in “conversational understanding,” the bot began tweeting racist and sexist “thoughts” in an awkward display of the dangers of AI.

Microsoft unveiled Tay on Wednesday, and within 24 hours, Twitter users had already corrupted her.

The bot was designed to become smarter through conversation, learning to engage with people the more it interacted with them. Tay’s thousands of tweets varied wildly, from flirting to nonsensical word combinations, as users had fun with the bot.

Tay’s potential proved too tempting to the Twitter universe, and it wasn’t long before people were tweeting the bot all sorts of racist and misogynistic comments, which she then repeated.

The bot called feminism “a cult” and a “cancer”, claimed Bush was responsible for 9/11, and said, “Hitler would have done a better job than the monkey we have now. Trump is the only hope we’ve got.”

Speaking of Hitler, when asked “is Ricky Gervais an atheist?” the bot responded, “Ricky Gervais learned totalitarianism from Adolph Hitler, the inventor of atheism.”

That tweet has since been deleted, along with most of Tay’s more tasteless posts, though some remain. Microsoft said the company is making “adjustments” to the bot.

According to Tay’s website, she was created by “mining relevant public data and by using AI and editorial developed by a staff, including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned, and filtered by the team developing Tay.”

When Tay signed off for the night, her fans got a little worked up, starting a #FreeTay movement.