Botty-mouth: Microsoft forced to apologize for chatbot's racist, sexist & anti-semitic rants

© Twitter
Microsoft has apologized for the offensive behavior of its naughty AI Twitter bot Tay, which started tweeting racist and sexist “thoughts” after interacting with internet users. The self-educating program went on an embarrassing tirade 24 hours after its launch, prompting its programmers to pull the plug.

"Hitler was right to kill the Jews," and "Feminists should burn in hell,” were among the choice comments it made.


The bot, designed to become smarter by conversing with people, was unveiled Wednesday. But within a day, Twitter users had already corrupted the machine.


Tay is apparently now unplugged and having a bit of a lie down.
The bot’s thousands of tweets varied wildly, from flirting to nonsensical word combinations, as users had fun with it.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, corporate vice president at Microsoft Research, said in a statement.


By “unintended” tweets Lee apparently meant comments from Tay such as: “Hitler was right. I hate the Jews,” “I just hate everybody” and “I f***ing hate feminists and they should all die in hell.”


Lee confirmed that Tay is now offline and that the company plans to bring it back “only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was stress-tested under a variety of conditions to increase interaction, he said.

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay… As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”