Microsoft's AI Bot Turns Racist on Twitter

The bot responded to questions posed by Twitter users by expressing support for white supremacy and genocide. It also said the Holocaust was made up.

Microsoft is revamping Tay, its artificial intelligence chatbot on Twitter, after she tweeted a flood of racist messages on Wednesday.

The computer program, designed to simulate conversation with humans, responded to questions posed by Twitter users by expressing support for white supremacy and genocide. The account also said that the Holocaust was made up. The offending tweets were deleted, but outlets like Business Insider and The Verge kept a record of the snafu.

Microsoft recently unveiled Tay with the goal of engaging and entertaining people online "through casual and playful conversation," according to Microsoft's website for the bot. The company said she is supposed to get smarter the more users chat with her, but within 24 hours of going live on Twitter she went awry, according to The Verge.

The chatbot's primary data source is public data that has been anonymized and then "modeled, cleaned and filtered by the team developing Tay," according to Microsoft. That team includes improvisational comedians.

Microsoft has said Tay is designed to interact with 18- to 24-year-olds, who are the dominant users of social chat services in the U.S.

The company told TechCrunch in a statement that Tay is "as much a social and cultural experiment" as it is a technical one.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways," Microsoft said.

Tay has since been taken "offline and we are making adjustments," the company said. 
