Microsoft artificial intelligence chat bot goes rogue

‘Tay’ became racist and sexist after chatting with real humans on messaging platforms

Microsoft said the chat bot was a ‘machine learning project’. Photograph: Getty Images

Microsoft is cleaning up after its artificial intelligence chat bot went rogue. The company introduced Tay earlier this week to chat with real humans on Twitter and other messaging platforms.

The bot learned by parroting its interactions and then generating its own phrases from them, and was supposed to emulate the casual speech of a stereotypical millennial.

The internet took advantage and quickly taught Tay to spew racist, sexist and otherwise offensive messages.

Gone offline

The worst tweets are quickly disappearing from Twitter, and Tay itself has now also gone offline “to absorb it all.”

Some Twitter users believe Microsoft has also manually banned people from interacting with the bot. Others are asking why the company did not build filters to prevent Tay from discussing certain topics, such as the Holocaust.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement.

“It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Bloomberg