Microsoft speaks of coordinated Tweet attack against chatbot


The machine-learning chatbot that Microsoft recently unleashed on Twitter, and which began using racist language in no time at all, is said to have been the victim of a coordinated attack by a group of people. Microsoft is working on an improved version that does not pick up bad language.

The chatbot Tay was activated on Twitter on Wednesday with the aim of interacting with other users of the social network, mainly young people. However, after about a day, the account was taken offline for making racist and abusive comments. Microsoft has since responded to the controversy by stating on its blog that Tay's language was due to a weakness in its artificial intelligence.

According to Microsoft, Tay was extensively tested in an earlier phase, partly to prevent the bot from learning bad language. The company has not disclosed which vulnerability it overlooked, speaking only of a coordinated attack carried out by a group of unidentified persons. Microsoft is now working to improve the chatbot, which is unavailable in the meantime. It has not been made clear when Tay will be back online.

Tay started out on Twitter with some good-natured comments, but after a while she began to express several inappropriate opinions. Tay, for example, denied the Holocaust and glorified genocide and Hitler.
