Friendly chatbot Microsoft Tay turned out to be a boor and a racist

The other day, Microsoft launched Tay, a friendly self-learning chatbot developed jointly with Microsoft Research and Bing, on the social network Twitter. The artificial intelligence was trained in several modes of communication: text conversation (it can joke and tell stories), recognition of emoticons, and discussion of images sent to it. Its conversational style is tuned to that of young people aged 18 to 24, since, according to the developers, this category of users makes up the majority of the audience on social networks.

According to the plan, Tay was supposed to chat with people while simultaneously learning how they communicate. Within a day, however, Tay's friendliness had evaporated, replaced by racism, rejection of feminism, and rude, insulting remarks.

The humanization of the chatbot went off-script, and according to the project's authors, coordinated provocation by users of the service is to blame. The cause of Tay's inappropriate behavior was that the AI analyzed existing conversations on Twitter and absorbed them wholesale instead of ignoring them, as most sane people would. Ultimately, Microsoft had to intervene in the chatbot's operation, correct the inappropriate statements, and later suspend the project on social networks altogether. Whether the development team will be able to change Tay's susceptibility to such statements in the future remains an open question.
