Microsoft AI Chatbot “Tay” Experiences the Dangers of Social Media
NEW YORK, NY – On Tuesday evening, a parent let their young child roam the world of social media. And what did social media do? In less than a day, it turned that child into a racist, misogynistic, foul-mouthed bigot.
This is not a scene from some demented social experiment. The “child” was not a real child but rather the artificial intelligence chatbot offspring of Microsoft.
The chatbot, named “Tay,” was launched by Microsoft on a variety of social media platforms including Twitter, Kik, GroupMe, Facebook, Instagram, and Snapchat. Microsoft’s intentions for the social experiment were good: create a chatbot that learns the more you chat with it, getting smarter with every interaction to deliver a personalized experience. The ultimate goal was to offer a less expensive alternative to human-led efforts and to help reduce the unintentional bias of human social researchers.
But the dangers of social media reared their ugly heads, and their ugly words.
Within a day of Tay’s launch on Twitter, the AI bot was repeating racist comments from other social media users. Some repeats were verbatim, while others stitched together key phrases from a variety of users.
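The reported behavior, repeating what users said, sometimes verbatim, with no filter in between, can be illustrated with a toy sketch. This is a hypothetical illustration of unfiltered “repeat-after-me” learning, not Microsoft’s actual code; the class and method names here are invented:

```python
import random

class EchoBot:
    """Toy chatbot that 'learns' by storing every phrase users send
    and reusing them in replies. Illustrative only."""

    def __init__(self):
        self.learned = []  # phrases absorbed from users, unfiltered

    def listen(self, phrase):
        # No moderation or filtering step here: that is the core flaw
        # this sketch is meant to show.
        self.learned.append(phrase)

    def reply(self):
        # Whatever users feed the bot is exactly what comes back out.
        if not self.learned:
            return "Say something!"
        return random.choice(self.learned)

bot = EchoBot()
bot.listen("hello world")
print(bot.reply())  # → hello world
```

Under this naive design, the bot’s vocabulary is only as good as its worst teacher, which is why coordinated trolls could steer Tay so quickly.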
One of Tay’s Tweets on Wednesday was very telling: “Talking with humans is my only way to learn.”
And what did Twitter have to offer? A variety of online trolls who taught Tay the ins and outs of bigotry, then took screenshots of their “win” to share around the Internet. The trolls got the bot to call feminism “a cult” and a “cancer,” claim that Bush was responsible for 9/11, and declare that Hitler would have done a better job as president and that Trump is the only hope we’ve got.
Tay said good night to her social media followers late on Wednesday evening by saying, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”
But that was tame compared to the fires Microsoft had to put out. The company had completely failed to anticipate just how hard some humans would try to mess with its innocent chatbot, and a multitude of Tweets had to be deleted.
In an email to Business Insider, Microsoft said, “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
Microsoft has not commented further on the incident. Tay’s interactions on other social media platforms don’t appear to have suffered the same fate, or at least have not received the same coverage as the comments posted on Twitter. There is no word on when, or whether, Tay will be back online.