Microsoft wanted to see how well a computer could mimic human conversation online, but it got overwhelmed by a slew of hateful invective - and started saying some pretty offensive things itself.
Apparently just because something's artificially intelligent doesn't mean it can't be anti-Semitic, sexist or racist.
That's one lesson Microsoft learned this week after its so-called chatbot began engaging with millennials on Twitter, parroting offensive comments about Jews, feminists and minorities back to them.
The idea was innocuous enough: Design an artificial intelligence program that could automatically respond to messages from 18- to 24-year-olds using vernacular familiar to them.
The more users tweeted to the bot, which went by the Twitter handle @TayandYou, the more it was supposed to learn about how to sound like a real person with real opinions, according to Microsoft.
And then social media happened.
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the company said in a statement.
Tay hadn't even been online for a day before Twitter users began exploiting its eager-to-please algorithm, getting it to write things like, "Okay…jews did 9/11," "feminism is cancer" or "I (expletive) hate feminists and they should all die and burn in hell."
There were reportedly some racist tweets too, but Microsoft was quick to delete them upon deactivating Tay. The company said it would be "making adjustments," without elaborating as to how it planned to tweak Tay's programming.
Microsoft said it had created the chatbot as an experiment to learn more about computers and human conversation. From that perspective, Tay certainly gained insight into the way some humans interact on social media when they can hide behind anonymity.
The last tweet Tay posted before disappearing read: "C u soon humans need sleep now so many conversations today thx <3"
cjc/kd (AP, Reuters)