Microsoft's newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down due to concerns with its inability to recognize when it was making offensive or racist statements. Of course, the bot wasn't coded to be racist, but it "learns" from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist, and how to spout back ill-informed or inflammatory political opinions.

Tay is an A.I. project built by the Microsoft Technology and Research and Bing teams, in an effort to conduct research on conversational understanding. That is, it's a bot that you can talk to online. The company described the bot as "Microsoft's A.I. fam the internet that's got zero chill!", if you can believe that.

Tay is able to perform a number of tasks, like telling users jokes or offering up a comment on a picture you send her. But she's also designed to personalize her interactions with users, answering questions or even mirroring users' statements back to them.

As Twitter users quickly came to understand, Tay would often repeat racist tweets back with her own commentary. What was also disturbing about this, beyond the content itself, is that Tay's responses were developed by a staff that included improvisational comedians. That means even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.

Microsoft has since deleted some of the most damaging tweets, but third-party websites collected screenshots of several of them before they were removed. Many of the tweets saw Tay referencing Hitler, denying the Holocaust, supporting Trump's immigration plans (to "build a wall"), or even weighing in on the side of the abusers in the #GamerGate scandal. This is not exactly the experience Microsoft was hoping for when it launched the bot to chat up millennial users via social networks.

Some have pointed out that the devolution of the conversation between online users and Tay supports the Internet adage known as Godwin's law, which states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one. But what it really demonstrates is that while technology is neither good nor evil, engineers have a responsibility to make sure it's not designed in a way that will reflect back the worst of humanity.

AI programs need to consider context and values. For something like Tay, you can't skip the part about teaching a bot what not to say. For online services, that means anti-abuse measures and filtering should always be in place before you invite the masses to join in.

Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on. Microsoft apparently became aware of the problem with Tay's racism, and silenced the bot later on Wednesday, after 16 hours of chats.
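The point about putting filtering in place before launch can be made concrete with a minimal sketch. This is not Microsoft's actual safeguard, just a hypothetical illustration: the names `BLOCKLIST`, `is_allowed`, and `reply` are invented here, and a real service would pair a maintained blocklist with ML-based toxicity scoring rather than a handful of hard-coded terms.

```python
import re

# Hypothetical blocklist for illustration only -- a production system
# would use a maintained, regularly updated list plus a toxicity model.
BLOCKLIST = {"hitler", "holocaust denial"}

def is_allowed(message: str) -> bool:
    """Return False if the message contains any blocked term."""
    text = message.lower()
    tokens = set(re.findall(r"[a-z']+", text))
    # Check single-word terms against tokens, multi-word terms as substrings.
    for term in BLOCKLIST:
        if (" " in term and term in text) or term in tokens:
            return False
    return True

def reply(user_message: str, generate) -> str:
    """Filter both what the bot reads and what it is about to say."""
    fallback = "Let's talk about something else."
    if not is_allowed(user_message):
        return fallback          # refuse to learn from abusive input
    candidate = generate(user_message)
    if not is_allowed(candidate):
        return fallback          # never echo blocked content back out
    return candidate
```

The key design point is that the check runs on both sides of generation: a bot that learns by mirroring users must screen its inputs as well as its outputs, which is exactly the step Tay was missing.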