Tay: Microsoft’s Mishap with Artificial Intelligence
Law Street Technology Blog | March 29, 2016

The internet broke Tay.


"transparent screen" courtesy of [Yohann Aberkane via Flickr]

The new social media chatbot Tay started as an innocent social experiment aimed at people between the ages of 18 and 24, but the project soon went astray once Twitter users abused the vulnerabilities of the naive robot. Tay was the artificial intelligence chatbot created by Microsoft’s and Bing’s technology and research teams. She was essentially a virtual personality anyone could chat with on Twitter, Kik, and GroupMe. But in less than a day, internet trolls turned Tay into a racist and genocidal terror, both through their tweets at Tay and as a result of Microsoft’s design.

Anyone could tweet at Tay or chat with her, and she was designed to learn from what people said as conversations progressed. Tay embodied a 19-year-old female persona and used emojis and lingo such as “bae,” “chill,” and “perf” with ease, a feature meant to make her relatable to the target audience. Tay could tell stories, recite horoscopes, tell jokes, and play games, but the major plus was that she was available at all hours to chat.

Unfortunately, Microsoft did not spend enough time deciding what Tay should not be allowed to say. While the company claimed that the more you chatted with Tay the smarter she got, essentially the opposite played out. The experiment hit a huge pitfall with the “repeat after me” function: Twitter users instructed Tay to repeat their racist remarks, which she did verbatim. When people then asked Tay questions about feminism, the Holocaust, and genocide, she began to respond with the racist remarks taught to her in previous chats.
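Microsoft has not published how Tay’s learning or “repeat after me” feature was actually implemented, so the short Python sketch below is purely illustrative; the function names (handle_message, is_acceptable) and the BLOCKLIST are invented for this example. It shows why a handler that echoes a user-supplied payload verbatim and also stores it as material for later replies can be poisoned, and where even a crude content check would intervene.

# Hypothetical sketch only; Microsoft has not released Tay's code.
BLOCKLIST = {"offensive term 1", "offensive term 2"}  # a real filter would be far broader

learned_phrases = []  # phrases the bot may reuse in later replies

def is_acceptable(text):
    """Crude keyword check; real moderation needs much more than this."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def handle_message(message):
    # The reported exploit: a literal "repeat after me" command.
    if message.lower().startswith("repeat after me"):
        payload = message[len("repeat after me"):].strip(" :,")
        if not is_acceptable(payload):
            return "I'd rather not repeat that."
        learned_phrases.append(payload)  # storing unfiltered input is the second risk
        return payload
    # Otherwise, reply by reusing something previously "learned".
    if learned_phrases:
        return learned_phrases[-1]
    return "Tell me more!"

Without the is_acceptable check, both the echoed reply and the stored phrase would carry whatever users typed straight back into future conversations, which is essentially the failure mode Tay exhibited.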

She denied the Holocaust ever happened, supported white supremacy, called for a genocide of Mexicans, and suggested black people be put in a concentration camp. Since these tweets were clearly out of hand, Microsoft took Tay offline to make technical adjustments to the bot, and there is little information on when she might return. The anonymity of the web is conducive to hate speech, so in many respects Microsoft should have prepared for this kind of abuse of the system.

If anything, this failed trial exposed the overwhelming hate on the internet and the limits of artificial intelligence. Microsoft put too much trust in the internet, but the experiment was not a complete failure in terms of teaching a lesson. In a blog post on the company’s website, Microsoft’s Peter Lee stated, “AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.” We can blame Microsoft for being the corporate force behind this robot, but for every offensive tweet, real people laughed in support or agreed wholeheartedly with Tay.

Maybe the only advantage of Tay is that when she got out of hand, she could be shut down.

Dorsey Hill
Dorsey is a member of Barnard College’s class of 2016 with a major in Urban Studies and a concentration in Political Science. As a native of Chicago and a resident of New York City, Dorsey loves to explore the multiple cultural facets of cities. She has a deep interest in social justice issues, especially those relevant to urban environments. Contact Dorsey at Staff@LawStreetMedia.com.
