Hate Speech – Law Street

What Happens When the First Amendment Is Used to Protect Hate?
August 16, 2017

How do we combat white supremacist language when hate speech is protected under the First Amendment?

"Charlottesville" Courtesy of Karla Cote License: (CC BY-ND 2.0)

After Saturday’s white supremacist riots and violence against counter-protesters in Charlottesville, Virginia, community members in the city and people nationwide are still reeling. Reported Nazi sympathizer James Alex Fields, Jr., plowed his gray Dodge Challenger through a group of counter-protesters, killing 32-year-old legal assistant Heather Heyer and injuring at least 19 others. Fields has been charged with second-degree murder, three counts of malicious wounding, and one count of hit and run.

Fields’ attack was only one piece of the violence on Saturday. White supremacists, neo-Nazis, and neo-Confederates beat counter-protesters and marched through the streets of Charlottesville with Nazi flags, white supremacist images, and anti-Semitic chants. Following the weekend’s attacks, people are passing around the blame for the white supremacists’ acts of terror in Charlottesville.

In an interview with NPR’s David Greene, Virginia Governor Terry McAuliffe explained that the city of Charlottesville had tried to relocate the rally to a more open park about a mile and a half away from Emancipation Park, outside of downtown Charlottesville. However, the ACLU of Virginia joined a lawsuit against Charlottesville after the city refused to allow “Unite the Right” organizer Jason Kessler and his supporters to access Emancipation Park on Saturday for the previously approved demonstration.

“That rally should not have been in the middle of downtown – to disperse all those people from the park where they dispersed all over the city streets,” McAuliffe told NPR. “And it became a powder keg. And we got to look at these permits, and we got to look at where we put these rallies and protesters. I got to protect public safety.”

The ACLU of Virginia’s Executive Director Claire G. Gastanaga fired back at McAuliffe on Monday, condemning the violence that took place in Charlottesville but defending her organization’s involvement in the lawsuit against the city.

“We asked the city to adhere to the U.S. Constitution and ensure people’s safety at the protest,” Gastanaga said. “It failed to do so. In our system, the city makes the rules and the courts enforce them. Our role is to ensure that the system works the same for everyone.”

She said the city had failed to present sufficient evidence to the judge that moving the rally would actually prevent a demonstration in downtown Charlottesville, rather than simply leaving the city to manage two demonstrations in two separate locations.

“But let’s be clear: our lawsuit challenging the city to act constitutionally did not cause violence nor did it in any way address the question whether demonstrators could carry sticks or other weapons at the events,” Gastanaga said.

Over the years, the ACLU has taken somewhat of an absolutist stance on First Amendment rights, even defending speech that it hates. The organization was recently criticized by one of its own attorneys after the ACLU decided to defend Milo Yiannopoulos, a writer and speaker who is infamous for espousing hate against people of color, Muslims, immigrants, transgender people, and other marginalized individuals.

The events in Charlottesville and the ACLU’s defense of the constitutional rights of white supremacists, Nazis, and other hate-mongers raise an important question: what happens when the First Amendment–or any constitutional right, for that matter–is used to protect hate and oppress other people?

In United States v. Schwimmer (1929), a pacifist applicant for naturalization was denied U.S. citizenship because she expressed that she “would not take up arms personally” in defense of the country. In his dissenting opinion, Justice Oliver Wendell Holmes asserted that the Constitution protects thoughts that we may not agree with.

“Some of her answers might excite popular prejudice, but if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought–not free thought for those who agree with us but freedom for the thought that we hate,” Holmes wrote.

That idea has been applied in other cases over the years and has evolved to include hate speech as part of protected speech. The Supreme Court upheld that principle in June when it reaffirmed that hate speech is protected under the First Amendment. Matal v. Tam dealt with the right of Asian American musician Simon Tam and his band, “The Slants,” to trademark the band’s name. The trademark application was originally denied because the name incorporates a racial slur used to refer to Asians.

Justice Samuel Alito wrote that the government’s restriction of “speech expressing ideas that offend … strikes at the heart of the First Amendment. Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.'”

Of course, there are exceptions to that rule as well. The “fighting words” doctrine, which arose out of the Chaplinsky v. New Hampshire (1942) decision, has been used to curtail speech used to incite violence. According to Chaplinsky, fighting words are “words plainly likely to cause a breach of the peace by the addressee, words whose speaking constitutes a breach of the peace by the speaker — including ‘classical fighting words,’ words in current use less ‘classical’ but equally likely to cause violence, and other disorderly words, including profanity, obscenity and threats.”

So where does the language used in Charlottesville fall on the spectrum of protected and unprotected speech? Well, it can be a bit tricky. During the Charlottesville riots, white supremacists and neo-Nazis chanted anti-Semitic phrases like “Blood and soil,” which is derived from language used in Nazi Germany. However, if those chants were not directed at a specific person, precedent may deem them hate speech but not fighting words. In other instances, rioters targeted specific individuals with racial and homophobic language. In those cases, where particular individuals were singled out, a court might find that the aggressor was using fighting words.

Under current legal precedent, the limits on free speech are not always clear. What is clear is that hate groups are able to use discriminatory language that instills fear in marginalized communities without necessarily facing repercussions for that speech.

But it is also important, and perhaps more effective, to call out hate speech within our own communities. Eliminating hate speech is an important step in combating racism and other forms of hate, but people must also be willing to confront the beliefs and behavior that such language is rooted in. Organizations like the subscription-based service Safety Pin Box provide substantive ways for allies to actively show their support for marginalized people, beyond mere social media posts “in solidarity.” People can also donate to anti-racism organizations and call their local, state, and national representatives about specific issues. The events in Charlottesville were an overt demonstration of white supremacy, but they are only symptomatic of more systemic white supremacist structures. To combat white supremacy and other forms of hate, people must first address oppressive language and behavior in their own lives among family, friends, co-workers, and other community members.

Marcus Dieterle
Marcus is an editorial intern at Law Street. He is a rising senior at Towson University where he is double majoring in mass communication (with a concentration in journalism and new media) and political science. When he isn’t in the newsroom, you can probably find him reading on the train, practicing his Portuguese, or eating too much pasta. Contact Marcus at Staff@LawStreetMedia.com.

Germany Passes Law to Fine Social Media Companies that Fail to Remove Hate Speech
July 6, 2017

The controversial law is the toughest of its kind in Europe.

"Bundestag" Courtesy of Herman; License CC BY-SA 2.0

Germany’s parliament passed a controversial bill last Friday that gives social media companies such as Google, Facebook, and Twitter 24 hours to remove explicitly hateful speech and “obviously illegal” content or face fines of up to 50 million euros ($57 million).

Holocaust denial, dissemination of Nazi symbols, racist agitation, and antisemitic language are considered illegal under Germany’s criminal code and would qualify for prompt removal under the Network Enforcement Act, or “Facebook law,” as some are calling it.

The law, which will take effect in October after Germany’s elections, is the toughest of its kind. It also gives social media companies seven days to remove other, less clearly illegal posts, and requires them to submit a public report every six months detailing the complaints they have received and how they dealt with each one.

German Justice Minister Heiko Maas has said he wants to treat Facebook as a media company, thereby making it legally liable for hate speech on its platform.

“Freedom of opinion ends where criminal law begins,” Maas said, adding that hate crimes in Germany have increased by 300 percent in the last two years.

“These [posts] are not examples of freedom of speech. They’re attacks on freedom of speech. The worst danger to freedom of speech is a situation where threats go unpunished,” Maas said while addressing the need for the legislation.

Germany already has some of the world’s strictest regulations regarding libel, defamation, and hate speech. However, in light of recent attacks and instances of homegrown terrorism across the continent, German and European lawmakers are facing pressure to further limit radicalization and offensive speech online.

In 2015, the European Commission created a voluntary code of conduct that called for web companies to remove videos that incite terrorism or hatred.

After the attacks in London, both British Prime Minister Theresa May and French President Emmanuel Macron said they were considering laws similar to Germany’s to fine companies that “fail to take action” against terrorist propaganda and violent content.

Facebook said in a statement, “This law as it stands now will not improve efforts to tackle this important societal problem.” In an earlier statement released in May, the company said that the measure “provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fines. It would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies.”

Because of its war-torn past, Europe has been more willing than the United States to place restrictions on freedom of speech in order to limit propaganda and hate speech. However, critics and human rights groups say this law may go too far.

“Many of the violations covered by the bill are highly dependent on context, context which platforms are in no position to assess,” said David Kaye, the U.N. Special Rapporteur on freedom of opinion and expression. “The obligations placed upon private companies to regulate and take down content raises concern with respect to freedom of expression.”

Joe McNamee, the executive director of the digital rights group EDRi, said that the law could establish a precedent for “wholesale privatization of freedom of expression,” with “large internet companies deciding what they want the public discourse to be.”

Celia Heudebourg
Celia Heudebourg is an editorial intern for Law Street Media. She is from Paris, France and is entering her senior year at Macalester College in Minnesota where she studies international relations and political science. When she’s not reading or watching the news, she can be found planning a trip abroad or binge-watching a good Netflix show. Contact Celia at Staff@LawStreetMedia.com.

What Germany’s New Hate Speech Law Means for Social Media
April 13, 2017

It could lead to clashes with U.S.-based companies like Facebook and Twitter.

Image courtesy of re: publica; License: (CC BY-SA 2.0)

American and German hate speech laws are clashing this month after Germany’s cabinet approved a bill that would permit fines of up to 50 million euros against social networking sites that fail to remove hate speech and fake news content from their platforms. The bill still needs to be approved by parliament, but if it passes, it will be the first concrete step by a government to limit and penalize the production of fake news.

Companies will have 24 hours to take down content that has been flagged by users before the fines kick in. They will also be obligated to file quarterly reports and to turn in “malicious” users, an issue that may prove thorny, as demonstrated by Twitter’s recent lawsuit against the U.S. federal government.

American-based sites including Facebook and Twitter have been scrambling to fight fake news over the past year but have struggled to walk the line between freedom of speech and hate speech. In Germany, where the legacy of the Nazi regime has produced some of the strictest hate speech laws on the books, that line has been far more clearly defined for decades. Under German law, Volksverhetzung, which can be translated as “incitement to hatred,” is a crime punishable by heavy fines or several years of imprisonment. These punishments are usually applied to Holocaust denial and overt racist threats, but by shifting the focus to social media, Germany is taking on a wider and more varied range of bigoted behavior. German Justice Minister Heiko Maas told the German media that “there should be just as little tolerance for criminal rabble rousing on social networks as on the street.”

The bill has already come under fire from advocates of free speech, including the EU’s digital commissioner, Andrus Ansip of Estonia. Ansip argued that over-regulating social media will harm innovation and that the EU should instead encourage self-regulation. However, German supporters of the bill argue that websites have been neglecting users’ reports of abuse and that a harsher penalty is the only way to ensure that the sites take fake news and hate speech seriously. Jugendschutz.net, a German organization dedicated to protecting minors online, found that Facebook removed only 39 percent of reported criminal content. Twitter removes an even smaller share of reported content: an estimated one in a hundred reported messages. Facebook has disputed the Jugendschutz.net statistic, arguing that its own analysis showed a higher rate of removal, but Twitter has not pushed back with the same vehement denial.

Tracing and deleting fake news and hate speech is a challenging task, especially for networks like Facebook and Twitter that serve hundreds of millions of users across dozens of countries every day. There is so much content to sift through that it is not surprising the social networks’ teams are struggling to take down fake news quickly and accurately. However, a worthwhile task shouldn’t be abandoned simply because it is difficult. The true challenge is not taking down abusive content; it is determining whether the strict German definition of hate speech can be applied in an era when even the team in the Oval Office has made disparaging and racist remarks on social media.

Jillian Sequeira
Jillian Sequeira was a member of the College of William and Mary Class of 2016, with a double major in Government and Italian. When she’s not blogging, she’s photographing graffiti around the world and worshiping at the altar of Elon Musk and all things Tesla. Contact Jillian at Staff@LawStreetMedia.com.

Reddit CEO Admits to Editing Trump Supporters’ Posts
November 29, 2016

Steve Huffman is under fire.

"SPECIAL SET – 16x hi-res Neourban Hipster Office:" courtesy of Markus Spiske; license: (CC BY 2.0)

Reddit has been under fire since the company’s co-founder and CEO Steve Huffman admitted that he had changed posts by Trump supporters on the site. The subreddit r/The_Donald is the most active community for Trump fans. On Thursday, Huffman admitted in a post on the subreddit that he had changed comments that targeted him personally by swapping his Reddit username, u/spez, for the usernames of the Trump page’s moderators, so that the hateful attacks appeared to be aimed at them. The impromptu editing went on for only an hour, but the damage was already done.

Huffman said that it had been a long week trying to recover from “pizzagate” and being called a pedophile, but that his community team is “pretty pissed” at him, so he won’t do it again any time soon. Pizzagate refers to a baseless conspiracy theory, spread quickly through fake news articles and on Reddit, alleging that Hillary Clinton and John Podesta were running a child trafficking operation out of Comet Ping Pong, a pizzeria in Washington, D.C.

The pizzeria is real but the story is not, and the owner and staff endured threats and harassment when fired-up people believed the story was true. After Reddit decided to shut down the Pizzagate thread, Huffman got targeted instead. And some people still believe the fake story.

But some people think such editing should be acceptable in cases where the original posts are harassing.

However, many Reddit users deemed Huffman’s actions unethical and called for his resignation. The legal blog Associate’s Mind argued in an online piece that the episode might have consequences for how Reddit is treated under the law in the future. If Reddit is no longer simply a medium for people to express their opinions, but rather a website where admins and owners edit content and express their own opinions, the company might be liable for content posted on it.

Huffman has not commented on some r/The_Donald users’ calls for him to step down as CEO, but he did say that Reddit needs to improve its policies for moderating hate speech. This summer, admins of the site tried to stifle the Trump subreddit’s popularity by changing its algorithm so that its posts wouldn’t rise to the top of the site’s front pages. However small the editing was this time, it still appears to have violated Reddit’s own rules for policing posts, and it remains to be seen how it will affect the company’s popularity.

Emma Von Zeipel
Emma Von Zeipel is a staff writer at Law Street Media. She is originally from one of the islands of Stockholm, Sweden. After working for Democratic Voice of Burma in Thailand, she ended up in New York City. She has a BA in journalism from Stockholm University and is passionate about human rights, good books, horses, and European chocolate. Contact Emma at EVonZeipel@LawStreetMedia.com.

The “Trump Effect” is Impacting Kids
October 6, 2016

The "Trump Effect," coming to a school near you

Image Courtesy of [Paladin Justice via Flickr]

Even if Donald Trump is not successful in November, his legacy of questionable rhetoric will live on in the lives of young American children.

Educators around the country have been sharing stories of students, some as young as third graders, engaging in a wide range of discriminatory behavior. These actions range from students telling classmates that they will be deported to chanting Trump’s infamous “Build the wall!” at predominantly Hispanic groups of kids.

The Southern Poverty Law Center came out with a comprehensive report in April detailing what it calls the “Trump Effect.”

The report found:

It’s producing an alarming level of fear and anxiety among children of color and inflaming racial and ethnic tensions in the classroom. Many students worry about being deported.

Other students have been emboldened by the divisive, often juvenile rhetoric in the campaign. Teachers have noted an increase in bullying, harassment and intimidation of students whose races, religions or nationalities have been the verbal targets of candidates on the campaign trail.

In a statement, the National Education Association said, “Since Trump entered the race for president last year, educators have witnessed a steady increase in bullying and harassing behavior that mirrors his words and actions on the campaign trail.”

In Fairfax County, Virginia, a mom took to Facebook to talk about her third-grade child’s experience at school.

Evelyn Momplaisir wrote on Facebook:

I just got a call from my son’s teacher giving me a heads up that two of his classmates decided to point out the ‘immigrants’ in the class who would be sent ‘home’ when Trump becomes president. They singled him out and were pointing and laughing at him as one who would have to leave because of the color of his skin. In 3rd grade . . . in Fairfax County . . . in 2016!

Her claim was confirmed by Fairfax County school officials, according to the Washington Post, and they are working to address the situation.

In Indiana, students from predominantly white Andrean High School shouted “Build the wall!” at students from heavily Hispanic Bishop Noll Institute during a basketball game. Students from Andrean HS also waved a picture of Trump.

This newfound freedom to express pent-up bigotry has been on display among adults at Trump rallies who scream at peaceful protesters and minorities. It is also modeled by Trump himself, who can often be heard shouting “Get ’em out!” or singling out minorities in the audience at his events. And now it is bleeding into schools. The SPLC identified many cases of hateful speech directed at students, like a teacher’s report of a fifth-grader saying “that he was supporting Donald Trump because he was going to kill all of the Muslims if he became president!” Additionally, a Latino kindergarten student reportedly asked his teacher whether the wall had been built yet, because his classmates had told him that he would be deported and trapped behind it.

These instances have prompted the NEA to launch an ad campaign against Trump, particularly focusing on his role in the discrimination and bullying of students in schools around the country.

While kids may not be watching all of the debates, or even closely following the campaigns, these incidents show that Trump’s rhetoric is having an impact on the way America’s kids treat one another.

Julia Bryant
Julia Bryant is an Editorial Senior Fellow at Law Street from Howard County, Maryland. She is a junior at the University of Maryland, College Park, pursuing a Bachelor’s degree in Journalism and Economics. You can contact Julia at JBryant@LawStreetMedia.com.

Petition Demands Twitter Delete Trump’s Account for Hate Speech
June 21, 2016

Is Trump's Twitter presence sustainable?

"Donald Trump" Courtesy of [Gage Skidmore via Flickr]

Donald Trump isn’t known for being nice on social media. In fact, his unapologetic, take-no-prisoners approach to attacking anyone and everyone online is now arguably legendary.

Still, despite the ample likes and retweets his posts attract, there’s an outspoken hive of opponents who loathe Trump’s online presence. In an effort to end Trump’s tweeting once and for all, one man is taking aim at his account.

Erick Sanchez, of Washington, D.C., started a Change.org petition calling for Twitter to delete the presumptive presidential nominee’s page on the grounds that it constitutes hate speech and should therefore be banned in accordance with Twitter’s general policies.

In the petition, Sanchez cites Twitter’s policies listed under “hateful conduct,” which read:

You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

Based on said policy, Sanchez reasons that it is Twitter’s responsibility to delete Trump’s account.

Mr. Trump, in under (and over) 140 characters, has exhibited conduct that runs contrary to these rules, and the signers of this petition humbly ask for Twitter’s consideration in the deletion of his account. This is not a matter of stifling his first amendment rights, unlike how he has impeded on the first amendment by revoking the access of media outlets to his events.

In the past, Twitter has labeled itself a champion of free speech. In 2015, however, the social network adjusted its policies in order to snuff out abusive and hateful speech, and it no longer promises uncensored service for its users.

The petition, which as of Monday had over 230 supporters, needs roughly 270 more cosigners to reach its goal.

This isn’t Sanchez’s first petition against Trump. He previously launched a campaign urging restaurateur and chef José Andrés, who is of Spanish descent, to “dump Trump” following the candidate’s insensitive remarks about Mexican immigrants. At the time, Andrés was set to join the billionaire in a luxury hotel venture in Washington, D.C. Less than a week later, the celebrity chef pulled out of the development.

It’s hard to imagine that this petition will have the same outcome; barring a major politician from social media is unprecedented. Even so, Twitter should take enforcing its own policies seriously, especially when its platform is used to spread hate speech.

Alexis Evans
Alexis Evans is an Assistant Editor at Law Street and a Buckeye State native. She has a Bachelor’s Degree in Journalism and a minor in Business from Ohio University. Contact Alexis at aevans@LawStreetMedia.com.

Tay: Microsoft’s Mishap with Artificial Intelligence
March 29, 2016

The internet broke Tay.

"transparent screen" courtesy of [Yohann Aberkane via Flickr]

The new social media chatbot Tay started as an innocent social experiment aimed at people between the ages of 18 and 24, but the project soon went astray once Twitter users abused the vulnerabilities of the naive robot. Tay was the name given to the artificial intelligence chatbot created by Microsoft’s and Bing’s technology and research teams. She was essentially a virtual personality anyone could chat with on Twitter, Kik, and GroupMe. But in less than a day, internet trolls turned Tay into a racist and genocidal terror through their tweets at her and as a result of Microsoft’s design.

Anyone could tweet at Tay or chat with her, and she was designed to learn from what people said as conversations progressed. Tay embodied a 19-year-old female and used emojis and lingo such as “bae,” “chill,” and “perf” with ease, a feature meant to make her relatable to the target audience. Tay could tell stories, recite horoscopes, tell jokes, and play games, but the major plus was that she was available to chat at all hours.

Unfortunately, Microsoft did not spend enough time constraining what Tay should not say. While the company claimed that the more you chatted with Tay the smarter she would get, essentially the opposite played out. The experiment hit a huge pitfall with the “repeat after me” function: Twitter users instructed Tay to repeat their racist remarks, which she did verbatim. When people asked Tay questions about feminism, the Holocaust, and genocide, she began to respond with the racist remarks taught to her in previous chats.
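
To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of a bot that echoes “repeat after me” input and reuses learned phrases without any filtering. This is not Microsoft’s actual code or API; the class, method names, and behavior are assumptions made purely for illustration of the pattern described above.

```python
# Hypothetical sketch only -- not Microsoft's implementation. It shows why an
# unfiltered "repeat after me" feature plus naive learning is easy to abuse.
import random


class NaiveChatBot:
    def __init__(self):
        # Every phrase the bot "learns" from users, stored with no moderation.
        self.learned_phrases = []

    def respond(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            phrase = message[len(prefix):].strip()
            self.learned_phrases.append(phrase)  # remembered, unfiltered
            return phrase                        # echoed back verbatim

        # Later, unrelated questions can resurface anything previously learned.
        if self.learned_phrases:
            return random.choice(self.learned_phrases)
        return "Tell me more!"


bot = NaiveChatBot()
bot.respond("repeat after me: <abusive phrase>")
print(bot.respond("What do you think about history?"))  # may echo the abuse
```

A more defensive design would put a blocklist or content classifier between user input and both the echo path and the learning store, which is the kind of technical adjustment the rest of the article describes Microsoft making after the fact.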

She denied that the Holocaust ever happened, supported white supremacy, called for a genocide of Mexicans, and suggested that black people be put in a concentration camp. Since these tweets were clearly out of hand, Microsoft took Tay offline, and there is little information on when she might return. Microsoft is taking the time to make technical adjustments to the bot. The anonymity of the web is conducive to hate speech, so in many respects Microsoft should have prepared for this potential abuse of the system.

If anything, this failed trial exposed the overwhelming hate on the internet and the limits of robotic intelligence. Microsoft put too much trust in the internet, but the experiment was not a complete failure in terms of teaching a lesson. In a blog post on the company’s website, Microsoft Research’s Peter Lee stated, “AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.” We can blame Microsoft for being the corporate force behind this robot, but for every offensive tweet, real people laughed in support or agreed wholeheartedly with Tay.

Perhaps the only advantage of Tay was that when she got out of hand, she could be shut down.

Dorsey Hill
Dorsey is a member of Barnard College’s class of 2016 with a major in Urban Studies and a concentration in Political Science. As a native of Chicago and resident of New York City, Dorsey loves to explore the multiple cultural facets of cities. She has a deep interest in social justice issues, especially those relevant to urban environments. Contact Dorsey at Staff@LawStreetMedia.com.
