Reddit CEO says racism is permitted on the platform, and users are up in arms

Reddit CEO Steve Huffman found himself once again embroiled in controversy over his website's moderation policies. In a Reddit thread announcing the findings of the platform's 2017 transparency report, in which Reddit identified and listed roughly 1,000 suspected Russia-linked propaganda accounts that have since been banned, Huffman answered a direct question about the company's rules regarding hate speech and attacks based on race, religion, or other protected classes.

"I need clarification about something: is open racism, including insults, evident against reddit rules or not?" Asked the Reddit user, chlomyster. "It is not," said Huffman, who operates on Reddit under his original "spez" handle.

Huffman elaborated on his point, adding:

"On Reddit, the way we think about speech is to separate behavior from beliefs, which means that Reddit will have people with beliefs different from yours, sometimes extremely, when user actions conflict with our content policies, we take action. "

"Our approach to governance is that communities can set standards around language that are appropriate for them," he continued. "Many communities have rules about speech that are more restrictive than ours, and we fully support those rules."

It's a controversial approach, to say the least, and it has many Reddit users outraged, because communities like the Trump-centric r/The_Donald can repeatedly skirt the line of racism without any site-level action. Many Reddit users responded to Huffman by noting that hate speech constitutes behavior in its own way, and that communities like r/The_Donald were directly involved in discussing and organizing events such as the white supremacist rally in Charlottesville, Virginia, which resulted in the death of Heather Heyer. This conversation about Reddit's light-touch moderation has been simmering for quite some time, and it flared up again last month when the company discussed its approach to Russian propaganda.

Huffman's position here has evolved. Almost a decade ago, his approach to hate speech mirrored that of other major social media platforms today, which is to ban it except in extremely narrow or circumstantially unique situations. For example, Facebook's hate speech policies are well documented, and saying something racist will typically lead to some kind of disciplinary action. Other platforms such as Twitter, YouTube, and Instagram have hate speech policies that can also lead to suspensions or bans.

"I think I arrived a little late to the party, but I banned it. We rarely prohibit the misuse of junk mail, but the speech of Hate used in that context is not something we tolerate, "Huffman wrote in a thread nine years ago about banning a user from using hate speech. "This is not a change in politics: we have always banned hate speech, and we will always do it." It is not for debate. You can whip and moan all you want, but my team and I are not responsible for encouraging behaviors that lead to hatred, "Huffman wrote in response to another user in the same thread.

However, when Huffman took over in 2015 from interim CEO Ellen Pao, who was pushed out of the role in part due to vehement and toxic opposition to her leadership from the platform's users, his approach to hate speech changed. "While my personal views toward bigotry haven't changed, my opinion of what Reddit should do about it has," Huffman wrote on the issue almost three years ago, a few weeks after returning to lead the company. "I don't think we should silence people just because their viewpoints are something we disagree with. There is value in the conversation, and we as a society need to confront these issues. This is an incredibly complex topic, and I'm sure our thinking will continue to evolve."

Reddit still takes hard-line positions on calls for violence, threats, doxxing, and other activities that can lead to real-world harm. But Huffman has often wavered when moderating the more complex gray areas between innocuous content and those extreme examples. That is where hate speech, which is not illegal in the US thanks to First Amendment protections, tends to live. For example, in 2015, Reddit banned the shaming community r/fatpeoplehate and the openly racist community r/CoonTown. Infamous situations before that included bans on a community sharing hacked nude celebrity photos and a community dedicated to sharing so-called "creepshots" of underage girls.

More recently, Reddit took action against r/deepfakes, a community built around AI-generated fake pornography, as well as a handful of alt-right subreddits and Nazi boards. But each time it does, Reddit cites a specific rule violation, such as the use of violent speech, doxxing, or the sharing of non-consensual pornography.

However, when it comes to speech alone, Huffman seems to be more permissive, which contrasts sharply with other platforms in the technology industry, almost all of which are grappling with difficult questions about moderation these days. This week, Facebook CEO Mark Zuckerberg was questioned by Congress about the ongoing Cambridge Analytica data privacy scandal, and he fielded numerous questions about how the company plans to handle hate speech on its platform. The issue is especially pressing for Facebook, as ethnic violence in Myanmar has erupted thanks in part to organizing and propaganda spread on the social network.

Facebook's focus seems to be mainly on artificial intelligence. Zuckerberg says his company is increasingly looking to automated algorithms that analyze text, photos, and video to do the job that even tens of thousands of human moderators cannot. That work involves deciding whether a given piece of content violates the company's policies around fake news, hate speech, obscenity, and other impermissible forms of content.

Reddit's approach, on the other hand, seems to rely less on sweeping rules and more on case-by-case evaluations. That will not do much to calm critics who want communities like r/The_Donald banned or the use of racial slurs made a punishable offense. Huffman seems to adopt a free speech absolutist approach, letting sunlight disinfect the world of extremist views and bigotry, or offloading the work to subreddit moderators and site administrators when appropriate.

However, that approach falls apart when it becomes inconvenient for Reddit as a company, such as when a legal and public relations nightmare looms as a result of letting neo-Nazis or illegal pornography run rampant on the site. As many Reddit users pointed out to Huffman in replies to the thread, a study published last year on the bans of r/fatpeoplehate and r/CoonTown, titled "You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech," showed clearly positive effects from banning hate communities.

"Many more accounts than expected discontinued their use of the site, and among those who remained active, there was a drastic decline (of at least 80 percent) in their use of hate speech," the authors concluded. study. "Although many subreddits saw an influx of r / fatpeoplehate and r / coontown & # 39; migrants", those subreddits did not see significant changes in the use of hate speech.In other words, other subreddits did not inherit the problem. "[19659020] Banning obnoxious groups helps clean communication platforms

Whatever Huffman's evolving approach to the issue, Reddit users seem to be the ones most directly affected by the proliferation of hate speech on the platform.

"Spez, what qualifies as a bannable hate speech for you? Because I wonder if you could justify allowing some of the things on your platform that you allow on your platform in front of Congress," wrote the user PostimusMaximus. "Zuckerberg is sitting here being interrogated for not eliminating the hate speech quickly enough due to the limitations of the AI ​​and yet you find yourself rejecting the hate speech as correct because you think it is not dangerous to allow it on your platform or because you expect t_d [r/The_Donald] to self-moderate and hopefully if they troll enough they will die on their own. "

" I think apart from Russian interference, you need to give a full answer explaining what logic is here, "the user added, linking specific threads of Reddit full of anti-Muslim hate speech in the Trump-focused subreddit. "Literally, users are allowed to spread the hate speech and pretend that it is political in some strange sense of freedom of expression, as if it were okay and nothing bad happens."
