Facebook Unfriends White Nationalism, but Is This Enough?

Towards the end of last month, Facebook announced its new policy on banning white nationalism and separatism, a policy that will also apply to Instagram.

After months of conversations and arguments from experts and civil rights advocates, and following a string of hate-fueled tragedies such as the mosque shooting in New Zealand, Facebook has concluded that rhetoric aligned with white nationalism and separatism will no longer be given a platform on Facebook.

What’s interesting is that Facebook had technically already banned white supremacy. When it came to white nationalist and separatist groups, however, Facebook had historically claimed that they “don’t seem to be always associated with racism (at least not explicitly).”

This argument stands in stark contrast to the views of experts in the field. Educators and experts argue that white nationalism, separatism, and supremacy are all rooted in organized hate, and that their histories intertwine. The head of the Southern Poverty Law Center’s Intelligence Project, Heidi Beirich, has said, “white nationalism is something that people like David Duke [former leader of the Ku Klux Klan] and others came up with to sound less bad.” This analysis is backed up by reporting in Vice, which found that white nationalists try to distinguish themselves by saying things like, “I’m not racist. I’m a nationalist,” or “I’m not a white supremacist. I’m a white nationalist.”

Facebook has described this policy change as one of its most challenging projects, because it raises the problem of how to be cautious and sensitive about what and whom to ban. No doubt, the lines can get blurry, and unfortunately, the right to freedom of speech is an argument that extremists have used in their defense. The best way to distinguish between banning hate speech and stifling freedom of speech, however, is to look at what consequences this alleged “freedom of speech” leads to. Does it incite violence against groups of people? If so, what we’re looking at is less freedom of speech and more hate speech.

Right now, Facebook plans to ban any posts that express pride in being a white nationalist or separatist. Anyone who searches for or posts about white nationalism and separatism will be directed to LifeAfterHate, a non-profit organization founded by a self-proclaimed ex-racist. The hope is that this will help individuals who harbor extremist views to reconsider their beliefs. LifeAfterHate explains that “online radicalization is a process, not an outcome,” and it wants to step into that process in hopes of change. Google has also taken a similar step for searches involving ISIS.

After Facebook’s announcement, Twitter and YouTube were questioned about their policies, but both claimed their existing policies already included the necessary restrictions.

This is a significant, if much-delayed, step for Facebook. No doubt, policies like these should have been implemented from the beginning, or at least after the shameful displays of hatred at the Unite the Right rally in Charlottesville in 2017.

Hate and violence shouldn’t have any place on a safe internet, or in the real world. But as we all know, eliminating them entirely is hard. The best we can do is resist and work to shut down these voices. Social media makes it easy to reach a large audience, so it is critical to monitor it closely to ensure it doesn’t become a breeding ground for hatred.

I think the fact that these tech giants are listening and working to become better and safer matters, however late it comes. Drastic change is needed, and this is a process, not a destination. While there is always room to improve, there’s no doubt that this ban is a step in the right direction.