The Brief: Terrorist attacks at two mosques in Christchurch, New Zealand have left 50 people dead. The gunman live-streamed the events on Facebook and posted a "manifesto" online just before the attack.

DEEP DIVE

50 people were killed and dozens were wounded in shootings at two mosques during Friday prayers in Christchurch, New Zealand, on March 15, 2019. This terrorist attack targeting Muslims was the deadliest mass shooting in New Zealand's history.

The attacker, a 28-year-old man from Australia, live-streamed the massacre on Facebook, posted about his plans on 8chan, and published a 74-page manifesto online. The close relationship between the shooter's actions and his posts on social networking sites highlights how, on social media, hateful ideologies can spread and tragedies can become public spectacle.

The gunman's manifesto was filled with messages of hate, violence, and mass murder in the name of fascism and white supremacy. It was also heavily laden with language from meme culture. On the Facebook live stream, before entering the mosque, the shooter called out "Subscribe to PewDiePie." The phrase is a meme referencing Swedish YouTuber Felix Kjellberg, the most-subscribed-to creator on the platform. PewDiePie has been at the center of numerous controversies, including posting a "comedy" video with antisemitic messaging and Nazi imagery. Kjellberg publicly responded to being mentioned by the gunman in a tweet, stating that he felt "absolutely sickened" to have his name "uttered by this person."

The gunman's actions on social media indicate that the attack was at least partially intended as a social media spectacle. In his "manifesto," he referenced multiple internet memes and made troll-like statements that appear to have been written to intentionally agitate and confuse. In his post to 8chan before the shooting, he wrote: "it's time to stop s**tposting and time to make a real life effort." S**tposting refers to deliberately flooding an online network with ironic, low-quality, or provocative content. With this statement, the gunman framed the attack as an extension of his online activity, an attempt to turn online hate into real-life violence. Other meme references and semi-ironic statements in the manifesto include a claim that he was most influenced by conservative pundit Candace Owens, and sarcastic remarks that "Spyro the dragon 3 taught me ethno-nationalism. Fortnite trained me to be a killer and to floss on the corpses of my enemies."

Although the gunman’s manifesto included ironic meme references and elements of trolling, his hate was sincere. Most of the 74-page document contained information about his agenda to advance white supremacy and “eco-fascism” through violence.

The gunman appeared in court the following day, where he was charged with murder. While handcuffed, he flashed an upside-down "OK" symbol to the cameras. That this symbol became associated with white supremacy through an online meme campaign highlights how deeply the terrorist's beliefs appear to be embedded in far-right white supremacist internet and meme culture.

By referencing PewDiePie and other recognizable elements of both mainstream and alt-right meme culture, the shooter made clear that his violent extremist beliefs and actions were wrapped up in his online presence.

8chan is a fringe network known for fostering hate and hosting content widely considered inappropriate and harmful. The slogan "embrace infamy" is featured at the center of its homepage and was echoed in the gunman's manifesto with the line "ACCEPT DEATH, EMBRACE INFAMY, ACHIEVE VICTORY." Although it is not easily accessible to the average internet user (8chan has been banned from Google's search results), it is a site where those in the know share content that is usually banned from mainstream social media. It is overrun with hateful white supremacist, fascist, sexist, anti-LGBTQ, and anti-Muslim comments, memes, and images. When the gunman posted about the attack on the forum, other 8chan commenters mostly met his planned actions with praise and support.

This is not the first time a terrorist or mass shooter has actively spread and consumed hate speech online. The gunman who killed 11 people at a Pittsburgh synagogue in 2018 posted to Gab, a controversial "free speech" platform, just before the attack. The man who murdered six people in Isla Vista, California, in 2014 was active on incel message boards and posted a video about his motivations to YouTube before the attack. Other acts of violence have likewise been broadcast on social media and/or motivated by online interactions.

The attacks at Christchurch have sparked questions about how social media gatekeepers can play a more effective role in preventing hateful and violent rhetoric from spreading on their platforms. Another pressing issue is how they can curb the proliferation of material such as footage of the shooting, the shooter's manifesto, and posts glorifying the attacks. After the shootings, the live video, which was shot with a helmet camera and somewhat resembled a first-person shooter video game, was removed from Facebook along with related posts made by the shooter and others. However, the video, the manifesto, and other content supporting or condoning the attacks continue to surface on various social media sites. Some people who reposted the video edited it so that detection algorithms would not recognize it. Default autoplay features on sites like Twitter could cause footage from the shooting to start playing automatically in people's feeds, subjecting users to horrific images.

On Reddit, many posts about the attacks were removed from the platform and the subreddit Pewdiepie Submissions was made private. Subreddits that share information from sites like 8chan, 4chan, and Liveleak were censored as well.

Facebook, Twitter, and YouTube all responded with statements about the actions they are taking to keep hate speech and violent imagery off their sites. In spite of this, many critics see these social media giants' efforts as insufficient, since their platforms have long been breeding grounds for conspiracy theories, fake news, and hateful ideologies. When clicks and seconds of view time are the top priorities for recommendation algorithms, those algorithms often suggest increasingly extreme content. Hateful content can then fester in ways that, while not directly intended, are harmful.

A combination of algorithms, user reporting, and low-paid content moderators are the main tools used to screen out inappropriate, violent, and triggering content. The spread of harmful material before and after this tragedy revealed how imperfect these methods are.

The Christchurch attacks and social media are undeniably intertwined. The exact extent to which social media plays a role in radicalizing assailants and spreading graphic imagery is unclear and up for debate, but it should not be ignored.

The author of this thread, Ellie Hall, points out the discrepancy between how ISIS's social media presence has been widely banned while white nationalist extremist content remains widespread online. She asks whether the same tools used to effectively block ISIS recruiting efforts will be applied to suppress harmful white supremacist propaganda.

In addition to comprehensive gatekeeping, digital media literacy is crucial for people young and old to understand what they see online, how hate can go viral, and whether information found online is reputable.