Quantifying 4chan’s Politically Incorrect board: How do 4chan raids work?

Yeah science, bitch!

In the wake of Donald J. Trump becoming president of the United States of America, a group of researchers decided to study the discussion-board site 4chan.

Their names are Gabriel Emile Hine, Jeremiah Onaolapo, Emiliano De Cristofaro, Nicolas Kourtellis, Ilias Leontiadis, Riginos Samaras, Gianluca Stringhini, and Jeremy Blackburn.

As they themselves explained, they decided to study 4chan because the site had been largely unstudied.

So they conducted a longitudinal study of one of 4chan’s sub-communities.

They focused on /pol/, the “Politically Incorrect” board, as it was a central figure in the outlandish 2016 US election season and has often been linked to the alt-right movement and its rhetoric of hate and racism.

They collected over 8M posts over a period of two and a half months and analyzed the data.

Quantifying the data

Common knowledge

Although their findings correspond with common knowledge about /pol/ (namely, that it is a board where users use a lot of hate words, and a place from which raids on other social media platforms are organized), their main contribution was quantifying the data.

Now, thanks to them, we know that the “N-word” is the most popular hate word on /pol/, appearing in more than 2% of posts.
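
The article doesn’t describe how these frequencies were computed; a minimal sketch of that kind of measurement might look like the following, with the post list and lexicon as hypothetical placeholders rather than the study’s data:

```python
# Minimal sketch: estimate the share of posts containing each term from a
# hate-word lexicon. `posts` and `hate_terms` are hypothetical placeholders,
# not the study's actual data or dictionary.
import re

posts = [
    "example post one",
    "another example post",
]
hate_terms = ["term_a", "term_b"]  # stand-ins for a real hate-speech lexicon

def share_of_posts_containing(term, posts):
    """Fraction of posts in which `term` appears as a whole word."""
    pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    return sum(1 for p in posts if pattern.search(p)) / len(posts)

for term in hate_terms:
    print(f"{term}: {share_of_posts_containing(term, posts):.2%} of posts")
```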

They also discovered that the website most often linked on /pol/ is YouTube.

YouTube URLs are posted far more often than those of the next two sites, Wikipedia and Twitter.

Then comes Archive.is, a site that lets users take on-demand “snapshots” of a website, which is often used on /pol/ to record content – e.g., tweets, blog posts, or news stories – users feel might get deleted.
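
To give a flavor of how such a ranking can be produced, here is a minimal sketch that tallies linked domains across posts; the regex and example posts are illustrative assumptions, not the authors’ pipeline:

```python
# Minimal sketch: tally which domains are linked most often in a set of posts.
# The posts below are placeholders; a real run would iterate over the crawl.
import re
from collections import Counter
from urllib.parse import urlparse

posts = [
    "check this https://www.youtube.com/watch?v=abc123",
    "source: https://en.wikipedia.org/wiki/Example and https://twitter.com/example",
]

url_pattern = re.compile(r"https?://\S+")

domains = Counter(
    urlparse(url).netloc.lower()
    for post in posts
    for url in url_pattern.findall(post)
)

for domain, count in domains.most_common(10):
    print(domain, count)
```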

They also concluded that /pol/ users generate large amounts of original content.

They counted the number of unique images posted on /pol/ during their observation period, finding 1,003,785 unique images (almost 800GB) out of 2,210,972 total images, i.e., about 45%.
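
The article doesn’t say how uniqueness was determined; one simple way to deduplicate a collection of image files is by hashing their contents, as in this sketch (the directory path is a placeholder, and an exact byte-level hash is an assumption):

```python
# Minimal sketch: count unique images in a directory by hashing file bytes.
# `IMAGE_DIR` is a hypothetical placeholder; the paper does not spell out its
# deduplication method, so an exact content hash is assumed here.
import hashlib
from pathlib import Path

IMAGE_DIR = Path("images")  # placeholder for the collected image dump

hashes = set()
total = 0
for path in IMAGE_DIR.rglob("*"):
    if path.is_file():
        total += 1
        hashes.add(hashlib.sha256(path.read_bytes()).hexdigest())

if total:
    print(f"{len(hashes)} unique out of {total} images ({len(hashes)/total:.0%})")
```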

4chan raids!

They also studied “raiding” behavior by looking for evidence of /pol/’s hateful impact on YouTube comments.

Their conclusion was that raids originating from /pol/ are semi-organized.

As they described: “We anecdotally observe a number of calls for action consisting of a link to a target – e.g., a YouTube video or a Twitter hashtag – and the text ‘you know what to do,’ prompting other 4chan users to start harassing the target. The thread itself often becomes an aggregation point with screenshots of the target’s reaction, sharing of sock puppet accounts used to harass, etc.”

They discovered that synchronization between /pol/ threads and YouTube comments is correlated with an increase in hate speech in the YouTube comments, and they also showed that peaks of commenting activity on YouTube tend to occur within the lifetime of the /pol/ thread linking to the video.
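
The article doesn’t detail how this timing relationship was measured; a bare-bones version of the check, assuming we know a thread’s first and last post times and the timestamps of comments on the YouTube video it linked, could look like this (all data here is hypothetical):

```python
# Minimal sketch: what fraction of YouTube comment timestamps fall inside the
# lifetime of the /pol/ thread that linked the video? Timestamps are
# placeholders; the paper's actual methodology is more involved.
from datetime import datetime

thread_start = datetime(2016, 7, 1, 12, 0)
thread_end = datetime(2016, 7, 1, 20, 30)   # last post in the thread

comment_times = [
    datetime(2016, 7, 1, 13, 5),
    datetime(2016, 7, 1, 15, 42),
    datetime(2016, 7, 2, 9, 0),   # after the thread died
]

inside = sum(1 for t in comment_times if thread_start <= t <= thread_end)
print(f"{inside}/{len(comment_times)} comments within the thread's lifetime")
```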

Their work was published under an interesting title: “Kek, Cucks, and God Emperor Trump: A Measurement Study of 4chan’s Politically Incorrect Forum and Its Effects on the Web”.

How to stop the raids

After the work was published, the researchers observed how it was received on /pol/.

/pol/ began diving into the results and, as the researchers noted, the /pol/ community seemed, in its own way, to be genuinely enjoying the work.

For example, /pol/ users latched on to the finding about hate comments on YouTube.

“Much to our surprise, some of the commenters on /pol/ had cogent (and even positive!) reviews of our work. These posters reveal a side of /pol/ that is often missed by most people. It is a diverse crowd, and at least some posters are willing to critically evaluate the information that is shared”, wrote one of the researchers in a blog post.

As the researchers noted, while on the whole /pol/’s reaction was mostly benign, bordering on amusing, they did also witness its dark side.

“For example, almost immediately /pol/ users began searching for personal information about us. This culminated in a thread where a /pol/ user analyzed our previous work and social media presence, providing racially charged commentary and a final judgement as to who was responsible for the work. Within about a day, /pol/ users declared that our work was a ‘UN funded’ study. While this is completely untrue, this meme persisted for several days and was even picked up by various news aggregators and social media sites (e.g. Boing Boing), demonstrating first hand how effective /pol/ is at spreading (dis)information”, wrote one of the researchers.

The researchers didn’t end their work there.

Their study of /pol/ continued.

Emiliano De Cristofaro, Jeremy Blackburn, Nicolas Kourtellis, Ilias Leontiadis, and Gianluca Stringhini also participated in another study, published later; unlike the first one, which was limited to quantifying data from /pol/, this one took a more proactive approach.

They tried to create a model to prevent raids from /pol/.

This study is titled “‘You Know What to Do’: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks”.

They analyzed the data from 428 raided YouTube videos, comparing them to 15K regular YouTube videos that were not targeted by raiders.

Results showed that words like “black,” “police,” “white,” “shot,” “world,” “gun,” “war,” “American,” “government,” and “law” are in the top 20 terms in raided videos. Of these, the only word that appears among the top 20 in the non-raided videos is “government.” The top terms for non-raided videos were different: they include words like “god,” “fun,” “movie,” and “love.”
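
As a rough illustration of this kind of term comparison, here is a sketch using scikit-learn’s CountVectorizer; the two comment corpora below are placeholders for the raided and non-raided sets, not the paper’s data:

```python
# Minimal sketch: extract the top terms from two comment corpora and see
# which terms they share. The corpora are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer

raided_comments = ["police shot footage", "government gun law war"]  # placeholder
non_raided_comments = ["love this movie", "god this is fun"]         # placeholder

def top_terms(corpus, k=20):
    """Return the k most frequent non-stopword terms in a corpus."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(corpus).sum(axis=0).A1
    vocab = vec.get_feature_names_out()
    return [vocab[i] for i in counts.argsort()[::-1][:k]]

raided_top = set(top_terms(raided_comments))
normal_top = set(top_terms(non_raided_comments))
print("shared top terms:", raided_top & normal_top)
```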

Relying on a set of machine learning classifiers, they proposed an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan.

As they concluded, their experiments showed that they can model YouTube videos and predict those likely to be raided by off-platform communities. They proposed that YouTube could run their prediction system at upload time, determining the likelihood that a video will be raided at some point in the future.
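
The article names neither the features nor the classifiers used, so what follows is only a minimal sketch in the same spirit: a text classifier over video metadata, with toy training data and an arbitrary flagging threshold as assumptions:

```python
# Minimal sketch of a raid-likelihood classifier over video metadata
# (title + description). The features, classifier choice, toy data, and
# threshold are assumptions; the paper's actual feature set is richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (metadata text, 1 = raided, 0 = not raided)
texts = [
    "police shooting bodycam footage",
    "government gun law debate",
    "my dog learns a fun new trick",
    "movie night with friends vlog",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      LogisticRegression())
model.fit(texts, labels)

# At upload time, score a new video's metadata and flag it if the predicted
# raid probability exceeds a threshold (0.5 here is arbitrary; in practice it
# would be tuned so that only a small share of uploads is flagged).
new_video = "white house press conference on gun policy"
risk = model.predict_proba([new_video])[0][1]
print(f"predicted raid risk: {risk:.2f}", "-> flag" if risk > 0.5 else "-> pass")
```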

Using their prediction system, they estimated that only 16% of videos would require any action, and they proposed that YouTube could use the system as a filter, flagging videos that are risky or might require particular attention.

The researchers claimed that deploying their system could help YouTube reduce the need for human moderation by shrinking the set of videos that must be monitored as high-risk.

One of the solutions proposed for preventing raids from 4chan was that YouTube could temporarily disable or limit comments on high-risk videos, requiring new comments to be approved before going live.

In the end, the researchers announced that, as part of their future work, they plan to look into aggressive behavior and harassment from other communities, such as Reddit, Gab.ai, and Kiwi Farms.
