After the work was published, the researchers observed how it was received on /pol/.
/pol/ began diving into the results and, as the researchers noted, the community seemed, in its own way, to be genuinely enjoying the work.
For example, posters latched on to the finding about hate comments on YouTube.
“Much to our surprise, some of the commenters on /pol/ had cogent (and even positive!) reviews of our work. These posters reveal a side of /pol/ that is often missed by most people. It is a diverse crowd, and at least some posters are willing to critically evaluate the information that is shared”, wrote one of the researchers in a blog post.
As the researchers noted, while /pol/’s reaction was on the whole benign, bordering on amusing, they also witnessed its dark side.
“For example, almost immediately /pol/ users began searching for personal information about us. This culminated in a thread where a /pol/ user analyzed our previous work and social media presence, providing racially charged commentary and a final judgement as to who was responsible for the work. Within about a day, /pol/ users declared that our work was a “UN funded” study. While this is completely untrue, this meme persisted for several days and was even picked up by various news aggregators and social media sites (e.g. Boing Boing), demonstrating first hand how effective /pol/ is at spreading (dis)information”, wrote one of the researchers.
The researchers didn’t end their work there; their study of /pol/ continued.
Emiliano De Cristofaro, Jeremy Blackburn, Nicolas Kourtellis, Ilias Leontiadis, and Gianluca Stringhini also participated in another study, published later. Unlike the first one, which was limited to quantifying data from /pol/, this one took a more proactive approach.
They tried to create a model to predict, and thereby help prevent, raids from /pol/.
This study is titled “‘You Know What to Do’: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks”.
They analyzed the data from 428 raided YouTube videos, comparing them to 15K regular YouTube videos that were not targeted by raiders.
Results showed that words like “black,” “police,” “white,” “shot,” “world,” “gun,” “war,” “American,” “government,” and “law” are in the top 20 terms in raided videos. Of these, the only word that appears among the top 20 in the non-raided videos is “government.” The top terms for non-raided videos were different: they include words like “god,” “fun,” “movie,” and “love.”
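The kind of comparison the researchers describe can be illustrated with a short sketch. The snippet below is only a toy reconstruction, not the paper’s actual pipeline: the corpora, tokenization, and stop-word list are invented for illustration. It counts the most frequent terms in two sets of comments and intersects the top lists:

```python
# A minimal sketch of comparing top terms across two comment corpora.
# The paper's real preprocessing is not reproduced here; the example
# data, tokenizer, and stop-word list are all assumptions.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "on"}

def top_terms(comments, n=20):
    """Return the n most frequent terms across a list of comment strings."""
    counts = Counter()
    for text in comments:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return {term for term, _ in counts.most_common(n)}

# Hypothetical corpora of comments from raided and non-raided videos.
raided = top_terms(["the police shot him", "gun law debate", "government war"])
regular = top_terms(["what a fun movie", "god i love this", "government grants"])

print(raided - regular)   # terms distinctive of raided videos
print(raided & regular)   # shared terms, e.g. "government"
```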
Relying on a set of machine learning classifiers, they proposed an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan.
As they concluded, their experiments showed that they could model YouTube videos and predict those likely to be raided by off-platform communities. They proposed that YouTube could run their prediction system at upload time, determining the likelihood that a video will be raided at some point in the future.
Using their prediction system, they estimated that only 16% of videos would require any action, and they proposed that YouTube could use the system as a filter, flagging videos that are risky or might require particular attention.
The researchers claimed that deploying their system could help YouTube reduce the need for human moderation by shrinking the set of videos that must be monitored as high risk.
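As a rough illustration of how such an upload-time filter might look, here is a minimal sketch. The paper’s actual features and classifiers are not reproduced here: TF-IDF over video metadata and logistic regression are stand-ins, and the training data and threshold are hypothetical.

```python
# A minimal sketch of an upload-time "raid risk" filter. The features,
# classifier, training data, and threshold below are all hypothetical
# stand-ins, not the pipeline from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: video title/description text, labeled 1 if
# the video was later raided, 0 otherwise.
texts = [
    "police shooting bodycam footage gun law debate",
    "my dog learns a fun new trick",
    "government war crimes documentary",
    "god this movie is so full of love",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(texts, labels)

def flag_at_upload(metadata_text, threshold=0.5):
    """Return True if the video should be routed for closer monitoring."""
    raid_probability = model.predict_proba([metadata_text])[0, 1]
    return raid_probability >= threshold

# Only flagged videos (an estimated ~16% in the study) would need action,
# e.g. holding new comments for approval on a high-risk upload.
if flag_at_upload("white house gun control shooting debate"):
    print("hold new comments for approval")
```

In a setup like this, the threshold is the lever that trades moderator workload against how many raid-prone videos slip through unflagged.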
One of the solutions proposed for preventing raids from 4chan was that YouTube could temporarily disable or limit comments on high-risk videos, requiring new comments to be approved before going live.
In the end, the researchers announced that, as part of their future work, they planned to look into aggressive behavior and harassment from other communities, such as Reddit, Gab.ai, and Kiwi Farms.