A team of researchers has successfully demonstrated a method of mitigating fake news on social media networks by combining reinforcement learning with a point process network activity model.
Because flagging, fact-checking, and reporting news as ‘fake’ or suspect requires a high degree of human oversight, researchers from Georgia Tech and Georgia State University propose mitigating fake news by increasing a user’s exposure to real news instead. In the study, that exposure is optimized by a purpose-built scheduling algorithm, which requires very little human oversight once deployed. This strategy is intended to combat the proliferation of fake news by expanding the reach of real news throughout the social network.
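The point-process model underlying this approach treats user activity as a stream of timestamped events whose rate jumps after each post and then decays, as in a Hawkes process. The function and parameter values below are an illustrative sketch of that idea, not the researchers' actual model:

```python
import math

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Event rate at time t: a base rate mu plus exponentially decaying
    excitation contributed by every past event (a Hawkes process).
    mu, alpha, and beta are illustrative values, not from the study."""
    return mu + sum(alpha * math.exp(-beta * (t - s))
                    for s in event_times if s < t)

# After a burst of posts at t = 1, 2, 3, activity at t = 3.5 is elevated
# above the base rate, reflecting the self-exciting nature of sharing:
rate = hawkes_intensity(3.5, [1.0, 2.0, 3.0])
```

Self-excitation is what makes such models suitable for social networks: each share raises the short-term likelihood of further shares, which is exactly the dynamic an intervention algorithm must steer.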
This method showed promising results in a simulation conducted on Twitter, where the influence of fake news versus valid news was quantified by counting the number of times each user was exposed to each type of article.
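The exposure-count metric can be illustrated with a toy follower graph (the account names and network below are hypothetical, not the study's data): a user is counted as exposed to an article each time an account they follow posts it, and influence is the total exposure count per article type.

```python
from collections import Counter

# follows[user] = set of accounts that user follows (toy, made-up network)
follows = {
    "alice": {"real_bot", "fake_bot"},
    "bob":   {"real_bot"},
    "carol": {"fake_bot"},
}

# Each post is (author, article_type).
posts = [("real_bot", "real"), ("real_bot", "real"), ("fake_bot", "fake")]

exposure = Counter()
for author, kind in posts:
    for user, followed in follows.items():
        if author in followed:
            exposure[kind] += 1  # one exposure event per following user
```

In this toy network the two real-news posts reach two followers each, so real-news exposures (4) outnumber fake-news exposures (2), which is the kind of imbalance the mitigation strategy aims to produce.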
In the real-time experiment, the researchers set up five Twitter accounts that posted randomly over a span of two months, garnering 1,894 direct followers and, once their followers were factored in, a total network of over 23,000 users. Two of these accounts were designated to disseminate fake news, continuing to post articles at random times throughout the day. The remaining three accounts posted pieces on a schedule produced by a Least-Squares Temporal Difference (LSTD) algorithm. This scheduling algorithm was designed to post valid news stories at the most effective moments to counter the proliferation of fake news stories, which it did successfully in the experiment.
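LSTD is a standard reinforcement-learning method that fits a linear value function to logged transitions by solving a system of equations, rather than by iterative updates. The sketch below shows the core computation on a made-up "time slot" example; the feature encoding, rewards, and slot structure are illustrative assumptions, not the researchers' implementation:

```python
import numpy as np

def lstd(transitions, n_features, gamma=0.9):
    """Least-Squares Temporal Difference: accumulate
    A = sum phi (phi - gamma * phi')^T and b = sum phi * r,
    then solve A w = b for the linear value-function weights w."""
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for phi, reward, phi_next in transitions:
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * reward
    # Small ridge term keeps A invertible on sparse data.
    return np.linalg.solve(A + 1e-6 * np.eye(n_features), b)

def one_hot(i, n=3):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Toy data: three one-hot "posting slot" states cycling 0 -> 1 -> 2 -> 0,
# where slot 1 yields the largest exposure reward.
transitions = [
    (one_hot(0), 1.0, one_hot(1)),
    (one_hot(1), 5.0, one_hot(2)),
    (one_hot(2), 1.0, one_hot(0)),
]
w = lstd(transitions, n_features=3)
best_slot = int(np.argmax(w))  # schedule real-news posts in this slot
```

A scheduler built this way posts real news in whichever slot the learned value function rates highest, which matches the article's description of posting valid stories at the most effective moments.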
The researchers noted that results did not include retweets outside the specified network of 23,000 users, since Twitter’s ‘hashtag’ feature allows posts to be seen by a much larger set of users; nor did they include results for ‘likes.’ Even measuring in-network retweets alone, however, the group successfully demonstrated a mitigation strategy that combats the spread of fake news by proliferating real news to the same pool of users.
The study also found that as the size and complexity of the network studied grows, methods of targeted mitigation become increasingly effective.
As the researchers themselves note, “our experiment serves as a proof-of-concept for the applicability of point process based intervention in networks, and – to the best of our knowledge – is the first to verify the superiority of a method in a real-time, real-world intervention setting.”