Modeling the Impact of Fake News on Citizens

Related publication: Tulk, S., Bagheri Jebelli, N., Kennedy, W. G., “Modeling the Impact of Fake News on Citizens”. Proceedings of the 16th Annual Meeting of the International Conference on Cognitive Modelling (pp. 187-192), Madison, USA: University of Wisconsin, 2018.

Goal: The impact of “fake news” on the 2016 presidential election became a serious concern after the surprising results. The volume of fake news on social media, which people used as a serious news source, could have significantly affected voters’ opinions. It is important to consider how social and cognitive processes were affected by this fake news to estimate the true impact of this computational propaganda technique. We built a cognitive model of a citizen deciding what to believe when encountering election stories on social media, eventually developing an opinion and using motivated reasoning to help determine which stories are true. Modeling 100 citizens, we assembled polls of the agents over the 9 months leading up to the election; these polls replicate the qualitative characteristics of actual polls, though many questions remain outside the purview of cognitive modeling.
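The micro-level process described above can be sketched as a simple agent-based simulation. This is a minimal illustration, not the paper's implementation: the number of daily stories and the simulation length come from the text, but the `Citizen` class, the accuracy range, the story mix, and the strength of the motivated-reasoning bias are all illustrative assumptions.

```python
import random

NUM_AGENTS = 100       # citizens polled, as in the paper
DAYS = 270             # roughly the 9 months before the election
STORIES_PER_DAY = 10   # news items each citizen processes daily

class Citizen:
    """An agent with no initial opinion who decides which stories to
    believe, gradually developing a motivated-reasoning bias."""

    def __init__(self):
        self.opinion = 0.0  # < 0 favors the D candidate, > 0 favors R
        # Truth-detection skill; the range here is an assumption.
        self.accuracy = random.uniform(0.6, 0.9)

    def process(self, story):
        leaning, is_fake = story
        # Motivated reasoning (assumed form): stories that agree with
        # the current opinion are more likely to be believed, fake or not.
        agreement = leaning * self.opinion
        p_believe = (1 - self.accuracy) if is_fake else self.accuracy
        p_believe = min(1.0, max(0.0, p_believe + 0.2 * agreement))
        if random.random() < p_believe:
            self.opinion += 0.05 * leaning  # small opinion update

def make_story(troll_condition):
    """Random story: leaning is +1 (pro-R/anti-D) or -1 (pro-D/anti-R).
    The troll condition injects extra fake pro-R stories (assumed mix)."""
    if troll_condition and random.random() < 0.3:
        return (+1, True)
    return (random.choice([-1, +1]), random.random() < 0.1)

def run_poll(troll_condition, seed=0):
    """Run the full simulation and return a final 'poll': the share
    of agents whose opinion favors the R candidate."""
    random.seed(seed)
    agents = [Citizen() for _ in range(NUM_AGENTS)]
    for _ in range(DAYS):
        for agent in agents:
            for _ in range(STORIES_PER_DAY):
                agent.process(make_story(troll_condition))
    return sum(a.opinion > 0 for a in agents) / NUM_AGENTS

if __name__ == "__main__":
    print("R share, baseline:       ", run_poll(False))
    print("R share, troll condition:", run_poll(True))
```

Even this toy version reproduces the qualitative dynamic discussed below: once an agent's opinion hardens, agreeing stories are believed almost automatically and disagreeing (even true) stories are rejected, so early exposure to trolled content shifts where agents lock in.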

Conclusion: The issue of “fake news” had been a source of humor, but it now appears that fake news can affect the public’s understanding enough to possibly change the outcome of a presidential election. The data on the frequency and type of fake news items circulated on social media prior to the 2016 election were enough to cause our cognitive model of a US citizen to change the outcome of an election when averaged over 100 runs. The influx of fake news reduces the absorption of true information and increases the amount of fake news that is believed. This makes some sense, and the message is that a person’s capability to process truth and update an opinion is hampered by the influx of fake news. Additionally, while there were more real anti-R/pro-D stories overall, the adoption of biases that were explicit (increased belief in fake news) and implicit (propaganda effect) against the heavily trolled candidate seemed to drive down that candidate’s popularity in the Troll condition. Still, the greater volume of real coverage for the R candidate also seemed to increase the R candidate’s popularity, in a “no press is bad press” fashion (see Table 3). While it would be encouraging to believe that real people never start to doubt true information, it is likely that people do not begin to develop an opinion with 100% truth-detecting accuracy, and therefore some immunity to the truth can develop over time. The most dramatic impact occurs for those individuals who have the weakest discriminatory power before developing political bias (see Figure 5).

While our model produced polling results that fit relatively well with the true polls (see Figure 2 vs. 3), there were a few limitations. We modeled our citizen to process about 10 news items daily, spread evenly over 24 hours per day, for the 9 months prior to the election. Our model begins with no prior political identity or opinion of either candidate.
The average real citizen would likely have had some political identity before the 2016 election cycle; such an identity tends to lead people to surround themselves with like-minded individuals, which would have affected their true rate of exposure to partisan stories (real and fake). Additionally, our model understood these stories only as simplified chunks, without attending to the language of a headline, the user who posted it, or the source that published it, all of which would enter into the consideration of validity. Future work will seek to address these processes. This work is an example of the type of modeling possible in the field of computational social science, where models of individual agents reacting to their environment and other agents can demonstrate possible macro-level results from relatively simple micro-level agents. Combining cognitive modeling and computational social science improves the credibility of results.