That's Fake News!: Reliability of News When Provided Title, Image, Source Bias & Full Article
As news is increasingly spread through social media platforms, the problem of identifying misleading or false information (colloquially called “fake news”) has come into sharp focus. Many factors may help users judge the accuracy of news articles, ranging from the text itself to meta-data such as the headline, an image, or the bias of the originating source. In this research, participants (n = 175) of various political ideological leanings categorized news articles as real or fake based on either article text or meta-data. We used a mixed methods approach to investigate how various article elements (news title, image, source bias, and excerpt) affect users’ accuracy in identifying real and fake news. We also compared human performance to automated detection based on the same article elements and found that automated techniques were more accurate than our human sample; in both cases, the best performance came not from the article text itself but from certain elements of meta-data. Adding the source bias did not help humans, but it did help automated detectors. Open-ended responses suggested that the image in particular may be a salient element for humans detecting fake news.
Spezzano, Francesca; Shrestha, Anu; Fails, Jerry Alan; and Stone, Brian W. (2021). "That's Fake News!: Reliability of News When Provided Title, Image, Source Bias & Full Article". Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 109. https://doi.org/10.1145/3449183