Report

Dead reckoning: Navigating content moderation after "fake news"

16 Feb 2018
Description

“Fake news” has become an intractable problem, and reckoning with it requires mapping new pathways for online news verification and delivery. Since the 2016 election, the phrase has been a daily fixture of U.S. political discourse, with its contested meanings falling increasingly along partisan lines. On the one hand, it has been appropriated by political actors to extend critiques of “mainstream media” that long predate the current moment. On the other, “fake news” has been taken up by a wide range of policymakers, journalists, and scholars to refer to problematic content, such as propaganda and other information warfare campaigns, spreading over social media platforms and search engines. This white paper clarifies uses of “fake news,” with an eye towards the solutions that have been proposed by platform corporations, news media industry coalitions, media-oriented civil society organizations, and governments. For each proposed solution, the question is not whether standards for media content should be set, but who should set them, who should enforce them, and what entity should hold platforms, the media industry, states, and users accountable. “Fake news” is thus not only about defining what content is problematic or false, but also about what constitutes credible and legitimate news in the social media era.

- “Fake news” has become a politicized and controversial term, used both to extend critiques of mainstream media and to refer to the growing spread of propaganda and other problematic content online.

- Definitions that point to the spread of problematic content rely on assessing the intent of those who produce and share news, separating content into clear and well-defined categories, and/or identifying features that machines or human reviewers can use to detect “fake news” content.

- Strategies for limiting the spread of “fake news” include trust and verification initiatives, disrupting economic incentives, de-prioritizing content and banning accounts, and limited regulatory approaches.

- Content producers learn quickly and adapt to new standards set by platforms, using tactics such as adding satire or parody disclaimers to bypass standards enforced by content moderators and automated systems.

- Moderating “fake news” well requires understanding the context of both the article and its source. Automated technologies and artificial intelligence (AI) are not yet advanced enough for this task, which still requires human-led intervention.

- Third-party fact-checking and media literacy organizations are expected to close the gap between platforms and the public interest, but they are currently under-resourced to meet this challenge.

Publication Details
Language: English
License Type: CC BY
Publication year: 2018