Science Debate and Miscommunication on Social Media


Submitted by Emily Harari to fulfill the ethics in science requirement for the Young Scientist Program at BMSIS.


Background

Media Bias in American Media

The Founding Fathers of the United States complained of media bias as early as the early 1800s, claiming it targeted their campaigns (1). The phenomenon is not new, but social media has increased its prevalence. With fewer barriers to publication, almost anyone can disseminate information without adhering to journalistic standards.

Most conversations about media bias pertain to politics. On his 2016 campaign trail, President Donald Trump popularized the term “fake news”: intentionally incorrect information presented as fact to mislead readers (2). International actors also leverage disinformation campaigns to influence other countries’ voters, as the mounting evidence of Russian involvement in the 2016 U.S. election illustrates (3). In response to domestic and foreign media manipulation, the American public’s trust in the media has declined.

A 2017 Gallup survey revealed that 62% of Americans believed the news media was biased, and almost two-thirds of those respondents believed the media favored the Democratic Party (4). Recent studies suggest that journalists do not exhibit a left-leaning bias in which news stories they choose to cover (5). Other research, however, reveals a bias in how they convey those stories: in local Senate elections, the overall tone of coverage favors Democratic candidates (6).

Most journalists in the U.S. and across Europe identify as left-leaning, but, in the name of journalistic professionalism, they often report for news outlets with more conservative leanings than their own (6). With social media, however, more Americans consume news from outlets that follow no professional journalistic standards. Because technology companies do not disclose their content moderation practices, many Americans’ perceptions of media bias are left to speculation. Conservative leaders and organizations have accused these companies, including Facebook and Google (specifically YouTube), of left-leaning bias (7). These clashes threaten to further polarize American society.

Science Communication & Media Bias

Often, science enters the public dialogue as a proxy for politics (8). Citing science can ground a policy proposal in seemingly indisputable facts, but when those proposals stir controversy, the science itself often becomes the target of competing arguments. When skepticism gives way to denial, the opposite effect is achieved: the public stops trusting scientists and confirms its own suspicions of media bias.

The Controversy

To what extent is science debatable in the public arena? Social media presents opportunities to engage large masses of people in dialogue, but it also risks the dissemination of false information. Should technology companies like Facebook and Google monitor science communication, and should they be allowed to flag or remove instances of science miscommunication? These questions can be examined through three ethical frameworks: deontology, consequentialism, and virtue ethics.

Ethical Frameworks

Deontology

The U.S. Constitution is a legal code all Americans must follow, and its First Amendment guarantees Americans freedom of speech. There are exceptions, however, in the form of nuances added by U.S. Supreme Court rulings. ‘Shouting fire in a crowded theater,’ for example, is a common analogy for one such ruling, which held that speech intended to incite panic is not protected under the First Amendment (9).

U.S. corporations, although American, are not bound by the Constitution the same way individuals are (10). Instead, the large technology companies have drafted their own ethical codes. As part of those company policies, they have committed to stopping misinformation and false news using their own algorithms and other decision-making factors (11).

As customers of these large companies, social media users may have relinquished some freedom of speech when they agreed to the terms of service upon creating their accounts. On the other hand, these platforms have become so ubiquitous and ingrained in American culture that they may be viewed less as privately owned virtual spaces than as public forums. In Pruneyard Shopping Center v. Robins, the Supreme Court upheld the principle that the parts of a private property freely opened to the public can become a shared space where constitutional speech protections reassert themselves (12). Even so, these companies could still take down content from their sites: if they could prove ill intent on the part of a content’s creators, they could justify removing material that does not align with their ethical codes. The legal code of the United States and the ethical codes of these companies are thus at odds.

Moreover, many social media companies are multinational corporations, operating in countries whose laws challenge both the U.S. Constitution and the companies’ own ethical codes. The Chinese government, for example, bans companies like Facebook and Google altogether (13). China’s deontology prioritizes the security of the nation, whereas American deontology assigns greater priority to civil liberties. As industry and technology globalize, companies continue to struggle to reconcile these conflicting codes.

Consequentialism

These technology companies could argue that scientifically inaccurate information, if not removed from their platforms, could spread across the internet. Studies in human psychology suggest that people are bad at unlearning information, that is, at convincing themselves that what they previously held to be true was actually false (14). As a result, some would argue that the damage from leaving scientifically inaccurate content up could be irreversible and widespread. Moreover, it is easy for social media users to fall into an ‘echo chamber,’ in which the only media they consume confirms their existing beliefs (15). In this positive feedback loop of misinformation, the consequences of not removing fake news could significantly worsen.

On the other hand, keeping scientifically dubious information on the internet may encourage more communication overall on relevant scientific topics, and with more dialogue may come more education. Allowing the public to freely navigate information signals trust in their capacity to process it and reach their own conclusions. When people notice restricted speech or detect censorship, they may feel that this trust has been violated, which could amplify distrust of the media and drive skeptics toward less reliable sources. Scientific knowledge is often perceived as inaccessible, the province of universities, so shutting down conversations among people with limited scientific backgrounds who are nonetheless attempting to engage with science could reinforce those barriers to equal opportunity.

In addition to these societal impacts, there are economic factors to consider. Profit can serve as a metric for consequentialist decision-making, and social media companies rely largely on advertising for revenue (16). If scientific news conflicts with the interests of company investors or of clients who purchase many ads, it may be in the company’s best interest to suppress or remove that information from its platform. For example, environmental news on the disadvantages of fossil fuels may appear on the platform and dissuade automobile companies from advertising their gas vehicles on the same site. The company may prefer to reap the immediate benefits of increased ad sales at the expense of educating its audience and protecting the voice of environmentalists.

In the long run, content suppression will likely breed resentment among consumers. An informed purchase depends on being able to weigh all the available options and information. If people distrust the content they are viewing, their behavior may change: they may become frustrated and unwilling to engage with any content, whether science news or ads. It may therefore be in the companies’ long-term interests, and in society’s overall interest, to permit scientifically dubious information to remain on their platforms. If, however, the companies modify their algorithms to present more varied content, they may strike a better balance between tailored feeds and the ‘echo chamber.’

Virtue Ethics

The morals underlying conversational etiquette suggest a human desire to be heard and listened to. When people relinquish their talking time in a conversation, they expect the favor to be reciprocated once they have an idea to contribute. Allowing others to voice their arguments, even factually tenuous ones, would exhibit this virtue.

This virtue is often exhibited in the classroom through the Socratic Method (17). The method embraces disagreement and strives to promote open dialogue among all students, so that they can discover their own values in the process of articulating them and listening to others. As an educational tool, it focuses on teaching morals; whether it belongs in scientific debate is a question Socrates never addressed.

Scientists, however, may object to the idea that every argument is inherently equal in value. In an effort to be fair, the now-defunct FCC fairness doctrine required broadcasters to give air-time to both sides of a contentious issue; it was succeeded by the equal-time rule, which applies only to political candidates (18). But given the political polarization of science, individuals acting as spokespeople for opposing views could effectively preserve what the fairness doctrine once mandated. Some scientists would argue that this is unfair to researchers: air-time on scientific matters is not an entitlement but something to be earned, and the time allotted to an argument should be proportional to the evidence mounted for it. By accumulating scientific evidence for a claim, a person earns their place in the conversation. Otherwise, anyone with an idea would be entitled to join, and the conversation could quickly become disorganized and unproductive.

Conclusion

Through active social media usage, the public has shown its interest in learning about and discussing science. Scientific subjects are also frequently invoked in political discourse. Increasingly, people without scientific backgrounds communicate technical subjects without employing the accepted scientific framework for investigating them. The result is diversity of thought, but also inconsistent standards for what counts as acceptable and legitimate information. Whether technology companies are responsible for monitoring scientific discourse is an ethical question rooted in civil liberties and human nature.

Whether in casual conversation or rigorous debate, individuals strive for fairness, and these virtues of mutual respect permeate consumer behavior. Meanwhile, public trust in the media is declining, and it is in the best interest of technology companies to repair it. These companies are beholden to two groups: their clients, the providers of content and goods, and their constituents, the consumers. To improve the experience for their constituents, the companies strive for connectedness, hence the name ‘social networks.’ For their clients, however, this connectedness can present unwanted competition. It is imperative that these companies distinguish between the two interest groups and prioritize their constituents, for far more is at stake. Should distrust continue to rise, it will spread to other facets of societal behavior. A threat to open discourse today is a threat to the collective mentality and well-being of society; it will impair decision-making and, in turn, discourage societal and economic growth.

References

  1. “Media Bias in the United States.” Wikipedia, Wikimedia Foundation, 10 July 2020, en.wikipedia.org/wiki/Media_bias_in_the_United_States#cite_note-pubs.aeaweb.org-7.
  2. Allcott, Hunt, and Matthew Gentzkow. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives, vol. 31, no. 2, 2017, pp. 211–236., doi:10.1257/jep.31.2.211. 
  3. Bradshaw, Samantha, and Philip N. Howard. “The Global Organization of Social Media Disinformation Campaigns.” JIA SIPA, Columbia University, Journal of International Affairs, 27 Sept. 2018, jia.sipa.columbia.edu/global-organization-social-media-disinformation-campaigns. 
  4. Swift, Art. “Six in 10 in U.S. See Partisan Bias in News Media.” Gallup.com, Gallup, 2 Nov. 2017, news.gallup.com/poll/207794/six-partisan-bias-news-media.aspx. 
  5. Hassell, Hans J. G., et al. “There Is No Liberal Media Bias in Which News Stories Political Journalists Choose to Cover.” Science Advances, American Association for the Advancement of Science, 1 Apr. 2020, advances.sciencemag.org/content/6/14/eaay9344. 
  6. Schiffer, Adam J. “Assessing Partisan Bias in Political News: The Case(s) of Local Senate Election Coverage.” Political Communication, vol. 23, no. 1, 2006, pp. 23–39., doi:10.1080/10584600500476981. 
  7. Schwartz, Oscar. “Are Google and Facebook Really Suppressing Conservative Politics?” The Guardian, Guardian News and Media, 4 Dec. 2018, www.theguardian.com/technology/2018/dec/04/google-facebook-anti-conservative-bias-claims. 
  8. Sarewitz, Daniel. “Liberating Science from Politics.” American Scientist, Sigma Xi, The Scientific Research Honor Society, 16 July 2019, www.americanscientist.org/article/liberating-science-from-politics. 
  9. “Shouting Fire in a Crowded Theater.” Wikipedia, Wikimedia Foundation, 8 July 2020, en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_theater. 
  10. Maltby, Lewis. “Can Bosses Do That? As It Turns Out, Yes They Can.” NPR, NPR, 29 Jan. 2010, www.npr.org/templates/story/story.php?storyId=123024596. 
  11. Mosseri, Adam. “Working to Stop Misinformation and False News.” Working to Stop Misinformation and False News | Facebook Media, Facebook, 7 Apr. 2017, www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news. 
  12. “Pruneyard Shopping Center v. Robins.” Wikipedia, Wikimedia Foundation, 3 July 2020, en.wikipedia.org/wiki/Pruneyard_Shopping_Center_v._Robins. 
  13. Leskin, Paige. “Here Are All the Major US Tech Companies Blocked behind China’s ‘Great Firewall’.” Business Insider, Business Insider, 10 Oct. 2019, www.businessinsider.com/major-us-tech-companies-blocked-from-operating-in-china-2019-5. 
  14. “The Science of Smart: How To Unlearn Mistaken Ideas.” PBS, Public Broadcasting Service, 24 Apr. 2013, www.pbs.org/wgbh/nova/article/the-science-of-smart-how-to-unlearn-mistaken-ideas/. 
  15. “Echo Chamber (Media).” Wikipedia, Wikimedia Foundation, 26 June 2020, en.wikipedia.org/wiki/Echo_chamber_(media). 
  16. Cooper, Paige. “43 Social Media Advertising Stats That Matter to Marketers in 2020.” Hootsuite Social Media Management, Hootsuite Inc., 23 Apr. 2020, blog.hootsuite.com/social-media-advertising-stats/. 
  17. “The Socratic Method: What It Is and How to Use It in the Classroom.” Tomorrow’s Professor Postings, Stanford University, tomprof.stanford.edu/posting/810.
  18. “Equal-Time Rule.” Wikipedia, Wikimedia Foundation, 5 June 2020, en.wikipedia.org/wiki/Equal-time_rule. 

Emily Harari is a science communicator. She studied molecular and cell biology at UC Berkeley and has enjoyed working in biotechnology startups. She aspires to promote trust and progress in biotech by applying her research experience and writing skills.