Over the past couple of years, in both my work and my personal life, I have experienced great uncertainty about how to balance screen time in and out of educational settings so that people can learn productively and connect meaningfully. The massive influx of new classroom tech tools, paired with a lack of targeted support, can be frustrating for educators, and safety concerns and uncertainty keep many of them from building strong online communities. During the pandemic, it was hard to narrow down which tools to use, since so many were being pushed at once. This blog post addresses concerns about privacy, data ownership, social-emotional effects, societal echo chambers, and other flaws of algorithmically mediated communication via social media.
Over the past decade, EdTech researchers and practitioners alike have become more focused on the addictive and absorbing qualities of social media. When devising an effective social media strategy, or deciding whether or not to deploy social media platforms as teaching tools, the digital well-being of the target audience comes into play. In a blog post on digital well-being, Chryssa Themelis described the perceived risks of social media: “personal relations, may become less meaningful due to lack of face-to-face communication and dialogue even though sharing (photos, tweets and documents) is a prominent cultural norm” (Themelis, 2018).
One set of negative effects of social media relates to social comparison, a psychological process in which people evaluate themselves by comparing their own attitudes, abilities, and traits with those of others. Social media channels come equipped with ubiquitous comparison information and readily accessible feedback in the form of followers, likes, comments, shares, etc. Such information allows people to form impressions of others quickly, in a salient and visible way, which can promote social anxiety (Jiang & Ngien, 2020). A study by Stanford and New York University researchers documented that quitting Facebook improved mental health in the short term (Allcott, Braghieri, Eichmeyer, & Gentzkow, 2020). Widely reported leaks from Facebook’s internal research suggest a substantially negative effect of Instagram on the mental health of female teenagers (e.g., Wall Street Journal, September 2021). A qualitative study of female college students documented that participants frequently compared their looks or the number of likes/comments with others and were concerned with how others perceived their appearance on Instagram (Baker, Ferszt, & Breines, 2019).
Social media funnel a constant stream of social influence into our lives. What we believe, what we consume, how we act, and how we feel are largely influenced by others. Thus, the makeup of our online communities is both crucial and significantly different from that of offline social networks. Homophily, the psychological tendency to surround ourselves with others who share our perspectives and opinions about the world, is an organizing principle underpinning many social media sites, in which algorithms amplify our preference for sameness. Social media therefore limit exposure to diverse perspectives and favor the formation of groups of like-minded users framing and reinforcing a shared narrative, that is, echo chambers (Cinelli et al., 2021). These echo chambers collide frequently, in predictable patterns of ever-increasing polarization. Digital humanist Nishant Shah describes Web 3.0 as ‘an entertainment-hate complex’ (Shah, 2020): “To be online is to hate – to express it, to be the victim of it, to share it in outrage, or at least to witness it in growing glee, as people rave, rant, and rage with all their might”.
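The narrowing effect that homophilous ranking has on exposure can be illustrated with a toy simulation. This is a hypothetical sketch, not any platform's actual ranking code: opinions are numbers in [-1, 1], a `feed` function keeps only the posts closest to the user's current view, and the user drifts toward whatever the feed shows. The function names and the simple drift model are my own illustrative assumptions.

```python
import random

def feed(user_opinion, posts, bias):
    """Toy recommender: rank posts by closeness to the user's opinion
    and keep only the top slice. bias = 0 shows everything;
    bias close to 1 shows only near-identical views."""
    ranked = sorted(posts, key=lambda p: abs(p - user_opinion))
    keep = max(1, int(len(ranked) * (1 - bias)))
    return ranked[:keep]

def exposure_range(bias, steps=50, seed=0):
    """Consume the feed repeatedly, drifting toward its average opinion
    (social influence), and report the spread of views seen at the end."""
    rng = random.Random(seed)
    posts = [rng.uniform(-1, 1) for _ in range(200)]  # opinions expressed in posts
    user = rng.uniform(-1, 1)                          # user's starting opinion
    for _ in range(steps):
        shown = feed(user, posts, bias)
        user = 0.9 * user + 0.1 * sum(shown) / len(shown)  # drift toward the feed
    return max(shown) - min(shown)

# An unfiltered feed exposes the user to nearly the full opinion spectrum;
# a strongly homophilous feed confines them to a narrow band around their own view.
full = exposure_range(bias=0.0)
narrow = exposure_range(bias=0.8)
```

Under these assumptions, `narrow` comes out far smaller than `full`: the biased feed never surfaces the far end of the spectrum, which is the mechanism behind the echo chambers Cinelli et al. (2021) describe.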
In this emotionalized state, facts are deconstructed into perspectives on a reality that often depends on one’s social network, and it becomes harder to discern factually correct from incorrect statements online. Websites containing hoaxes and misleading information pop up across the Internet and are often shared on social media to increase their reach – by both human users and artificial bots, deliberately or unintentionally spreading disinformation. ‘Fake news’, the term used to describe this phenomenon, has itself become politicized and controversial, used both to criticize mainstream media and to refer to problematic content online. The World Health Organization (WHO) has referred to the scope and speed of the spread of false information linked to COVID-19 as an ‘infodemic’ that needs swift addressing. This is not a new concern: the arrival of the Zika virus in the Americas in 2015 generated frantic activity on social media focused on the rapid spread of the disease and its concerning complications. Sharma, Yadav, Yadav, & Ferdinand (2017) analyzed the discourse in the United States and found that misleading posts were far more popular than posts dispersing accurate, relevant public health information.
Tech companies have been deploying three approaches to combat fake news: deplatforming users, demonetizing content or channels, and labeling content as misinformation. It is, however, an open and extremely divisive question whether social media companies are equipped to arbitrate misinformation effectively without suppressing productive dissent and freedom of speech (Prasad, 2021).
Most social media platforms are commercial products that leverage user data for business purposes. Concerns related to data ownership, net neutrality, and privacy are interrelated with the ‘walled garden’ metaphor, defined as “closed or exclusive information services, content, or media on platforms” (Paterson, 2012, p. 97). In online advertising, two companies – Facebook and Google – essentially capture the entire market, complemented by regional alternatives such as Baidu in mainland China and Yandex in Russia. The social media platform WeChat combines a communication app, e-payments, and news distribution. Other examples of walled gardens are the Apple ecosystem and the overwhelming market share in e-commerce of the platforms Amazon and Alibaba. As these examples demonstrate, the walled garden is an issue that transcends social media. It is amplified by the vast amount of personal information that users share with and on social media platforms, and by the fact that most services are owned by a handful of large companies (for a critique of Facebook, see Sen et al., 2017). As Jaron Lanier (2021) pointed out in a podcast interview, whenever advertising is the business model, users are the product:
The problem is not the Internet or social media in a broad sense but rather specifically the use of the algorithms. When Google and Facebook and others went to the advertising business model, anytime anybody did anything, anytime anybody connected with somebody else, it was financed by a third party whose motivation was to manipulate what happened. Then the whole business model was about how to manipulate more and more. What that results in is people being directed rather than exploring, and that makes the world small. That is fundamental. You cannot make these algorithms better. You can’t say we want a better form of constant incremental manipulation of every person. The whole concept from the start is poison (Lanier, 2021, Honestly Podcast).
One cannot consider the pedagogical potential of social media without acknowledging the mounting indicators of negative effects on societal discourse and individual well-being that have led some educators to wariness or even aversion towards social media channels and companies. As Prasad (2021) predicted: “In 50 years, social media in 2021 will look like the tobacco industry in 1960 — they knowingly offered an addictive product, and, worse, hid the damage the addiction caused, while actively tried to deepen the dependency”.
Is social media content truly ‘heroin of the mind’ (Prasad, 2021), or is the initial excitement that educators felt for the potential of networked learning with the rise of Web 2.0 and Learning 2.0 still justified? Share your thoughts in the comments.
Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The welfare effects of social media. American Economic Review, 110(3), 629–676.
Baker, N., Ferszt, G., & Breines, J. G. (2019). A qualitative study exploring female college students’ Instagram use and body image. Cyberpsychology, Behavior, and Social Networking, 22(4), 277–282.
Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9).
Jiang, S., & Ngien, A. (2020). The effects of Instagram use, social comparison, and self-esteem on social anxiety: A survey study in Singapore. Social Media + Society.
Lanier, J. (2021). Was the Internet a Horrible Mistake? Honestly with Bari Weiss. Podcast. https://www.honestlypod.com/podcast/episode/2a738715/was-the-internet-a-horrible-mistake (accessed Dec 15, 2021).
Paterson, N. (2012). Walled gardens: the new shape of the public internet. In Proceedings of the 2012 iConference (iConference ’12). Association for Computing Machinery, New York, NY, USA, 97–104.
Prasad, V. (2021). Down With the Social Media Platforms. MedPage Today. Retrieved September 21, 2021 from https://www.medpagetoday.com/opinion/vinay-prasad/93976 (accessed Dec 15, 2021).
Sen, R., Ahmad, S., Phokeer, A., Farooq, Z., Qazi, I., Choffnes, D., & Gummadi, K. (2017). Inside the walled garden: Deconstructing Facebook’s Free Basics program. ACM SIGCOMM Computer Communication Review, 47(5), 12–24.
Shah, N. (2020). Feel Free to Unfriend. The Indian Express. Retrieved July 15, 2021 from https://indianexpress.com/article/express-sunday-eye/feel-free-to-unfriend-social-media-6546128/ (accessed Dec 15, 2021).
Sharma, M., Yadav, K., Yadav, N., & Ferdinand, K. C. (2017). Zika virus pandemic—analysis of Facebook as a social media health information platform. American Journal of Infection Control, 45(3), 301–302.