Moshe Vardi, former Editor-in-Chief of Communications of the ACM, has just commented in ACM's flagship magazine that "Under the mantra of 'Information wants to be free' several tech companies have turned themselves into advertising companies," and as a consequence "AI technology is used by large and powerful companies to support a business model that is, arguably, unethical." [8]

In New Zealand, at the time of writing, we are seeing our Parliament grounds occupied by a disparate group of protesters, their feelings inflamed by global conspiracy theories and a range of misinformation magnified by social media. A notable example of the beliefs being touted is yesterday's NZ Herald headline: "Protesters turn to tinfoil hats as increasing sickness blamed on Government beaming radiation rays." [4] While one cannot help feeling sorry for those captured by such outlandish views, it raises disturbing questions about the large tech companies facilitating a virus-like global spread of misinformation in a pandemic.

This column discusses the challenges facing computing professionals charged with writing what I term here "algorithms of anger": algorithms that deliberately send the vulnerable "down the rabbit hole." So, how is the technology designed, and how do we prepare our students to critique the design of such systems and algorithms and to look for mitigations of harm in systems that can all too easily support the spread of misinformation and its weaponization?

Vardi elaborates on AI technology as the fundamental technology underlying "Surveillance Capitalism" [9,10], defined as "an economic system centered on the commodification of personal data with the core purpose of profit-making." [8] So, we have Surveillance Capitalism as a business model with technology supporting it. But must it be so?

As Tim Berners-Lee reflected when reviewing the film "The Social Dilemma," there are ways in which social media may harm ordinary families and impressionable teenagers, but there are alternative visions for society and business. The film makes "a good case about how one particular wave of social networks can work if you train the AI to maximise the engagement of the teenager." [5] But a very different outcome would result from training the "AI to maximise the happiness of the teenager, or the efficiency of the teenager … You could imagine two social networks where most of the code is mostly the same—it's just that one is optimised for one thing, and the other is optimised for another. And the unintended consequences in each case are completely different." [5]
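Berners-Lee's observation can be made concrete in a few lines of code. The sketch below is entirely hypothetical (the post data, metric names, and scores are invented for illustration), but it shows how identical ranking code produces opposite feeds depending solely on the objective it is asked to maximize.

```python
from typing import Callable

# Hypothetical candidate posts, each carrying invented predicted metrics.
posts = [
    {"id": "outrage-clip",  "predicted_engagement": 0.9, "predicted_wellbeing": 0.2},
    {"id": "study-guide",   "predicted_engagement": 0.4, "predicted_wellbeing": 0.8},
    {"id": "friend-update", "predicted_engagement": 0.6, "predicted_wellbeing": 0.7},
]

def rank_feed(posts: list, objective: Callable) -> list:
    """Identical ranking code for every network; only the objective differs."""
    return [p["id"] for p in sorted(posts, key=objective, reverse=True)]

# "Most of the code is mostly the same" -- only the optimization target changes.
engagement_feed = rank_feed(posts, lambda p: p["predicted_engagement"])
wellbeing_feed = rank_feed(posts, lambda p: p["predicted_wellbeing"])
```

With these invented scores, the engagement-optimized feed leads with the outrage clip, while the wellbeing-optimized feed leads with the study guide: same code, completely different consequences.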

So, let's try to unpack the process of mobilising engagement over social networks.

First, let's look at the type of business involved in the so-called 'attention economy' and how algorithmic processes are implicated in its design: "typically an ad-based business—where the user of the product or service is not directly the source of the revenue. Instead, the user's attention is the product, and this product in turn is sold to advertisers or other buyers." [1]

The adaptive algorithms of social media businesses engage in continuous refinement to addict their users at a personal level and adjust the content feed so that "each user will remain engaged with the platform for ever longer periods of time … [monitoring data, responses, and time spent to see which content is attractive], the algorithms … use that data to continuously adjust the content so that the particular user remains engaged with the platform." [1] The more data accumulated, the more precisely the algorithms can predict what is engaging for a particular user, and continuously feed tailored content to sustain that addiction. As Bhargava and Velasquez conclude: "What is new … is the level of granularity with which the adaptive algorithms are able to tailor their platforms to specific individuals and to do so continuously, automatically, and in real time." [1]
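The feedback loop described above can be sketched as a toy simulation. Everything here is an assumption for illustration (the topic names, the linear weight update, the simulated user); real recommender systems are vastly more sophisticated, but the self-reinforcing dynamic is the same: content that holds attention gets served more often, which generates more data showing it holds attention.

```python
import random

class AdaptiveFeed:
    """Toy sketch of an engagement-maximizing content feed."""

    def __init__(self, topics):
        self.weights = {t: 1.0 for t in topics}  # start with no preference

    def next_item(self, rng):
        # Serve topics in proportion to their learned engagement weight.
        topics = list(self.weights)
        return rng.choices(topics, weights=[self.weights[t] for t in topics])[0]

    def observe(self, topic, seconds_engaged):
        # The longer the user lingers, the more of this topic they will see.
        self.weights[topic] += seconds_engaged / 10.0

feed = AdaptiveFeed(["news", "conspiracy", "sports"])
rng = random.Random(0)  # seeded for reproducibility
for _ in range(50):
    topic = feed.next_item(rng)
    # Hypothetical user who lingers on conspiracy content.
    feed.observe(topic, seconds_engaged=30 if topic == "conspiracy" else 2)
```

After a few dozen iterations the feed has taught itself to serve mostly conspiracy content: no one designed it to radicalize, it simply optimized for time spent.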

Yet engagement is a complex concept, and various authors have interpreted it in different ways. In one definition, usage and engagement are separate and distinct terms: "Social media usage refers to the multiplicity of activities individuals may participate in online while social media engagement refers to the state of cognitive and emotional absorption in the use of social media tools." [7] Smith and Gallicano found that social media usage progressed through a continuum of absorption and immersion, in a personal and reflexive process assessing the fulfillment of each user's needs for information consumption, sense of presence, interest immersion, and social interaction [7].

How this continuum works in practice can be seen through a model of "Social Media Engagement Behavior" [3], with a progression of behaviors from passive consumption, through more active contributing, to highly active creating actions. Examples of activities in the study by Dolan and colleagues include a) Consuming—total number of clicks, clicks to play video, clicks to read more, link clicks, other clicks, photo views; b) Contributing—"liking" and "sharing" brand-related content to a personal social media profile; and c) Creating—commenting positively on posts, blogs, videos, and pictures. Their model also incorporated a breakdown of social media content categories, divided into rational and emotional message appeal. In the former category they include informational content and remunerative content, and in the latter entertaining content and relational content.
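As a rough illustration of this continuum, one might score a user's actions by their level of activity. The behavior categories below follow the examples from Dolan and colleagues, but the numeric weights and the scoring scheme are my own assumptions for illustration, not part of the model in [3].

```python
# Mapping of actions to Dolan et al.'s three behavior levels [3].
BEHAVIOR_LEVELS = {
    # Consuming: passive attention.
    "click": "consuming", "video_play": "consuming", "photo_view": "consuming",
    # Contributing: reacting to existing content.
    "like": "contributing", "share": "contributing",
    # Creating: actively producing content.
    "comment": "creating", "post": "creating",
}

# Hypothetical weights reflecting increasing levels of activity (assumed, not from [3]).
LEVEL_WEIGHTS = {"consuming": 1, "contributing": 2, "creating": 3}

def engagement_score(actions: list) -> int:
    """Sum the assumed weight of each action's behavior level."""
    return sum(LEVEL_WEIGHTS[BEHAVIOR_LEVELS[a]] for a in actions)

lurker = engagement_score(["click", "photo_view", "video_play"])  # passive session
creator = engagement_score(["post", "comment", "share"])          # active session
```

A platform optimizing such a score has an obvious incentive to nudge users up the continuum, from lurking toward posting, since active behaviors generate both more attention and more data.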

So, the picture that begins to emerge is one in which our every move online is closely monitored: a police state in service of advertising, designed to capture our attention and continuously present tasty morsels for us to consume. More scarily still, beyond the mere continuous capture of our every behavior at a micro-level, the whole intent of the business model is to direct, modify, and control our behavior, as highlighted by Zuboff [9,10]. And taking the implications beyond the individual to the group level, other issues such as polarization arise.

Recent concerns about the role of social media have been reflected in studies about engagement and polarization [2,6]. For instance, studies have reported: "the emergence of polarized communities, i.e., echo chambers, in online social networks. Inside these communities, homogeneity appears to be the primary driver for the diffusion of contents." [2] 'Confirmation bias' and 'social influence', working together, have been identified as key drivers of both polarization and homogeneity. The two terms are defined here to highlight their importance:

"Confirmation bias is the tendency to acquire or process new information in a way that confirms one's preconceptions and avoids contradiction with prior belief.

Social influence is the process under which one's emotions, opinions, or behaviors are affected by others. Specifically, informational influence occurs when individuals accept information from others as evidence about reality." [2]
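These two mechanisms can be illustrated with a simplified bounded-confidence simulation in the spirit of the models studied in [2] (all parameters here are illustrative assumptions): agents only interact with others whose opinions are already close to their own (confirmation bias), and when they do interact they move toward each other (social influence). The population tends to fragment into internally homogeneous clusters rather than reaching consensus.

```python
import random

def simulate(n_agents=100, steps=20000, epsilon=0.2, mu=0.5, seed=1):
    """Toy bounded-confidence opinion dynamics; opinions lie in [0, 1]."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        # Confirmation bias: interaction only if opinions are close enough.
        if i != j and abs(opinions[i] - opinions[j]) < epsilon:
            # Social influence: both agents move toward each other.
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

final = simulate()
# The population typically ends up in a few tight, internally homogeneous
# opinion clusters (echo chambers) separated by gaps wider than epsilon.
```

The striking property is that no agent is malicious and no algorithm curates anything; merely filtering out distant views is enough to produce polarized, homogeneous communities.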

Similar drivers of behavior within groups have been noted by Rathje and colleagues [6], who observed that "People may process information in a manner that is consistent with their partisan identities, prior beliefs, and motivations, a process known as motivated cognition… [which] raises a perceptual screen through which the individual tends to see what is favorable to his [or her] partisan orientation." In their study of posts to Facebook and Twitter [6], they "examined how group language predicted each of the six 'reactions' (like, love, haha, sad, wow, and angry) available at Facebook. We assumed that 'angry' reaction was a proxy for feelings of out-group animosity, outrage, and anger." The study parsed the corpus of data and categorized emotions such as "…3) negative emotion, 4) positive emotion, 5) moral-emotional language."
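As a small illustration of the proxy idea in [6], one could summarize a post's reaction counts and compute the share of 'angry' reactions; the post data below is invented for the example, and the 0.5 threshold is an arbitrary assumption, not a figure from the study.

```python
# The six Facebook reactions examined by Rathje et al. [6].
REACTIONS = ("like", "love", "haha", "sad", "wow", "angry")

def angry_share(counts: dict) -> float:
    """Fraction of a post's reactions that are 'angry' (proxy for out-group animosity)."""
    total = sum(counts.get(r, 0) for r in REACTIONS)
    return counts.get("angry", 0) / total if total else 0.0

# Invented reaction tallies for two hypothetical posts.
in_group_post = {"like": 120, "love": 40, "angry": 5}    # praising one's own group
out_group_post = {"like": 30, "haha": 10, "angry": 80}   # attacking the out-group

# Flag posts whose reactions skew toward anger (threshold is an assumption).
flagged = [p for p in (in_group_post, out_group_post) if angry_share(p) > 0.5]
```

Of course, the actual study applied far richer language categorization; the point of the sketch is only that once every reaction is logged, such emotional profiling of posts and users becomes a few lines of code.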

Space precludes deeper exploration of these studies, but we do see how our personal contributions to social media sites are open to algorithmic categorization, interpretation, and response that take advantage of our cognitive biases. Continual algorithmic reinforcement of preferences and emotions, through material aligned with existing beliefs and with personal and group identities, can exacerbate and inflame both positive and negative feelings, whether the content is benign or extreme: to the AI engines the distinction is immaterial, and in surveillance capitalism's business models it is opaque. And it is in this way that the vulnerable are led down rabbit holes!

As Bhargava and Velasquez view it, "By inflicting their users with addiction, social media businesses engage in a form of morally objectionable exploitation." [1] And although the design of social media platforms may "turn on the question of engineering and its ethics," they note that "many of the decisions are made by the company's managers and are prompted by the incentive structure of the company. Scholars should not view engineering ethics questions as divorced from business ethics and vice versa." [1]

Nor, in the context of this column, can computing educators shy away from these questions, when educating our students to consider the impact of their design decisions—whether personally conceived or managerially driven from above. These students may potentially be those charged with creating tomorrow's algorithms of anger!


1. Bhargava, V. R. and Velasquez, M. Ethics of the attention economy: The problem of social media addiction. Business Ethics Quarterly, 31, 3 (2021), 321–359.

2. Del Vicario, M., Scala, A., Caldarelli, G., Stanley, H. E., and Quattrociocchi, W. Modeling confirmation bias and polarization. Scientific Reports, 7, 1 (2017), 1–9.

3. Dolan, R., Conduit, J., Frethey-Bentham, C., Fahy, J., and Goodman, S. Social media engagement behavior: A framework for engaging customers through social media content. European Journal of Marketing, 53, 10 (2019), 2213–2243.

4. Fisher, D. Protesters turn to tinfoil hats as increasing sickness blamed on Government beaming radiation rays. New Zealand Herald, February 26, 2022. Accessed April 18, 2022.

5. Harris, J. Tim Berners-Lee: We need social networks where bad things happen less. The Guardian, March 15, 2021. Accessed February 27, 2022.

6. Rathje, S., Van Bavel, J. J., and van der Linden, S. Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences, 118, 26 (2021), 1–9.

7. Smith, B. G. and Gallicano, T. D. Terms of engagement: Analyzing public engagement with organizations through social media. Computers in Human Behavior, 53 (2015), 82–90.

8. Vardi, M. Y. ACM, ethics, and corporate behavior. Commun. ACM, 65, 3 (2022), 5.

9. Zuboff, S. Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 1 (2015), 75–89.

10. Zuboff, S. The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books, 2019.


Tony Clear
School of Computing and Mathematical Sciences
Auckland University of Technology
Private Bag 92006
Auckland, 1142 New Zealand
[email protected]


Figure 1. Down the Rabbit Hole!

Copyright held by author/owner
