Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media

Amber Sinha

In the past few years, social networking sites have come to play a central role in intermediating the public’s access to and deliberation of information critical to a thriving democracy. In stark contrast to early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. There is a dire need for regulatory strategies that look beyond the ‘dumb conduit’ metaphors that justify safe harbor protection for social networking sites. At the same time, it is important to critically analyze the outcomes of regulatory steps so that they do not adversely impact free speech and privacy. By surveying the potential analogies of company towns, common carriers, and editorial functions, this essay provides a blueprint for how we may envision differentiated intermediary liability rules to govern social networking sites in a responsive manner.

Illustration by Jahnavi Koganti

Introduction

Only months after Donald Trump’s 2016 election victory – a feat mired in controversy over alleged Russian interference using social media, specifically Facebook – Mark Zuckerberg remarked that his company had grown to serve a role more akin to a government than a corporation. Zuckerberg argued that Facebook was responsible for creating the guidelines and rules that governed the exchange of ideas among over two billion people online. Another way to look at the same argument is to acknowledge that, today, a quarter of the world’s population (and of India’s) is subject to the laws of Facebook’s terms and conditions and privacy policies, and public discourse around the globe is shaped within the constraints and conditions they create. Social media platforms like Facebook wield hitherto unimaginable power to catalyze public opinion, causing a particular narrative to gather steam – that Big Tech can pose an existential threat to democracy.

This, of course, is in absolute contrast to the early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media. Instead, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. The regulation of social networking sites has emerged as one of the most important and complex policy problems of this time. In this essay, I will explore the inefficacy of the existing regulatory framework, and provide a blueprint for how to think of appropriate regulatory metaphors to revisit it.

1. The role of new media in democratic discourse

Three components are generally considered essential for a thriving democracy: free and fair elections, working forms of deliberation, and the ability of people to organize themselves for the purposes of protest. The basic idea behind deliberative democracies is that effective public political participation means more than just majoritarian decision-making. It involves the exchange of reasons and arguments – elected representatives should be able to provide the reasons behind their decisions, and respond to the questions that citizens ask in return. This process of debate, discussion, and persuasion, in addition to the aggregation of votes, is crucial for the legitimacy of policy outcomes.

The advent of the internet and social media has meant that millions of people are interacting with each other and debating issues. At the time of writing this essay, there are over 3.01 billion people online – roughly 40 percent of the world’s population. Since the early 2000s, a general optimism around new media, coupled with a mounting loss of faith in mainstream media, led many to believe that social networking sites would limit the ability of editors – compromised by economic and political compulsions – to play the role of gatekeepers of news. It was hoped that public accountability would emerge from the networked nature of the new media. Several examples of citizen journalism enabled by social media were hailed as harbingers of a new era of news.

This vision of social media as a democratizing actor was based on the ideal that it would be open, egalitarian, and enable genuine public-driven engagement. Google News, Facebook’s News Feed – which puts together a dynamic stream of both personal and global stories – and Twitter’s trending hashtags looked poised to be the key drivers of an emerging news ecosystem. Initially, this new media was hailed as a natural consequence of the internet, one that would enable greater public participation, allow journalists to find more stories, and let them engage with readers directly.

Over time, it became evident that far from being open or egalitarian, social media platforms introduce their own specific techno-commercial curation of how information is accessed. This can often amplify, and not lessen, the issues that plague mainstream media. For a democratic society to thrive, individuals need to be active participants in discourse and not passive recipients of information. Social media platforms view users primarily as consumers, not citizens. Their single-minded drive to appeal to our basest and narrowest set of stimuli may make good business sense, but does no favors to the cause of democracy. As citizens, we need to be exposed to more than the most agreeable or extreme form of our still evolving opinions. The signals we give to algorithms through likes and clicks are often only a fleeting or tentative take on an issue. A democratic society needs media and platforms that allow us to explore different perspectives and arguments before we make up our minds. Instead, these algorithms seize on our half-baked opinions and hasten their crystallization. It is bad enough that our online selves drive this propaganda, but lately, politically aligned actors are making creative use of such platforms to inundate us with misinformation, hate speech, and polarizing content.

2. The ‘public spheres’ of online platforms

Internet platforms have tremendous power to shape and moderate content that they facilitate. Although run by private corporations, these platforms have become public squares for discourse without any public accountability. This has blurred the lines between the public and the private. In the United States, the Supreme Court ruled that streets and parks, regardless of who owns them, must be kept open to the public for expressive activity. In the landmark 1939 case Hague v. Committee for Industrial Organization, the court said clearly:

“Wherever the title of streets and parks may rest, they have immemorially been held in trust for the use of the public and time out of mind, have been used for the purposes of assembly, communicating thought between citizens, and discussing public questions. Such use of the streets and public places has, from ancient times, been a part of the privileges, immunities, rights, and liberties of citizens.”

Despite the relative obscurity of the public forum doctrine, there are few constitutional rights with more everyday relevance than the right to speak freely in public or address crowds on the sidewalk. The peculiarity of viewing even privately-owned spaces as ‘public forums’ lies in moving beyond restrictions that bind only the state when it penalizes private expression on public property. This means that free speech must be allowed to occur freely in public places, giving citizens the rights to assemble, protest, and engage in free conversation. While not all common law countries have an equivalent of the public forum doctrine, most have clearly articulated rights to assembly with similar objectives. Thus far, courts have been hesitant to accord social media platforms the status of public forums. The primary reason is that these remain privately-owned platforms with their own community guidelines. While often informed by laws on issues such as copyright infringement, hate speech, and misinformation, the enforcement of community guidelines is not a judicially-determined decision.

This became a thorny issue when US President Donald Trump, using his personal Twitter handle, blocked the accounts of several people, seven of whom filed a suit against this act. This private handle (@realDonaldTrump), with over 53 million followers, is used by the president on a daily basis to pronounce policy decisions and opinions. In fact, the former White House Press Secretary Sean Spicer clearly stated that tweets from this handle could be considered official statements made by the president.

The US District Court for the Southern District of New York refused to see Twitter as a traditional public forum. But it held that the interactive space accompanying each tweet – the space in which people share, comment on, and otherwise engage with the tweet – may be considered a designated public forum. However, even here the key concern was not whether Twitter was a public forum, but that a citizen’s right to access government information was being restricted. The court’s reasoning was that the nature of the platform is irrelevant; it is the nature of the speech, and the fact that it is government speech, that is relevant. Even though the account in question is a private one and Trump operates it as any other private user would, when it is used to perform roles that relate to public functions, it transforms from a private account into a designated public forum.

Moreover, for those of us who consume and engage with information through platforms like Facebook and Twitter, the web, over time, gets reduced to a personalized, and therefore narrower, version of itself. Our Facebook timelines are occupied more and more by people and posts with shared and similar interests, proclivities, and ideological leanings. Attempts to break out of this restricted worldview by following people and organizations whose voices one perceives as dissimilar to one’s own are often unsuccessful. In these circumstances, it feels as though platforms like Facebook deliberately resist attempts by people to burst the personalized bubbles created for them. It is ironic, then, that in a hearing before the Senate Select Committee on Intelligence in 2018, Jack Dorsey, the founder and chief executive officer of Twitter, repeatedly referred to Twitter as a “digital public square”, which required “free and open exchange”.

Clearly, there are parts of social media which are designated spaces, where government officials, ministries, departments, and elected representatives create pages, accounts, and handles to communicate with the public. These parts of the platform are designated public forums, and the same standards apply to them. But that is not the case for content created by ordinary citizens on social networking platforms.

In several countries, including the US and India, courts have applied the well-known ‘public-function’ test, under which the duties of the state will apply if a private entity exercises powers traditionally reserved exclusively for the state. This means that if an entity performs a function of a sovereign character, or one that significantly impacts public life, it must be considered the state for that purpose. The need for such a test arises from the tremendous amount of power exercised by social networking sites in contemporary times.

3. Legal metaphors for social media

Over the past three decades, we have seen legal jurisprudence evolve to understand and address the legal questions posed by the internet and cyberspace. Many of these issues remain unresolved in our legal imagination, but we have formulated structured and clear principles about how one may approach them. Jurisprudence on cyberlaw is built largely around finding the appropriate metaphor. More often than not, the law and jurists seek assistance from existing regulations governing the offline activities that can be most closely likened to the digital activity in question. The regulation of internet intermediaries has been built around the overworked metaphor of ‘dumb conduits’. Below, we explore the different analogies that could inform how we regulate intermediaries in general, and social networking sites in particular.

Kate Klonick argues that there are three possible ways to look at the major social media companies. The first is to view them as ‘company towns’ and ascribe to them the character of the state, bound to respect free speech obligations as the state would. The second is to view them as common carriers or broadcast media, not equivalent to a public body but still subject to a higher standard of regulation so as to safeguard public access to its services. The third analogy would treat social media sites like news editors, who generally receive full protections of the free speech doctrine when making editorial decisions.

Jonathan Peters is a proponent of the first analogy. Peters relies on the landmark US Supreme Court case Marsh v. Alabama, which states that “the more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.” While this view of Marsh has been roundly rejected in later cases, Benjamin Jackson provides a more rounded argument for invoking the ‘public-functions’ test. He argues that “managing public squares and meeting places” has traditionally fallen within the domain of the state, and now that social networking sites perform this role, they perform a public function. This approach has received some judicial blessing in the US, most notably in Packingham v. North Carolina, where the court equated social networking sites such as Facebook, Twitter, and LinkedIn with the ‘modern public square’. This formulation, while effective in dealing with the denial of access to information on these platforms, poses other problems. As both Klonick and Daphne Keller suggest, it may be disastrous in dealing with the already exacerbated problems of misinformation and hate speech online.

The second analogy likens social networking sites to common carriers such as broadcast media. According to Black’s Law Dictionary, a common carrier is an entity that “holds itself out to the public as offering to transport freight or passengers for a fee”. This common law doctrine has been central to the regulation of modern communication carriers such as radio and television broadcasters. These broadcasters are not considered analogous to the state, in that they retain their private identities and the rights that go along with them. However, they are subject to a higher degree of regulation, most importantly ‘equal access’ obligations. These obligations are based on one of three rationales. In the case of radio, the need for regulation arose from the “scarcity” of radio frequencies, prompting governments to intervene through a licensing and allocation system. Cable television does not suffer from the same scarcity limitations as radio; here the rationale for regulation is the cable operator’s bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home. The third criterion is that of invasiveness. Back in 1997, a US court categorically denied that the unique factors that justified greater regulation of cable and broadcast were present in the case of the internet. Its decision was based on the reasoning that the internet was not as “invasive” as radio or television, since, unlike those media, it required affirmative action to access a specific piece of information.

A decade later, in 2008, Bracha and Pasquale critiqued this position, arguing that the internet has emerged as a space where “small, independent speakers [are] relegated to an increasingly marginal position while a handful of commercial giants capture the overwhelming majority of users’ attention and re-emerge as the essential gateways for effective speech”. Effective application of the common carrier analogy requires looking at two key questions. First, in what ways are internet intermediaries, and in particular social networking sites, comparable to common carriers like cable operators and broadcasters? Second, do these intermediaries satisfy the “scarcity” test, the “bottleneck monopoly power” test, or the “invasiveness” test? The nature of regulation that they must be subject to could depend on the role they are performing and which of the above tests it satisfies.

The final analogy is that of ‘editors’, where social networking sites exercise content moderation powers in line with the protected speech rights of a newspaper editor. Volokh and Falk have argued that search engine results are protected speech because they are the result of editorial judgments. It has been debated whether search engines, by virtue of dealing with facts as opposed to opinions, fall outside the scope of free speech. This position may not be tenable in several common law jurisdictions, as facts are where much speech begins, and search results also represent a subjective opinion about facts. The same considerations may apply to the ‘editorial’ decisions of social networking sites. This characterization would also have an impact on the safe harbor protection that internet intermediaries enjoy in several jurisdictions, under which they are exempt from liability for user-generated content. The basis for safe harbor is the idea that intermediaries are dumb conduits for the distribution of the speech of their users, rather than speakers themselves. However, this argument is no longer tenable. Most, if not all, intermediaries affirmatively shape the form and substance of user content in some manner, using sophisticated prioritization algorithms.

First, let us consider the more superficial design features of intermediaries. When Twitter, for instance, claims safe harbor, it positions itself primarily as a distributor of users’ tweets. However, its user interface is deterministic and affects the nature and content of tweets. The 140-character limitation (now 280) has led to the evolution of Twitter’s own syntax and vocabulary. Replies, likes, retweets, and hashtags are among the design features that shape how content is created on such a platform. But while these do impact the generation of content, they are perhaps not a sufficient argument against safe harbor. They do not render Twitter much more than a thoroughfare for ideas, albeit one with distinct rules on what form those ideas may take.

The more insidious design features are also more obscure or opaque in nature, and worth looking at more closely. Many intermediaries employ design features to hold our attention by making their interfaces more addictive. Facebook employs techniques to ensure that each user sees stories and updates in their news feeds that they may not have seen on the previous visit to the site. It analyzes, sorts, and reuses user data to make meaning out of their “reactions”, search terms, and browsing activity in order to curate the content of each user’s individual feed, personalized advertisements, and recommendations. All of this is done under the garb of improving user experience. Given the deluge of information that exists online, it is indeed desirable that platforms personalize our experience in some manner. But the constant tinkering with user data and personalization goes far beyond what is strictly necessary.

Essentially, the discovery of information is transformed from an individual endeavor to a social and algorithmic one. On a platform like Facebook, a large portion of users are exposed to news shared by their friends. Such selective exposure to the opinions of like-minded people existed in the pre-digital era as well. However, the ease with which we can find, follow, and focus on such people and exclude others in the online world enhances this tendency. A study by Bakshy and others shows that on Facebook, three filters – the social network, the feed population algorithm, and the user’s own content selection – combine to decrease exposure to ideologically challenging news from a random baseline by more than 25 percent for conservative users, and close to 50 percent for liberal users in the US. There is little empirical work on the subject in India, but it is clear that Indian users too have limited exposure to diverse views on a platform like Facebook. However, these statistics are of limited value. The reduction of 25 to 50 percent assumes that the baseline is a completely bias-free exposure, which is a fiction. In fact, there is now evidence to suggest that those who consume only mainstream media are more likely to be stuck in ideological bubbles. The combination of filters on Facebook still allows for exposure to some ideologically challenging news.
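
To make the compounding effect concrete, the sketch below shows how successive, independent filter stages multiply: if each stage passes only a fraction of ideologically cross-cutting stories, what survives is the product of those fractions. The pass-rates used here are hypothetical placeholders for illustration, not figures from the Bakshy study.

```python
# A minimal, illustrative sketch (not the Bakshy methodology): if each filter
# stage independently lets through only a fraction of ideologically
# cross-cutting stories, the exposure that survives all stages is the product
# of those fractions. The pass-rates below are hypothetical placeholders.

def surviving_exposure(pass_rates):
    """Fraction of cross-cutting content left after all filter stages."""
    exposure = 1.0
    for rate in pass_rates:
        exposure *= rate
    return exposure

stages = {
    "social network (whom the user befriends)": 0.80,
    "feed population algorithm": 0.90,
    "user's own content selection": 0.80,
}

remaining = surviving_exposure(stages.values())
print(f"Exposure relative to a random baseline: {remaining:.0%}")  # ~58%
print(f"Reduction from that baseline: {1 - remaining:.0%}")        # ~42%
```

Even modest reductions at each stage compound into a substantial narrowing of what a user ultimately sees, which is why the cumulative figures reported in such studies can look large even when no single filter appears dramatic on its own.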

4. Revisiting the structure of intermediary liability regulation

In any case, there is a clear need for differentiating between infrastructure information intermediaries (such as ISPs) and content information intermediaries that facilitate communication (such as social media networks). It might be possible to create standards for infrastructure information intermediaries that do not primarily focus on the content being transmitted. For example, a set of content-neutral standards (like common carrier regulations) could apply to infrastructure intermediaries, while separate standards that are not content-neutral would apply to content intermediaries. Given their near-total control over our experience online, intermediaries do owe us a duty of care.

The other criterion for differentiating platforms could be size. The draft Information Technology (Intermediary Guidelines) Rules, 2018 in India seek to make this classification on the basis of the number of users. If resources and capacity are the guiding principle behind the classification, this criterion becomes problematic, as a large user base can be reached by small businesses with low turnover as well. Another potential criterion for classification is monetary size, which may be more reflective of the capacity of the intermediary to exercise due diligence.

The approach of imposing statutory liability on web platforms for harmful speech is widely criticized for violating the constitutionally protected right to free speech and expression. Because private platforms operate with the fear of being penalized if they fail to regulate harmful speech, they are likely to err on the side of caution and remove content even when it is unnecessary. This can have a chilling effect on free speech on the internet. The threat to free speech is exacerbated by the difficulty of enforcing such regulatory policies. Regulations expect platforms to take down content within a prescribed period from the time they have ‘knowledge’ of the objectionable content. For platforms with millions of users, all of whom have the ability to post and report content, being saddled with short time periods (often just 24 hours) to take down content poses a very heavy burden. The natural response then is to remove content without diligently evaluating its illegality.

The second approach is a more involved form of co-regulation. For example, the German law that seeks to curb hate speech online, the Network Enforcement Act, envisions the recognition of independent institutions as self-regulation bodies within the purview of the Act. Where certain content is reported by users as illegal but is not manifestly unlawful, the service provider is permitted up to seven days to remove it; here, the provider may refer the decision on unlawfulness to such a self-regulation institution. The idea of having trusted institutions, such as press councils, play a more active role is a good one. However, the German framework significantly compromises the independence of these institutions by giving the Federal Office of Justice the power to ‘recognize’ them. Ideally, this power should rest with a body fully independent of the state, with representation from stakeholders within the industry and civil society.
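
To make the process concrete, the sketch below schematizes the takedown flow described above, including the widely reported 24-hour window for manifestly unlawful content; it is an illustration of this essay’s description, not a restatement of the statute’s precise terms.

```python
# A schematic sketch of the takedown flow described above; categories and
# timelines are simplified and do not reproduce the statute's exact
# conditions or exceptions.

def provider_obligation(manifestly_unlawful: bool, referred_to_institution: bool) -> str:
    """Return the service provider's obligation for a user-reported item."""
    if manifestly_unlawful:
        # Clear-cut illegal content must be removed or blocked quickly,
        # generally within 24 hours of the complaint.
        return "remove or block within 24 hours"
    if referred_to_institution:
        # Borderline cases may be referred to a recognized self-regulation
        # institution, whose decision the provider then abides by.
        return "refer to the recognized self-regulation institution and follow its decision"
    # Other unlawful content: up to seven days to assess and remove or block.
    return "remove or block within 7 days"


print(provider_obligation(manifestly_unlawful=False, referred_to_institution=True))
```

The design question the essay raises sits in the middle branch: who recognizes the institution that decides the borderline cases, and how independent that body is of the state.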

While both of the above approaches have their pros and cons, what is clear is that the oft-used metaphor of dumb conduits for internet intermediaries is no longer applicable to social networking sites. There is a dire need to identify other regulatory parallels which better explain the role of these intermediaries. Given the complex range of roles performed by a company like Facebook, it is also worth considering whether these disparate roles ought to be regulated differently. The regulatory exercise for internet intermediaries is complex, as none of the analog metaphors capture their functions fully or accurately enough to present a viable regulatory model. This calls for the formulation of meta-regulatory models which have a sufficient degree of flexibility built into them.

Instead of laying down precise and specific rules and means of enforcement, the regulator could use a combination of inducements and sanctions to incentivize outcomes based on clearly-defined public interest objectives. This can include differentiated approaches to both rule-making and the adjudication of complaints. One way to do this is to allow industry bodies and companies to draft their own codes of conduct, which must meet specified objectives and subsequently be ratified by the regulator. Robust thresholds for notice and comment, and for public consultation, can be set that the associations drafting the codes of conduct must meet.

Coglianese and Mendelson define meta-regulation “as ways that outside regulators deliberately – rather than unintentionally – seek to induce targets to develop their own internal, self-regulatory responses to public problems”. Broadly, most regulators must choose between two regulatory philosophies: the deterrence model and the compliance model. The deterrence model is an adversarial style of regulation built around sanctions for rule-breaking behavior. It relies on an economic theory of regulation in which those regulated are rational actors who respond to incentives and disincentives. The compliance model, on the other hand, emphasizes cooperation rather than confrontation, and conciliation rather than coercion. It seeks to prevent harm rather than punish an evil. Its conception of enforcement centers on the attainment of the broad aims of legislation, rather than sanctioning its breach. The complexities of online content regulation make a clear case for a mix of both models.

Further, intermediary liability regulators would need to invoke enforcement strategies that deter egregious offenders while rewarding those who proactively take steps towards favorable outcomes. Good regulation here requires adopting different responsive strategies that take into account the behavior of the regulated actors. This can be done effectively only if there is enforcement escalation, with the threat of a credible tipping point sufficiently powerful to deter even the worst offenders. The regulator must be able to perform the functions of an educator, an ombudsman, a judicial body, and an enforcer. On one end of the spectrum, the regulator should be able to perform support functions such as educating platforms through informal guidance, standards setting, advisory services, and training. On the other, it should have a variety of sanction powers at its disposal, ranging from soft measures such as notices and warnings, naming and shaming, and mandatory audits, to powers to investigate and impose fines and compensatory orders.

List of References

1. Graber, Mark A., Sanford Levinson, and Mark V. Tushnet, Constitutional Democracy in Crisis? New York: Oxford University Press, 2018.
2. Sinha, Amber, The Networked Public. New Delhi: Rupa Publications, 2019.
3. Udupa, Sahana, ‘India Needs a Fresh Strategy to Tackle Online Extreme Speech’, Economic and Political Weekly, 11 February 2019, accessed 26 July 2019, https://www.epw.in/engage/article/election-2019-india-needs-fresh-strategy-to-tackle-new-digital-tools.
4. Achen, Christopher H. and Larry M. Bartels, Democracy for Realists: Why Elections Do Not Produce Responsive Government. Princeton: Princeton University Press, 2016.
5. Pariser, Eli, The Filter Bubble. Penguin Books, Reprint Edition, 2012.
6. Giglietto, Fabio, Laura Iannelli, Luca Rossi, and Augusto Valeriani, ‘Fakes, News and the Election: A New Taxonomy for the Study of Misleading Information Within the Hybrid Media System’, SSRN, 17 April 2019, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2878774.
7. Wardle, Claire and Hossein Derakhshan, ‘Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making’, Council of Europe Report DGI(2017)09, Council of Europe, 2017.
8. boyd, danah, ‘Streams of Content, Limited Attention: The Flow of Information through Social Media’, Web2.0 Expo, 17 November 2009, http://www.danah.org/papers/talks/Web2Expo.html.
9. Bakshy, Eytan, Solomon Messing, and Lada Adamic, ‘Exposure to Ideologically Diverse News and Opinion on Facebook’, Facebook Research, 9 May 2015, https://research.fb.com/publications/exposure-to-ideologically-diverse-information-on-facebook/.
10. Srinivasan, Dina, ‘The Antitrust Case Against Facebook’, Berkeley Business Law Journal, Vol. 16, Issue 1, 10 September 2018, https://ssrn.com/abstract=3247362.
11. Gerlitz, Carolin and Anne Helmond, ‘The Like Economy: Social Buttons and the Data-Intensive Web’, New Media & Society 15, no. 8 (April 2013): 1348–65, https://doi.org/10.1177/1461444812472322.
12. McNamee, Roger, Zucked: Waking Up to the Facebook Catastrophe. New York: Penguin Press, 2019.
13. Kosinski, Michal, David Stillwell, and Thore Graepel, ‘Private Traits and Attributes Are Predictable from Digital Records of Human Behaviour’, PNAS, 9 April 2013, accessed 26 July 2019, https://www.pnas.org/content/110/15/5802.
14. Sunstein, Cass R., #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press, 2017.
15. Klonick, Kate, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’, 131 Harvard Law Review 1598, 1625 (2018).
16. Peters, Jonathan, ‘The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Application – or Lack Thereof – to Third-Party Platforms’, 32 Berkeley Technology Law Journal 989, 990 (2017).
17. Bracha, Oren and Frank Pasquale, ‘Federal Search Commission? Access, Fairness, and Accountability in the Law of Search’, 93 Cornell Law Review 1149, 1193 (2008).
18. Ayres, Ian and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate. Oxford: Oxford University Press, 1992.
19. Volokh, Eugene and Donald Falk, ‘First Amendment Protection for Search Engine Search Results’, White Paper Commissioned by Google, UCLA School of Law Research Paper No. 12-22, 20 April 2012, https://ssrn.com/abstract=2055364.
20. Baldwin, Robert, Martin Cave, and Martin Lodge (eds.), The Oxford Handbook of Regulation. Oxford: Oxford University Press, 2010.
21. Coglianese, Cary and Evan Mendelson, ‘Meta-Regulation and Self-Regulation’, in The Oxford Handbook of Regulation, edited by Robert Baldwin, Martin Cave, and Martin Lodge, 146–168. Oxford: Oxford University Press, 2010.

Acknowledgements: The author would like to thank Pooja Saxena and Gurshabad Grover for their edits and feedback.

Amber Sinha is the executive director of the Centre for Internet and Society, India. At CIS, Amber has led projects on privacy, digital identity, artificial intelligence, and misinformation. Amber’s research has been cited with appreciation by the Supreme Court of India. He is a member of the Steering Committee of ABOUT ML, an initiative to bring diverse perspectives to develop, test, and implement machine learning system documentation practices. His first book, The Networked Public, was released in 2019. Amber studied law and humanities at National Law School of India University, Bangalore.