Feminist Frames for a Brave New Digitality

Anita Gurumurthy & Nandini Chami

The excesses of intelligence capitalism present an unprecedented urgency to reimagine sociality and reinvent the institutional architectures for a new world. We need to revitalize our theories of agency, social subjectivity, and planetary wellbeing; revamp the norms and rules that determine rights; and revisit the political practice of feminist solidarity. Our sense-making frames cannot afford a nostalgia about human supremacy. They must recognize non-human materialities, putting front and centre an environment in which all matter shares existence. This will allow us to revisualize personhood and social subjectivity through a relatedness with natural ecosystems and technological artefacts. Current institutional norms are woefully inadequate, unable as they are to tackle a totalizing intelligence capitalism. The digital paradigm must be (re)claimed through a post-individualist, anti-patriarchal, decentralized, and anti-imperialist institutional framework. What we need are norms for a collective claim to data and a political commitment to systematically scrutinize the social identity of AI systems. Feminist efforts to build community and forge publics are entrapped in the dominant communicative arenas of the digital that instrumentalize and co-opt political subjectivity. Through a self-reflexive place-making that visibilizes the often-illegible practices of community and solidarity and embraces cross-fertilizations, feminism can lead the way for emancipatory posthuman futures.

Illustration by Mansi Thakkar

Crisis at the digital turn

As feminism and its radical propensity confront the digital epoch, the Covid conjuncture provides a stocktaking moment for revisiting the human condition. It allows us to contemplate the meta frames that must guide us into a just and egalitarian future, providing an occasion to sharpen our epistemic toolkit and explore what a transformative being and becoming in digitality – the condition of human-digital hybridity – means.

The story of connectivity and access seems to have lost its once-impassioned urgency and emancipatory potential. The market for gadgets has reached an equilibrium adequate for capital’s continuing conquest through the datafication of human sociality. The gadgetless or unconnected, such as indigenous people, are perhaps not as important for the corporate data machine as the ecosystems they inhabit. As for the dispossessed others like wage workers, their lives are anyway being captured by cameras, internet of things (IoT) devices, and automated decision-making technologies generating the data to categorize and convert them into ‘bottom-of-the-pyramid’ markets for ever-expanding product ‘innovations’.

The production of capital in the digital epoch may be seen as the stage of ‘intelligence capitalism’. Enclosing the data it ceaselessly collects and accumulates, and deploying this data enclosure to hone an ‘intelligence advantage’, digital age capitalism ruthlessly eliminates competition to aggrandize value on the network. The capitalist data machine commoditizes information not only to produce economic value, but also to control ‘bios’ or life to a more intensified degree than before. In the current form of capitalism, therefore, “life itself is the main capital”.1 The data gold rush is the new imperialist frontier – a Leninist territorial capture by force for capitalist interests. Except that coercion today is achieved by stealth, as the self and society are folded into intelligence capitalism through the digitally-surveilled motions of the everyday.

The digital’s inherent propensity for deterritorial communications has worked well for the new global feudals – big platform companies – and their business models. Erasing the materiality of embodied labor and eradicating the relationality of care, intelligence capitalism decouples social reproduction from production. A regime of despotic control reigns over digital production chains, atomizing labor power and normalizing precarity. A pronounced asymmetry is evident in the neo-colonial division of labor within which gendered and racialized categorizations determine the very promise of freedom.

The economy of “life as surplus”,2 feeding on incessant profiling, displaces critical agency and radical subjectivity, cannibalizing diversity. The ‘data subject’ is but a proxy for the proliferation of differences as the engine of commodification in our quantified environments.

In the wild west of intelligence capitalism, rule-making is privatized and legitimized through platform as protocol and AI as law. Institutions of norms-setting and rule-making have been rendered ineffectual, and even irrelevant, with the data lords determining visions and meanings of development, democracy, human rights, trade, and peace and security.

So goes the digital tale. A compelling contemporary myth that is a far cry from Haraway’s cyborgian vision for a feminist future that can vanquish “an informatics of domination”.3

From a feminist standpoint, this reality is untenable. The digital turn signals an urgency to reimagine sociality and invent the institutional architectures of a new world. We need to revitalize our theories of agency, social subjectivity, and planetary wellbeing; revamp the norms and rules that determine rights; and revisit the political practice of feminist solidarity.

Towards this, our essay proposes an epistemic triumvirate of sense-making, claims-making and place-making as the basis of such renewal.

Sense-making – Embracing the posthuman condition

There are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals.
– N. Katherine Hayles4

The binaries of data and body, human and technology, often lead us to essentialisms – a dystopic bemoaning of datafied destiny in intelligence capitalism or utopic readings of AI as the magic wand for ‘human’ advancement. Moving away from such dead ends, feminist political action must find a theoretical portal for liberation that allows for greater complexity.

Feminist theorists like Donna Haraway, Rosi Braidotti, and Katherine Hayles reject these tight boundaries and dualisms. They question the category of the autonomous, liberal, human subject who stands apart from non-human others.5 Asserting the inseparability of mind and matter, they propose a ‘posthuman’ systems framing. Posthumanism contends that with digital technologies, the embodied mind becomes distributed across multiple terrains of hyperconnection and hyperpersonalization. The posthuman person is hence a complex, material-informational entity, constantly being (re)constructed.6

This is not to suggest a loss of humanity, but a shift in the way we understand nature and the hybrid lives we lead. In the continuous interaction with electronic devices, the human person does indeed embody agency; however, agency is now reconfigured. It is distributed and interactive. Human agency correlates with the distributed cognitive system as a whole, in which ‘thinking’ is done by human and non-human actors.7

As an eco-philosophical approach, feminist posthumanism also theorizes a seamlessness between subject and object, subjectivity and ecology – an inter-connectedness between all matter – “that locates the subject in the flow of relations with multiple others”.8 Feminist posthuman theorists thus underline a post-anthropocentric perspective on the environment. The ‘environment’ is not only the context for human agency, but the arena for the production of the entirety of both ‘natural’ and ‘social’ worlds. There is nothing beyond environment, and nothing (for instance, humans and their diverse cultures) is excluded from it.9

Why is the posthuman frame important to our actions?

To marshal the vision and action adequate to a sustainable future that is cognizant of the limits of anthropocentrism and the false idea of a singular, undifferentiated humanity, the conceptual frames we deploy must explain the structures of power and domination. In posthumanism, things and persons, nature and technology, virtual and real are entangled in a complex whole. Our evolution towards the posthuman condition, as Braidotti reflects, is a stage of crisis under the ‘capitalocene’ – a conjuncture in which capitalism, through digital technology, excessively informs and subordinates the very possibility of thinking about what a human is.10 Arranging and ordering human beings as risky/non-risky, deserving/undeserving, valued/disposable and so on, capitalist data regimes construct and reconstruct the materiality of bodies through control, colonization, and exploitation. They hold subjectivity hostage.

But nostalgic assertions harking back to human supremacy and a disavowal of AI may end up denying “social ontology” at the digital turn.11 Such nostalgia will prevent the crystallization of institutional ethics appropriate and adequate to the posthuman condition. Non-human materialities are bona fide participants within events and interactions, rather than recalcitrant objects, social constructs, or instrumentalities.12

Our task, therefore, is to dismantle disempowering relationalities, revisualizing personhood through an ethics of connection – with natural ecosystems, robots, AI, and the material others that make the whole of our existence. Displacing the oppressive regimes of data governmentality in dominant computational systems, our action must situate itself in the quest for a new sensibility, mobilizing new modes of social subjectivity.

Claims-making – Defining network-data freedoms

The overlap in the sociopolitical circumstances of human and artificial agents is not predicated on some shared biological or ecological background, nor on shared experiences or conscious states, but more concretely on the material and institutional realities within which human and nonhuman agents “share existence”.
– Bruno Latour13

Articulating how rights arise in the relationality of matter – human and nonhuman – in a shared digital destiny is a vital feminist task. Indeed, time-space contingencies or ‘the contextual’ must occupy a salient place, but they must align with a common baseline, that is, ‘a shared vision’, for emancipatory personhood. A shift from liberal constructs of the human in human rights is urgently needed to reimagine the idea of rights through a posthuman institutional ethics.

What this entails – stepping beyond human-centric ideas of solidarity, social justice, and equality – is a planetary ecosystem focus.14 The ways in which capitalism, the state, patriarchy, imperialism, and white supremacy have historically required control over bodies and nature need to be the starting point in this quest for new rights.

How do we then begin to articulate the substance of digital rights, or more broadly, network-data freedoms for an expanded personhood?

Intelligence capitalism is a totalizing, imperialist force. People and planet, machines and code, are subordinated in digital value chains, their agentic propensities exploited and extracted for profit. From rare-earth mining in Congo and chip production in Asia, to affective and intellectual labor in the digital economy, the mass deployment of surveillance paraphernalia by states, bio-piracy and bio-prospecting through digital gene sequencing, and AI modelling meant to discriminate and destroy, the pan-global digital ecosystem, emboldened by finance capital, has unleashed a disciplinary regime that has eroded personhood and eviscerated planetary wellbeing.

Institutional norms are at a crossroads. Not unlike the post-war crisis that birthed the human rights regime, the world polity today is at a hairpin bend. A post-democratic complacence is sweeping across state institutions, while the global multilateral system is occupied by imaginations of ‘sustainable development’ that valorize a capitalist future through the tropes of equality, inclusion, opportunity, innovation, and progress. Humanist ideals in global justice have been used to defend the very practices that subvert it.

The digital needs to be reconceived in post-individualist, anti-imperialist, anti-patriarchal terms. The myth of data as a disembedded, non-rivalrous, ever-flowing resource obfuscates the systemic relationalities of the network-data-nature-culture assemblage in intelligence capitalism. Not only must these relationalities be opened up in order to question what data may be “dematerialized”15 from human and non-human matter, under what conditions, towards what gains, and for whom, but data materiality itself must also be continuously examined in relation to historical markers – race, gender, class, caste, geography, and more.

A post-individualist framework is needed for claims about data – one that accounts for how embodification online, and the processes through which data becomes intelligence, are evaluated for their physical, material, and non-material implications.16

This does not mean a negation of personal rights, but rather, an attempt to inscribe the social with the possibilities for an authentic posthuman personhood. Privacy rights based on individual consent have proven to be ineffectual at best and harmful at worst, with corporations acting as de facto mediators of informational claims in which the body is embedded. The structural implications of loss of privacy for minority communities in derived datasets (when identities reappear) have been the subject of much study.17 A reification of personhood in the form of, for example, privacy rights as a boundary against things or abuse of commoditized data is unlikely to solve the social or collective crisis of corporatized data control.18

The corollary of this is that extractive regimes of data as private enclosures will need to make way for a different institutional framework for data’s “rematerialization”19 so that datafied relationalities can (re)produce critical, agentic personhoods for thriving nature-culture-techno ecosystems.

Non-Western ontologies provide important points of departure – locating humans as integral to ‘environment’,20 and underscoring new visions for conceptualizing human-digital assemblages. They suggest alternative ideas of data materiality where relationality and “belonging” (of part with the whole) rather than “exclusionary rights” (between subject and object)21 can become the basis of claims. The notion of the data commons – increasingly gaining ground in digital rights theory and activism – has the radical potential for an ecosystem approach to data resources. By situating data within the very same natural-social environment in which humans share space, this approach allows for explorations of collective claims that adhere to shared ethics and norms. It alerts us to the possibilities for posthuman personhood that can bring forth post-anthropocentric, legal-institutional framings of the digital.

As Sarah Keenan observes, “property’s governmental power reaches beyond the subject, determining not only what belongs to who, but also who belongs where, and how spaces of belonging will be shaped in the future”.22 Left to itself, data’s commoditization is bound to (re)create a disciplinary order that is brutal in its alienation and destruction. In elaborating the ideas of the data subject, therefore, legal-institutional visions must legitimize the claims of marginal subjects, ensuring a place for them in the future. Claims need to be reimagined as potential posthumanist rights.23

The institutional aspects of data also need to consider personhood as it is constituted in the interplay between human and non-human parts of global intelligent ecosystems. Whether AI, for instance, has consciousness or sentience is the wrong question here. The fact is that, in the digital moment, thinking and embodiment are distributed. They are entangled in structures of power that need to be made known. Daniel Estrada points to how Kiwibots, a start-up offering food delivery through robots – rather than using AI software to control its bots – farms out the control task to low-paid operators in Colombia who use GPS to direct the bot to its destination. Kiwibots provides an interesting case at the intersection of automation, teleoperation, and the global labor market that challenges the strict dichotomy between humans and machines. In this digital ecosystem, instrumentalizing the robot as ‘the slave’ would amount to “indirectly treating another human as a slave, with many of the same structures of exploitation and oppression the term invites”.24 The right question for ethical policy, therefore, is – how should robotic/AI agency co-construct social subjectivities?

An institutional framework for AI must recognize overlapping structures of oppression that situate digital things – data pools, databases, networks, AI systems, cameras, internet of things, robots, cloud architectures – as agents of power. Feminist data and AI scholarship is replete with analyses of the subordination and violence implicit in the gendering and racialization of bots.25 Scholarship also points to the denial of personhood through state control of marginal citizens via real-time surveillance.26 The “social identity” of robots and AI must therefore be available for public inspection.27 Put differently, posthumanist frames of justice include a morality for the non-human world, opening up intelligence and bodies cohering in the form of automated code to political scrutiny and renewed imaginings.

The claims of local actors resisting the multiple tyrannies of oppression cannot materialize unless the international political economy of development discourse and rule-making are challenged fundamentally. The right to participate as full persons in network-data assemblages belongs to all individuals and collectivities. It cannot fructify in the current trajectories of corporate-led, imperialist, undemocratic global systems that co-opt and control the digital. Quite ironically, the institutions of international human rights law discovered the posthuman category at their very inception, when capitalist interests were combined with human rights and the corporation was deemed a person.28 The future of rights and justice depends on destabilizing these realities for a transformative “ecological potential”.29 We need collective, decentralized, and anti-imperialist imaginaries to govern the network and data – imaginaries that can enable a thriving of diverse posthumanist assemblages.

Place-making – Constructing feminist publics

We need to understand the body not as bound to the private or to the self – the western idea of the autonomous individual – but as being linked integrally to material expressions of community and public space. In this sense there is no neat divide between the corporeal and the social; there is instead what has been called a ‘social flesh.’
– Wendy Harcourt and Arturo Escobar30

At the heart of intelligence capitalism is the impulse that produces ever-multiplying differences. A post-feminist valorization of narcissistic individuality feeds the network-data complex with likes, forwards, retweets, and more, individualizing feminism and flattening it into a proliferating, universal hashtag culture of performativity. To be in the network is to model the self on its logic. The feminist subject in the current conjuncture, therefore, emerges as an active, freely choosing, and self-reinventing persona, unaffected by structural constraints.31

While the internet revolutionized the creation and construction of community and solidarity, changing the very scale and space of feminist politics, its evolution in intelligence capitalism complicates feminist place-making. It draws the self and subjectivity into a depoliticized space, obliterating socio-structural hierarchies. Algorithmic cultures of platform publics accommodate radical identity-based politics, cannibalizing them into “a market-driven and state-sanctioned governmentality of diversity” that Chandra Talpade Mohanty critiques in her reflections on minority struggles in current neoliberal times.32 She points to how questions of oppression and exploitation, as systematic, institutionalized processes, have difficulty being heard when neoliberal narratives disallow the salience of collective experience or redefine this experience as a commodity to be consumed.33

Tragically, the embodied experience of digitality is as much an embedded product of the structures of oppression and exploitation – including race, caste, disability/ability, age, gender, sexuality, geography, etc. – as in previous epochs. Activists and feminist rights organizations have documented the extreme violence that women and people of non-normative gender identities and sexualities face in the design architectures of online publics, geared to encash viral outrage, literally.34

The online space of flows privileges certain bodies and narratives, even as it eclipses and sidesteps others incongruous with the demands of its political economy. Attempts to perforate pop culture with feminist strategies are rewarded,35 and publics adopting playful modes of resistance or meme cultures encouraged. The bodies of women of colour – even if legible – are often “relegated to metaphors”,36 while locations of race, gender, class, nation, empire, sexuality acquire a post-intersectional grammar that is unified in various combinations for the market. The contradictions in feminist ontological assumptions are rendered irrelevant.

For feminist action then, the current posthuman condition presents a persistent tension between critical, radical subjectivity and online communicative publics. How do we call out and resist oppression when its experience is coded into the self-propelling logic of the network space? How can feminist publics, implicated as they are in the material architectures of intelligence capitalism, rescue themselves?

Feminist practice needs a reflexivity that can account for the legibilities coming from the interpretative power of certain types of politics, and the erasures of certain others, both tied to the logics of the network-data complex. This will allow us to discern and politicize less visible feminist practices that are resisting the ravages of capital, demonstrating how community and solidarity in the transnational moment are conceived and enacted in a global frame for a global citizenship.37 The radical politics of such place-making – embedded (locally/in the particular) yet connected (translocally/through human-digital assemblages) – share a vision for an alternative democratic global order. These visions seek to make public the systemic basis of oppression, and the multifarious sites of resistance from where women farmers, indigenous women protesting the brutal exploitation of their ecological resources, persecuted women from minority religions laying claim to citizenship, trans and queer women, disabled women, and the new precariat on digital value chains are collectively asserting their right to be heard.

These resistance narratives tell us what it means to create a place, or indeed, a tapestry of places, that can be part of multiscalar frames of feminist action. The notion and practice of solidarity needs to be recovered from assumptions of universal, templatized, global publics that dilute, discipline, and disarticulate counter-hegemonic knowledges. However, as Mohanty points out, this cannot be to the neglect of the structural.38 Questions of imperialism and capitalism resurgent in the political economy of the digital are central to how political feminisms of today carve out the material spaces of resistance – online and offline – and can provide a global frame for building solidarity.

Therefore, in the context of intelligence capitalism, our interest lies in continuously unpacking how the institutional context of the online communicative arena functions; how communicative publics are reproduced through dominant relationalities (including platform ownership, design, protocols, and governance); how ‘individual experiences’ may be traced back to the systems that (re)produce them (and vice versa); and how communicative arenas of the digital are themselves in constant interaction with dominant ideologies, and historical structures of oppression, exploitation, cultural norms, legal rules and ruling institutions, all (re)producing one another.

Feminist place-making is not merely about creating the site/s for free play of multiple subjectivities. It is about deploying the public arena in the digital moment as a constitutive element in subjective identification itself.39

Emerging through new alliances to decolonize, destabilize, and discover, feminist actions for a new digitality must forge cross-fertilizations that include indigenous and First Nation peoples, environmental and digital rights activists, technologists, anti-globalization forces, and several others.

A brave new feminist digitality

The posthuman is not postpolitical. The posthuman condition does not mark the end of political agency, but a recasting of it in the direction of relational ontology.
– Rosi Braidotti40

An emboldened capitalism riding on digital highways and data power confronts twenty-first century feminism. Its instinct for survival and intuition for opportunism have been laid bare in these surreal times of the global Covid pandemic. The personal wealth of the Big Tech monarchs has gone up even as the world registered heightened inequality and hunger.

A recalibration of our politics is not a matter of choice. A new deal must be clinched and the ‘informatics of domination’ overthrown. Digital and data technologies are not extraneous objects. They are agentic entities in the ecosystems we inhabit – of centralized power, imperialist control, and patriarchal superiority. But they need not be. Revisualizing personhood and reimagining an ethics and politics of connection, community and care, feminism must mobilize action appropriate to emancipatory posthuman futures on an array of fronts.

Who the future of promise belongs to depends on the political-institutional design possibilities that realign the nature-culture-techno present. This task is as much about seeking transformative global norm-making for data and AI as it is about a (re)socialization of subjectivity. New actions and coalitions will need to be forged along with unlike others not easily legible in the coercive politics of likeness we navigate in our techno-structures.

There is no room for nostalgic humanism. Our sense-making, claims-making and place-making strategies must account for an emergent reflexivity – ‘a becoming social’ – that can confront the digital devil in the detail.

It is time to ready the feminist arsenal for a humane and just digital epoch.

Notes

  • 1 Käll, J. (2017a). A posthuman data subject in the European data protection regime. Making My Data Real Working Paper 4/2017. Retrieved from https://makingmydatareal.files.wordpress.com/2017/03/mmdr_kacc88ll_4_2017.pdf.
  • 2 Cooper, M. E. (2011). Life as surplus: Biotechnology and capitalism in the neoliberal era. University of Washington Press.
  • 3 Haraway, D. (2006). A cyborg manifesto: Science, technology, and socialist-feminism in the late 20th century. In The international handbook of virtual learning environments (pp. 117-158). Springer, Dordrecht.
  • 4 Hayles, N. K. (2008). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.
  • 5 Käll, J. (2017b). A posthuman data subject? The right to be forgotten and beyond. German Law Journal, 18(5), 1145-1162.
  • 6 Delio, I. (2020). The posthuman as complex dynamical personhood: A reply to Hyun-Shik Jun, Social Epistemology Review and Reply Collective. Retrieved from https://social-epistemology.com/2020/04/15/the-posthuman-as-complex-dynamical-personhood-a-reply-to-hyun-shik-jun-ilia-delio/.
  • 7 ibid.
  • 8 Braidotti, R. (2013). The posthuman. John Wiley & Sons. p. 50.
  • 9 Fox, N. J., & Alldred, P. (2020). Sustainability, feminist posthumanism and the unusual capacities of (post) humans. Environmental sociology, 6(2), 121-131.
  • 10 Braidotti, R. (2019). A theoretical framework for the critical posthumanities. Theory, culture & society, 36(6), 31-61.
  • 11 Estrada, D. (2019). Human Supremacy as Posthuman Risk. Computer Ethics-Philosophical Enquiry (CEPE) Proceedings, 2019(1), 13.
  • 12 Sundberg (2014, 33) cited in Fox, N. J., & Alldred, P. (2020). op.cit.
  • 13 Latour, B. (2003). Do you believe in reality? News from the trenches of the science wars. Philosophy of technology: The technological condition, Blackwell Publishing Ltd, 126–137.
  • 14 Braidotti, R. (2016). Posthuman feminist theory. Oxford handbook of feminist theory, 673.
  • 15 Hayles, 1999 cited in Käll, J. (2017b). op.cit.
  • 16 Käll, J. (2017b). op.cit.
  • 17 Dencik, L., Hintz, A., & Cable, J. (2017). Towards data justice. DATA POLITICS, 167.
  • 18 Käll, J. (2020, October). The materiality of data as property. Retrieved from https://www.academia.edu/42843054/The_Materiality_of_Data_as_Property.
  • 19 Hayles, 1999 cited in Käll, J. (2017b). op.cit.
  • 20 Rosiek, Snyder, and Pratt (2019); Todd (2016) cited in Fox, N. J., & Alldred, P. (2020). op.cit.
  • 21 Keenan, S. (2014). Subversive property: Law and the production of spaces of belonging. Routledge.
  • 22 Keenan, S. (2014). op.cit.
  • 23 Käll, J. (2017b). op.cit.
  • 24 Estrada, D. (2019). op.cit.
  • 25 Balsamo, A. M. (1996). Technologies of the gendered body: Reading cyborg women. Duke University Press.
  • 26 Nayar, P. K. (2012). ‘I Sing the Body Biometric’: Surveillance and Biological Citizenship. Economic and Political Weekly, 17-22.
  • 27 Brscić, Kidokoro, Suehiro & Kanda (2015); Romero (2018); Salvini et al. (2010); Smith & Zeller (2017) cited in Estrada, D. (2019). op.cit.
  • 28 Baxi, U. (2009). Human rights in a posthuman world: Critical essays. Oxford University Press.
  • 29 Fox, N. J., & Alldred, P. (2020). op.cit.
  • 30 Harcourt, W., & Escobar, A. (2002). Women and the politics of place. Development, 45(1), 7-14.
  • 31 Gill, R. (2007). Postfeminist media culture: Elements of a sensibility. European journal of cultural studies, 10(2), 147-166.
  • 32 Mohanty, C. T. (2013). Transnational feminist crossings: On neoliberalism and radical critique. Signs: Journal of women in culture and society, 38(4), 967-991.
  • 33 ibid.
  • 34 Namita, A. (2017). Mapping research in gender and digital technology. Retrieved from https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/58151/58271.pdf?sequence=1.
  • 35 Kauer (2009), cited in Baer, H. (2016). Redoing feminism: Digital activism, body politics, and neoliberalism. Feminist Media Studies, 16(1), 17-34. http://dx.doi.org/10.1080/14680777.2015.1093070.
  • 36 EPW Engage (October 2020). Is ‘Intersectionality’ a useful analytical framework for feminists in India?, Retrieved from https://www.epw.in/engage/discussion/intersectionality-useful-analytical-framework.
  • 37 Batliwala, S. (2002). Grassroots movements as transnational actors: Implications for global civil society. Voluntas: International journal of voluntary and nonprofit organizations, 13(4), 393-409.
  • 38 Jibrin, R., & Salem, S. (2015). Revisiting intersectionality: Reflections on theory and praxis. Transcripts: An Interdisciplinary Journal in the Humanities and Sciences, 5, 7-24.
  • 39 Harper, P. B. (1997). Gay male identity, personal privacy, and relations of public exchange: Notes on directions for queer critique. Social Text: Queer Transexions of Race, Nation, and Gender, 52-53 (Autumn-Winter), 5-29.
  • 40 Braidotti, R. (2016). op.cit.

Anita Gurumurthy is a founding member and executive director of IT for Change, where she leads research collaborations and projects in relation to the network society, with a focus on governance, democracy and gender justice. Her work reflects a keen interest in southern frameworks and the political economy of Internet governance and data and surveillance.

Nandini Chami is the deputy director of IT for Change. Her work largely focuses on research and policy advocacy in the domains of digital rights and development, and the political economy of women’s rights in the information society. She is part of the organisation’s advocacy efforts around the 2030 development agenda on issues of ‘data for development’ and digital technologies and gender justice. She also provides strategic support to IT for Change’s field centre, Prakriye.

Imagining the AI We Want: Towards a New AI Constitutionalism

Jun-E Tan

Artificial intelligence (AI) technologies promise vast benefits to society but also bring unprecedented risks when abused or misused. As such, a movement towards AI constitutionalism has begun, as stakeholders come together to articulate the values and principles that should inform the development, deployment, and use of AI. This essay outlines the current state of AI constitutionalism. It argues that existing discourses and initiatives centre on non-legally binding AI ethics that are overly narrow and technical in their substance, and overlook systemic and structural concerns. Most AI guidelines and value statements come from small and privileged groups of AI experts in the Global North and reflect their interests and priorities, with little or no inputs from those affected by these technologies. This essay suggests three principles for an AI constitutionalism rooted in societal and local contexts: viewing AI as a means instead of an end, with an emphasis on clarifying the objectives and analyzing the feasibility of the technology in providing solutions; emphasizing relationality in AI ethics, moving away from an individualistic and rationalistic paradigm; and envisioning an AI governance that goes beyond self-regulation by the industry, and is instead supported by checks and balances, institutional frameworks, and regulatory environments arrived at through participatory processes.

Illustration by Jahnavi Koganti

1. Introduction

The ability of machines to learn from the past and make predictions about the future promises vast improvements to our individual and collective lives. With artificial intelligence (AI), we are able to rapidly detect patterns and anomalies in data, discover new insights, and inform decision-making. Better public health and transportation, more efficient services and increased accessibility, climate change mitigation and adaptation, etc. are part of a long list of the potential benefits of AI.

Governments and companies, eager to deploy and employ these technologies, often cite these potential benefits to frame the adoption of AI as a matter of inevitable progress. The possibilities of ‘AI for good’ are endless, we are told, as long as we provide the machines with enough data to churn. The technology is neutral, we are assured, and AI experts are working on perfecting these systems, complete with ethical considerations, so that negative impacts are minimized. Yet, as more AI-enabled systems are rolled out and adopted, accounts of unintended consequences and intentional abuse continue to accumulate at an alarming pace. Cautionary tales of the unintended consequences of AI abound – machines exacerbating racial biases,1 exam grading algorithms turning out to be hugely erroneous,2 and automated social protection schemes failing society’s most vulnerable, leading to death by starvation in extreme cases.3 Then there are egregious cases of intentional abuse – state and non-state actors leveraging AI capabilities to surveil entire populations,4 manipulate voter behavior,5 or produce highly realistic manipulated audio-visual content (also known as deepfakes) that can undermine the foundations of trust in society.6

Amidst these promises and anxieties, a movement towards AI constitutionalism has begun in recent years, as stakeholders from the market, state, and civil society put forth visions of what ethical AI should constitute and how these technologies should be governed. By AI constitutionalism, we mean the process of norm-making or the articulation of key values and principles which guide the design, construction, deployment, and usage of AI technologies. The concept is inspired by the more established body of work on digital constitutionalism, defined by Dennis Redeker and his colleagues as “a constellation of initiatives [including declarations, magna cartas, charters, bills of rights, etc.] that have sought to articulate a set of political rights, governance norms, and limitations on the exercise of power on the Internet”,7,8,9 which are not only important for political and symbolic reasons, but also for shaping laws and regulations in the digital era.

Indeed, the process of shaping norms is exceedingly important as it entails a reckoning with our collective values. Norms are a sort of moral compass that guide us towards an imagined future. Especially in the context of AI, a nascent technology whose direction and implications are not yet fully known, some big picture questions need to be discussed. What are our goals and principles as a society? Where do we draw the line between possible trade-offs and values that are sacred and must be protected at all costs? What behaviors do we reward or sanction? And depending on the answers to these questions, what types of AI should we build (or not build) to aid our progress as a civilization?

In this essay, I outline the current state of AI constitutionalism, and provide arguments about why existing discourses and initiatives in this space will not lead us towards a future that is cognizant of human dignity and sustainable development. Based on these arguments, I imagine a new AI constitutionalism that imbues technological discourses with socio-political relevance, thus opening up discussions rooted in specific applications and contexts. Finally, I put forth three principles that should guide future initiatives in AI constitutionalism:

1) AI must be viewed as a ‘means’ instead of an ‘end’,
2) AI ethics must emphasize relationality and context, and
3) AI governance must go beyond self-regulation by the industry.

2. AI ethics: Why it is not enough

In the last five years, the area of AI ethics has become increasingly active, with stakeholders at various levels and in different geographic locations issuing policy statements or guidelines on what ethical AI is or should be. Together, these provide a fertile ground for analyzing the underlying priorities and assumptions that mark the current state of AI constitutionalism and shape the character of norm-making in the field.

Anna Jobin and her colleagues at ETH Zurich gathered at least 84 institutional reports or guidance documents on ethical AI in their 2019 analysis of the global landscape of AI ethics guidelines and principles.10 Most of these documents come from private companies (22.6 percent), government agencies (21.4 percent), academic and research institutions (10.7 percent), and intergovernmental or supranational organizations (9.5 percent). Prominent examples at the government level include the OECD AI Principles and the European Commission’s Ethics Guidelines for Trustworthy AI. Corporations, civil society, and other multistakeholder groups have also come up with their own non-legally binding positions and manifestos. Examples include Google’s AI principles,11 the Universal Guidelines for Artificial Intelligence developed by The Public Voice,12 the Tenets of Partnership on AI to Benefit People and Society,13 and the Beijing AI Principles.

There is some convergence in the values or principles that emerge as paramount in these ethical AI guidelines and statements. In Jobin and her colleagues’ analysis, the most commonly articulated principles are those of transparency, justice and fairness, non-maleficence (causing no harm), responsibility, and privacy. Six others appear less frequently, and in the following order: beneficence (promoting good), freedom and autonomy, trust, dignity, sustainability, and solidarity. However, despite the convergence in the values that are prioritized by existing AI policy documents, the picture becomes increasingly complex when we look beyond the terms themselves, and focus on their interpretation and implementation. At this point, some divergence or lack of consensus begins to emerge.

Most articulations on AI ethics tend to focus on narrow technical problems and fixes. An evaluation of 22 ethical AI guidelines by Thilo Hagendorff from the University of Tübingen14 finds that the most popular values (such as accountability, explainability, and privacy) tend to be the easiest to operationalize mathematically, while the more systemic problems are overlooked. These systemic problems, Hagendorff suggests, include the weakening of social cohesion (through filter bubbles and echo chambers, for instance), the political abuse of AI systems, the environmental impacts of the technology, and trolley problems (in which there is no clear answer as to which choice is more ethical; for instance, whether an autonomous vehicle should endanger a pedestrian or its driver). Moreover, very little attention is paid to the ethical dilemmas plaguing the industry itself – the lack of diversity within the AI community or the invisible and precarious labor that goes into enabling AI technologies, such as dataset labeling and content moderation.

Discussions on AI ethics are also based on certain assumptions and framings – “moral backgrounds”, according to Daniel Greene and his colleagues15 – which set the scope and direction of AI constitutionalism. Greene and his colleagues’ critical review of seven high-profile value statements in ethical AI finds that the discourse is in line with conventional business ethics but sidesteps the imperatives of social justice and considerations of human flourishing. Technology is framed as an inevitable step towards progress; its application is taken for granted regardless of the context. In other words, being ethical only entails “building better”; “not building” is not an option. Furthermore, scrutiny of the ethicality of AI technologies is restricted to the design level, and does not extend to the business level. A design-level approach to ethical AI, for instance, looks only at reducing the racial bias of facial recognition software, without questioning the ethics of deploying this technology for mass surveillance in the first place. Another implicit assumption is that ethical design is the exclusive domain of experts within the AI community (for instance, tech companies, academics, lawyers). Product users and buyers are just stakeholders who “have AI happen to them”. Seemingly ironclad values and principles start to show cracks when these assumptions are questioned. What can we expect from ethical AI that is techno-deterministic and does not take a critical view of what the technology is used for? For whom and in whose interest are AI technologies being built and deployed?

More challenges emerge as we move away from the substantive content of AI ethics discourses and start putting principles into practice. First, AI ethics is, at best, seen as good intentions with no guarantee of good actions, and at worst, criticized as a deliberate attempt to ward off hard regulations. Ethics whitewashing is a real concern as corporations eschew regulations and put forth self-formulated ethical guidelines as sufficient for AI governance. In practice, ethical considerations come in only after the top priorities of profit margins, client requirements, and project constraints have been resolved.16 It is difficult to rely on the goodwill of corporations, which have arguably co-opted the academic field of AI ethics in an attempt to delay regulations.17 The existence of ethical guidelines does not guarantee that companies will be ethical. There are well-documented instances of companies resorting to ethics dumping and shirking wherever convenient, most obvious in the precarious work conditions of content moderation workers in the Global South.18

Mainstream discussions on AI ethics assume that technologies exist in a vacuum, devoid of context. These assumptions are often made by a very small and privileged group of people in the Global North,19 who do not see the need to engage people outside of their own community even though the tools they build significantly impact the world at large. When AI technologies are designed and deployed without attention to context, systemic harms are amplified, and entire populations, especially in the Global South, can be rendered more vulnerable.20 Above all, discussions on ethics remain just that – discussions – not legally binding and enforceable. AI ethics, in its current state, does not lead to ethical AI. If we are serious about making technology work for the people and the planet, our efforts towards AI constitutionalism need to look beyond dominant discourses. This is what I attempt to do in the following section.

3. Towards a new AI constitutionalism

Already, there is mounting resistance against corporations and their maneuvering of ethical self-regulation. Carly Kind, Director of the Ada Lovelace Institute, observes a “third wave” of AI ethics, following a first wave comprising principles and philosophical debates, and a second wave focusing on narrow, technical fixes. Kind argues that the third wave of AI ethics is less conceptual, more focused on applications, and takes into account structural issues. Research institutes, activists, and advocates have mobilized to effect changes in AI design and use, with some successes such as legislation and moratoria on the use of algorithms for applications such as test grading and facial recognition.21 An emerging body of work on “radical AI” aims to expose the power imbalances exacerbated by AI and offer solutions.22

The Covid-19 pandemic has laid bare these structural imbalances and triggered a renewed rush towards digitalization, with its associated concerns. Against this backdrop, we have also seen a shift towards a more critical view of AI and its implementation. It is precisely at this point that a new AI constitutionalism, or at least a significantly upgraded one, is needed and possible. We must seize this moment to take control of the narrative and determine what is important for our collective future, and how AI can help us achieve this vision. This is particularly urgent for communities that lie outside of the AI power centres, whose views remain underrepresented in global norm-making and standards-setting, and whose contexts may not be understood by those building the technologies and making the ethical decisions that underpin them. Some groups have already rallied together to collect and compile principles important to their communities, such as the Digital Justice Manifesto put together by the Just Net Coalition23 (a global network of civil society actors based mostly in the Global South), and the CARE Principles for Indigenous Data Governance by the Global Indigenous Data Alliance.24

Societal constitutionalism is a process of constitutional rule-making that starts from social groups like civil society, representatives from the business community, or multistakeholder coalitions. As noted by Redeker et al.,25 the process can be seen in three phases: “an initial phase of coming to an agreement about a set of norms by a specific group; a second phase in which these norms become law; and a third phase in which reflection about this builds up to achieving constitutional character”. Thus far, most of the norm-making in AI has been top down, coming from high-level policymakers, transnational Big Tech firms, or small groups of elites at national levels, reflecting the priorities of these groups. This is insufficient not only from a democratic point of view, but also because the vast applications of AI across different fields, from agriculture to zoology, necessitate the inputs of field experts who understand local contexts and implications.

A reimagination of AI constitutionalism should move the discourse from a purely technological approach to take societal considerations into account. It needs to move from the realm of the abstract to focus on application. Governance norms, political rights, and limitations of power within the field of AI should be democratically deliberated at different levels of a nested societal system and within different political jurisdictions (e.g. city, state, national, regional, international levels). This would allow all stakeholders and interest groups (e.g. professional associations, business associations, civil society networks, grassroots communities) to contribute meaningfully to the governance of AI from their own vantage points. This collective bottom-up approach, I propose, should be underpinned by the following considerations:

3.1. AI as a means to an end (and not an end in itself)

One prevalent assumption about AI is that it is an inevitable step towards progress, that AI technologies, if built well, can solve any problem. The tech industry’s optimism in this regard is echoed by the state. As a result, AI becomes an end in itself instead of a means to an end. Technological determinism is reflected in the willingness of governments to keep the AI regulatory environment minimalist, in order to not stifle innovation. In the rush to remain competitive in a high-tech, machine-enabled future, governments have outlined national AI strategies to promote research, talent, and investments in the sector, while remaining noncommittal about safeguarding against potential human rights violations.26 The possibilities of ‘AI for good’ begin to fall flat when seen from this perspective. If the objective of AI is indeed to bring social and economic benefits to the people, governments need to prioritize human rights over the needs of the industry and address the thorny issues that result from these technologies, including mass job displacements and a rapid concentration of wealth in the hands of a few.

For AI to be the means to an end, we need to first clarify our objectives and then critically assess if using AI is the best way to achieve them. In this, we can follow the lead of vision statements such as the UN Sustainable Development Goals and the Universal Declaration of Human Rights which have clearly-specified objectives, arrived at through extensive international consultations, negotiations, and agreements. The UN SDGs also come with a specific timeline (by 2030) as well as established indicators to help evaluate if the objectives have been met. Additionally, we can draw on relevant national27 and sectoral policies,28 or even organizational vision and mission statements which have often gone through contestations and consensus-building by multiple stakeholders. The use of AI needs to be grounded in such clearly-stated visions and blueprints for a better society.

Furthermore, it needs to be acknowledged that AI is only one tool in a full range of options, and not all problems should/can be solved by such technologies. In a presentation at Princeton University, titled ‘How to recognize AI snake oil’, Arvind Narayanan argued that while AI has become highly accurate in applications of perception (e.g. content identification, speech to text, facial recognition), and is improving in applications of automating judgment (e.g. spam detection, detection of copyrighted material, content recommendation), applications that promise to predict social outcomes (e.g. predicting criminal recidivism, job performance, terrorist risk) are still “fundamentally dubious”. Justifying the use of the term ‘snake oil AI’, Narayanan pointed to existing studies that show that AI backed by thousands of datasets is not substantially better at predicting social outcomes than manual scoring using only a few data points. Discussions on AI constitutionalism should, therefore, be grounded in clearly-stated objectives and feasibility studies, as well as allow room for rejecting AI usage, especially when there are potential risks for stakeholder communities.

3.2. AI ethics to emphasize relationality and context

According to Sabelo Mhlambi from Harvard University, Western ethical traditions tend to emphasize “rationality” as a prized quality of personhood – along the lines of “I think, therefore I am” – where humanness is defined by the individual’s ability to arrive at the truth through logical deduction.29 Not only is this an inherently individualistic worldview, it has also been used to justify colonial and racial subjugation based on the belief that certain groups are not rational enough, and therefore, do not deserve to be treated as humans. An AI framework that prioritizes rationality and individualism ignores the interconnectedness of our globalized and digitalized world, and serves to exacerbate historical injustices and perpetuate new forms of digital exploitation. The failure to recognize the relationality of people, objects, and events has left us hurtling towards countless crises and avoidable tragedies (such as man-made climate change exacerbated by nations’ inability to coordinate a multilateral response).

Scholars of technology and ethics have offered diverse philosophies anchored in relationality – such as Ubuntu,30 Confucianism,31 and indigenous epistemologies (e.g. Hawai’i, Cree, and Lakota)32 – that view ethical behavior in the context of social relationships and relationships with non-human entities such as the environment, or even sentient AI in the future. The moral character of AI must be judged based on its impacts on social relationships and the overall context and environment it interacts with. For example, evaluating AI-powered automated decision-making systems through the ethical lens of Ubuntu, Mhlambi points to a range of ethical risks. These include the exclusion of marginalized communities because of biases and non-participatory decision-making, societal fragmentation as a result of the attention economy and its associated features, and inequalities resulting from the rapid concentration of data and resources in the hands of a powerful few.33 In contrast, current ethical AI frameworks say very little about extractive business models of surveillance capitalism or the heavy carbon footprint of training AI.

The development and deployment of AI technologies take place in a complex, networked world. Discussions on AI constitutionalism thus need a paradigmatic shift in ethics from the individual to the relational, and must consider issues as diverse as collective privacy and consent, power and decolonization, invisible labor and environmental externalities in AI supply chains, as well as unintended consequences (for instance, when systems interact in unpredictable ways with their particular environments).

3.3. AI governance to go beyond self-regulation by the industry

The tech ethos of “move fast and break things” becomes much less persuasive once we make the connection that an algorithmic tweak at Facebook can lead to (or prevent) a genocide in Myanmar.34 Some friction in the system, by way of checks and balances, is necessary to make sure that any technology released is safe for society, and to guard against AI exceptionalism. Beyond safety, AI presents significant systems-level opportunities and threats. An AI Security Map drawn by Jessica Newman at the University of California, Berkeley proposes 20 such areas across four domains – digital/physical (e.g. malicious use of AI and automated cyberattacks, secure convergence/integration of AI with other technologies), political (e.g. disinformation and manipulation, geopolitical strategy, and international collaboration), economic (e.g. reduced inequalities, promotion of AI research and development), and social (e.g. privacy and data rights, sustainability and ecology).35 It is difficult to imagine that self-regulation in the AI industry would carry us through all of these different areas, across different sectoral and geographical contexts.

The World Economic Forum defines governance as “making decisions and exercising authority in order to guide the behavior of individuals and organizations”.36 As AI constitutionalism is ultimately about governance of technology, discussions should not stop at AI ethics or be left to experts. Instead, we should explore other mechanisms such as institutional frameworks and regulatory environments to bridge principles and practice. Under the broad ambit of AI constitutionalism, diverse governance issues can be debated at various policy levels – for example, cross-border data flows and data sovereignty can be discussed at the international level; hard limits against malicious use of AI and data governance frameworks can be discussed at a national level; data privacy, especially in sensitive sectors such as finance and health, can be taken up at a sectoral level.

Broad participation in AI governance can have positive spillover effects such as trust-building, pooling multidisciplinary knowledge, and capacity-building across different domains. For this, a new AI constitutionalism needs to push for stakeholder participation at various levels. Underrepresented nations need to be invited and supported in norm-making initiatives at the international level; civil society must be consulted and engaged at national and city levels. These discussions should not focus only on the technical, and the onus should be on the AI community to make the information accessible to all. As a recent report by Upturn and Omidyar Network puts it, non-technical properties of an automated system, such as clarity about its existence, purpose, constitution,37 and impact, can be “just as important, and often more important” than its technical artifacts (its policies, inputs and outputs, training data, and source code).38

4. End reflections

AI constitutionalism needs to be squarely rooted in societal contexts and must make the connections between technology and the traditional fault lines of power and privilege. The resulting discourses will be complex and contested, reflecting the messy realities that the technology is embedded in, rather than the neat lists of values and principles that see the technology in a vacuum. The values of AI ethics (such as fairness, accountability, and transparency) will take on different, more consequential meanings when applied at a societal level, challenging actors in the Global North to explore ways to decolonize AI and distribute its benefits based on solidarity, not paternalism.

By lifting AI constitutionalism from its narrow, technological focus to the societal and application level, we will find opportunities for greater participation and a more diverse range of perspectives to shape governance norms, power structures, and political rights in the field of AI. This will make space for actors in the Global South to deliberate on our own AI-enabled future, drawing from our cultural philosophies, and governing AI through our laws and institutional frameworks. It is critical that we claim this space to govern technology, as the unprecedented advances promised by AI can only be fulfilled if it is carefully controlled. Forfeiting this space would leave us stranded with a vastly different outcome of being controlled by technology and those wielding it.

Notes

Jun-E Tan is an independent researcher based in Malaysia, currently working on the topic of AI governance in Southeast Asia. Her research interests are broadly anchored in the areas of sustainable development, human rights, and digital communication. More information on her research and projects can be found on her website, https://jun-etan.com.

How the Global South Can Rise to the Challenge of a Digital New Deal

Richard Kozul-Wright

The fault lines in the global economic order, exposed once again by the pandemic, make this an opportune moment for long-standing advocates of transformative change to put forth new agendas for the post-Covid world. Structural challenges posed by a globally uneven playing field, and upheld by discriminatory trade policies, unidirectional flows of labor and data, and differential levels of environmental and human degradation experienced by the Global South, require an overhaul of international systems and call for a reconfiguration of domestic priorities.

Taking cognizance of this, Richard Kozul-Wright, along with Boston University’s Kevin Gallagher, put forth the ‘Geneva Principles for a Global Green New Deal’, which envision a global realignment of development goals, conferring autonomy on states, encouraging productive spending, and accounting for climate realities.

We spoke with Richard Kozul-Wright, director of the division on globalization and development strategies at the United Nations Conference on Trade and Development (UNCTAD), about whether and how these principles can be rearticulated and reconfigured to inform a progressive and egalitarian agenda for rapidly digitalizing economies. How can we govern data, the digital, and network technologies differently? How can we forge a democratic future for digital trade? What institutional arrangements are needed for redistributive justice in a rapidly digitalizing world? And above all, how can we rise to the challenge of a Digital New Deal?

Edited Excerpts

IT for Change (ITfC): The Covid-19 pandemic has given a boost to the digitalization of economies. Across the world, we are witnessing the expansion of digital servicification and a rise in forays by US and Chinese Big Tech corporations into foreign markets. In your view, what is worrisome about this trend and what are some of the risks we should be looking out for?

Richard Kozul-Wright (RKW): Clearly, access to the digital economy has helped during the pandemic by keeping information flowing, by keeping spending going through digital payment platforms and financial technology services, and by keeping classrooms going through online education and e-learning. But there are some real worries, three in particular, that we have to pay attention to. First, these gains are obviously limited by the digital divide, both within and between countries. It seems almost certain that the emphasis on the use of and access to digital technologies during the pandemic will further exacerbate existing inequalities. In that sense, greater digital servicification will lead to further divisions. So that exaggeration of the digital divide is the first concern. The second is that the digital economy, at least as we see it, is a rent-based economy where the ‘winner takes most’, if not all. In the absence of the right kind of regulations, digital servicification will almost certainly lead to a higher concentration of rents in the hands of a few big digital platforms, mainly from the US and China, that already have a clear lead in that respect. This is an obvious concern. Beyond the issue of differential access to technology, the increased income inequality that is likely to be generated will deepen political and social divisions. The third concern is related to the first two. The digital economy is based on access to data and, as a consequence of the pandemic, more data is being collected and processed by the platforms. This data is also, for all practical purposes, owned by these platforms. Most developing countries, at this moment in time, don’t have the legislative or the physical infrastructure to be able to strengthen data sovereignty. This will pose further challenges for developing countries as the first-mover advantage becomes more and more entrenched, and the challenge around access to and ownership of data becomes more and more problematic.

ITfC: What are your thoughts on how the current multilateral trade regime is contributing to some of these problems that you mentioned?

RKW: We at UNCTAD are worried about the way in which the rules of the global economy in general, including in the trading system, are rigged in favor of certain vested interests. That’s the background against which we look at these problems. In the particular context of the digital economy, the World Trade Organization (WTO) has an existing work program on e-commerce – to have discussions on e-commerce rules, allow countries to understand what these rules can do, and how those rules might impact development processes in particular. At the same time, the Doha Development Agenda, which has not as yet ended, is being squeezed out of the discussion by attempts to shift rule-making to newer issues, including those related to the digital era. This is coming at a time when the WTO itself, as an institution, has lost a lot of trust, particularly from its development partners. That’s a real concern for us in the particular context of wider discussions of reform of the WTO. Any reforms at this time should not come at the cost of the Doha Development Agenda. That round needs to be concluded before any new issues are put on the negotiating table, including rules involving the governance of the digital economy.

We are worried that the current way in which the WTO is operating will work to the advantage of the digital giants and against developing countries which lack the digital infrastructure necessary to be able to benefit from these new technologies. Digital rules at this moment in time, to borrow a slightly outdated metaphor, at least technologically speaking, would be putting the cart before the horse. We don’t think that’s very appropriate in the current context. On top of that, the big digital platforms are not only financially very powerful, but also politically very powerful. They have the political and financial clout to put pressure on governments to rig the digital rules, in exactly the same way that other powerful corporations have been able to rig the rules in other parts of the trade system. Now is not the appropriate time to try and force rule-making on digital issues, both for the developing countries as well as the WTO itself, which is going through a very difficult moment.

ITfC: As you mentioned, it’s quite apparent to outside observers as well that the multilateral rule of law and global trade systems are in crisis for a variety of reasons. In the digital governance context, this has meant a pervasive influence of multistakeholderism which has often undermined public interest because of corporations arguing for an equal seat at the table. Given the urgency to create norms for future digital economies – making digital transnational corporations (TNCs) accountable and deciding new rules on digital taxation and tariffs – we need global governance frameworks to challenge the current unequal order. How can we reinvent governance of digital TNCs within the current system?

RKW: I am not sure multistakeholderism adequately describes the evolution of global governance. We see this much more as a neo-capitalist world in which the interests of large corporations in developed economies are being advanced in cahoots with their own states in a way which resembles mercantilism. That poses challenges for developing countries who have much weaker states and firms than is the case for advanced economies.

The one thing that Covid-19 has obviously done is to highlight the pivotal role of the state and the public notion of economic interest. That’s clear in the context of the global health pandemic, but it is also true of other aspects of public goods and socio-economic rights. The challenge ahead is to reinvigorate the state and to get back towards a multilateral system in which the state, rather than private sector interests, sets the goals that define the common good. That’s the big challenge. This is very difficult given the way in which the rules of the system have been redesigned over the last 40 years to pander to private interests. The nature of the challenge goes back to the lack of trust in the system. And this lack of trust is reflected in the way in which the forces of the political economy are playing out, in particular, along digital lines.

Developing countries are still very much in resistance mode. They haven't yet found the positive agenda that is necessary to build the policy space they need, not only in the context of the digital economy but across a series of economic activities. They need that space if they're going to recover from this crisis in a better way than they did ten years ago.

One of the things that will be important to challenge coming out of the current crisis is the narrative, which is already being heard, of rapidly reglobalising the system in response to the pandemic, using the pandemic as a kind of bait-and-switch. The crisis is being used to say that what we need is an international solution to this problem (which we all agree is the case). However, the bait, in the form of access to international technologies and the necessary goods and services during the pandemic, is quickly being switched into code for extending the rules of the digital economy which favor existing vested interests. Resistance to this kind of bait-and-switch by developing countries and civil society organizations is the necessary first step. The more difficult challenge is whether, on the back of the pandemic and the recognition that the state matters even more in protecting lives and livelihoods, the existing rules of the game – currently heavily stacked in favor of certain interests – can be rewritten to bring about the elements of social and economic justice that are clearly missing from the system. Developing countries are still very much in resistance mode. They haven’t yet found the positive agenda that is necessary to build the policy space they need, not only in the context of the digital economy but across a series of economic activities. They need that space if they’re going to recover from this crisis in a better way than they did ten years ago, and to build the kind of resilience – economic, social, and medical – that everyone is talking about as a necessary forward step out of the pandemic. That’s where the challenge lies right now.

ITfC: Typically, the governance challenge is so difficult because it demands that we come up with a vision of the kind of world we want to build. At the beginning of the pandemic, in April, you had published an article in The Tribune where you spoke about the five strategic goals for a Global Green New Deal. In your view, what may be the normative principles for a Digital New Deal that is also cognizant of the looming ecological crisis?

RKW: This was part of some work we were doing jointly with Boston University to develop a general set of principles that we think are necessary to revive the multilateral system across a whole swathe of areas of economic life, and not just the trading system, where the rules and norms have been diverted by neoliberalism and the rise of unchecked corporate power. It’s a problem with respect to finance, intellectual property, and so on across that system. It’s not a system that’s capable, despite all the talk, of delivering fairer outcomes. It doesn’t produce the kind of caring economy that can protect the most vulnerable populations and promote a wider sense of economic rights. It doesn’t lead to a kind of participatory politics that can counteract the capture of policymaking by powerful interest groups. Ultimately, that’s the biggest concern. What is currently offered doesn’t lend itself to a sustainable future in which the environment is not being constantly ravaged and defiled for narrow private interests. The idea behind our work was the need for a different set of principles on which to deliver these kinds of broad strategic goals. They’re very general in nature, but they apply as much to the digital economy, or the evolving digital economy, as they do to the analog economy. The goal should not be liberalization, privatization, deregulation – these may or may not be useful instruments to achieve the larger goals of environmental sustainability and shared prosperity. Rather, we need to ensure that the basic principles around which we structure our aims and policies are such that the instruments don’t pre-empt or distort the overriding goal, but are calibrated to deliver those goals.

Obviously, common but differentiated responsibility in any multilateral context remains a basic principle for us, particularly where global public goods and the global commons are concerned. That notion applies, in particular, to the digital economy through the commitment to special and differential treatment in trading rules. Policy space – within the interdependent world we inhabit – should be extended to allow for the pursuit of national development strategies in line with a country’s particular capabilities and historical legacies. This has to be central to any kind of global rules. The need for proper participation on equal terms, accountability, and full membership in the process of designing multilateral rules systems has to be central. These are among the set of principles we have tried to outline and that we think have a broad resonance when it comes to the design of any sort of international interaction across states. These principles are even more necessary for the digital economy, where the dangers of corporate capture, rent-seeking, and polarization are arguably more intense than in many other areas of economic life. Trying to take those general principles and apply them to the specifics of the digital economy is a challenge, and should be a necessary part of a Digital New Deal that we are trying to articulate for a more sustainable and inclusive multilateral system.

ITfC: How do you think countries in the Global South could forge their pathways to development in the digital economic order? There is a dual challenge here: to not replicate the growth model of neoliberalism which is predicated on data extractivism and to not be reduced to mere data mines for companies of the Global North.

RKW: This is very much an industrial policy challenge. The digital is the latest wave of industrial ‘progress’. It’s the newest path towards the industrial frontier. The challenges can only be met with active policy engagement by governments. It can’t be left to markets for all kinds of reasons including inherent problems of the digital economies – scale economies, externalities, asymmetries – that are hardwired into these activities. These have to be addressed by governments. Thinking about industrial policy in this digital context is the necessary first step.

When advanced economies talk about WTO reform, as they are doing now, they are thinking about ways to make it all the more difficult for developing countries to use the kinds of policy tools that they themselves used to build up capacity in this area.

One thing that developing countries shouldn’t be shy of is pointing out continuously that the lead of the advanced economies themselves, despite their rhetoric, is because of their use of industrial policy in this area, often linked to the military-industrial complex. The endless use of subsidies, financial support, tariffs to build up assets and capabilities in the area is what advanced economies have been doing over the course of the last 50 years or more to gain this dominant position. Thinking in industrial policy terms is critical for the Global South to get a handle on this challenge. This speaks to the need to rethink the rules of the international trading system that has done its utmost to prevent active industrial policy from being part of the toolkit for developing countries over the last 20-30 years. Certainly, when advanced economies talk about WTO reform, as they are doing now, they are thinking about ways to make it all the more difficult for developing countries to use the kinds of policy tools that they themselves used to build up capacity in this area. That’s the first set of challenges that developing countries need to focus on.

We also, in the work that we have done, have tried to outline a kind of digital cooperation agenda, particularly at the regional level, for developing countries. There are a lot of opportunities in the digital context for building regional alliances and strengthening regional integration – whether it’s about building the data economy, cloud computing infrastructure, broadband infrastructure, promoting e-commerce, or the use of regional digital payments – there are a lot of areas that make up the digital economy that lend themselves to a much stronger regional agenda. We have tried to articulate a kind of progressive digital cooperation agenda for developing countries. That’s an important way to go, all the more so as one suspects that regionalism will become more important coming out of this pandemic. All the talk about shortening value chains, for example, needs to take hold amongst developing countries too. That’s another important area.

The last one, particularly in the context of South-South cooperation, is learning from success stories. There are success stories in the developing world. The obvious one is China (though it’s not the only one). We do have a Belt-and-Road platform at UNCTAD, where we want to try and disseminate lessons from the Chinese experience that other developing countries could usefully tap into when thinking about their own structural transformation challenge. This includes, of course, the digital economy, where China has emerged as a major digital player in the course of 20-25 years. So that sharing of experiences among countries of the South – for example, countries that have been able to develop legislation on data sovereignty – is also a necessary part of the kind of strategic thinking that developing countries are going to need if they are going to benefit from what is potentially a very transformative technology, but also a technology that could leave them even further behind if they don’t develop the right policy tools to harness it.

ITfC: Development finance for building the critical digital and data public infrastructures needed by developing countries is often a challenge. In your view, what is not right with development financing in the digital sector today? How can this change? How can development finance rise to the challenge of the Digital New Deal?

RKW: That’s another key question. When we think about industrial policy, it’s not just technology issues that are at play. Development finance has a critical role in the industrial policy agenda. At UNCTAD, we have for a long time criticized the way in which footloose capital and the deregulation of financial markets, along with the narrowing of central bank agendas, have distorted the financing of the development agenda and moved it away from thinking about how finance contributes to structural transformation to thinking about how you can boost stock markets and other types of short-term, often highly speculative, asset classes. That, unfortunately, remains the agenda. This is what the World Bank calls the ‘maximizing finance agenda’ that uses public funds to incentivize private investors, and this remains a dominant and highly distortionary feature of the international financial system. That’s a general problem that needs to be tackled, not only for the digital economy but for many other traditional economic activities where the Global South needs to build capacities.

We need a much more regulated financial system, both at the national and international levels. In that context, we have always insisted on the critical role of development banks – national, regional, and, ideally, multilateral – as sources of reliable, stable finance that give firms and governments in the South the necessary longer-term horizon that is essential if you are going to truly diversify and upgrade your economy with long-term investment planning. That’s a general point, but it is a central point.

In the context of the digital economy, a related but additional challenge is that the South is, inevitably, in infant industry territory, where start-ups suffer a whole series of disadvantages that come from their lack of scale and more limited capacities. Development banks, even the successful and good ones, have often failed to find ways to effectively encourage and nurture smaller businesses of a more productive nature – I’m not talking here about the microfinance agenda, which is part of the problem and not part of the solution. That need to find effective financing windows for potentially productive start-ups in the digital economy will be a necessary part of the financing agenda coming out of the crisis, as we try and look for ways to rebuild the interface between finance and industry in a much more constructive way than has been the case in most countries in the last few years. There again, lessons from China are very important for other developing countries in examining how to think about these challenges.

This interview was conducted by Nandini Chami and Khawla Zainab of IT for Change.

Richard Kozul-Wright is director of the globalisation and development strategies division in UNCTAD and is responsible for the UNCTAD flagship publication, The Trade and Development Report. He has worked at the United Nations in both New York and Geneva. He holds a PhD in economics from the University of Cambridge, UK, and has published widely on economic issues in academic journals, books, and media outlets.

Indigenous Data Sovereignty: Towards an Equitable and Inclusive Digital Future

Maui Hudson

In the face of a rapidly proliferating digital economy, indigenous communities across the world have long pointed to the many ramifications that the incursions of the digital have had on their economic and cultural lives. This is best illustrated by the ongoing negotiations between the Māori community and the New Zealand Crown, intended to secure rights of sovereignty over data produced by and about the community. Maui Hudson, associate professor at the University of Waikato, is leading the Tikanga in Technology project aimed at exploring Māori approaches to collective privacy, benefit, and governance in a digital environment, with a view to increasing the benefits to the community and reducing data harm.

We spoke to Hudson on the unique problems and challenges that indigenous communities are tackling in the emerging techno-economic landscape and fruitful ways in which indigenous perspectives can be employed to confront the technological transformations of the day. The interview covered a range of crucial subjects – from the inadequacy of current intellectual property regimes, and growing tendencies towards data colonization; to the kind of principles that would constitute an ethical approach to data use, and how indigenous concerns and knowledge systems can be integrated into the vision of a Digital New Deal.

Edited Excerpts

IT for Change (ITfC): Concerns around data colonialism have been front and centre in the digital economy as Big Tech encloses and expropriates value from data resources of individuals and communities. From your work, could you share your thoughts on the new threats that data colonialism poses to the political and economic sovereignty of indigenous communities?

Maui Hudson (MH): The work I have been doing has been grounded in the language of indigenous data sovereignty. The reference to sovereignty is intentional and speaks back to the assumption that open data is the best way to generate value. In reality, open data facilitates the appropriation of data resources, just as physical resources were extracted from indigenous lands by colonial powers. First-world nations have a disproportionate technological capacity to generate value from data. Therefore, the advocacy for open data supports the consolidation of data power and value in large businesses and conglomerates, leading to further marginalization of local communities and exacerbating societal inequities.

The data that is collected by these conglomerates, in turn, influences the narratives that research creates and has a direct effect on how resources get allocated. Many indigenous communities find themselves in a position where the data collected about them either reflects a deficit mentality that blames communities for the situation in which they find themselves, or they get invisibilized through aggregation into larger groups.

ITfC: How do existing intellectual property regimes (IPRs) impact the claims of indigenous communities over their data resources?

MH: IPRs treat data as property which can be owned by individuals or entities. They create rights which allow the owners to determine who can and who can’t use that property. These rights are time-limited, and once they expire, the property becomes part of the public domain. A number of indigenous knowledge resources don’t meet the criteria for IPRs, and communities are caught between keeping that knowledge secret or sharing it with community members, thereby exposing it to the public domain. Traditional knowledge, traditional songs, and traditional medicinal techniques cannot be protected using IPRs. But people can use them as source material for the development of products which can attract an IPR. Similarly, genomic data about indigenous flora and fauna cannot be patented, yet products derived from both traditional knowledge and genomic data can be protected by IPRs.

ITfC: What is the conception of data sovereignty as articulated from the Māori perspective?

MH: Data sovereignty is a cloud computing term which holds that data should be subject to the laws of the nation within which it is stored. Indigenous data sovereignty takes an alternative position – it states that data should be subject to the laws of the nation from which it is collected. This is oriented towards increasing indigenous control of indigenous data, which can be scaled down to Māori control of Māori data or Tribal control of Tribal data.

In Aotearoa New Zealand, Māori signed the Treaty of Waitangi with the Crown in 1840. One of the clauses in the Treaty guaranteed Māori “full exclusive and undisturbed possession of their Lands and Estates Forests Fisheries and other properties which they may collectively or individually possess, so long as it is their wish and desire to retain the same in their possession…”. The word used to describe other properties in the Māori language is ‘taonga’, and in recent times Māori and the Crown have begun to talk about data as a taonga. While Māori and the Crown are talking about their relative rights and interests in data, there are definitely different expectations and ideas about where they sit.

ITfC: Some scholars have begun to point to the collective claims of communities over their data, proposing the idea of “community data”. Can we imagine data as a common pool resource? Given the de facto control of Big Tech over data resources, how do we move towards this possibility?

MH: Indigenous data sovereignty recognises the collective interests of communities in data as a common resource. Food gathering places and environmental resources are managed as shared resources with protocols in place to ensure their sustainability. Similarly, traditional knowledge is shared within the community for the benefit of the community and represents another common resource.

The idea of individual ownership, whether that be for land or data, is anathema to indigenous sensibilities. The ethic of individual consent alongside the ethic of open data inevitably leads to the de facto control of Big Tech over data resources. These companies may or may not claim to own the data but, through possession and controlling access to it, exert greater levels of control over data that exists within the community.

Creating transparency about what data is indigenous data and where indigenous data resides is the first step towards supporting greater indigenous participation in the governance of data. This creates more collaborative and participatory forms of governance which brings a diversity of values into deliberations about appropriate data use and equitable approaches to benefit sharing.

The notion that indigenous communities should retain rights in relation to accessing data for governance of their communities, or participating in the governance of data when others access data about their communities, expands the set of interests that have a say over what appropriate use of data resources looks like. This is what indigenous data sovereignty expects of data aggregators.

Creating transparency about what data is indigenous data and where indigenous data resides is the first step towards supporting greater indigenous participation in the governance of data. This creates more collaborative and participatory forms of governance which brings a diversity of values into deliberations about appropriate data use and equitable approaches to benefit sharing.

ITfC: Given the emphasis of community over individual ownership within Māori community, does the experience with land or natural resources offer historical lessons regarding the legal formulation and institutionalization of the category of community ownership?

MH: One of the challenges with using land and natural resources is that they have all become subject to legal formulation, even if ownership is shared across the group. In most cases, it is shared with a subset of the whole, and this creates different kinds of tensions. However, data trusts, data commons, and data cooperatives are really examples drawn from land and natural resource environments.

ITfC: What reforms are needed to existing global frameworks (trade, IP, knowledge) so that the sovereign claim of indigenous communities to their data can be protected?

MH: Global frameworks supporting IP and trade have been developed to enhance economic activity. While they may have been successful in terms of the overall economic value generated across the globe, it is apparent that the benefits of these activities have been unfairly distributed. Inequities between the Global North and South, and disparities between high-income and low- and middle-income countries, have been exacerbated through the protocols and rules established by global frameworks. Changes to existing frameworks are unlikely, given the vested interests developed countries have in maintaining their advantage, as well as the level of consensus building required to ratify changes. Creating transparency at a digital infrastructure level may be the path of least resistance, and the recent development of the CARE Principles for Indigenous Data Governance provides an avenue for system change. The CARE Principles, which have been endorsed by the Global Indigenous Data Alliance (GIDA), are being promoted by the Research Data Alliance as complementary to the FAIR Principles for Scientific Data Management. The idea that data should be FAIR and CARE, giving equal attention to the characteristics of the data as well as the purposes of its use, is resonating with the socially responsible segments of the data science community.

Creating transparency at a digital infrastructure level may be the path of least resistance, and the recent development of the CARE Principles for Indigenous Data Governance provides an avenue for system change.

There are a number of examples where multinational corporations have misappropriated cultural knowledge, icons, and material to develop or promote products, and they have been called out for doing that. This wasn’t on the basis of data sovereignty but an assertion of cultural intellectual property rights (which are captured within the spirit of indigenous data sovereignty). Indigenous data sovereignty is also being used in discussions with the Crown about the appropriateness of offshoring data (using cloud-based services for data storage).

ITfC: With the vision of inclusion, equity and prosperity for all, what is your vision for a Digital New Deal?

MH: As digital futures become a part of indigenous realities, there is a greater focus on how data, digital platforms, and cyber infrastructures enhance, rather than diminish, diversity, inclusion, and equity. How can digital ecosystems facilitate indigenous cultures and languages to flourish? How should the value generated through digital economies contribute to the wellbeing and prosperity of all communities? How might indigenous artificial intelligence inform decision-making? My vision for a Digital New Deal would see a more equitable and inclusive society that embraces diversity and builds capacity in indigenous communities, allowing them to maintain their culture in traditional environments and digital ones too.

Much of the additional value that is being generated in the digital economy arises from aggregation and scale. Aggregation through centralization is one avenue, but structurally this creates an inherent inequity. We have to work out how to allow aggregation without centralization. Infrastructure that supports federalization and provides nested polycentric governance approaches will ensure value can be negotiated and distributed more fairly.

Indigenous data sovereignty asserts indigenous rights over indigenous data with the aim of bringing indigenous values into digital platforms, indigenous worldviews into digital infrastructures, and indigenous voices into digital economies.

ITfC: How do you envisage the fault lines and the unifying points for the Global South as we enter the next decade?

MH: There are natural boundaries that exist between different communities, whether those be physical, cultural or linguistic. This is an inherent part of diversity, but can create challenges for building consensus and creating unity, especially when there is a need to mobilise against multinational corporations operating in global digital environments. The appropriation of data and concentration of wealth is the obvious outcome of the global system that has been created. It is also clear that nation states and the international community are not active enough in redistributing wealth to address global inequalities. Indigenous data sovereignty asserts indigenous rights over indigenous data with the aim of bringing indigenous values into digital platforms, indigenous worldviews into digital infrastructures, and indigenous voices into digital economies.

Maui Hudson affiliates to Whakatohea in Aotearoa New Zealand. He is an Associate Professor and Director of the Te Kotahi Research Institute at the University of Waikato. He was a founding member of Te Mana Raraunga Māori Data Sovereignty Network and the Global Indigenous Data Alliance. He leads a research project on Indigenous approaches to transforming data ecosystems, and is a co-director of Local Contexts, a digital tagging system that embeds provenance and community protocols into the metadata of traditional knowledge and genome sequences.

Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media

Amber Sinha

In the past few years, social networking sites have come to play a central role in intermediating the public’s access to and deliberation of information critical to a thriving democracy. In stark contrast to early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. There is a dire need to think of regulatory strategies that look beyond the ‘dumb conduit’ metaphors that justify safe harbor protection to social networking sites. Alongside, it is also important to critically analyze the outcomes of regulatory steps such that they do not adversely impact free speech and privacy. By surveying the potential analogies of company towns, common carriers, and editorial functions, this essay provides a blueprint for how we may envision differentiated intermediary liability rules to govern social networking sites in a responsive manner. 

Illustration by Jahnavi Koganti

Introduction

Only months after Donald Trump’s 2016 election victory – a feat mired in controversy over alleged Russian interference using social media, specifically Facebook – Mark Zuckerberg remarked that his company had grown to serve a role more akin to a government than a corporation. Zuckerberg argued that Facebook was responsible for creating guidelines and rules that governed the exchange of ideas of over two billion people online. Another way to look at the same argument is to acknowledge that, today, a quarter of the world’s population (and of India’s) is subject to the laws of Facebook’s terms and conditions and privacy policies, and public discourse around the globe is shaped within the constraints and conditions they create. Social media platforms like Facebook wield hitherto unimaginable power to catalyze public opinion, causing a particular narrative to gather steam – that Big Tech can pose an existential threat to democracy.

This, of course, is in absolute contrast to the early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media. Instead, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. The regulation of social networking sites has emerged as one of the most important and complex policy problems of this time. In this essay, I will explore the inefficacy of the existing regulatory framework, and provide a blueprint for how to think of appropriate regulatory metaphors to revisit it.

1. The role of new media in democratic discourse

For a thriving democracy, three components are generally considered essential: free and fair elections, working forms of deliberation, and the ability of its people to organize themselves for the purposes of protest. The basic idea behind deliberative democracy is that effective public political participation means more than just majoritarian decision-making. It involves the exchange of reasons and arguments – elected representatives should be able to provide the reasons behind their decisions, and respond to the questions that citizens ask in return. This process of debate, discussion, and persuasion, in addition to the aggregation of votes, is crucial for the legitimacy of policy outcomes.

The advent of the internet and social media has meant that millions of people are interacting with each other and debating issues. At the time of writing this essay, there are over 3.01 billion people online – around 40 percent of the world’s population. Since the early 2000s, a general optimism around new media, coupled with a mounting loss of faith in mainstream media, led many to believe that social networking sites would limit the ability of editors – compromised by economic and political compulsions – to play the role of gatekeepers of news. It was hoped that public accountability would emerge from the networked nature of the new media. Several examples of citizen journalism enabled by social media were hailed as harbingers of a new era of news.

This vision of social media as a democratizing actor was based on the ideal that it would be open, egalitarian, and enable genuine public-driven engagement. Google News, Facebook’s News Feed, which tries to put together a dynamic feed of both personal and global stories, and Twitter’s trending hashtag feature looked poised to be the key drivers of an emerging news ecosystem. Initially, this new media was hailed as a natural consequence of the internet which would enable greater public participation, allow journalists to find more stories, and engage with readers directly.

A democratic society needs media and platforms that allow us to explore different perspectives and arguments before we make up our minds. Instead, these algorithms seize on our half-baked opinions and hasten their crystallization.

Over time, it became evident that far from being open or egalitarian, social media platforms introduce their own specific techno-commercial curation of how information is accessed. This can often amplify, and not lessen, the issues that plague mainstream media. For a democratic society to thrive, individuals need to be active participants in discourse and not passive recipients of information. Social media platforms view users primarily as consumers, not citizens. Their single-minded drive to appeal to our basest and narrowest set of stimuli may make good business sense, but does no favors to the cause of democracy. As citizens, we need to be exposed to more than the most agreeable or extreme form of our still-evolving opinions. The signals we give to algorithms through likes and clicks are often only fleeting or tentative takes on an issue. A democratic society needs media and platforms that allow us to explore different perspectives and arguments before we make up our minds. Instead, these algorithms seize on our half-baked opinions and hasten their crystallization. It is bad enough that our online selves drive this propaganda, but lately, politically aligned actors are making creative use of such platforms to inundate us with misinformation, hate speech, and polarizing content.

2. The ‘public spheres’ of online platforms

Internet platforms have tremendous power to shape and moderate content that they facilitate. Although run by private corporations, these platforms have become public squares for discourse without any public accountability. This has blurred the lines between the public and the private. In the United States, the Supreme Court ruled that streets and parks, regardless of who owns them, must be kept open to the public for expressive activity. In the landmark 1939 case Hague v. Committee for Industrial Organization, the court said clearly:

“Wherever the title of streets and parks may rest, they have immemorially been held in trust for the use of the public and time out of mind, have been used for the purposes of assembly, communicating thought between citizens, and discussing public questions. Such use of the streets and public places has, from ancient times, been a part of the privileges, immunities, rights, and liberties of citizens.”

Despite its relative obscurity, there are few constitutional rights with more everyday relevance than the right to speak freely in public or address crowds on the sidewalks. The peculiarity of viewing even privately-owned spaces as ‘public forums’ lies in moving beyond the restrictions imposed by the state in penalizing private actions on public property. This means that free speech must be allowed to occur freely in public places, thus giving citizens the rights to assemble, protest, and engage in free conversation. While not all common law countries have an equivalent of the public forum doctrine, in most cases there will be clearly articulated rights to assembly with similar objectives. Thus far, courts have been hesitant to accord social media platforms the status of public forums. The primary reason is that these remain privately-owned platforms with their own community guidelines. While often informed by laws on issues such as copyright infringement, hate speech, and misinformation, the enforcement of community guidelines is not a judicially-determined decision.

This became a thorny issue when United States president Donald Trump, using his personal Twitter handle, blocked the accounts of several people, seven of whom filed a suit against this act. This private handle (@realDonaldTrump), with over 53 million followers, is used by the president on a daily basis to pronounce policy decisions and opinions. In fact, former White House Press Secretary Sean Spicer clearly stated that tweets from this handle could be considered official statements made by the president.

The federal court for the Southern District of New York refused to see Twitter as a traditional public forum. But it said that the interactive space accompanying each tweet – how people are allowed to share, comment on, and otherwise engage with the tweet – may be considered a designated public forum. However, even here the key concern was not whether Twitter was a public forum, but that a citizen’s right to access government information was being restricted. The court’s reasoning was that the nature of the platform is irrelevant; it is the nature of the speech, and the fact that it is government speech, that is relevant. Even though the account in question is a private one and Trump operates it as any other private user would, when the platform is used to perform roles that relate to public functions, it automatically transforms from a private account into a designated public forum.

Besides, for those of us who consume and engage with information through platforms like Facebook and Twitter, the web, over time, gets reduced to a personalized, and therefore narrower, version of itself. Our Facebook timelines are occupied more and more by people and posts with shared and similar interests, proclivities, and ideological leanings. Attempts to break out of this restricted worldview by following people and organizations whose voices one may perceive as dissimilar to one’s own are often unsuccessful. In these circumstances, it feels as though platforms like Facebook deliberately resist attempts by people to burst the personalized bubbles created for them. It is ironic, then, that in a hearing before the Senate Select Committee on Intelligence in 2018, Jack Dorsey, the founder and chief executive officer of Twitter, repeatedly referred to Twitter as a “digital public square”, which required “free and open exchange”.

Clearly, there are parts of social media which are designated spaces, where government officials, ministries, departments, elected representatives create pages, accounts, and handles to communicate with the public. This part of the platform is designated as a public forum and the same standards apply here. But that is not the case for content created by ordinary citizens on social networking platforms.

In several countries, including the US and India, courts have applied the well-known ‘public-function’ test, under which the duties of the state will apply if a private entity exercises powers traditionally reserved exclusively for the state. This means that if an entity performs a function of a sovereign character or one that significantly impacts public life, it must be considered the state for that purpose. The need for such a provision arises from the tremendous amount of power exercised by social networking sites in contemporary times.

3. Legal metaphors for social media

Over the past three decades, we have seen legal jurisprudence evolve to understand and address the legal questions posed by the internet and cyberspace. Most of these issues remain unresolved in our legal imagination, but we have formulated structured and clear principles about how one may approach them. Jurisprudence on cyberlaw is built largely around finding the appropriate metaphor. More often than not, the law and jurists seek assistance from existing regulations governing offline activities which can be most likened to the digital activity in question. The regulation of internet intermediaries has been built around the overworked metaphor of ‘dumb conduits’. Below, we explore the different analogies that could instruct how we regulate intermediaries in general, and social networking sites in particular.

Kate Klonick argues that there are three possible ways to look at the major social media companies. The first is to view them as ‘company towns’ and ascribe to them the character of the state, bound to respect free speech obligations as the state would. The second is to view them as common carriers or broadcast media, not equivalent to a public body but still subject to a higher standard of regulation so as to safeguard public access to its services. The third analogy would treat social media sites like news editors, who generally receive full protections of the free speech doctrine when making editorial decisions.

Jonathan Peters is a proponent of the first analogy. Peters relies on the landmark US Supreme Court case Marsh v. Alabama, which states that “the more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.” While this view of Marsh has been roundly rejected in later cases, Benjamin Jackson provides a more rounded argument for invoking the ‘public-functions’ test. He argues that “managing public squares and meeting places” has fallen within the domain of the state, and now that social networking sites perform this role, they perform a public function. This approach has received some judicial blessing in the US, most notably in Packingham v. North Carolina, where the court equated social networking sites such as Facebook, Twitter, and LinkedIn with the ‘modern public square’. This formulation, while effective in dealing with the denial of access to information on these platforms, will pose other problems. As both Klonick and Daphne Keller suggest, it may be disastrous in dealing with the already exacerbated problems of misinformation and hate speech online.

The second analogy likens social networking sites to common carriers such as broadcast media. According to Black’s Law Dictionary, a common carrier is an entity that “holds itself out to the public as offering to transport freight or passengers for a fee”. This common law doctrine has been central to the regulation of modern telecommunication carriers such as radio and television broadcasters. These broadcasters are not considered analogous to the state, in that they retain their private identities and the rights that go alongside. However, they are expected to be subjected to a higher degree of regulation, most importantly, the ‘equal access’ obligations. These obligations are based on one of three rationales. In the case of radio, the need for regulation arose from the “scarcity” of radio frequencies, prompting governments to intervene through a licensing and allocation system. Cable television does not suffer from the same scarcity limitations as radio; here the rationale for regulation is the bottleneck, or gatekeeper, control that the cable operator exercises over most (if not all) of the television programming channeled into the subscriber’s home. The third criterion is that of invasiveness. Back in 1997, a US court categorically denied that the unique factors that justified greater regulation of cable and broadcast were present in the case of the internet. Its decision was based on the reasoning that the internet was not as ‘invasive’ as radio or television, since it required affirmative action to access a specific piece of information, unlike radio and television.

A decade later, in 2008, Bracha and Pasquale critiqued this position, arguing that the internet has emerged as a space where “small, independent speakers [are] relegated to an increasingly marginal position while a handful of commercial giants capture the overwhelming majority of users’ attention and re-emerge as the essential gateways for effective speech”. Effective application of the common carrier analogy requires looking at two key questions. First, in what ways are internet intermediaries, and in particular social networking sites, comparable to common carriers like cable and broadcasters? Second, do these intermediaries satisfy the “scarcity” test, the “bottleneck monopoly power” test, or the “invasiveness” test? The nature of regulation that they must be subject to could depend on the role they are performing and how it satisfies one of these tests.

The final analogy is that of 'editors', where social networking sites exercise content moderation powers akin to the protected speech rights of a newspaper editor. Volokh and Falk have argued that search engine results are protected speech because they are the product of editorial judgments. It has been debated whether search engines, by virtue of dealing with facts as opposed to opinions, fall outside the scope of free speech protection. This position may not be tenable in several common law jurisdictions: facts are where much speech begins, and search results also represent a subjective opinion about facts. The same considerations may apply to the 'editorial' decisions of social networking sites. This characterization would also have an impact on the safe harbor protection that internet intermediaries enjoy in several jurisdictions, under which they are exempt from liability for user-generated content. The basis for safe harbor is the idea that intermediaries are dumb conduits for the distribution of the speech of their users, rather than speakers themselves. However, this argument is no longer tenable. Most, if not all, intermediaries affirmatively shape the form and substance of user content in some manner, using sophisticated prioritization algorithms.

First, let us consider the more superficial design features of intermediaries. When Twitter, for instance, claims safe harbor, it positions itself primarily as a distributor of users' tweets. However, its user interface shapes the nature and content of tweets. The 140-character limit (now 280) has led to the evolution of Twitter's own syntax and vocabulary. Replies, likes, retweets, and hashtags are among the design features that determine how content is created on the platform. But while these do affect the generation of content, they are perhaps not a sufficient argument against safe harbor. They do not render Twitter much more than a thoroughfare for ideas, albeit one with distinct rules on what form those ideas may take.

The more insidious design features are also more obscure or opaque in nature, and worth examining more closely. Many intermediaries employ design features to hold our attention by making their interfaces more addictive. Facebook employs techniques to ensure that each user sees stories and updates in their news feed that they may not have seen on a previous visit to the site. It analyzes, sorts, and reuses user data, making meaning out of users' "reactions", search terms, and browsing activity in order to curate the content of each individual's feed, personalized advertisements, and recommendations. All of this is done under the garb of improving user experience. Given the deluge of information that exists online, it is indeed desirable that platforms personalize our experience in some manner. But the constant tinkering with user data and personalization goes far beyond what is strictly necessary.
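
To make this curation logic concrete, the following is a deliberately minimal sketch of how behavioral signals might be blended into a personalized feed ranking. The signal names, weights, and scoring function are all hypothetical inventions for illustration; they do not describe any platform's actual system.

```python
from collections import Counter

# Hypothetical engagement signals for one user; the names and weights
# below are invented for illustration, not any platform's actual model.
user_signals = {
    "reactions": Counter({"cricket": 12, "politics": 3}),
    "searches": Counter({"cricket": 5}),
    "browsing": Counter({"recipes": 4, "cricket": 2}),
}
signal_weights = {"reactions": 0.5, "searches": 0.3, "browsing": 0.2}

def affinity(topic):
    """Blend past behavior into a single topic-affinity score."""
    return sum(
        weight * user_signals[signal][topic]  # Counter returns 0 if absent
        for signal, weight in signal_weights.items()
    )

def rank_feed(candidate_posts):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(candidate_posts, key=lambda p: affinity(p["topic"]), reverse=True)

candidates = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "cricket"},
    {"id": 3, "topic": "recipes"},
]
print(rank_feed(candidates))  # the cricket post ranks first
```

Even this toy version shows why the 'dumb conduit' framing fails: what each user sees is a function of what the platform has inferred about them, not merely of what their contacts have posted.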

Essentially, the discovery of information is transformed from an individual into a social and algorithmic endeavor. On a platform like Facebook, a large portion of users are exposed to news shared by their friends. Selective exposure to the opinions of like-minded people existed in the pre-digital era as well; however, the ease with which we can find, follow, and focus on such people, and exclude others, in the online world amplifies this tendency. A study by Bakshy and colleagues shows that on Facebook, three filters – the social network, the feed population algorithm, and the user's own content selection – combine to decrease exposure to ideologically challenging news from a random baseline by more than 25 percent for conservative users, and by close to 50 percent for liberal users in the US. There is little empirical work on the subject in India, but Indian users too are likely to have limited exposure to diverse views on a platform like Facebook. These statistics are, however, of limited value. The reduction of 25 to 50 percent assumes that the baseline is a completely bias-free exposure, which is a fiction. In fact, there is now evidence to suggest that those who rely only on mainstream media are more likely to be stuck in ideological bubbles. The combination of filters on Facebook still allows for exposure to some ideologically challenging news.
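
The compounding effect of sequential filters can be shown with a back-of-the-envelope model. The baseline and retention rates below are hypothetical, chosen only to illustrate how three individually modest filters multiply into a large overall reduction; they are not the figures reported by Bakshy and colleagues.

```python
# Toy model of compounding filters; all rates are hypothetical.
baseline = 0.40  # share of cross-cutting news under random exposure

filters = [
    ("social network", 0.75),   # friends share 75% of cross-cutting content
    ("feed algorithm", 0.90),   # ranking surfaces 90% of what remains
    ("user selection", 0.80),   # the user clicks on 80% of what is surfaced
]

exposure = baseline
for name, retained in filters:
    exposure *= retained
    print(f"after {name}: {exposure:.1%} cross-cutting")

print(f"total reduction from baseline: {1 - exposure / baseline:.0%}")
```

In this sketch, no single filter removes more than a quarter of cross-cutting content, yet together they cut exposure by nearly half, which is why attributing the effect to any one filter in isolation is misleading.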

4. Revisiting the structure of intermediary liability regulation

In any case, there is a clear need to differentiate between infrastructure information intermediaries (such as ISPs) and content information intermediaries that facilitate communication (such as social media networks). Content-neutral standards, akin to common carrier regulations, could apply to infrastructure intermediaries, whose business does not primarily turn on content, while separate, content-sensitive standards would apply to content intermediaries. Given their near-total control over our user experience online, intermediaries do owe us a duty of care.

The other criterion for differentiating platforms could be size. The draft Information Technology (Intermediary Guidelines) Rules, 2018 in India attempts such a classification on the basis of the number of users. If resources and capacity are the guiding principle behind the classification, this criterion is problematic, as small businesses with low turnover can also reach a large user base. An alternative criterion is monetary size, which may better reflect an intermediary's capacity to exercise due diligence.
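
The divergence between the two criteria is easy to demonstrate. In the sketch below, both thresholds and the example platform are invented for illustration and are not drawn from the draft Rules.

```python
# Hypothetical thresholds; both numbers are invented for illustration.
USER_THRESHOLD = 5_000_000          # registered users
TURNOVER_THRESHOLD = 500_000_000    # annual turnover in INR

def significant_by_users(users):
    """Classify an intermediary as 'significant' by user count alone."""
    return users >= USER_THRESHOLD

def significant_by_turnover(turnover_inr):
    """Classify an intermediary as 'significant' by monetary size instead."""
    return turnover_inr >= TURNOVER_THRESHOLD

# A viral app run by a small team: huge user base, negligible revenue.
app = {"users": 12_000_000, "turnover_inr": 8_000_000}

print(significant_by_users(app["users"]))            # True: heavy obligations
print(significant_by_turnover(app["turnover_inr"]))  # False: lighter obligations
```

The same platform is caught by one rule and exempted by the other, which is precisely the mismatch between user numbers and compliance capacity that the paragraph above describes.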

The approach of imposing statutory liability on web platforms for harmful speech is widely criticized for violating the constitutionally protected right to free speech and expression. Because private platforms operate under the fear of being penalized if they fail to regulate harmful speech, they are likely to err on the side of caution and remove content even when removal is unnecessary. This can have a chilling effect on free speech on the internet. The threat to free speech is exacerbated by the difficulty of enforcing such regulatory policies. Regulations expect platforms to take down content within a prescribed period from the time they have 'knowledge' of the objectionable content. For platforms with millions of users, all of whom can post and report content, short timeframes (often just 24 hours) pose a very heavy burden. The natural response is then to remove content without diligently evaluating its illegality.

The second approach is a more involved form of co-regulation. For example, the German law that regulates hate speech online, the Network Enforcement Act (NetzDG), envisions the recognition of independent self-regulation institutions within the purview of the Act. Where content is reported by users as illegal but is not manifestly unlawful, the service provider is permitted up to seven days to remove it; within this window, the provider may refer the determination of unlawfulness to such a self-regulation institution. The idea of having trusted institutions such as press councils play a more active role is a good one. However, the German framework significantly compromises the independence of these institutions by giving the Federal Office of Justice the power to 'recognize' them. Ideally, the recognition process should be fully independent of the state, and should include representation from stakeholders in industry and civil society.

While both of the above approaches have their pros and cons, what is clear is that the oft-used metaphor of dumb conduits for internet intermediaries is no longer applicable to social networking sites. There is a dire need to identify other regulatory parallels that better explain the role of these intermediaries. Given the complex range of roles performed by a company like Facebook, it is also worth considering whether these disparate roles ought to be regulated differently. The regulatory exercise for internet intermediaries is complex because none of the analog metaphors captures their functions fully or accurately enough to present a viable regulatory model. This calls for the formulation of meta-regulatory models with a sufficient degree of flexibility built into them.

Instead of laying down precise and specific rules and means of enforcement, the regulator could use a combination of inducements and sanctions to incentivize outcomes based on clearly defined public interest objectives. This can include differentiated approaches to both rule-making and the adjudication of complaints: industry bodies and companies could draft their own codes of conduct, which must meet specified objectives and subsequently be ratified by the regulator. Robust notice-and-comment and public consultation thresholds can be set that associations drafting the codes of conduct must meet.

Coglianese and Mendelson define meta-regulation "as ways that outside regulators deliberately – rather than unintentionally – seek to induce targets to develop their own internal, self-regulatory responses to public problems". Broadly, regulators must choose between two regulatory philosophies: the deterrence model and the compliance model. The deterrence model is an adversarial style of regulation built around sanctions for rule-breaking behavior. It relies on an economic theory which holds that those regulated are rational actors who respond to incentives and disincentives. The compliance model, on the other hand, emphasizes cooperation rather than confrontation, and conciliation rather than coercion. It seeks to prevent harm rather than punish an evil; its conception of enforcement centers on attaining the broad aims of legislation rather than sanctioning its breach. The complexities of online content regulation make a clear case for a mix of both models.

Further, intermediary liability regulators would need to adopt enforcement strategies that both deter egregious offenders and reward those proactively taking steps toward favorable outcomes. Good regulation here requires responsive strategies that take into account the behavior of the regulated actors. This can work only if there is enforcement escalation, with the credible threat of a tipping point powerful enough to deter even the worst offenders. The regulator must be able to perform the functions of an educator, an ombudsman, a judicial body, and an enforcer. At one end of the spectrum, the regulator should be able to perform support functions such as educating platforms through informal guidance, standards setting, advisory services, and training. At the other, it should have a range of sanctions at its disposal, from soft powers such as notices and warnings, naming and shaming, and mandatory audits, to powers to investigate and to impose fines and compensatory orders.
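
As a rough model of such responsive escalation, consider the sketch below. The ladder's tiers and the trigger logic are simplified inventions in the spirit of Ayres and Braithwaite's enforcement pyramid, not a proposal for actual statutory design.

```python
# A toy escalation ladder; tiers and trigger logic are illustrative only.
ESCALATION_LADDER = [
    "informal guidance and training",
    "formal notice and warning",
    "naming and shaming",
    "mandatory audit",
    "investigation and fine",
    "compensatory order",
]

def regulatory_response(prior_breaches):
    """Escalate one tier per breach, capping at the most severe sanction."""
    tier = min(prior_breaches, len(ESCALATION_LADDER) - 1)
    return ESCALATION_LADDER[tier]

for breaches in (0, 2, 7):
    print(f"{breaches} prior breaches -> {regulatory_response(breaches)}")
```

The point of the ladder is the credible tipping point at its apex: most regulated actors should never climb past the supportive lower tiers, but the worst offenders must know the top exists.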

List of References

1. Graber, Mark A., Sanford Levinson, and Mark V. Tushnet, Constitutional Democracy in Crisis? New York: Oxford University Press, 2018.
2. Sinha, Amber, The Networked Public. New Delhi: Rupa Publications, 2019.
3. Udupa, Sahana, ‘India Needs a Fresh Strategy to Tackle Online Extreme Speech’, Economic and Political Weekly, 11 February 2019, accessed 26 July 2019, https://www.epw.in/engage/article/election-2019-india-needs-fresh-strategy-to-tackle-new-digital-tools.
4. Achen, Christopher H. and Larry M. Bartels, Democracy for Realists: Why Elections Do Not Produce Responsive Government, Princeton University Press, 2016.
5. Pariser, Eli, The Filter Bubble, Penguin Books, Reprint Edition, 2012.
6. Giglietto, Fabio, Laura Iannelli, Luca Rossi and Augusto Valeriani, ‘Fakes, News and the Election: A New Taxonomy for the Study of Misleading Information Within the Hybrid Media System’, SSRN, 17 April 2019, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2878774.
7. Wardle, Claire and Hossein Derakhshan, ‘Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making’, Council of Europe Report DGI (2017) 09, Council of Europe, 2017.
8. boyd, danah, ‘Streams of Content, Limited Attention: The Flow of Information through Social Media’, Web2.0 Expo, 17 November 2009, http://www.danah.org/papers/talks/Web2Expo.html.
9. Bakshy, Eytan, Solomon Messing and Lada Adamic, ‘Exposure to Ideologically Diverse News and Opinion on Facebook’, Facebook Research, 9 May 2015, https://research.fb.com/publications/exposure-to-ideologically-diverse-information-on-facebook/.
10. Srinivasan, Dina, ‘The Antitrust Case Against Facebook’, Berkeley Business Law Journal Vol. 16, Issue 1, Forthcoming, 10 September 2018, https://ssrn.com/abstract=3247362.
11. Gerlitz, Carolin and Anne Helmond, ‘The Like Economy: Social Buttons and the Data-Intensive Web’, New Media & Society 15, no. 8 (April 2013): 1348–65, https://doi.org/10.1177/1461444812472322.
12. McNamee, Roger, Zucked: Waking up to the Facebook Catastrophe, New York: Penguin Press, 2019.
13. Kosinski, Michal, David Stillwell and Thore Graepel, ‘Private Traits and Attributes Are Predictable from Digital Records of Human Behaviour’, PNAS, 9 April 2013, accessed 26 July 2019, https://www.pnas.org/content/110/15/5802.
14. Sunstein, Cass R., Republic: Divided Democracy in the Age of Social Media, Princeton: Princeton University Press, 2017.
15. Klonick, Kate, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’, 131 HARV. L. REV. 1598, 1625 (2018).
16. Peters, Jonathan, ‘The “Sovereigns of Cyberspace” and State Action: The First Amendment’s Application – or Lack Thereof – to Third-Party Platforms’, 32 BERKELEY TECH. L.J. 989, 990 (2017).
17. Bracha, Oren and Frank Pasquale, ‘Federal Search Commission? Access, Fairness, and Accountability in the Law of Search’, 93 CORNELL L. REV. 1149, 1193 (2008).
18. Ayres, Ian and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate, Oxford: Oxford University Press, 1992.
19. Volokh, Eugene and Donald Falk, ‘First Amendment Protection for Search Engine Search Results’, White Paper Commissioned by Google, UCLA School of Law Research Paper No. 12-22, 20 April 2012, https://ssrn.com/abstract=2055364 or http://dx.doi.org/10.2139/ssrn.2055364.
20. Baldwin, Robert, Martin Cave and Martin Lodge, The Oxford Handbook of Regulation, Oxford: Oxford University Press, 2010.
21. Coglianese, Cary and Evan Mendelson, ‘Meta-Regulation and Self-Regulation’, in The Oxford Handbook of Regulation, edited by Robert Baldwin, Martin Cave and Martin Lodge, 146-168, Oxford: Oxford University Press, 2010.

Acknowledgements: The author would like to thank Pooja Saxena and Gurshabad Grover for their edits and feedback.

Amber Sinha is the executive director of the Centre for Internet and Society, India. At CIS, Amber has led projects on privacy, digital identity, artificial intelligence, and misinformation. Amber’s research has been cited with appreciation by the Supreme Court of India. He is a member of the Steering Committee of ABOUT ML, an initiative to bring diverse perspectives to develop, test, and implement machine learning system documentation practices. His first book, The Networked Public, was released in 2019. Amber studied law and humanities at National Law School of India University, Bangalore.