Lessons From a Pandemic: Three Provocations for AI Governance
Amba Kak
What, if anything, can the global pandemic teach us about regulating artificial intelligence (AI)? Through three provocations (AI as abstraction; AI as distraction; AI policy as infrastructure policy), this essay explores how the data-driven responses to – and the technology-related impacts of – the Covid-19 pandemic hold crucial insights for the emergent policy terrain around algorithmic accountability and the political economy of AI systems.
Introduction
First, just as abstract and decontextualized data visualizations and statistics about the pandemic have enabled the proliferation of narratives claiming that the “pandemic doesn’t discriminate”, I argue that abstraction plays a similarly pernicious role in the discourse around AI systems. For those engaged in advocacy around the social harms of AI systems, a definitional exercise could be a key way to rescue AI from the abstract and foreground the social and material concerns around these systems.
Second, contact-tracing apps deployed during the pandemic are a good entry point for understanding ‘AI as distraction’. If contact-tracing apps were at the peak of the hype cycle in the early months of the pandemic, they now appear to be in the “trough of disillusionment”.1 It’s a good time then to ask: what was lost in the hype? Distraction is a useful way to understand the real function of many AI and algorithmic decision-making system (ADS) tools, which often disguise underlying motivations and distract from deeper inequities and governance failures. Process-focused regulatory mechanisms like algorithmic impact assessments (AIA) hold promise, but they need to be structured to combat distraction and reveal the motivations driving these projects before they are implemented.
Finally, the pandemic has popularized the comparison of platforms to public utilities and brought a renewed focus to their “infrastructural” power. I argue that the “infrastructural turn”2 in AI policy is well on its way too, although this is sometimes obscured by the lack of consensus around what counts as policy “about AI” versus broader data governance norms or industrial and competition regulation. AI policy should, in fact, be understood as an assemblage of these various policy trends aimed at democratizing, or at least diversifying, access to the inputs that sustain this new computing landscape: data, software, compute, and expertise.
AI as abstraction
“The number of such laborers died/injured during migration to their native places due to such lockdown, State-wise?
Government Response: No such data is available.”
The Indian government’s response to a recent parliamentary question on migrant workers who died as a consequence of the nationwide Covid-19 lockdown – announced on March 24, 2020 with barely four hours’ notice – touched a raw nerve in public discourse.3 It came at a moment when statistics and data visualizations about the spread and impact of the pandemic had become normalized as a key mode of managing it, an approach often referred to as “data-driven governance”.4 The government’s response – no data available – was a reminder that the picture the data paints is one that is palatable, and indeed beneficial, to those who construct it. In other words, “data on the impact of Covid” is not a neutral container: Who decides what counts as impact? Why isn’t there data on deaths due to the economic or governance impacts of Covid? Or data on the socio-economic profiles of those infected, and those who succumbed? As Rashida Richardson notes, “To exercise sovereignty is the power to authorize and enforce what information is relevant and necessary to govern.”5
As mentioned earlier, abstract and decontextualized data visualizations and statistics about the pandemic have enabled the proliferation of narratives claiming that the “pandemic doesn’t discriminate”, thereby erasing the stark disparities in how different demographics have been impacted. These data stories (and data absences, like the missing counts of migrant deaths) can legitimize similarly abstract policy decisions that fail to take into account the immediate and urgent needs of particular demographic groups or localities.6 In response, counter data-narratives have begun to emerge. Data for Black Lives and the COVID Racial Data Tracker in the US collected confirmed case data by race.7 In India, the Criminal Justice and Police Accountability Project studied 34,000 arrest records and 500 First Information Reports filed in Madhya Pradesh during the pandemic to understand patterns of policing and locate the socio-economic profiles of the individuals policed.8 They produced a “countermap” demonstrating that the arbitrary and disproportionate criminalization of marginalized communities had only amplified during the pandemic.
Abstraction plays a similarly pernicious role in the discourse around AI systems. The term AI is ubiquitous in public discourse about technology but remains notoriously underspecified; it is hard to pinpoint precisely what kinds of systems are being referred to under this umbrella term.9 The moniker ‘artificial intelligence’ connotes the replacement of humans with machine thinking. It has an aura of futurism and magic10 routinely reinforced by the images of robots11 that often accompany articles about AI. This imagination of AI has fostered an ‘AI hype’ that has, ironically, benefited a range of routine systems with vastly different functionality and levels of computational intensity. From content filters on social media and fraud detection tools in welfare systems, to facial and other forms of biometric recognition, to “smart” refrigerators and self-driving cars, an ever-expanding spectrum of systems is enveloped under the rubric of “AI”. This has led to heated “boundary wars” in the technical research and business communities over where to set a definitional threshold.12 For these groups, the stakes are high; the definitional threshold will determine which programs benefit from the ever-expanding pool of funding for AI research or make new ventures more appealing to investors.13
AI as an abstract buzzword can be brandished against complex social problems as if it were a neutral and external 'solution' rather than a sociotechnical system designed and developed to make value-laden choices and trade-offs.
For those engaged in advocacy around the social harms of AI systems, a definitional exercise could, however, be a key way to rescue AI from the abstract, and foreground social and material concerns around these systems. Just as glossy data visualizations can obscure the unequal impacts and governance failures of the pandemic, AI as an abstract buzzword can be brandished against complex social problems as if it were a neutral and external ‘solution’ rather than a sociotechnical system14 designed and developed to make value-laden choices and trade-offs.15 These abstract narratives of so-called autonomous systems also obscure the material infrastructure and distributed global workforce that undergirds the AI economy.
There has been a growing shift toward using the term ‘algorithmic decision-making systems’ or ADS to describe some of the most ubiquitous and worrying algorithmic systems in use today. This change is being propelled by advocacy organizations, and there are already multiple official policy documents, and now legislation, that use this framing, primarily in the context of government use of ADS.16 Identifying these as “decision systems” shifts the emphasis from an abstract notion of mimicking or replacing human intelligence to systems that make decisions, allocate resources, create priorities, and engage in value trade-offs. A growing body of research has clarified the various choices and trade-offs made at every step in the lifecycle of such a system: from the data used to train it and the choice of algorithmic models (and the causal logics they deploy) to the complex ways in which those ‘supervising’ the system interpret and apply its results.
In fact, concentrating on the human labor involved at multiple steps in the life cycle of algorithmic systems has been another key tactic in de-abstracting the idea of ‘autonomous AI’. Policy solutions like ‘human-in-the-loop’, which envision human supervision as an antidote to concerns about algorithmic opacity, have largely failed, leading to calls for a more nuanced exploration of this relationship and for changing the lens to “algorithm-in-the-loop”.17 Other research focuses on the large, globally distributed workforce that prepares the foundational datasets required for many of the most ubiquitous text and image processing systems.18
AI as distraction
Earlier this year, as most of the world confronted a rapidly spreading pandemic with no end in sight, contact-tracing apps developed by governments and some of the world’s largest technology companies were a prominent (and arguably central) part of both official and popular narratives about the response to Covid-19.19 In the policy space, heated debates and rapid civil society responses to these technology-oriented solutions to the public health crisis highlighted concerns about privacy, transparency, and efficacy. In countries with low internet penetration or smartphone coverage, the overwhelming reliance on technological measures raised serious concerns of exclusion and, relatedly, of the efficacy of using data derived from these apps to guide policy decisions.
Several months into the pandemic, as many countries grapple with a second wave of high infection rates, there is now markedly less buzz around technological solutions to the global public health crisis.20 While contact-tracing apps are still available in most countries, they appear peripheral (if they appear at all) in news and official accounts of the Covid-19 response. Recent download rates of such apps in Europe, where they are strictly optional, have been low, ranging from 20 percent of the population in Germany to just 3 percent in France.21 In India, the Aarogya Setu app, which is effectively mandatory, has gone from being a key part of the Prime Minister’s Covid-19 address to being mired in controversy.22 It also remains effectively unusable for the large parts of the population without a smartphone, internet access, or digital literacy.
If contact-tracing apps were at the peak of the hype cycle in the early months of the pandemic, they now appear to be in the “trough of disillusionment”.23 It’s a good time then to ask: what was lost in the hype? What was the opportunity cost of the focus on these kinds of consumer technology in a time of crisis? In the Indian context, I argued along with my coauthor that “these technology-based responses to the pandemic obscure that the country still lacks the foundational infrastructure for analyzing digital health information”.24 In other words, the focus on apps distracted from the more foundational lack of digitized information about the public health system, such as the number of hospital beds, disease incidence, and death tolls. Such data would have been invaluable for government agencies deciding how to ration hospital resources and testing facilities, but most of it was not available to policy and planning authorities. In the US, Cathy O’Neil argued that the app-hype was distracting from the glaring lack of testing and of clear official messaging around masks and other precautionary measures.25
AI systems are typically proposed as a magic bullet to solve complex social problems. In reality, they can inhibit progress on broader reforms.
Distraction has been a key function of many AI/ADS tools in two primary ways. First, much like the app-hype during the pandemic, AI systems are typically proposed as a magic bullet for complex social problems. In reality, they can inhibit progress on broader reforms. The buzz around using AI “to solve poverty” is a stark example.26 Data-driven forms of financial technology have been promoted as a form of inclusion that brings the poorest within the net of the formal banking and digital payments ecosystem.27 However, these technology-driven programs distract from the economic reality that these individuals lack the means and assets to participate in such systems and are particularly vulnerable to exploitative and predatory lending schemes.28
Second, algorithmic systems can also distract from the underlying political or economic values being pursued by the institutions that introduce them. A 2013 case from Michigan illustrates how algorithms can be used to disguise austerity measures or other forms of neoliberal governance.29 In October 2013, Michigan implemented a new automated unemployment insurance system to reduce operating costs and target fraud in unemployment insurance claims. When the Michigan Integrated Data Automated System (MiDAS) was implemented, the Unemployment Insurance Agency laid off 432 employees – roughly one third of its staff. After hundreds of people complained about being unfairly fined for fraud, the Auditor General found that MiDAS was “in error” 92 percent of the time. This error can be explained in terms of technical parameters, but that would distract from the fact that the system was embedded in broader and ongoing cutbacks to unemployment insurance and other forms of social welfare benefits under the new Governor, Rick Snyder. The political values of the administration were reflected in the way the algorithm functioned: severely limiting the number of recipients, and disciplining or demonizing those reliant on state aid.30 More recently, an attempt to use facial recognition technologies in a housing complex in New York led to protests from resident groups, who argued that it was in fact “a form of tenant harassment, designed to evict rent-stabilized residents” at a time of rapid gentrification in the neighborhood.31
Process-focused regulatory mechanisms like algorithmic impact assessments (AIA) could be one way to combat distraction, reveal the motivations driving these projects, and enable a meaningful cost-benefit analysis. Requiring entities to conduct AIAs is increasingly proposed as a tool to ensure accountability and transparency in the use of algorithmic decision-making systems. While AIAs are an active field of research, they are already beginning to appear as requirements in instruments like Canada’s Directive on Automated Decision-Making and the proposed Algorithmic Accountability Act of 2019 in the United States. These instruments delegate many of the specifics of AIAs to future executive rulemaking, and there is an active debate over how best to identify the types of effects that count as impact, when the assessments should be conducted (ex ante and/or ex post), and who should be invited to participate in or be consulted during them. In addition to focusing on potential impacts, it will also be critical to structure AIAs to ensure that the broader political and economic motivations behind these uses are illuminated. This can only happen through consultations that not only include the perspectives of those directly impacted but also deliberately decenter the technical components of these projects in favor of the social and economic contexts in which they will be used.
AI policy as infrastructure policy
“The pandemic has many losers but it already has one clear winner: big tech”, declared an Economist headline in April 2020.32 The indispensability of large multinational technology companies was both revealed and entrenched at the height of the pandemic, as virtual platforms for communication and market exchange became key to maintaining normalcy in the economy and in social spheres like education.
As mentioned earlier, various governments collaborated and negotiated with private actors on technological responses to mitigate the spread of the virus and on systems to govern society once lockdowns lifted. In China, despite popular accounts that the state has unrestricted access to data held by companies, reports suggest a more complex picture of state-private sector negotiation over access to data. Chinese authorities put considerable pressure on companies like Alibaba and Tencent to share their data infrastructure for the geolocation and other data required by the government’s flagship Health Code apps; it emerged that the data held by state-owned telecom companies did not compare with the GPS and other data held by platforms like Alipay and WeChat. The British Prime Minister famously invited senior representatives from four of the largest Silicon Valley technology companies in an emergency effort to tap into the resources of big tech.33 Microsoft’s and Amazon’s cloud platforms hosted a range of government data dashboards and technology tools, including those of the United States Centers for Disease Control and Prevention (CDC), which also used Microsoft’s customizable healthcare chatbots.34 Google pledged ad grants to the World Health Organization (WHO) to help share factual information on preventing the spread of the virus. Previously low profile, Google’s life sciences company Verily was suddenly in the news for carrying out large scale drive-through testing in the US.35 These instances underscore how big tech companies could leverage network effects, data linkages, and large amounts of available capital to expand their footprint across multiple social domains – from communication to finance to healthcare – while the pandemic swept across the world.
The Apple and Google partnership for Bluetooth-powered contact tracing, launched in April 2020, also holds critical insights about the nature of the power these platforms exert. The two companies made a contact-tracing toolkit available as part of their operating systems, which state-sanctioned health apps could then leverage. This offered a potential solution for the dozens of governments grappling with Bluetooth-related restrictions on smartphones that limited the efficacy of their apps. To use these features, however, governments’ apps would have to play by Google and Apple’s rules on how the apps were designed. This was a significant boon for individual privacy and security, because it mandated a decentralized architecture and therefore restricted data sharing with centralized government servers. Soon enough, tensions emerged as the governments of France and the UK, among others, bristled at the idea that Google and Apple would dictate how states designed their technological response to Covid-19.36 It was a reminder that the smartphone itself contains crucial social infrastructure controlled by a handful of companies globally. As Michael Veale notes, “It’s great for individual privacy, but the kind of infrastructural power it enables should give us sleepless nights.”37
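To see why this design choice mattered, consider a minimal sketch of how a decentralized exposure-notification scheme works. The sketch below is illustrative only: the function names are hypothetical, and the simple hash-based derivation stands in for the AES/HKDF-based cryptography of the actual Apple-Google protocol. The property to notice is that the matching happens on the phone itself, so no central server ever observes who met whom.

```python
import hashlib
import os

def daily_key() -> bytes:
    """Each phone generates a fresh random key every day; it never leaves
    the device unless the owner tests positive and consents to upload it."""
    return os.urandom(16)

def rolling_ids(key: bytes, intervals: int = 144) -> list:
    """Derive the short-lived identifiers broadcast over Bluetooth.
    (Illustrative hash derivation; the real protocol uses AES/HKDF.)"""
    return [hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Phone A broadcasts its rolling IDs; Phone B simply remembers what it heard.
key_a = daily_key()
heard_by_b = set(rolling_ids(key_a)[40:45])  # B was near A for a few intervals

# A tests positive and uploads only the daily key to the health authority.
published_keys = [key_a]

# B downloads the published keys and re-derives the IDs *locally*:
# the match happens on the device, so the server never sees the contact graph.
exposed = any(rid in heard_by_b
              for key in published_keys
              for rid in rolling_ids(key))
print("Exposure detected:", exposed)  # -> Exposure detected: True
```

A centralized design would instead have phones upload the identifiers they observed to a government server, which would perform the matching there – precisely the data flow that Google and Apple’s rules foreclosed.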
In fact, this focus on the infrastructural power of platforms has gained renewed prominence in policy circles over the last year, sometimes expressed in comparisons of these companies to public utilities. The infrastructural lens is an important tool for understanding the business logics that have created these forms of platform power, as well as the material infrastructure (data centers, submarine cables, smartphones, chipsets) that sustains it and inhibits competition. More broadly, the infrastructural lens helps clarify the impacts of being excluded from the use of these platforms, which has been a key concern with the shift to virtual learning during the pandemic.
The “infrastructural turn”38 in AI policy is well on its way too, although this is sometimes obscured by the lack of consensus around what counts as policy “about AI” versus broader data governance norms or industrial and competition regulation. AI policy should, in fact, be understood as an assemblage of these various policy trends that respond to and anticipate the ongoing shift toward a computing landscape defined by high-intensity computational tasks, typically involving large amounts of data. It is a landscape dominated by internet companies like Google, Facebook, Amazon, Microsoft, and Apple in the US and Alibaba and Tencent in China, which have leveraged their access to data, computational power, algorithmic expertise, and capital to build cutting-edge algorithmic tools that have, in turn, expanded the scale, reach, and monetization potential of these platforms. The US and Chinese economies have disproportionately benefited from the wealth generated by these companies, even though Global South countries like India and Brazil are, by sheer number of users, some of the largest markets for Silicon Valley companies.39 Dominance in the AI marketplace is also deeply intertwined with the development of cutting-edge military and cybersecurity technologies. As a result, a combination of economic and security anxieties is fueling a range of policy developments aimed more explicitly at promoting domestic enterprises and creating “national champions”. This rhetoric has been most evident in policy developments at the European Union level, as well as in several recent policy moves by the Indian government.40
The infrastructural turn in AI policy involves disaggregated and targeted legal and policy interventions aimed at democratizing, or at least diversifying, access to the inputs that sustain this new computing landscape: data, software, compute, and expertise. The Indian government has prominently made “access to data” for Indian companies and the state a key lever to enhance domestic competitiveness, although the broadly stated mandatory data access proposal in recent policy documents has raised more questions than answers about the legal and technical frameworks needed to facilitate such a regime. Data localization, the legal requirement to store data on servers within a country’s geographical territory, has been another site of heated policymaking, with the draft Personal Data Protection Bill of 2019 requiring that a copy of personal data be kept in India. A key official justification for data localization has been the need to bring foreign companies firmly within Indian jurisdiction; localization can then be understood as a foundational step toward a more aggressive data-access regulatory regime, and one that is likely to invite stiff opposition from a range of stakeholders. Access to computing resources, and diversifying the players providing cloud computing services, has been another key theme in recent policy documents in both the EU and India. India’s draft e-commerce policy specifically states the need to create domestic cloud computing companies and floats government subsidies for such companies as a potential route. Other efforts, like public research clouds and data trusts, are experiments in creating pooled computation and data resources that can reduce barriers to entry for small and medium-sized companies as well as research organizations. Finally, while access to data and computing has been most prominent, these policy documents also note the need to cultivate and fund research centers of excellence in order to retain talent and compete with Silicon Valley and Chinese R&D.
The contours of AI policy should not be limited to axes of accountability, discrimination, and privacy, but also expand its scope to recognize the data governance and competition policies that attempt to influence the global political economy of AI.
Rather than being dismissed as digital protectionism, these developments should be taken seriously for their explicit acknowledgement of data governance as a form of industrial policy. That is not to say that the fundamental rights rationale for enacting data protection and surveillance regulation is facetious, but rather that there are additional and intersecting geopolitical and geoeconomic drivers behind all these forms of data governance policy, which the AI policy community needs to understand and engage with. In other words, the contours of AI policy should not be limited to the axes of accountability, discrimination, and privacy, but should expand to recognize the data governance and competition policies that attempt to influence the global political economy of AI.
Notes
- 1 Ingram, D. and Bhojwani, J. (2020) Covid apps went through the hype cycle. Now, they might be ready to work, NBC News, 6 October. https://www.nbcnews.com/tech/tech-news/covid-apps-went-through-hype-cycle-now-they-might-be-n1242249.
- 2 Plantin, JC and Punathambekar, A. (2019) “Digital Media Infrastructures: Pipes, Platforms, and Politics.” Media, Culture & Society 41, no. 2: 163–74. https://doi.org/10.1177/0163443718818376.
- 3 SWAN (2020) ‘No data, no problem: Centre in denial about migrant worker deaths and distress’, The Wire, 16 September. https://thewire.in/rights/migrant-workers-no-data-centre-covid-19-lockdown-deaths-distress-swan.
- 4 See Aula, V. (2020) ‘The public debate around COVID-19 demonstrates our ongoing and misplaced trust in numbers’, LSE Blogs, 15 May. https://blogs.lse.ac.uk/impactofsocialsciences/2020/05/15/the-public-debate-around-covid-19-demonstrates-our-ongoing-and-misplaced-trust-in-numbers/; Milan, S. (2020) ‘Techno-solutionism and the standard human in the making of the Covid-19 pandemic’, Big Data & Society, 20 October. https://journals.sagepub.com/doi/full/10.1177/2053951720966781.
- 5 Richardson, R. (2020) ‘Government Data Practices as Necropolitics and Racial Arithmetic’, Global Data Justice, 8 October. https://globaldatajustice.org/covid-19/necropolitics-racial-arithmetic.
- 6 Bowe, E., Simmons, E. and Mattern, S. (2020) ‘Learning from lines: Critical Covid data visualizations and the quarantine quotidian’, Big Data & Society, 27 July. https://journals.sagepub.com/doi/full/10.1177/2053951720939236.
- 7 The Covid Tracking Project, The COVID Racial Data Tracker (2020). https://covidtracking.com/race/.
- 8 Beg, B., Sonavane, N. and Bokil, A. (2020) ‘Arbitrary & disproportionate criminalisation of marginalised communities: A countermap of pandemic policing in India’, Oxford Human Rights Hub, 13 October. https://ohrh.law.ox.ac.uk/arbitrary-disproportionate-criminalisation-of-marginalised-communities-a-countermap-of-pandemic-policing-in-india/.
- 9 Krafft, P.M. et al. (2019) ‘Defining AI in Policy versus Practice’, December. https://arxiv.org/abs/1912.11095.
- 10 Campolo, A., Crawford, K. (2020) ‘Enchanted determinism: Power without responsibility in artificial intelligence’, Engaging Science, Technology and Society, Volume 6: 1-19. https://estsjournal.org/index.php/ests/article/view/277.
- 11 Leufer, D. (2020) ‘Why we need to bust some myths about AI’, Patterns, 9 October. https://www.sciencedirect.com/science/article/pii/S2666389920301653.
- 12 Bogost, I. (2017) ‘‘Artificial intelligence’ has become meaningless’, The Atlantic, 4 March. https://www.theatlantic.com/technology/archive/2017/03/what-is-artificial-intelligence/518547/.
- 13 Knight, W. (2019) ‘About 40% of Europe’s “AI companies” don’t use any AI at all’, MIT Technology Review, 5 March. https://www.technologyreview.com/2019/03/05/65990/about-40-of-europes-ai-companies-dont-actually-use-any-ai-at-all/.
- 14 Suchman, L. (2007) Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press. https://books.google.co.in/books/about/Human_Machine_Reconfigurations.html?id=VwKMDV-Gv1MC&redir_esc=y.
- 15 Dobbe, R. Gilbert, T.K., Mintz, Y. (2019) ‘Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments’. https://arxiv.org/abs/1911.09005.
- 16 Algorithmic Accountability Act 2019 https://www.congress.gov/bill/116th-congress/house-bill/2231; Government of Canada Directive on Automated Decision Making Systems https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592; California Automated Decision Systems Accountability Act of 2020, AB-2269 (2020), http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB2269; A Local Law in relation to automated decision systems used by agencies, Local Law 49 of 2018, https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0.
- 17 Green, B. and Chen, Y. (2019) ‘The Principles and Limits of Algorithm-in-the-Loop Decision Making’, Proceedings of the ACM on Human-Computer Interaction. https://dl.acm.org/doi/10.1145/3359152.
- 18 Epstein, G. (2019) ‘How ‘ghost work’ in Silicon Valley pressures the workforce, with Mary Gray’, TechCrunch, 18 August. https://techcrunch.com/2019/08/16/how-ghost-work-in-silicon-valley-pressures-the-workforce-with-mary-gray/; Roberts, S. (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
- 19 Global Pandemic App Watch (GPAW), https://craiedl.ca/gpaw/. (Last accessed November 2020).
- 20 Jee, C. (2020) ‘Is a successful contact tracing app possible? These countries think so’, MIT Technology Review, 10 August. https://www.technologyreview.com/2020/08/10/1006174/covid-contract-tracing-app-germany-ireland-success/.
- 21 No Author (2020) ‘Why contact-tracing apps haven’t lived up to expectations’, The Next Web, 23 October. https://thenextweb.com/syndication/2020/10/24/why-contact-tracing-apps-havent-lived-up-to-expectations/.
- 22 No Author (2020) ‘Aarogya Setu controversy: Centre clarifies, but identity of developer still unclear’, MoneyControl, 29 October. https://www.moneycontrol.com/news/technology/aarogya-setu-controversy-centre-issues-statement-on-covid-19-tracker-app-6033061.html.
- 23 Ingram, D. and Bhojwani, J. (2020) ‘Covid apps went through the hype cycle. Now, they might be ready to work’, NBC News, 6 October. https://www.nbcnews.com/tech/tech-news/covid-apps-went-through-hype-cycle-now-they-might-be-n1242249.
- 24 Joshi, D. and Kak, A. (2020) ‘India’s digital response to Covid risks inefficiency, exclusion, and discrimination’, The Caravan, 19 April. https://caravanmagazine.in/health/india-digitial-response-covid-19-risks-inefficacy-exclusion-discrimination.
- 25 O’Neil, C. (2020) ‘The Covid-19 tracking app won’t work’, Bloomberg, 16 April. https://www.bloombergquint.com/gadfly/the-covid-19-tracking-app-won-t-work.
- 26 Roepe, L.R. (2018) ‘How AI can help fight poverty’, Dell Technologies, 14 November. https://www.delltechnologies.com/en-us/perspectives/how-ai-can-help-fight-poverty/; Bennington-Castro, J. (2017) ‘AI is a game-changer in the fight against hunger and poverty. Here’s why’, NBC News, 22 June. https://www.nbcnews.com/mach/tech/ai-game-changer-fight-against-hunger-poverty-here-s-why-ncna774696.
- 27 No Author (2019), ‘Is fintech the latest weapon in the fight against poverty?’, DW. https://www.dw.com/en/is-fintech-the-latest-weapon-in-the-fight-against-poverty/a-45920457.
- 28 Fick, M. and Mohammed, O. (2018) ‘Kenya moves to regulate fintech-fuelled lending craze’, Reuters, 25 May. https://www.reuters.com/places/africa/article/us-kenya-fintech-insight/kenya-moves-to-regulate-fintech-fuelled-lending-craze-idUSKCN1IQ1IP.
- 29 Lord, J., in conversation with Kate Crawford (AI Now Institute). https://ainowinstitute.org/symposia/videos/automating-judgement.html.
- 30 Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. Video event, Gerald R. Ford School of Public Policy, University of Michigan. https://fordschool.umich.edu/events/2018/automating-inequality-how-high-tech-tools-profile-police-and-punish-poor.
- 31 Rogers, F. and Moran, T. (2020) ‘Atlantic Plaza Towers tenants won a halt to facial recognition in their building’, AI Now Institute, 9 January. https://medium.com/@AINowInstitute/atlantic-plaza-towers-tenants-won-a-halt-to-facial-recognition-in-their-building-now-theyre-274289a6d8eb.
- 32 The Economist (2020) ‘Big tech’s covid-19 opportunity’, 4 April. https://www.economist.com/leaders/2020/04/04/big-techs-covid-19-opportunity.
- 33 Kind, C. (2020) ‘What will the first pandemic of the algorithmic age mean for data governance’, Ada Lovelace Institute. https://www.adalovelaceinstitute.org/what-will-the-first-pandemic-of-the-algorithmic-age-mean-for-data-governance/.
- 34 Roberts, E.Y. (2020) ‘Microsoft Azure powers CDC Covid-19 assessment bot’, Technology Record. https://www.technologyrecord.com/Article/microsoft-azure-powers-cdc-covid-19-assessment-bot-103464.
- 35 Singer, N. (2020) ‘Big tech zeros in on the virus-testing market’, New York Times, 18 June. https://www.nytimes.com/2020/06/18/technology/big-tech-google-verily-virus-testing.html.
- 36 Kelion, L. (2020) ‘Coronavirus: Apple and France in stand-off over contact-tracing app’, BBC, 21 April. https://www.bbc.com/news/technology-52366129; No author (2020) ‘Why Britain is ignoring the Google-Apple protocol for its tracing app’, The Economist, 9 May. https://www.economist.com/britain/2020/05/09/why-britain-is-ignoring-the-google-apple-protocol-for-its-tracing-app.
- 37 Veale, M (2020) ‘Privacy is not the problem with the Apple-Google contact-tracing app’, The Guardian, 1 July. https://www.theguardian.com/commentisfree/2020/jul/01/apple-google-contact-tracing-app-tech-giant-digital-rights.
- 38 Plantin, JC and Punathambekar, A. (2019) “Digital Media Infrastructures: Pipes, Platforms, and Politics.” Media, Culture & Society 41, no. 2: 163–74. https://doi.org/10.1177/0163443718818376.
- 39 UNCTAD, Digital Economy Report 2019. https://unctad.org/webflyer/digital-economy-report-2019.
- 40 Kak, A. (2020) ‘“The Global South is everywhere, but also always somewhere”: National Policy Narratives and AI Justice’, in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20): 307-312. https://dl.acm.org/doi/10.1145/3375627.3375859.
Amba is the Director of Global Policy & Programs at New York University’s AI Now Institute where she develops and leads the institute’s global policy engagement, programs, and partnerships, and is a fellow at the NYU School of Law. She is also on the Strategy Advisory Committee of the Mozilla Foundation. Trained as a lawyer, Amba graduated from NUJS, and then read for the BCL and an MSc at the University of Oxford on the Rhodes Scholarship.