Q. One of the central concepts of your work is that of data colonialism. Could you elaborate on this and connect it to recent developments in the post-pandemic platform economy, particularly the activities of Big Tech?
Nick: The fundamental point we refer to when talking about data colonialism is the idea of this special moment in history where, instead of land or what comes from the land being the object of colonial extraction, there is now something even bigger. In practice, it’s almost everything that makes up human life. The flow, the stuff of everyday life, is now available for extraction for profit in the form of data, and that is the core idea behind talking about this as a colonial land grab, or data colonialism. As for how it has been changed by the pandemic, it has intensified, because most of us have been living much of our lives across digital platforms, many of which – including Zoom, which we are using right now – survive by extracting data or metadata from the flow of what we’re doing.
Ulises: Following on from what Nick said, our official definition of data colonialism is that it is a new and emerging social order, characterized by the continuous extraction of data from our lives and from the domains surrounding our lives. When we start to talk about data colonialism, the concern also arises about whether we’re making a one-to-one comparison with historical colonialism. We are not, in fact. There are of course many differences. Historical colonialism had different contexts and modalities. It had a very different scale. Depending on what part of the world we’re talking about, it looked very different. But what we’re arguing is that historical colonialism and data colonialism share one very important similarity, and that is their function. The function of both types of colonialism is to extract resources. That’s why we’re focusing on that similarity, to try to trace the continuity between the old order and the new order that we call data colonialism.
Q. Is there, in your view, an older geographic dimension to data colonialism, one that divides up the world into colonizers and colonies? Is the nation-state as relevant to the colonial dynamic now, given that, in your view, it is centered on tech companies that may be indifferent to more conventional geopolitical questions?
Ulises: Well, I think it is useful to try to understand why we are, in fact, calling it colonialism, and whether it deserves that name. Part of our answer is that, in tracing the continuities we’re talking about, we also find continuities in the rationale, or the justification, used to present it. I’ll tell you about the role that the concept of ‘cheap’ plays in defining these continuities. So, for instance, to colonize the world, nature had to be rendered ‘cheap’ by the colonizer globally. In the geopolitical dimension you were just mentioning, nature was just there for the taking. It was not occupied; or, if it was occupied, it was inhabited by humans who were not considered “full human beings” (i.e. indigenous populations). So, nature was just cheap. From there we move on to cheap labor: the idea that the human labor required to transform nature into wealth was also cheap. In colonialism, this was mostly defined along racial lines.
Again, in this geopolitical vision of the world, certain parts of the world where certain kinds of human beings lived were supposed to provide cheap labor. And now we’ve gone from cheap nature, to cheap labor, to cheap data. Cheap data carries many of the same rationales. It’s supposed to be cheap and abundant, seen as an exhaust that we create. We might individually own it, but transforming it in its aggregate form requires very advanced technological tools that only certain nations and certain companies in certain geographical locations have access to. They have access to the technology to turn this resource into something useful, and it is said, continuously, that this is for our benefit – that this is what progress for all of humanity looks like. So, these geopolitical divisions very much continue to define this new order.
Nick: As Ulises is saying, we do think there are fundamental advantages to viewing all the transformations involving data – whether they’re happening in the Global South or indeed the Global North – through this colonial framework. There are two types of advantages: advantages of scope and scale, and advantages of depth. In terms of scope, this means changing the time frame in which we contextualize what’s going on with data today. Almost all debates contextualize these developments within the past few decades: the 30 years since the invention of the World Wide Web, or even just the past 15 years in which digital platforms have made this sort of data extraction possible. But we want to look at the act of extraction in terms of the past 500 years, maybe even a little more – going back to Spain and Portugal’s capture of what became the Americas for their own extraction and benefit, which was the start of the great period of empires continued by England and Holland. Taking this perspective changes our view of what the long-term continuity is.
But there’s also a matter of changing our sense of scope in terms of the sheer variety of places where this data colonial land grab is going on. It isn’t just a matter of social media, though social media are really important. It’s a matter of the growth of surveillance in almost every type of low-paid, low-status work right across the world, just as much in the Global North as in the Global South today. Profound changes in the conditions of work for laboring bodies are under way, made possible by the continuous presence of devices extracting data and tracking objects so that our packages arrive on time. It also means the huge growth of data captured within corporations. And that too is changing, because as our devices become smart devices, they are very often capturing the details of our lives and claiming them as something suddenly internal to the corporation that built the device – a corporation that claims an absolute right to track whatever we do with that device. So, this question of scope is really important. It’s a much bigger and more fundamental transformation.
We also think the colonial framework helps us understand the depth of what’s going on. Here, we want to emphasize that, just as with historical colonialism, data colonialism brings about a fundamentally new type of social order – a much deeper way of integrating power into the economy and society for the single goal of extracting more and more data to enhance profit within capitalism. And that leads us to questions about freedom. Data colonialism is a fundamental change in the conditions under which we can possibly be free, just as historical colonialism, of course, represented similar changes.
And then the final benefit of the colonial frame – one I really don’t think you can reach if you see everything that’s going on with data only through the lens of recent capitalism – is that it helps us see the deeper drive behind the goal of Palantir and Facebook, and thousands of other data companies in China, in India, in Africa and so on, to track us. That drive continues the original long-term project of the West: to claim a prior, authorized, privileged status for its knowledge on the basis of its supposedly superior rationality. This way of capturing the world through a claim of superior rationality is what the Peruvian sociologist Aníbal Quijano called not just colonialism but coloniality – the deeper logic that rationalizes colonialism, that makes it seem natural. You can’t explain that within the framework of capitalism; it’s only within the framework of 500 or 600 years of colonialism that you can understand this deep continuity, which is really at the core of the new data regime: the idea that it’s better to track us and capture the resulting knowledge than to leave us free to live our lives without corporate interference. This is a profoundly colonial way of thinking about the world.
Q. Zeroing in on social media, there has been a lot of debate recently regarding the disruptive impact of technology in this domain – misinformation, hate speech, political echo chambers, and all-round damage to our public sphere. Yet less attention is paid to something you cover – the underlying political economy and business model of social media, and how they propagate these outcomes. Could you speak to that?
Nick: There is an excellent book by Yochai Benkler and colleagues at the Berkman Klein Center for Internet and Society, with which I’m associated, called Network Propaganda, which in effect argues that Facebook cannot be responsible for the collapse of American democratic politics. There are many other factors, such as right-wing media, that lead to this; and actually, Facebook tends not to reach the people who are most polarized in America, who tend to be poor and without smartphones, and so on. But I think the book misses something when, in effect, it suggests that Facebook is a beneficial technology that just happens to have bad side effects. The affordances of Facebook are not entirely neutral. Its very business model, from very early on, was designed to extract value from social life. And, from our perspective, that itself needs to be challenged. It’s an attempt to treat social life as if it were merely an object for extraction.
This is what we mean when we talk about data relations, with reference to Marx and his theory of how capitalism is socially reproduced. For Marx, capitalism reproduced itself through labor relations: selling one’s labor on the market for wages. But we argue in our book that we need to be a little more creative with Marx. We expand on his idea that social relations reproduce capitalism and suggest that we have a new type of social relation today – data relations: literally redesigning the flux of social life so that profit can be extracted from it directly, not through the additional mechanism of selling your labor, but from just breathing, sending a picture to a friend, smiling at a camera with someone you love. Just being a human being becomes something from which profit can be extracted, because it happens across platforms configured exactly to capture it so that value can be extracted. If you look at Facebook’s very early patents, you can see how it literally had to create a shadow world in which everything we do online is configured to meet certain very basic criteria, thereby becoming countable in certain regular ways, and in which the way it’s counted relates to the people we’re all countably connected to. This is a redesign of social life for profit, which, from the perspective of our argument, may or may not be polarizing. I would actually argue it is polarizing, but in any case it’s certainly extractive, and it offends certain fundamental values of what social life should be. Human life should be for humans, not for corporations, and that transformation – the taking of human life for corporate value – is a colonial shift, and we have witnessed it in the past two decades.
Ulises: It’s easy to understand what’s going on by looking at social media and the various scandals we have become accustomed to seeing in the news. But social media is just one part of the equation. If you focus exclusively on social media, you lose sight of what’s happening in other domains – logistics, internal corporate data, artificial intelligence, and algorithms. In our book, we came up with the concept of the social quantification sector. We didn’t want to talk only about Big Tech companies, or only about Silicon Valley, or about what’s happening in China. We wanted a broader term to encapsulate what’s happening globally across all of these different domains, and that’s why we came up with the social quantification sector. If data colonialism is about the extraction of data, then there’s a whole industrial sector devoted to that extraction and to the transformation of data from our social life into profit. That sector includes, yes, the Big Tech companies like Google, Amazon, and Facebook, as well as their counterparts in China like Tencent and Alibaba. But it also includes smaller players – companies doing research and development in phone apps, data brokers, and hardware and software manufacturers. We wanted to capture all of them in this notion of the ‘social quantification sector’, and use it to explain the role they play in the economy.
Q. What are these new types of media that are also, perhaps, defining our social relations at this point in time? And is there anything beyond what we consider social media that we should be looking at?
Ulises: As I said, it’s very useful to understand what’s going on by looking at developments in social media. In fact, the whole argument about surveillance capitalism that Shoshana Zuboff presents in some ways narrows the analysis to that domain (although we agree with a lot of what she says). On that view, maybe in order to fix things we can make those companies better; if we can identify the bad apples, the rogue Facebooks, then we can make the system more tolerable – and it can go on being extractive.
But our argument extends beyond that. Surveillance capitalism doesn’t always explain what’s happening in other areas like artificial intelligence. If you look at the use of machine learning to make policy decisions or marketing decisions, for instance, it might not be directly linked to what’s happening with social media. That’s why we try to expand the discussion beyond social media, because otherwise we lose track of, for instance, what’s happening in precision agriculture (the use of data to revolutionize agriculture by introducing logistics and robotics). What does social media have to do with agriculture? Perhaps very little, except that precision agriculture might be connected to understanding consumer demands. If people want more blueberries, because that’s what they’re saying on social media, then perhaps there’s a way in which that translates into this new way of thinking about agriculture – datafying it and introducing analytics, logistics, and production cycles. So the connection between technology and social relations is much more complex than simply what is happening with social media.
Nick: To stay with agriculture, obviously there may well be benefits to using certain types of data for certain types of agricultural work – to deal with climate change issues, for example. But from what I’ve read, precision agriculture completely changes the relationship of the farmer to the land, because all that accumulated knowledge was often stored in communities, not just in a single individual. When digitalization occurs, all that knowledge – of the land, how the land evolves, what works, and what doesn’t – becomes delegitimated. The only thing that matters is the continuous gathering of data from fields, at an extraordinarily high level of precision, using machine learning to extract patterns, and so on. This data is then fed into the machines used on the land, and very often the farmer has very little ability to add inputs. It has a big impact on the expertise and agency of the farmer. The same is happening in teaching and education, where a lot of teaching, particularly in the richer Global North, now happens on digital platforms that operate by continuously tracking students, monitoring their tasks, and storing records of how they performed for the future. Again, this shift completely changes the relationship between teacher and student, undermining older forms of teachers’ expertise. And in health too. The Wall Street Journal has said that health data is now an open frontier – obviously, for extraction. This is clearly, even though the Wall Street Journal doesn’t put it this way, a colonial metaphor. It’s an open frontier for Big Tech to come in and take this wonderfully continuous stream of health data – possibly for wider benefits, it can be argued, but clearly for profit, first and foremost. So, when you sum all this up – the production of our food to sustain us, the production of citizens through the education of our youngest children, the maintenance of our bodies – these are increasingly becoming extractive zones.
While this is not to say that useful data can’t sometimes be created through social planning, the idea that these are now extraction zones really should give us pause about the direction we are heading in.
Q. You have yourselves talked about how, as our societies and interactions come to be undergirded by an automated, algorithmic, and data-centric technical infrastructure, this has a profound influence on our behavior as subjects, as well as on the horizon of our collective norms, freedoms, and possibilities. Can you talk us through this?
Nick: Very often, people’s reaction to these developments is to proclaim only the benefits. It’s very convenient to have personalized ads rather than completely irrelevant ones, which are irritating. It’s maybe very convenient to have a health app that knows all about our body and therefore makes targeted recommendations, saving us the time of trying to understand what we need. Convenience is often at the core of what marketers offer when they say they want to track us. In other words, you can’t have personalized benefits unless you have personalized surveillance, and personalized surveillance therefore comes to be seen as a good thing.
But we want to challenge that head on. If you listen carefully to what marketers say, and to what they dream of doing in the future if they could track us even more intensely – possibly with devices embedded under our skin to know our bodily state as they compete to sell us goods – it’s clear that we’re getting very close to a borderline beyond which freedom is not even possible. We want to call that out, because the idea that human life should be good is inconsistent with each of us being continuously tracked at all times by an external power. And as we know, there’s no such thing as a fully accountable power. The idea that such extractive power could be good is very, very problematic. We argue in our book that it challenges our freedom in the most fundamental sense, and here we need to think about what our concept of freedom is. If we rely just on the concept of market freedom, then it’s hard to argue that we’re not being given more choice. But if you look at it through a deeper sense of freedom – the freedom to have some ability to control the direction of one’s life on one’s own terms, without being surveilled, without being interfered with – then it’s hard to see how our choices aren’t increasingly being interfered with. Indeed, much of marketing theory depends on trying to do precisely that.
The idea that underlies really any concept of freedom (and there are many concepts of freedom in different cultures) is what we call the space of the self. This is a space such that, if someone interferes with it – say, by watching you perform what your culture regards as a natural, private function – they violate the bounded space of the self, which must be protected. Otherwise, we have nothing. We believe we’re going down that threatening path through continuous tracking. That’s perhaps the deepest reason why we need to challenge this historical direction.
Ulises: This also brings up an important feature of data colonialism. Original, historical colonialism was an incredibly brutal and oppressive system. The life expectancy for an adult or a child working in one of the mines the Spanish set up to extract gold and silver in what is now Mexico was four years.
Colonialism was brutal, and it exterminated life. When we talk about data colonialism, people ask us: well, where is the violence? Our response is that the violence continues to be part of the system, in both physical and symbolic ways. Physically, someone who is denied health care because of an algorithm suffers very real consequences to their life and welfare. But the violence is also symbolic in that, as Nick mentioned, the loss of this space of the self is simply being accepted and normalized, with some going as far as to call it “progress” that is good for humanity.
Data colonialism is, in that way, much more consensual. It calls for active participation, and we give it willingly. It’s not as if we’re being forced or brutalized into accepting this system. In fact, many of us willingly accept the terms and conditions of this order when we click the little box that says “I give Facebook or Google permission to do all of these things with my data.” Of course, we never read those terms of service completely (and in the book we analyze how this might be compared to earlier forms of colonial oppression). But the fact of the matter is that we are willing, consensual participants in this new system. In that way, the violence is much more diffuse; it doesn’t need to be as direct, because there is no colonizer exercising brutal force or deception. After 200 years of capitalism, we are used to this new social contract of clicking the little box that says: I give permission; I surrender this space of myself so that corporations can carry out all of this extraction.
Q. Is there a productive distinction to be made in the way these effects are playing out in different parts of the world? Are there, for instance, unique dimensions to how they are playing out in the Global South, or are they having the same sort of detrimental effect across the globe?
Ulises: Even though data colonialism is a global order, and most of us – no matter where we are – participate in it in some form or another, its effects are not evenly distributed, to paraphrase William Gibson. This becomes very clear when we look at what’s happening not only in the Global South, but among groups that have historically been at risk. Nick mentioned earlier the phenomenon of coloniality, which concerns the legacy of colonialism. When we look at that legacy, we know that certain groups, mostly defined by race, class, and gender, paid a heavier price during historical colonialism, and continue to do so under data colonialism. The effects of racist algorithms, of algorithms that discriminate along class lines, and even the environmental effects (because this new order requires a lot of energy, and energy, as we know, can have a destructive effect on our environment) tend to impact some populations more than others. The implications of this new order are being felt in a very unequal way, and the populations bearing the brunt – people of color, women, poor people – are the same populations who have felt the impacts of colonialism in more pronounced ways for centuries.
Q. Looking now at the other side of the equation, what kinds of policy responses have occurred lately with respect to some of the problems you have mentioned? How would you assess or critique these efforts?
Nick: Well, it’s fair to emphasize that the very idea of thinking about what’s happening with data and computing in a colonial frame is only ten years old. Paul Dourish and Scott Mainwaring talked about ubiquitous computing as a colonial impulse in 2012. Our work on data colonialism was first published in 2018, although the geographers Thatcher et al. introduced a metaphorical use of the term in 2016 – so it’s relatively new. Only in the past few years have we started to have a full debate about these issues, and already there are some signs of legislative reaction. The first, perhaps, is the GDPR (General Data Protection Regulation) in the European Union, which is important in how it challenges the prior right of corporations simply to extract data as their privilege, and insists that there are human rights issues generated by the use of private and personal data. There are problems with it, though, because what the EU has introduced is a consent regime, and it’s not clear you can have meaningful consent in a lot of circumstances, particularly where employer–employee relationships are involved. The GDPR doesn’t necessarily apply to all sectors, and arguably it has just made it easier for the very big corporations like Google and Facebook to take the legal costs below the line and go on doing what they’ve been doing. The EU is planning new legislation, but there are signs that it will probably be more closely aligned with corporate goals, as is the more relaxed American regulatory approach. In America, apart from what particular states and cities have been doing – like San Francisco banning facial recognition technology – there have been no successes at the federal level in limiting what platforms can do, and we’re waiting to see what the Federal Trade Commission will do in relation to Facebook and Google. In other countries, India has mounted some challenges to global platforms. Britain too.
But again, it’s very much an evolving picture, and what we haven’t seen from any government anywhere is a fundamental challenge to the logic of what platforms are doing – constructing a social life for profit extraction.
Almost always, governments accept the basic idea that we need platforms. Platforms are assumed to be essential for the new economy and society, and the task is simply to manage the negative side effects – an approach that, from our point of view, does not go to the core of the extraction that’s going on, and therefore misses the bigger issue. This is, of course, at the root of fundamental inequalities, including on a global scale, in terms of which countries have the ability to challenge Big Tech and which do not. Just to emphasize: if your country has a small economy, or does not have sufficient server capacity within its own borders, or does not have its own platforms, its ability to challenge what Big Tech from the U.S. or China is doing is close to zero. So I’m not optimistic about the prospects for legislation here. But we still have to see whether a challenge could develop from parts of the Global North, or hopefully from countries like Brazil, if it gets a new government, and perhaps India under new circumstances. Let’s hope we can see the beginnings of a challenge from the Global South, but right now it’s still very unclear.
Q. What frames and outcomes should inform a new governance vision and action plan, in the immediate future and in the next decade, that builds on both the victories and the shortcomings at this juncture?
Nick: We need to move away from a consent model, because in a profoundly unequal space of extraction, relying on lone individuals giving or withholding consent is a very limited approach – particularly for workers who rely on gig apps for their income and so have no chance to negotiate the terms and conditions. Consent becomes meaningless. We need to move from the idea of individual consent to something closer to social consent, by which I mean challenging the basic framing of all this as a benefit that business offers to humanity – tracking for personalized benefits, for societal modulation of targets, and so on.
It’s vital to see this fundamentally as a form of extraction of information flows from human life. Only then will the default shift from “Yes, that’s fine” to “No, that’s not fine” – unless there is a clear social benefit that is genuinely, socially negotiated by real communities, who are given all the information about what they’re choosing, the power to monitor how it operates, and the ability to say no if they don’t like how it operates.
We’re not in any sense against data collection if it’s grounded in real communities making real decisions based on consent, and continually monitoring that in accordance with their values. But as soon as we make that explicit, it’s obvious that this is a world away from how data is normally extracted in today’s corporate status quo.
Ulises: Yes, I think what Nick is describing begins to outline what a larger project of decolonizing data might look like. In articulating a new vision for governance, we try to learn from the past and see how previous struggles for decolonization can inform our present ones. That is not to say we can co-opt those struggles as our own – people have been resisting colonialism for centuries, and their struggles are unique to their contexts. But I think we can learn from them in imagining how to go about decolonizing data. More as an exercise in imagination, for instance, I’ve argued that perhaps nations should nationalize their data. There are already proposals arguing that individuals should be paid for their data, but that strikes me as a very neoliberal solution; I would prefer to see states claiming resources for the primary benefit of their citizens. Of course, nationalization has historically had its problems. Nonetheless, it behooves us to start thinking about what that might look like. Similarly, during the Cold War, with the emergence of the Non-Aligned Movement, many nations in the Global South decided they didn’t like the solutions presented by either capitalism or communism and realized they needed to find a third way – a movement of non-aligned nations. Likewise, I think today we need a non-aligned technologies movement, which some colleagues and members of the network are involved in developing.
Q. Apart from at the policy level, what are some other interesting counter-currents and alternative models that have emerged with respect to social media? What kind of potential do they hold, in your estimation?
Ulises: What’s exciting about the emerging movement to decolonize data is that it’s not just a state approach or international relations policy. Decolonizing data is very much a cultural, educational, and technological movement. That’s where we’re seeing lots of exciting things happening. We’re seeing that the most valuable tool in this struggle is imagination because resisting colonialism has always been an exercise in creativity. Even when colonialism couldn’t be resisted with the body or with artifacts, it could always be resisted with the mind. There are lots of examples of communities resisting data colonialism, reimagining technological space by using alternative technologies, alternative modes of connecting and communicating. We see a lot of potential in these movements, which are still very new and emerging.
Nick: You mentioned federated social media – the idea of radically rethinking the internet’s fundamental technological infrastructure so that it is based on distributed platforms, relying on servers closer to local communities and operating by rules negotiated by those communities. There’s certainly a lot of potential in that, because it goes to the fundamental question of power over this system of extraction.
That movement is in its very early phases and remains largely a matter for experts and people with high levels of technical knowledge; it has not been popularized yet, and that needs to happen. It is different from the opportunity to disconnect offered by the platforms themselves. Apple, for instance, has built into the iPhone a feature that encourages you to disconnect for a little while. The gesture suggests you can radically challenge the system by pressing a button; in other words, a revolution offered by the system itself. Clearly, this is never going to work in the long run, even if it relieves some psychic distress. We need collective, large-scale solutions rather than individualistic ones, because the individualism driven by neoliberalism is one of the core reasons we got into this problem in the first place.
Q. What do you believe is the urgent course of action, particularly for civil society groups to pursue? Secondly, what are some of the key fault lines and issues that we should be looking out for in the near- to mid-term future?
Ulises: There are many things we can do immediately and must do urgently to begin resisting this new order. To focus on just one: it would be great if society could put pressure on corporations and governments to ensure that artificial intelligence algorithms stop being racist and sexist. That’s an elemental thing that shouldn’t be so difficult to do. Part of the problem is that we have mostly approached it, as Laurence Diver says, in an ex post manner: we discover after the fact that an algorithm is racist, and then we try to do something about it. In this regard, civil society can help by starting to think in an ex ante way, anticipating problems so they can be addressed before they arise. Education is going to be crucial to that exercise, because of the way we train the people who build these tools – programmers, human-computer interaction designers, and database designers. Right now, we’re teaching them to approach these problems in a very instrumental, utilitarian way. They need to become more critical thinkers. In other words, instead of making sure people in the humanities learn programming, we must turn this around and make sure that the people who do the programming are also exposed to the humanities, the social sciences, and the arts as a way to encourage critical thinking. I’m talking about decolonizing the computer science curriculum. That’s something civil society can accomplish, and it can help us stop glorifying Big Tech and its mission.
Nick: At its core, both Ulises and I see our work as an attempt to give us imaginative tools. This is the point of using the colonial framework – not as a metaphor, but as a way of turning the historical lens onto the nature of our current reality: particularly in the Global South, yes, but in fact increasingly everywhere. It’s a matter of allowing ourselves the resources to imagine together a different future from the one that’s being planned for us. I was very struck by a phrase from Satya Nadella, the chief executive of Microsoft Corporation, who said that we must ‘retrofit for the future’. Great phrase, isn’t it? It seems to suggest: “We at Microsoft already know the future! We’ve got it covered. Just get ready for it and do what we say! Retrofit for that future you have had no ability to influence or choose, and it will all be fine.”
The violence of that notion of retrofitting for the future is that, in a single phrase, it cancels out any notion of democracy and captures it within the walls of the boardroom. This means we need a much more powerful collective imagination. We must be able to say, “No, that cannot be the future we are building for ourselves. It is not acceptable.” But we clearly can’t do that as individuals; none of us individually has the power to speak up against a USD 2 trillion company. What we have to do is help each other build solidarity, support acts of resistance, and explore the commonalities between our own experiences and the exploitation of, say, delivery workers, whose living conditions we may know little about or have little in common with, depending on how wealthy we are. We need to build new forms of solidarity – across class, race, and gender – to understand the lateral violence being done to all forms and conditions of life and freedom. This situation cannot be the basis of a fair and just world; it has to be challenged. And we need to build international forms of solidarity, not just between states but among civil society, workers’ organizations, and others.