The hidden power in the stories Big Tech sells us

Daniel Stanley
10 min read · May 27, 2021


So ubiquitous have the Big Tech platforms become in our lives that most of us now experience them in at least two distinct ways. First, and most immediately, through the apps and software we use for our online lives — never a greater proportion of our experience than in the last year. Second, the companies behind this software have become an ever-growing presence in the news headlines, for their indiscretions, their controversies and their attempts to justify both.

Across both these domains, the companies aspire to present themselves as the cutting edge: the newest incarnation of their type, representing clean, beneficial progress towards a better future. But in reality, these companies and the ideas they propagate have deep historical roots — not only in the publicly promoted origin stories of founders in garages, but in the cultural narratives and ideologies that motivate their attempts at dominance, and in the public narratives they deploy to justify those attempts.

In both cases, it is clearly in the tech companies’ interests that we take these deeper narratives for granted. But it does not take much digging to uncover their troubling aspects — and to see them manifest in how these companies seek to shape our futures. As we deliberate how to respond to the damage that the growth of these companies has caused (and continues to cause), it will serve us well to better understand what drives them, and how they have managed to escape proper scrutiny and accountability for so long.

There’s Silicon in those Hills

Silicon Valley itself, while clearly aspiring to a Hollywood-like transcendence of geographic realities, is very much a real place, and as such its mindset and values can only be fully understood with reference to the actual physical history of that place.

Dr Jeannette Estruth, Assistant Professor of History at Bard College and a Faculty Associate at the Berkman Klein Center for Internet & Society, explores this history in her work, which focuses on the connections between the real history of Silicon Valley and the companies and ideology that the name refers to today.

John Gast’s famous 1872 painting ‘American Progress’, with its central figure representing progress and technology, bringing settlers in her wake and chasing away native peoples, is representative of the ‘Gold Rush’ mythology that is central to the Silicon Valley story.

It was with the Gold Rush of the 19th century that the Valley truly began its journey to what it is today. One core lesson of that period — that the real money lies not in seeking the gold, but in equipping those who seek it with the tools they need — has become a key part of how modern Silicon Valley companies see themselves. As equippers of the modern Gold Rush, tech companies provide the means by which modern prospectors from a range of industries can seek out profits. The story serves to exclude or devalue the wider destructive outcomes of this extractive, exploitative system, from its encouragement of violence to its environmental impact — an omission that the modern narratives of tech progress have continued.

Silicon Valley’s ideology is not only about the past, of course, but also about a particular view of the future. During the Cold War, Silicon Valley played a key part in the Space Race, working closely with the defence industry in pursuit of a post-Earth future. Such an outcome was attractive amidst the threat of looming nuclear war, but it also helped avoid difficult questions about contemporary challenges — as Dr Estruth puts it, “futurism was then, as now, a way of speculating yourself out of present problems”.

What these narratives of past and future have in common is that they take for granted that the solutions to our problems lie in ever greater expansion, extraction and conquest — themes which, as Dr Estruth points out, remain dominant in the Valley ideology today, though elided from public view:

“These temporal imaginaries allow tech firms to form imaginary futures, or solve imaginary problems that distract and displace contemporary calls for solutions to actual present and ongoing issues”

Resistance is Futile

Portraying the future as already set on a particular course has been central to how technology companies impose their aims and perspective on the wider world. Technology writer Michael Sacasas, author of The Convivial Society newsletter, describes these narratives as framing technological development ‘as a deterministic process to which human beings have no choice but to adapt’.

Much writing about technology and society today — particularly around automation and AI — asserts that ‘nothing can stop’ certain outcomes from occurring, and that all we can do is try to be ready for them, or somehow “manage” their impact. Such assertions are commonplace, and vary in their zeal, their intentions and their implications, but their common effect is to sideline the role of factors like economic policies, legal structures and political power in deciding technological outcomes.

For Big Tech companies, such narratives provide convenient cover, removing any element of choice from their decisions and positioning both the public and governments as mere spectators:

“Resistance is futile — one either gets on board, or gets left behind”.

Clearly these kinds of assertions have much in common with the narratives of deterministic inevitability that played a key part in justifying laissez-faire approaches to the wider economy. In particular, both attempt to sideline the role of government and civil society in deciding outcomes. But the ability of tech companies to continually produce tangible evidence of progress in their field — if not of their contribution to wider social progress — equips them particularly well to claim knowledge and ownership of what futures are possible.

More recently though, this ownership has run into challenges that may provide some space for alternatives to emerge.

The global faces of AI

AI is certainly one technology whose rise to dominance is often presented as inevitable. Yet despite its functional novelty, the ways that it has been received and interpreted around the world have varied greatly according to differing cultures and histories.

Dr Kanta Dihal

Dr Kanta Dihal is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she leads two research projects, Global AI Narratives and Decolonizing AI, which explore how public understanding of artificial intelligence is constructed across cultures by fictional and non-fictional narratives.

In the Western anglophone world, AI has often been associated with dystopian narratives of human obsolescence and machine takeovers, of terminators and robot revolutions. Such has been the enduring influence of this pessimistic tradition that even aspirations for AI were coloured by it. Until the 2010s, the most popular vision of what the public wanted from chatbots was effectively a ‘terminator in the service of mankind’: omniscient, emotionless and obedient.

As the 21st century has progressed and the actual experience and prospect of AI have accelerated, expectations and preferences have begun to shift. Other cultural traditions, which see AI through a more hopeful lens and desire a more fallible, likeable form of intelligence, have started to have greater influence. Less human-centred philosophical traditions, in Japan for example, have typically emphasised connectedness with the machine rather than separateness, and have led to different expectations of AI.

Whatever their origin and angle, such is the importance of AI to our current visions and stories of the future that they inevitably become entangled with attempts to advocate for and resist different political ideologies, and with wider geopolitical power struggles. The rise of China as not just a rival superpower but a potential AI superpower has given Western technology companies greater licence to pursue their AI ambitions, supported by the rationale that they are ‘our’ horse in a race we must win. The powerful implication is that we must therefore at all costs avoid weighing them down with the unhelpful baggage of regulation or oversight.

However, long-building pressure on governments to regulate the industry, and tensions as the companies grow to the point of overlapping with one another, have started to undermine and call into question some of the core narratives they have long relied on to justify their work. Even as seemingly basic a concept as privacy has started to face critique.

The privacy problem

One of the legacies of the early days of the internet that we are yet to fully escape is the perception of it as a ‘wild west’ space, in which wise explorers must seek individual protection from a variety of looming, often mysterious threats. This was not an entirely unfair depiction of those early days and, just as in the real world, a variety of threats can still readily be encountered on the modern web.

However, despite the massive evolution of the internet over the last two decades — from its more libertarian origins to the dominance of the gigantic walled gardens of the modern tech companies — this narrative of personal protection remains resonant and familiar in our everyday browsing, visually reinforced by padlock icons, security warnings and alerts.

As such, privacy has been a foundation on which many of the Big Tech companies have managed to build their own organisational narratives. This might seem counter-intuitive — after all, their intrusions on our privacy are among the most repeated critiques of these companies. But the fatal flaw that they exploit in this notion of personal privacy is that it frames the situation first and foremost with the individual user as the main actor, handing them the responsibility for choosing the outcome they desire.

The privacy framing helps the companies escape being held accountable for the results of the systems they operate, particularly as those problems tend to manifest not at the individual level but at the collective one. There are clear parallels here with critiques of the way recycling has put the responsibility on individual consumers to deal with the externalities of a hugely profitable plastics industry, or of how ethical consumerism pulls attention away from systemic issues to individual choices.

The comfort that the Big Tech companies have with personal privacy as a narrative can be seen most obviously in how often they deploy it explicitly in their advertising. Apple in particular has made it a pillar of its public message, declaring ‘Privacy is King’, ‘Privacy Matters’ and ‘Privacy. That’s iPhone.’

Following through on this agenda has recently brought Apple directly into dispute with Facebook, which objected to the imposition of a pop-up asking users to opt in to off-app tracking. But even Facebook itself, entirely reliant on a model of tracking and targeting its users, sees the benefits of the personal privacy frame in letting it off the hook. Responding to the leak of the personal data of 500 million users earlier this year, a Facebook spokesperson declared: “It’s always good for everyone to make sure that their settings align with what they want to be sharing publicly”, while Nick Clegg made similar arguments in his recent defence of the News Feed algorithm. In perhaps the most obvious of all attempts to assert this framing, a set of Facebook adverts that ran during 2020 led with the statement ‘Privacy is Personal’.

As a matter of individual practice, maintaining your personal privacy of course remains crucial, and in many places a matter of life or death. But as a way of discussing, understanding and addressing the many social problems we face, created or at best accelerated by the nature of the technology companies’ products, it remains entirely inadequate.

As we go forward, this conceptual inadequacy is only likely to grow. As the machine learning and AI components of targeting improve, the platforms will no longer need individual data in order to target their users effectively — and it will thus become increasingly easy for them to claim they support our ‘personal privacy’ while continuing to undermine our collective wellbeing.

Having been hollowed out through misuse by the tech companies themselves, privacy can no longer provide the foundation either for a reliable critique of Big Tech or for proposing a different approach. For that, a different, more collective conception is needed.

Looking across these different aspects, the need to dig deeper into the narrative roots behind the technology companies, and particularly those that provide the basis for justifying their positions, becomes ever clearer. This is not merely an exercise in discovery and understanding — it can reveal the fragile contradictions and weaknesses that provide the basis for building critiques and arguments that are harder to resist.

What is also needed, however, is a positive vision — a narrative for what we want our relationship with technology to be, and for how we might use and manage data in ways that benefit us all, individually and collectively. That is a harder task, but it is best done with a clear sense of where mistakes have been made in the past, and of how the exclusion and myopia lurking in the ways we have understood technology — or been persuaded to understand it — have resulted in the negative outcomes we are living with.

This article is based on discussion at and around the event ‘Escape from Humanity: the narratives behind Big Tech’, held by the Future Narratives Lab on 21st April 2021. You can access recordings of the event here.
