
The Hacking of Culture and the Creation of Socio-Technical Debt

Kim Córdova and Bruce Schneier

John Fullmer, Yzy Gap Psyop Red Round Jacket, 2023. Courtesy of the artist.

Issue #146
June 2024

Culture is increasingly mediated through algorithms. These algorithms have splintered the organization of culture, a result of states and tech companies vying for influence over mass audiences. One byproduct of this splintering is a shift from imperfect but broad cultural narratives to a proliferation of niche groups, who are defined by ideology or aesthetics instead of nationality or geography. This change reflects a material shift in the relationship between collective identity and power, and illustrates how states no longer have exclusive domain over either. Today, both power and culture are increasingly corporate.

Blending Stewart Brand and Jean-Jacques Rousseau, McKenzie Wark writes in A Hacker Manifesto that “information wants to be free but is everywhere in chains.”1 Sounding simultaneously harmless and revolutionary, the assertion, made as part of her analysis of the role of what she terms “the hacker class” in creating new world orders, points to one of the ideas that became foundational to the reorganization of power in the era of the internet: that “information wants to be free.” This credo, itself a co-option of Brand’s influential original remark in a conversation with Apple cofounder Steve Wozniak at the 1984 Hackers Conference, later elaborated in his 1987 book The Media Lab: Inventing the Future at MIT, became a central ethos for early internet inventors, activists,2 and entrepreneurs. Ultimately, this notion was foundational to the construction of the era we find ourselves in today: an era in which internet companies dominate public and private life. These companies used the supposed desire of information to be free as a pretext for building platforms that allowed people to connect and share content. Over time, this development helped facilitate the definitive power transfer of our time, from states to corporations.

This power transfer was enabled in part by personal data and its potential power to influence people’s behavior—a critical goal in both politics and business. The pioneers of the digital advertising industry claimed that the more data they had about people, the more they could influence their behavior. In this way, they used data as a proxy for influence, and built the business case for mass digital surveillance. The big idea was that data can accurately model, predict, and influence the behavior of everyone—from consumers to voters to criminals. In reality, the relationship between data and influence is fuzzier, since influence is hard to measure or quantify. But the idea of data as a proxy for influence is appealing precisely because data is quantifiable, whereas influence is vague. The business model of Google Ads, Facebook, Experian, and similar companies works because data is cheap to gather, and the effectiveness of the resulting influence is difficult to measure. The credo was “Build the platform, harvest the data…then profit.” By 2006, a major policy paper could ask, “Is Data the New Oil?”3

The digital platforms that have succeeded most in attracting and sustaining mass attention—Facebook, TikTok, Instagram—have become cultural. The design of these platforms dictates the circulation of customs, symbols, stories, values, and norms that bind people together in protocols of shared identity. Culture, as articulated through human systems such as art and media, is a kind of social infrastructure. Put differently, culture is the operating system of society.

Like any well-designed operating system, culture is invisible to most people most of the time. Hidden in plain sight, it is something we make use of constantly without realizing it. As an operating system, culture forms the base infrastructure layer of societal interaction, facilitating communication, cooperation, and interrelations. Always evolving, culture is elastic: we build on it, remix it, and even break it.

Culture can also be hacked—subverted for specific advantage.4 If culture is like an operating system, then to hack it is to exploit the design of that system to gain unauthorized control and manipulate it towards a specific end. This can be for good or for bad. The morality of the hack depends on the intent and actions of the hacker.

When businesses hack culture to gather data, they are not necessarily destroying or burning down social fabrics and cultural infrastructure. Rather, they reroute the way information and value circulate, for the benefit of their shareholders. This isn’t new. There have been culture hacks before. For example, by lending it covert support, the CIA hacked the abstract expressionism movement to promote the idea that capitalism was friendly to high culture.5 Advertising appropriated the folk-cultural images of Santa Claus and the American cowboy to sell Coca-Cola and Marlboro cigarettes, respectively. In Mexico, after the revolution of 1910, the ruling party hacked muralist works, aiming to construct a unifying national narrative.

Culture hacks under digital capitalism are different. Whereas traditional propaganda goes in one direction—from government to population, or from corporation to customers—the internet-surveillance business works in two directions: extracting data while pushing engaging content. The extracted data is used to determine what content a user would find most engaging, and that engagement is used to extract more data, and so on. The goal is to keep as many users as possible on platforms for as long as possible, in order to sell access to those users to advertisers. Another difference between traditional propaganda and digital platforms is that the former aims to craft messages with broad appeal, while the latter hyper-personalizes content for individual users.

Kate Crawford and Vladan Joler, Anatomy of an AI System, 2018. A large-scale map of the Amazon Echo as an anatomical map of human labor, data, and planetary resources.

The rise of Chinese-owned TikTok has triggered heated debate in the US about the potential for a foreign-owned platform to influence users by manipulating what they see. Never mind that US corporations have used similar tactics for years. While the political commitments of platform owners are indeed consequential—Chinese-owned companies are in service to the Chinese Communist Party, while US-owned companies are in service to business goals—the far more pressing issue is that both have virtually unchecked surveillance power. They are both reshaping societies by hacking culture to extract data and serve content. Funny memes, shocking news, and aspirational images all function similarly: they provide companies with unprecedented access to societies’ collective dreams and fears.6 By determining who sees what, when, and where, platform owners influence how societies articulate their understanding of themselves.

Tech companies want us to believe that algorithmically determined content is effectively neutral: that it merely reflects the user’s behavior and tastes back at them. In 2021, Instagram head Adam Mosseri wrote a post on the company’s blog entitled “Shedding More Light on How Instagram Works.” A similar window into TikTok’s functioning was provided by journalist Ben Smith in his article “How TikTok Reads Your Mind.”7 Both pieces boil down to roughly the same idea: “We use complicated math to give you more of what your behavior shows us you really like.”
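In code, that idea is little more than a ranking loop. Below is a deliberately minimal sketch, in Python, of engagement-weighted scoring; the weights, field names, and scoring formula are illustrative assumptions, not a description of Instagram’s or TikTok’s actual ranking systems.

```python
# Minimal, illustrative sketch of engagement-weighted ranking.
# Weights and fields are assumptions; real platform rankers are
# far larger, proprietary, and constantly retrained.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like: float        # model's estimate the user will like it (0-1)
    predicted_share: float       # estimate the user will share it (0-1)
    predicted_watch_time: float  # estimated seconds of attention

# Hypothetical weights: each predicted behavior "pays" differently.
WEIGHTS = {"like": 1.0, "share": 3.0, "watch_time": 0.1}

def engagement_score(post: Post) -> float:
    """Collapse predicted behaviors into a single attention score."""
    return (
        WEIGHTS["like"] * post.predicted_like
        + WEIGHTS["share"] * post.predicted_share
        + WEIGHTS["watch_time"] * post.predicted_watch_time
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Serve more of whatever the model predicts the user will engage with."""
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("meme", predicted_like=0.9, predicted_share=0.4, predicted_watch_time=12.0),
        Post("news", predicted_like=0.3, predicted_share=0.1, predicted_watch_time=45.0),
        Post("ad", predicted_like=0.5, predicted_share=0.2, predicted_watch_time=8.0),
    ])
    print([p.post_id for p in feed])  # highest predicted engagement first
```

The point of the sketch is the feedback loop: whatever scores highest gets shown, the user’s reaction becomes new behavioral data, and the predictions drift toward whatever holds attention.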

This has two consequences. First, companies that control what users see in a nontransparent way influence how we perceive the world. They can even shape our personal relationships. Second, by optimizing algorithms for individual attention, a sense of culture as common ground is lost. Rather than binding people through shared narratives, digital platforms fracture common cultural norms into self-reinforcing filter bubbles.8

This fragmentation of shared cultural identity reflects how the data surveillance business is rewriting both the established order of global power, and social contracts between national governments and their citizens. Before the internet, in the era of the modern state, imperfect but broad narratives shaped distinct cultural identities; “Mexican culture” was different from “French culture,” and so on. These narratives were designed to carve away an “us” from “them,” in a way that served government aims. Culture has long been understood to operate within the envelope of nationality, as exemplified by the organization of museum collections according to the nationality of artists, or by the Venice Biennale—the Olympics of the art world, with its national pavilions format.

National culture, however, is about more than museum collections or promoting tourism. It broadly legitimizes state power by emotionally binding citizens to a self-understood identity. This identity helps ensure a continuing supply of military recruits to fight for the preservation of the state. Sociologist James Davison Hunter, who popularized the phrase “culture war,” stresses that culture is used to justify violence to defend these identities.9 We saw an example of this on January 6, 2021, with the storming of the US Capitol. Many of those involved were motivated by a desire to defend a certain idea of cultural identity they believed was under threat.

Military priorities were also entangled with the origins of the tech industry. The US Department of Defense funded ARPANET, the first version of the internet. But the internet wouldn’t have become what it is today without the influence of both West Coast counterculture and small-l libertarianism, which saw the early internet as primarily a space to connect and play. One of the first digital game designers was Bernie De Koven, founder of the Games Preserve Foundation. A noted game theorist, he was inspired by Stewart Brand’s interest in “play-ins” to start a center dedicated to play. Brand had envisioned play-ins as an alternative form of protest against the Vietnam War; they would be their own “soft war” of subversion against the military.10 But the rise of digital surveillance as the business model of nascent tech corporations would hack this anti-establishment spirit, turning instruments of social cohesion and connection into instruments of control.

It’s this counterculture side of tech’s lineage, which advocated for the social value of play, that attuned the tech industry to the utility of culture. We see the commingling of play and military control in Brand’s Whole Earth Catalog, which was a huge influence on early tech culture. Described as “a kind of Bible for counterculture technology,” the Whole Earth Catalog was popular with the first generation of internet engineers, and established crucial “assumptions about the ideal relationships between information, technology, and community.”11 Brand’s 1972 Rolling Stone article “Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums” further emphasized how rudimentary video games were central to the engineering community. These games were wildly popular at leading engineering research centers: Stanford, MIT, ARPA, Xerox, and others. This passion for gaming as an expression of technical skills and a way for hacker communities to bond led to the development of MUD (Multi-User Dungeon) programs, which enabled multiple people to communicate and collaborate online simultaneously.

Joshua Citarella, Choose Your Future, 2020. Courtesy of the artist.

The first MUD was developed in 1978 by engineers who wanted to play fantasy games online. It applied the early-internet ethos of decentralism and personalization to video games, making it a precursor to massive multiplayer online role-playing games and modern chat rooms and Facebook groups. Today, these video games and game-like simulations—now a commercial industry worth around $200 billion12—serve as important recruitment and training tools for the military.13 The history of the tech industry and culture is full of this tension between the internet as an engineering plaything and as a surveillance commodity.

Historically, infrastructure businesses—like railroad companies in the nineteenth-century US—have always wielded considerable power. Internet companies that are also infrastructure businesses combine commercial interests with influence over national and individual security. As we transitioned from railroad tycoons connecting physical space to cloud computing companies connecting digital space, the pace of technological development put governments at a disadvantage. The result is that corporations now lead the development of new tech (a reversal from the ARPANET days), and governments follow, struggling to modernize public services in line with the new tech. Companies like Microsoft are functionally providing national cybersecurity. Starlink, Elon Musk’s satellite internet service, is a consumer product that facilitates military communications for the war in Ukraine. Traditionally, this kind of service had been restricted to selected users and was the purview of states.14 Increasingly, it is clear that a handful of transnational companies are using their technological advantages to consolidate economic and political power to a degree previously afforded to only great-power nations.

Worse, since these companies operate across multiple countries and regions, there is no regulatory body with the jurisdiction to effectively constrain them. This transition of authority from states to corporations, together with surveillance as the business model of the internet, rewrites social contracts between national governments and their citizens. It also blurs the lines among citizen, consumer, and worker. An example of this is Google’s reCAPTCHA, the visual image puzzles used in cybersecurity to “prove” that a user is a human and not a bot. While these puzzles add a layer of security to company and government sites, their real value lies in how they record users’ answers, which are used to train Google’s computer vision AI systems. Similarly, Microsoft provides significant cybersecurity services to governments while also training its AI models on citizens’ conversations with Bing.15 Under this dynamic, when citizens use digital tools and services provided by tech companies, often to access government webpages and resources, they become de facto free labor for the tech companies providing them. The value generated by this citizen-user-laborer stays with the company, which uses it to develop and refine its products. In this new blurred reality, the relationships among corporations, governments, power, and identity are shifting. Our social and cultural infrastructure suffers as a result, creating a new kind of technical debt, one of social and cultural infrastructure.
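A schematic sketch of that dual use, the security check that doubles as free annotation work, might look like the following. The function names, agreement rule, and label store are hypothetical illustrations, not Google’s actual reCAPTCHA implementation.

```python
# Schematic sketch: a CAPTCHA answer both verifies the user and becomes a
# training label. Function names, the agreement rule, and the label store
# are hypothetical; this is not Google's actual reCAPTCHA pipeline.

labeled_training_data = []  # stand-in for the company's label store

def serve_challenge():
    """Return an image the vision model is uncertain about, framed as a security test."""
    return {"image_id": "img_0042", "prompt": "Select all squares with traffic lights"}

def verify_and_harvest(challenge, user_selection, consensus_selections):
    """Grant access if the user roughly agrees with prior human answers,
    then keep the answer as a label for training a vision model."""
    agreement = len(set(user_selection) & set(consensus_selections))
    is_human = agreement >= max(1, len(consensus_selections) - 1)
    if is_human:
        # The security check doubles as unpaid annotation work.
        labeled_training_data.append(
            {"image_id": challenge["image_id"], "label": sorted(user_selection)}
        )
    return is_human

if __name__ == "__main__":
    challenge = serve_challenge()
    print(verify_and_harvest(challenge, [2, 5, 7], [2, 5, 7, 8]))  # True: access granted
    print(labeled_training_data)  # the free labor the essay describes
```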

In the field of software development, technical debt refers to the future cost of ignoring a near-term engineering problem.16 Technical debt grows as engineers implement short-term patches or workarounds, pushing the more expensive and involved re-engineering fixes off until later. The debt accrues over time and must eventually be paid back. Deciding to solve an immediate problem at the expense of the long-term one effectively mortgages the future in favor of an easier present. In terms of cultural and social infrastructure, we use the same phrase to refer to the long-term costs of avoiding or only partially addressing social needs in the present. More than a mere mistake, socio-technical debt stems from willfully not addressing a social problem today, leaving a much larger problem to be addressed in the future.
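For readers outside software, here is a toy example of the engineering usage, built around a hypothetical currency-conversion scenario: the hard-coded shortcut ships faster today, and the cost of undoing it grows the longer it stays in place.

```python
# A toy illustration of technical debt in the software sense.
# The currency-conversion scenario and numbers are hypothetical.

# The shortcut: hard-code today's exchange rate to ship on time.
# Cheap now; every future rate change or new currency compounds the cost.
def price_in_eur_quick(price_usd: float) -> float:
    return price_usd * 0.92  # magic number: breaks silently when rates move

# Paying down the debt: a single maintained source of rates.
EXCHANGE_RATES = {"EUR": 0.92, "MXN": 17.1}  # in practice, a live feed

def price_in_currency(price_usd: float, currency: str) -> float:
    return price_usd * EXCHANGE_RATES[currency]

if __name__ == "__main__":
    print(price_in_eur_quick(100.0))        # works today
    print(price_in_currency(100.0, "MXN"))  # the longer-term fix generalizes
```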

For example, this kind of technical debt was created by the cratering of the news industry, which relied on social media to drive traffic—and revenue—to news websites. When social media companies adjusted their algorithms to deprioritize news, traffic to news sites plummeted, causing an existential crisis for many publications.17 Now, traditional news stories make up only 3 percent of social media content. At the same time, 66 percent of people ages eighteen to twenty-four say they get their “news” from TikTok, Facebook, and Twitter.18 To be clear, Facebook did not accrue technical debt when it swallowed the news industry. We as a society are dealing with technical debt in the sense that we are being forced to pay the social cost of allowing it to do so.

One result of this shift in information consumption, driven by changes to the cultural infrastructure of social media, is rising polarization and radicalism. By neglecting to adequately regulate tech companies and support news outlets in the near term, our governments have paved the way for social instability in the long term. We as a society must also find and fund new systems to act as watchdogs over both corporate and governmental power.

Another example of socio-technical debt is the slow erosion of main streets and malls by e-commerce.19 These places used to be important sites for physical gathering, which helped the shops and restaurants concentrated there stay in business. But e-commerce and direct-to-consumer trends have undermined the economic viability of main streets and malls, and have made it much harder for small businesses to survive. The long-term consequence of this to society is the hollowing out of town centers and the loss of spaces for physical gathering—which we will all have to pay for eventually.

Suzanne Treister, POST-SURVEILLANCE ART/NSA ON DRUGS, 2014. Courtesy of the artist.

The faltering finances of museums will also create long-term consequences for society as a whole, especially in the US, where museums mostly depend on private donors to cover operational costs. But a younger generation of philanthropists is shifting its giving priorities away from the arts, leading to a funding crisis at some institutions.20

One final example: libraries. NYU sociologist Eric Klinenberg has called libraries “the textbook example of social infrastructure in action.”21 But today they are stretched to the breaking point, like museums, main streets, and news media. Over the past year, New York City mayor Eric Adams has proposed a series of severe budget cuts to the city’s library system, despite a recent spike in usage. The steepest cuts were eventually retracted, but most libraries in the city have still had to cancel social programs and cut the number of days they are open.22 As more and more spaces for meeting in real life close, we increasingly turn to digital platforms for connection to replace them. But these virtual spaces are optimized for shareholder returns, not the public good.

Just seven companies—Alphabet (the parent company of Google), Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla—drove 60 percent of the gains of the S&P stock market index in 2023.23 Four—Alibaba, Amazon, Google, and Microsoft—deliver the majority of cloud services.24 These companies have captured the delivery of digital and physical goods and services. Everything involved with social media, cloud computing, groceries, and medicine is trapped in their flywheels, because the constellation of systems that previously put the brakes on corporate power, such as monopoly laws, labor unions, and news media, has been eroded. Product dependence and regulatory capture have further undermined the capacity of states to respond to the rise in corporate hard and soft power. Lock-in and other anticompetitive corporate behavior have prevented market mechanisms from working properly. As democracy falls into deeper crisis with each passing year, policy and culture are increasingly bent towards serving corporate interest. The illusion that business, government, and culture are siloed sustains this status quo.

Our digitized global economy has made us all participants in the international data trade, however reluctantly. Though we are aware of the privacy invasions and social costs of digital platforms, we nevertheless participate in these systems because we feel as though we have no alternative—which itself is partly the result of tech monopolies and the lack of competition.

Now, the ascendance of AI is thrusting big data into a new phase and into new conflicts with social contracts. The development of bigger, more powerful AI models means more demand for data. Again, massive wholesale extractions of culture are at the heart of these efforts.25 As AI researchers and artists Kate Crawford and Vladan Joler explain in the catalog to their exhibition Calculating Empires, AI developers require “the entire history of human knowledge and culture … The current lawsuits over generative systems like GPT and Stable Diffusion highlight how completely dependent AI systems are on extracting, enclosing, and commodifying the entire history of cognitive and creative labor.”26

Permitting internet companies to hack the systems in which culture is produced and circulates is a short-term trade-off that has proven to have devastating long-term consequences. When governments give tech companies unregulated access to our social and cultural infrastructure, the social contract becomes biased towards their profit. When we get immediate catharsis through sharing memes or engaging in internet flamewars, real protest is muzzled. We are increasing our collective socio-technical debt by ceding our social and cultural infrastructure to tech monopolies.

Cultural expression is fundamental to what makes us human. It’s an impulse, innate to us as a species, and this impulse will continue to be a gold mine to tech companies. There is evidence that training AI models on synthetic data—data produced by other AI models rather than humans—can corrupt those models, causing them to return false or nonsensical answers to queries.27 So as AI-produced data floods the internet, data that is guaranteed to have been derived from humans becomes more valuable. In this context, our human nature, compelling us to make and express culture, is the dream of digital capitalism. We become a perpetual motion machine churning out free data. Beholden to shareholders, these corporations see it as their fiduciary duty—a moral imperative even—to extract value from this cultural life.

We are in a strange transition. The previous global order, in which states wielded ultimate authority, hasn’t quite died. At the same time, large corporations have stepped in to deliver some of the services abandoned by states, but at the price of privacy and civic well-being. Increasingly, corporations provide consistent, if not pleasant, economic and social organization. Something similar occurred during the Gilded Age in the US (1870s–1890s). But back then, the influence of robber barons was largely constrained to the geographies in which they operated, and their services (like the railroad) were not previously provided by states. In our current transitional period, public life worldwide is being reimagined in accordance with corporate values. Amidst a tug-of-war between the old state-centric world and the emerging capital-centric world, there is a growing radicalism fueled partly by frustration over social and personal needs going unmet under a transnational order that is optimized for profit rather than the public good.

Culture is increasingly divorced from national identity in our globalized, fragmented world. On the positive side, this decoupling can make culture more inclusive of marginalized people. Other groups, however, may perceive this new status quo as a threat, especially those facing a loss of privilege. The rise of white Christian nationalism shows that the right still regards national identity and culture as crucial—as potent tools in the struggle to build political power, often through anti-democratic means. This phenomenon shows that the separation of cultural identity from national identity doesn’t negate the latter. Instead, it creates new political realities and new orders of power.

Nations issuing passports still behave as though they are the definitive arbiters of identity. But culture today—particularly the multiverse of internet cultures—exposes how this is increasingly untrue. With government discredited as an ultimate authority, and identity less and less connected to nationality, we can find a measure of hope for navigating the current transition in the fact that culture is never static. New forms of resistance are always emerging. But we must ask ourselves: Have the tech industry’s overwhelming surveillance powers rendered subversion impossible? Or does its scramble to gather all the world’s data offer new possibilities to hack the system?

Notes
1. McKenzie Wark, A Hacker Manifesto (Harvard University Press, 2004), thesis 126.

2. Jon Katz, “Birth of a Digital Nation,” Wired, April 1, 1997.

3. Marcin Szczepanski, “Is Data the New Oil? Competition Issues in the Digital Economy,” European Parliamentary Research Service, January 2020.

4. Bruce Schneier, A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back (W. W. Norton & Company, 2023).

5. Lucie Levine, “Was Modern Art Really a CIA Psy-Op?” JSTOR Daily, April 1, 2020.

6. Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (W. W. Norton & Company, 2015).

7. Adam Mosseri, “Shedding More Light on How Instagram Works,” Instagram Blog, June 8, 2021; Ben Smith, “How TikTok Reads Your Mind,” New York Times, December 5, 2021.

8. Giacomo Figà Talamanca and Selene Arfini, “Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers,” Philosophy & Technology 35, no. 1 (2022).

9. Zack Stanton, “How the ‘Culture War’ Could Break Democracy,” Politico, May 5, 2021.

10. Jason Johnson, “Inside the Failed, Utopian New Games Movement,” Kill Screen, October 25, 2013.

11. Fred Turner, “Taking the Whole Earth Digital,” chap. 4 in From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (University of Chicago Press, 2006).

12. Kaare Ericksen, “The State of the Video Games Industry: A Special Report,” Variety, February 1, 2024.

13. Rosa Schwartzburg, “The US Military Is Embedded in the Gaming World. Its Target: Teen Recruits,” The Guardian, February 14, 2024; Scott Kuhn, “Soldiers Maintain Readiness Playing Video Games,” US Army, April 29, 2020; Katie Lange, “Military Esports: How Gaming Is Changing Recruitment & Morale,” US Department of Defense, December 13, 2022.

14. Shaun Waterman, “Growing Commercial SATCOM Raises Trust Issues for Pentagon,” Air & Space Forces Magazine, April 3, 2024.

15. Geoffrey A. Fowler, “Your Instagrams Are Training AI. There’s Little You Can Do About It,” Washington Post, September 27, 2023.

16. Zengyang Li, Paris Avgeriou, and Peng Liang, “A Systematic Mapping Study on Technical Debt and Its Management,” Journal of Systems and Software, December 2014.

17. David Streitfeld, “How the Media Industry Keeps Losing the Future,” New York Times, February 28, 2024.

18. “The End of the Social Network,” The Economist, February 1, 2024; Ollie Davies, “What Happens If Teens Get Their News From TikTok?” The Guardian, February 22, 2023.

19. Eric Jaffe, “Quantifying the Death of the Classic American Main Street,” Medium, March 16, 2018.

20. Julia Halperin, “The Hangover from the Museum Party: Institutions in the US Are Facing a Funding Crisis,” Art Newspaper, January 19, 2024.

21. Quoted in Pete Buttigieg, “The Key to Happiness Might Be as Simple as a Library or Park,” New York Times, September 14, 2018.

22. Jeffery C. Mays and Dana Rubinstein, “Mayor Adams Walks Back Budget Cuts Many Saw as Unnecessary,” New York Times, April 24, 2024.

23. Karl Russell and Joe Rennison, “These Seven Tech Stocks Are Driving the Market,” New York Times, January 22, 2024.

24. Ian Bremmer, “How Big Tech Will Reshape the Global Order,” Foreign Affairs, October 19, 2021.

25. Nathan Sanders and Bruce Schneier, “How the ‘Frontier’ Became the Slogan for Uncontrolled AI,” Jacobin, February 27, 2024.

26. Kate Crawford and Vladan Joler, Calculating Empires: A Genealogy of Technology and Power, 1500–2025 (Fondazione Prada, 2023), 9. Exhibition catalog.

27. Rahul Rao, “AI-Generated Data Can Poison Future AI Models,” Scientific American, July 28, 2023.


Kim Córdova is an art writer. Currently, she is a Fulbright Scholar based in Bogotá and is a 2023 recipient of the Andy Warhol Arts Writers Grant.

Bruce Schneier is a security technologist. He is a fellow and lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and the Chief of Security Architecture at Inrupt, Inc.
