Speakers

Amira Möding

Amira Möding is a final-year PhD student in history at the University of Cambridge, working on an intellectual history of ‘Big Data.’ Amira works as a research assistant at the Department of Computer Science and Technology at Cambridge, in a project that aims to render so-called “AI” applications more intelligible and counter the current hype around machine learning. They have published on Big Data and racism. A paper co-authored with T. Matzner, “The Rise of Machine Empiricism: On the Development of Statistical Methods in Artificial Intelligence Research 1987–2017,” is currently under review with Isis. Amira has held fellowships at the University of Cambridge and Humboldt-Universität zu Berlin.

Daniela Zetti

Dr. Daniela Zetti is a historian of technology and media specializing in the history of digital and non-digital carriers of knowledge such as paper, television, and acoustic signals. She has worked in Zurich, Lüneburg, and Munich. Currently, she is a lecturer at the Institute for the History of Medicine and Science Studies and a research associate at the Ethical Innovation Hub, both at the University of Lübeck. In January 2025 she will start a research project on the relocation of the radio studios in Lausanne, Switzerland: “The house of radio, 2025–2026”, located at the University of Lausanne.

Florian Hoof

Florian Hoof is a Research Associate at the research center Normative Orders and an Associate Lecturer in Film and Media Studies, both at Goethe University Frankfurt. His research interests include digital infrastructures and logistics, security studies, science and technology studies, and non-theatrical film. He is the author of Angels of Efficiency: A Media History of Consulting (Oxford University Press 2020) and Alternative Sports and Media Distribution – Infrastructures and the Circulation of Moving Images (Palgrave 2025), and co-author of Films that Work Harder: The Global Circulation of Industrial Cinema (Amsterdam University Press 2023).

Iginio Gagliardone

Iginio Gagliardone is Professor of Media Studies at Wits University in Johannesburg, South Africa, and inaugural fellow of Wits’ Machine Intelligence and Neural Discovery (MIND) Institute. He is the author of “The Politics of Technology in Africa” (2016) and “China, Africa, and the Future of the Internet” (2019). His most recent work examines the international politics of Artificial Intelligence and the emergence of new imaginaries of technological evolution in Africa.

Jean-Christophe Plantin

Jean-Christophe Plantin is an associate professor at the Department of Media and Communications at the London School of Economics and Political Science. His research and teaching focus on the infrastructural power of tech giants, the platformization of cybersecurity, gaming, and libraries, and the invisible labor of cleaning data infrastructure. Previously, he was a Postdoctoral Fellow at the University of Michigan. He received a PhD in Communication and Information Studies from Université de Technologie de Compiègne, France.

Moira Weigel

Moira Weigel writes and teaches about the history, theory, and social life of media and communication technologies, from the early nineteenth century to the present. She originally trained in comparative literature and film studies, working in German, French, Spanish, and Mandarin Chinese. More recently, she has focused on data-driven technologies, particularly social media and marketplace platforms, as well as on new developments in artificial intelligence and machine translation. Her articles on these subjects have appeared or are forthcoming in boundary 2, The International Journal of Communication, New Media and Society, Polity, and Social Media + Society. Her first two books are Labor of Love: The Invention of Dating (2016) and, with Ben Tarnoff, Voices from the Valley: Tech Workers Talk About What They Do and How They Do It (2020). These have been translated into six languages and featured in dozens of outlets, including The New Yorker, The New York Times, The Economist, The Washington Post, The Atlantic, The Guardian, The Wall Street Journal, Wired, NPR, CNN, and HBO.

Nick Couldry

Nick Couldry is a sociologist of media and culture. He is Professor of Media Communications and Social Theory at the London School of Economics and Political Science, and since 2017 a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society. He is the author or editor of seventeen books including The Mediated Construction of Reality (with Andreas Hepp, Polity, 2016), Media, Society, World: Social Theory and Digital Media Practice (Polity 2012) and MediaSpace (Routledge 2004, co-edited with Anna McCarthy). His latest books include The Space of the World (Polity, forthcoming October 2024), Data Grab: The New Colonialism of Big Tech and How to Fight Back (Penguin/W. H. Allen 2024, with Ulises Mejias), Media: Why It Matters (Polity: 2019) and Media, Voice, Space and Power: Essays of Refraction (Routledge 2021). Nick is also the co-founder of the Tierra Común network of scholars and activists: https://www.tierracomun.net/.

Nick Seaver

Nick Seaver is Associate Professor of Anthropology at Tufts University, where he also directs the program in Science, Technology & Society. His research has appeared in venues including Cultural Anthropology, Cultural Studies, Big Data & Society, and the Journal of the Royal Anthropological Institute. He is the author of Computing Taste: Algorithms and the Makers of Music Recommendation (2022), an ethnographic study of music recommender system developers. His current research examines the rise of attention as a value and virtue in machine learning worlds.

Nicole Starosielski

Nicole Starosielski, Professor of Film and Media at the University of California, Berkeley, conducts research on global internet and media distribution, communications infrastructures ranging from data centers to undersea cables, and media’s environmental and elemental dimensions. Starosielski is author or co-editor of over thirty articles and five books on media, infrastructure, and environments, including The Undersea Network (2015), Media Hot and Cold (2021), Signal Traffic: Critical Studies of Media Infrastructure (2015), Sustainable Media: Critical Approaches to Media and Environment (2016), and Assembly Codes: The Logistics of Media (2021), as well as co-editor of the “Elements” book series at Duke University Press.

Shichang Duan

Dr. Shichang Duan is a postdoc in the Anthropology Department at the University of Amsterdam. His current project investigates Chinese transnational e-commerce platforms such as Alibaba and Shein. He obtained his PhD in May 2023 with the dissertation “Reproducing Farmers: The labor divisions among live e-commerce teammates in rural China.” He was a visiting scholar at Utrecht University and a research fellow at the University of Siegen.

Steven Gonzalez Monserrate

Steven Gonzalez Monserrate is a postdoctoral researcher with the Fixing Futures research training group at Goethe University. He received his PhD from the Massachusetts Institute of Technology in the History, Anthropology, Science, Technology & Society (HASTS) program. His book manuscript, “Cloud Ecologies”, is an ethnographic investigation of data centers and their environmental and political impacts in New England, Arizona, Puerto Rico, Iceland, and Singapore. Steven holds an MA in Anthropology from Brandeis University and a BA in Feminist Anthropology from Keene State College. Steven is also an artist, filmmaker, and a speculative fiction writer (under the byline E.G. Condé).

Data and AI: On the rise of Machine Empiricism in Artificial Intelligence (AI) Research

To understand the current capitalization of the digital technology industries, in particular Microsoft, Meta, and Alphabet/Google, it is vital to examine the hype surrounding AI and its material basis. The dream of a superintelligence (enshrined in a super application) generates significant revenue, as the success of OpenAI, currently valued at $157 billion, has shown. Underlying the current successes of ostensibly intelligent applications such as ChatGPT and Gemini, their surprising accessibility, and their general possibilities for application are vast data-processing infrastructures and new ways of generating data. Data here have taken on the character of a basic resource. Yet these data are not mainly generated by surveilling users to get a sense of their preferences. Instead, data are ‘mined’ broadly from the web, from Wikipedia, or in very few cases from scientific papers to generate ‘intelligence’ and model the world so that it becomes accessible and processable for the artificial neural network. I propose the notion of ‘machine empiricism’ to come to terms with modelling the world solely to allow the system to probabilistically generate likely answers or pictures. I argue that the recent success of “AI” has led to a surge in the building of data infrastructures as well as various attempts to obtain more data and to experiment with different types of corpora alongside ‘mining’ the web. I provide a genealogy of this development by pointing to the “economic imperatives” that started this process. Here, I attempt to analyse the peculiarities of data generation for “Modern AI.” Finally, I aim to show how the notion of machine empiricism can help us critically assess the current hype.

Digital sovereignty as an ill-structured (or wicked?) problem

In this chapter, we discuss digital sovereignty as an ill-structured problem negotiated within democratic and participative discourses in public and private organisations. We argue that even though ill-structured problems describe challenges that cannot be solved in a formally structured way, tensions between knowledge and practice within discursive attempts at solutions and theoretical foundations may gradually lead to (increasingly) well-structured problem formulations. We first invoke Herbert Simon’s analytical take on ill-structured problems and then apply the concept to digital sovereignty, the conception of which oscillates between the individual and the collective (inter-)national level. In light of issues and transgressions related to digital practices in violation or absence of digital sovereignty, however, we voice a call to resiliently pursue and engage in spaces of negotiation rather than succumbing to defeatism. Identifying digital sovereignty as an ill-structured problem can only emphasise the relevance of attempts to determine whether a transformation into a well-structured problem is possible. Accordingly, we contend that the discourse about digital sovereignty confirms ‘ill-structured problem’ as a timely analytical term that helps enhance our understanding of problems shaped by the conditions of (digital) societies.

Mapping Media Cultures of Digital Security and Trust

In recent years, media theory has discussed digital media and infrastructures in terms of surveillance, distribution, or representation. These concepts have been underpinned by more basic media-theoretical categories such as “technology,” “aesthetics,” “storage,” “transmission,” “circulation,” or “processing.” What has received less attention are questions of the (in)stability, security, and (un)trustworthiness of media devices, infrastructures, and circulation. My contribution outlines ways to develop trust and security as media-theoretical categories and sketches a research agenda for mapping media cultures of security and trust. I argue for a praxeological perspective that considers digital “security spaces” not only in relation to apparatuses of security, but as part of global efforts to create trust in media circulation.

AI sovereignty and networked sovereignty: a view from Africa

The conception of digital sovereignty has been associated, especially in the early stages of the diffusion of the Internet, with efforts to keep specific data and information outside of a state’s jurisdiction. AI sovereignty responds to an almost opposite logic, indicating the ability of a state to access and make use of data that are produced within its jurisdiction. These two strategies – which I refer to as lock-out and lock-in sovereignty – share some common roots (e.g. the attempt to protect and enhance specific cultural attributes recognised as important by a national community), but they also point to the different technical, economic, and political characteristics needed to enforce one or the other type of sovereignty. In this presentation I examine key elements that set these concepts, and their implementation, apart, and how they intersect with both existing and potential articulations of national sovereignty in Africa. In particular, I contrast a negative – and still pervasive – definition of sovereignty applied to African states, based on the Westphalian ideal and “measuring the gap between what Africa is and what we are told it ought to be” (Mbembe 2019, p. 26), with the possibilities disclosed by re-appropriating practices of “networked sovereignty” (Mbembe, 2016) that characterised pre-colonial Africa.

The Platformization of Cybersecurity: Uncovering Invisible Labor in “Bug Bounty” Platforms

By selling bug bounty programs, where security researchers (or “hackers”) report security vulnerabilities (or “bugs”) to organizations for a reward (or “bounty”), large vendors such as HackerOne or Bugcrowd apply the platform model to cybersecurity. While scholarship on bug bounties overwhelmingly focuses on hackers, we look at the other side of the platform and ask: how do bug bounty platforms shape the work of managers at the client organization? What can we learn from these managers’ work about how digital platforms affect cybersecurity globally? We use data from 13 semi-structured interviews with bug bounty managers, which we analyze using critical platform studies and science & technology studies scholarship on invisible work in infrastructure. Our results are two-fold. First, processing and fixing a bug require the manager to coordinate the work of three specific actors—researchers, triagers, and the owner of the asset affected by the vulnerability. Second, managers must constantly and intensively engage in three actions with these actors: attracting them, negotiating with them, and monitoring their work. Uncovering this invisible work shows how the platform model introduces new contingencies in the resolution of vulnerabilities and amplifies inequalities of cybersecurity resources between organizations—hence interrogating its positive contribution to the global security of digital infrastructure.

Amazon Fixers: The Limits of Platform Governance and the Codes of Consulting

Compliance agencies advise businesses on Amazon and other marketplace platforms, such as Walmart or TikTok, on how to avoid being abused by competitors, suspended, or banned. This is an informal industry, dominated by a few small firms, typically founded by people who fit into one of two categories: either they are experienced merchants who have transitioned into consulting in order to increase or diversify their income, or they are former employees of the platform firms in question. The primary service that such consultants provide is know-how, specifically, knowing how to “speak the language” of the platform. In practice, this means serving as a translator between clients and the platform, as well as sometimes literally translating for clients who use platforms to sell goods directly to consumers across multiple regional markets. Drawing on seven months of ethnography at a compliance agency, as well as numerous interviews with founders and employees at other agencies, this paper explores the work of e-commerce consultants for what it reveals about the limitations of platform governance and the ways that diversely positioned actors respond to them. I examine not only the “codes” that consultants use to speak to Amazon but also the way they make determinations about morality and professionalism in this constitutively gray area.

Big data’s emerging social order: The changing states of analysis and critique

What’s going on with Big Data and AI is much larger than surveillance, and it has much deeper roots than the ‘coup from above’ that Shoshana Zuboff diagnosed in The Age of Surveillance Capitalism. In this conversational plenary session, I will argue that it makes much more sense to understand what’s going on as an emerging social and economic order in which, as with the building and sustaining of any order, the full range of society’s actors are involved. This point has been central to the data colonialism framework (Couldry and Mejias 2019, Mejias and Couldry 2024) from the beginning and is one thing that distinguishes it from the ‘surveillance capitalism’ thesis. The important question then becomes: what remain the key areas of concern and critique, once we recognise the deep roots of today’s transformations in life through data and platforms? Today’s socioeconomic, platform-based order will not be overturned by political or regulatory fiat (in any case Zuboff’s main horizon of hope – US Democratic politics before Donald Trump’s recent re-election – no longer serves us). What then are the other forms of resistance and challenge about which we must think?

Seeing Like a Mouse: The Ends of Workplace Surveillance and the Semiotics of Deception

Mouse jigglers are a class of device designed to cause cursor movement on computer screens, with the goal of preventing automated systems from identifying users as “away” from their devices. With the rise of work-from-home arrangements in the Covid-19 pandemic and the accompanying spread of workplace surveillance technologies, mouse jigglers (and related technologies like automated keypressers) have become objects of popular discourse and scrutiny. These devices embody ideas about the possibility and scope of automated attention tracking, while also reflecting the evolution of modes of resistance. This paper analyzes the figure of the mouse jiggler, in both its technical configuration and discursive construction, to investigate a broader set of concerns about the automated detection and management of attention today. Attention—in the workplace and beyond—is known and governed through tenuous technical proxies that are susceptible to deception and resistance.

Opening the Black Box: Toward an Engaged Digital Infrastructure Research Agenda

Digital infrastructures are the physical installations that support the transmission, storage, and computation of data. These infrastructures include data centers; subsea and terrestrial fiber-optic cables; and Internet Exchange Points (“IXPs”), Points of Presence (“POPs”), and colocation facilities where networks are interconnected. Despite the fact that these installations underpin all Internet operations today, they remain absent from most academic institutions, and to date there has been no curriculum that covers digital infrastructure components, business models, design/build, operations, and maintenance, alongside the impacts of/on economies, geopolitics, artificial intelligence, and the environment. In this talk, I argue that this is a critical problem for governance, since as more industries and people depend on these systems, there needs to be a basic literacy about how they work. It is also, however, an industry problem — data center and subsea cable operators are looking for the next generation of their workforce. I describe the development of a core curriculum on digital infrastructure; a model for global cooperative design; the dissemination of research to industry actors; and the imagination and facilitation of new directions for infrastructure futures. Such activities, I show, are part of an engaged research agenda, one that is intertwined with educational imperatives, and in which academia is conceptualized not as a neutral, unbiased commentator, but as an active participant in global decision-making.

Platform-mediated racialization: A case study of rural Chinese wig sellers and Black clients on Alibaba

Due to the rise of global e-commerce platforms like Alibaba, online interactions between rural Chinese wig sellers and Black clients are increasing, leading to diversified grassroots interpretations of Blackness in China. While existing scholarship mostly focuses on discursive constructions of race in China, the role of platforms in racialization remains undertheorized. Based on four months of fieldwork in a wig factory in Xuchang, this research examines how Chinese wig sellers construct a hierarchy of Black clients by articulating platform metrics with a global infrastructure network and second-hand racial knowledge. We argue that e-platforms mediate the racialization of Black clients by rural Chinese wig sellers, rather than serving as a neutral instrument or dominant power. This research bridges the gap between platform studies and critical race studies and contributes to the reconceptualization of the relations between platforms and race.

The Cloud’s (In)Security: The Perils of Terraforming Environment for Computation

Data centers are highly secure facilities. As the computational engines of digital capitalism, they attract significant resources devoted to ‘protecting’ these infrastructures and ensuring the unceasing continuity of their operations. Security, conceived in this way, might refer to the military logics, geopolitical dynamics, and imperial histories that insulate data centers or heighten their vulnerability. However, while the threats of cyberwarfare significantly influence the siting and operation of data centers, security considerations extend to Nature as both a threat and an opportunity, a resource and an obstacle for ubiquitous computation and ‘artificial intelligence’. Drawing on multi-sited ethnographic research, this paper offers a comparative survey of data centers operating in different environments. Case studies include a data center in the arid southwestern United States, another in the arctic chill of Iceland, and a final case in the humid tropics of Singapore. With a focus on the materiality of computation and the pragmatic challenges that different climates pose for servers, I frame the Cloud’s expansion as an act of terraforming. By attending to both the environmental impacts of computation and the required inputs for the metabolism of data, the paper concludes by asking how the Cloud’s terraforming threatens as much as it assures continuity.