
If . . . Then

Oxford Studies in Digital Politics

Series Editor: Andrew Chadwick, Professor of Political Communication in the Centre for Research in Communication and Culture and the Department of Social Sciences, Loughborough University

Using Technology, Building Democracy: Digital Campaigning and the Construction of Citizenship

Jessica Baldwin-Philippi

Expect Us: Online Communities and Political Mobilization

Jessica L. Beyer

The Hybrid Media System: Politics and Power

Andrew Chadwick

The Only Constant Is Change: Technology, Political Communication, and Innovation Over Time

Ben Epstein

Tweeting to Power: The Social Media Revolution in American Politics

Jason Gainous and Kevin M. Wagner

Risk and Hyperconnectivity: Media and Memories of Neoliberalism

Andrew Hoskins and John Tulloch

Democracy’s Fourth Wave?: Digital Media and the Arab Spring

Philip N. Howard and Muzammil M. Hussain

The Digital Origins of Dictatorship and Democracy: Information Technology and Political Islam

Philip N. Howard

Analytic Activism: Digital Listening and the New Political Strategy

David Karpf

The MoveOn Effect: The Unexpected Transformation of American Political Advocacy

David Karpf

Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy

Daniel Kreiss

Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama

Daniel Kreiss

Media and Protest Logics in the Digital Era: The Umbrella Movement in Hong Kong

Francis L.F. Lee and Joseph M. Chan

Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood

Steven Livingston and Gregor Walter-Drop

Digital Cities: The Internet and the Geography of Opportunity

Karen Mossberger, Caroline J. Tolbert, and William W. Franko

Revolution Stalled: The Political Limits of the Internet in the Post-Soviet Sphere

Sarah Oates

Disruptive Power: The Crisis of the State in the Digital Age

Taylor Owen

Affective Publics: Sentiment, Technology, and Politics

Zizi Papacharissi

The Citizen Marketer: Promoting Political Opinion in the Social Media Age

Joel Penney

Presidential Campaigning in the Internet Age

Jennifer Stromer-Galley

News on the Internet: Information and Citizenship in the 21st Century

David Tewksbury and Jason Rittenberg

The Civic Organization and the Digital Citizen: Communicating Engagement in a Networked Age

Chris Wells

Networked Publics and Digital Contention: The Politics of Everyday Life in Tunisia

Mohamed Zayani

If . . . Then

ALGORITHMIC POWER AND POLITICS

TAINA BUCHER


Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2018

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data

Names: Bucher, Taina, author.

Title: If then : algorithmic power and politics / Taina Bucher.

Description: New York : Oxford University Press, [2018] | Includes bibliographical references and index. |

Identifiers: LCCN 2017054909 (print) | LCCN 2018008562 (ebook) | ISBN 9780190493042 (Updf) | ISBN 9780190493059 (Epub) | ISBN 9780190493035 (pbk. : alk. paper) | ISBN 9780190493028 (hardcover : alk. paper)

Subjects: LCSH: Information technology—Social aspects. | Information society—Social aspects. | Algorithms—Social aspects. | Big data—Social aspects. | Artificial intelligence—Social aspects.

Classification: LCC HM851 (ebook) | LCC HM851 .B798 2018 (print) | DDC 303.48/33—dc23

LC record available at https://lccn.loc.gov/2017054909

1 3 5 7 9 8 6 4 2

Paperback printed by WebCom, Inc., Canada

Hardback printed by Bridgeport National Bindery, Inc., United States of America

Contents

Acknowledgments

1. Introduction: Programmed Sociality

2. The Multiplicity of Algorithms

3. Neither Black Nor Box: (Un)knowing Algorithms

4. Life at the Top: Engineering Participation

5. Affective Landscapes: Everyday Encounters with Algorithms

6. Programming the News: When Algorithms Come to Matter

7. Conclusion: Algorithmic Life

Notes

Bibliography

Index

Acknowledgments

I enjoyed writing this book. The initial idea behind this book started taking shape as a dissertation at the University of Oslo, but soon evolved into something else entirely, as most things do. I therefore owe a sense of gratitude to all the efforts I put into my PhD project. Memories of past struggles greatly lessened the burden of undertaking and writing my first book. The encouragement and generosity of series editor Andrew Chadwick and editor Angela Chnapko at Oxford University Press contributed hugely to that end, and were fundamental in developing this project. Thanks to both of you for believing it would make a valuable contribution to the Oxford Studies in Digital Politics series. The book is a product of various encounters with people, things, and places. It was written in the libraries, offices, homes, and cafés of Copenhagen, Oslo, Berlin, and New York. Writing allowed me to explore these places in new ways. I’d also like to acknowledge the sunlight, coffee, sounds, views, connectivity, and solitude that these places helpfully offered. The University of Copenhagen has provided a rich academic community, and I am grateful to all my colleagues at the Centre for Communication and Computing for the intellectual discussions and support. Thanks to Shannon Mattern for hosting me at the New School in New York for my sabbatical. A number of people provided valuable feedback on the book as it emerged: Anne-Britt Gran, Michael Veale, Angèle Christin, Ganaele Langlois, and Fenwick McKelvey. Thanks for your insightful comments. My appreciation also goes to all the people involved in the editorial process, copyediting, and transcription services, and to all the anonymous referees whose work and critical remarks have greatly improved the final product. There are countless other scholars and students, met at conferences and seminars, to thank as well for their keen interest in my work and their astute suggestions.
I hope a collective word of thanks will be accepted. This book also benefited from insightful interviews and conversations with media leaders, producers, and social media users. I am grateful to all the people who generously agreed to be interviewed and for giving up their time to help me understand the world of algorithms a bit better.

Fragments of this text have been previously published, but are all freshly milled here. Chapter 4 takes parts from “Want to be on the top?” (New Media & Society). Chapter 5 builds on pieces from “The algorithmic imaginary” (Information, Communication & Society), and chapter 6 has adapted sections from “Machines don’t have instincts” (New Media & Society). There are probably many more fragments to be acknowledged, but the boundaries of a text are never easily delineated. Chapters 5 and 6 also benefited from financial support from the Digitization and Diversity project funded by the Research Council of Norway. Thanks to friends and family for their patience and support, and especially to my mom, who never forgot to mention that there is more to life than writing a book. More than anyone I am grateful to Georg Kjøll for his unconditional love, superb editorial skills, music playlists, daily home-cooked dinners, and companionship; you make living and writing so much fun each and every day.

Introduction: Programmed Sociality

Let us for a moment consider the following scenario: Copenhagen on a rainy November day. The semester is coming to a close. Students and professors alike are stressed, but it is nonetheless a sociable time of the year. Conference trips, essays to be graded, dinner plans, and Christmas presents in the pipeline. In other words: buying plane tickets, surfing the Web for restaurant tips, streaming music, coordinating dinner plans with friends, shopping for Christmas gifts, preparing lectures, watching a movie after work. Besides describing a random day in an academic life, all these activities imply using the Internet. I found some cheap tickets using a metasearch engine, discovered a nice restaurant through a collaborative filtering system, listened to a music playlist suggested to me by a music-streaming site, bought a gift for my mother at an online designer store, chatted with friends on a social-networking site to find a suitable time for a pre-Christmas dinner, and, finally, watched an episode of my favorite TV series on a movie-streaming site.

It is not just a story about the good life of a Scandinavian academic. It suggests a life deeply entwined with media. Following media theorist Mark Deuze (2012), we might say that life is not only lived in and through media but in and through specific types of media. What these activities have in common is a high degree of interaction with algorithmic media, media whose core function depends on algorithmic operations. This book starts from the premise that life is not merely infused with media but increasingly takes place in and through an algorithmic media landscape. Key to this premise is the notion of the co-production of social life, practices, and technology. While people interact with specific media companies and platforms, these platforms interact with people as well. Users do not simply consult websites or talk to their friends online. Social media and other commercial Web companies recommend, suggest, and provide users with what their algorithms have predicted to be the most relevant, hot, or interesting news, books, or movies to watch, buy, and consume. Platforms act as performative intermediaries that participate in shaping the worlds they only purport to represent. Facebook is not simply a social networking site that lets users “connect with friends and the world around you.”1 As media scholar José van Dijck has argued, “Social media are inevitably automated systems that engineer and manipulate connections” (2013: 12). By the same token, Netflix is not a website that lets users “see what’s next and watch anytime, cancel anytime.”2 It cannot be seen as a neutral platform that merely queries its vast database in response to a user’s request and shows the movies they explicitly want to watch. Relying on vast amounts of data, Netflix’s algorithms analyze patterns in people’s tastes in order to recommend more of the same. Popularity is not only a quantifiable measure that helps companies such as Facebook and Netflix to determine relevant content. User input and the patterns emerging from it are turned into a means of production. What we see is no longer what we get. What we get is what we did, and that is what we see. When Netflix suggests we watch House of Cards, it is largely a matter of consumers getting back their own processed data. When the show was released in 2013, it quickly became a cornerstone for data-driven programming, the idea that successful business decisions are driven by big data analytics.

Of course, there is nothing inherently wrong with this form of data-driven media production. After all, it seems, many people enjoy watching the show. The interesting and potentially troubling question is how reliance on data and predictive analytics might funnel cultural production in particular directions, and how individual social media platforms code and brand specific niches of everyday life (van Dijck, 2013: 22). Starting from the basic question of how software is shaping the conditions of everyday life, this book sets out to explore the contours and implications of the question itself. In what ways can we say that software, and more specifically algorithms, shape everyday life and networked communication? What, indeed, are algorithms, and why should we care about their possible shaping effects to begin with?

Let us quickly return to my rainy November day. While chatting with friends on Facebook about the pre-Christmas dinner, two rather specific ads appeared on the right-hand side of my Facebook news feed. One was for a hotel in Lisbon, where I was going to travel for the conference, and the other was for a party dress. How did Facebook know about my upcoming trip, or that I had just bought my mother a Christmas gift from that shop? My musings were only briefly interrupted by one of my friends asking me about a concert I had recently been to. She had seen a picture I posted on Facebook from the concert a few days earlier. My other friend wondered: why hadn’t she seen the picture? After all, as she remarked, she checks her Facebook feed all the time. While these connections might be coincidental, their effects are not incidental. They matter because they affect our encounters with the world and how we relate to each other. While seeing ads for party dresses in a festive season might not appear strange, nor, for that matter, missing a picture posted from a concert, such programmed forms of sociality are not inconsequential. These moments are mediated, augmented, produced, and governed by networked systems powered by software and algorithms. Understood as the coded instructions that a computer needs to follow to perform a given task, algorithms are deployed to make decisions, to sort, and to make meaningfully visible the vast amount of data produced and available on the Web. Viewed together, these moments tell the story of how our lives are networked and connected. They hint at the fundamental question of who or what has the power to set the conditions for what can be seen and known, with whatever possible effects. To address this important question, this book proposes to consider the power and politics of software and algorithms that condense and construct the conditions for the intelligible and sensible in our current media environment.

The ideas of power and politics I have in mind are both very broad, yet quite specific. For one, this book is not going to argue that algorithms have power. Sure, algorithms are powerful, but the ways in which this statement holds true cannot simply be understood by looking at the coded instructions telling the machine what to do. Drawing on the French philosopher Michel Foucault’s (1982; 1977) understanding of power as exercised, relational, and productive, I intend to show how the notion of “algorithmic power” implies much more than the specific algorithm ranking, say, a news feed. What I am going to argue is that the notion of algorithmic power may not even be about the algorithm, in the more technical sense of the term. Power always takes on many forms, exercised not only through computable instructions but also through the claims made over algorithms. As such, we might say that algorithmic systems embody an ensemble of strategies, where power is immanent to the field of action and situation in question. Furthermore, following Foucault, power helps to produce certain forms of acting and knowing, ultimately pointing to the need to examine power through the kinds of encounters and orientations that algorithmic systems seem to be generative of.

Neither are the “politics” of this book about politics with a capital P. I will not be discussing parliamentary politics, elections, campaigns, or political communication in the strictest sense. Rather, politics is understood in more general terms, as ways of world-making—the practices and capacities entailed in ordering and arranging different ways of being in the world. Drawing on insights from Science and Technology Studies (STS), politics here is more about the making of certain realities than taking reality for granted (Mol, 2002; Moser, 2008; Law, 2002). In chapter 2 I will describe this form of politics of the real, of what gets to be in the world, in terms of an “ontological politics” (Mol, 2002). In ranking, classifying, sorting, predicting, and processing data, algorithms are political in the sense that they help to make the world appear in certain ways rather than others. Speaking of algorithmic politics in this sense, then, refers to the idea that realities are never given but brought into being and actualized in and through algorithmic systems. In analyzing power and politics, we need to be attentive to the way in which some realities are always strengthened while others are weakened, and to recognize the vital role of nonhumans in co-creating these ways of being in the world. If . . . Then argues that algorithmic power and politics are neither about algorithms determining how the social world is fabricated nor about what algorithms do per se. Rather, they are about how and when different aspects of algorithms and the algorithmic become available to specific actors, under what circumstances, and who or what gets to be part of how algorithms are defined.

Programmed Sociality

Increasingly, we have come to rely on algorithms as programmable decision-makers to manage, curate, and organize the massive amounts of information and data available on the Web, and to do so in a meaningful way. Yet the nature and implications of such arrangements are far from clear. What exactly is it that algorithms “do,” and what are the constitutive conditions necessary for them to do what they do? How are algorithms enlisted as part of situated practices, and how do they operate in different settings? How can we develop a productive and critical inquiry into algorithms without reducing it to a question of humans versus machines?

Let’s begin a tentative answer with a conceptual understanding of how software induces, augments, supports, and produces sociality. Here, I suggest the concept of programmed sociality as a helpful heuristic device. Through it we might study algorithmic power and politics as emerging through the specific programmed arrangements of social media platforms, and the activities that are allowed to take place within those arrangements. Facebook and other software systems support and shape sociality in ways that are specific to the architecture and material substrate of the medium in question. To do justice to the concept of programmed sociality, it is important to highlight that it does not lead us down a pathway of technological determinism. In using the term “programmed,” I draw on computer scientist John von Neumann’s notion of “program,” for which “to program” means to “assemble” and to “organize” (Grier, 1996: 52). This is crucial, as it frames software and algorithms as dynamic and performative rather than as fixed and static entities. By “sociality,” I refer to how different actors belong together and relate to each other. That is, sociality implies the ways in which entities (both human and non-human) are associated and gathered together, enabling interaction between the entities concerned (Latour, 2005). To be concerned with programmed sociality is to be interested in how actors are articulated in and through computational means of assembling and organizing, which always already embody certain norms and values about the social world. To exemplify how algorithmic media prescribe certain norms, values, and practices, let me describe how programmed sociality plays out in the specific context of Facebook, by focusing on friendships as a particularly pertinent form of being together online.

As Facebook has become an integral part of everyday life, providing a venue for friendships to unfold and be maintained, it is easy to forget just how involved Facebook is in what we often just take to be interpersonal relationships. Everything from setting up a profile and connecting with other users to maintaining a network of friends entails an intimate relation with the software underlying the platform itself. As van Dijck has pointed out, “what is important to understand about social network sites is how they activate relational impulses” (2012: 161). It is important to understand that relationships are activated online, but also how and when they are activated: by whom, for what purpose, and according to which mechanisms. With nearly two billion users, many of whom have been members of the platform for many years, most people have long forgotten what it felt like to become a member, how they became the friend that Facebook wanted them to be. Upon first registering with the site, the user is instantly faced with the imperative to add friends. Once a user chooses to set up an account, he is immediately prompted to start filling in the personal profile template. Users’ identities need to be defined within a fixed set of standards to be compatible with the algorithmic logic driving social software systems. If users could freely choose what they wish to say about themselves, there would be no real comparable or compatible data for the algorithms to process and work with. Without this orderly existence as part of the databases, our connections would not make much sense. After all, “data structures and algorithms are two halves of the ontology of the world according to a computer” (Manovich, 2001: 84). Being part of databases means more than simply belonging to a collection of data. It means being part of an ordered space, encoded according to a common scheme (Dourish, 2014). As Tarleton Gillespie (2014) points out, data always need to be readied before an algorithm can process them. Categorization is a powerful mechanism in making data algorithm-ready. “What the categories are, what belongs in a category, and who decides how to implement these categories in practice, are all powerful assertions about how things are and are supposed to be” (Bowker and Star, in Gillespie, 2014: 171).
The template provided by Facebook upon signing in constitutes only one of many forms of categorization that help make the data algorithm-ready. The politics of categorization becomes most pertinent in questions concerning inclusion and exclusion. The recurring conflicts over breastfeeding images and Facebook’s nudity-detection systems—comprising both algorithms and human managers—represent a particularly long-lasting debate over censorship and platform policies (Arthur, 2012). The politics of categorization is not just a matter of restricting breastfeeding images but one that fundamentally links database architecture and algorithmic operations to subjectification.

To understand how sociality is programmed—that is, how friendships are programmatically organized and shaped—let us consider the ways in which the platform simulates existing notions of friendship. As theorists of friendship have argued, shared activity and history are important aspects of considering someone a friend (Helm, 2010). Simulating and augmenting the notion of a shared history, Facebook provides several tools and techniques dedicated to supporting memory. As poorly connected or unengaged users pose a threat to the platform’s conditions of existence, programming reasons for engagement constitutes a key rationale from the point of view of platforms. On Facebook, connecting users to potential friends provides the first step in ensuring a loyal user base, because friendships commit. Functioning as a memory device, algorithms and software features do not merely help users find friends from their past; they play an important part in maintaining and cultivating friendships, once formed. As such, a variety of features prompt users to take certain relational actions, the most well-known being that of notifying users about a friend’s birthday. While simulating the gesture of phatic communication represented in congratulating someone on his or her birthday, the birthday-reminder feature comes with an added benefit: it is the most basic way of making users return to the platform, by providing a concrete suggestion for a communicative action to be performed. As I’ve described elsewhere, platforms like Facebook want users to feel invested in their relationships, so they are continually coming up with new features and functionalities that remind them of their social “obligations” as friends (Bucher, 2013).

While the traditional notion of friendship highlights the voluntary and durational aspects of becoming friends and becoming friends anew (Allan, 1989), the software, one may claim, functions as a suggestive force encouraging users to connect and engage with people in ways that are afforded by and benefit the platform. From the point of view of critical political economy, sociality and connectivity are resources that fuel the development of new business models. Platforms do not activate relational impulses in an effort to be nice. Ultimately, someone benefits financially from users’ online activities. This is, of course, a familiar story and one that many scholars have already told in illuminating and engaging ways (see Andrejevic, 2013; Couldry, 2012; Fuchs, 2012; Gehl, 2014; Mansell, 2012; van Dijck, 2013).

From the perspective of companies like Facebook and Google, but also from the perspective of legacy news media organizations (discussed in chapter 6), algorithms are ultimately folded into promises of profit and business models. In this sense, a “good” and well-functioning algorithm is one that creates value, one that makes better and more efficient predictions, and one that ultimately makes people engage and return to the platform or news site time and again. The question then becomes: What are the ways in which a platform sparks enough curiosity, desire, and interest in users for them to return?

The subtle ways of software can, for example, be seen in the manner in which Facebook reminds and (re)introduces users to each other. When browsing through my Facebook news feed it is almost as if the algorithm is saying, “Someone you haven’t spoken to in five years just liked this,” or, “This person whom you haven’t heard from in ages suddenly seems to be up to something fun.” Somehow, I am nudged into thinking that these updates are important, that I should pay attention to them, that they are newsworthy. Rather than meaning that friendships on Facebook are less than voluntary, my claim is that the ways in which we relate to each other as “friends” are highly mediated and conditioned by algorithmic systems. People we do not necessarily think about, people we might not remember, or people we might not even consider friends continue to show up on our personalized news feeds, as friend suggestions, in birthday reminders, and so forth. While it is often difficult to recognize how Facebook “actively steers and curates connections” (van Dijck, 2013: 12), moments of everyday connectivity provide a glimpse into the ways in which the effects of software might not necessarily feel as incidental as they might appear to be.

The programmed sociality apparent in Facebook is about more than making people remember friends from their past with the help of friend-finding algorithms. From the very beginning, a friendship is formed and the friendship is “put to test.” From a political economic standpoint, this is very important, as not every friendship is equally valuable. Some relations are more “promising” and “worthwhile” than others. With more friends than the news feed feature allows for, friendships are continuously monitored for affinity and activity, which are important measures for rendering relations visible. Friendships in social media are programmed forms of sociality precisely because they are continuously measured, valued, and examined according to some underlying criteria or logic. As this book will show in more detail, the news feed algorithm plays a powerful role in producing the conditions for the intelligible and sensible, operating to make certain users more visible at the expense of others. The Facebook news feed displays an edited view of what one’s friends are up to, in an order of calculated importance, with the most important updates at the top of the feed. Every action and interaction connected to Facebook, be it a status update, a comment on someone’s photo, or a “like button” clicked, may become a story on someone’s news feed. Not every action, however, is of equal importance, nor is every friend for that matter. Friendships are put to test, because friendships need to be nurtured and maintained to stand the “test of time.” Algorithms decide not only which stories should show up on users’ news feeds but also, crucially, which friends. Rachel, a 24-year-old journalist from New York City, whom I interviewed for this book about her perceptions of algorithms, exclaimed that “Facebook ruins friendships.” With more than seven hundred friends, Rachel says that she is constantly taken aback by all the information and people Facebook seems to be hiding from her feed:

So, in that sense, it does feel as if there is only a select group of friends I interact with on the social network, while I’ve practically forgotten about the hundreds of others I have on there. An example of this is a friend from high school, who liked one of my posts a few weeks back. I’d totally forgotten she was even on Facebook until she liked it and we started chatting.

Rachel’s experience is reminiscent of what I mean by programmed sociality, the notion that social formations and connections are algorithmically conditioned and governed by the sociotechnical and political-economic configurations of specific media platforms. Rachel’s worry about Facebook ruining friendship should also remind us that algorithms need to be understood as powerful gatekeepers, playing

an important role in deciding who gets to be seen and heard and whose voices are considered less important. Programmed sociality, then, is political in the sense that it is ordered, governed, and shaped in and through software and algorithms. If we want to consider everyday life in the algorithmic media landscape, we need to pay attention to the ways in which many of the things we think of as societal—including friendship—may be expressed, mediated, and shaped in technological designs, and how these designs, in turn, shape our social values. As we will see throughout the book, such considerations, however, do not stop with values in design, but exceed the purely technical (whatever that is taken to mean) in important ways.

A key argument of this book is that the power and politics of algorithms stem from how algorithmic systems shape people’s encounters and orientations in the world. At the same time, I claim that this shaping power cannot be reduced to code. Specifically, I argue for an understanding of algorithmic power that hinges on the principle of relational materialism, the idea that algorithms “are no mere props for performance but parts and parcel of hybrid assemblages endowed with diffused personhood and relational agency” (Vannini, 2015: 5). Thus, it is important to acknowledge that while we start with the question of how software and algorithms shape sociality by looking at materiality in the more conventional sense as “properties of a technology,” the answer cannot be found in these properties alone, but rather in the ways in which programmed sociality is realized as a function of code, people, and context.

Computable Friendships

The concept of friendship provides an apt example for understanding programmed sociality and algorithmic life, because it shows the discrepancies between our common-sense notions of friendship and the ways in which friendship becomes embedded in and modeled by algorithmic infrastructures. Friendship is deeply rooted in the human condition as a fundamental aspect of being together with other people, one that is always already contested across cultural and historical contexts. Traditionally, friendship has been thought of as an exclusive social relation, a private and intimate relation between two persons (Aristotle, 2002; Derrida, 2005; Hays, 1988). For this reason, true friendship has been regarded as something that one cannot have with many people at the same time, simply because it requires time to build, nurture, and maintain. Compared to Aristotle’s conception of friendship as something rather precious that one cannot have with many people at once (Aristotle, 2002), Facebook seems to promote the complete opposite idea.

The way the platform puts friendships at the center of a business model is no coincidence, of course, and is probably one of the core reasons Facebook has evolved into an unprecedented media company during the past decade. In a patent

application filed by Facebook concerning the People You May Know (PYMK) feature, no doubt is left as to the value of friendships for Facebook: “Social networking systems value user connections because better-connected users tend to increase their use of the social networking system, thus increasing user-engagement and corresponding increase in, for example, advertising opportunities” (Schultz et al., 2014). Software intervenes in friendships by suggesting, augmenting, or encouraging certain actions or relational impulses. Furthermore, software is already implicated in the ways in which the platform imagines and performs friendships. Contrary to the notion that “friendship clearly exists as a relation between individuals” (Webb, 2003: 138), friendship on Facebook exists as a relation between multiple actors, between humans and non-humans alike. As Facebook exemplifies in another patent document:

[T]he term friend need not require that members actually be friends in real life (which would generally be the case when one of the members is a business or other entity); it simply implies a connection in the social network. (Kendall and Zhou, 2010: 2)

The disconnect between how members usually understand friendship and the ways in which Facebook “understands” friendship becomes obvious in the quote above. According to Facebook, a user can be “friends” with a Facebook page, a song, a movie, a business, and so on. While it might seem strange to consider a movie a friend, this conception of friendship derives from the network model of the Web, in which users and movies are considered “nodes” in the network and the relationship that exists between them an “edge” or, indeed, a friend. As another patent puts it, “the terms ‘user’ and ‘friend’ depend on the frame of reference” (Chen et al., 2014). It is exactly these different and sometimes conflicting frames of reference that are of interest in this book. A core contention is that media platforms and their underlying software and infrastructures contain an important frame of reference for understanding sociality and connectivity today. If we accept that software can have a frame of reference, a way of seeing and organizing the world, then what does it mean to be a friend on Facebook or, more precisely, what are friends for, if seen from the perspective of the platform?

Facebook friendships are, above all, computable. In an age of algorithmic media, the term algorithmic, used as an adjective, suggests that even friendships are now subject to “mechanisms that introduce and privilege quantification, proceduralization, and automation” (Gillespie, 2016a: 27). Measuring the performance of individuals and organizations is nothing new, though. As sociologists Espeland and Sauder (2007) suggest, social measurements and rankings have become a key driver for modern societies during the past couple of decades. According to philosopher Ian Hacking, “society became statistical” through the “enumeration of people and their habits” (1990: 1). Hacking connects the emergence of a statistical society to

the idea of “making up people,” meaning that classifications used to describe people influence the forms of experience that are possible for them, but also how the effects on people, in turn, change the classifications:

The systematic collection of data about people has affected not only the ways in which we conceive of a society, but also the ways in which we describe our neighbour. It has profoundly transformed what we choose to do, who we try to be, and what we think of ourselves. (1990: 3)

If we accept Hacking’s notion of “making up people,” it becomes necessary to interrogate the ways in which counting, quantification, and classification limit the conditions of possibility for subjects on Facebook. The manner in which the categories and classifications are constituted is not arbitrary or neutral; nor are the implications incidental. Here, I want to suggest that Facebook friends are “made-up people” in the sense described by Hacking. Friends are not natural kinds but, rather, constructions that serve specific purposes in a specific historical and cultural context. As has already been pointed out, the category “friend” as used by the Facebook platform does not even require members of that category to be human beings. While all Facebook users may belong to the set of “friends,” subsets of “friends” or “users” are dynamically being made up to serve different purposes. As exemplified in a Facebook patent application detailing the idea of adaptive ranking of the news feed: “The social networking system divides its users into different sets, for example, based on demographic characteristics of the users and generates one model for each set of users” (Gubin et al., 2014). In other words, sets are powerful classification devices implying a politics in terms of demarcating who or what belongs and what does not, “what the set ‘counts’ in, what counts as members in the set” (Baki, 2015: 37). Subsets are “made up” based on demographic information, attributes of connections, frequency of interaction, and other factors affecting the features used for modeling what users can see or not see (we will return to the politics of visibility in chapter 4). If friends are not of a natural kind, what kinds of friends or, rather, subsets are there? Again, we might turn to Facebook patent applications for some tentative answers. According to a document describing a technique for optimizing user engagement, some friends seem more valuable than others (Chen et al., 2014).
Particularly useful friends are called “top friends,” defined as persons having the highest measures of relatedness to a specific user (Chen et al., 2014). The measures of relatedness are generated using a so-called coefficient module and depend on a variety of factors (as is the case with all of Facebook’s computations)—for example, “based on how many times or how frequently interactions occurred within the last 30 days, 60 days, 90 days, etc.” (Chen et al., 2014). Top friends are used for a number of purposes and in a variety of contexts such as:

To identify participants to play online games; to identify relevant connections to the user for inclusion in her social network; to display a listing of photos of persons having highest relevance to the user; to otherwise display or list an identification of persons having highest relevance to the user; to identify persons with whom the user can engage in an instant message or chat session; etc. (Chen et al. 2014)
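For readers who find a concrete sketch helpful, the counting logic gestured at in the patent can be caricatured in a few lines of Python. Everything here is invented for illustration: the interaction log, the recency windows, and the weights are my own assumptions, since the patent says nothing beyond counting interactions over 30-, 60-, and 90-day windows.

```python
from collections import Counter

# Hypothetical interaction log: (friend_id, days_ago) pairs.
LOG = [("ana", 2), ("ana", 10), ("ben", 45), ("ana", 70), ("cyd", 85)]

# Invented recency weights; the patent only mentions counting interactions
# "within the last 30 days, 60 days, 90 days, etc."
WINDOWS = [(30, 1.0), (60, 0.5), (90, 0.25)]

def relatedness(log):
    """Score each friend by recency-weighted interaction counts."""
    scores = Counter()
    for friend, days_ago in log:
        for max_days, weight in WINDOWS:
            if days_ago <= max_days:
                scores[friend] += weight
                break  # count each interaction once, in its freshest window
    return scores

def top_friends(log, k=2):
    """The k friends with the highest 'relatedness' to the user."""
    return [friend for friend, _ in relatedness(log).most_common(k)]

# ana: 1.0 + 1.0 + 0.25 = 2.25; ben: 0.5; cyd: 0.25
assert top_friends(LOG) == ["ana", "ben"]
```

Even in this toy form, the point stands: whoever sets the windows and weights decides which friendships count as “top.”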

Above all, top friends are used to prioritize information associated with them above others. Top friends are made-up people insofar as “those kinds of people would not have existed, as a kind of people, until they had been so classified, organized and taxed” (Hacking, 2007: 288). The subset of algorithmic top friends can be seen as a new category of people, emerging in the age of programmed sociality and algorithmic life. There are many more. As the notion of top friends shows, computable friendships hinge on measuring and evaluating users in order to determine their friendship status. While friendships have always been qualitatively determined, as the notion of “best friend” suggests, the extent to which Facebook now quantifiably produces and classifies friendships works to dehumanize sociality itself by encouraging an empty form of competitiveness. Like most social media platforms, Facebook measures social impact, reputation, and influence through the creation of composite numbers that function as a score (Gerlitz & Lury, 2014: 175). The score is typically used to feed rankings or enhance predictions. The computing of friendships is no different. In another patent application, Facebook engineers suggest that the value of friendship is not confined to users but also serves an essential role in sustaining the social networking system itself. As Schultz et al. (2014) suggest, better-connected users tend to increase their use, thereby increasing advertising opportunities. A so-called friendship value is computed not only to determine the probability of two users “friending” but also to make decisions as to whether to show a specific advertising unit to a user. The higher the score, the “better” Facebook deems the friendship to be, increasing the likelihood of using the connection to help promote products.
The value of a friendship is produced as a composite number based on a “friendship score, sending score, receiving score or some combination of the scores as determined by value computation engine” (Schultz et al., 2014). According to Schultz et al., “the sending and receiving scores reflect the potential increase in the user’s continued active utilization of the social networking system due to a given connection” (2014: 2). From a computational perspective, friendships are nothing more than an equation geared toward maximizing engagement with the platform.
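Purely as an illustration, the composite described by Schultz et al. might be imagined along the following lines. The field names echo the patent’s vocabulary, but the weighting scheme and the numbers are my own assumptions; the patent specifies only that the value is the friendship, sending, and receiving scores “or some combination of the scores.”

```python
from dataclasses import dataclass

@dataclass
class ConnectionScores:
    """Scores for one connection; names loosely follow Schultz et al. (2014)."""
    friendship: float  # estimated probability that two users will "friend"
    sending: float     # projected uplift in the sender's platform activity
    receiving: float   # projected uplift in the receiver's platform activity

def friendship_value(s, weights=(0.5, 0.25, 0.25)):
    """Composite 'friendship value' as a weighted sum of the three scores.

    The weights are invented; the patent only says the value is
    'some combination of the scores'.
    """
    w_f, w_s, w_r = weights
    return w_f * s.friendship + w_s * s.sending + w_r * s.receiving

# From the platform's standpoint, a higher value makes the connection more
# "worth" promoting, whether in ads, friend suggestions, or the news feed.
a = ConnectionScores(friendship=0.9, sending=0.4, receiving=0.6)
b = ConnectionScores(friendship=0.2, sending=0.1, receiving=0.1)
assert friendship_value(a) > friendship_value(b)
```

The sketch makes the reduction vivid: a relationship collapses into one number that exists to be compared against other numbers.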

Far from loving the friend for the friend’s own sake, which would be exemplary of the Aristotelian notion of virtue ethics and friendship, Facebook “wants” friendships to happen in order to increase engagement with the social network, ultimately serving revenue purposes. The quantification and metrification of friendship are not merely part of how connections are computed by Facebook’s algorithmic infrastructure but increasingly make up the visuals of social networking systems

through the pervasive display of numbers on the graphical user interface. With more than 3 billion likes and comments posted on Facebook every day, users both express their sentiments and engagement and are reminded and made aware of these actions and affects through the visual traces thereof. As the software artist Ben Grosser notes, “A significant component of Facebook’s interface is its revealed enumerations of these ‘likes,’ comments and more” (Grosser, 2014: 1). Grosser questions whether people would add as many friends if they were not constantly confronted with how many they have, or whether people would “like” as many ads if they were not always told how many others have liked them before them. Grosser’s artwork The Demetricator, a software plugin that removes all metrics from the Facebook interface, critically examines these questions. According to Grosser, Facebook draws on people’s “deeply ingrained ‘desire for more’ compelling people to reimagine friendship as a quantitative space, and pushing us to watch the metric as our guide” (Grosser, 2014). The pervasive enumeration of everything on the user interface functions as a rhetorical device, teaching users that more is better. More is also necessary if we consider the operational logics of the platform. The drive toward more is evident when considering the value of friendship, given that having more friends increases the likelihood of engaging with the site. Friends are suggested based on mutual friends, but also on factors such as low activity levels or few friends. The idea is that, by suggesting friends with low activity levels, Facebook “can enable those users to likely have more friends as a result of being suggested [. . .] and thereby likely increasing the candidate user’s engagement level with the social networking system” (Wang et al., 2012: 6). Friendships, then, are variously made and maintained by humans and non-humans alike.
The specific ways in which friendships are rendered susceptible to computation are too numerous to catalogue in full. The purpose, however, of explicating the term “programmed sociality” as core to understanding algorithmic life is to draw attention to software and computational infrastructure as conditions of possibility for sociality in digital media.

Guilt by Association

Whereas “friend” in the sociological sense signifies a voluntary relationship that serves a wide range of emotional and social aims, “friends” as seen from the perspective of the platform are highly valuable data carriers that can be utilized for a variety of purposes. One of the core principles underlying networked media is that information is derived as much from the edges (connections) as it is from the nodes (users, businesses, objects). This means that users do not just provide data about themselves when they fill out profiles, like things, or comment on posts; in doing so, they simultaneously reveal things about the people and things they are interacting with. If data are missing from a user’s personal profile, or if that user is not as engaged as the platform would prefer, the best way to extract more information about the user is

through his or her various connections. From a platform perspective, friends are in the data delivery business. This becomes particularly evident when looking at the patent documents by Facebook describing techniques related to advertising. For example, when a particular member has provided insufficient personal information, ads are tailored and targeted based on that member’s friends. Facebook calls this “guilt by association” (Kendall & Zhou, 2010: 2). While the authors of the patent document acknowledge that they are “giving credence to an old adage,” the word “guilt” is worth pondering. Guilt evokes notions of responsibility, autonomy, and accountability. Who or what is responsible for the content shown on Facebook, and who might be held accountable? While it might be easier to understand how users’ own actions determine what content they see online, it seems more difficult to come to terms with the notion that users also play a crucial role in determining what their friends see on their feeds. Guilt by association, as Facebook uses the term, implies that users are made “complicit” in their friends’ ad targeting, which seems highly problematic. While it is now commonplace to say that users, not the media platforms they are using, are the product, the extent to which users are used to promote content and products—often without their explicit knowledge—is unprecedented in the age of algorithmic media. If the classical notion of friendship is political in the sense that it assumes gendered hierarchy through the notion of brotherhood (Derrida, 2005), the politics of algorithms suggests hierarchies of a different sort: hierarchies of what is “best,” “top,” “hot,” “relevant,” and “most interesting.”
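The inferential move behind “guilt by association” can be caricatured in a few lines of code: when a member’s own profile is sparse, attribute to them whatever enough of their friends declare. The profiles, the threshold, and the majority-vote logic below are entirely invented for illustration and bear no relation to Facebook’s actual targeting systems.

```python
from collections import Counter

# Hypothetical profile data: user -> set of declared interests.
PROFILES = {
    "rachel": set(),            # sparse profile: nothing declared
    "ana": {"yoga", "travel"},
    "ben": {"travel", "cooking"},
    "cyd": {"travel"},
}
FRIENDS = {"rachel": ["ana", "ben", "cyd"]}

def inferred_interests(user, min_share=0.5):
    """Attribute to `user` any interest declared by at least `min_share`
    of their friends -- 'guilt by association' in miniature."""
    friends = FRIENDS.get(user, [])
    counts = Counter(i for f in friends for i in PROFILES[f])
    return {i for i, n in counts.items() if n / len(friends) >= min_share}

# "travel" appears in 3 of 3 friend profiles; "yoga" and "cooking" in 1 of 3.
assert inferred_interests("rachel") == {"travel"}
```

The toy makes the chapter’s point concrete: Rachel never declared an interest in travel, yet her friends’ data make her targetable for it all the same.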

When examining the contours and current state of algorithmic life, it is important to understand how the mechanisms through which algorithmic media shape sociality are deeply intertwined with power and politics. This book is not just about highlighting the role of algorithms as a core governing principle underlying most online media platforms today, but also about showing how algorithms always already invoke and implicate users, culture, practice, ownership, ethics, imaginaries, and affect. This means that talking about algorithms implies asking questions about how and when users are implicated in developing and maintaining algorithmic logics, as well as questions about governance, who owns the data, and to what ends the data are put to use. While friends have always been valuable for advertisers, algorithms seem to lessen the autonomy and intentionality of people by turning everything they do into a potential data point for the targeting of ads and news feed content. Such is the network logic, which users cannot escape. For neoliberalism, “friendship is inimical to capital, and as such, like everything else, it is under attack” (Cutterham, 2013: 41). Moreover, as Melissa Gregg holds, “‘friendship’ is labour in the sense that it involves constant attention and cultivation, the rewards of which include improved standing and greater opportunity” (2007: 5). As Langlois and Elmer point out, “social media seek to mine life itself” (2013: 4). That is, social media platforms “do much more than just sell users’ attention to advertisers: they actually help identify the very strategies through which attention can be fully harnessed” (Langlois & Elmer, 2013: 4). Algorithms are key to this end. If we want to understand the ways in which

power and politics are enacted in and through contemporary media, we need to look more closely at the ways in which information, culture, and social life are being processed and rendered intelligible. In this book, I set out to do so.

Examining algorithmic media, and the ways in which life is increasingly affected by algorithmic processing, means acknowledging that algorithms are not static things but, rather, evolving, dynamic, and relational processes hinging on a complex set of actors, both human and nonhuman. Programmed sociality implies that social relations such as friendships are not merely transposed onto a platform like Facebook but are more fundamentally transduced. The concept of transduction names the process whereby a particular domain is constantly undergoing change, or individuation, as a consequence of being in touch with, or touched by, something else (Mackenzie, 2002; Simondon, 1992). Rather than maintaining an interest in what friendship is, transduction and the related term “technicity” help to account for how domains such as friendship come into being through sociomaterial entanglements. Using Facebook to access a social network transduces or modulates how a person connects with friends. When using Facebook, the technicity of friendship unfolds as conjunctions between users and algorithms (e.g., the PYMK feature), coded objects (e.g., a shared video), and infrastructure (e.g., protocols and networks). As Kitchin and Dodge point out: “This power to affect change is not deterministic but is contingent and relational, the product of the conjunction between code and people” (2005: 178). Transduction and technicity become useful analytical devices in exemplifying the concept of programmed sociality, as they point toward the ways in which software has the capacity to produce and instantiate modalities of friendship specific to the environment in which it operates. The productive power of technology, as signified by the concept of technicity, does not operate in isolation or as a unidirectional force. Algorithms and software, in this view, do not determine what friendships are in any absolute or fixed sense.
Rather, technicity usefully emphasizes the ways in which algorithms are entities that fundamentally hinge on people’s practices and interaction, in order to be realized and developed in the first place. Taking such a perspective allows us to see friendship and other instances of programmed sociality as emerging sociomaterial accomplishments.

Back to the rainy November day introduced at the beginning of the chapter: the question of what categories were used to determine the specific ads or the content of my and my friends’ news feeds persists. Was it my clicking behavior, my age and gender, pages that I have liked, the cookies set by the online design store where I bought the gift for my mom, my friends’ clicking behavior, the friends of my friend, all or none of the above? Whatever the exact reason might be, online spaces are always computed according to underlying assumptions, norms, and values. Although we simply do not know, and have no way of knowing, exactly how our data and the algorithmic processing are shaping our experiences online, a critical perspective on sociotechnical systems, along with personal encounters and experiences with algorithmic forms of connectivity and sociality, might help to

illuminate the ways in which “categorization is a powerful semantic and political intervention” (Gillespie, 2014: 171). Judging solely by the content and ads served up on my news feed, I am perceived as having children, being overweight, and being single—none of which is true, at least for the time being. While the case of Facebook and programmed friendship provides a useful entry point to questions of how information is governed and organized online, an understanding of algorithmic power and politics cannot simply be reduced to a single social media platform. As this book will show, how we come to know others, the world, and ourselves as mediated through algorithms is the result of complex sociomaterial practices that exceed the specific coded instructions. Facebook and the notion of programmed sociality are but one way in which algorithmic arrangements bring new possibilities, realities, and interventions into being. And there are many more.

By now, I hope to have instilled enough curiosity to keep you exploring the power and politics of algorithms in the contemporary media landscape with me throughout the coming chapters. The goal of the book is for readers not simply to be the subjects of algorithmic judgment but, rather, to be in a position to critically judge the workings of the algorithmic apparatus for themselves.

Outline of the Book

The overall aim of this book is to sketch the contours of an algorithmic media landscape as it is currently unfolding. While algorithms and software are starting to catch the interest of social scientists and humanities scholars, having become something of a buzzword in media and communication studies in recent years, we are only at the beginning of understanding how algorithms, and computation more broadly, are affecting social life and the production and dissemination of knowledge as we know it. This book seeks to contribute to these discussions by offering conceptual, theoretical, and empirical analyses of the ways in which algorithms produce the conditions for the sensible and intelligible.

In chapters 2 and 3, which comprise the conceptual framework of the book, I focus on the ontological, epistemological, and methodological dimensions of algorithms. Chapter 2 provides an outline for understanding what algorithms are and how they are conceptualized in different ways. While the chapter functions as a conceptual introduction to the interdisciplinary field of critical algorithm studies, merging perspectives from computer science, the social sciences, and the humanities, it mainly does so for analytical reasons. I argue that these are not simply different perspectives on a static object called an algorithm but, rather, following insights from STS (Law, 2002; Mol, 2002), provide different versions of what an algorithm is. Even if we assume that we are talking about the same algorithm (say, the “Facebook algorithm” or “k-nearest neighbor”), the algorithm is always “many different things. It is not one, but many” (Law, 2002: 15).
