Foreword
Not long ago, I was given the opportunity to reflect on what was accomplished with the publication of The Panoptic Sort in 1993 and to offer some additional thoughts about what remained to be explored about these concerns.1 This Foreword will serve the purpose of calling your attention to some of the contributions that have been made to this area, and to my thinking, since its initial publication. Although I have never considered myself a historian, I have come to see the value of revisiting some early writing with the aid of references to scholarship that has amplified, challenged, and perhaps even corrected mischaracterizations of what was taking place at the time of the original publication. It is my hope that you will find these references useful in your own assessments of what I had set forth. And, in an effort to associate them more closely with the issues to be considered, I have placed them in the kind of sequential order that is traditionally followed in a Preface, something that was neglected in the first edition.
The Prologue, or Chapter 1, was intended to set forth a working definition of the sociotechnical system that I referred to as the “panoptic sort,” beginning with reference to its application to the evaluation of individuals seeking financial credit. It called attention to the kinds of data, or information, that had been selected as a basis for evaluating the suitability of each candidate for such an important opportunity to improve their life chances. Although passing note is made of Jürgen Habermas’s assumptions about how rational discourse was supposed to produce human understanding, far more extended reference is made to the contributions made to my understanding of panoptic strategies by Karl Marx, especially with regard to the continuing value of exploring the relationships between the economic base and superstructure.
Related contributions, especially relevant these days with regard to the internet and social media, have re-emerged as part of ongoing debates about the nature of productive and nonproductive labor.
Jacques Ellul’s contributions are primarily associated with the increasingly central role of technology within society, and they hold particular relevance for our understanding of the panoptic sort as it continues to evolve. This is especially true with regard to the extent that technology has a mind of its own, something of the sort that we are beginning to see in the developments taking place within computing and algorithmically derived societal interventions. Max Weber’s contributions, especially those related to the nature and importance of rationalization, and to the role of knowledge in shaping the distribution of power within society, invite our attention to the ways in which both shape and are shaped by their incorporation into the legal system.

The Panoptic Sort. Second Edition. Oscar H. Gandy, Jr., Oxford University Press. © Oxford University Press 2021. DOI: 10.1093/oso/9780197579411.003.0001
Two of the more important influences on my thinking about the panoptic sort were the writings of Michel Foucault and Anthony Giddens. From Foucault, I borrowed the concept of panopticism, which became the driving force behind my thinking about power and social control. While the structural dimensions of Jeremy Bentham’s design for a prison came to be overemphasized in much writing about panopticism, Foucault applied his disciplinary approach to surveillance and classification, and his experimental approach to determining which interventions made the most sense with regard to prisoners and students, as well as the managers of the public health system. Like Weber, Foucault understood the nature of some of the complex relationships between knowledge and power. Going beyond the disciplinary function of control through the denial of opportunity and the punishment of disobedience, Foucault called our attention to the creative or generative impact of sociotechnical systems, helping to produce more and more types of individuals in need of surveillance, assessment, and, eventually, behavioral intervention.
Anthony Giddens’ contributions to my thinking about the panoptic sort, and about the multidimensional technologies of discrimination that continue to emerge in order to shape our lives and improve the efficiency and effectiveness of efforts at the production of influence, are really without comparison. His theory of structuration finds something of a midpoint between theories of hegemonic domination, such as the gentle processes of cultivation that George Gerbner described as applied to the role of the mass media, and the nearly absolute denial of external influence by some of the more extreme cultural theorists of a not too distant moment in time. Giddens’ notion of distanciation is especially useful as a way to understand the operation of power from afar; not limited to a central tower but distributed throughout a network of elements later characterized by Haggerty and Ericson as part of “the surveillant assemblage” that facilitated the generation of the “data doubles” with, and against which, we are continually being compared.2 His emphasis on the variety of settings and locales in which the surveillance of interactions and relations comes to be defined through analysis has influenced some of the ways in which the nature of the panoptic sort as a process is revealed throughout the rest of the book.
The serious work of describing the panoptic sort began with a focus in Chapter 2 on the nature and use of information and power in the variety of ways that were made possible through its operation in different sectors of the economy, and in society more generally. What is striking, and actually a bit embarrassing upon reflection after all these years, despite its place in the chapter’s title, is the fact that “information” is never defined in this lengthy chapter.
Instead, the introduction jumps immediately to its uses. There is really no justification for this failure to define, given the important contribution to our understanding of the meaning of information provided by Russell Ackoff, back in 1989. Unfortunately for me, the greatest number of citations to his comments on the ordering of the essential values associated with its generation and use came after the publication of some of his “greatest hits” in 1999.3
Ackoff suggested that an ounce of information was worth a pound of data, while an ounce of knowledge was worth a pound of information, and an ounce of understanding was worth a pound of knowledge. The value of wisdom was not estimated, and it is worth reflecting on why that particular computational challenge had not been pursued. The distinction between data and what we generally refer to as information is actually quite important, and we’ll explore it more closely when we turn to the actual operation of the panoptic sort as presented in Chapter 3.
With power as the primary point of emphasis in that chapter, it begins with a brief introduction to the three primary functions involved in the sorting process: identification, classification, and evaluation or assessment. Although there continue to be differences of opinion with regard to the meaning of identification, I underscored then, and continue to underscore, that identification is linked primarily, for a variety of functional purposes, to a single individual. While classification is also an aspect of identification, the fact is that the data used for the classification of an individual is likely to be used to identify other individuals who share similar attributes or characteristics. The sharing of characteristics among variously sized groups of people raises increasingly troublesome concerns about the kinds of rights that people may have, or claim, regarding the collection and use of data and information about them as individuals, as well as members of groups. John Cheney-Lippold provides an extensive and insightful examination of what he refers to as “characterization,” and, as we will later explore in more depth, the manner in which this process comes to be dominated through the use of sophisticated computers for automated algorithmic processing of masses of data.4
The kinds of power that matter to us today, and appear likely to matter to people like us well into the future, have been described by economists like Randall Bartlett, Samuel Bowles, and Herbert Gintis, in addition to the central contributions made by Marx, Foucault, Giddens, Weber, and Ellul, which are explored in some detail in this chapter. Only the briefest of introductions to the role played by information technology is included in this chapter, although it has become a central focus of our contemporary concerns as they relate to the use of computers. Somewhat more critically, emphasis is placed on their ability to collect, process, and evaluate the knowledge, inferences, and behavioral interventions that might be designed and implemented by autonomous machines.
After the chapter offers more details about contributions of Foucault, Giddens, and Marx to our understanding of panopticism, our focus shifts to the role of technology in the transformations taking place within capitalist nations. The criticism directed against the technicism of Ellul’s descriptions of this transformative force has been extensive. Yet I believe that the kinds of consequential decisions that have been taken by powerful actors who actually shared his views seem likely to shape our foreseeable futures.
James Beniger provided us with a well-detailed assessment of the central role being played by information technology as an enabler of the instrumental rationalization privileged by Weber. But like Frank Webster and Kevin Robins, Beniger was more critical of managerial pursuits of efficiency through a form of Social Taylorism that contributed to unexpected conflicts and crises within the capitalist production system. The development of management systems, of which the panoptic sort was merely one of many types, was oriented toward the redistribution of power among different sets of actors and institutions, a process that continues to this day. This effort toward the rationalization of control, both by managers of capitalist firms and by managers of governmental systems, is focused primarily on information technologies.
Changes in the nature of information technologies, many of which were the focus of scholarly writing about The Information Age, were of central concern to these managers and were discussed in terms of speed, reliability, and ease of operation. A related feature was miniaturization, which facilitated the addition of more and more subsystems, including microprocessors capable of completing more and more calculations on data being gathered from more and more remotely networked sources. While much of the focus was on information and the nature of its collection, storage, and processing, this moment in time was also marked by the rise of this kind of information as a commodity.5 This development, which helped to generate intense debates about the nature of ownership rights, including those associated with concerns about privacy, would become a central issue in public policy deliberations.6
In Chapter 3, we turned our attention more specifically to the operation of the panoptic sort as a discriminatory technology. It is here that the influence of Klaus Krippendorff becomes central to my attempts to make this process more meaningful as a technology. But it also came to be understood as a technological operation that was more dependent upon social theory, including communication theory, than one might ordinarily assume when talking about technological systems. This theoretical emphasis is clear in Krippendorff’s definition of content analysis as a “technique for making replicable and valid inferences from data to their context.” However, as you will see as we move a bit further through the process as I see it, especially with regard to the developments in
the processing of “big data,” context is increasingly being cast aside, despite its importance. I invite you here to consider the important contribution made by Helen Nissenbaum in her book about “privacy in context,” and the claims being made by individuals that “contextual integrity” is something of value that should be respected like other important societal norms, which vary among people situated in different societal domains.7
It is probably worth noting here that, while Nissenbaum’s book is focused on concerns about privacy, my first reference to privacy did not come until the end of Chapter 2. There it was framed in quite critical terms because of the manner in which it had generally been discussed in the United States. Those discussions tended to ignore the manner in which data gathering and analysis enhanced the power of bureaucratic agencies, both public and private. Instead, surveillance was the term of art that I emphasized. This began with Christopher Dandeker’s reference to surveillance early in the Prologue and ended with references in the final chapter to feedback loops, whereby the use of the panoptic sort led to growing mistrust among the public. Unfortunately, this mistrust called for still more of the surveillance that continues to threaten our democracies.
While privacy is an important focus of concern, the rise of surveillance as a critical scholarly and political focus has been impressive. The study of surveillance has become an academic specialization, primarily at the graduate level, and its success can be attributed largely to the efforts of David Lyon and his colleagues and students at Queen’s University in Kingston, Ontario, and around the globe.8
A highly influential journal, Surveillance & Society, which Lyon helped to develop, publishes articles that examine the nature and consequences that flow from surveillance in all of its forms.9 A Handbook of Surveillance Studies added to the resources available to support the development of research and scholarship in this area.10
Surveillance makes use of a variety of techniques and strategies to transform large bodies of data gathered by different means from a multitude of sources into information, knowledge, and applications designed to increase control over targets, some of which may have been selected on the basis of potential use. While I take due note of the important role played by government surveillance, including the importance of government data, such as that gathered by the Census Bureau to decision-making within business and industry,11 the primary focus of this chapter and much of the book is not on the government. Instead, its focus is on the profit-seeking firms engaged in the capture and exploitation of the value to be derived from the gathering and transformation of information about global populations.
Characterization of the corporate data machine begins with the demand for and use of information about the labor force, beginning with their applications
for employment. It moves fairly quickly to consideration of the kinds of data and information being gathered about consumers, such as that gleaned from surveys, interviews, and, somewhat more indirectly, from records of transactions. A somewhat more specialized method for the gathering of data and information comes from carefully designed experiments, very few of which involve the kinds of informed consent that have become traditional, if not a formal requirement, within academic institutions.
The transformation of all this data into meaningful information and knowledge depends upon increasingly sophisticated processing, informed in part by theoretical models, but made increasingly efficient in recent times through computational techniques guided by artificial intelligence and machine learning. A fairly early exploration of developments taking place in the area of artificial intelligence by Philip Agre made a particularly important contribution to scholarly understanding of these developments. This was primarily because of the way he explored the links between technological development and those taking place within the humanities and social sciences.12
The literature in this area is developing quite rapidly, increased in part through the development and growth in academic specializations in data science. Vasant Dhar provides a useful introduction to the variety of ways in which data science, especially that applied to predictive applications, differs substantially from applications we have traditionally associated with statistics.13 While there is increasing concern being expressed about the negative consequences flowing from developments in data science, some critical scholars, such as Jonathan Cinnamon, have attempted to identify some of the socially beneficial outcomes that might emerge from the development of “a grassroots data science.”14 Still others have focused their energies on finding ways to overcome the problems of bias and error that continue to be identified in much of this work.15
Even back in 1990, when this book was being written, the statistical technology involved in the identification of population clusters, which facilitated the use of segmentation strategies for the delivery of more effective and efficient advertisements, as well as attempts at behavioral “nudges,”16 had already begun to be used within commercial and governmental spheres. The closing section of this lengthy chapter revisits the important distinctions between some key terms in order to underscore their roles within the operation of the panoptic sort. There are important distinctions that ought to be drawn between classifications as a form of identification and a variety of “documentary tokens,” like licenses and birth certificates, that are used to develop confidence that actions being taken apply to particular individuals. While classifications are also parts of identificatory processes, their application is not primarily oriented toward particular individuals but toward particular types of individuals.
As Paul Starr suggested with regard to the value of self-classifications offered by individuals, there continue to be disagreements about how much corporate actors should rely on the personal identifications being offered by members of population groups defined by race, ethnicity, or gender. As was noted by James Anderson, the identities of consumers were increasingly being defined in terms of tastes, preferences, and commonalities among people perceived by others as being similar.17 Geoffrey Bowker and Susan Leigh Star noted some of the pragmatic considerations shaping systems of segmentation and classification.18 Gilad Edelman went so far as to suggest that behaviorally targeted advertising ought to be banned because of the harms being generated within the economy and the political process.19
A rather different dimension of a process that emphasized distinctions was the increasingly important attempt to predict the future. Prediction, often used retrospectively in order to test a hypothesis by estimating the distribution of variables in a previously excluded segment of the dataset, was increasingly being used to estimate the distribution of those variables in datasets yet to be gathered. Concerns about discrimination were being raised because predictions about future behavior were being used to make decisions about eligibility for credit, medical intervention, criminal sentencing and parole, and a variety of other opportunities that, if denied, would surely affect the quality of life for individuals, their families, and the communities in which they lived. This focus on predicting the future, understood as the management of risk as seen through an actuarial lens, was only briefly limited to insurance. Soon after, it came to dominate strategic planning and decision-making more generally.
The identification of strategic targets for advertising and persuasive communication efforts represented the side of risk management that was never far from view. Attention was soon focused on discovering how predictions of behavioral responses by members of market segments would contribute to profitability and competitive advantage. The emergence of specialist firms like the Claritas Corporation made good use of government data and negotiated access to spatial classifications originally created for the design and administration of the 1990 U.S. Census. When combined with creative distinctions between populations defined by the characteristics of these neighborhoods, they enabled the transformation of targeting into a high art. As more firms got into the business of facilitating commercial and political targeting, yet another service sector emerged. This one was focused on the generation and sale, or at least the transactional use, of massive lists of individuals that had been classified and rated on the basis of their predicted behavior and financial or strategic value.20
In the center of all this communicative activity was the development of telecommunications networks that improved the process of remotely connecting personal and corporate computers to share data and facilitate the targeting of
discriminatory messages.21 The development of the internet, and the rapid rise of social media in what became known as the “platform economy,”22 was soon followed by a dramatic shift in the nature of the direct marketing industry. This shift was marked by the use of increasingly sophisticated algorithmic selection and recommendation systems and services operating online.23 Part of that activity was focused on the development of multisided markets in which members of the public were no longer simply passive consumers of media content but were actively contributing to that content as producers of what to some seemed like exhibitionist self-promotion, as well as pointed social commentary.24 At that point, the panoptic sort took a dramatic leap into the future that we are experiencing today.
In Chapter 4, I focused my attention on trying to understand how the corporate sector understood where it was and where it was hoping to go in the Information Age. This effort began with a focus on three major corporate actors: the American Express Corporation (AMEX), TRW, and Equifax. The initial assessment was based primarily on corporate annual reports, but Equifax stood out from the group. It did so primarily because of its utilization of the knowledge, expertise, and global reputations of Louis Harris and his surveys, and those of Alan Westin. Westin was a leading expert in the privacy field, as well as a social scientist who relied upon large public opinion surveys, often funded by corporate sponsors, to shape policy and understanding of privacy as a social concern. Because the kinds of public-private partnerships (P3s) that have more recently become the norm within technologically transformed “smart cities”25 had not yet developed, none of the currently dominant global firms were included in this analysis.
After exploring the role of the telemarketing industry and the Direct Marketing Association, which represented its interests within the public policy realm, I developed a survey that was intended to characterize the perspectives of the direct marketing industry.
Not surprisingly, the willingness of these corporate executives to participate in a survey about the nature of their strategic orientations was really quite low, with only 139 usable documents out of the 859 surveys that were mailed out. Tables in the chapter indicated the correlations and other relationships between a set of common practices, including expressions of concern about security and the possibility that customer information would emerge as an important resource. Additional predictors included the market sectors in which these firms operated.
Among the most important factors explaining these multivariate relationships, one measuring corporate concern about negative consumer reactions to their use of transaction-generated and other personal information (TGI) was significantly linked to a large number of predictors. Corporate concerns about public
and governmental opposition to commonplace and increasingly sophisticated segmentation and targeting continue to be measured by surveys and critical analyses being conducted around the globe. Among those worthy of your review, I recommend a small number here.26
Chapter 5 is focused on the developing perspectives among the general public regarding what they understood about the nature of the panoptic sort, revealed primarily through their views on corporate informational practices. A small set of focus groups was used in the development and implementation of a larger U.S. national sample survey. Among the more important considerations that emerged from the focus groups was the still troublesome problem of identifying and evaluating the nature of the harms that are believed to flow from corporate and governmental collection and use of TGI. Although the participants reflected a generalized naive view regarding governmental limitations on data sharing, the general sense derived from those informative sessions was that information gathered for marketing purposes was not harmful.
Chapter 6 presented a secondary analysis of three datasets acquired through the Louis Harris Data Center that included a number of surveys for which Alan Westin served as an academic adviser. It is important to note that all of the commercial sponsors for these surveys were in the insurance industry, and many of the questions reflected corporate and industrial concerns about privacy and surveillance policy.27 My own surveys, administered in 1988–1989, were financed primarily by a grant received from AT&T, and I hasten to note that some of the reviewers from the company expressed concern about how some of those questions had been framed.
A variety of issues and concerns were covered within these surveys. Among the most important were those related to the nature of public anxiety about violations of privacy and trust, the means by which personal information had been gathered, and the extent to which these practices had actually been barred, or should be barred, by government regulation. Giddens’ notion of structuration was used as a framework for identifying the social origins of the respondents’ expressed opinions in these areas, as well as the willingness of a small segment of respondents to invest time, energy, and resources in order to resist surveillance and protect their privacy interests.
Explanatory variables included the standard sociometric indicators of race, gender, age, and education, as well as job categories and political orientations. Factor analysis was used to reduce the number of variables used to explain responses to particular questions. Membership in a particular age cohort emerged as an important predictor of respondents’ orientations toward privacy and surveillance. Many of these relationships were curvilinear, with younger and older cohorts being more similar in terms of their trust in general, and their trust in insurers, than segments we might have considered to be at greater risk.
Analyses of data from the 1989–1990 survey also made use of hierarchical regression to explore assumptions about the causal order of explanatory variables. Again, respondents in particular age cohorts, which we might understand in terms of life cycles and generational differences, and those characterized as economically active, and therefore more at risk, were substantially less trusting, although the extent of negative life experiences was associated with declines in trust within all cohorts.
Chapter 7, a rather lengthy examination of the variety of influences on the development of privacy and data protection laws and regulations emerging primarily in the United States, necessarily ends on a hopeful, if not optimistic, note. It begins with reference to Giddens’ notion of structuration and the constraints on knowledgeable actors who are never more than marginally informed about the goals and incentives shaping the behavior of others within society. As a result, our disappointments regarding the rule of law have much in common with criticisms of idealized theories about how markets are supposed to work. As a strategy for understanding the nature of the conflicts and contradictions which are so characteristic of the law as applied to personal information, especially in relation to what we might consider the public interest, I invite consideration of similar tensions within debates about the value of intellectual property rights and the value of public access to information and knowledge. Here the contributions of Alan Westin and David Flaherty are especially useful as they relate to a variety of interests related to privacy.
Among these interests, the position of honor is assigned to autonomy and the value we place on the rights of individuals to make decisions about how they will pursue those things that matter in their lives. Special reference is made to the central contribution to our thinking in this regard by Samuel Warren and Louis Brandeis in 1890 with regard to “the right to be let alone,” including the right to be protected from the invasions and exploitation of one’s image by photographers. Over the years, the rather narrow scope of Warren and Brandeis’ initial construction of this right was expanded considerably to include reproductive rights and a host of intangible injuries including those to human dignity and integrity. However, the attempts made by William Prosser to bring order to this increasingly messy collection of articulable privacy interests were challenged by many. These included a critical engagement by Neil Richards and Daniel Solove in which Prosser, in their view, needlessly compressed a broad and continually expanding range of privacy harms into four narrow categories.28
Special reference is made to the development of privacy theory by Spiros Simitis, who underscored the importance of privacy for the realization of the goals of a democratic form of government, despite some contradictions linked with the differences in power and wealth that made the protection of one’s privacy a function of one’s economic status. A number of scholarly engagements
with the complex nature of privacy interests, such as those set forth by Anne Branscomb, are discussed, especially with regard to the differences between the ability of businesses to defend their so-called privacy interests and the rather limited abilities of members of the general public to do the same. Branscomb’s contributions in this area help to illustrate how developments in the market for the collection, processing, and sale of information about citizens and consumers have continued to weaken an already limited right.
As noted earlier, my primary focus in presenting a theory of the panoptic sort was to attract scholarly and regulatory attention to the behavior of corporate actors, whereas previous work in the area had been focused on the activities of governments. This traditional orientation was reflected quite clearly in the passage of the Privacy Act of 1974 in the United States. The far more limited attempts to eliminate, or even markedly constrain corporate abuses in this area are described. These descriptions take due note of the role of Richard Posner’s analyses in characterizing the rather narrow scope of justifications for applying constraints on the use of information in areas of employment and other assessments and decisions likely to affect the economic opportunities available to African-Americans and members of other racial and ethnic groups.29
The rather limited progress in the development of privacy policy and regulation as discussed in this chapter was explored at length in a book by Priscilla Regan. Her book underscores the length of time it had taken for the American Congress to pass legislation that responded to the threats to privacy linked to particular developments in communication and information technology.30 Regan appropriately noted the complexity of the issues and concerns that were calling for a legislative response. She focused particular attention on the role of organized groups and organizations that saw their use of information technologies, and the benefits that would flow from that use, as being endangered by the passage of meaningful regulation. Many fought aggressively and continued to lobby, enjoying considerable success in the United States in opposing regulation. Instead, policymakers merely encouraged corporate actors to engage in responsible self-control.
Some twelve years after Regan published her less than hopeful predictions about the future of privacy in the United States, a special Committee on Privacy in the Information Age finally published the results of the meetings and research efforts it had begun in 2003, in a report that largely reproduced the pace and character of the privacy developments Regan had described. Their list of recommendations included the traditional invocation of individual self-help, under which individuals would be provided the opportunity to review the information that had been gathered and shared in order to determine whether it was actually correct. There was reference to the potential for limiting access to these data by “outsiders,” who were defined as those external to the organization that had collected the information in the first place.31 Yet this problem of limiting access to data and information by third parties continues to escape meaningful legal and regulatory control.32
This is, of course, something that is increasingly recognized as being, or soon to become, outside the reach of most of us, as the procedures used by algorithmic systems to derive assessments and make predictions are either treated as secrets or are barred from meaningful access by their essential inscrutability. Frank Pasquale examines this development from a critical perspective that underscores the massive inequities in the nature and extent of knowledge that corporate actors have about our lives. More importantly, he examines how they use that knowledge to influence our decisions, and he describes what members of the public know about those actors and their capabilities.33
While this chapter’s focus on the development of privacy law and regulation was limited primarily to developments in the United States, and continued to reinforce the quite limited thinking about privacy and surveillance in terms of harms to individuals, more recent times have seen significant advances in the nature and scope of public policy in Europe and around the globe.
Perhaps the most significant of these developments was the passage of the European Union’s General Data Protection Regulation (GDPR), which went into effect in 2018. The legislation was meant to return control of transaction-generated information (TGI) to the users of online resources whose activities were in part responsible for its existence. The assumptions regarding an individual’s rights to data about themselves and their activities meant that at least they had a right to obtain a copy of what had been gathered and perhaps generated through analysis, as well as a right to seek corrections, including erasure. Perhaps most importantly, the regulations included a requirement for data gatherers to seek consent before gathering and sharing personal information.34 The general consensus that has developed since the GDPR went into effect is that its impact fell far short of what was hoped for, or even expected. Nevertheless, some advertising agencies have either abandoned behavioral advertising or limited its use when based on “sensitive information.”35
The final development in the privacy arena is a still developing effort to extend the traditional emphasis on the rights of individuals to an overdue consideration of the rights of groups, including those groups that have been identified algorithmically. As a result of that almost clandestine process, the members of these “groups” are quite limited in their ability to mobilize politically in pursuit of their rights.36 Part of the challenge that has to be addressed as this perspective is developed is the distinction between a “natural” entity of the sort that human beings are considered to be, and the kinds of people that are “discovered” theoretically and analytically. As we will discuss more fully in the Afterword, the kinds of “entities” that we may come to consider robots and other forms of artificial life to be, introduced into the world by autonomous agents, will become powerful societal influences after the arrival, or at least the approach, of the Singularity.37
In Chapter 8, the final thoughts that I set forth in the first edition do what most conclusions attempt to do for readers who have made their way through largely critical and pessimistic assessments of where we have been, and what we see before us. An especially critical statement calls the reader’s attention to what I have suggested is the most troubling effect of the panoptic sort on the general public: its amplification of mistrust throughout society. This likely development, operating in the context of many deviation-amplifying feedback loops, tends to increase the demand for surveillance, and to drive still further departures from democratic ideals.
Part of the challenge we face in attempting to step back from the precipice that we are rapidly approaching may involve choosing and then developing the kinds of sociotechnical systems that our survival as a democratic society requires. My knowledge and understanding of the potential that rests within artificial intelligence has changed quite substantially since the turn of the century. After that potential is explored in the Afterword, you will see the extent to which Ellul’s vision of the future still has the influence on my thinking that it once had.