Black Boxes
How Science Turns Ignorance into Knowledge
MARCO J. NATHAN
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.
Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.
© Oxford University Press 2021
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Names: Nathan, Marco J., author.
Title: Black boxes : how science turns ignorance into knowledge / Marco J. Nathan.
Description: New York, NY : Oxford University Press, [2021] | Includes bibliographical references and index.
Identifiers: LCCN 2021001998 (print) | LCCN 2021001999 (ebook) | ISBN 9780190095482 (hardback) | ISBN 9780190095505 (epub)
Subjects: LCSH: Science—Philosophy. | Science—Methodology. | Ignorance (Theory of knowledge)
Classification: LCC Q175.N38 2021 (print) | LCC Q175 (ebook) | DDC 501—dc23
LC record available at https://lccn.loc.gov/2021001998
LC ebook record available at https://lccn.loc.gov/2021001999
DOI: 10.1093/oso/9780190095482.001.0001
1 3 5 7 9 8 6 4 2
Printed by Integrated Books International, United States of America
Dedicated, with love, to Jacob Aaron Lee Nathan. Benvenuto al mondo.
Like all general statements, things are not as simple as I have written them, but I am seeking to state a principle and refrain from listing exceptions.
Ernest Hemingway, Death in the Afternoon
Preface
Every intellectual project has its “eureka!” moment. For this book, it happened a few summers ago during a lonely walk on the beach of Marina di Campo, on the beautiful island of Elba, off the Tuscan coast. I suddenly started seeing a guiding thread, a leitmotiv, connecting much of my reflection on the nature of science since I started working on these issues back in graduate school. Simply put, it dawned upon me that I viewed most scientific constructs as placeholders: explanations, causal ascriptions, dispositions, counterfactuals, emergents, and much else could all be viewed as boxes, more or less opaque, standing in for more detailed descriptions. That got me thinking about how to provide a more unified account that puts all the tiles of the mosaic together. At the same time, I realized that the very concept of a black box, so frequently cited in both specialized and popular literature, had been unduly neglected in philosophy and in the sciences alike. This book is the result of my attempts to bring both insights together, in a more or less systematic fashion.
The intellectual journey sparked by my preliminary reckoning on a sandy beach has taken several years to complete. Along the way, I have been honored by the help and support of many friends and colleagues. Philip Kitcher and Achille Varzi encouraged me to pursue this project from the get-go. Many others provided constructive comments on various versions of the manuscript. I am especially grateful to John Bickle, Giovanni Boniolo, Andrea Borghini, Stefano Calboli, Guillermo Del Pinal, George DeMartino, Enzo Fano, Tracy Mott, Emanuele Ratti, Sasha Reschechtko, Michael Strevens, and Anubav Vasudevan for their insightful comments. A special thank you goes to Mika Smith, Roscoe Hill, Mallory Hrehor, Naomi Reshotko, and, especially, Bill Anderson, all of whom struggled with me through several drafts and minor tweaks, in my endless—futile, but no less noble—quest for clarity and perspicuity.
Over the years, the University of Denver and, especially, the Department of Philosophy have constantly provided a friendly, supportive, and stimulating environment. Various drafts of the manuscript were presented as part of my advanced seminar Ignorance and Knowledge in Contemporary Scientific Practice. I am indebted to all the students who shared thoughts, comments, and frustrations with me—on this note, a shoutout to Blake Harris, Olivia Noakes, and Jack Thomas. Two reading groups, one at the University of Milan and one at the University of Denver, have been quite helpful. Bits and pieces of this project have been presented, in various forms, at several institutions, too many to adequately acknowledge. Audiences across the world have provided invaluable feedback.
I am very grateful to the production staff at Oxford University Press, especially Peter Ohlin and his team, for believing in the project from the very beginning and for their constant support, as well as to Dorothy Bauhoff for her careful proofreading. Two anonymous reviewers provided precious feedback. Also, a heartfelt “Grazie!” to Stefano Mannone of MadMinds for crafting the images and bearing with all my nitpickiness.
Finally, none of this would have been possible without the unfaltering help, patience, and support of my extended family. Lots of love to my wife Heather, my parents Alida and Jacob, my sister Shirli, my brother David, my two “adopted” brothers Federico and Matteo, and my nieces and nephews: Martina, Virginia, Alexander, and Nicolò. Thanks for brightening my days. While this book was slowly seeing the light, my own life was rocked and blessed by the birth of my son, Jacob. This work is dedicated to him: welcome to the world!
Bricks and Boxes
Knowledge is a big subject. Ignorance is bigger. And it is more interesting.
Stuart Firestein, Ignorance, p. 10
§1.1. The Wall
At the outset of a booklet, aptly entitled The Art of the Soluble, the eminent biologist Sir Peter Medawar characterizes scientific inquiry in these terms:
Good scientists study the most important problems they think they can solve. It is, after all, their professional business to solve problems, not merely to grapple with them. The spectacle of a scientist locked in combat with the forces of ignorance is not an inspiring one if, in the outcome, the scientist is routed. (Medawar 1969, p. 11)
Many readers will find this depiction captivating, intuitive, perhaps even self-evident. What is there to dispute? Is modern science not a spectacularly successful attempt at solving problems and securing knowledge?
Yes, it is. Still, one could ask, what makes the spectacle of a scientist locked in combat with the forces of ignorance so uninspiring? Why is it that we seldom celebrate ignorance in science, no matter how enthralling, and glorify success instead, regardless of how it is achieved? To be fair, we may excuse ignorance and failure, when they have a plausible explanation. But ignorance is rarely—arguably never—a goal in and of itself. Has a Nobel Prize ever been awarded for something that was not accomplished?
The key to answering these questions, and to understanding Medawar’s aphorism, I maintain, is to be sought in the context of a long-standing image of science that has, more or less explicitly, dominated the scene well into the twentieth century. The goal of scientific inquiry, from this hallowed perspective, is to provide an accurate and complete description of the universe, or some specific portion of it. Doing science, the story goes, is analogous to erecting a wall. Or, borrowing a slightly artsier metaphor, it is like tiling a mosaic. Walls are typically made out of stone or clay. Mosaics are crafted by skillfully assembling tiles. The bricks of science, its fundamental building blocks, are facts. Science is taken to advance through a slow, painstaking, constant discovery of new truths about the world surrounding us.
Intuitive as this may sound, we must be careful not to read too much into the simile. First, there are clear discrepancies of sheer scale. The task confronting the most creative builder or ambitious artist pales in comparison to the gargantuan endeavor that the scientific community, taken in its entirety, is striving to accomplish. Second, despite an impressive historical record, the current outlook is, at best, very partial. What we have discovered about the universe is dwarfed by what we do not yet know. Third, many of our findings are bound to be inexact, at some level of description, and, when we turn our attention to the most speculative theories, they may even be wildly off the mark. In short, the likening of science to buildings and works of art should be taken with more than just a grain of salt.
Still, the old image is suggestive, optimistically indicating that overall trends and forecasts are positive. Despite a few inevitable hiccups, our grand mosaic of the universe is slowly but surely getting bigger by the day. The goal of the scientist is to identify a problem, solve it, and discover some new facts. This, I maintain, is the backdrop to Medawar’s quote. Paraphrasing Pink Floyd, all in all it’s just another brick in the wall.
Here is where things start to get more interesting. Over the last few decades, a growing number of scientists, philosophers, historians, and sociologists have argued that the depiction of science as progressive accumulation of truth is, at best, a simplification. Consequently, Medawar’s suggestive image of the scientist battling against the evil forces of ignorance has gradually but inexorably fallen out of style. Much has been said about the shortcomings of this old perspective, and some of these arguments will be rehearsed in the ensuing chapters. The point, for the time being, is that very few contemporary scholars, in either the sciences or the humanities, would take it at face value any longer. The wall has slowly crumbled.
What is missing from the traditional picture of science as accumulating truths and facts? Simply put, it is lopsided. Knowledge, broadly construed, is a constituent—and an important one at that—of the scientific enterprise. Yet, it is only one side of the coin. The other side involves a mixture of what we do not know, what we cannot know, and what we got wrong. In a multifaceted word, what is lacking from the old conception of science is the productive role of ignorance. But what does it mean for ignorance to play a “productive” role? Could ignorance, at least under some circumstances, be positive, perhaps even preferable to the corresponding state of knowledge?
The inevitable presence of ignorance in scientific practice is neither novel nor especially controversial. Generations of philosophers have extensively discussed the nature and limits of human knowledge and their implications. Nevertheless, ignorance was traditionally perceived as a hurdle to be overcome. Over the last few decades it has turned into a springboard, and a more constructive side of ignorance has begun to emerge. Allow me to elaborate.
In a recent booklet, Stuart Firestein (2012, p. 28), a prominent neurobiologist, remarks that “Science [ . . . ] produces ignorance, possibly at a faster rate than it produces knowledge.” At first blush, this may sound like a pessimistic reading of Medawar, where the spectacle of a scientist routed in combat by forces of ignorance becomes uninspiring. Yet, as Firestein goes on to clarify, this is not what he has in mind: “We now have an important insight. It is that the problem of the unknowable, even the really unknowable, may not be a serious obstacle. The unknowable may itself become a fact. It can serve as a portal to deeper understanding. Most important, it certainly has not interfered with the production of ignorance and therefore of the scientific program. Rather, the very notions of incompleteness or uncertainty should be taken as the herald of science” (2012, p. 44).
Analogous insights abound in psychology, where the study of cognitive limitations has grown into a thriving research program.1 Philosophy, too, has followed suit. Wimsatt (2007, p. 23) fuels his endeavor to “re-engineer philosophy for limited beings” with the observation that “we can’t idealize deviations and errors out of existence in our normative theories because they are central to our methodology. We are error prone and error tolerant; errors are unavoidable in the fabric of our lives, so we are well-adapted to living with and learning from them. We learn more when things break down than when they work right. Cognitively speaking, we metabolize mistakes!” In short, ignorance pervades our lives.
Time to connect a few dots. We began with Medawar’s insight that science is in the puzzle-solving business. When one grapples with a problem and ends up routed, it is a sign that something has gone south. All of this fits in well with the image of science as a cumulative, brick-by-brick endeavor, which dominated the landscape for the better part of the twentieth century. Still, scholars are now well aware that ignorance is not always or necessarily a red flag. The right kind of mistakes can be a portal to success, to a deeper understanding of reality. As Wimsatt puts it, while some errors are significant, others are not. Thus, “what we really need to avoid is not errors, but significant ones from which we can’t recover. Even significant errors are okay as long as they’re easy to find” (2007, p. 24).
1 See, for instance, Gigerenzer (2007) and Kahneman (2011).
All of this is well known. But what does it mean to claim that ignorance may become a portal to deeper understanding? Early on in the discussion, Firestein acknowledges that his use of the word “ignorance” is intentionally provocative. He subsequently clarifies his main point by drawing a distinction between two very different kinds of ignorance. As he elegantly puts it, “One kind of ignorance is willful stupidity; worse than simple stupidity, it is a callow indifference to facts or logic. It shows itself as a stubborn devotion to uninformed opinions, ignoring (same root) contrary ideas, opinions, or data. The ignorant are unaware, unenlightened, uninformed, and surprisingly often occupy elected offices. We can all agree that none of this is good” (2012, p. 6). Nevertheless, he continues, “there is another, less pejorative sense of ignorance that describes a particular condition of knowledge: the absence of fact, understanding, insight, or clarity about something. It is not an individual lack of information but a communal gap in knowledge. It is a case where data don’t exist, or more commonly, where the existing data don’t make sense, don’t add up to a coherent explanation, cannot be used to make a prediction or statement about some thing or event. This is knowledgeable ignorance, perceptive ignorance, insightful ignorance. It leads us to frame better questions, the first step to getting better answers. It is the most important resource we scientists have, and using it correctly is the most important thing a scientist does” (2012, pp. 6–7).
Needless to say, it is the latter notion of ignorance—knowledgeable, perceptive, insightful ignorance—that I intend to further explore throughout this book. Recognizing that failure can be important for the advancement of science, and that not all ignorance is created equal, is but a small step in a long, tortuous journey. Once the spotlight is pointed in this direction, many provocative questions arise. What is productive ignorance? What is its role in scientific research? Is it merely an indicator of where further work needs to be done, or are there really scientific questions where ignorance may actually be preferable to knowledge? Can ignorance teach us something, as opposed to merely showing us what is missing? What makes some errors fatal and others negligible or even useful? To put all of this in general terms, how does science turn ignorance and failure into knowledge and success?
Much will be said, in the chapters to follow, about the nature of productive ignorance, what distinguishes it from “a stubborn devotion to uninformed opinions,” and how it is incorporated into scientific practice. Before doing so, in the remainder of this section, I want to draw attention to a related issue, concerning pedagogy: how science is taught to bright young minds. If, as Firestein notes, ignorance is so paramount in science, why is its role not explicitly incorporated into the standard curriculum?
As noted, most scholars no longer take seriously the cumulative ideal of science as the slow, painstaking accumulation of truths. Now, surely, there is heated debate and disagreement on how inaccurate this picture really is. More sympathetic readings consider it a benign caricature, promoting a simple but effective depiction of a complex, multifaceted enterprise. Crankier commentators dismiss it as an inadequate distortion that has overstayed its welcome. Still, few, if any, take it at face value, for good reason.
That being said, this superseded image is still very much alive in some circles. It is especially popular among the general public. Journalists, politicians, entrepreneurs, and many other non-specialists explicitly endorse and actively promote the vision of science as a gradual accumulation of facts and truths. While some tenants of the ivory tower of academia might take this as an opportunity to mock and chastise the incompetence of the masses, we should staunchly resist this temptation. First, note that we are talking about highly educated and influential portions of society. Lack of schooling can hardly be the main culprit. Second, and even more important, it is not hard to figure out where the source of the misconception might lie.
Textbooks, at all levels and intended for all audiences, present science as a bunch of laws, theories, facts, and notions to be internalized uncritically. This pernicious stereotype trickles down from schools and colleges to television shows, newspapers, and magazines, eventually reaching the general public. Now, surely, some promising young scholars will go on to attend graduate schools and pursue research-oriented careers: academic, clinical, governmental, and the like. They will soon learn, often the hard way, that actual scientific practice is very different—and way more interesting!—than what is crystallized in journals, books, and articles. Yet, this is a tiny fraction of the overall population. Most of us are only exposed to science in grade school or college, where the old view still dominates. By the time one leaves the classroom to walk into an office, the damage is typically done.
Where am I going with this? The bottom line is simple. The textbooks from which students learn science, which perpetuate the cumulative approach, are written by the same specialists who, in their research, eschew the brick-by-brick analogy. Why are experts mischaracterizing their own disciplines, promoting an old image that they themselves no longer endorse?
This book addresses this old question, which has troubled philosophers of science at least since the debates of the 1970s in the wake of Kuhn’s Structure. My answer can be broken down into a pars destruens and a pars construens. Beginning with the negative claim, textbooks do not adopt the cumulative model with the intention to deceive. The reason is that no viable alternative is available. The current scene is dominated by two competing models of science, neither of which supplants the “brick-by-brick” ideal. There must be more effective ways of popularizing research, exposing new generations to the fascinating world of science. The positive portion of my argument involves a constructive proposal. The next two sections present these two theses in embryonic form. Details will emerge in the ensuing chapters.
§1.2. A Theory of Everything?
The previous section concluded by posing a puzzle. Why do textbooks present science as an accumulation of truths and facts, given that their authors know perfectly well how inaccurate this is? The answer, I maintain, is that no better alternative is available. We currently have two competing models of science: reductionism and antireductionism. Neither provides an accurate depiction of the productive interaction between knowledge and ignorance, capable of supplanting the old image of the wall. This section introduces the status quo, the two philosophical models presently dominating the scene.
Before getting started, two quick clarifications. First, these preliminary paragraphs merely set the stage for further discussion to be developed throughout the book. Skeptical readers unsatisfied with my cursory remarks are invited to hold on tight. Second, in claiming that “philosophical” models guide the presentation in textbooks and other popular venues, I am not suggesting that scientific pedagogy is directly responding to the work of philosophers—wishful thinking, one could say. Rather, my claim is that these models, developed systematically within the philosophy of science, reflect the implicit attitude underlying much empirical practice. With these qualifications in mind, we can finally get down to business.
Classic and contemporary philosophy of science has been molded by a long-standing clash between two polarizing metaphysical stances. Allow me to illustrate them by introducing a colorful image that will recur throughout our discussion. Assume that the natural world can be partitioned into distinct, non-overlapping levels, arranged in terms of constitution, with smaller constituents at the bottom and larger-scale entities toward the top. From this standpoint, the levels will correspond to traditional subdivisions between fields of inquiry. Intuitively, physics, broadly construed, will be at the very bottom. At a slightly coarser scale, we find entities described by biology, followed by neuropsychology. Toward the top we will find the macroscopic structures postulated and studied by economics and other social sciences. It is customary to depict this layering of reality as a wedding cake. For this reason, I shall refer to it as the “wedding cake model” (Figure 1.1). While this partition will be further developed and refined, it is important to clarify from the get-go that the representation is admittedly oversimplified and incomplete. It is incomplete because numerous fields and subfields of science, such as sociology and anthropology, are not included. It is oversimplified because any student of science worth their salt is perfectly aware that the partition does not capture the complex, nuanced relations between disciplines. To wit, I am here lumping together psychology and neuroscience under the moniker “neuropsychology,” knowing perfectly well that their ontologies and methodologies diverge in important respects. Similarly, parts of biology overlap with psychology, neuroscience, and the social sciences, spawning hybrid, interdisciplinary approaches such as neuroethics, evolutionary psychology, and biolinguistics. Finally, one could contest my placement of physics at the very bottom, since a prominent part of this science, astrophysics, studies some of the largest systems in the universe.
Figure 1.1. The “wedding cake” layering of reality.
With these limitations well in mind, I present this idealized model of reality to illustrate our two metaphysical stances. On the one hand, reductionism contends that all special sciences boil down to fundamental physics, in the sense that “higher levels” effectively reduce to the bottom level.2 On the other hand, antireductionism maintains that “higher” layers are ontologically or epistemically autonomous from “lower” ones.3 How should these claims be understood? What is the crux of the disagreement? A more precise articulation of the debate will be set aside until Chapter 2. For the time being, my aim is a general reconstruction of the main point of contention between reductionists and their foes.
Allow me to kick things off by introducing a simple thought experiment. Imagine that, at some time in the future, physics—or more accurately, the descendants of our current physical theories—will advance to the point of providing an exhaustive description of the bedrock structure of the entire cosmos, together with a complete specification of all the fundamental laws of nature.
To a first approximation, the picture is easier to visualize if we depict the universe as an enormous container (“absolute space”) filled with an astronomical number of indivisible particles. Further, suppose that these atomic elements interact with one another in a fully deterministic fashion along a temporal dimension (“absolute time”), in a reversible, that is, time-symmetric, manner.
This scenario roughly corresponds to the depiction of the cosmos provided by Newton. Note that it is completely deterministic. This means that any complete future state of the universe may, in principle, be predicted precisely from any comprehensive description of a past state, in conjunction with the relevant laws of nature. Analogously, past states can be retrodicted from future ones.
2 The expression “special sciences” refers to all branches of science, with the exception of fundamental (particle) physics. This moniker is somewhat misleading, as there is nothing inherently “special” about these disciplines, aside from having been relatively underexplored in philosophy. Yet, this label has become commonplace and I shall stick to it.
3 To be clear, any characterization of levels as “coarse-grained” vs. “fine-grained,” “higher-level” vs. “lower-level,” or “micro” vs. “macro,” should be understood as relativized to a specific choice of explanandum. From this standpoint, the same explanatory level Lₙ can be “micro” or “lower-level” relative to a coarser description Lₙ₊₁, and “macro” or “higher” relative to a finer-grained depiction Lₙ₋₁.
One final flight of fancy. Suppose that, down the line, researchers will also find a way to achieve the astounding computing power required for these derivations. What this means is that, given an exhaustive specification of the location and momentum of every bedrock particle in the universe, any past or future state can be determined with absolute certainty.
This thought experiment was popularized by the physicist and mathematician Pierre Simon de Laplace. For this reason, I shall follow an established tradition and refer to a creature with such stupendous predictive and retrodictive capacities as a “Laplacian Demon.”4
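For readers who like a touch of formalism, here is a minimal sketch of the Demon’s task, cast in the Newtonian terms just invoked. The notation is standard textbook machinery introduced purely for illustration, not anything drawn from Laplace’s own writings: for N particles with masses mᵢ, positions xᵢ, and forces Fᵢ,

\[
m_i\,\ddot{\mathbf{x}}_i(t) = \mathbf{F}_i\bigl(\mathbf{x}_1(t),\dots,\mathbf{x}_N(t)\bigr), \qquad i = 1,\dots,N.
\]

Under standard smoothness assumptions on the forces, these equations admit a unique solution for any complete specification of positions and momenta at one instant, thereby inducing a deterministic flow \(\Phi\) on the space of states:

\[
\bigl(\mathbf{x}(t),\mathbf{p}(t)\bigr) = \Phi_{t-t_0}\bigl(\mathbf{x}(t_0),\mathbf{p}(t_0)\bigr) \quad \text{for every } t,
\]

past or future alike, since the dynamics run equally well in both temporal directions. Fixing the exact state of every particle at one instant thus fixes it at every other instant. This, in a nutshell, is the content of Laplacian determinism.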
As some readers will surely realize, making Laplace’s insight cohere with contemporary physics requires some fancy footwork. First and foremost, we need a characterization of the universe that avoids any reference to the superseded concepts of absolute space and absolute time, and which reframes these notions in relativistic terms. Second, a Laplacian Demon worth their salt needs to be at least compatible with the fundamental laws of nature being indeterministic. This is a possibility left open by current quantum mechanics but, it should be noted, it is fundamentally at odds with one of Laplace’s core tenets, namely, determinism. On this interpretation, the Demon will, at best, be able to assign to every past and future state a precise probability that falls short of unity, that is, certainty. Finally, a more realistic characterization of Laplace’s thought experiment requires addressing thermodynamic and quantum irreversibility as well as in-principle unpredictable behaviors described by chaos theory.
Be that as it may, suppose that future physics follows this path, developing into a powerful framework capable of describing, down to its most basic constituents, every corner of the physical universe that exists, has existed, and will ever exist. These considerations raise a host of philosophical questions.5 Under these hypothetical circumstances, what would happen to science as we know it today? Could physics explain every event? Or, more modestly, could it explain any event that currently falls under the domain of some branch of science? Would we still need the special sciences? Would physics replace them and truly become a scientific theory of everything?
4 For a discussion of the origins and foundations of Laplacian determinism, see van Strien (2014).
5 “Philosophical” discussions of Laplacian Demons and the future of physics have been undertaken by prominent philosophers and scientists such as Dennett (1987); Putnam (1990); Weinberg (1992); Mayr (2004); Chalmers (2012); Nagel (2012); and Burge (2013).
There are two broad families of responses to these questions. The first, which may be dubbed reductionism, answers our quandaries in the affirmative. Reductionism comes in various degrees of strength. Any reductionist worth their salt is perfectly aware that current physics is still eons away from achieving the status of a bona fide accurate and exhaustive description of the universe. And even if our best available candidates for fundamental laws of nature happened to be exact, the computing power required to rephrase relatively simple higher-level events in physical terms remains out of reach, at least for now. This is to say that physics, as we presently have it, is not yet an overarching theory of everything. Still, the more radical reductionists claim, physics will eventually explain all events in the universe, and contemporary theories have already put us on the right track. More modest reductionists make further concessions. Perhaps physics will never actually develop to the point of becoming the ultimate theory of reality. Even if it did, gaining the computing power to completely dispose of all non-physical explanations may remain a chimera. Hence, the special sciences will always be required in practice. Nevertheless, in principle, physics could do without them. In this sense, the special sciences are nothing but convenient, yet potentially disposable, scaffoldings.
The second family of responses, antireductionism, provides diametrically opposite answers to our questions concerning the future of physics. Like its reductionist counterpart, antireductionism comes in degrees. More radical versions contend that, because of the fundamental disunity or heterogeneity of the universe, a physical theory of everything is a metaphysical impossibility. Even the most grandiose, all-encompassing, and successful physical theories could not explain all that goes on in the material universe, because many—or, some would say, most—events covered by the special sciences fall outside of the domain of physics. Less uncompromising antireductionists may make some concessions toward modest forms of reductionism. Perhaps physics could, in principle, explain every scientific event. Still, the special sciences are not thereby disposable. This is because they provide objectively “better” explanations of higher-level happenings. In short, the antireductionist says, the success of physics is no threat to higher-level theories. Special sciences are not going anywhere, now or ever.
In sum, the debate between reductionism and antireductionism in general philosophy of science boils down to the prospects of developing a physical theory of everything, a grand characterization of the world that, in principle, could describe, predict, and explain any and every scientific event. These metaphysical considerations carry methodological implications. Are disciplines other than fundamental physics mere scaffoldings awaiting replacement? Or are special sciences a necessary, ineliminable component of any comprehensive description and explanation of the universe?
These thought-provoking questions have inspired much discussion. The divide between reductionism and antireductionism has dominated classic and contemporary philosophy of science. The widespread propensity to privilege fundamental physics over other sciences digs its roots deep into the history of Naturwissenschaft.6 Logical positivism, the founding movement of the philosophy of science as it is currently understood, assumed that the physical and mathematical sciences set both the agenda and the tone of the conversation. This attitude, according to which physics is the paradigm of true science and all other fields should sooner or later follow suit, is still prominent in some circles and has attracted criticism among scientists, historians, and philosophers, spawning healthy debate. However, after decades of discussion, these positions have hardened, stifling productive conversation. The topic of reduction has attracted a disproportionate amount of attention in philosophy, at the expense of other equally pressing and important issues. Let me motivate this claim, which lies at the core of my argument.7
First and foremost, debating whether all scientific inquiries can be addressed in terms of physics has a major drawback. It encourages laying all our eggs in one basket—the nature, limits, and boundaries of ideal, future science—setting aside questions pertaining to current inquiries. But how do we adjudicate, in the present, the prospects of research yet to come? In this respect, general philosophy of science has much to learn from contemporary philosophies of particular disciplines, which understood decades ago the importance of focusing on the here-and-now of empirical practice.
Second, and more relevant to our concerns, reductionism and antireductionism, as traditionally conceived, leave us wanting. One problem is that there is little consensus on how to characterize “reduction” and “autonomy,” often leaving disputants talking past each other. In addition, neither stance is able to successfully capture the interplay between knowledge and ignorance in contemporary science. Reductionist metaphysics goes well with the “brick-by-brick” epistemology, which views the advancement of science as breaking down complex, higher-level systems into smaller blocks. This leaves little room for productive ignorance, that is, the idea that ignorance itself may, at times, be preferable to the corresponding state of knowledge. Antireductionism, in contrast, legitimizes the role of ignorance by stressing the autonomy of the macrocosm. Still, it fails to convincingly explain why higher-level explanations can be objectively superior to lower-level ones. These two perspectives need to be combined, not pitted against each other.
6 For an excellent discussion of the rise of scientific philosophy, see Friedman (2001).
7 Similar conclusions have been reached, via a different route, by Wimsatt (2007), and developed by the “new wave of mechanistic philosophy,” presented and examined in Chapter 7. Relatedly, Gillett (2016) has noted some discrepancy between the models of reduction and emergence developed in philosophy versus the ones adopted in scientific practice.
Throughout the book, I shall argue that the reductionism vs. antireductionism opposition, as traditionally conceived, confronts us with a false dilemma. Paraphrasing Lakatos, despite its long-standing history and renewed popularity, this debate has now turned into a regressive research program. Both parties often talk past each other, rehashing old “this was never my point to begin with” claims. In the meantime, substantive historical, scientific, and philosophical questions accumulate, awaiting examination. After decades of discussion, it is finally time to move on. What we need is a framework that can bring together both aspects of science—autonomy and reduction—without pitting them against each other. What is needed, in short, is a different, alternative model of science. What could this be?
§1.3. Pandora’s Gift to Science: The Black Box
Section 1.2 introduced my pars destruens: the debate between reductionism and antireductionism fails to provide an adequate image of the dynamic interplay between scientific knowledge and productive ignorance. If the current path is not the way to go, in what direction should we point the discussion?
Answering this question is the chief goal of the constructive portion of my argument. The chapters that follow sketch an account of scientific explanation that is neither reductionist nor antireductionist. Or, perhaps, it is both, in the sense that it combines core aspects from each perspective. The proposed shift brings attention to a prominent construct—the black box—which underlies a well-oiled technique for incorporating a productive role of ignorance and failure into the acquisition of empirical knowledge. While a comprehensive analysis will be undertaken throughout the monograph, let me briefly introduce you to this important albeit neglected concept.
Most readers will likely have some pre-theoretical, intuitive sense of what a black box is and may even have candidate examples in mind. If your acquaintance with electronics is anything like mine, your smartphone constitutes a perfectly good illustration. I am able to use my phone reasonably well. I can easily make calls, check my email, and look up driving directions. I am aware that, if I open a specific app, the screen will display my boarding pass, allowing me to board my flight. I obviously know that, if I do not charge the battery every other day, the darn thing will eventually run out of power. For all intents and purposes, I am a typical user. The point is that, like most customers, I have no clear conception of what happens inside the phone itself, as witnessed by the observation that, if something were to break or otherwise stop working, I would have to take it to a specialized store for repair. In brief, I have a fairly good grasp of the functional organization of my phone—systems of inputs and outputs that govern standard usage—and even a vague sense of the algorithms at play. But the inner workings that underlie this functionality and make it possible are a mystery to me. I am perfectly aware that they must be there. But, being no technician, I have no clue as to what exactly they are and how they work. In this sense, a smartphone is a black box to me, and I am confident that many others are in the same boat.
Do not let the mundane nature of the example deceive you. There is a long-standing historical tradition referring to “known unknowns,” going back, at least, to the medieval debate on insolubilia and, more recently, Emil du Bois-Reymond’s ignoramus et ignorabimus. In contemporary settings, references to black boxes can be found throughout the specialized literature across a variety of fields. In contemporary philosophy, the black box par excellence is the human mind itself. As Daniel Dennett (1991, p. 171) notes, in a passage of his book Consciousness Explained, from a section entitled “Inside the Black Box of Consciousness”: “It is much easier to imagine the behavior (and behavioral implications) of a device you synthesize ‘from the inside out,’ one might say, than to try to analyze the external behavior of a ‘black box’ and figure out what must be going on inside.” The point is reinforced, from a slightly different perspective, in Pinker’s How the Mind Works (1997, p. 4): “In a well-designed system, the components are black boxes that perform their function as if by magic. That is no less true of the mind. The faculty with which we ponder the world has no ability to peer inside itself or our other faculties to see what makes them tick.” But consciousness is just the tip of the iceberg. References to black boxes can be found in the work of many prominent philosophers, such as Hanson (1963), Quine (1970), and Rorty (1979), just to pick a few notable examples.
And, of course, black boxes are hardly limited to the philosophy of mind, or even the field of philosophy tout court. As a contemporary biologist puts it, “the current state of scientific practice [ . . . ] more and more involves relying upon ‘black box’ methods in order to provide numerically based solutions to complex inference problems that cannot be solved analytically” (Orzack 2008, p. 102). And here is an evolutionary psychologist: “The optimality modeler’s gambit is that evolved rules of thumb can mimic optimal behavior well enough not to disrupt the fit by much, so that they can be left as a black box” (Gigerenzer 2008, p. 55). These are just a few among many representative samples, which can be found across the board.
The use (and abuse) of black boxes is criticized as often as it is praised. Some neuroscientists scornfully dub the authors of functional models containing boxes, question marks, or other filler terms, “boxologists.” In epidemiology—the branch of medicine dealing with the incidence, distribution, and possible control of diseases and other health factors—there is a recent effort to overcome the “black box methodology,” that is, “the methodologic approach that ignores biology and thus treats all levels of the structure below that of the individual as one large opaque box not to be opened” (Weed 1998, p. 13). Many reductionists view black boxes as a necessary evil: something that does occur in science, but that is an embarrassment, not something to celebrate.
In short, without—yet—getting bogged down in details, references to black boxes, for better or for worse, are ubiquitous. Analogous remarks can be found across every field, from engineering to immunology, from neuroscience to machine learning, from analytic philosophy to ecology. What are we to make of these boxes that everyone seems to be talking about?
Familiar as it may ring, talk of boxes here is evidently a figure of speech. You may actually find a black box on an aircraft or a modern train. But you will not find any such thing in a philosophy professor’s dusty office any more than you will find it in a library or research lab. What exactly is a black box? Simply put, it is a theoretical construct. It is a complex system whose structure is left mysterious to its users, or otherwise set aside. More precisely, the process of black-boxing a specific phenomenon involves isolating some of its core features, in such a way that they can be assumed without further microexplanation or detailed description of its structure.
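Purely by way of analogy, the situation just described can be pictured the way a programmer pictures an interface: a specification of inputs and outputs that can be relied upon while whatever implements it stays hidden. The following Python sketch is my own illustrative toy, under that analogy; the names (Phone, OpaquePhone, check_in) are hypothetical and do not come from the text.

# A minimal, illustrative sketch of black-boxing as the separation of a
# system's input-output profile from its inner workings. All names here are
# hypothetical examples, not anything discussed in the book in these terms.
from abc import ABC, abstractmethod


class Phone(ABC):
    """The 'functional organization' a user relies on: inputs and outputs only."""

    @abstractmethod
    def boarding_pass(self, booking_code: str) -> str:
        """Given a booking code, return a scannable boarding pass string."""


class OpaquePhone(Phone):
    """One possible realization; the user never needs to peek inside."""

    def boarding_pass(self, booking_code: str) -> str:
        # Whatever happens here (network calls, caching, rendering) plays the
        # role of the black-boxed inner workings; only the input-output
        # contract matters to the user-level description.
        return f"BOARDING-PASS::{booking_code.upper()}"


def check_in(device: Phone, booking_code: str) -> str:
    # The user-level description appeals only to the interface, not the internals.
    return device.boarding_pass(booking_code)


print(check_in(OpaquePhone(), "abc123"))  # prints: BOARDING-PASS::ABC123

The point of the sketch is simply that everything the user-level description appeals to is the input-output contract; whatever sits behind it is, in the relevant sense, black-boxed.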
These issues will be developed throughout the monograph. Meanwhile, let me stress that not all black boxes are the same. Some work perfectly well. Others muddy the waters by hiding or obliterating crucial detail. Some boxes are opaque for everyone, as no one is currently able to open them. Others, like phones, depend on the subject in question. Some black boxes are constructed out of necessity: ignorance about the underlying mechanisms or computational intractability. Others are the product of error, laziness, or oversight. Yet again, some are constructed to unify fields. Still others draw disciplinary boundaries. In a nutshell, black-boxing is a more complex, nuanced, and powerful technique than is typically assumed.
Given these epistemic differences in aims and strategies, does it even make sense to talk about the practice of black-boxing, in the singular? This book is founded on the conviction that, yes, it does make sense. I shall argue that there is a methodological kernel that lies at the heart of all black boxes. This core phenomenon involves the identification of some entity, process, or mechanism that, for various reasons, can be idealized, abstracted away, and recast from a higher-level perspective without undermining its effectiveness, explanatory power, or autonomy. But how does this work? Answering these deceptively simple questions will require some effort.
In sum, here is our agenda. What is a black box? How does it work? How do we construct one? How do we determine what to include and what to leave out? What role do boxes play in contemporary scientific practice? If you have the patience to explore some fascinating episodes in the history of science, together we will address all of these issues. I shall argue that a detailed analysis of this widespread practice is descriptively accurate, in the sense that it captures some prominent albeit neglected aspects of scientific practice. It is also normatively adequate, in that it yields a plausible analysis of the relation between explanations pitched in different fields, at different epistemic levels. Moreover, my proposal promises to bring together the insights of both reductionism and antireductionism, while avoiding the pitfalls of both extremes. Simply put, there are two aspects of ignorance in science. On the one hand, reductionism captures how ignorance points to what needs to be discovered. Antireductionism, on the other hand, reveals how science can proceed in the face of ignorance. Black boxes capture both dimensions, showing how autonomy and reduction are complementary, not incompatible, and offering a fresh perspective on stagnating debates. For all these reasons, black boxes are the perfect candidate for replacing the stale models at the heart of philosophical discussions of scientific methodology.