“A tour de force on How the Brain Works … a masterpiece on brain science and neuro-computing that could only be created by Grossberg.”
—Leon Chua, University of California at Berkeley
“After reading many papers by the author, I always wished that he would present them in a coherent whole. And here it is. A magnificent volume of great science from mind to brain and back, a condensed ars poetica of a great scientist.”
—György Buzsáki, New York University
“Stephen Grossberg is one of the most original and influential theorists in contemporary cognitive science and computational neuroscience. In Conscious MIND Resonant BRAIN, he takes the reader on an eye-opening tour in which he addresses fundamental problems of mind and brain from his unique theoretical perspective. This is an important book that should be of interest to anyone who wonders how a brain can give rise to a mind.”
—Daniel L. Schacter, Harvard University
“In this book Stephen Grossberg shares the wisdom and encyclopedic knowledge that he acquired over 50 years of research devoted to unraveling the mysteries of the human brain. Stephen pioneered the field of theoretical neuroscience and this approach allowed him to discover general principles that govern functions as diverse as visual perception, learning and memory, attention, emotion, decision making and consciousness. It is the essence of overarching principles to be abstract and to sometimes defy intuition, but Stephen succeeds in conveying the essential in a language that is readily accessible to the non-expert. He embeds the discussion of neuronal mechanisms in the rich framework of cognitive psychology and elegantly bridges the gap between scientific evidence and subjective experience. He takes the readers by the hand and lets them discover the often surprising philosophical, ethical and societal implications of neurobiological discoveries. For those who enjoy intellectual adventures and wish to explore the boundaries of the known, this scholarly written book is a real treasure.”
—Wolf Singer, Max Planck Institute for Brain Research, Frankfurt
“How often do we have the chance to hold a true masterpiece? Grossberg’s monumental accomplishments developed over multiple decades now written at an accessible level to a broader audience. What a true privilege!”
—Luis Pessoa, University of Maryland
“Steve Grossberg is one of the most insightful and prolific writers on biological intelligence. This book is a masterful presentation of fundamental methods of modeling minds, brains and their interactions with the world, many of which are due to the author and his collaborators. The models are presented as mathematical systems, including computing and neural networks. The variables, parameters and functions represent biological and environmental concepts; mathematical conclusions are interpreted as predictions of biological behavior. In many cases these have been verified experimentally. There are illuminating and surprising connections to other disciplines, including art, music and economics. Highly recommended to a general audience.”
—Morris W. Hirsch, University of California at Berkeley
“This comprehensive overview of Grossberg’s contributions to our understanding of the mind and brain shows exactly how prescient he, and his colleagues, have been. Whatever one’s specific interest, from visual illusions to mental illness, this book provides a principled treatment of it. The principles flow from Grossberg’s early framing of many of the questions that have come to define computational neuroscience—including his early understanding of the centrality of expectations. Kudos to him for pulling it all together here.”
—Lynn Nadel, University of Arizona
“What an ambitious, lucid, eye-opening and engaging book! By using the computational theories he developed, Grossberg attempts nothing less than to integrate our knowledge of how our mind works with our knowledge of how the brain works. The topics he covers range from perception to action, from emotion to memory, and from decision making to love, with consciousness and the mind-body problem figuring prominently throughout. The story he weaves, with many incisive, delightful illustrations, is compelling and accessible. The reader is rewarded with a novel appreciation of the human psyche and artificial intelligence, and is left with admiration for Grossberg’s achievement.”
—Morris Moscovitch, University of Toronto
“This book is first and foremost an account of a personal odyssey of one of the great and most prolific scientific minds of our time trying to understand itself. As a graduate student in the new field of ‘neuroscience’ in the late 70s I was aware of Grossberg’s work, but it was largely inaccessible to me because of my limited mathematical training. I was not alone. What we have here at last is a genuine attempt by the author to make his ideas accessible to most readers as ‘a simple story, without mathematics’ (or at least with minimal math). The foundation of this story is the concept of ‘resonance’ in neural systems. Resonance has a certain similarity to Hebb’s concept of the cell assembly and its more modern variant, attractor networks. But the resonance concept goes substantially further to capture the idea that when the external input matches the already stored knowledge (expectation and attention) a dynamical structure emerges which can suppress noise and irrelevant details and enable fast and effective responses. When resonance fails, this triggers adaptation. This book is largely a treatise on how the resonance concept can help us understand almost all aspects of sensation, perception, and higher cognition. Even without all the math, this book of 600 plus pages will take considerable dedication to assimilate, but I believe that any student of neuroscience interested in the brain as the basis of mind will find it well worth the effort.”
—Bruce McNaughton, University of California at Irvine
“This book is not for the faint of heart. Stephen Grossberg has been a giant in the field of computational neuroscience for 60 years. In this book he presents his carefully developed, integrative neurobiological theory on how the nervous system generates our conscious lives. It is bold yet self-reflective and therein challenging to all students trying to figure out how the brain does its tricks. A must read.”
—Michael Gazzaniga, University of California at Santa Barbara
“How a brain makes its mind is one of the most perplexing questions in science. In this book, you will find the most comprehensive account to date by a towering pioneer of brain theory of our time.”
—Deliang Wang, Ohio State University
“Don’t read Grossberg in the original—unless you are an adept. Start with this exceptional overview of the lifework of a brilliant cognitive neuroscientist; then, organized and inspired, turn to the journals. Grossberg identifies key phenomena that open windows into the functioning of the brain; identifies the key problems that the brain needs to solve relevant to them; constructs elegant modules that might both solve those problems and give rise to the phenomena noted, and finally assembles them into systems and makes new predictions. This is textbook scientific inquiry, executed by a virtuoso. The book would be a fine component of a seminar, with students selecting the problems and modules for a deeper dive, then explicating them to the class.”
—Peter Killeen, Arizona State University
“An excellent and wide-ranging view of how the brain perceives the world for us by a pioneering brain theoretician.”
—Wolfram Schultz, University of Cambridge
Conscious MIND Resonant BRAIN
How Each Brain Makes a Mind
by Stephen Grossberg
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.
Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form and you must impose this same condition on any acquirer.
Library of Congress Control Number: 2021931712
ISBN 978-0-19-007055-7
DOI: 10.1093/oso/9780190070557.001.0001
Printed by LSC Communications, United States of America
Dedicated to
Gail Carpenter and
Deborah Grossberg Katz
with much love and gratitude and in loving memory of Elsie Grossberg
Contents
Preface
Biological intelligence in sickness, health, and technology
1. Overview
From Complementary Computing and Adaptive Resonance to conscious awareness
2. How a Brain Makes a Mind
Physics and psychology split as brain theories were born
3. How a Brain Sees: Constructing Reality
Visual reality as illusions that explain how we see art
4. How a Brain Sees: Neural Mechanisms
From boundary completion and surface filling-in to figure-ground perception
5. Learning to Attend, Recognize, and Predict the World
From vigilant conscious awareness to autism, amnesia, and Alzheimer’s disease
6. Conscious Seeing and Invariant Recognition
Complementary cortical streams coordinate attention for seeing and recognition
7. How Do We See a Changing World?
How vision regulates object and scene persistence
8. How We See and Recognize Object Motion
Visual form and motion perception obey complementary laws
9. Target Tracking, Navigation, and Decision-Making
Visual tracking and navigation obey complementary laws
10. Laminar Computing by Cerebral Cortex
Towards a unified theory of biological and artificial intelligence
11. How We See the World in Depth
From 3D vision to how 2D pictures induce 3D percepts
12. From Seeing and Reaching to Hearing and Speaking
Circular reaction, streaming, working memory, chunking, and number
13. From Knowing to Feeling
How emotion regulates motivation, attention, decision, and action
14. How Prefrontal Cortex Works
Cognitive working memory, planning, and emotion conjointly achieve valued goals
15. Adaptively Timed Learning
How timed motivation regulates conscious learning and memory consolidation
16. Learning Maps to Navigate Space
From grid, place, and time cells to autonomous mobile agents
17. A Universal Developmental Code
Mental measurements embody universal laws of cell biology and physics
Credits
References
Index
Preface
Biological intelligence in sickness, health, and technology
How does your mind work? How does your brain give rise to your mind? These are questions that all of us have wondered about at some point in our lives, if only because everything that we know is experienced in our minds. They are also very hard questions to answer. After all, how can a mind understand itself? How can you understand something as complex as the tool that is being used to understand it?
Even knowing how to begin this quest is difficult, because our brains look so different from the mental phenomena that they support. How does one make the link between the small lump of meat that we call a brain and the world of vivid percepts, thoughts, feelings, hopes, plans, and actions that we consciously experience every day? How can a visual percept like a brilliantly colored autumn scene seem so different from the sound of beautiful music, or from an intense experience of pleasure or pain? How do such diverse experiences get combined into unified moments of conscious awareness that all seem to belong to an integrated sense of self? What, after all, is consciousness and how does it work in each brain? What happens in each of our brains when we consciously see, hear, feel, or know something? And why, from a deep theoretical perspective, was evolution driven to discover consciousness in the first place?
This book provides an introductory and self-contained description of some of the exciting answers to these questions that modern theories of mind and brain have recently proposed. I am fortunate to be one of the pioneers and research leaders who have contributed to this rapidly growing understanding of how brains make minds, a passion that began unexpectedly when I took introductory psychology as a Dartmouth College freshman in 1957. A summary of how my work began, and of some of the discoveries that excited me then and continue to do so to this day, is described in a lecture on YouTube (https://youtu.be/9n5AnvFur7I) that I gave when I was awarded the 2015 Norman Anderson Lifetime Achievement Award of the Society of Experimental Psychologists (SEP, http://www.sepsych.org/awards.php). These initial insights were followed by a steady stream of discoveries that has continued to the present day.
The book tries to explain the essence of these discoveries as a series of stories that interested readers from all walks of life can enjoy. The book is filled with such stories.
Our brains are not digital computers!
You might immediately wonder: If these discoveries are so simple that they can be turned into stories, then why has it taken so long for them to be made? After all, in one sense, the answer that we are seeking is simple: Our minds emerge from the operations of our brains. Such an answer is, however, profoundly unsatisfying, because our conscious awareness seems so different from the brain’s anatomy, physiology, and biochemistry. In particular, the brain contains a very large number of cells, called neurons, that interact with one another in complex circuits. That is why many people in Artificial Intelligence, or AI, thought for a while that the brain is designed like a digital computer. Some of the greatest pioneers of digital computer design, such as John von Neumann, drew inspiration from what people knew about the brain in the 1940s. Very few people today, however, believe that the brain operates like a digital computer. It is quite a different type of system.
Knowing that your brain is not like the computer on your desk, or more recently in your hand, is a comfort. There seems to be more to our mental lives, after all, than just a morass of operating systems and programs. But what we are not does not teach us what we are. It does not, in particular, help us at all to understand how the brain’s networks of neurons give rise to learned behaviors and introspective experience as we know it. How can such different levels of description ever be linked?
A new paradigm for understanding mind and brain: Autonomous adaptive intelligence
I would argue that it has taken so long to begin to understand how a brain gives rise to a mind in a theoretically satisfying way because, to achieve this, one needed to first create a new scientific paradigm. This paradigm concerns how autonomous adaptive intelligence is achieved. As I will discuss throughout the book, this is a topic that is just as important for understanding our own minds as it is for the design of intelligent devices in multiple areas of computer science, engineering, and technology, including AI.
The discoveries that contribute to this paradigm have required new design principles that unify multiple disciplines, new mathematical concepts and methods, major computer resources, and multiple experimental techniques. I will write more below about what this paradigm is, when it began, and why it has taken so long to develop. In brief, this paradigm concerns properties of our lives that we take for granted, like your ability to continue learning at a remarkably fast rate throughout life, without your new learning washing away memories of important information that you learned before. I have called this fundamental property the stability-plasticity dilemma. Many gifted colleagues and I have been vigorously developing the theoretical and mathematical foundations of this new paradigm since I began in 1957, as summarized in my YouTube lecture for SEP.
Is the brain just a “bag of tricks”?
The difficulty of solving the mind-body problem, which ranks with the greatest problems ever considered by scientists and philosophers, has led many distinguished thinkers to despair of ever being able to explain how a mind emerges from a brain, despite overwhelming experimental evidence that it does. Some distinguished scientists have suggested that the brain is a “bag of tricks” that has been discovered during many cycles of trial and error during millions of years of natural selection (Buckner, 2013; Ramachandran, 1985). Natural selection has indeed been understood to be the dominant force in shaping the evolution of all living things since the epochal work of Charles Darwin (1859) on the origin of species. However, if a brain were just a bag of tricks, then it would be difficult, if not impossible, to discover unifying theories of how brains make minds.
The work that my colleagues and I have done contributes to a growing understanding that, in addition to opportunistic evolutionary adaptations in response to changing environments, there is also a deeper level of unifying organizational principles and mechanisms upon which coherent theories of brain and mind can securely build.
Mind-body problem: Brain theories assemble laws and modules into modal architectures
Indeed, one can explain and predict large amounts of psychological and neurobiological data using a small set of mathematical laws, such as the laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM), and a somewhat larger set of characteristic microcircuits, or modules, that embody useful combinations of functional properties, such as properties of learning and memory, decision-making, and prediction. Thus, just as in physics, only a few basic laws, or equations, are used to explain and predict myriad facts about mind and brain, when they are embodied in modules that may be thought of as the “atoms” or “molecules” of intelligence.
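For readers who want a concrete taste of these laws, here is one representative form of each, written in the shunting and gated-learning notation that is common in the neural modeling literature. The exact coefficients and signal functions vary from model to model, so treat these as a sketch rather than as the book’s definitive statements:

```latex
% STM: shunting (membrane-equation) dynamics of the activity x_i of cell i,
% driven by a total excitatory input E_i and a total inhibitory input F_i:
\frac{dx_i}{dt} = -A x_i + (B - x_i)\,E_i - (x_i + C)\,F_i

% MTM: a habituative transmitter gate y_i that recovers at rate G and is
% inactivated, or habituated, by use of the signal f(x_i):
\frac{dy_i}{dt} = G\,(1 - y_i) - H\,f(x_i)\,y_i

% LTM: gated steepest descent learning by an adaptive weight z_{ij}; the
% sampling signal f(x_i) gates learning on and off:
\frac{dz_{ij}}{dt} = f(x_i)\,\bigl(h(x_j) - z_{ij}\bigr)
```

Note how the shunting terms \((B - x_i)\) and \((x_i + C)\) keep each activity within a bounded range no matter how intense the inputs become, and how the gate \(f(x_i)\) ensures that a weight changes only when its sampling cell is active.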
Specializations of these laws in variations of these modules are then combined into larger systems that I like to call modal architectures, where the word “modal” stands for different modalities of intelligence, such as vision, speech, cognition, emotion, and action. Modal architectures are less general than a general-purpose von Neumann computer, but far more general than a traditional AI algorithm. Modal architectures clarify, for example, why we have the five senses of sight, sound, touch, smell, and taste, and how they work. Continuing with the analogy from physics, modal architectures may be compared with macroscopic objects in the world.
These equations, modules, and modal architectures underlie unifying theoretical principles and mechanisms of all the brain processes that the book will discuss, and that my stories will summarize.
Why so many books about consciousness?
Many scientists are currently working productively on the mind-body problem. An increasing number of popular books have summarized highlights of this exciting progress, often describing interesting facts about mind and brain, including facts about consciousness. Often missing, though, has been a mechanistic theoretical explanation of how a mind emerges from a brain. Without such a mechanistic linkage between mind and brain, however, brain mechanisms have no functional meaning, and behavioral functions have no mechanistic explanation.
This book will describe how, during the past several decades, major progress has been made towards providing such a mechanistic linkage. This incremental progress is embodied in an increasing number of models that individually unify the explanation and prediction of psychological, anatomical, neurophysiological, biophysical, and even biochemical data, thereby crossing the divide between mind and brain on multiple organizational levels. These models can often be derived using a particular kind of scientific story that is called a thought experiment. I will explain what a thought experiment is as I go along.
I believe that now is a particularly good time to share these discoveries with you. In addition to the fact that a lot is now known, my own sense from talking to friends in many walks of life is that many of them are eager to learn more about how their own minds work. Such a desire may be heightened by the fact that our day-to-day knowledge of the physical world, and its many artifacts in our cities, technology, and weapons, has far outpaced understanding of our internal mental worlds, which cannot be achieved through introspection alone. A book like this can help to better balance how much we understand about our external and internal worlds. I also think that, as we are surrounded by increasingly intelligent machines, we can benefit from a deeper understanding of how we are not “just another machine”, while trying to build satisfying and productive lives and societies.
The varieties of brain resonances: All conscious states are resonant states
For example, I will explain that “all conscious states are resonant states”. The importance of this assertion motivated the title of this book. I will describe the resonances that seem to underlie our conscious experiences of seeing, hearing, feeling, and knowing. These explanations will include where in the brain these resonances take place, how they occur there, and why evolution may have been driven to discover conscious mental states in the first place. I will also clarify how these resonances interact when we simultaneously see, hear, feel, and know something, all at once, about a person or event in our world. This description is part of a burgeoning classification of resonances. I will also explain why not all resonant states are conscious, and why not all brain dynamics are resonant. These results contribute to solving what has been called the Hard Problem of Consciousness.
From brain science to mental disorders, irrational decisions, and the human condition
Mind-brain insights are as important for understanding the normal mind and brain as they are for understanding various mental disorders. I believe that understanding of “abnormal” mental states requires an understanding of how “normal” or “typical” mental states arise. The book will discuss symptoms of mental disorders from this perspective, including Alzheimer’s disease, autism, Fragile X syndrome, schizophrenia, medial temporal amnesia, and visual and auditory agnosia and neglect. These insights typically arose when I noticed, after having derived a sufficient theoretical understanding of an aspect of normal behavior—whether perceptual, cognitive, or affective—that syndromes of clinical symptoms popped out when these normal neural mechanisms became imbalanced in particular ways. These imbalances express themselves in behavior as interactive, or emergent, properties of networks of neurons, indeed whole brain systems, interacting together. These imbalanced emergent properties illustrate a major reason why mental disorders are so hard to understand: Understanding an “imbalance” first requires that you understand the “balance”, and both balance and imbalance are emergent, or interactive, properties of large systems of neurons interacting together. It is not possible to understand such emergent properties without a sufficiently powerful theory to describe and characterize them.
In considering this approach to understanding mental disorders, it is important to realize that there is a vast amount of quantitative psychological and neurobiological data available about normal or typical behaviors. These data provide a secure foundation upon which to derive and test theories of mind and brain. Clinical data tend to be more qualitative and fragmented, if only because of the demands of treating sick people. Although clinical data provide important constraints on theoretical hypotheses, they are typically an insufficient basis upon which to discover them.
An understanding of how brains give rise to minds also leads to practical insights into the “human condition”, and how our minds manage to deal with a world that is full of surprises and unexpected events. Such an understanding sheds new light upon how practical wisdom through the ages has divined important truths about how we can try to live our lives in a way that respects how our minds work best, and how we can better adapt to the world’s unexpected challenges. In particular, it clarifies various maxims that parents use to try to protect their children from “bad influences”.
Along the way, these brain models shed light upon the following challenging questions: If evolution has selected brain mechanisms that can successfully adapt to changing environments, then why are so many behavioral decisions irrational? Why do some people gamble themselves into bankruptcy? More generally, what causes maladaptive decisions in situations that have several outcomes, none of them certain? Our models of cognitive-emotional interactions, or interactions between what we know about the world and how we feel about it, are helpful here. These neural models show how, when several brain processes that are individually essential for our survival are activated together by certain kinds of uncertain or risky environments, they can lead to irrational or even self-destructive behaviors. These adaptive processes thus help to ensure our survival most of the time, but there are situations where they fail. Some of these irrational behavioral properties include preference reversals and self-punitive behaviors. Learning to avoid, or at least to better control, situations where these consequences are most likely to occur is part of practical wisdom.
From brains to autonomously intelligent technologies that include Adaptive Resonance
Understanding how brains give rise to minds is also important for designing revolutionary “smart systems” in computer science, engineering, and technology, including AI and the design of increasingly smart robots. Many companies have applied biologically inspired algorithms of the kind that this book summarizes in multiple engineering and technological applications, ranging from airplane design and satellite remote sensing to automatic target recognition and robotic control. When neural models are adapted in this way, or arise through less direct kinds of biological inspiration, they are often called artificial neural networks.
Companies like Apple and Google have been exploiting the learning and recognition properties of the artificial neural networks that are called Deep Learning networks to make useful contributions to several application areas. Deep Learning networks are based upon the Perceptron learning principles introduced by Frank Rosenblatt (Rosenblatt, 1958, 1962), which led to the back propagation algorithm. I will discuss Rosenblatt’s seminal contribution in Chapter 2. I will also discuss more completely in Chapter 2 how back propagation was discovered between the 1970s and early 1980s by people like Shun-ichi Amari, Paul Werbos, and David Parker, reaching its modern form, and being successfully simulated in applications, in the work of Werbos (1974). The algorithm was then popularized in 1986 in an article by David Rumelhart, Geoffrey Hinton, and Ronald Williams (Rumelhart, Hinton, and Williams, 1986).
Although back propagation was promptly used to classify many different kinds of data, it was also recognized that it has some serious computational limitations. Networks that are based upon back propagation typically require large amounts of data to learn; learn slowly using large numbers of trials; do not solve the stability-plasticity dilemma; and use some nonlocal mathematical operations that are not found in the brain. Huge online databases and ultrafast computers that subsequently came onto the scene helped to compensate for some of these limitations, leading to its recent version as Deep Learning. Geoffrey Hinton has also been a leader of Deep Learning research (Hinton et al., 2012; LeCun, Bengio, and Hinton, 2015).
When using Deep Learning to categorize a huge database, its susceptibility to catastrophic forgetting is an acknowledged problem, since memories of what has already been learned can suddenly and unexpectedly collapse. Perhaps these problems are why Hinton said in an Axios interview on September 15, 2017 (LeVine, 2017) that he is “deeply suspicious of back propagation . . . I don’t think it’s how the brain works. We clearly don’t need all the labeled data . . . My view is, throw it all away and start over” (italics mine). This book argues that we do not need to start over.
The problems of back propagation have been well known since the 1980s. In an article that I published in 1988 (Grossberg, 1988), I listed 17 differences between back propagation and the biologically-inspired Adaptive Resonance Theory, or ART, that I introduced in 1976 and that has been steadily developed by many researchers since then, particularly Gail Carpenter. The third of the 17 differences between back propagation and ART is that ART does not need labeled data to learn.
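To give a feel for how these properties can be realized, here is a minimal sketch in Python of ART-1-style fast learning for binary inputs. The function name, parameter values, and simplifications are mine, and Chapter 5 presents the real theory; the point is to show how a vigilance test decides between refining a matched category and committing a new one, so that learning is fast, unsupervised, and does not wash away earlier memories:

```python
import numpy as np

def art1_learn(patterns, rho=0.7, alpha=0.001):
    """Cluster nonempty binary patterns with an ART-1-style fast-learning loop.

    rho:   vigilance in [0, 1]. Higher vigilance demands closer matches,
           so it creates more, finer categories.
    alpha: small "choice" parameter that breaks ties toward specific codes.
    """
    categories, labels = [], []
    for p in patterns:
        I = np.asarray(p, dtype=bool)
        # Search committed categories in order of bottom-up activation.
        order = sorted(
            range(len(categories)),
            key=lambda j: -(I & categories[j]).sum()
                           / (alpha + categories[j].sum()))
        for j in order:
            # Vigilance test: does the category's prototype match the input?
            if (I & categories[j]).sum() / I.sum() >= rho:
                categories[j] &= I           # resonance: refine only this code
                labels.append(j)
                break
            # Mismatch: "reset" this category and try the next candidate.
        else:
            categories.append(I.copy())      # novelty: commit a new category
            labels.append(len(categories) - 1)
    return categories, labels

# Two tight clusters of binary vectors are coded into two stable categories.
patterns = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0],
            [0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 0]]
_, labels = art1_learn(patterns, rho=0.6)
print(labels)   # [0, 0, 1, 1]
```

Raising the vigilance rho toward 1 forces finer categories; lowering it yields broader generalization. Either way, only the category that both wins the competition and passes the vigilance test is modified, which is how this kind of scheme addresses the stability-plasticity dilemma without labeled data.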
As the book will explain, notably in Chapter 5, ART exists in two forms: as algorithms that are designed for use in large-scale applications to engineering and technology, and as an incrementally developing biological theory. There is also fertile cross-pollination between these parallel developments. As a biological theory, ART is now the leading cognitive and neural theory about how our brains learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events. As of this writing, ART has explained and predicted more psychological and neurobiological data than other available theories. In particular, all of the foundational ART hypotheses have been supported by subsequent psychological and neurobiological data.
Moreover, key ART circuit designs can be derived from thought experiments whose hypotheses are ubiquitous properties of environments that we all experience. ART circuits emerge as computational solutions of multiple environmental constraints to which humans and other terrestrial animals have successfully adapted. This fact suggests that ART designs may, in some form, be embodied in all future autonomous adaptive intelligent devices, whether biological or artificial.
Perhaps this is why ART has done well in benchmark studies where it has been compared with other algorithms, and has been used in many large-scale engineering and technological applications, including engineering design retrieval systems that include millions of parts defined by high-dimensional feature vectors, and that were used to design the Boeing 777 (Caudell et al., 1990, 1991, 1994; Escobedo, Smith, and Caudell, 1993). This same Boeing team created the first dedicated ART optoelectronic hardware implementation (Wunsch et al., 1993). Other applications include classification and prediction of sonar and radar signals, of medical, satellite, and face imagery, of social media data, and of musical scores; control of mobile robots and nuclear power plants; air quality monitoring; strength prediction for concrete mixes; signature verification; tool failure monitoring; chemical analysis from ultraviolet and infrared spectra; frequency-selective surface design for electromagnetic system devices; and power transmission line fault diagnosis, among others that will be summarized in Chapter 5.
Based upon 50 years of rapid progress in modeling how our brains become intelligent, and in the context of the current explosion of interest in using neural algorithms in AI, it is exciting to think about how much more may be achieved when deeper insights about brain designs are incorporated into highly funded industrial research and applications.
From Laminar Computing to neuromorphic chips
Government agencies and computer companies are also working to design a new generation of computer chips, the “brains” of our household and industrial computers, that more closely emulate the designs of our biological brains. This development has been inspired by a growing realization that the exponential speed-up of computer chips to which we have grown accustomed, known as Moore’s Law, cannot continue for much longer (https://en.wikipedia.org/wiki/Moore%27s_law). The chips in our computers are typically based on classical von Neumann computer designs that have already revolutionized our lives in myriad ways. There are several reasons why this kind of chip may not be able to continue supporting Moore’s Law: To achieve greater speed, chips pack their components ever more densely to minimize the time needed to transmit signals around the chip. As chip components get denser, however, they also run hotter, and can burn up if their components get too dense. At very small scales, the laws of physics can also lead to chips with noisy components. Unfortunately, von Neumann architectures cannot work with noisy components.
Overcoming these problems may require novel nanoscale chip designs. One inspiration for such designs is the mammalian neocortex, which is the seat of all higher intelligence, including vision, speech and language, cognition, and action. The neurons in neocortical circuits often interact via discrete signals through time that are called spikes, or action potentials. Communication between a computer chip’s components via discrete spikes, rather than continuous signals in time, can reduce the amount of heat that is generated. Neocortical neuronal networks also work well despite experiencing noise levels that would incapacitate a von Neumann chip. Finally, all neocortical circuits share a similar design in which their neurons are organized into characteristic layers, often six layers in the granular neocortex that controls perception and cognition. Chips with layers would provide an extra degree of freedom to densely pack processing units into a fixed area.
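As a toy illustration of why spike-based communication can run cooler, consider a leaky integrate-and-fire neuron, a standard textbook spiking model rather than anything specific to laminar cortical circuits. Here is a minimal sketch in Python, with illustrative parameter values of my own choosing:

```python
import numpy as np

def lif_spikes(input_current, dt=0.001, tau=0.020, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: continuous input in, sparse spikes out."""
    v, spike_times = 0.0, []
    for step, current in enumerate(input_current):
        v += (dt / tau) * (current - v)   # leaky integration toward the input
        if v >= v_thresh:                 # threshold crossing emits a spike...
            spike_times.append(step * dt)
            v = v_reset                   # ...and the potential resets
    return spike_times

# A constant input of 1.5 (arbitrary units) over 0.5 s is re-coded as a sparse
# spike train; between spikes the "wire" is silent, which is what lets
# spiking hardware save power relative to continuously driven signals.
print(lif_spikes(np.full(500, 1.5)))
```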
Essentially all aspects of higher biological intelligence are supported by variations of this laminar neocortical design. With our brains as prototypes, we can thus expect future laminar chips to embody all types of higher biological intelligence using variations of the same chip design. Moreover, since all laminar chips would share a design, including the same input-output circuits, they could more easily be assembled into modal architectures that can carry out several different modalities of intelligence, leading in the future to increasingly autonomous adaptive systems, including mobile robots. Computers of the future may thus contain a very fast von Neumann chip, or network of chips, that can do many of the things that humans cannot do well, such as adding or multiplying millions of numbers in a flash, as well as a neural coprocessor chip that will embody increasingly sophisticated and diverse types of human intelligence as our understanding of how our brains work advances.
My colleagues and I played a role over the years in proposing laminar cortical designs for such chips in several basic research programs of the Office of Naval Research, or ONR, and the Defense Advanced Research Projects Agency, or DARPA. DARPA, which has also been called ARPA at various periods in its history, has been a leader in advancing many of the technologies on which our lives currently depend, notably the internet, whose precursor was called the ARPANET (https://en.wikipedia.org/wiki/ARPANET).
My earliest encounter with ARPA and the ARPANET occurred in the 1970s as part of an experiment to test how scientific collaborations could be carried out between remote scientific labs. In order to carry out this research, I was handed a modem by a member of the CIA on a street in Technology Square in the shadow of MIT in Cambridge, where I was then a professor. I used this modem to collaborate with the laboratory of Emanuel Donchin at the University of Illinois in Urbana-Champaign, by hooking it up to our home telephone line. Manny, who died in 2018, was a principal founder and innovator in recording electrical potentials from scalp electrodes to understand cognitive processes in humans. He was particularly interested in studying an event-related potential that is called the P300. I sought Manny out because ART predicted a role for the P300 in category learning, as I will explain in Chapter 5. The results of our project, which was published in 1977, concerned how decision-related processes could be inferred from task-related P300 changes through time (Squires et al., 1977).
A much more recent DARPA program in which we participated to better understand laminar cortical designs was called the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program: https://en.wikipedia.org/wiki/SyNAPSE. One reason we were included is that my colleagues and I had discovered and developed the most advanced neural models of how laminar neocortical circuits control such varied forms of intelligence as vision, speech, and cognition.
Laminar Computing is the name that I have given to the computational paradigm that models how these laminar neocortical circuits work. All of the laminar cortical models for vision, speech, and cognition that we have introduced use variations of the same canonical laminar circuit design, and provide functional explanations of how these variations can give rise to such different psychological properties. These models, which go by names like the 3D LAMINART model for vision (see Chapter 11); the conscious ARTWORD, or cARTWORD, model for speech (see Chapter 12); and the LIST PARSE model for cognitive information processing (again see Chapter 12), have provided an existence proof for the exciting prospect that Laminar Computing can achieve its full potential during this century.

FIGURE 0.1. Macrocircuit of the visual system.
Cognitive impenetrability and a theoretical method for penetrating it
A related factor that makes the mind-body problem so hard to solve is that our conscious experiences do not give us direct introspective access to the architecture of our brains. That is why for centuries it was not even realized that our brains are the source of our minds. We need additional tools to bridge this gap, which is often called the property of cognitive impenetrability. Cognitive impenetrability enables us to experience the percepts, thoughts, feelings, memories, and plans that we use to survive in a rapidly changing world, without being needlessly distracted by the intricate brain machinery that underlies these psychological experiences.
Said in a different way, brain evolution is shaped by behavioral success. No matter how beautiful your ancestors’ nerve cells were, they could all too easily have ended up as someone else’s meal if they could not work together to generate behavior that was capable of adapting quickly to new environmental challenges. Survival of each species requires that its brains operate quickly and effectively on the behavioral level. It is a small price to pay that we cannot then also solve the mind-body problem through introspection alone.
One of the problems faced by classical AI was that it often built its models of how the brain might work using concepts and operations that could be derived from introspection and common sense. Such an approach assumes that you can introspect internal states of the brain with concepts and words that we use to describe objects and actions in our daily lives. It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works, and thus failed to emulate, after decades of trying, the amazing capabilities of our brains. Common-sense concepts tried to imitate the results of brain processing on the level of our psychological awareness, rather than probing the mechanisms of brain processing that give rise to these results. They fell victim to the problem of cognitive impenetrability.
If introspection is not a reliable guide to understanding how our brains work, then what procedure can we follow to bridge the gap between mind and brain? I will describe a theoretical method in Chapter 2 whereby properties of the brain have been rapidly discovered over the past half century through an analysis of how each of our brains adapts on its own to changing environmental demands. The book hereby illustrates how our behaviors adapt “on the fly”, and how they do so even if we do not have a good teacher to help us, other than the world itself. Along the way, the book analyzes how we can learn from novel situations, and how unexpected events are integrated into our corpus of knowledge and expectations about the world. The book also discusses how a good teacher can improve our chances of learning well, even if the teacher knows only whether an answer is right, Yes or No, but not why it is right. ART has all of these properties as well, which is why it does not always require labeled data in order to learn.
In order to adapt on the fly to many different situations, our brains have evolved specialized circuits—the microcircuits and modal architectures described above—to adapt to different aspects of the world in which we live. That is one reason why brain architecture is so specialized and complex. Models of how these architectures work have been developed to explain many paradoxical facts about how we see, hear, speak, learn, remember, recognize, plan, feel, and move. I will summarize the most important examples in this book.
New computational paradigms: Laminar Computing and Complementary Computing
As I noted above in the discussion of the mind-body problem, in a rapidly growing number of examples, a single model can quantitatively simulate the experimentally recorded dynamics of identified nerve cells interacting with one another in identified circuits and the behaviors that these interactions control. One such example concerns how the visual cortex gives rise to conscious visual percepts. It has been known for almost a century that the part of the brain called the cerebral cortex—that convoluted mantle of “little gray cells” which covers much of the surface of our brains and which supports all of our higher intelligence—is typically organized into six characteristic layers of cells. This laminar organization of cells supports neural circuits that interact in prescribed ways. If we want to understand higher intelligence, we therefore need to ask: How does the cerebral cortex work? How does a laminar organization of cells contribute to biological intelligence? As I mentioned above, Laminar Computing is one of the new computational paradigms to which my colleagues and I have made significant contributions. In the special case of how the brain sees, Laminar Computing has helped us to understand: How does the visual cortex develop during childhood, learn during adulthood, group together visual cues into emergent object representations, and pay attention to interesting events in the world? Remarkably, although these seem at first to be separate problems, they all have a single unified solution. Chapter 10 is devoted to a self-contained introduction to Laminar Computing.
Another general brain design clarifies a more global scale of brain organization. This design goes a long way towards explaining the nature of brain specialization. In particular, it has been known for a long time that advanced brains process information into parallel processing streams; that is, into multiple pathways that can all respond at the same time to events in the world. Figure 0.1 shows a famous diagram by Edgar DeYoe and David van Essen of the several processing stages, and the streams to which they belong, that form part of the visual system (DeYoe and Van Essen, 1988). Computer scientists and brain theorists have often proposed that such streams function as independent modules, with each computing a specific property—such as visual color, form, or motion—independent of the other streams. However, a lot of data about how we see suggests that these processing streams do not, in fact, operate independently from one another. For example, during visual perception, changing the form of an object can change its perceived motion, and conversely; or changing the brightness of an object can change its perceived depth, and conversely.
Rather, these parallel streams often compute complementary types of information, much as a key fits into a lock, or pieces of a puzzle fit together. Each processing stream has complementary strengths and weaknesses, and only by interacting together, using multiple processing stages within each stream, can their complementary weaknesses be overcome. I have called this form of computation Complementary Computing. Overcoming complementary deficiencies of individual processing stages when they act alone often requires multiple stages of processing in complementary processing streams. I call this process Hierarchical Resolution of Uncertainty.
Complementary Computing is another revolutionary paradigm for explaining how brains give rise to minds. Accordingly, Complementary Computing helps to explain a lot of data, but more than that, it clarifies how many of the problems of the human condition that we face during life reflect properties of our minds that are deeply built into us. As I noted above in my brief remarks about mental disorders, many problems that we face can be better understood by realizing that they are often problems of balance, often because Complementary Computing balances complementary strengths and weaknesses across processing streams before a synthesis can take place through their hierarchical interaction. A scientific understanding of these processes helps to clarify what is being balanced, how an imbalance can occur, and what can be done to correct it. Later chapters will provide many specific examples of Complementary Computing. These insights also provide hints about how to live our lives in ways that better respect how our minds actually work. I will outline such implications for daily living throughout the book. These insights make an interpretive leap beyond the science, but they are also supported by it. The first chapter of the book provides an overview of some of the main scientific themes that I will consider throughout the book, before later chapters go into greater detail about the various brain processes that together constitute a mind.
All the chapters strive to be self-contained and mutually independent: Chapter topics
An inspection of the chapter titles shows how the book describes processes that tend to occur at progressively higher levels of brain organization that reflect our perception-cognition-emotion-action cycles with the world. The chapters thus start with topics in perception, and progress towards topics in cognition, emotion, action, cognitive-emotional interactions, adaptively timed behavior, and spatial navigation. These chapters have been written so that they can be read independently of one another. Reading them in the prescribed order builds cumulative insights, but this order can be broken if personal tastes require it.
Taken together, these chapters describe many of the fundamental processes that enable us to be autonomous, adaptive, and intelligent. The first eleven chapters present a lot of information about visual intelligence because vision is such a major source of our knowledge about the world and our ability to find our way through it. Chapter 12 does the same for auditory intelligence and action, while also comparing and contrasting the neural designs that embody these two critical sources of information about the world. Chapter 13 describes how the processes that regulate emotion and motivation interact with perceptual and cognitive processes to help make decisions that can realize valued goals. Chapter 14 goes further by providing a self-contained explanation of how the prefrontal cortex carries out, and integrates, many of the higher cognitive, emotional, and decision-making processes that define human intelligence, while also controlling the release of actions aimed at achieving valued goals. Chapters 15 and 16 describe how we manage to navigate through the world and carry out actions in an adaptively timed way.
Along the way, these explanations include the following topics:
how we experience conscious moments of seeing, hearing, feeling, and knowing;
how these various kinds of awareness can be integrated into unified moments of conscious awareness;
how, where, and why our brains have created conscious states of mind;
how and why we see visual illusions, and how many of the visual scenes that we believe to be “real” are built up from illusory percepts;
how we see 2D pictures as representations of 3D scenes, and thus have been able to invent pictorial art, movies, and computer screens with which to represent visual information;
how our ability to see visual motion differs from our ability to see visual form, and how form and motion information interact to help us to track unpredictably moving objects, including prey and predators, through time, even if they are moving at variable speeds behind occluding clutter;
how we learn as children to imitate language from teachers, such as our parents, whose speech sounds occur at different frequencies than our own voices can produce;
how we learn to understand the meaning of language utterances that are spoken by multiple voices at multiple speeds;
how our ability to use tools in space builds upon how we learn to reach objects when we are infants;
how we learn to recognize objects when we see them from multiple viewpoints, distances, and sizes on our retinas, notably how we do this without a teacher as our eyes freely scan complex scenes;
how we learn to pay attention to objects and events that cause valued or threatening consequences, while ignoring those that are predictively irrelevant, again as we freely experience complex situations;
how we learn to adaptively time our responses to occur at the right times in familiar situations, including social situations, where poorly timed responses could cause negative consequences;
how we learn to navigate in space, notably how our minds represent familiar routes to desired goals;
how our ability to navigate in space and our ability to adaptively time our responses exploit similar circuits, and thus why they are computed in the same part of our brains;
how our ability to use numbers, which provides a foundation for all mathematics and thus all technology, arises during evolution from more basic processes of spatial and temporal representation, but ones that are different from those that support spatial navigation and adaptive timing;
how we store sequences of objects, positions, and more general events that we have recently experienced in working memory;
how working memory is designed to enable us to learn plans from this stored information that can be used to acquire valued goals via context-appropriate actions;
and how, when these various processes break down in prescribed ways, symptoms of multiple familiar mental disorders arise.
In order to make the chapters self-contained, I review some model properties each time they occur, even if they appear in more than one chapter. This has a deeper purpose than providing self-contained chapters. It clarifies how a small number of brain mechanisms are used in specialized forms in multiple parts of our brains to realize psychological functions that appear in our daily lives to be quite unrelated.
The unifying perspective of autonomous adaptive intelligence
It is important to realize that the words mind and brain need not be mentioned in the derivations of many of the book’s design principles and mechanisms. At bottom, three words characterize the kind of understanding to which this book contributes: autonomous adaptive intelligence. The theories in this book are thus just as relevant to the psychological and brain sciences as they are to the design of new intelligent systems in engineering and technology that are capable of autonomously adapting to a changing world that is filled with unexpected events.
Mind and brain become relevant because huge databases support the hypothesis that brains are a natural physical embodiment of these principles and mechanisms. In particular, the hypotheses that I use in gedanken, or thought, experiments to derive brain models of cognition and cognitive-emotional interactions describe familiar properties of environments that we all experience. Coping with these environmental constraints is important for any autonomous adaptively intelligent agent, whether natural or artificial. Indeed, Chapter 16 notes that the processes which the book describes can be unified into an autonomous intelligent controller for a mobile agent.
Building from mind to morals, cellular organisms, and the physical world around us
Because of this universality, Chapter 17 can speculatively discuss more universal concerns of humans in the light of what the earlier chapters have taught. These discussions include biological foundations for such varied topics as morality, religion, creativity, and the human condition.
Chapter 17 also discusses design principles that are shared by brains with all living organisms that are composed of cells, notably mechanisms whereby both neural and non-neural cellular organisms develop. Brains can hereby be understood as part of a “universal developmental code”.
Chapter 17 goes on to propose why mental design principles of complementarity, uncertainty, and resonance reflect similar organizational principles of the external physical world. Several examples are provided to illustrate the theme that brains are universal self-organizing measurement devices of, and in, the physical world.
Thank you!
I would like to thank many people and programs for their contributions, both direct and indirect, to my work over the years. First and foremost, I want to thank the love of my life, my wife and best friend, Gail Carpenter, whose love, wise counsel, and adventurous spirit have made my life more fulfilling and happy than I ever dreamed possible. Gail has also been my most important scientific collaborator, and has led many research projects with her own collaborators that have made distinguished contributions to the neural networks literature. It is not possible to briefly capture how essential Gail has been in every part of my life and work during the past 45 years. An article by Barbara Moran that was written when I received the Lifetime Achievement Award of the Society of Experimental Psychologists in 2015 provides more background about our multi-faceted odyssey together (http://www.bu.edu/articles/2015/steve-grossberg-psychologist-brain-research/).
Our daughter, Deborah Grossberg Katz, has made our experiences with parenthood a joy from Day One, and a source of pride as she has gone from success to success, now flourishing as an award-winning architect and co-owner of the architecture firm ISA (http://www.is-architects.com/press). Deb has provided insightful comments about early drafts of this book.
I am thankful to the many colleagues at Boston University and in my extended scientific family around the world who have supported my work in many ways, and participated in harmonious collaborations that have led to a steady stream of scientific discoveries through the years. Along the way, I was lucky to be able to help found new interdisciplinary departments, graduate programs, research institutes, and conferences whereby many scientists and engineers could share their discoveries with a large community of students and scholars. The science and relationships that have emerged from these activities have been precious to me, and have maintained my excitement to try to further understand mind and brain.
I have particularly benefited from fine colleagues and students at the Department of Cognitive and Neural Systems (CNS; http://www.cns.bu.edu), the Center for Adaptive Systems (CAS; http://cns.bu.edu/about/cas.html), and the NSF Center of Excellence for Learning in Education, Science, and Technology (CELEST; https://www.brains-minds-media.org/archive/153/) at Boston University. I founded these and other scientific institutions, including the International Neural Network Society (http://inns.org/) and the journal Neural Networks (http://www.journals.elsevier.com/neural-networks), to help create infrastructure for our field. Putting a lot of energy into developing communities for teaching and doing research has been more than repaid by the relationships and collaborations that have been made possible within them.
More information about these infrastructure developments can be found at my Wikipedia page https://en.wikipedia.org/wiki/Stephen_Grossberg and in an invited essay that I wrote in Neural Networks when I stepped down as its founding Editor-in-Chief in 2010 (http://cns.bu.edu/Profiles/Grossberg/GrossbergNNeditorial2010.pdf). A special issue of Neural Networks to honor my 80th birthday on December 31, 2019 also contributes to this narrative, including the introductory essay by the Editor of the special issue, Donald C. Wunsch II (https://arxiv.org/pdf/1910.13351.pdf).
I have been lucky to have support for my work from key academic administrators at several stages of my life. My work may never have gotten off the ground as an undergraduate at Dartmouth College in 1957-1961 were it not for John Kemeny, who was then chairman of the mathematics department, and Albert Hastorf, who was then chairman of the psychology department. They created the academic infrastructure whereby I was able to become the first joint major in mathematics and psychology at Dartmouth. Their continued support enabled me to become a Senior Fellow in my senior year at Dartmouth and to devote that year to doing research in earnest to further develop the discoveries that began in my freshman year. John went on to become president of Dartmouth and managed to co-invent the BASIC programming language as well as one of the world’s first time-sharing systems during that time. Al moved to Stanford in 1961 to become a much-loved chairman, dean, vice president, and provost.
At the Rockefeller University, where I earned my PhD in 1967, Gian-Carlo Rota was willing to serve as my PhD advisor. He gave me the freedom to do the additional research that led to my PhD thesis, which proved the first global theorems about how neural content-addressable memories store information. Mark Kac, who then supervised all mathematical activities at Rockefeller, also supported this work.
Starting in 1975, Boston University’s President John Silber and its Dean and Provost Dennis Berkey made it possible for many of our educational and research efforts to succeed, not least by supporting the creation of CNS, which became a leading department for advanced training and research in how brains make minds, and the transfer of these discoveries to large-scale applications in engineering and technology. I am particularly grateful to the CNS staff for their flawless work and friendship over many years, notably Cindy Bradford, Carol Jefferson, and Brian Bowlby. Administrators at the university’s Grant and Accounting Office spent untold hours helping to manage our many grants and to interact with our program managers in Washington, notably Joan Kirkendall, Cynthia Kowal, and Dolores Markey. I can still vividly recall talking to Cynthia over the telephone during a Labor Day weekend working out final details in a major Center grant budget just in time to submit it.
I would particularly like to thank the government agencies that have supported interdisciplinary work such as ours for many years and thereby made it possible. These include the Army Research Office (ARO), Air Force Office of Scientific Research (AFOSR), Defense Advanced Research Projects Agency (DARPA), National Institutes of Health (NIH), National Science Foundation (NSF), and Office of Naval Research (ONR). Every researcher eventually realizes how crucial the vision and support of the program managers at these agencies is to scientific progress. In my case, I would especially like to express my gratitude for the support of program managers like Leila Bram, Genevieve Haddad, Henry Hamburger, Harold Hawkins, Todd Hylton, Soo-Siang Lim, and John Tangney. They all took a chance on funding our neural network modeling research at a time before it became a hot topic. I very much hope that these agencies will continue to value discoveries at the interdisciplinary cutting edge of science and technology, and will give both young and senior interdisciplinary scientists and engineers in these fields the financial support that they need to help shape the science and technology of the future.
Finally, I would like to thank Martin Baum, Joan Bossert, Phil Vilenov, and Melissa Yanuzzi at Oxford University Press for their support and guidance during the publication of this book, the four anonymous referees who convinced Oxford to accept the book, and the readers Gail Carpenter and Donald Wunsch who have made many useful suggestions for its improvement. Any remaining problems of fact or style will, I hope, not interfere with your pleasure in reading the book.