Richard Feynman and the Connection Machine

by W. Daniel Hillis for Physics Today

One day when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. His reaction was unequivocal, "That is positively the dopiest idea I ever heard." For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970s when his son, Carl, began studying computers at MIT.

I got to know Richard through his son. I was a graduate student at the MIT Artificial Intelligence Lab and Carl was one of the undergraduates helping me with my thesis project. I was trying to design a computer fast enough to solve common sense reasoning problems. The machine, as we envisioned it, would contain a million tiny computers, all connected by a communications network. We called it a "Connection Machine." Richard, always interested in his son's activities, followed the project closely. He was skeptical about the idea, but whenever we met at a conference or I visited CalTech, we would stay up until the early hours of the morning discussing details of the planned machine. The first time he ever seemed to believe that we were really going to try to build it was the lunchtime meeting.

Richard arrived in Boston the day after the company was incorporated. We had been busy raising the money, finding a place to rent, issuing stock, etc. We set up in an old mansion just outside of the city, and when Richard showed up we were still recovering from the shock of having the first few million dollars in the bank. No one had thought about anything technical for several months. We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.

After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.

"That sounds like a bunch of baloney," he said. "Give me something real to do."

So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.

The Machine

The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12}$ wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router's job was to find a free path through this 20-dimensional traffic jam or, if it couldn't, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently.
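The hypercube wiring can be made concrete with a small sketch. Each processor's address is a 20-bit number; its neighbors are the 20 addresses that differ from it in exactly one bit, and a message can always reach its destination by correcting one differing bit per hop. This is a minimal illustration of dimension-order routing on a hypercube, not the Connection Machine router's actual adaptive, buffered algorithm:

```python
DIM = 20  # 2**20 processors; each talks to DIM neighbors directly

def neighbors(node):
    """The DIM addresses reachable in one hop: flip each address bit."""
    return [node ^ (1 << bit) for bit in range(DIM)]

def route(src, dst):
    """One conflict-free path: fix the differing address bits in order."""
    path = [src]
    node = src
    for bit in range(DIM):
        if (node ^ dst) & (1 << bit):  # this bit still disagrees
            node ^= 1 << bit           # hop across that dimension
            path.append(node)
    return path
```

The length of such a path is the number of differing address bits, so any two of the million processors are at most 20 hops apart; the hard part, which this sketch ignores, is what to do when many messages contend for the same wire at once.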

During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.

In the meantime, the rest of us, happy to have found something to keep Richard occupied, went about the business of ordering the furniture and computers, hiring the first engineers, and arranging for the Defense Advanced Research Projects Agency (DARPA) to pay for the development of the first prototype. Richard did a remarkable job of focusing on his "assignment," stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were. When we finally picked the name of the company, Thinking Machines Corporation, Richard was delighted. "That's good. Now I don't have to explain to people that I work with a bunch of loonies. I can just tell them the name of the company."

The technical side of the project was definitely stretching our capacities. We had decided to simplify things by starting with only 64,000 processors, but even then the amount of work to do was overwhelming. We had to design our own silicon integrated circuits, with processors and a router. We also had to invent packaging and cooling mechanisms, write compilers and assemblers, devise ways of testing processors simultaneously, and so on. Even simple problems like wiring the boards together took on a whole new meaning when working with tens of thousands of processors. In retrospect, if we had had any understanding of how complicated the project was going to be, we never would have started.

'Get These Guys Organized'

I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. "We've got to get these guys organized," he told me. "Let me tell you how we did it at Los Alamos."

Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got "cockeyed," Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the "group leader" in this area, analogous to the group leaders at Los Alamos.

Part Two of Feynman's "Let's Get Organized" campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard's idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks. In 1983, studying neural networks was about as fashionable as studying ESP, so some people considered John Hopfield a little bit crazy. Richard was certain he would fit right in at Thinking Machines Corporation.

What Hopfield had invented was a way of constructing an [associative memory], a device for remembering patterns. To use an associative memory, one trains it on a series of patterns, such as pictures of the letters of the alphabet. Later, when the memory is shown a new pattern it is able to recall a similar pattern that it has seen in the past. A new picture of the letter "A" will "remind" the memory of another "A" that it has seen previously. Hopfield had figured out how such a memory could be built from devices that were similar to biological neurons.

Not only did Hopfield's method seem to work, but it seemed to work well on the Connection Machine. Feynman figured out the details of how to use one processor to simulate each of Hopfield's neurons, with the strength of the connections represented as numbers in the processors' memory. Because of the parallel nature of Hopfield's algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer.
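The mapping is easy to illustrate with a toy serial version of a Hopfield network: Hebbian outer-product training and threshold updates, with one array element playing the role of each simulated neuron. This is a generic sketch of Hopfield's recipe, not Feynman's actual Connection Machine program:

```python
import numpy as np

def train(patterns):
    """Hebbian learning: connection strengths are summed outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:          # each pattern is a vector of +1/-1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no neuron connects to itself
    return W

def recall(W, probe, max_steps=10):
    """Update every neuron from its weighted inputs until the state is stable."""
    state = probe.copy()
    for _ in range(max_steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state
```

On a parallel machine every component of `W @ state` can be computed at once, one neuron per processor, which is the source of the efficiency described above.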

An Algorithm For Logarithms

Feynman worked out the program for computing Hopfield's network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms. I mention it here not only because it is a clever algorithm, but also because it is a specific contribution Richard made to the mainstream of computer science. He invented it at Los Alamos.

Consider the problem of finding the logarithm of a fractional number between 1.0 and 2.0 (the algorithm can be generalized without too much difficulty). Feynman observed that any such number can be uniquely represented as a product of numbers of the form $1 + 2^{-k}$, where $k$ is an integer. Testing each of these factors in a binary number representation is simply a matter of a shift and a subtraction. Once the factors are determined, the logarithm can be computed by adding together the precomputed logarithms of the factors. The algorithm fit especially well on the Connection Machine, since the small table of the logarithms of $1 + 2^{-k}$ could be shared by all the processors. The entire computation took less time than division.
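A sketch of the algorithm in Python, with floating-point multiplications standing in for the shift-and-subtract binary arithmetic the hardware would use:

```python
import math

# Precomputed table of log(1 + 2**-k), shared by all processors.
TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 54)]

def feynman_log(x):
    """Logarithm of x in [1.0, 2.0) via factors of the form 1 + 2**-k."""
    assert 1.0 <= x < 2.0
    product, result = 1.0, 0.0
    for k in range(1, 54):
        factor = 1.0 + 2.0 ** -k      # in binary: shift right by k, then add
        if product * factor <= x:     # does this factor fit?
            product *= factor
            result += TABLE[k - 1]    # a table lookup replaces real work
    return result
```

Greedily taking every factor that still fits leaves a residue smaller than $2^{-53}$, so the sum of table entries agrees with the true logarithm to double precision, using nothing but shifts, adds, and lookups.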

Concentrating on the algorithm for a basic arithmetic operation was typical of Richard's approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for [Scientific American], he was disappointed that it left out too many details. He asked, "How is anyone supposed to know that this isn't just a bunch of crap?"

Feynman's insistence on looking at the details helped us discover the potential of the machine for numerical computing and physical simulation. We had convinced ourselves at the time that the Connection Machine would not be efficient at "number-crunching," because the first prototype had no special hardware for vectors or floating point arithmetic. Both of these were "known" to be requirements for number-crunching. Feynman decided to test this assumption on a problem that he was familiar with in detail: quantum chromodynamics.

Quantum chromodynamics is a theory of the internal workings of atomic particles such as protons. Using this theory it is possible, in principle, to compute the values of measurable physical quantities, such as a proton's mass. In practice, such a computation requires so much arithmetic that it could keep the fastest computers in the world busy for years. One way to do this calculation is to use a discrete four-dimensional lattice to model a section of space-time. Finding the solution involves adding up the contributions of all of the possible configurations of certain matrices on the links of the lattice, or at least some large representative sample. (This is essentially a Feynman path integral.) The thing that makes this so difficult is that calculating the contribution of even a single configuration involves multiplying the matrices around every little loop in the lattice, and the number of loops grows as the fourth power of the lattice size. Since all of these multiplications can take place concurrently, there is plenty of opportunity to keep all 64,000 processors busy.
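The lattice bookkeeping behind this can be made concrete in a few lines. In the sketch below, random 3x3 matrices stand in for the SU(3) link matrices of real lattice QCD (purely an illustrative assumption), and `plaquette` multiplies the four links around one elementary loop of the lattice:

```python
import numpy as np

L = 4   # lattice sites per dimension (a toy lattice)
D = 4   # four space-time dimensions
rng = np.random.default_rng(0)

# One 3x3 "link" matrix per site and direction; random matrices are a
# stand-in for the SU(3) matrices of real lattice QCD.
links = rng.standard_normal((L,) * D + (D, 3, 3))

def shift(site, d):
    """The neighboring site one step along dimension d (periodic)."""
    s = list(site)
    s[d] = (s[d] + 1) % L
    return tuple(s)

def plaquette(site, mu, nu):
    """Multiply the four link matrices around one elementary loop."""
    U1 = links[site + (mu,)]             # step forward along mu
    U2 = links[shift(site, mu) + (nu,)]  # then forward along nu
    U3 = links[shift(site, nu) + (mu,)]  # traversed backward: invert
    U4 = links[site + (nu,)]             # traversed backward: invert
    return U1 @ U2 @ np.linalg.inv(U3) @ np.linalg.inv(U4)

# 6 * L**4 elementary loops: the fourth-power growth described above.
n_plaquettes = L ** D * D * (D - 1) // 2
```

Even on this toy lattice there are 1,536 loop products, every one independent of the others, which is why the calculation maps so naturally onto tens of thousands of processors.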

To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

He was excited by the results. "Hey Danny, you're not going to believe this, but that machine of yours can actually do something [useful]!" According to Feynman's calculations, the Connection Machine, even without any special hardware for floating point arithmetic, would outperform a machine that CalTech was building for doing QCD calculations. From that point on, Richard pushed us more and more toward looking at numerical applications of the machine.

By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

Fortunately, he was right. When we put together the chips, the machine worked. The first program run on the machine in April of 1985 was Conway's game of Life.

Cellular Automata

The game of Life is an example of a class of computations that interested Feynman called [cellular automata]. Like many physicists who had spent their lives going to successively lower and lower levels of atomic detail, Feynman often wondered what was at the bottom. One possible answer was a cellular automaton. The notion is that the "continuum" might, at its lowest levels, be discrete in both space and time, and that the laws of physics might simply be a macro-consequence of the average behavior of tiny cells. Each cell could be a simple automaton that obeys a small set of rules and communicates only with its nearest neighbors, like the lattice calculation for QCD. If the universe in fact worked this way, then it presumably would have testable consequences, such as an upper limit on the density of information per cubic meter of space.
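The game of Life that first ran on the machine is exactly such an automaton: each cell is alive or dead, and its next state depends only on its eight nearest neighbors. A minimal sketch of one synchronous update step on a wrap-around grid:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's game of Life (toroidal grid)."""
    # Count the eight neighbors of every cell by shifting the whole grid.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell lives next step with exactly 3 neighbors, or 2 if already alive.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)
```

On the Connection Machine each cell would be one processor exchanging state with its neighbors; here the whole grid updates at once through array shifts, but the discipline is the same: simple local rules, applied everywhere simultaneously.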

The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models "kooky," but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

There are many potential problems with cellular automata as a model of physical space and time; for example, finding a set of rules that obeys special relativity. One of the simplest problems is just making the physics so that things look the same in every direction. The most obvious patterns of cellular automata, such as a fixed three-dimensional grid, have preferred directions along the axes of the grid. Is it possible to implement even Newtonian physics on a fixed lattice of automata?

Feynman had a proposed solution to the anisotropy problem which he attempted (without success) to work out in detail. His notion was that the underlying automata, rather than being connected in a regular lattice like a grid or a pattern of hexagons, might be randomly connected. Waves propagating through this medium would, on the average, propagate at the same rate in every direction.

Cellular automata started getting attention at Thinking Machines when Stephen Wolfram, who was also spending time at the company, suggested that we should use such automata not as a model of physics, but as a practical method of simulating physical systems. Specifically, we could use one processor to simulate each cell and rules that were chosen to model something useful, like fluid dynamics. For two-dimensional problems there was a neat solution to the anisotropy problem since [Frisch, Hasslacher, Pomeau] had shown that a hexagonal lattice with a simple set of rules produced isotropic behavior at the macro scale. Wolfram used this method on the Connection Machine to produce a beautiful movie of a turbulent fluid flow in two dimensions. Watching the movie got all of us, especially Feynman, excited about physical simulation. We all started planning additions to the hardware, such as support of floating point arithmetic that would make it possible for us to perform and display a variety of simulations in real time.

Feynman the Explainer

In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,

"We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids."

This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.

We tried to take advantage of Richard's talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. "Don't say `reflected acoustic wave.' Say [echo]." Or, "Forget all that `local minima' stuff. Just say there's a bubble caught in the crystal and you have to shake it out." Nothing made him angrier than making something simple sound complicated.

Getting Richard to give advice like that was sometimes tricky. He pretended not to like working on any problem that was outside his claimed area of expertise. Often, at Thinking Machines when he was asked for advice he would gruffly refuse with "That's not my department." I could never figure out just what his department was, but it did not matter anyway, since he spent most of his time working on those "not-my-department" problems. Sometimes he really would give up, but more often than not he would come back a few days after his refusal and remark, "I've been thinking about what you asked the other day and it seems to me..." This worked best if you were careful not to expect it.

I do not mean to imply that Richard was hesitant to do the "dirty work." In fact, he was always volunteering for it. Many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls. But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn't understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.

The charming side of Richard helped people forgive him for his uncharming characteristics. For example, in many ways Richard was a sexist. Whenever it came time for his daily bowl of soup he would look around for the nearest "girl" and ask if she would fetch it to him. It did not matter if she was the cook, an engineer, or the president of the company. I once asked a female engineer who had just been a victim of this if it bothered her. "Yes, it really annoys me," she said. "On the other hand, he is the only one who ever explained quantum mechanics to me as if I could understand it." That was the essence of Richard's charm.

A Kind Of Game

Richard worked at the company on and off for the next five years. Floating point hardware was eventually added to the machine, and as the machine and its successors went into commercial production, they were used more and more for the kind of numerical simulation problems that Richard had pioneered with his QCD program. Richard's interest shifted from the construction of the machine to its applications. As it turned out, building a big computer is a good excuse to talk to people who are working on some of the most exciting problems in science. We started working with physicists, astronomers, geologists, biologists, chemists --- every one of them trying to solve some problem that it had never been possible to solve before. Figuring out how to do these calculations on a parallel machine requires an understanding of the details of the application, which was exactly the kind of thing that Richard loved to do.

For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, "What is the simplest example?" or "How can you tell if the answer is right?" He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. "Don't bug me. I'm busy," he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms.

The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such "punctuated equilibrium," so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker-Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our "discoveries" were covered in the first few pages. When I called back and told Richard what I had found, he was elated. "Hey, we got it right!" he said. "Not bad for amateurs."

In retrospect I realize that in almost everything that we worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress.

Telling The Good Stuff You Know

Actually, I doubt that it was "progress" that most interested Richard. He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else.

I remember a conversation we had a year or so before his death, walking in the hills above Pasadena. We were exploring an unfamiliar trail and Richard, recovering from a major operation for the cancer, was walking more slowly than usual. He was telling a long and funny story about how he had been reading up on his disease and surprising his doctors by predicting their diagnosis and his chances of survival. I was hearing for the first time how far his cancer had progressed, so the jokes did not seem so funny. He must have noticed my mood, because he suddenly stopped the story and asked, "Hey, what's the matter?"

I hesitated. "I'm sad because you're going to die."

"Yeah," he sighed, "that bugs me sometimes too. But not so much as you think." And after a few more steps, "When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway."

We walked along in silence for a few minutes. Then we came to a place where another trail crossed and Richard stopped to look around at the surroundings. Suddenly a grin lit up his face. "Hey," he said, all trace of sadness forgotten, "I bet I can show you a better way home."

And so he did.


Every so often, a book addressed to scholars and general readers alike attempts to reveal the workings of the human mind in a manner both broadly integrative in scope and abundantly rich in detail. In the mid-1960s, for example, Arthur Koestler's The Act of Creation sought to explain in terms of a single powerful mental mechanism ("bisociation," the unlikely mental conjoining of two previously unassociated contexts of knowledge or experience) the widely disparate processes of humor, artistic creation, and scientific discovery. Another such book, the subject of this review, is How the Mind Works by Steven Pinker, a psychology professor and head of M.I.T.'s Center for Cognitive Neuroscience. Pinker's latest work is a skillful blend of theory and evidence, sweeping generalizations and concrete illustrations--all aimed at presenting in the space of just under 600 pages "a bird's eye view of the mind and how it enters into human affairs." Pinker's basic thesis is that "a psychology of many computational faculties engineered by natural selection is our best hope for a grasp on how the mind works that does justice to its complexity." (p. 58) He argues well for this view in the three opening chapters, and the weight of evidence in the five chapters of applications that follow makes the conclusion seem inescapable. Such a wealth of interesting and valuable material is included in Pinker's hefty tome that this review will of necessity be but a selective glance of the "bird's eye" at some of its most salient virtues and flaws--beginning, appropriately, with Pinker's definition of "mind," which he presents in Chapter 1, "Standard Equipment."

Following Tooby and Cosmides of the Center for Evolutionary Psychology, Pinker synthesizes (another Koestlerian "bisociation"!) computational theory from cognitive psychology and natural selection from evolutionary biology. On this framework, he weaves together a vast array of ideas into a "big picture" about the complex structure of the human mind, which is, he says, "a system of organs of computation, designed by natural selection to solve the kinds of problems our ancestors faced in their foraging way of life, in particular, understanding and outmaneuvering objects, animals, plants, and other people." (p. 21) This definition provides the basis for a wide-ranging discussion which, in many ways, is a model of clarity, precision, liveliness, and wit. Yet, it also contains a serious inconsistency with other things Pinker says, thus sowing seeds of confusion about where he stands on the philosophical issue of the mind-body relation. He says that "the mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation" (p. 21), that the mind "is not the brain, but...a special thing the brain does, which makes us see, think, feel, choose, and act. That special thing is information processing, or computation." (p. 24) Pinker refers even later to "the overwhelming evidence that the mind is the activity of the brain" (p. 64). Obviously, then, the organ involved in all these mental processes is not the mind, but the brain, "an exquisitely complex organ" (p. 152), which has "a breathtaking complexity of physical structure fully commensurate with the richness of the mind" (p. 64)--"a precision instrument that allows a creature to use information to solve the problems presented by its lifestyle." (p. 182)

Thus, although Pinker refers in his definition to the mind and its component "mental modules" as comprising a system of organs of computation that solve problems, he is really referring to the brain--and, more specifically, to regions of the brain "that are interconnected by fibers that make the regions act as a unit" (p. 30, emphasis added). The net effect of this is to cloud and perhaps even undercut his two most basic insights: (1) mind-brain dualism is false: the mind is not a distinct spiritual entity that somehow coexists temporarily with the brain, but is completely integral with the brain, ceasing to exist along with brain function; and (2) mind-brain identity is false: the mind is not literally the brain, since the brain carries out numerous functions that are not mental. As this reviewer has argued ("A Dual-Aspect Approach to the Mind-Body Problem," Reason Papers #1, 1974), we can be aware of one and the same mental process of the brain through two radically different cognitive channels: first-hand and personally via introspection, and second-hand and scientifically, via extrospection, as a scientist would when discovering and studying an interconnected brain region that carries out a given mental function. To refer to such brain regions and the functions they carry out as "the mind" or "mental modules" or "mental organs" seems altogether reasonable and accurate, and gives Pinker every bit of the semantic leeway he needs.

To best escape this philosophical confusion at the base of his thesis, therefore, Pinker would do well to loosen his stricture that the mind is not the brain, but (some of) what the brain does--instead acknowledging that the mind is the brain insofar as it is doing some of what it does. In other, more graceful words, he ought to modify his definition of "mind" so that it accords with his basic insights: the mind is the brain insofar as it carries out (or is able to carry out) mental processes. Or, in more Pinkerian terms: "the mind is a system of brain structures that function as organs of computation...." As noted, Pinker already construes mental modules or mental organs as being any interconnected group of brain parts or brain regions insofar as they carry out (or are able to carry out) a mental process. "The mind is organized into modules or mental organs, each with a specialized design..." (p. 21) "[M]ental modules are not likely to be visible to the naked eye as circumscribed territories on the surface of the brain [but instead] sprawling messily over the bulges and crevasses of the brain [or] broken into regions that are interconnected by fibers that make the regions act as a unit...[T]he circuitry underlying a psychological module might be distributed across the brain in a spatially haphazard manner" (p. 30-1) The above proposed modification of his definition of "mind" would thus simply ratify and formalize his insight about mental organs or modules being specialized brain structures, and it would firmly place his work in the best tradition of non-spiritualist, non-reductionist theories of mind, as exemplified by the "mentalist monism" of neuroscientist Roger Sperry (Science and Moral Priority: Merging Mind, Brain, and Human Values, Columbia University Press, 1983) and others.

At heart, Pinker is a realist--both in regard to the nature and existence of the external world and our knowledge of it (see especially pp. 308 and 333), and in regard to the nature of our cognitive faculties. Neither physical objects, nor living organisms, nor minds consist of a single, homogeneous kind of stuff, somehow miraculously giving them their powers to do things. Pinker rightly consigns arguments postulating "mental spam" or "connectoplasm" and other formless, nearly-magical entities to the same theoretical dustbin to which biologists long ago relegated the concept of "protoplasm" and physicists did even earlier with the ancient tetrad of "earth, air, fire, and water." Instead, mind, like the rest of nature, is hierarchically structured (another recurring theme in Koestler's book) and has a "heterogeneous structure of many specialized parts." (p. 31) The balance of Pinker's book is devoted to elucidating the nature of that structure and its many parts--and how they might have arisen through natural selection. Despite the decisive evidence for natural selection, however, there is a great deal of hostility to the idea. "People desperately want Darwinism to be wrong..." (p. 165) As Pinker makes abundantly clear, however, "...our understanding of how the mind works will be woefully incomplete or downright wrong unless it meshes with our understanding of how the mind evolved." (p. 174) Throughout, his concern is to clearly distinguish his view from "the dominant view of the human mind in our intellectual tradition...the Standard Social Science Model [which] proposes a fundamental division between biology and culture." (p. 46) He patiently untangles its proponents' numerous errors: failing to keep moral and scientific issues distinct from one another, denying that there is an innate human nature and in particular a structure to the human mind, and assuming that the only alternative explanations are "in nature" vs. "socially constructed"--which, Pinker says, "omits a third alternative: that some categories are products of a complex mind designed to mesh with what is in nature." (p. 57)

Pinker begins Chapter 2, "Thinking Machines," by carefully distinguishing between the problems of the nature and origin of mind in the sense of intelligence and mind in the sense of consciousness. The former, he says, has been solved by cognitive science, intelligence being "the ability to attain goals in the face of obstacles by means of decisions based on rational (truth-obeying) rules"--in other words, a la Ayn Rand, the mind is the characteristically human means of survival. The source of intelligence, Pinker says, is not "a special kind of spirit or matter or energy but...information" (p. 65)--information being embodied in some piece of matter that "stands for" the state of affairs that the information is about. This is the basis of the computational theory of mind, the idea that intelligence is computation, "the processing of symbols: arrangements of matter that have both representational and causal properties, that is, that simultaneously carry information about something and take part in a chain of physical events." (p. 76) Thus, even if some special form of matter, spirit, or energy were someday revealed to underlie consciousness, what makes a system intelligent is not any of these factors, but what the symbols it uses stand for and how the dynamic patterns inside the system "are designed to mirror truth-preserving relationships." (p. 77)

The fact that the computational theory of mind "has solved millennia-old problems in philosophy, kicked off the computer revolution, posed the significant questions of neuroscience, and provided psychology with a magnificently fruitful research agenda" (p. 77) has not prevented thinkers such as philosopher John Searle and mathematical physicist Roger Penrose from attacking it head-on. Both men's arguments have been decisively answered, Pinker maintains; in addition, "unlike the theory they attack, they are so unconnected to discovery and explanation in scientific practice that they have been empirically sterile, contributing no insight and inspiring no discoveries on how the mind works." (p. 97) There is much more in this chapter that merits close study, but it will have to suffice here to mention Pinker's careful, subtle dissection of "connectionism," a rival form of computational theory that attempts to explain intelligence in terms of simple neural networks; instead, Pinker says, "it is the structuring of networks into programs for manipulating symbols that explains much of human intelligence...[especially] human language and the parts of reasoning that interact with it." (p. 112)

As for mind qua consciousness, Pinker slashes his way through the tangle of meanings that has grown up around the term. Sometimes "consciousness" is used as a synonym for intelligence, sometimes for self-knowledge, both of which are understood by cognitive science. Sometimes it is taken to mean access to information (as against information out of reach in the unconscious or subconscious), which is regarded not as a mystery but simply as a problem that is in the process of being solved by cognitive science. The most interesting feature that Pinker attributes to access-consciousness is that "an executive, the 'I', appears to make choices and pull the levers of behavior." (p. 139) This would seem to point to a naturalistic explanation for our experience of a self or will. Unfortunately, Pinker's discussion of the freedom of the will is problematic. In saying that "the science game treats people as material objects, and its rules are the physical processes that cause behavior through natural selection and neurophysiology," he spotlights the Humean "event analysis, cause-effect" paradigm that has ruled modern science almost since its inception. On this model, there really is no room for a view of people as sentient, rational, free-willed agents--and no answer to his question: "How can my actions be a choice for which I am responsible, if they are completely caused by my genes, my upbringing and my brain state?" (p. 558)

On the Humean model of causality, it is all too true that "the scientific mode of explanation cannot accommodate the mysterious notion of uncaused causation that underlies the will...[A] random event does not fit the concept of free will any more than a lawful one does, and could not serve as the long-sought locus of moral responsibility...Either we dispense with all morality as unscientific superstition, or we find a way to reconcile causation...with responsibility and free will." (pp. 54-5) The latter is precisely what has to be done, and the way to do it is to proceed along the lines of the Aristotelian, agent-cause model of causality elaborated in the writings of Roger Sperry and Edward Pols. Rather than exploring such an alternative to the metaphysical and methodological dogmas at the foundations of modern science, however, Pinker accepts them as given and resorts instead to the tattered Kantian dodge of segregating science from morality--as if freedom and dignity were no real part of "what makes us tick and how we fit into the physical universe" (p. 56), and as if "cloistering scientific and moral reasoning in separate areas" would be an adequate reconciliation of science and morality and an adequate safeguard against dehumanizing people or deontologizing science.

Finally, "consciousness" is sometimes taken in the sense pertaining to the remaining mystery about mind, referring to "sentience, subjective experience, phenomenal awareness, raw feels, first-person present tense, 'what it is like to be or do something'..." (p. 135) Pinker admits that sentience and access may be inseparable, dual aspects of consciousness, despite their being at least conceptually distinguishable, though he has no way (yet) to answer people like Dennett or Rey who claim that qualia (sentient experiences) are either cognitive illusions or inconsequential to our understanding of how the mind works, and he ultimately (in Chapter 8) affects a "perhaps we weren't meant to know" stance that seems to amount to another Kantian cop-out on the research and rethinking that needs to be done. He also seems overly perplexed by thought experiments involving "zombies" and at one point says: "I can imagine a creature whose layer 4 [of the cortex] is active but who does not have the sensation of red or the sensation of anything; no law of biology rules the creature out." (p. 561) True, but imagination should not be confused with empirical research! If indeed there are such creatures who have "access without sentience"--e.g., those suffering from blindsight syndrome--isn't the needed line of research obvious? Following Pinker's own argument regarding intelligence (p. 65), find out how the system provides access without sentience--i.e., what parts of the brain are not working, or are working differently from those of people with both sentience and access. This reviewer has written elsewhere ("Review of Fred Dretske's Naturalizing the Mind," Journal of Consciousness Studies, Vol. 4, No. 3, 1997) of the fact that the background proprioceptive awareness of bodily states and processes is emerging as the most likely candidate for the "what-is-it-like" quality accompanying the foreground content of conscious awareness.
Qualia, too, will yield their mysteries to the inexorable progress of cognitive science--much to the chagrin of the "Mysterians," to be sure.

In Chapter 3, "Revenge of the Nerds," Pinker explores how the mind--and, more broadly, living organisms--could have evolved. He voices his agreement with Richard Dawkins that "a straightforward consequence of the argument for the theory of natural selection [is that] life, anywhere it is found in the universe, will be a product of natural selection," and he reviews the various alternative ideas that have been advanced and later shown to be "impotent to explain the signature of life, complex design." (p. 158) Quoting complexity theorist Stuart Kauffman's remark that evolution may be "a marriage of selection and self-organization," Pinker wisely acknowledges that complexity theory--the idea that mathematical principles of order underlie many complex systems and that "feats like self-organization, order, stability, and coherence may be innate properties of some complex systems"--may help explain how organisms and major organ systems came into being in the first place, and that "if there are abstract principles that govern...web[s] of interacting parts..., natural selection would have to work with those principles." (p. 161) But even if complexity theory does explain the constraints within which adaptation works, that does not render natural selection obsolete. The complexity involved is, after all, "functional, adaptive design: complexity in the service of accomplishing some interesting outcome...Natural selection remains the only theory that explains how adaptive complexity, not just any old complexity, can arise, because it is the only nonmiraculous, forward-direction theory in which how well something works plays a causal role in how it came to be." (p. 162) The evidence that life evolved by natural selection is overwhelming.
Not only is natural selection readily observable in the wild--and paralleled by the numerous forms of artificial selection humans have practiced for thousands of years--but mathematical proofs from population genetics and computer simulations from the relatively new field of Artificial Life have also shown that natural selection can work.

Considering the obvious selective advantage of having an accurate sense of the real objects in the world, it is no surprise that the study of the psychology of perception has been in the forefront of evolutionary psychology's programme to "reverse-engineer" the mind, which Pinker discusses in Chapter 4, "The Mind's Eye". In contrast to skeptical philosophers who try to argue against "our ability to know anything by rubbing our faces in illusions," perception scientists "marvel that it works at all." The accuracy of our brains in analyzing the swirling patterns of energy that strike our sensory receptors and discerning objects and motion "is impressive because the problem the brain is solving is literally unsolvable; [deducing] an object's shape and substance from its projection is an 'ill-posed' problem...which has no unique solution." Through evolution, however, vision has made these problems solvable "by adding...assumptions about how the world we evolved in is, on average, put together...When the current world resembles the average ancestral environment, we see the world as it is." (pp. 212-3) When these assumptions (some of which are discussed on pp. 234 and 247-9) are violated, illusion can result. The scientific value of the study of illusion is thus its revelation of "the assumptions that natural selection installed to allow us to solve unsolvable problems and know, much of the time, what is out there." (p. 213) Of particular note are Pinker's discussions of the illusions by which stereoscopes trick us into seeing flat pictures as three-dimensional, the various "tricks" ("mental-rotation," "multiple-view," and "geon") our minds use to recognize shapes, the recently gathered evidence that mental images for both perception and imagination are indeed "pictures in the head," and the existence of a critical period in infancy for the development of binocular vision, "as opposed to rigid hard-wiring or life-long openness to experience" (p. 240), the latter being but one of many examples Pinker offers in his book against the oversimplified alternative of innate ideas vs. tabula rasa, favoring a view of learning not as the "indispensable shaper of amorphous brain tissue [but instead] an innate adaptation to the project-scheduling demands of a self-assembling animal." (p. 241)

Because of the limitations of images (see pp. 294-296), human beings also evolved the ability to think in terms of ideas, the subject of Chapter 5, "Good Ideas." In contrast to Darwin, who thought that his evolutionary theory would put psychology on a new foundation, scientists such as his contemporary and rival, Alfred Russel Wallace, and modern-day astronomer Paul Davies could see no good evolutionary reason for human intelligence to exist, turning instead for an explanation to the superior guiding intelligence postulated by creationism or some form of self-organizing process eventually explainable by complexity theory. Pinker follows Stephen Jay Gould in pointing out what Wallace, Davies, and others overlook: that the brain has made use of "exaptations: adaptive structures that are 'fortuitously suited to other roles if elaborated' (such as jaw bones becoming middle-ear bones) and 'features that arise without functions...but remain available for later co-optation' (such as the panda's thumb, which is really a jury-rigged wristbone)." (p. 301) The human mind really isn't "adapted to think about arbitrary abstract entities...We have inherited a pad of forms that capture the key features of encounters among objects and forces, and the features of other consequential themes of the human condition such as fighting, food, and health. By erasing the contents and filling in the blanks with new symbols, we can adapt our inherited forms to more abstruse domains...We pry our faculties loose from the domains they were designed to work in, and use their machinery to make sense of new domains that abstractly resemble the old ones." (pp. 358-9)

Pinker explains at length "why the original structures were suited to being exapted" (p. 301), in the process also showing why the intuitive scientific and mathematical thinking that people do virtually from birth onward (contra William James' "bloomin', buzzin' confusion" model of infant awareness) is not always reliable for problems outside the demands of the natural environment. Faulty inference is to the conceptual level what illusion is to the perceptual; a close study of each kind of glitch reveals the original optimal conditions for the corresponding form of awareness--and how the formal sciences, mathematics, logic, etc., were developed at least partly to compensate for less optimal circumstances. Among the intuitive theories presumed to comprise the mind's natural repertoire for making sense of the world are modules for objects and forces, animate beings, artifacts, minds, and natural kinds such as animals, plants, and minerals--as well as "modes of thought and feeling for danger, contamination, status, dominance, fairness, love, friendship, sexuality, children, relatives, and the self." (p. 315) Pinker stresses the point that what is innate is not knowledge itself, but ways of knowing. While exploring how these modules operate as babies learn about objects and motion and how to distinguish inanimate objects from living beings, he dwells on the very important issue of essentialism (are there natural kinds in the world?) and the equally important question of whether there really are objects in the world. Pinker defends essentialism against both the extreme essentialists, such as Mortimer Adler, who argue that human beings could not have evolved, and the modern anti-essentialists who use "essentialist" as a term of abuse against those who try to genuinely explain human thought and behavior (rather than merely redescribing it along ideological lines).

But do natural kinds exist? And why do we use concepts anyway? What is their biological utility? What in nature dictates that they are a necessity to our survival--if they are? The standard arguments given in psychology texts--memory overload and mental chaos--do not make sense, Pinker says, because we have more than adequate storage space for our experiential data (and we often remember both categories and their members), and "organization for its own sake is useless," if not downright counterproductive. (p. 307) Instead, he argues, the survival value of concepts and categories, the reason they evolved into being, is their predictive power. One kind of category uses "stereotypes, fuzzy boundaries, and family-like resemblances" and is more useful for simply "recording the clusters in reality," for "examining objects and uninsightfully recording the correlations among their features," its predictive power coming from similarity. Categories of the other type are well-defined, having "definitions, in-or-out boundaries, and common threads running through the members," and they "work by ferreting out the laws that put the clusters there," their predictive power coming from deduction. (pp. 309-10) Sometimes the former--registering similarities--is the best we can do; but when we are able to use the latter, with definitions and lawful connections, we are not just fantasizing, Pinker says. The world really is "sculpted and sorted by laws that science and mathematics aim to discover," and "our theories, both folk and scientific, can idealize away from the messiness of the world and lay bare its underlying causal forces." The systems of rules incorporated in "lawful" categories "are idealizations that abstract away from complicating aspects of reality, but are no less real for all that." (pp. 308, 312) Similarly for concrete shapes, motions, and objects themselves.
As against people like Buckminster Fuller or Arthur Koestler who claim that modern science has "dematerialized matter" and that solidity is an illusion, Pinker avers that "the world does have surfaces and chairs and rabbits and minds. They are knots and patterns and vortices of matter and energy that obey their own laws and ripple through the sector of space-time in which we spend our days." (p. 333) Such a ringing endorsement by a scientist of common-sense realism--the view that the contents of our perceptual and conceptual awareness are real effects of real causes--is reassuring and welcome, indeed.

From a humanistic standpoint, Chapters 6 and 7 are arguably the most important sections of Pinker's book. They should be required reading for all college majors in anthropology, sociology, and psychology--and for all parents. In Chapter 6, "Hotheads," one of the shorter chapters of his book, Pinker manages to explode the reason-emotion dichotomy and to enlarge and enhance our concept of a universal human nature--an amazing accomplishment. To this, he adds some other very worthwhile material, including discussions of the biology of the positive and negative emotions, happiness, romantic love, and "altruism." A highlight of the chapter is the set of extremely valuable insights, supported by copious citations of contemporary research, that the human emotions are universal, that (in Darwin's words) "the same state of mind is expressed throughout the world with remarkable uniformity," and that the mistaken belief that emotions differ cross-culturally comes mainly from differences in vocabulary and from opinions either naively or deliberately at variance with actual behavior. Just as valuable is the revelation that the emotions are not nonadaptive baggage stowed in the basal ganglia and limbic system (MacLean's Reptilian Brain and Primitive Mammalian Brain) but instead, as Pinker shows, "are adaptations, well-engineered software modules that work in harmony with the intellect and are indispensable to the functioning of the human mind." (p. 370) The topmost goals of human beings, in relation to which subgoals, subsubgoals, etc.
are the means, have been wired in through natural selection and, Pinker suggests, include not just the "Four Fs" ("feeding, fighting, fleeing, and sexual behavior") but also, more broadly, "understanding the environment and securing the cooperation of others," each emotion serving to mobilize "the mind and body to meet one of the challenges of living and reproducing in the cognitive niche," both those posed by physical things and those posed by people. (pp. 373, 374) The reason we need emotions to do this, he says, is that we cannot pursue all our goals at once, but instead must selectively commit ourselves "to one goal at a time, and the goals have to be matched with the best moments for achieving them." (p. 373) Pinker thus sees the mechanism that sets the brain's highest-level goals at any given moment as being not, as some might expect, the will, but instead the emotions:

"Once triggered by a propitious moment, an emotion triggers the cascade of subgoals and sub-subgoals that we call thinking and acting. Because the goals and means are woven into a multiply nested control structure of subgoals within subgoals within subgoals, no sharp line divides thinking from feeling, nor does thinking inevitably precede feeling or vice versa (notwithstanding the century of debate within psychology over which comes first)." (pp. 373-4)

The emotions certainly are motivating, and it is difficult at times to analytically separate them from the thoughts that generate them. But motivation must be distinguished from self-regulation, which is the essence of the will. As Pinker explains later, the alleged reason-emotion dichotomy often refers to the fact that people sometimes are tempted to sacrifice long-term interests for short-term gratification. This problem of self-control or "weakness of the will" is actually rooted, Pinker says, in the "modularity of the mind": "When the spirit is willing but the flesh is weak, such as in pondering a diet-busting dessert, we can feel two very different kinds of motives fighting within us, one responding to sights and smells, the other to doctor's advice." (p. 396) As Pinker explains it, "self-control is unmistakably a tactical battle between parts of the mind." We have many goals (e.g., food, sex, safety), which "requires a division of labor among mental agents with different priorities and kinds of expertise." These agents are all committed to the interests of the whole person over a lifetime, but in order to balance the person's needs and goals those agents also have to "outwit one another with devious tactics." Thus we are able to "defeat our self-defeating behavior," as Pinker puts it (p. 396), by acting through those mental agents with the longest view--agents that voluntarily sacrifice the body's freedom of choice at other times: "The self that wants a trim body outwits the self that wants dessert by throwing out the brownies at the opportune moment when it is in control." (pp. 419-20) But how does this module or agent with the longest view get control if its motivating desires are weaker than those of the brownie-seeking module? More "devious tactics," such as giving one's brownie-seeking self "permission" to eat the brownie, along with "permission" not to? Or instead perhaps the psychic equivalent of arm-wrestling with one's brownie-seeking self?

This is one of the weaker parts of Pinker's discussion, for it fails to provide for a master module for the "we," the "whole person" whose interests the lesser modules have been genetically engineered to look out for in a dynamically balanced way, the "whole person" who acts voluntarily, through one mental module or another, to deny pleasure to the body in preference to future well-being, or vice versa. Instead of a master self-regulator, the self/will, we seem to be left with a Dennett-esque congeries of clashing, warring self-regulators, reduced to using coercion and deceit on one another. The closest Pinker comes anywhere in the book to providing an explanation for even our experience of a self or will is his notion of an "executive process" or "set of master decision rules" comprising "a computational demon or agent or good-kind-of-homunculus, sitting at the top of the chain of command" and "charged with giving the reins or the floor to one of the agents at a time...another set of if-then rules or a neural network that shunts control to the loudest, fastest, or strongest agent one level down." (pp. 143-4) Unfortunately, he seems to prefer the model of the "society of the mind" in explaining the emotions. Perhaps, as Pinker says is the case for society in the next chapter, some amount of this conflict will always be present in the "society of the mind," but that doesn't make it morally right, and it doesn't mean we shouldn't try to reduce it. But how?
Pinker does not pursue this, but his analogy between mind and society, expressed in the section "Society of Feelings," suggests that we should find ways for our long-term and short-term modules to cooperate with and be generous to one another in achieving what each is after: e.g., delicious, low-fat brownie recipes, along with some combination of suspending or relaxing one's diet during holidays (retreat), not beating up on oneself for eating too much (conciliation), and accepting the fact that some weight gain is an inevitable part of the aging process (live and let live). But how is this cooperation to be implemented: anarchistically, by free-floating negotiation between competing modules--or governed from above by a mediating master module (the self/will)? As noted, Pinker does not address this point, nor do his other discussions of the free will issue help much.

Chapter 7, "Family Values," focuses on the psychology of social relations, which Pinker sees as being largely about inborn motives that put us into conflict with one another. Contrary to several decades of conventional wisdom and romantic wishful thinking, epitomized by Margaret Mead's "spectacularly wrong" portrayal of Samoa as a paradise of idyllic social relationships, conflicts over power, wealth, and sex are traits universal to all human cultures. Yet, as Pinker points out, this does not make exploitation and violence morally correct, nor does it mean that the existing level of them is necessary or the best we can hope for. "People in all societies not only perpetrate violence but deplore it. And people everywhere take steps to reduce violent conflict, such as sanctions, redress, censure, mediation, ostracism, and law." (pp. 428-9) Cooperation and generosity, which also exist in all human cultures, do not "come free with living in groups" but instead, like stereoscopic vision, are "difficult engineering problems," which human beings solved through natural selection, because "even in the harshest competition, an intelligent organism must be a strategist, assessing whether its goals might best be served by retreat, conciliation, or living and letting live." (p. 428) The bulk of this chapter is devoted to a detailed exploration of "the distinct kinds of thoughts and feelings [people should have] about kin and non-kin, and about parents, children, siblings, dates, spouses, acquaintances, friends, rivals, allies, and enemies." (p. 429) Especially helpful are Pinker's asides about feminist theory, in which he explains how evolutionary psychology challenges not the feminist goals of ending sexual discrimination and exploitation, but those feminist arguments that rest on faulty biological, psychological, and ethical premises.

As a part-time aesthetician and music theorist, this reviewer would be remiss not to say a few words about Pinker's application of his thesis to the area of art, which occupies a major portion of Chapter 8, "The Meaning of Life." Why do the arts, humor, religion, and philosophy exist? They seem trivial, futile, biologically frivolous, Pinker says; yet we often experience them as the most noble, exalted, rewarding things our minds do. What computational, evolutionary function, if any, do they serve? Basically, Pinker offers a split verdict: some aspects of the arts do perform a biologically adaptive function, but most are non-adaptive by-products. The visual arts and, he suspects, music are sensory "cheesecake...exquisite confection[s] crafted to tickle the sensitive spots of...our mental faculties." (p. 534) Pleasure-giving "patterns of sounds, sights, smells, tastes, and feels" given off by fitness-promoting environments are purified and concentrated so that the brain can stimulate itself with "intense artificial doses of the sights and sounds and smells that ordinarily are given off by healthful environments." (pp. 524-5) As a 25-year veteran parent/consumer of the Montessori method of education, however, this reviewer thinks it is clear that visual art is not just sensory cheesecake but also a means of sensory conditioning or training, as the artist shares her view of, for instance, "Here's how to see (or think of) apples." The very "purifying" and "concentrating" of pleasure-giving patterns Pinker refers to has a didactic or consciousness-molding function--much as the didactic materials of the Montessori method help children form sharper mental images and categories than they otherwise would from their unguided everyday experience.

Pinker also shares much of what is now known about the basic design features of music and how it functions as "auditory cheesecake," but he sees no real adaptive use for it. Music shows "the clearest signs of not being" an adaptation, but instead a "pure pleasure technology"; it cannot convey a plot, Pinker says, and "communicates nothing but formless emotion." (pp. 538, 528-9) This is supposed to decisively differentiate music--even dramatic music--from literature, which "not only delights but instructs" and is thus presumably not merely a technology, but an evolved adaptation. (p. 541) Pinker describes fiction's function thus: "the author places a fictitious character in a hypothetical situation in an otherwise real world where ordinary facts and laws hold, and allows the reader to explore the consequences...The protagonist is given a goal and we watch as he or she pursues it in the face of obstacles. It is no coincidence that the standard definition of plot is identical to the definition of intelligence...suggested in Chapter 2. Characters in a fictitious world do exactly what our intelligence allows us to do in the real world. We watch what happens to them and mentally take notes on the outcomes of the strategies and tactics they use in pursuing their goals...", which are predominantly survival and reproduction. The cognitive, biologically adaptive role of fiction, then, is to "supply us with a mental catalogue of the fatal conundrums we might face some day and the outcomes of strategies we could deploy in them." (p. 543, emphasis added) As this reviewer has argued elsewhere ("Thoughts on Musical Characterization and Plot: the Symbolic and Emotional Power of Dramatic Music," Art Ideas, 5/1, 1998, pp.
7-9), much the same reasoning and facts apply to the case of musical plot and musical motion itself, often held to be just metaphors if not denied outright, but actually the respective products of the patterns of expectation built on the tonal system of Western music of the past several centuries and of the wired-in physiological/perceptual response to the ways musical tones are organized in relation to one another.

The Rosetta Stone that once and for all reveals the fundamental similarity between dramatic music and literature is unearthed by Pinker--ironically, not in his final chapter's discussion of the arts, but 200 pages and three chapters earlier, during his account of a film made by social psychologists Heider and Simmel. The plot of their movie consists of the striving of a protagonist to achieve a goal, the interference of an antagonist, and the final success of the protagonist with the aid of a helper. The "stars" of this movie are three dots (!), which Pinker says it is impossible not to see as "trying to get up [a] hill...hindering [the first dot]...and helping it reach its goal" (p. 322). The point is that people, even toddlers, "interpret certain animate agents [which] propel themselves, usually in service of a goal" (p. 322). The behavior of musical tones in dramatic music is completely analogous to that of these dots and, this reviewer submits, is naturally, unavoidably experienced in the same way. Goal-directedness, plot, is an inescapable aspect of both the melodic and the harmonic elements of dramatic music. The "strategies and tactics" of musical tones are, like those of the dots, much more concrete and specific than those of (most) literary characters, but the kaleidoscopic variety of melodic and motivic development in Western music offers what can only be regarded as a vast catalogue of opportunities to perceptually experience goal-seeking. Surely this is adaptive. Surely it is a clear indication that music's alleged "purely emotive" nature and its status as "the language of the emotions" are soon to be replaced by the acknowledgement that it is merely "a" language of the emotions, operating by the same general kinds of imagery and syntax as literature and the theater.

It is rather surprising to hear a psychologist say, in the end, that religion and philosophy are "fascinating but biologically functionless activities." Isn't it obvious that we need religion and/or philosophy? Even if the answers they provide are wrong, we need some kind of plausible answers to the "holistic," orientational questions about life. That is an unavoidable consequence of the fact that humans require not just perception but concepts for successful living. We see beyond the here and now, and we need guidelines, a mental framework, a model to steer us--for better or worse--through our day-to-day decisions and actions. People without such a view of the world are, in a very important way, maladapted--adrift without a rudder and in danger of crashing. Granted, having such a philosophy of life, correct or not, is no guarantee that one will not end up on the rocks anyway. But the odds are in favor of people who at least try to understand the world they live in and who at least think they know its basic nature. Bewildered, disoriented creatures are, to repeat, maladapted. Philosophy is not a luxury but a necessity--even in the form of its protean ancestor, religion. Philosophy is a quintessentially human adaptation--not for solving specific life problems, but for solving the "holistic" problem of determining what kind of life to live.

Yet, presumably because their fundamental problems have resisted consensus solution for 2500 years, Pinker suspects that philosophy and religion are at least partly "the application of mental tools to problems they were not designed to solve" (p. 525). Perhaps they weren't, but why couldn't these mental tools be "exapted" to solving those problems anyway? Pinker suggests that the problems are not "sufficiently similar to the mundane survival challenges of our ancestors" (p. 525), and that this is why people have pondered the nature of subjective experience, the self, free will, meaning, knowledge, and morality for millennia "but have made no progress in solving them." Our minds are well suited to perceiving objects and motion and to discovering causal laws in parts of the universe, but their very excellence at meeting those challenges may have compromised them for dealing with such "peculiarly holistic" problems as the nature of sentience and will. "Far from being too complicated, they are maddeningly simple--consciousness and choice inhere in a special dimension or coloring that is somehow pasted onto neural events without meshing with their causal machinery. The challenge is not to discover the correct explanation of how that happens, but to imagine a theory that could explain how it happens, a theory that would place the phenomenon as an effect of some cause, any cause." (p. 562)

If this were indeed an inherent limitation of our kind of consciousness, then Pinker would be right: we should rejoice at all that our minds make possible and let go of the perennial, insoluble conundrums. But such a surrender is not warranted by a mere hypothesis born of frustration and impatience--and the facts argue against it as well. The vast increase in research into brain function and conscious processes in just the past few decades has led to numerous discoveries and insights, and the writings of researchers and philosophers such as Roger Sperry, Edward Pols, Antonio Damasio, Jerome Kagan, Fred Dretske, Henry B. Veatch, and Panayot Butchvarov increasingly point the way to a non-dualistic, non-reductionist, naturalistic understanding of the self and the will, and of the other basic issues as well. Pinker's own impressive work is a prime exhibit in support of this more optimistic scenario. Especially considering how long religion's supernaturalist premises and theocratic controls over society have impeded scientific discovery, two and a half millennia is not nearly as long a time as it may seem. (What could we measure it against, anyway?) Moreover, as just noted, it is not true that there has been no progress in solving these problems. It may well be that they require a lot more hard work, and that science and philosophy must pool their efforts in order to solve them. This assumption has gotten us a long way already, and there is no good reason not to continue confidently down that road.
