Source: Copyright Randall Munroe (CC BY-NC 2.5)
How should we think about thinking? Is even trying to do this akin to trying to open a box with a crowbar locked inside it?
A widely shared Aeon article from earlier this year got very angry and confused about the whole issue, concluding that the whole of cognitive science rests on a very simple error—that error being the belief that “humans are computers”. (1) Needless to say, the author, Robert Epstein, was very stern and sarcastic about the foolishness of the assertion that we are little clockwork toys beeping around mindlessly, and he was at pains to set us all right.
Unfortunately, in his eagerness to correct, Epstein showed his misunderstanding of science in general, cognitive science in particular, and the march of history into the bargain. The straw man he created has gone down, but nobody in science is going to care. Here is why:
Wheels within wheels
I’d like to deal with the history part first, because it’s something that lots of people don’t know but that I am lucky enough to benefit from directly. The usual story about the “humans as computers” metaphor goes like this:
Humans have always compared thinking to their most impressive technology—hydraulics or clockwork, say—and computers are just the latest in a long line. Every other metaphor has failed, so this one will too.
Epstein puts it this way:
“In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning…By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph. Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller.”
Sorry for the long quote—but actually Epstein gives a pretty good survey of the ways that humans have tried to use metaphors to explain human cognition. And it's a characterization that even Epstein's opponents, such as Sam Harris and David Krakauer, agree with. (2) However, this history of humans as "epistemological narcissists" who project their latest technology onto themselves has one rather glaring omission. Which brings me back to the personal remark at the start. The person missing from this story is the person whose home I can see if I lean out of my office window dangerously far, whose name adorns the lecture halls I teach in and the library I study in. His work, and that of his equally erudite wife, is the major reason for the existence of the machine on which I write this and the reason you can read it. His name is George Boole, and the insights he had—in attempting to analyze all cognition—fully one hundred years before the invention of the physical computer, are the reason computers exist in the first place. (3)
Source: copyright UCC (with permission)
Coming to the Boole
There isn’t the space to go into detail here, but it is Boolean algebra—a general way of analyzing the grammar of all possible relations between ideas—that enabled both the cognitive revolution and the information revolution. You don’t need to read Boole’s Laws of Thought to see why this matters (although, please do). The fact that you are reading this on a computer, which relies for its very existence on the accuracy of his analysis, is the actual pudding whose juicy proof you are eating. Or rather, reading.
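To make this less abstract, here is a minimal sketch (in modern programming notation, not Boole's original symbolism) of the point: the laws Boole derived as laws of *thought* are exactly the operations every digital circuit and every programming language executes today, and they can be checked exhaustively in a few lines:

```python
from itertools import product

# Boole's "fundamental law of thought": x·x = x (idempotence) --
# selecting the class of, say, "sheep" from the class of "sheep"
# changes nothing. De Morgan's laws, which Boole's algebra also
# captures, relate NOT to AND and OR.
for x, y in product([False, True], repeat=2):
    assert (x and x) == x                           # x·x = x
    assert (not (x and y)) == ((not x) or (not y))  # De Morgan I
    assert (not (x or y)) == ((not x) and (not y))  # De Morgan II

print("Boole's identities hold for every combination of truth values.")
```

Every logic gate in the machine you are reading this on is a physical realization of exactly these identities.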
Thus, it is simply factually inaccurate to claim that modern cognitive science is drawing on the most impressive current technology to explain thought. In terms of historical progression, it was Boole’s attempt to fully analyze thought that led to the creation, long after his death, of the technology. It was what made the technology possible in the first place—by giving cognition a functional analysis. Functionalism is the thread that runs through all science. It’s the insight that essentialism—belief in magic properties—is a blind alley. We know better. Functionalism is the stance that what something is, is what it does. Functionalism about thought—that minds are what brains do—came a hundred years before the technology that was the triumphant vindication of that insight.
[And the path from Boole via Shannon, Turing, and von Neumann to the modern understanding of information and order is a deep and interesting one marrying together information theory and physics. I simply don't have time or space for it here]
Thinking is, as thinking does
So much for history. But it segues neatly into the second point that Epstein fails to appreciate. Cognitive scientists (unless they are very confused) do not think that human minds are computers. Rather, they think that computers—the physical objects on your desktop, say—are just one way to make functions real. Functions are mathematical operations, but we shouldn’t get too hung up on numbers and equations here. You can turn an equation into a physical object yourself (and back again)—you did it in high school and called it “making a graph”—and there are many ways to see that the same function can be realized in different ways.
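Here is a toy illustration of that multiple realizability (my own example, not Epstein's): the same abstract function, f(x) = 2x + 1, realized three different ways—as an arithmetic formula, as a lookup table (the "graph" you drew in school, stored point by point), and as raw Boolean bit operations. What the function *is* is the input–output mapping; the mechanisms are interchangeable:

```python
def f_arithmetic(x):
    return 2 * x + 1            # realization 1: an arithmetic formula

TABLE = {x: 2 * x + 1 for x in range(100)}
def f_lookup(x):
    return TABLE[x]             # realization 2: a stored lookup table

def f_bitwise(x):
    return (x << 1) | 1         # realization 3: pure shift-and-OR bit logic

# Three different mechanisms, one and the same function:
assert all(f_arithmetic(x) == f_lookup(x) == f_bitwise(x) for x in range(100))
print("All three realizations compute the same function.")
```

Asking which of the three is "really" f is exactly the kind of essentialist question functionalism dissolves.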
Machines in general are ways to make abstract functions into physical things. And that’s what cognitive scientists study: functions. E.g., “How do we turn electromagnetic inputs into perceptions?” or “How do our past experiences function to make us wary of similar present dangers?” Epstein's frustration that cognitive scientists keep on thinking this way is simply misplaced. Hilariously, he offers what he thinks is an alternative to functional thinking in explaining how a fielder catches a ball:
“The IP [Information Processing] perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyze an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”
Now this raises an interesting challenge. Namely: build something that performs this "simple task". From scratch. No helping yourself to pre-existing (functional) systems that have already solved the problems of alignment and integrated movement that you so cavalierly wave your hand at. I'll watch and chuckle. Epstein has made the classic mistake (especially egregious for a psychologist) of assuming that because a task appears simple to conscious access, there is no wealth of highly complex unconscious processing going on beneath the surface. The brilliant Hans Moravec gave his name to this general error: Moravec’s paradox. (4)
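In fact, the "computation-free" heuristic is itself a computation. Below is a toy one-dimensional sketch (my own parameters, and closer to Seville Chapman's earlier optical-acceleration account than to McBeath's two-dimensional version): the fielder predicts nothing about where the ball will land, and simply runs so that the tangent of the ball's elevation angle keeps rising at the rate it started with. That sense–compare–adjust loop is a function being evaluated in real time, and it delivers the fielder to the landing point anyway. (For brevity, the sketch converts the optical rule into the equivalent required position using the ball's simulated coordinates; a real fielder senses only the angle.)

```python
# Toy projectile-and-fielder simulation; numbers are made up for illustration.
g, vx, vy = 9.8, 10.0, 20.0            # gravity, ball launch velocities (m/s)
dt, max_speed = 0.01, 6.0              # timestep (s), fielder's top speed
landing_x = vx * (2 * vy / g)          # where the ball actually comes down

fielder, t, rate = 30.0, 0.0, None     # fielder starts ~11 m from the spot
while True:
    t += dt
    bx, by = vx * t, vy * t - 0.5 * g * t * t    # ball position
    if by <= 0:
        break                          # ball has landed
    if rate is None:                   # first glimpse fixes the target rate
        rate = (by / (fielder - bx)) / t
        continue
    # the position from which tan(elevation) would equal rate * t right now:
    target = bx + by / (rate * t)
    step = max(-max_speed * dt, min(max_speed * dt, target - fielder))
    fielder += step                    # run toward it, speed-limited

print(f"fielder finishes at {fielder:.2f} m; ball lands at {landing_x:.2f} m")
```

No trajectory model, no landing-point estimate—yet the feedback loop converges on the catch. "Simple" here means only that the computation is hidden from introspection.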
Moravec asked the question: Why is it that it takes the smartest humans to do things like fly planes, diagnose disease, and play chess, when we can make fairly stupid computers that beat all but the best humans at these things with ease? The other side of this coin is that tasks we (wrongly) thought would be computationally easy to program—like walking up stairs or recognizing faces—turned out to be horribly difficult to get computers to do. Why was this? We were making the same mistake as Epstein: forgetting that evolutionarily novel tasks (like chess) have their computational architecture laid bare, while evolutionarily ancient tasks hide millions of years of accumulated design. People who think a task is easy simply haven’t given serious thought to the millions of years that went into making it so. That’s why we need smart people to do things like play chess: because there isn’t that much (in computational terms) to know (and the humans with the biggest brains know it better and faster). As the brilliant roboticist Rodney Brooks pointed out, “Elephants don’t play chess”. Learning to be an elephant took millions of years prior to that particular elephant’s existence. Learning chess takes a few years. (5)
The alternative to omniscience isn't despair
Epstein gets more and more frustrated with the benighted community of cognitive scientists as he goes on.
“To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.”
Or—you could just listen to what that brain is saying? Through its attached mouth, for example? If someone were to object that this response is a trick—that you don’t know every single thing that that particular brain is doing at any particular moment—the appropriate response is: “So what?” I don’t know what every single one of the 10^26 sub-atomic particles in my cup of coffee is doing on an individual basis either. But I know what they are doing in the aggregate as I pick up the cup. This is called “temperature”. It’s a measure of the mean kinetic energy of the molecules in the liquid (i.e., the average amount of whizzing about they are doing—and good luck trying to track that on an individual basis). I don’t need to know everything in order to know anything.
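The coffee-cup point can be made concrete in a few lines (a toy distribution of my own choosing, not a real Maxwell–Boltzmann): no individual molecule tells you anything stable, but the aggregate is both knowable and reproducible.

```python
import random

random.seed(42)
m = 1.0                                   # toy molecular mass

def mean_kinetic_energy(n=1_000_000):
    # speeds drawn from a toy Gaussian distribution, purely for illustration
    return sum(0.5 * m * random.gauss(0, 1) ** 2 for _ in range(n)) / n

# Two independent "cups" of a million molecules agree on their mean
# kinetic energy to within a fraction of a percent, even though any
# single molecule's energy is wildly unpredictable.
cup_a, cup_b = mean_kinetic_energy(), mean_kinetic_energy()
assert abs(cup_a - cup_b) / cup_a < 0.01
print(f"cup A: {cup_a:.4f}, cup B: {cup_b:.4f}")
```

Knowing the aggregate without knowing the particulars is not a trick; it is how most successful science works, brains included.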
We still have many problems to solve—but they are problems, not mysteries. And there is only one game in town to solve them: Functionalism. The alternative that Epstein offers—essentialism--has gone the way of astrology, alchemy, and homeopathy. And for the exact same reason. Essentialism comes from the pre-science time of humans. It’s magical thinking.
There is a saying that those who think a task is impossible should not get in the way of those achieving it. The irony is that the opponents of cognitive science live in a world where airplanes fly themselves, machines govern investments, and artificial eyes and hands can be spliced into the place of lost ones—wired directly into nervous systems. These things work because functionalism is true. Boole was right all along. Your mind isn't a computer: it's thousands of them. The functionalist account of the human brain isn’t something we are predicting. We are living in the midst of it. (6)