This final consisted of two parts. In the first, we defined some terms from the course; in the second, we chose and answered a question from a list.
The Extended Mind Hypothesis
The extended mind hypothesis is the view that the mind is not confined to the body but extends into the environment. On this view, when one modifies the environment to help solve a problem, one is actually doing cognition outside the body, in the physical world. Proponents of this hypothesis point to the brain’s plasticity, but they fail to realize that the brain’s integration and internal communication are integral to the functioning of the mind.
“Knowledge” as Justified True Belief
Justified true belief is one proposed test for whether something counts as knowledge. On this account, something is knowledge if 1) it is believed, 2) the belief is justified, and 3) the belief is true. For a long time this was considered the best standard of knowledge, but the search for a good definition was renewed when Gettier published an article showing that a belief satisfying any reasonable standard of justified true belief could nevertheless fail to be knowledge, its truth being a matter of luck. This set off a firestorm in the philosophical world, but the entire debate about knowledge is a little ridiculous. I haven’t yet seen an argument against skepticism, in or out of the course, and skepticism pretty much guarantees that we can’t know anything except our own existence.
That said, justified true belief is exactly what an omnipotent being would need for knowledge, since an omnipotent being’s justified belief would be an actual test for the literal state of the world. As with Plato’s forms, human beings may not have actual knowledge but we have beliefs that approximate it.
Cartesian Dualism
Cartesian dualism is the view, pioneered (or at least best expounded) by Descartes, that the mind is some sort of super- or extra-natural process or substance entirely separate from the physical world. Variants of dualism hold that the extra-natural mind controls the body (through mechanisms unknown, probably related to the brain), that the mind exists and grants understanding but has no control, or that the mind and body are kept synchronized by God. This was both the genesis of and the first proposed solution to the mind-body problem.
Cartesian dualism is opposed by identity theory, which holds that the mind is the brain, and functionalism, which hybridizes them by identifying the mind as a set of mental processes which are run in the hardware (or meatware) of the brain.
Functionalism
Functionalism is the view that our minds consist of mental processes analogous to software, which run on the brain but could run on other hardware (such as a digital computer). It was developed when philosophers found both dualism and identity theory insufficient, and it attempts to meld the best, most factually grounded features of each into one theory. Proponents of functionalism tend to believe in strong AI.
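The software/hardware analogy above is sometimes called multiple realizability: a mental state is defined by its causal role (its inputs and outputs), not by what implements it. A toy sketch of the idea, with invented function names of my own, not anything from the course readings:

```python
# Two very different "substrates" realizing one and the same function.
# On a functionalist view, what matters is the input->output role, not
# whether the implementation is arithmetic circuitry or finger-counting.

def add_by_arithmetic(a, b):
    # "hardware" 1: ordinary arithmetic
    return a + b

def add_by_counting(a, b):
    # "hardware" 2: repeated successor steps, like counting on fingers
    total = a
    for _ in range(b):
        total += 1
    return total

# From the outside, the two realizations are indistinguishable:
for a in range(5):
    for b in range(5):
        assert add_by_arithmetic(a, b) == add_by_counting(a, b)
```

On this analogy, asking which implementation is the "real" adder misses the point, which is roughly the functionalist reply to tying the mind to one kind of brain.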
These days, one of the most interesting approaches to the mind-body problem is the philosophical position of Functionalism. According to Functionalism, what we call the “mind” is a different level of description of the activity of the brain; the relationship between mind and brain is akin to the relationship between software and hardware on a computer. And, also as in the computer metaphor, the software (or mind, in this case) is not restricted to running on a single kind of hardware. This latter point represents the promise of artificial intelligence (AI). In our readings, Lycan was the primary proponent of Functionalism and AI and Searle the primary critic (Clark & Keats also describe some background material useful for this discussion). Write an essay in which you discuss the debate between Lycan and Searle over Functionalism and the possibility of AI. What do you feel is the best argument on each side? Where do you come down on this debate, and why?
The debate between Lycan and Searle is an interesting one for a multitude of reasons. First, it is interesting for its obvious implications: artificial intelligence, the implied non-existence of a supernatural soul, etc. Second, it is interesting because Lycan believes so strongly in functionalism that he asserts non-obvious premises which functionalism does not guarantee, such as the replaceability of the human brain by computers. Third, it is interesting because debates over the mind are incredibly susceptible to smuggling in implicit premises and then “proving” them, rather than starting from explicit premises and proving something not already assumed.
Despite Lycan’s overreaching argument, I find myself agreeing with him more. There are a few fundamental flaws in Searle’s position on artificial intelligence, and though none of them disprove alternatives to functionalism, the flaws mean he doesn’t disprove functionalism either.
First, recall Searle’s Chinese Room: Searle (who speaks no Chinese) is put into a room filled with books, with an “in” slot and an “out” slot. People outside the room pass in notes written in Chinese; Searle looks at each note, writes down a response by following instructions in the books, and sends it back out. Imagine that the instructions produce output so convincing that people outside the Chinese Room think they are having a conversation with a person inside (i.e., the Chinese Room and its contents pass the Turing Test). Searle argues that, even if books enabling such a room could be produced, there would still be no understanding of Chinese. Searle, who reads the input notes and produces the output notes, does not understand Chinese; obviously the books cannot understand Chinese; so there is no understanding in the room. Likening this room to an artificial intelligence (in which Searle is the processor and the books are the software), Searle believes he has disproved the possibility of artificial intelligences ever having a mind.
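The room's operation can be sketched as pure symbol manipulation. This is a minimal toy, with a rulebook and messages I invented for illustration (they are real Chinese phrases, but any opaque symbols would do):

```python
# The Chinese Room as a lookup: every step is formal symbol-matching.
# No step in room() requires knowing what any symbol means.

rulebook = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
}

def room(note, rules=rulebook):
    # The operator matches the incoming symbols against the rulebook
    # and copies out the listed response, character for character.
    return rules.get(note, "请再说一遍")  # fallback: "Please say that again"

print(room("你好吗"))  # the room replies appropriately, understanding nothing
```

A real rulebook passing the Turing Test would be vastly larger, but the structure of Searle's point is the same: the operator's procedure never touches meaning.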
It’s a plausible argument, except for this: any explanation of where understanding comes from looks ridiculous. Either understanding emerges from large collections of smaller formal systems (i.e., the books have understanding, or their operator has understanding while he operates them); or understanding happens in some supernatural realm (dualism); or understanding is somehow unique to the brain. Searle holds the third view, but it is not entirely clear why. After all, the brain is, essentially, a computer. It has some interesting properties that digital computers lack, but there is nothing obviously fundamental about the brain that gives it understanding. There is no particular reason why our mind should find sex pleasurable: there are obvious evolutionary reasons why our body is encouraged to have sex, and why our mind might acquiesce, but for some reason we interpret the physical release of endorphins in the brain as pleasure in the mind. What gives them that meaning? There is no obvious mechanism; in fact, the claim that the brain produces understanding is itself just as ridiculous as the alternatives. So any mechanism for producing understanding will appear ridiculous; the mind-body problem exists precisely because of the problems of understanding and free will. Attempting to disprove an argument by saying “but it’s ridiculous” completely misses the point.
Searle does propose one other argument against functionalism: when a computer simulates burning buildings, nobody believes that buildings are actually burning, so why should a computer that simulates a mind be considered to have a mind of its own? Searle actually answers this question himself: minds are collections of processes (as he acknowledges on page 480 while discussing Premise 1), whereas fires are collections of physical events. A simulated fire burns nothing because fire is a physical event; but a computer runs its processes through a series of physical events (mostly the movement of electrons), so anything that just is a process can exist on a computer as genuinely as anywhere else, provided the hardware does everything the software requires of it. This distinction between the mental and the physical was well discussed earlier in the course, as in the problem of the external world.
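The asymmetry between simulated physical events and simulated processes can be made concrete. A rough sketch, with both functions invented for illustration:

```python
# Simulating a *physical event* (fire) produces a description, not the event;
# "simulating" a *process* (sorting) just is performing that process.

def simulated_fire(fuel_kg, ambient_temp_c):
    # A crude combustion model: it returns numbers, but nothing burns.
    # (The 15000 kJ/kg figure is a placeholder, not real chemistry.)
    return {"heat_released_kj": fuel_kg * 15000,
            "peak_temp_c": ambient_temp_c + 800}

def simulated_sort(xs):
    # A "simulation" of sorting that genuinely sorts: for a process,
    # running the simulation and doing the thing are the same act.
    return sorted(xs)

print(simulated_fire(2, 20))      # a description of a fire, not a fire
print(simulated_sort([3, 1, 2]))  # a genuinely sorted list
```

If minds are processes rather than physical events, they fall on the sorting side of this divide, which is the reply to Searle sketched above.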
All this said, I have of course not proven functionalism. But, assuming there is no supernatural, I prefer functionalism and the possibility of true strong AI over alternatives such as identity theory. It feels as though my brain is not my mind, and as though my mind is a collection of processes. Studies have shown that brain damage can harm mental processes, but that these processes can be regained in time. This suggests to me that the brain is the hardware our mind runs on: damaging that hardware can damage the mind’s output, but as the mind becomes aware of the damage, it can work around it by doing things differently. Additionally, if the mind were dependent on the specific human (or at least biological) brain, I do not think it would adapt so well to controlling extra limbs or accommodating additional senses, as it clearly does. Identity theory simply ties intelligence too tightly to humans.
But my belief in functionalism does not mean that Lycan’s vision, of a world in which we simply buy plugins to expand our memory, is correct. I think the properties of the biological brain are important to the integrity of our minds; though these properties can be simulated by a computer they cannot be maintained by trying to spread the mind over multiple kinds of hardware.