Can Computers Think?
In this piece, Searle makes a strong argument against computers ever being able to think. The Chinese Room is an excellent metaphor discussed early in many Artificial Intelligence classes, and his analogy to simulating a tornado is fairly persuasive. Unfortunately, his argument suffers from a few serious issues.
First, the Chinese Room argument is fundamentally broken in a very specific way. While the Chinese Room demonstrates the difference between syntax and semantics, it then simply claims without proof: "The Chinese Room is exactly what a computer does; it lacks semantics; therefore a computer cannot have a mind." As a demonstration of semantics versus syntax this is persuasive, but the Chinese Room argument does not demonstrate where semantics come from. We have no idea where semantics come from. Perhaps semantics are simply relationships to a few basic syntactic states like pleasure and pain, which are hardwired into our bodies. If that is the case, then a computer, with its inherent concept of differing voltage levels (which could correspond to pleasure and pain), could quite easily develop semantics if the program it runs models human development closely enough.
My use of the word "model" brings up another faulty argument employed by Searle. He draws an analogy between the expectation of modeling a mind and modeling a fire. Because nobody believes that modeling a fire actually kills people or destroys buildings, how could we possibly believe that modeling a mind actually creates a mind? This argument ignores the necessary distinction between the mental and the physical, though. Even if the mental is a subpart of the physical world, we have seen in our discussions of the Matrix that concepts like stories or paintings in many ways exist even if they are not physically manifested anywhere. Since minds are necessarily a part of the mental world, it is reasonable to think that modeling a mind does in fact create a mind.
But we don't know, and that is the problem with Searle's whole argument: it assumes what it seeks to prove. Searle concludes that there is something inherently biological about the mind, because it is only in the biological that semantics can develop. Why is it only in the biological that semantics can develop? Because semantics don't arise from the carrying out of simple instructions! This is circular reasoning, not a proof that computers will never have minds. It is, however, an excellent demonstration of why we need more information before we can know what the human mind is (if we ever can).