Views on Computationalism: Clark vs. Searle
Computationalism is the view that computation, an abstract formal process lacking semantics and real-world interaction, offers an explanatory basis for human comprehension. The main purpose of this paper is to discuss and compare different views regarding computationalism and the arguments associated with them. The two arguments I find strongest are proposed by Andy Clark, in “Mindware: Meat Machines”, and John Searle, in “Minds, Brains, and Programs.”
Andy Clark argues strongly in “Mindware: Meat Machines” that computers have the potential to be intelligent beings. Clark supports this claim by comparing how humans and machines both use arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
p1. The brain is constructed like a computer, since both contain parts which enable them to function.
p2. The brain, like a computer, uses symbols to make calculations and perform functions.
p3. The brain contains mindware much as a computer contains software.
c. Therefore, computers are capable of being intelligent beings.
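Premise p2, that calculation is symbol manipulation, can be illustrated with a minimal sketch of my own (not from Clark): addition over unary numerals performed by rewriting strings of marks, with no appeal to what the marks mean.

```python
# A sketch of p2: "calculation" as pure symbol manipulation.
# Numbers are represented as unary numerals, e.g. '|||' for 3.

def to_unary(n: int) -> str:
    """Encode an integer as a string of tally marks."""
    return "|" * n

def from_unary(numeral: str) -> int:
    """Decode a string of tally marks back to an integer."""
    return len(numeral)

def add(numeral_a: str, numeral_b: str) -> str:
    """Add two unary numerals by concatenating their symbols."""
    return numeral_a + numeral_b

result = add(to_unary(2), to_unary(3))
print(from_unary(result))  # 5: the arithmetic is just symbol shuffling
```

The point of the sketch is that nothing in `add` knows it is doing arithmetic; whether the same is true of the brain is exactly what Clark's premise asserts and what the objections below dispute.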
I find, however, that Clark’s conclusion is false, and that the following considerations provide a convincing argument against the premises leading to it, starting with premise one: “the brain is constructed like a computer, since both contain parts which enable them to function.” This statement is plausible yet questionable. Yes, the brain contains tissue, veins, nerves, and so on, which enable it to function, just as a computer contains wires, chips, memory, and so on, which it needs to function. However, can the two be compared when humans devised these parts, and the computer itself, so that it could function? If both “machines”, as Clark believes, were constructed by the same being, this comparison might be more credible. Clark might argue that humans were made just as computers were made, so it could be appropriate to categorize them together. I feel that this response fails because it is uncertain where exactly humans were made and how, unless one relies on faith, whereas computers are constructed by humans in warehouses or factories.
My second argument against Clark’s claims applies to premise two: “the brain, like a computer, uses symbols to make calculations and perform functions.” Before I state what I find wrong with this claim, I should explain the example Clark uses to support this premise, which is drawn from the work of Jerry Fodor:
... in the 21st century, and it might already dominate human life. Jastrow predicted that the computer would be part of human society in the future, and Levy’s real-life examples matched Jastrow’s prediction. The computer intelligence Jastrow described was about imitating the human brain and its reasoning mechanisms. According to Levy, however, computer intelligence nowadays is about developing AI’s own reasoning patterns and handling complicated tasks from data sets and algorithms, which is nothing like a human. From Levy’s view of today’s AI technology, Jastrow’s prediction about AI evolution is not going to happen. Since computer intelligence does not aim to recreate a human brain, the whole idea of computers substituting for humans does not hold. Levy also said it is pointless to fear that AI may control humans, as people in today’s society cannot live without computer intelligence.
The article I chose to review from the website http://faculty.washington.edu/chudler/nuerok.html was “A Computer in Your Head?” by Eric Chudler, Ph.D. This article was originally published in ODYSSEY magazine, 10:6-7, March 2001, by Cobblestone Publishing Co. I chose this article because I have always been interested in how similar the brain is to a computer. It also helps that I am currently taking a Computer Information Systems class, and I find that connecting my classes to one another helps me understand the material more accurately. The article had a lot of interesting ideas but did not go very deep into the capabilities of the brain in comparison to a computer. There were many interesting facts throughout the article, and the comparison is fairly easy to follow.
It is easy for Searle to respond to this claim, since there is no evidence he needs to refute. He even says that he is "embarrassed" to respond to the idea that a whole system, apart from the human brain, is capable of understanding. He asks the key question, which Lycan never answers: "Where is the understanding in this system?" Although Lycan tries to refute Searle's views, his arguments are not backed with proof. Lycan responds that Searle is looking only at the "fine details" of the system and not at the system as a whole. While it is possible that Searle is not looking at the system as a whole, this still does not explain or prove in any way where the thinking in the system is.
What role will computers play in the future? What happens when artificial intelligence reaches the point of actually allowing machines to give birth to original thoughts, or suppose artificial intelligence became identical or superior to human intelligence? While attempting to answer these thought-provoking questions, deeper questions arise that are more pertinent to our lives, such as what defines being human, or, as Morpheus says, “What is…real?” The Matrix, as well as the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, attempts to answer these questions through different matrices. These matrices are implemented in stories to provoke thought and ask the question: what if?
Searle's argument delineates what he believes to be the invalidity of the computational paradigm's and artificial intelligence's (AI) view of the human mind. He first distinguishes between strong and weak AI. Searle finds weak AI a perfectly acceptable line of investigation, in that it uses the computer as a powerful tool for studying the mind. It does not observe or formulate any contentions as to the operation of the mind, but serves as another psychological, investigative mechanism. In contrast, strong AI claims that a computer can be created such that it actually is a mind. We must first describe what exactly this entails. In order to be a mind, the computer must be able not only to understand, but to have cognitive states. Moreover, the programs by which the computer operates are the focus of the computational paradigm, and these are taken to explain the mental states. Searle's argument is against the claims of Schank and other computationalists, citing programs such as SHRDLU and ELIZA, that their computer programs can (1) be ascribe...
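The kind of program at issue can be suggested by a toy ELIZA-style responder (an illustrative sketch of my own, not Weizenbaum's actual program or Schank's story-understanding systems): keyword rules transform the input text into canned replies, producing an appearance of understanding from pure pattern matching.

```python
import re

# Toy ELIZA-style rules: a regex paired with a reply template.
# The "{0}" slot is filled with whatever the pattern captured.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."

print(respond("I am unhappy"))  # Why do you say you are unhappy?
```

Nothing in the program represents what "unhappy" means; this is precisely the gap between performance and understanding that Searle's argument exploits.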
This paper purports to re-examine the Lucas-Penrose argument against artificial intelligence in the light of complexity theory. Arguments against strong AI based on philosophical consequences derived from an interpretation of Gödel's proof have been around for many years, since their initial formulation by Lucas (1961) and their recent revival by Penrose (1989, 1994). For one thing, Penrose is right in maintaining that mental activity cannot be modeled as a Turing Machine. However, such a view does not have to follow from the uncomputable nature of some human cognitive capabilities, such as mathematical intuition. In what follows I intend to show that even if mathematical intuition were mechanizable (as part of a conception of mental activity understood as the realization of an algorithm), the Turing Machine model of the human mind becomes self-refuting.
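For readers unfamiliar with the formalism at stake, here is a minimal Turing machine simulator (my own illustration, not Penrose's formal apparatus): a finite transition table reads and writes symbols on a tape, which is the model of mechanical computation the Lucas-Penrose argument claims the mind exceeds.

```python
# Minimal Turing machine sketch. Transitions map
# (state, symbol read) -> (next state, symbol to write, head move L/R).

def run_tm(tape, transitions, state="start", halt="halt", max_steps=1000):
    """Run the machine on the input tape and return the final tape."""
    cells = dict(enumerate(tape))  # sparse tape, '_' is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("0110", flip))  # prints 1001_
```

Everything such a machine does is fixed by its finite table; the Gödelian argument turns on whether human mathematical insight can be captured by any such table.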
Computers are machines that take in syntactic information only and then function based on a program made from syntactic information. They cannot change the function of that program unless formally instructed to through more information. That is inherently different from a human mind, in that a computer never takes semantic information into account in its programming. Searle’s formal argument thus runs: brains cause minds; semantics cannot be derived from syntax alone; computers are defined by a formal structure, in other words, a syntactic structure; and, finally, minds have semantic content. The argument then concludes that the way the mind functions in the brain cannot be likened to running a program on a computer, and programs themselves are insufficient to give a system thought. (Searle, p. 682) In conclusion, a computer cannot think, and the view of strong AI is false. Further evidence for this argument is provided in Searle’s Chinese Room thought experiment. In the Chinese Room, I, who do not know Chinese, am locked in a room that has several baskets filled with Chinese symbols. Also in that room is a rulebook that specifies various manipulations of the symbols purely on the basis of their syntax, not their semantics. For example, a rule might say to move the squiggly
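The purely syntactic rule-following Searle describes can be sketched as a lookup table (the symbol pairings below are my hypothetical examples, not Searle's actual rules): the output is selected by matching the input's shape alone, and the operator never needs to know what any symbol means.

```python
# A sketch of the Chinese Room rulebook: shapes in, shapes out.
# The operator matches the incoming string purely by its form; the
# fact that these happen to be sensible Chinese exchanges is invisible
# from inside the room.
RULEBOOK = {
    "你好吗": "我很好",      # matched as shapes, never as a question
    "你是谁": "我是王",
}

def manipulate(symbols: str) -> str:
    """Follow the rulebook: pure syntax in, pure syntax out."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "say it again"

print(manipulate("你好吗"))  # 我很好
```

To an outside observer the room answers Chinese questions correctly, yet, on Searle's view, nothing in it understands Chinese, which is exactly the wedge between syntax and semantics the argument needs.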
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain why the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to each. Lastly, I will give my opinion on the Turing test and whether it is a good way to answer whether a machine can think.
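The structure of the test can be shown in a bare-bones sketch of the imitation game (my own schematic, not Turing's full 1950 protocol): the interrogator questions a hidden respondent and guesses whether it is the machine, and the machine "passes" to the extent the guesses are no better than chance.

```python
import random

# Stand-in respondents. Here their replies are deliberately identical,
# so no interrogation strategy can tell them apart.
def machine_reply(question: str) -> str:
    return "That is a hard question; let me think."

def human_reply(question: str) -> str:
    return "That is a hard question; let me think."

def play(interrogator_guess, rounds=100, seed=0):
    """Run repeated rounds; return the fraction of correct guesses.

    interrogator_guess(reply) -> True if the guess is "machine".
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        hidden_is_machine = rng.choice([True, False])
        reply = machine_reply("?") if hidden_is_machine else human_reply("?")
        if interrogator_guess(reply) == hidden_is_machine:
            correct += 1
    return correct / rounds

print(play(lambda reply: True))  # fraction correct when always guessing "machine"
```

Because the replies are indistinguishable, any fixed guessing strategy scores at chance level, which is Turing's operational criterion for the machine having passed.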
Cognitive processes are the unseen systems our minds use to complete tasks such as solving problems, recognising an object, or learning a language. These unseen mental processes take place in the brain, a complex piece of equipment often compared to a computer. When the internal workings of a computer are exposed, all that we see are microchips, circuit boards, hard drives and other assorted pieces, which all work and ...
Lycan, W. G. (1980). Reply to "Minds, brains, and programs." Behavioral and Brain Sciences, 3, 431.
Speculations on the origin of the mind have ranged from ghosts to society. Each new theory brings about more speculation and disagreement than the last. Where the mind resides, where it came from, and whether the brain has any involvement with the concept are common questions that fuel theoretical paradigms. Those questions are also the foundation of the debate about the role of experience versus the existence of innate capacities. Steven Pinker theorizes the mind as a computing system created by the brain to fill the gap between innate capacities and missing capacities, using common sense and learned critical-thinking skills.
The position that computers are intelligent is supported by three points: refusing to say that computers are intelligent is prejudice against computers; being intelligent does not mean that one must be knowledgeable in all fields, since being intelligent in a single area also displays intelligence; and there is no single qualification for intelligence; intelligence is measure...
Keil, F. C. and Wilson, R. A. (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: The MIT Press.
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in discussions of the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether or not mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains and yet are indeed states of a mind, resulting from various functions in their central processing systems.
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers perform complex assignments such as launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and other qualities that make us human, so the brain must be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example the capacity to learn new things. Even the most advanced computer can never learn as a human does. While we might be able to load new information onto a computer, it can never learn new material by itself. Computers are also limited in what they “learn” by the memory or hard-disk space left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments about what they are “learning” nor disagree with the new material; they must accept into their memory whatever is programmed into them. Besides, everything found in a computer is based on what the human brain has acquired through experience.