An Alternative Means to Intelligence
Throughout cognitive science, computer science, and psychology there has been an underlying question as to what qualifies as intelligent action. Allen Newell and Herbert A. Simon have proposed that a physical symbol system has the necessary and sufficient means for intelligent action, a view shared by many other notable figures from a variety of disciplines.
What I would like to do in this essay is present an alternative means of attributing intelligent action. I will try to show that there are limitations to the physical symbol system and that something is missing from the theory.
Part 2: Method and Presuppositions
In order to show that the physical symbol system is not the only means for intelligent action, I will give examples of alternative methods and point out where I feel Newell and Simon's theory is missing a piece of the puzzle. First I will state the theory of the physical symbol system. I will then give what I feel are appropriate criticisms of the theory. Finally, I will show that there are alternative means for ascribing intelligent action. I presuppose what is meant by intelligent action; this is the underlying question, and if it is not already understood then I do not believe we should be discussing a means of describing it. I will also presuppose what qualitative laws are and how they are used in science.
Part 3: The Text's Argument
Newell and Simon believe that symbols and physical symbol systems are fundamental in explaining intelligent action. In order to understand what a physical symbol system is one must first understand what symbols are. According to Newell and Simon symbols lie at the root of intel...
... middle of paper ...
...ad to concepts that the digital framework cannot achieve, such as human-like learning and a strong reliance upon its environment.
Part 5: Conclusion
I do not believe that the argument I have given against the physical symbol system is fully complete. What I do claim, however, is that I have shown that there are weaknesses in the theory of physical symbol systems. Overall, I believe that to say anything that displays intelligent action must be a physical symbol system, such as the one described by Newell and Simon, is not fully justified, because of the examples stated above.
Part 6: References
Newell, Allen & Simon, Herbert. "Computer Science as Empirical Inquiry: Symbols and Search." In J. Haugeland (Ed.), Mind Design II (pp. 81-95). Cambridge, Massachusetts: The MIT Press, 1997.
Andy Clark strongly argues for the theory that computers have the potential to be intelligent beings in his work “Mindware: Meat Machines.” To support his claims, Clark draws a comparison between humans and machines as systems that use arrays of symbols to perform functions. The main argument of his work can be interpreted as follows:
The purpose of this paper is to present John Searle’s Chinese room argument, which challenges the notions of the computational paradigm, specifically the ability of intentionality. I will then outline two of the commentaries that followed: the first by Bruce Bridgeman, which is in opposition to Searle and uses the super robot to exemplify his point. Then I will discuss John Eccles’ response, which entails a general agreement with Searle along with a few objections to definitions and comparisons. My own argument will take a minimalist computational approach, delineating understanding and its importance to the concepts of the computational paradigm.
We will discuss the article "Intentional System Theory" by the philosopher Daniel Dennett. The argument we draw from this theory concerns intentionality: Dennett holds that both humans and objects have beliefs and desires, and that from these their behavior can be interpreted. In the article itself, Intentional System Theory is defined as an analysis of the meanings of terms such as ‘believe’, ‘desire’, ‘expect’, ‘decide’, and ‘intend’, the terms of ‘folk psychology’ that we use to interpret, explain, and predict the behavior of other human beings, including ourselves, animals, and some artifacts such as robots and computers (Dennett, 2009).
This paper purports to re-examine the Lucas-Penrose argument against Artificial Intelligence in the light of Complexity Theory. Arguments against strong AI based on philosophical consequences derived from an interpretation of Gödel's proof have been around for many years since their initial formulation by Lucas (1961) and their recent revival by Penrose (1989, 1994). For one thing, Penrose is right in sustaining that mental activity cannot be modeled as a Turing Machine. However, such a view does not have to follow from the uncomputable nature of some human cognitive capabilities such as mathematical intuition. In what follows I intend to show that even if mathematical intuition were mechanizable (as part of a conception of mental activity understood as the realization of an algorithm), the Turing Machine model of the human mind becomes self-refuting.
The Chinese room argument certainly shows a distinction between a human mind and strong AI. However, it seems that the depths of human understanding can also be a weakness in how it compares to strong AI and the way that knowledge and understanding are derived.
This leaves a particularly large hole in identity theory. Given neural dependence and the causal problem, it is almost impractical to endorse any type of dualism; yet multiple realizability makes identity theory suspect as well. Emotional additives, and the fact that epiphenomenalism is self-undermining but not impossible, lead to slight suspicion of physicalism in general. In short, this paper set out to endorse and defend identity theory but has concluded nothing definitively.
Therefore, the human organism, although made of multiple “swarms,” is different from other organisms or programs because of its capacity to draw conclusions and to make illogical and “unnatural” decisions not based on the rudimentary interworkings of brain cells. Thus, although multi-agent distributed parallel processing programs can produce emergent behavior that could possibly be equated to our illogical decisions and creativity, human behavior, although somewhat emergent, stems from a deeper consciousness not generated by the interactions of brain
In order to make sense of the ambiguous and complicated world we live in, we need a way to perceive phenomena. For any given event there could be numerous causes, and instinctively we choose the cause of most significance. These causes are generally ones that represent a humanlike agent. As these agents are not always easy to detect, we often assume there is a humanlike agent behind phenomena regardless of whether we can identify their presence. He notes that Wegner and Mar and Marcae propose we are inclined to see agency even in things such as geometric figures or 'abstract non living
In this paper I will present and evaluate A.M. Turing’s test for machine intelligence and describe how the test works. I will explain how the Turing test is a good way to answer whether machines can think. I will also discuss Objection (4), the Argument from Consciousness, and Objection (6), Lady Lovelace’s Objection, and how Turing responded to both. Lastly, I will give my opinion about the Turing test and whether it is a good way to answer if a machine can think.
This essay will address the question of whether computers can think, possess intelligence, or have mental states. It will proceed from two angles. First, it is required to define what constitutes “thinking”; an investigation into this debate, however, demonstrates that the very definition of thought is contested ground. Second, it is required to reflect on what form artificial intelligence should take, be it a notion of “simulated intelligence,” the weak AI hypothesis, or “actual thinking,” the strong AI hypothesis (Russell, Norvig p. 1020). The first angle informs us of the theoretical pursuit of what it means for something to think, whereas the second seeks to probe how it could be demonstrated that thinking is occurring. As a result we have two fissures: on one hand, a disagreement over what constitutes thinking, and on the other, a question of the methodological approaches to AI. However, this essay will argue that both proponents of the possibility of AI and its detractors are guilty of an anthropomorphic conception of thought. This is the idea that, implicit in the question of whether computers can think, we are really asking whether they can think like us. As a result this debate can be characterised as being concerned with a narrow human understanding of the concept of thought. I will argue that this flaw characterises the various philosophical theories of artificial intelligence.
We all know that computers can help a jumbo jet land safely in the worst of weather, aid astronauts in complex maneuvers in space, guide missiles accurately over vast stretches of land, and assist doctors and physicians in creating images of the interior of the human body. We are lucky and pleased that computers can perform these functions for us. But in doing them, computers show no intelligence; they merely carry out lengthy, complex calculations while serving as our obedient helpers. Yet the question of whether computers can think, whether they are able to show any true intelligence, has been a controversial one from the day humans first realized the full potential of computers. Exactly what intelligence is, how it comes about, and how we test for it have become issues central to computer science and, more specifically, to artificial intelligence. In searching for a domain in which to study these issues, many scientists have selected the field of strategic games. Strategic games require what is generally understood to be a high level of intelligence, and through these games, researchers hope to measure the full potential of computers as thinking machines (Levy & Newborn 1).
In order to see how artificial intelligence plays a role in today’s society, I believe it is important to dispel any misconceptions about what artificial intelligence is. Artificial intelligence has been defined many different ways, but the commonality between all of them is that artificial intelligence is the theory and development of computer systems able to perform tasks that would normally require human intelligence, such as decision making, visual recognition, or speech recognition. However, human intelligence is a very ambiguous term. I believe there are three main attributes an artificial intelligence system has that make it representative of human intelligence (Source 1). The first is problem solving: the ability to look ahead several steps in the decision-making process and to choose the best solution (Source 1). The second is the representation of knowledge (Source 1). While knowledge is usually gained through experience or education, intelligent agents could very well have a different form of knowledge. Access to the internet, the la...
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in dialogues regarding the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are adjudged by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss mental states of systems it is important to distinguish between human brains and those of any natural or artificial organisms said to have central processing systems (i.e. brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of functions and behaviourism, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether or not mental states exist at all in systems other than our own, in this paper I will strive to present arguments that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily result in a form of mentality. This paper will discuss how the states and intentionality of digital computers are different from the states of human brains, and yet are indeed states of a mind resulting from various functions in their central processing systems.
Artificial intelligence: a figment of our imaginations in the past, but a reality of our future. As a kid, movies like Smart House and I, Robot were just cool ideas that I never could have imagined would be real someday. Artificial intelligence has made false realities of the past real. Joi Ito, Neil Harbisson, and the movie I, Robot all present different views from which we can understand artificial intelligence. Through the views of Ito, Harbisson, and I, Robot, we can analyze how artificial intelligence has changed and will change the future through the ideas and conclusions these authors have come to.