One of the "characters" in Stanley Kubrick's 1968 film 2001: A Space Odyssey is a computer called HAL 9000. In addition to its highly developed artificial intelligence, this computer is shown to have certain human emotions. HAL takes great pride in its own abilities, and this pride gives rise to jealousy. At one point in the film, HAL lies about a malfunction on the outside of the spaceship. The computer then proceeds to kill all of the crew members except one, who manages to destroy HAL by disconnecting it. The characterization of HAL raises an interesting question about artificial intelligence: Can a computer lie?
Before this question can be answered, it needs to be rephrased. First, we must consider what enables human beings to lie. In my opinion, human beings lie because they have emotions; more specifically, people are motivated to lie by their desires. People who lie generally do so either because they want to get something from another person or because they want to avoid getting something from another person. The next issue is where desires come from. I believe that desires arise because people possess a particular kind of consciousness, which can be called self-awareness. Before desires can arise, a being must be aware of itself as distinct from other beings, and it must sense that other beings have things it lacks. Emotions, desires, and self-awareness are obviously found in human beings and not in machines. The question to be considered, then, is whether computers with artificial intelligence will ever be able to imitate these types of human behaviors. If we can answer that question, we can determine whether a computer can be capable of lying. Unfortunately, this is not an easy question, and a great deal of controversy currently surrounds it. Many experts feel that computers will eventually be able to simulate the human mind; just as many others adamantly disagree with this point of view.
"A Computer's Capability of Lying." 123HelpMe.com. 26 Feb 2020.
In the 1950s, Allen Newell and Herbert Simon were among the first researchers to develop the theory that computers would eventually be able to imitate the human mind. Newell and Simon compared artificial intelligence to the human mind by creating an "information-processing model." According to this model, a computer and a human mind are similar in that both process information by manipulating symbols. On the basis of this model, Newell and Simon concluded that it would not be long before computers would be able to "think" like humans, not merely in a superficial sense but in exactly the same way. Their hypothesis was that "a physical symbol system has the necessary and sufficient means for general intelligent action."
Further support for this view came from the British mathematician Alan Turing. Turing devised a thought experiment (now known as the "Turing Test") in which a person holds a conversation over a teletype machine without knowing whether a human being or a computer is connected at the other end. A series of questions is asked in an effort to determine which it is. According to this test, "if no amount of questioning or conversation allows you to tell which it is, then you have to concede that a machine can think." Turing used this idea to formulate one of the fundamental principles of artificial intelligence: "If a machine behaves intelligently, we must credit it with intelligence." Another way of saying this is: "A perfect simulation of thinking is thinking." On the surface, this seems like a rather weak conclusion. However, Robert Sokolowski makes an interesting point in his article "Natural and Artificial Intelligence." There, Sokolowski notes that there are actually two different ways of understanding the word "artificial." On the one hand, it can describe something like "artificial flowers," which are made of paper or plastic and are therefore obviously not real flowers. On the other hand, it can describe something like "artificial light," which really is light. Thus, in Sokolowski's words, artificial light "is fabricated as a substitute for natural light, but once fabricated it is what it seems to be." The proponents of AI believe that the word "artificial," as used in "artificial intelligence," can carry this second meaning as well as the first.
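The structure of the imitation game described above can be sketched as a simple interrogation loop. The function name and the four callables (`ask`, `human_reply`, `machine_reply`, `guess`) are invented stand-ins for the participants, not part of Turing's own formulation; this is a minimal illustration of the protocol, not an actual test of intelligence:

```python
import random

def imitation_game(ask, human_reply, machine_reply, guess, rounds=5):
    """Run one session of the imitation game; return True if the
    machine 'passed', i.e. the interrogator misidentified it."""
    # Hide the two respondents behind shuffled labels "A" and "B",
    # standing in for the anonymity of the teletype channel.
    respondents = {"human": human_reply, "machine": machine_reply}
    roles = ["human", "machine"]
    random.shuffle(roles)
    labels = {"A": roles[0], "B": roles[1]}

    transcript = []
    for i in range(rounds):
        question = ask(i)
        # Both respondents answer every question.
        answers = {label: respondents[labels[label]](question) for label in labels}
        transcript.append((question, answers))

    verdict = guess(transcript)          # the interrogator names "A" or "B"
    return labels[verdict] != "machine"  # True if the machine fooled the judge
```

A machine that answers in an obviously mechanical way is identified every time, whichever label conceals it; only a respondent whose answers are indistinguishable from a human's can ever "pass."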
The followers of Newell, Simon, and Turing would agree with the idea that a machine could have both desires and awareness. In this regard, scientists have learned that emotions in human beings are aroused by chemicals in the brain. Although computers work with electricity instead of chemicals, an analogy can easily be drawn between the processing of information in a computer and the processing of emotional "information" in the brain. In his book Man-Made Minds: The Promise of Artificial Intelligence, M. Mitchell Waldrop makes the point that emotions are not simply random events; rather, they serve important functions in the lives of human beings. Based on recent discoveries in psychology, Waldrop claims that emotions serve two major functions. The first is to help people focus their attention on things that are important to them. The second, which is related to the first, is to help people determine which goals and motivations matter to them. According to Waldrop, there is no reason why a computer could not be programmed to carry out these same functions. In fact, Waldrop indicates that computer programs have been developed in recent years that seem to be on the border of expressing rudimentary emotions. Such programs could be used to enable a computer to tell when its operator is sad or angry. In the words of Waldrop, "such a computer could then be programmed to make the appropriate responses - saying comforting words in the first case, or moving the conversation toward a less provocative subject in the second." As Waldrop points out, this behavior would make it seem as if the computer were "empathizing" with the operator.
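Waldrop's hypothetical "empathizing" computer can be illustrated with a toy rule of the kind he describes: detect sadness or anger in the operator's words, then comfort or change the subject. The keyword lists and canned replies below are invented for illustration and are not taken from any actual program; real emotion detection would need to be far subtler:

```python
# Invented keyword cues standing in for genuine emotion detection.
SAD_CUES = {"sad", "lonely", "miserable", "depressed"}
ANGRY_CUES = {"angry", "furious", "hate", "annoyed"}

def respond(operator_message: str) -> str:
    """Pick a reply in the spirit of Waldrop's example: comfort a sad
    operator, or steer an angry one toward a less provocative subject."""
    words = set(operator_message.lower().split())
    if words & SAD_CUES:
        return "I'm sorry to hear that. Is there anything I can do?"
    if words & ANGRY_CUES:
        return "Let's set that aside for now. Shall we look at something else?"
    return "I see. Please go on."
```

Even this trivial sketch shows why such behavior only *seems* like empathy: the program matches surface patterns and emits canned responses, with no awareness of what sadness or anger actually feel like.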
In defense of those who believe that computers might someday be able to imitate human beings, Waldrop claims that people can be easily confused on this issue if they only think of a computer as a cold, empty machine consisting of lights and switches. This perspective changes dramatically when a person imagines the type of robot that might be possible in the future: "A computer equipped with artificial eyes, ears, arms, legs, and skin - even artificial glands." This type of android is often seen in science fiction books, movies and television shows. The issue of whether a computer shares human traits becomes more confusing when it is made to look and act like a human being.
Despite their efforts, the proponents of AI have not yet been able to create a machine that truly simulates the human mind. Creating such a computer has turned out to be much more difficult than the researchers of the 1950s anticipated. Furthermore, since the 1960s there has been a group of scientists who deny that a computer can ever have a human-like consciousness. In fact, the belief that a machine is incapable of thinking can be traced back to the seventeenth century and the theories of the French philosopher Rene Descartes, but the argument gained new life in the 1960s as a rebuttal to the efforts of AI researchers such as Alan Turing. The opponents of AI research feel that there is something unique about human nature that can never be duplicated in a computer. In the words of Hilary Putnam, a professor at Harvard University: "The question that won't go away is how much what we call intelligence presupposes the rest of human nature." M. Mitchell Waldrop agrees that there is something special about human consciousness. According to Waldrop, "the essence of humanity isn't reason, logic, or any of the other things that computers can do; it's intuition, sensuality, and emotion."
This perspective is known as the holistic point of view because its proponents believe that the human brain is not simply a device for information processing. Rather, it is believed that there is something more to the mind - an intuitive or spiritual side. According to this point of view, the differences between computers and humans cannot be fully understood unless the entirety of human experience is taken into consideration. One of the first researchers to advocate this point of view was Hubert Dreyfus, author of a book entitled What Computers Can't Do. Later, Hubert Dreyfus wrote more books and articles on the topic with the assistance of his brother Stuart. The Dreyfus brothers feel that the arguments of the AI researchers are too simplistic. In this regard, they claim: "Too often, computer enthusiasm leads to a simplistic view of human skill and expertise." The AI proponents are accused of having a limited perspective that fails to address such things as human intuition. According to Hubert and Stuart Dreyfus, this failure occurs because intuition is not apparent within the matter of the brain. As such, the Dreyfus brothers reject the "information-processing model" and propose instead "a nonmechanistic model of human skill."
The noted philosopher Mortimer J. Adler agrees that human intelligence is not a material thing. For this reason, Adler likewise agrees that computers cannot truly compete with the marvelous powers of the human mind. Although computers can imitate the mind in many ways, "they cannot do some of the things that almost all human beings can do, especially those flights of fancy, those insights attained without antecedent logical thought, those innovative yet nonrational leaps of the mind." In fact, Adler makes a direct rebuttal against the theoretical viewpoint of Newell and Simon with his claim "that the brain is only a necessary, but not the sufficient, condition of conceptual thought, and that an immaterial intellect as a component of the human mind is required in addition to the brain as a necessary condition."
Many other elements of the human mind remain beyond the reach of computers. For example, computers are incapable of utilizing what Dreyfus and Dreyfus refer to as "everyday know-how." By this, the Dreyfus brothers "do not mean procedural rules but knowing what to do in a vast number of special cases." The Dreyfus brothers also note that computers lack the ability to generalize, as well as the ability to learn from their own experiences. For a machine to be truly intelligent, "it must be able to generalize; that is, given sufficient examples of inputs associated with one particular output, it should associate further inputs of the same type with that same output." Hilary Putnam points out that true human intelligence requires more than the manipulation of codes and symbols. Thus, "to figure out what is the information implicit in the things people say, the machine must simulate understanding a human language." Again, this is something that is currently missing from computer technology, and it may never be achievable. In this regard, Putnam refers to the research of the linguist Noam Chomsky, who discovered that there might be a "template for natural language" within the human mind. This "template," which enables people to learn languages, may be at least partially innate, or "hard-wired-in," to use Putnam's terminology. Even if such a template could be transferred to a computer, the programming involved would be extremely complex and could take years to accomplish. After all, it takes a human child many years to master a language in all its subtlety.
Robert Sokolowski also describes some of the human capacities that computers lack. These include "quotation," or the ability to appreciate another person's point of view. Sokolowski also mentions the inability of computers to make creative distinctions, as well as their incapacity for passionate desires. According to Hubert Dreyfus, there is yet another vital thing found in human beings but missing in computers - a body. In his book What Computers Can't Do, Dreyfus claims that pattern recognition is an important aspect of true artificial intelligence. However, he also claims that this ability "is possible only for embodied beings," because it "requires a certain sort of indeterminate, global anticipation." Although Dreyfus acknowledges the possibility of androids with human-like bodies in the future, he does not think this will ever be the same as having a real human body. The difficulties of trying to make a computer behave like a human being can be seen in a program created by K. M. Colby called "Simulation of a Neurotic Process." This program is supposed to simulate the thinking of a woman suffering from repressed emotions, as well as feelings of anxiety and guilt. However, as Margaret A. Boden notes, the program has several failings, and its results are not as deep or complex as what would be found in a real human being. For this reason, Boden claims that the "neurotic program" is not a true representation of neurotic behavior; rather, "it embodies theories representing clumsy approximations of these psychological phenomena."
Thus, the answer to the question "Can a computer lie?" is clearly no, at least not at this time. Of course, a present-day computer could be programmed to lie; on its own, however, a computer lacks the self-awareness and desire needed to motivate such an act. Yet I have to agree that anything may be possible in the future. Even Mortimer J. Adler, while arguing for the immateriality of intelligence, admits that "the present difference in the degree of structural complexity between the human brain and that of artificial intelligence machines can certainly be overcome in the future." Perhaps AI advocates like Alan Turing, Allen Newell, and Herbert Simon will eventually be proved right. Perhaps future computers will be programmed to imitate human intelligence so well that androids will begin to act like humans in emotional and intuitive ways. If that happens, then, as in the case of "artificial light," it will no longer be possible to distinguish between a human mind and an artificial one. It is even possible that machines may someday learn to duplicate themselves; if so, computers could evolve over time as they adapt to their environment. One of the things they would probably acquire is human-like emotion, because emotions would aid their adaptation, just as they have aided human beings. At that point, it would certainly be possible for a computer to lie in order to gain its "selfish" ends, just as a human does now. The idea of a future computer lying is really not so far-fetched, even if such a computer were less intelligent than an average adult human. After all, children and even pets practice deceit on some level in order to get what they want from a parent or master.
The main objection to the idea that computers might someday become human-like is that it would imply that humans are not as special and unique as they like to think. In the words of Daniel C. Dennett: "There is something about the prospect of an engineering approach to the mind that is deeply repugnant to a certain sort of humanist." Marvin Minsky, an AI enthusiast from the Massachusetts Institute of Technology, put the threat more vividly with his claim that "the brain happens to be a meat machine." However, as Waldrop points out, scientific progress has always meant uncomfortable change for human beings. This can be seen, for example, in the discoveries of Copernicus, Darwin, and Freud, all of which dramatically changed the ways human beings saw themselves and their place in the world. Perhaps, as Waldrop argues, such scientific advances don't have to be taken as a "message of doom." Perhaps, as computers become more intelligent, the more subtle and vital differences that make humans unique will become apparent, and we will gain deeper insights into the human mind and what it really means to be a human being. The scientist Douglas Hofstadter, who claims that the "reductionism" of comparing the human mind to a computer does not bother him, offers another optimistic opinion. In Hofstadter's words: "To me, reductionism does not 'explain away'; rather, it adds mystery." Therefore, a future in which machines are more human-like is not only possible; it also might not be as bad as present-day humans fear.
Adler, Mortimer J. Intellect: Mind Over Matter. New York: Collier Books, 1990.
Boden, Margaret A. Artificial Intelligence and Natural Man. New York: Basic Books, 1977.
Dennett, Daniel C. "When Philosophers Encounter Artificial Intelligence." Daedalus 117 (Winter 1988), 283-295.
Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. Revised Edition. New York: Harper Colophon Books, 1979.
Dreyfus, Hubert L., and Stuart E. Dreyfus. "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint." Daedalus 117 (Winter 1988), 15-43.
Dreyfus, Hubert L., and Stuart E. Dreyfus. Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press, 1986.
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. New York: W. H. Freeman and Company, 1979.
Putnam, Hilary. "Much Ado About Not Very Much." Daedalus 117 (Winter 1988), 269-281.
Sokolowski, Robert. "Natural and Artificial Intelligence." Daedalus 117 (Winter 1988), 45-64.
Waldrop, M. Mitchell. Man-Made Minds: The Promise of Artificial Intelligence. New York: Walker and Company, 1987.