Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that - despite pretensions and layers of philosophizing - we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.
The series of James Bond movies constitutes a decades-spanning gallery of human paranoia. Villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata.
It was precisely to counter this wave of unease, even terror - irrational but all-pervasive - that Isaac Asimov, the late science-fiction writer (and scientist), invented the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
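Read as executable logic, the Laws amount to a strict priority ordering over candidate actions. A minimal sketch of that ordering follows; the `Action` fields and the `permitted` predicate are illustrative assumptions of mine, not anything in Asimov's text:

```python
# Hypothetical sketch: the Three Laws as a strict priority filter.
# The Boolean fields below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would executing this injure a human?
    prevents_human_harm: bool  # would NOT executing it let a human come to harm?
    ordered_by_human: bool     # was this commanded by a human?
    endangers_robot: bool      # does it threaten the robot's existence?

def permitted(a: Action) -> bool:
    if a.harms_human:             # First Law, action clause: overrides everything
        return False
    if a.prevents_human_harm:     # First Law, inaction clause: must act,
        return True               # regardless of orders or self-preservation
    if a.ordered_by_human:        # Second Law: obey, subject to the First
        return True
    return not a.endangers_robot  # Third Law: self-preservation comes last
```

Even this toy version exposes the essay's central objection: the Boolean inputs quietly presuppose exactly the full model of the physical and human worlds that the Laws themselves fail to supply.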
Many have noticed the inconsistency and virtual inapplicability of these laws taken together. First, they are not derived from any coherent worldview or background. To be properly implemented, and to avoid potentially dangerous interpretations, the robots in which they are embedded must also be equipped with a reasonably full model of the physical and human spheres of existence. Devoid of such a context, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov's robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots must be. Gödel pointed at one such self-destructive paradox in the ostensibly comprehensive and self-consistent logical system of the "Principia Mathematica". It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.
Some will argue against this and say that robots need not be automata in the classical, Church-Turing, sense - that they could act according to heuristic, probabilistic rules of decision making. There are many other types of functions (non-recursive) that can be incorporated in a robot. True - but then how can one guarantee full predictability of behaviour? How can one be certain that the robots will always and fully implement the three laws?
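The objection can be made concrete. In the sketch below, a heuristic policy complies with the First Law with some (very high) probability rather than deterministically; the policy and all the numbers are invented for illustration. The point is that any stochastic rule can at best bound the chance of a violation, never eliminate it:

```python
import random

# Hypothetical stochastic policy: comply with the First Law with
# probability p_comply instead of by deterministic rule.
def heuristic_comply(p_comply: float, rng: random.Random) -> bool:
    return rng.random() < p_comply

rng = random.Random(0)    # fixed seed, for reproducibility of the demo
trials = 100_000
violations = sum(not heuristic_comply(0.999, rng) for _ in range(trials))
# Expected violations ~ trials * (1 - p_comply) = ~100:
# rare, but provably never guaranteed to be zero.
```

A 99.9%-compliant robot still injures, on average, once in a thousand decisions - which is precisely the kind of guarantee the Three Laws cannot tolerate.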
"The Fourth Law Of Robotics." 123HelpMe.com. 02 Apr 2020
This article will deal with some commonsense, basic problems immediately discernible upon close inspection of the Laws. The next article in this series will analyse the Laws from a few vantage points: philosophy, artificial intelligence and some systems theories.
An immediate question springs to mind: HOW will a robot identify a human being? Surely, in an age of perfect androids constructed of organic materials, no superficial, outer scanning will suffice. Structure and composition will not be sufficient factors of differentiation. There are two possibilities for settling this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test; the other is to somehow "barcode" all the robots by implanting some signalling device inside them. Both present additional difficulties.
In the second case, the robot will never be able positively to identify a human being. It will surely identify robots. This ignores, for discussion's sake, defects in manufacturing or loss of the implanted identification tag (if the robot gets rid of the tag, this presumably falls under the "defect in manufacturing" category). But the robot will be forced to make a binary selection: one type of physical entity will be classified as robots; all the others will be grouped into "non-robots". Will non-robots include monkeys and parrots? Yes, unless the manufacturers equip the robots with a digital, optical or molecular equivalent of the human image in varying positions (standing, sitting, lying down). But this is a cumbersome solution, and not a very effective one: there will always be the odd position which the robot will find hard to locate in its library. A human discus thrower or swimmer may easily be passed over as "non-human" by a robot. So will certain types of amputated invalids.
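The failure mode of such an image library is easy to reproduce. In this toy sketch, silhouettes are reduced to a crude (height ratio, width ratio) pair and matched against stored templates; the templates, features and tolerance are all invented for illustration:

```python
# Hypothetical "human image library": crude (height_ratio, width_ratio)
# silhouette templates for a few canonical poses. All numbers are invented.
TEMPLATES = {
    "standing": (1.00, 0.25),
    "sitting":  (0.55, 0.40),
    "lying":    (0.15, 1.00),
}
TOLERANCE = 0.15   # how far a silhouette may deviate and still match

def looks_human(silhouette):
    def dist(a, b):
        # Chebyshev distance: the worst-matching feature decides
        return max(abs(x - y) for x, y in zip(a, b))
    return any(dist(silhouette, t) <= TOLERANCE for t in TEMPLATES.values())

looks_human((0.98, 0.27))   # near the "standing" template -> True
looks_human((0.60, 0.85))   # mid-throw, limbs extended -> False: "non-human"
```

The second call is exactly the discus thrower of the paragraph above: a perfectly human pose that happens to fall outside every stored template, so the binary classifier quietly files a person under "non-robot, non-human".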
The first solution is even more seriously flawed. It is possible to design a test which the robot will apply to distinguish a robot from a human. But it will have to be non-intrusive and devoid of communication, or with very limited communication. The alternative - a prolonged teletype session behind a curtain, after which the robot issues its verdict that the respondent is a human or a robot - is ridiculous. Moreover, the application of such a test will make the robot human in most of the important respects. A human knows other humans for what they are because he is human. A robot would have to be human to recognize another; it takes one to know one, as the saying (rightly) goes.
Let us assume that by some miraculous way the problem is overcome and robots unfailingly identify humans. The next question pertains to the notion of "injury" (still in the First Law). Is it limited only to physical injury (the disturbance of the physical continuity of human tissues or of the normal functioning of the human body)? Should it encompass the no less serious mental, verbal and social injuries (after all, they are all known to have physical side effects which are, at times, no less severe than direct physical "injuries")? Is an insult an injury? What about being grossly impolite, or psychologically abusing or tormenting someone? Or offending religious sensitivities, or being politically incorrect? The bulk of human actions actually offend a human being, have the potential to do so, or seem to be doing so. Take surgery, driving a car, or investing all your money in the stock exchange: they might end in a coma, an accident, or a crash, respectively. Should a robot refuse to obey human instructions which embody a potential to injure the instruction-givers? Take a mountain climber: should a robot refuse to hand him his equipment lest he fall off the mountain in an unsuccessful bid to reach the peak? Should a robot abstain from obeying human commands pertaining to crossing busy roads or driving sports cars? Which level of risk should trigger the refusal program? At which stage of a collaboration should it be activated? Should a robot refuse to bring a stool to a person who intends to commit suicide by hanging himself (that's an easy one)? Should it ignore an instruction to push someone off a cliff (definitely), to climb the cliff (less assuredly so), to get to the cliff (maybe so), to get to his car in order to drive to the cliff in case he is an invalid? Where does the responsibility and obedience buck stop?
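The "refusal program" question can be phrased as a simple threshold test, which makes its arbitrariness visible. In this sketch the activities and their risk figures are invented placeholders, not real statistics:

```python
# Hypothetical injury-risk table. The figures are invented for
# illustration only - they are NOT real actuarial data.
RISK_OF_INJURY = {
    "surgery":           0.02,
    "mountain_climbing": 0.01,
    "driving":           0.005,
    "crossing_road":     0.0001,
}

def should_refuse(activity: str, threshold: float) -> bool:
    """Refuse to cooperate if the activity's risk exceeds the threshold."""
    return RISK_OF_INJURY[activity] > threshold

# One threshold refuses surgery and climbing; a looser one refuses nothing.
refused = [a for a in RISK_OF_INJURY if should_refuse(a, 0.008)]
```

Nothing in the First Law says whether the threshold belongs at 0.008, 0.0001, or anywhere else - yet wherever it is set, the robot will either obstruct surgeons and mountaineers or wave through genuinely dangerous instructions.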
Whatever the answer, one thing is clear: such a robot must be equipped with more than a rudimentary sense of judgement, with the ability to appraise and analyse complex situations, to predict the future and to base its decisions on very fuzzy algorithms (no programmer can foresee all possible circumstances). To me, this sounds much more dangerous than any recursive automaton which does NOT include the famous Three Laws.
Moreover, what, exactly, constitutes "inaction"? How can we set apart inaction from failed action or, worse, from an action which failed by design - intentionally? If a human is in danger and the robot tried to save him and failed, how will we be able to determine to what extent it exerted itself and did everything that it could do?
How much of the responsibility for the inaction or partial action or failed action should be attributed to the manufacturer - and how much imputed to the robot itself? When a robot decides finally to ignore its own programming - how will we be informed of this momentous event? Outside appearances should hardly be expected to help us distinguish a rebellious robot from a lackadaisical one.
The situation gets much more complicated when we consider conflict states. Imagine that a robot has to hurt one human in order to prevent him from hurting another. The Laws are absolutely inadequate in this case. The robot would have to establish either an empirical hierarchy of injuries or an empirical hierarchy of humans. Should we, as humans, rely on robots or on their manufacturers (however wise and intelligent) to make this selection for us? Should we abide by their judgement of which injury is more serious than the other and warrants their intervention?
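Any such hierarchy of injuries must eventually collapse into a comparison of numbers, and those numbers encode a value judgement the Laws never license. A minimal sketch, with severity scores that are entirely invented:

```python
# Hypothetical conflict resolver: hurt one human only if the injury
# thereby prevented outranks the injury caused. Severity scores are
# invented placeholders - the Laws supply no such scale.

def intervene(injury_caused: int, injury_prevented: int) -> bool:
    return injury_prevented > injury_caused

intervene(injury_caused=2, injury_prevented=8)  # restrain an attacker -> True
intervene(injury_caused=5, injury_prevented=5)  # equal harms -> False: freeze
```

Whoever assigns the scores - robot or manufacturer - is the one actually making the moral choice; the second call, where the harms tie, shows the robot paralysed exactly where a decision matters most.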
A summary of the Asimov Laws would give us the following "truth table":
A robot must obey human orders, with the following two exceptions:
1. That obeying them will cause injury to a human through an action, or
2. That obeying them will let a human be injured.
A robot must protect its own existence, with three exceptions:
1. That such protection is injurious to a human;
2. That such protection entails inaction in the face of potential injury to a human;
3. That such protection will bring about insubordination (not obeying human instructions).
Here is an exercise: create a truth table based on these conditions. There is no better way to demonstrate the problematic nature of Asimov's idealized yet highly impractical world.
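The exercise can be carried out mechanically. The sketch below enumerates every combination of three of the summarised conditions and flags the corner where the Laws compel contradictory verdicts; the encoding is one possible reading of the summary, not Asimov's own:

```python
# Enumerate a truth table over three conditions from the summary above:
#   ordered          - a human has given the robot an order
#   obeying_harms    - carrying out the order would injure a human
#   refusing_harms   - refusing the order would let a human come to harm
from itertools import product

rows = []
for ordered, obeying_harms, refusing_harms in product([False, True], repeat=3):
    must_obey = ordered and not obeying_harms      # Second Law, no exception
    must_refuse = obeying_harms                    # First Law overrides orders
    # The contradictory corner: obeying injures a human AND so does refusing.
    deadlock = ordered and obeying_harms and refusing_harms
    rows.append((ordered, obeying_harms, refusing_harms,
                 must_obey, must_refuse, deadlock))

deadlocks = [r for r in rows if r[-1]]   # the paradox Asimov dramatised
```

Of the eight rows, exactly one is a deadlock - and it is precisely the conflict state discussed above, where no assignment of actions satisfies all the Laws at once.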