In this paper, I will explore ethical issues in artificial intelligence. In their coauthored book Moral Machines: Teaching Robots Right from Wrong, Wallach and Allen explore many theoretical and practical issues concerning artificial moral agents (AMAs). I will use this book to interpret Wallach and Allen's ideas on ethical design.
I. Wallach and Allen describe the necessity of well-constructed AMAs
(A) The world needs well-built AMAs
In the introduction to their book, Wallach and Allen raise three essential questions, one of which is "Does the world need AMAs?" (p. 9). They answer in the affirmative: with the rapid development of new technology and mechanization, intelligent autonomous robots are beginning to enter our lives, and AMAs may bring a variety of ethical and social issues to human society. In this respect, AMAs pose a certain degree of risk. The development of AMAs cannot simply be stopped, however, even as futurists and social critics warn of the problems this technology may raise. Recognizing the importance of risk assessment in the process of building AMAs, Wallach and Allen turn to the "precautionary principle" from ethics to regulate AMAs, and point out that there should be a standard for when to apply such ethical principles (p. 52): risk should be assessed conditionally using appropriate methods, and on that basis one can reassess whether the danger of developing AMAs outweighs the benefit. Wallach and Allen maintain an optimistic attitude about the future, believing that in the near future "It will be possible to engineer systems that are more sensitive to the laws and moral considerations that inform ethical decisions than anything presently available." (p. 214) In view of this, Wallach a...
... middle of paper ...
...tinction theory is founded on James Moor's level theory. Moor classifies AMAs into three categories: "implicit ethical agents", "explicit ethical agents", and "full ethical agents", while Wallach and Allen divide the ethical levels of AMAs into "operational morality", "functional morality", and "full-blown moral agency"; this is the so-called "three-fold distinction" theory. From this I can infer that in Moor's theoretical framework, an autopilot is treated as an implicit ethical agent; but from Wallach and Allen's point of view, although its ethical sensitivity is very low, it in some ways matches the definition of functional morality.
Turing, Alan M. "Computing Machinery and Intelligence." Mind 59 (1950): 433-460.
Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2010.