1. A computer lacks conscious awareness because it lacks real emotions; it only does what it is programmed to do. For example, if a computer were to hit its head on a door and respond "ouch!", that would be its way of pretending to have conscious awareness. In actuality it is only saying what it was pre-programmed to say in response to something that would look painful to a human. If a computer were to excel on a test, it would not really be aware that it did well. Even if the computer gave the impression of high intelligence and was able to answer most questions like a smart, self-aware human, it would have no continuity of attention, nor would it be aware of what it thinks or knows, because it only does what it has been programmed to do.
Yet the likeness between a computer and a brain is not only superficial: at their most elementary levels, computers and brains process information in a comparable way. Just as computers use zeros and ones to store and manipulate data, neurons in our brains send signals back and forth in a binary fashion, either firing or not. This basic likeness between computers and the human brain is what the dualist exploits to fool someone into thinking that a computer is consciously aware.
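The binary parallel claimed above can be made concrete with a small sketch. This is illustrative only: the word, the threshold value, and the toy neuron model are my own assumptions, and real neurons are far more complicated than a binary threshold unit.

```python
# Illustrative sketch of the "both are binary" claim above.
# A computer stores the word "ouch" as a sequence of zeros and ones:
word = "ouch"
bits = " ".join(format(byte, "08b") for byte in word.encode("utf-8"))
print(bits)  # 01101111 01110101 01100011 01101000

# A (highly simplified, hypothetical) neuron can be modeled as firing (1)
# or not firing (0) once its summed input crosses a threshold:
def neuron_fires(inputs, threshold=1.0):
    return 1 if sum(inputs) >= threshold else 0

print(neuron_fires([0.4, 0.7]))  # 1: combined input exceeds the threshold
print(neuron_fires([0.2, 0.3]))  # 0: below threshold, no spike
```

The sketch shows only the formal parallel the essay relies on; it says nothing about whether such encoding amounts to awareness, which is precisely the point in dispute.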
2. In Grounding for the Metaphysics of Morals, Immanuel Kant explains the morality of the categorical imperative at length. He gives many examples, such as the one about false promises. An individual facing financial trouble is in need of money. This person wants to obtain cash by asking for a loan and promises to pay it all back, yet does not really plan on repaying it. What would be the purpose of making this dishonest claim? Is it good or
There are two modes of reflection. In objective reflection, truth becomes an object, and the tendency may be to disregard the knowing subject (the individual). In subjective reflection, by contrast, truth becomes personal appropriation, a life, inwardness, and the purpose is to immerse oneself in this subjectivity. "Objective knowledge" would then refer to knowledge of an objective reality, while "subjective knowledge" would be knowledge of some subjective reality. There are, however, other uses of the terminology associated with objectivity. Many philosophers use the expression "subjective knowledge" to refer only to knowledge of one's own subjective states. Such knowledge is distinguished both from one's knowledge of another individual's subjective states and from knowledge of objective reality, each of which would count as objective knowledge under the present definitions. Your knowledge of another person's subjective states can be called objective knowledge, since those states are presumably part of the reality that is "object" for you, just as you and your subjective states are part of the reality that is "object" for the other person. This is a significant distinction for epistemology (the philosophical study of knowledge), because a number of philosophers have maintained that subjective knowledge in this sense has a special status. They assert, roughly, that knowledge of one's own self or
Andy Clark argues forcefully for the theory that computers have the potential to be intelligent beings in his work "Mindware: Meat Machines." Clark supports his claim by comparing humans and machines as systems that both use arrays of symbols to perform their functions. The main argument of his work can be interpreted as follows:
In the essay titled "Foundations of the Metaphysics of Morals," published in the Morality and Moral Controversies course textbook, Immanuel Kant argues that our view of the world and its laws is structured by human concepts and categories, and that the rationality behind it is the source of morality, which depends upon belief in the existence of God. In Kant's work, the categorical imperative was established in order to provide a standard rationale from which all moral requirements derive. The categorical imperative is therefore an obligation to act morally, out of duty and good will alone. In Immanuel Kant's writing, human reason and rationality are innate moral capacities responsible for guiding human beings; needless to say, they also allow people to distinguish right from wrong. For these reasons, any action must be executed solely out of duty, and its worth lies not in its consequences but in the motive and intent behind it. Kant supports his argument by dividing the essay into three sections. In the first section he calls attention to common sense mor...
Kant describes them by stating, "When I conceive a hypothetical imperative in general, I do not know beforehand what it will contain until its condition is given. But if I conceive a categorical imperative, I know at once what it contains" (88). As before, categorical imperatives are absolutely moral in themselves, meaning they do not rely on a person's desires or feelings. This contrasts with hypothetical imperatives, which are obligations whose force depends on the end your action serves, and thus on your personal desires or aims. An example of a hypothetical imperative is, "I need to ea...
Immanuel Kant's Grounding for the Metaphysics of Morals explores the understanding of morals and the process by which these morals are developed through philosophy. He also disentangles the usefulness and foundation of the institution of religion.
Immanuel Kant is steadfast in his belief that before anyone can do anything absolutely moral, they must reason about what would occur if every person on Earth did this exact thing, or as he puts it, "Act only according to that maxim whereby you can at the same time will that it should become a universal law" (Kant, Grounding for the Metaphysics of Morals, 30). This philosophy seems sound, but it is actually inherently flawed: when it comes into conflict with his views on lying, it makes both positions nearly impossible to live by. It also does not account for different people operating in different situations all over the world, opting instead for an absolute, infallible morality. This casts ethics in a disturbingly black and white
Philosophy is the study of knowledge, reality, existence, and thought processes. Immanuel Kant, from Prussia (a region now partly in Russia), was influential during the Enlightenment; John Stuart Mill, from Great Britain, lived during the Romantic era. Both explored ideas that they believed would create a fairer and more just society by trying to legislate morality. Morality cannot be legislated, however, because it is a concept of right and wrong shaped differently by each religion, region, and culture; such issues are not black and white.
Immanuel Kant is a well-known modern philosopher. He was a modest and humble man of his time: he never left his hometown, never married, and never strayed from his schedule. Kant may come off as boring, and he was an introvert, but he had a great deal to offer. His thoughts and concepts from the 1700s are still studied today. His most recognized work is the Groundwork of the Metaphysics of Morals, in which Kant expresses his ideas of the 'Good Will' and the 'Categorical Imperative'.
If a machine passes the test, then for many ordinary people that would be sufficient reason to say it is a thinking machine. And in fact, since it is able to converse with a human and actually fool him into believing that the machine is human, this would seem t...
Cognitive processes are the unseen systems our minds use to complete tasks such as solving problems, recognising an object, or learning a language. These unseen mental processes take place in the brain, a complex piece of equipment often compared to a computer. When the internal workings of a computer are exposed, all we see are microchips, circuit boards, hard drives, and other assorted pieces, which all work and ...
If dualism is true and minds are nonphysical, then it is impossible for us to make contact with the minds of others. We assume that if another person behaves in the same manner as we do, then they must have a mind. This could be troublesome because, according to Turing, machines may soon be able to mimic human beha...
For years philosophers have enquired into the nature of the mind, and specifically into the mysteries of intelligence and consciousness (O'Brien 2017). One of these mysteries is how a material object, the brain, can produce thoughts and rational reasoning. The Computational Theory of Mind (CTM) was devised in response to this problem; it suggests that the brain is quite literally a computer, and that thinking is essentially computation. (BOOK) This idea was first theorised by the philosopher Hilary Putnam and later developed by Jerry Fodor, and it continues to be investigated today as cognitive science, modern computers, and artificial intelligence advance. [REF] Computer processing machines 'think' by recognising information
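The CTM picture of "thinking as computation" can be sketched with a toy rule table. The symbols and rules below are invented for illustration; they are not drawn from Putnam's or Fodor's own formalisms.

```python
# Toy sketch of the CTM idea: a "mental process" as a rule-governed
# mapping over symbols. The symbols have no meaning *to the system*;
# only their formal shape drives the transitions.
RULES = {
    ("HUNGRY", "FOOD_PRESENT"): "EAT",
    ("HUNGRY", "NO_FOOD"): "SEARCH",
    ("NOT_HUNGRY", "FOOD_PRESENT"): "IGNORE",
}

def compute(state, percept):
    """Map a (state, percept) symbol pair to an action symbol."""
    return RULES.get((state, percept), "DO_NOTHING")

print(compute("HUNGRY", "FOOD_PRESENT"))  # EAT
print(compute("NOT_HUNGRY", "NO_FOOD"))   # DO_NOTHING
```

On the CTM view, a mind differs from this sketch in scale and sophistication, not in kind: whether such formal transitions could ever amount to genuine thought is exactly what the surrounding debate contests.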
In Grounding for the Metaphysics of Morals, Immanuel Kant argues that human beings inherently have the capability to make purely rational decisions, decisions not based on inclinations, and that such rational decisions prevent people from interfering with the freedom of others. Kant's view of this inherent ability to reason brings a different perspective to the ways in which human beings can pursue morality, and it therefore requires close analytical examination.
The "human sense of self control and purposefulness, is a user illusion" (261); therefore, if computational systems are comparable to human consciousness, this raises the question of whether such artificial systems should be treated as humans. Such programs are even capable of learning like children, with time and experience; the programs "[get] better at their jobs with experience." However, many can argue that the difference is self-awareness, and that there are many organisms that can carry out similarly complex behavior yet have no sense of identity.
The traditional notion that seeks to compare human minds, with all their intricacies and biochemical functions, to artificially programmed digital computers is self-defeating, and it should be discredited in discussions of the theory of artificial intelligence. This traditional notion is akin to comparing, in crude terms, cars and aeroplanes, or ice cream and cream cheese. Human mental states are caused by various behaviours of elements in the brain, and these behaviours are determined by the biochemical composition of our brains, which is responsible for our thoughts and functions. When we discuss the mental states of systems, it is important to distinguish between human brains and those of any natural or artificial organism said to have a central processing system (e.g. the brains of chimpanzees, microchips, etc.). Although various similarities may exist between those systems in terms of function and behaviour, the intrinsic intentionality within those systems differs extensively. Although it may not be possible to prove whether mental states exist at all in systems other than our own, in this paper I will strive to argue that a machine that computes and responds to inputs does indeed have a state of mind, but one that does not necessarily amount to a form of mentality. This paper will discuss how the states and intentionality of digital computers differ from the states of human brains and yet are indeed states of a mind, resulting from various functions in their central processing systems.
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers perform complex assignments, like launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and the other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer; one example is the capacity to learn new things. Even the most advanced computer can never learn the way a human does. While we might be able to load new information onto a computer, it can never learn new material by itself. Computers are also limited in what they "learn" by the memory or hard-disk space they have left, unlike the human brain, which keeps learning every day. Computers can neither make judgments about what they are "learning" nor disagree with the new material; they must accept into their memory whatever is programmed into them. Besides, everything found in a computer is based on what the human brain has acquired through experience.