Artificial Intelligence, Superintelligence, And Ethical Dilemmas

Imagine a scenario in the near future where self-driving cars are a common sight. People are familiar with machines making decisions for them, and nobody questions the effectiveness of these machines. One day, a car is driving its occupant down a winding road when, all of a sudden, a child runs into the street. The car must now make a decision based on the instructions given to it upon creation. Does the car swerve and crash to miss the child, killing the passenger? Or does it stay on course, killing the child to save the passenger? This is an ethical problem that has been debated ever since the first work on artificial intelligence. When we create intelligent machines that are able to make decisions on their own, it is inevitable that some of those decisions will be unfavorable to someone.
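
To make the dilemma concrete, here is a minimal sketch of the kind of hard-coded rule such a car might carry. Everything in it (the Outcome class, choose_action, the expected-death counts) is a hypothetical illustration, not any real autonomous-driving system's logic: it simply picks whichever action minimizes expected deaths, and the tie in this scenario shows why a purely numerical rule settles nothing.

```python
# Hypothetical sketch of a hard-coded "least harm" rule for the scenario
# above. All names and numbers are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "swerve_and_crash" or "stay_on_course"
    expected_deaths: int  # crude stand-in for harm to the people involved

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action with the fewest expected deaths.

    A purely utilitarian rule: it has no concept of passenger vs.
    pedestrian, age, or responsibility -- which is exactly why the
    dilemma has no settled answer.
    """
    return min(outcomes, key=lambda o: o.expected_deaths)

# The child-in-the-road scenario, reduced to two options:
scenario = [
    Outcome(action="swerve_and_crash", expected_deaths=1),  # passenger dies
    Outcome(action="stay_on_course", expected_deaths=1),    # child dies
]
print(choose_action(scenario).action)
```

Because both outcomes score identically, the rule falls back on whichever option happens to be listed first; the ethics are decided by an accident of ordering, not by any principle.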

Many are worried that if a military were to give an intelligent machine, one that can make its own decisions, control over weapons and defense systems, then catastrophe would soon follow. If an AI were to decide at some point that humans were no longer necessary, or that conflict was, it would have control over powerful weapons and be able to wreak havoc in human society. The use of AI for hostile or malicious purposes is almost guaranteed to backfire and cause more damage than ever anticipated. The routes to such a disaster could be varied and complex: corporations seeking technological advantage, countries seeking to beat their enemies, or a slow-boiled-frog kind of evolution leading to enfeeblement and dependency.

We would need to create a machine with the ability to hold more memory and processing power than the human brain. Such computers have arguably already been built; one supercomputer in China was reported to perform three times as many calculations per second as the human brain. The problem arises from combining such a computer with all the abilities an AI would need to overtake human intelligence. Mere computational power is not the only limiter, though. In order to create an artificial intelligence, we would need to know how to program a machine to think and learn on its own (AI Takeover). Many technologies try to imitate this effect; Siri and other smart assistants are one example. Creating a machine that actually learns is much more difficult. The machine would have to be able to take in information, determine its importance, and know when that information would be useful, all without an outside force directing it toward which information to use. Essentially, it would need to be programmed to use information like a human, only exponentially faster and more efficiently; a toy version of that loop is sketched below. All of these capabilities would need to be designed by humans in some way. After such capabilities are created, we would then have to somehow program the way the AI uses its intelligence. Would it make all decisions based on logic, statistics, and probabilities? Would it be able to understand human emotion?
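
As one way to picture that self-directed learning loop, the sketch below shows a machine taking in observations, scoring on its own how useful each signal has been for predicting outcomes, and keeping only the ones that earn their place. The Learner class, observe, and useful_signals are all invented names for this illustration, not a real library.

```python
# A minimal sketch of self-directed learning: the machine takes in
# information, measures each signal's track record at predicting an
# outcome, and decides for itself which signals are worth acting on.

from collections import defaultdict

class Learner:
    def __init__(self):
        self.hits = defaultdict(int)  # times a signal preceded the outcome
        self.seen = defaultdict(int)  # times the signal appeared at all

    def observe(self, signals: set[str], outcome: bool) -> None:
        """Take in information and update importance, with no outside direction."""
        for s in signals:
            self.seen[s] += 1
            if outcome:
                self.hits[s] += 1

    def useful_signals(self, threshold: float = 0.8) -> list[str]:
        """Decide which information has proven useful enough to keep."""
        return [s for s in self.seen
                if self.seen[s] >= 5 and self.hits[s] / self.seen[s] >= threshold]

learner = Learner()
for _ in range(10):
    learner.observe({"dark_clouds", "tuesday"}, outcome=True)   # rain followed
    learner.observe({"tuesday"}, outcome=False)                 # no rain
print(learner.useful_signals())  # ['dark_clouds'] -- importance learned from data
```

Even this toy only ranks signals it was fed; deciding what to observe in the first place, and when a signal matters, is the part no one yet knows how to program.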