The Driverless Dilemma


One of Google's self-driving cars experienced an accident on September 23, 2016. The car drove through a green light but stopped in the middle of the intersection when it sensed another car about to run a red light and applied its brakes. The car kicked into manual mode, but the passenger's reaction took too much time. The speeding car rammed into the autonomous car, and both vehicles sustained heavy damage (Hartmans 2). Driverless cars kill people. With the years flying by, driverless cars seem very close to entering the world. New technology brings new issues all the time. Sometimes these problems do not matter, but people must see the issues with the driverless car. Driverless cars should not be utilized due to the massive ethical programming debate and the technical problems that make the car's safety questionable. Driverless cars do hold potential to reduce the number of accidents on the road. One article states that human mistakes cause more than 90 percent of car accidents and that no matter what problems the autonomous vehicle (AV) possesses, it will still reduce this percentage (Ackerman 3). Humans sometimes make blunders that create an accident.

“A delayed software response of as little as one tenth of a second is likely to be hazardous in traffic” (Shladover 5). Computers must take time to process the situation. While the computer processes, the environment around it continues to change, making it easy for accidents to happen. “An (AV) will have to track dozens of other vehicles and obstacles and make decisions within fractions of a second. The code required will be orders of magnitude more complex than what it takes to fly an airplane” (Shladover 5). The computers require massive amounts of code, which takes more time and money. A program that makes decisions that complex takes years to write. Proving that the AV meets proper safety standards takes a couple more years.
