Self-Driving Car Thesis
In my essay, I plan to write about self-driving cars. The moral issue I want to focus on is artificial intelligence replacing human drivers. This qualifies as a moral issue because we place our safety in the hands of artificial intelligence that supposedly reduces human error. The issue matters to engineers because the group implementing this AI to keep drivers safe will also face the consequences if things take a wrong turn. In the public eye, some will criticize self-driving cars as unsafe, while others will be willing to try them. The important question to ask is: how good is an AI's decision-making? Will it make the proper decision when forced to swerve?
Have you ever feared that a loved one, or someone very close to you, could be involved in a fatal car accident every time they leave the house? Drunk driving is a factor in nearly one-third of all fatal accidents. Even if you aren't the one driving, you are at risk at any moment of being involved in an accident that could have been prevented. By legalizing fully self-driving cars, we would no longer have to fear the pain of losing a loved one this way. Traffic accidents now claim roughly 1.3 million lives a year, and drunk driving remains one of the leading causes of vehicle deaths; therefore, the government should legalize self-driving cars to combat the issue. If we don't act now, we will have to deal with the consequences.
There are a huge number of details that need to be worked out. My first thought was to take the utilitarian approach and minimize the loss of life by saving the greatest number of people, but upon further reflection I started to see its problems. The utilitarian approach is too simplistic. It raises all kinds of questions, such as whether the computer will weigh fault when it decides what to do. For example, if I am in the car with my young child and three eighty-year-old drunks wander out in front of my car by their own choice, will the car choose them over me and my child simply because there are three of them? I would not want the computer to make that decision for me, because frankly I probably would not make that decision myself. That kind of programming would probably deter many people, including me, from buying a self-driving car. It is the same paradox that MIT Technology Review refers to when it says, "People are in favor of cars that sacrifice the occupant to save other lives—as long as they don't have to drive one themselves" ("Why Self-Driving Cars").
Self-driving cars are now appearing on a few roadways in America, giving people a small glimpse into what could be the future of automobiles. Although Google's self-driving cars are getting a lot of attention now, the idea of a self-driving car has actually been around for quite a while. These cars have been tested to their limits, but the American people have yet to adopt the technology into their everyday lives. This essay gives a brief description of their history, explains how they work, and finally answers the question: will self-driving cars ever be widely adopted by the American public?
The safety features now available provide much more protection than harm to humans. Automakers should continue to offer safety features and advance the possibility of a collision-free future as much as possible, but attention must also be paid to the potential harm new features could cause. Safety features should be a precaution, a safety net for true accidents; they should not keep compensating for the bad driving habits that are abundant in our country. By allowing computer technology to provide an instant fix for human error, the error itself is never corrected. With something as deadly as vehicle accidents, fixing the error is just as critical as providing a safety net, if not more so. The ninth commandment of computer ethics applies here: thou shalt think about the social consequences of the program you are writing. How far will vehicle safety go before computers are driving the car for us?
Ethical issues are the most notable among these. In "Why Self-Driving Cars" (2015), a typical ethical dilemma arises: a driverless car can be programmed either to save its passengers by endangering innocent bystanders or to sacrifice its owner to avoid crashing into a crowd. Knight (2015) cites Chris Gerdes, a professor at Stanford University, who gave another scenario in which an automated car can save a child's life only by injuring its own occupant. The real problem, as Deng (2015) indicates, is that a car cannot reason out ethical choices itself the way a human does; it must be preprogrammed to respond, which raises widespread concern. In effect, programmers and designers shoulder the responsibility, since those tough choices must all be made by them before any specific emergency occurs, while the public tends to tolerate such "pre-made errors" less (Knight, 2015; Lin, 2015). In addition to these subjective factors in SDC development, Bonnefon and colleagues identify a paradox in public opinion: people approve of automated algorithms designed to minimize casualties, yet are wary of owning a vehicle whose algorithm might endanger themselves ("Why Self-Driving Cars", 2015).
"Unlike a human who reacts instinctively in an emergency, an autonomous car will have to calculate and choose the appropriate response to each scenario" (Gibbs, "Self-Driving Cars"). This suggests that a car's response time in an emergency could be slower than human reaction time, which could potentially risk the lives of the passengers.
But are they safe? On March 24, 2017, an Uber autonomous car was involved in a three-way crash in Tempe, Arizona. There were no serious injuries; everyone walked away from the overturned Uber with a few bruises and cuts. The accident had nothing to do with a "glitch" or a software error: a distracted driver trying to merge sideswiped the autonomous SUV. This crash, along with Google's accident (one of its self-driving Lexus SUVs rolled slowly into a bus at about 2 mph), has cast a shadow over autonomous vehicles (AVs). Claims that they are "unsafe" and that "computers can't do what a human can" have dampened their reputation and slowed the work of making them safer, even though, statistically, they are 70 percent more efficient than a human driver.
Finally, if an accident were to occur involving a self-driving car, the question of “who is responsible” is raised. This is a difficult question that needs to be addressed with laws that govern liability in these situations.
Imagine a scenario in the near future where self-driving cars are a common sight. People are familiar with machines making decisions for them, and nobody questions the effectiveness of these machines. One day, a car is driving its occupant down a winding road when, all of a sudden, a child runs into the street. The car must now make a decision based on the instructions given to it at its creation. Does the car swerve and crash to miss the child, killing the passenger? Or does it kill the child to save the passenger? This ethical problem has been debated ever since the first work on artificial intelligence. When we create intelligent machines that make decisions on their own, it is inevitable that some of those decisions will be unfavorable.
Another safety benefit of the self-driving car concerns unsafe teen drivers on the roads. In a study conducted by Sheila Sarkar and Marie Andreas, fifty-five percent of 1,430 teenage drivers admitted to engaging in risky behaviors while driving (Sarkar 687). News broadcasts regularly report fatal car accidents involving teen drivers who were racing or driving drunk. In addition, teen drivers are novices on the road with a learning curve, which can at times be dangerous. Self-driving cars would have no learning curve, nor would they have the urge to drive unsafely like many teens do.
Introduction/overview of topic and issues to be discussed: While the idea of self-driving cars seems futuristic and far away, society is actually very close to seeing them on the road. Taking the wheel away from humans and putting it in the hands of computers and artificial intelligence will change travel forever. As a result, many questions need to be addressed before people feel comfortable trusting automated vehicles. What type of technology will be necessary to ensure self-driving cars operate safely and think like humans?
Inventors hope to help people with autonomous cars because "autonomous cars can do things that human drivers can't" (qtd. in "Making Robot Cars More Human"). One advantage of driverless cars is that "They can see through fog or other inclement weather, and sense a stalled car or other hazard ahead and take appropriate action" (qtd. in "Making Robot Cars More Human"). Harsh weather conditions make driving difficult and dangerous for people; the car's ability to drive through inclement weather "frees the user's time, creates opportunities for individuals with less mobility, and increases overall road safety" (Bose 1326). With all the technology and software in the car, it can "improve road traffic system[s] and reduces road accidents" (Kumar). One of the purposes for creating the driverless car was to "make lives easier for senior citizens, people with disabilities, people who are ill, or people who are under influence of alcohol" (Kumar). It can be frightening to know that we share our roads with drivers who could endanger our lives as well as other people's. How can people not feel a sense of worry when "cars kill roughly 32,000 people a year in the U.S." (Fisher 60)? Drivers who text while driving or drink and drive greatly impact the safety of other people, and Google hopes to reduce the risk of accidents and save lives with the self-driving car.
Self-Driving Cars

With each passing day, humans are becoming more and more dependent on artificial intelligence. Smartphones have become ubiquitous and are more than just devices for making phone calls; they are high-powered mini-AI computers practically attached to our hands. We rely on them for directions, banking, commerce, and answers to any conceivable question. They help us keep in touch with family and friends; they organize our ideas and provide us with countless forms of entertainment.
Around the world, technology is increasingly able to work without people, or in their place. In factories, many machines now work in place of people. In public spaces such as shopping malls, stations, and hotels, robots help people find the right direction to where they want to go. Even at airports, we can check in at a machine rather than with a person.
Automotive executives touting self-driving cars as a way to make commuting more productive or relaxing may want to consider another potential marketing pitch: safety (Hirschauge, 2016). The biggest reason these cars will make the world safer is that accident rates will drop enormously. Drivers exhibit a lot of bad behavior behind the wheel, and a computer is actually an ideal motorist. Since 81 percent of car crashes are the result of human error, computers would take a great deal of danger out of the equation. Some major causes of accidents are drivers who become ill while driving; examples include seizures, heart attacks, diabetic reactions, fainting, and high or low blood pressure. Autonomous cars would surely remedy these types of occurrences, making us all safer.