The New Transformer or the Next Terminator? An In-Depth Look at the Future of Self-Driving Cars

13 minute read


“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” - Dr. Ian Malcolm, Jurassic Park

When German inventor Carl Benz introduced the first automobile, the world was astounded. People were amazed at the machinery, and those rich enough demanded one of their own. However, when Google introduced its self-driving car company, Waymo, in 2016, public opinion was far more divided. Some people wanted to buy one immediately, some wanted nothing to do with it, and others wanted to prevent it from ever entering the market. These polar opposite reactions sparked debate among scientists, engineers, lawyers, and ethicists. The main question they are all debating is whether self-driving cars should be introduced into the market, given their wide-ranging effects on ethics, the law, cybersecurity, and the economy. However, the benefits autonomous cars bring (a reduced risk of accidents, economic growth, and greater independence for those who need it) outweigh the relatively small adverse consequences that come along with them. Today, with self-driving cars already on the road, there is no doubt they will be part of the foreseeable future.

Similarly, when Google first publicly introduced its self-driving car, people were pleasantly surprised: engineers had created an autonomous vehicle that could drive around with minimal effort from a safety driver. Although it sounds futuristic, this idea is not a novel one. It was first conceived in the 15th century by the famous Leonardo da Vinci, who drew a rough blueprint for a self-propelled cart that used springs for propulsion, similar in spirit to a self-driving car (Nguyen). Fast-forward a few centuries to today, and many well-established companies have started designing and testing self-driving cars with the purpose of reducing human-caused crashes. So far, the most successful self-driving car has been created by Waymo, a company that belongs to Google’s parent company Alphabet. It has been test-driven in Mountain View, California; Kirkland, Washington; Metro Phoenix, Arizona; and Austin, Texas, collecting data since 2009. How the car maneuvers around obstacles and drives on the road comes down to five main components (Gates):

- A Lidar unit on the roof produces laser beams to generate a 360-degree image of the surroundings.
- Additional Lidar units on the front of the car detect obstacles directly ahead.
- A radar sensor measures the distance to objects around the car.
- Cameras find the parallax (the displacement in an object’s apparent position when viewed along different lines of sight) across multiple images, which lets the car detect everyday objects on the road: traffic lights and signs, bicyclists, pedestrians, and so on.
- The main computer, stored in the trunk, runs the car’s core program, analyzing all the images and actually maneuvering the car around obstacles.

With all these safety features in place, Waymo’s six-year study found that its cars had been in only minor accidents, and of those, only one was the fault of the autonomous car! Furthermore, the rate of safety disengagements dropped drastically, from 0.8 per 1,000 miles to 0.2 per 1,000 miles in 2016, and the numbers are continuing to fall.
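To make the camera’s parallax measurement concrete, here is a minimal sketch of depth-from-disparity, the geometry behind it. The function name and all the numbers are illustrative assumptions, not Waymo’s actual code:

```python
# Minimal sketch of depth-from-parallax (stereo disparity), the idea behind
# the camera component described above. Illustrative only, not Waymo's code.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate distance to an object from its parallax between two views.

    focal_length_px: camera focal length, in pixels
    baseline_m: distance between the two camera centers, in meters
    disparity_px: horizontal shift of the object between the two images
    """
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two views")
    return focal_length_px * baseline_m / disparity_px

# Example: a traffic light shifted 20 px between two cameras 0.5 m apart,
# seen through an 800 px focal length, is roughly 20 m away.
print(depth_from_disparity(800.0, 0.5, 20.0))  # 20.0
```

The farther away an object is, the smaller its parallax, which is why the same formula also tells the car when an obstacle is getting dangerously close.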

Despite these facts, people still have doubts about whether self-driving cars are ready for the road. One of these doubts raises the question of ethics. The main concern is how the car will behave in an unavoidable dilemma similar to the trolley problem. Imagine riding in a brand-new self-driving car on the highway, boxed in on all four sides: an eighteen-wheeler in front, an SUV behind, a motorcycle on the left, and a car of similar weight on the right, all of them non-autonomous. Suddenly, the truck in front brakes hard, and now the car has to make a decision. But what course of action should it take? Should it minimize the risk of others getting hurt or maximize the probability of saving its occupants? When Jean-Francois Bonnefon, a cognitive psychologist and professor at the Toulouse School of Economics in France, polled public opinion, he found that “people are in favor of cars that sacrifice the occupant to save other lives–as long they don’t have to drive one themselves” (MIT Review). And therein lies the paradox that casts doubt on what should be deemed the “correct” decision. However, many engineers argue that overthinking the trolley problem could cause more problems than it solves. Alex Shashkevich interviewed Stephen Zoepf, executive director of the Center for Automotive Research at Stanford (CARS), who considers the fixation unproductive: “If there is a way to improve on [decisions made on the road] with driverless cars, why wouldn’t we?” (Shashkevich). Since self-driving cars will reduce human error, the cause of 94% of road accidents (Waymo), it is better to introduce them to the market.
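To see why the two objectives conflict, here is a toy sketch of the boxed-in scenario above. The harm estimates are invented for illustration; no manufacturer has published decision logic like this:

```python
# Toy model of the highway dilemma described above. Each maneuver maps to
# (estimated harm to occupants, estimated harm to others) on a 0-1 scale.
# All numbers are invented for illustration.
maneuvers = {
    "brake_hard":   (0.7, 0.1),  # risk a rear collision with the SUV behind
    "swerve_left":  (0.2, 0.9),  # endanger the motorcyclist
    "swerve_right": (0.4, 0.5),  # collide with the similar-weight car
}

# Policy A: minimize total harm to everyone (the behavior respondents
# endorsed for other people's cars).
utilitarian = min(maneuvers, key=lambda m: sum(maneuvers[m]))

# Policy B: protect the occupants first (the behavior respondents wanted
# in a car they would actually buy).
self_protective = min(maneuvers, key=lambda m: maneuvers[m][0])

print(utilitarian, self_protective)  # brake_hard swerve_left
```

The two policies pick different maneuvers from the same data, which is exactly Bonnefon’s paradox: the car people endorse in surveys is not the car they would buy.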

Many companies, such as Tesla, Apple, and Uber, have taken on the endeavor of creating a marketable self-driving car; however, each has answered the trolley dilemma above differently, and the lack of a common standard creates a problem. In his article “Self-Driving Cars Might Need Standards, but Whose?”, John Quain describes the effect of self-driving cars running different programs. For example, Hyundai’s self-driving Ioniq treats every decision cautiously: it stops nine feet short of crosswalks and will not move forward if a pedestrian is on the sidewalk, even one just standing there, poised to cross. Delphi’s test car, on the other hand, uses an Intel computing platform that is much more assertive and gives the pedestrian less deference. This difference causes problems because it forces pedestrians and drivers of non-autonomous vehicles to recognize which company’s self-driving car they are facing and decide how to act based on that information. The solution is a common standard among all companies that build self-driving cars, ensuring that both the law and the public know what course of action to expect. In addition, at a Stanford University conference on “The future of artificial intelligence and self-driving cars,” mechanical engineering professor Chris Gerdes argued that safety guidelines every self-driving car must follow would give public policy for self-driving cars something concrete to enforce (Stanford University School of Engineering).
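As a purely hypothetical illustration of what a shared standard might look like, here is a sketch of a common pedestrian-handling policy. Every field name and value below is invented; no such standard exists today:

```python
# Hypothetical sketch of a shared pedestrian-handling standard. All names
# and values are invented; the point is that if every vendor loaded the same
# parameters, pedestrians would not need to know who built the car.

from dataclasses import dataclass

@dataclass(frozen=True)
class PedestrianPolicy:
    crosswalk_stop_distance_ft: float  # how far before a crosswalk to stop
    yield_to_waiting_pedestrian: bool  # stop even for someone still on the curb
    max_creep_speed_mph: float         # speed allowed while a pedestrian is near

# One standard adopted by every vendor, instead of Hyundai's cautious
# nine-foot stop and Delphi's more assertive behavior.
COMMON_STANDARD = PedestrianPolicy(
    crosswalk_stop_distance_ft=9.0,
    yield_to_waiting_pedestrian=True,
    max_creep_speed_mph=3.0,
)
```

With a single published policy like this, the law would have one set of behaviors to codify, and a pedestrian stepping off a curb would know what any self-driving car will do.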

Once a common standard is in place, the law must be updated to define the right course of action. Harry Surden and Mary-Anne Williams researched how the National Highway Traffic Safety Administration would have to change its rules once self-driving cars are introduced into society. One question stands out among the others: if a self-driving car is responsible for a crash, who is to blame? The people who programmed it to crash intentionally in that situation? The person who owns the car? The car itself? There are so many factors to consider that it is not easy to determine who holds the blame. Under current tort law, the person driving is penalized if and only if the accident was clearly avoidable. But while a pedestrian knows a human driver may not be able to stop if they dart in front of the car, how is a pedestrian supposed to know whether a self-driving car has the same reflexes? To be safer still, a self-driving car should be able to communicate that it understands someone or something is in its vicinity.

Another aspect of the self-driving car that must improve is its predictability. “One concern is that the development of self-driving vehicles is such a new and constantly evolving technological area, that government agencies may not have the technological expertise to prescribe efforts beyond high-level coordination and focus” (Surden 175). As of now, self-driving cars have many components, and they behave differently from company to company because each runs a different program. This creates a legal problem: the law cannot discriminate, so it cannot be written to fit only one company. Increasing the predictability of the car will make it easier to create and enforce laws that apply to all. It will also make it easier for pedestrians to understand how to react, increasing overall safety.

However, there is still one problem the law has barely begun to address: cybersecurity. Although few in number, there are cybercriminals willing to hack into a self-driving car’s software and create a potentially perilous situation. Engineers have already foreseen this problem. In Gil Press’s article “Stopping Self-Driving Cars from Becoming Cybersecurity Weapons,” he interviews Diskin, former head of Israel’s internal security service, who points out that behind cyberattacks are human beings, not computers. Human beings make many mistakes, and this weakness suggests a better approach to cybersecurity: “Identify specific behaviors and intentions of specific attackers, construct their ‘digital signature’ –their unique patterns of behavior, make behavioral science an important aspect of cyber defense” (Press). By analyzing previous cyberattacks and the people behind them, defenders can learn how attackers work and adapt their code to prevent the attack from ever happening. Furthermore, a smart IT system could sense a cyberattack before it occurs and stop it altogether.
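As a minimal sketch of the “digital signature” idea, the snippet below flags command traffic that matches a known attacker’s behavioral pattern. Everything here is illustrative; real automotive intrusion detection is far more sophisticated:

```python
# Minimal sketch of behavioral-signature matching. The signature and command
# names are invented for illustration.

# Hypothetical attacker pattern: repeated diagnostic probes before a takeover.
ATTACK_SIGNATURE = ["diagnostic", "diagnostic", "diagnostic", "steer"]

def matches_signature(recent_commands: list) -> bool:
    """Return True if the attacker's known pattern appears in recent traffic."""
    n = len(ATTACK_SIGNATURE)
    return any(recent_commands[i:i + n] == ATTACK_SIGNATURE
               for i in range(len(recent_commands) - n + 1))

# Normal driving does not trip the detector; the probe sequence does.
print(matches_signature(["steer", "brake", "steer"]))           # False
print(matches_signature(["steer", "diagnostic", "diagnostic",
                         "diagnostic", "steer"]))               # True
```

Flagging the pattern before the final takeover command executes is the “sense the attack before it occurs” behavior described above.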

Lastly, the economy will change once self-driving cars are introduced into the market. One group that will be specifically affected is taxi, ride-share, and truck drivers. They worry they will lose their jobs, since companies are more likely to buy self-driving cars because they are cheaper and less likely to get into an accident, and people would probably feel safer traveling in a self-driving car than in a taxi. However, according to Ken Taylor, a professor of philosophy at Stanford, people need not worry. He claims, “You can’t outsource driving…Technology has always destroyed jobs but created other jobs.” For example, Otto, a San Francisco company, has been planning to use self-driving trucks that can carry cargo over longer distances in less time than human drivers alone. While this may sound like it gets rid of human truck drivers, that is not the case: the level of autonomy these trucks have reached is Level 3, meaning they still need safety drivers to take control in some situations. Therefore, there will still be a need for truck drivers. In addition, “those jobs, which pay an average of about $40,000 a year, could even improve” (Freedman).
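For reference, the autonomy levels mentioned here come from the SAE J3016 scale, sketched below. At Level 3 the system drives, but a human must be ready to take over on request, which is why Level 3 trucks still need drivers:

```python
# The SAE J3016 levels of driving automation. At Level 3 (conditional
# automation) a human must still take over when the system requests it.

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control OR lane keeping
    PARTIAL_AUTOMATION = 2      # steering and speed together; human monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; human takes over on request
    HIGH_AUTOMATION = 4         # no human needed within a defined domain
    FULL_AUTOMATION = 5         # no human needed anywhere

def needs_safety_driver(level: SAELevel) -> bool:
    """A human fallback is required at Level 3 and below."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(needs_safety_driver(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(needs_safety_driver(SAELevel.HIGH_AUTOMATION))         # False
```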

There are a plethora of factors that come into the picture when dealing with self-driving cars. The way the economy will change, the effects of cybersecurity, the laws that will be amended, and the ethical questions that must be answered all play a part in deciding when self-driving cars will be on the road. Elon Musk, CEO of Tesla, predicted: “Almost all cars produced will be autonomous in 10 years. It will be rare to find one that is not in 10 years. That’s going to be a huge transformation” (Boyle). With the technology developing at a rapid pace, self-driving cars will reach Level 4, where the program has full control and authority in decision making. Autonomous cars are no longer a science fiction notion or a futuristic idea. Self-driving cars are already on the road, and they will continue to drive on.

Works Cited

Boyle, Alan. “Elon Musk on how our cars will drive us – and how AI could threaten us – by 2037.” GeekWire, Jul 17, 2017, https://www.geekwire.com/2017/elon-musk-cars-will-drive-us-ai-threaten-us-2037/. Accessed Dec 1, 2017.

“Chris Urmson: How a driverless car sees the road.” YouTube, uploaded by TED, Jun 26, 2015, https://www.youtube.com/watch?v=tiwVMrTLUWg.

Freedman, David. “Self-Driving Trucks.” MIT Technology Review, 2017, https://www.technologyreview.com/s/603493/10-breakthrough-technologies-2017-self-driving-trucks/. Accessed Oct 6, 2017.

Gates, Guilbert. “The Race for Self-Driving Cars.” The New York Times, Jun 6, 2017, https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html. Accessed Nov 30, 2017.

Nguyen, Tuan. “History of Self-Driving Cars.” ThoughtCo, Nov 30, 2016, https://www.thoughtco.com/history-of-self-driving-cars-4117191. Accessed Nov 1, 2017.

Press, Gil. “Stopping Self-Driving Cars from Becoming Cybersecurity Weapons.” Forbes, Jul 19, 2017, https://www.forbes.com/sites/gilpress/2017/07/19/stopping-self-driving-cars-from-becoming-cybersecurity-weapons/#2cbdb8406723. Accessed Nov 20, 2017.

Quain, John. “Self-Driving Cars Might Need Standards, but Whose?” The New York Times, Feb 23, 2017, https://www.nytimes.com/2017/02/23/automobiles/wheels/self-driving-cars-standards.html. Accessed Oct 8, 2017.

Shashkevich, Alex. “Stanford scholars, researchers discuss key ethical questions self-driving cars present.” Stanford News, May 22, 2017, http://news.stanford.edu/2017/05/22/stanford-scholars-researchers-discuss-key-ethical-questions-self-driving-cars-present/. Accessed Oct 8, 2017.

Surden, Harry, and Mary-Anne Williams. “Technological Opacity, Predictability, and Self-Driving Cars.” Cardozo Law Review, vol. 38, no. 1, 2016, pp. 121-181.

“Technology.” Waymo, n.d., https://waymo.com/tech/. Accessed Oct 6, 2017.

Teoh, Eric, and David Kidd. “Rage against the machine? Google’s self-driving cars versus human drivers.” Journal of Safety Research, vol. 63, 2017, pp. 57-60.

“The future of artificial intelligence and self-driving cars.” YouTube, uploaded by Stanford University School of Engineering, May 15, 2017, https://www.youtube.com/watch?v=JLWB-eY9iIU.

“Why Self-Driving Cars Must Be Programmed to Kill.” MIT Technology Review, Oct 22, 2015, https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/. Accessed Oct 8, 2017.
