
Addressing the Major Ethical Issues of Autonomous Vehicles

July 18, 2020 by Luke James

There’s a serious flaw in the way programmers currently address ethical concerns around artificial intelligence (AI) and autonomous vehicles (AVs). Although no approach can account for everything, there are a few standout ethical issues that really ought to be looked at.

Mankind is now driven by technology, and nowhere is this more visible than in transportation. Planes, trains, and automobiles have been major drivers of new technology and innovation, both today and in the past. What was set in motion by the likes of Henry Ford, James Watt, and the Wright Brothers continues to this day, propelling our imaginations forward.

With 1.2 billion vehicles on our roads, one of the most prominent concerns is improving driver safety. In fact, it always has been. Since the inception of the automobile, innovations in mechanical, electrical, and electro-optical technologies have made vehicles steadily safer, and more recently, artificial intelligence (AI) has been put to the same use. However, AI is unlike these preceding technologies in that it carries potential ethical implications, especially as we look toward a future where AI not only makes cars safer for us but also controls them entirely.

 

Advanced Driver Assistance Systems

Many electronics companies that support the automotive industry are focused on addressing the technical needs of Advanced Driver Assistance Systems (ADAS) by developing systems and components that allow for better, safer driving. ADAS assists drivers by providing warnings and/or taking corrective actions, automating some portion of the driving task so that safety and performance can be improved.

Today, ADAS is largely cooperative in nature, insofar as the driver has the ultimate say over, and control of, the vehicle; the human-machine interface in ADAS functions more like a co-pilot, with the human maintaining overall responsibility for the vehicle. Over time, however, ADAS is expected to develop into autonomous systems that offer a superior level of intelligence and control and can respond quickly to scenarios on the road with better results. This sounds great in theory, but what about the ethical side of things?

 

Flawed AV Ethics

The way programmers currently approach these ethical concerns is fundamentally flawed, primarily because it doesn’t account for the fact that humans may try to use AVs to do something bad.

For example, if an AV with no passengers is travelling down a road and is about to crash into a car carrying five people, it may have two options: i) continue and crash into the car, or ii) swerve off the road to avoid the car, but hit a pedestrian.

Indeed, there is no shortage of potential scenarios like this, and most discussions of the ethics surrounding them focus on whether the AV’s AI should protect the vehicle and its cargo or instead choose the action that harms the fewest people. This either/or approach, as Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University, puts it, can raise more problems than it solves.

“Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that,” says Dubljević. He gives an example: what if the five people in the car are terrorists trying to exploit the AI’s programming to kill the nearby pedestrian or otherwise hurt other people? In that case, you may well want the AV to hit the car with the five people in it.

“In other words, the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn’t account for malicious intent. And it should.”

 


An autonomous vehicle following a crash.

 

Addressing the Issue

As a potential solution, Dubljević proposes the so-called Agent-Deed-Consequence (ADC) model, which judges the morality of a decision based on three variables, as a framework that AIs could use to make snap moral judgments. First, is the agent’s intent good or bad? Second, is the deed or action itself good or bad? Third, is the outcome or consequence good or bad? This approach allows for considerable nuance, as the sketch below illustrates.
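
To make the idea concrete, here is a minimal, hypothetical sketch in Python of how an ADC-style evaluation might be structured. The Scenario fields, the adc_judgment function, and the equal weighting of the three variables are all illustrative assumptions; Dubljević describes the model only at a conceptual level.

from dataclasses import dataclass

@dataclass
class Scenario:
    agent_intent_good: bool   # Agent: is the intent good or bad?
    deed_good: bool           # Deed: is the action itself good or bad?
    consequence_good: bool    # Consequence: is the outcome good or bad?

def adc_judgment(scenario: Scenario) -> float:
    """Combine the three ADC variables into a single moral score.

    Each variable is weighted equally here for simplicity; real weights
    would need to come from empirical work on human moral judgment.
    """
    score = 0.0
    score += 1.0 if scenario.agent_intent_good else -1.0
    score += 1.0 if scenario.deed_good else -1.0
    score += 1.0 if scenario.consequence_good else -1.0
    return score  # positive leans morally acceptable, negative leans wrong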

For example, most drivers would agree that running a red light is bad. But what if a driver needs to run a red light to get out of the way of an ambulance? What if running that red light means avoiding a collision with the ambulance? And what if it means the ambulance can reach its destination more quickly, potentially saving a life? This is the example Dubljević provides to illustrate the ADC approach.
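
Under the same assumptions, the red-light example might be encoded as follows: the intent (clearing the way for an ambulance) is good, the deed (running the light) is bad, and the consequence (a collision avoided, a life potentially saved) is good, so the overall judgment comes out positive. This continues the hypothetical sketch above.

# Red-light example: good intent, bad deed, good consequence.
run_red_for_ambulance = Scenario(
    agent_intent_good=True,
    deed_good=False,
    consequence_good=True,
)
print(adc_judgment(run_red_for_ambulance))  # 1.0 -> leans morally acceptable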

“The ADC model would allow us to get closer to the flexibility and stability that we see in human moral judgment, but that does not yet exist in AI,” says Dubljević. “Here’s what I mean by stable and flexible. Human moral judgment is stable because most people would agree that lying is morally bad. But it’s flexible because most people would also agree that people who lied to Nazis in order to protect Jews were doing something morally good.”

Dubljević acknowledges that while the ADC model offers a potential route forward, more research is needed. “I have led experimental work on how both philosophers and lay people approach moral judgment, and the results were valuable. However, that work gave people information in writing. More studies of human moral judgment are needed that rely on more immediate means of communication, such as virtual reality, if we want to confirm our earlier findings and implement them in AVs,” he said.

He also believes that rigorous testing via driving simulation studies should be carried out before any so-called ‘ethical’ AVs regularly share the road with humans. “Vehicle terror attacks have, unfortunately, become more common, and we need to be sure that AV technology will not be misused for nefarious purposes,” he concludes.
