Introducing the New SmartExpert: Self-driving Car "Drivers"

The National Highway Traffic Safety Administration has deemed the artificial intelligence that controls Google’s self-driving car a qualified “driver” under federal regulations. So, if a computer can drive, must a computer testify as to whether this new “driver” was negligent? It sounds laughable: “Do you, computer, swear to tell the truth?” But with so many new potential avenues of litigation opening up as a result of “machines at the wheel,” we found ourselves wondering: how smart will the new expert have to be?

With its heart beating in Silicon Valley and its reputation well established as a champion of computing invention and progress, California was a surprising candidate to be the first state to insist on a human looking over the computer’s shoulder. Yet that is essentially what the California Department of Motor Vehicles’ draft regulations for self-driving vehicles propose: that self-driving cars carry a specially licensed driver prepared to take the wheel at all times. After years spent developing and testing self-driving cars in its hometown of Mountain View, California, Google may now be looking elsewhere for testing and production, because the proposed DMV rule would effectively bar Google’s car from the state. Why? Because humans cannot drive the Google self-driving car. It has no steering wheel and no pedals, so there is no wheel for a human to take over. Does that thought make you pause?

It apparently didn’t give the National Highway Traffic Safety Administration any cause for concern: the agency approved Google’s self-driving software, finding that the artificial intelligence program could be considered a bona fide “driver” under federal regulations. In essence, Google’s driving and you are simply a passenger. If you would hesitate to get in, consider the caution from Chris Urmson, lead engineer on Google’s self-driving car program: “We need to be careful about the assumption that having a person behind the wheel will make the technology safer.” Urmson is essentially saying that computers are safer drivers than humans. Given the number of automobile accident-related deaths in the United States alone, he may be right. And if he is right, wouldn’t an artificial intelligence sophisticated enough to drive a car more safely than a human be able to learn to do other things better as well? Couldn’t it drive a forklift, perform surgery on humans, manage a billion-dollar hedge fund? If that is where things are heading, who will testify as to the applicable standards of behavior for these machines? In the hedge fund example, will it be a former hedge fund manager with years of experience handling large, bundled securities, or a software developer with years of experience programming artificial intelligence?

Who do you think will be able to testify in cases where an artificially intelligent machine plays a role? Liability at the hands of a machine is bound to emerge, and someone will have to speak to the standard of judgment, discretion, and care applicable to machines. Maybe Google will be allowed to text while driving. Who’s to say?
