Will Louisiana’s New Laws for Self-Driving Delivery Devices Prevent Accidents?

Retail chains, grocery stores, and restaurants have been actively developing methods to deliver products faster. With inventions like self-driving cars and drones, it was only a matter of time before delivery services took advantage. But with any new technology comes safety concerns.

Republican Sen. Rick Ward sponsored a new bill to outline how driverless delivery devices are allowed to operate on Louisiana streets. Governor John Bel Edwards signed the bill, and it went into effect immediately. With these new laws in place, Louisiana lawmakers hope to keep motorists and pedestrians safe as they share the road with self-driving delivery devices.

Self-Driving Technology

Self-driving cars have been in the works since the 1920s. Carnegie Mellon University’s Navlab and ALV projects built a computer-controlled vehicle in 1984. Since then, autonomous technology has expanded to other devices to serve various markets.

Autonomous Cars

When most people think of self-driving technology, they think of autonomous cars. This type of self-driving machine can go long distances and carry larger cargo. Autonomous cars typically transport people while vans may transport smaller self-driving bots.

Surprisingly, the largest safety risk posed to autonomous cars is human unpredictability. The vehicles are programmed to obey strict safety guidelines along with the rules of the road. In theory, that should be adequate to ensure safety. In reality, these vehicles are bullied by human motorists who drive aggressively.

Drones

The first modern drone was developed in 1935. These small, unmanned aircraft transformed from military equipment into personal aerial cameras. In June 2021, Kroger became the latest company to utilize self-driving devices, launching its first drone flight to deliver groceries.

Airborne drones can fly over traffic jams or obstructions. This would allow them to make deliveries in rural areas that traditional delivery trucks cannot reach. Potential complications arise from privacy issues and Federal Aviation Administration (FAA) regulations.

Last-Mile Bots

Last-mile bots, also known as ground drones, are typically small robots that travel short distances. They may cross streets but otherwise tend to remain on sidewalks as they complete the last leg of a delivery’s journey. These robots are designed to navigate steep inclines, curbs, and unpaved surfaces.

The biggest limitation of last-mile bots is the size and weight of the deliveries they can carry. Severe weather may also pose challenges. Companies like DoorDash and Postmates successfully use last-mile bots to make multiple short deliveries that delivery drivers typically don’t want to accept.

Louisiana’s New Laws for Self-Driving Delivery Devices

As new technologies emerge, so do new laws to govern their usage. Under Louisiana’s Senate Bill 147, self-driving delivery devices must move at low speeds. They cannot exceed 20 miles per hour. They are limited to 12 miles per hour in pedestrian areas, which is roughly the speed of a person jogging.

These autonomous delivery robots must yield to pedestrians. They cannot obstruct the flow of traffic. They must also be equipped with lights on the front and rear.

The companies utilizing robot delivery must ensure each vehicle carries at least $100,000 in insurance coverage. Additionally, these devices are not permitted to transport hazardous materials.
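The bill's operating rules can be summarized as a simple checklist. The sketch below encodes the thresholds described above as a hypothetical compliance check; the field and function names are invented for illustration, not drawn from the bill itself.

```python
# Hypothetical compliance check mirroring the SB 147 rules described above.
# All identifiers are illustrative; the statute defines no such data model.
from dataclasses import dataclass

MAX_SPEED_MPH = 20             # overall speed cap
MAX_PEDESTRIAN_SPEED_MPH = 12  # cap in pedestrian areas
MIN_INSURANCE_USD = 100_000    # minimum required coverage

@dataclass
class DeliveryDevice:
    speed_mph: float
    in_pedestrian_area: bool
    has_front_light: bool
    has_rear_light: bool
    insurance_usd: int
    carries_hazmat: bool

def violations(device: DeliveryDevice) -> list[str]:
    """Return a list of rule violations for a device's current state."""
    issues = []
    limit = MAX_PEDESTRIAN_SPEED_MPH if device.in_pedestrian_area else MAX_SPEED_MPH
    if device.speed_mph > limit:
        issues.append(f"speed {device.speed_mph} mph exceeds {limit} mph limit")
    if not (device.has_front_light and device.has_rear_light):
        issues.append("missing required front/rear lighting")
    if device.insurance_usd < MIN_INSURANCE_USD:
        issues.append("insurance coverage below $100,000 minimum")
    if device.carries_hazmat:
        issues.append("hazardous materials are not permitted")
    return issues
```

A device doing 15 mph on a sidewalk, for instance, would be flagged for exceeding the 12 mph pedestrian-area limit even though it is under the overall 20 mph cap.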

Are Self-Driving Delivery Devices Safe?

Due to the high standards of robotics developers, driverless vehicles are generally safer than cars with human drivers. Safety is paramount, since according to a car accident lawyer in New Orleans, nearly 14% of Louisiana drivers don’t have auto insurance.

Louisiana’s new laws aim to prevent accidents involving both motorists and pedestrians. Multiple states have passed similar legislation to protect people sharing space with these vehicles. However, Louisiana’s bill permits governing officials and airport authorities to establish additional laws or ban self-driving delivery devices if they pose a danger to public safety in the future.

Who Is Liable When a Self-Driving Delivery Vehicle Causes an Accident?

At this time, no delivery vehicles that are 100% automated are in use, so there are no laws or regulations to determine who would be liable in an accident. However, if there were an accident involving a self-driving delivery vehicle and it could be proven that the vehicle’s operators were negligent, in theory, they would be legally liable.

There are several ways a company’s negligence could lead to an accident. For example, they could fail to maintain the vehicle or to perform critical software updates. Just as with any other type of vehicle on the road, self-driving delivery vehicles can and will get into accidents. When it happens, expect to see increased regulations and lawsuits.


© Laborde Earles Law Firm 2021

The Promise and Peril of Autonomous Vehicles

The possibility of self-driving cars on our roads is prompting both excitement and anxiety. Advocates point to the possibility of increased safety, lower pollution, even less congestion. Critics aren’t sold on many of the supposed advantages.

So, what happens when driverless vehicles start hitting our roads? As with so many innovations, there are likely to be pluses and minuses.

Let’s consider safety. The United States Department of Transportation estimates that roughly 95% of road accidents are caused by human mistakes: driving too fast for conditions, not paying attention to the road, or illegal maneuvers such as driving through red lights. Given human tendencies to get distracted, one would expect autonomous vehicles to be safer.

Autonomous vehicles are outfitted with sensors and cameras, which enable them to “see” their surroundings and react to traffic and pedestrians. Companies working on these vehicles have been testing them in simulated settings as well as on real roads. There is much to tout about their safety aspects: they’re not distracted like humans, they obey speed limits and traffic signs, they don’t drive fatigued.

But driving in traffic has turned out to be more challenging than expected, and a few well-publicized accidents – one involving a Tesla and one an Uber vehicle that killed a pedestrian – have prompted concerns that self-driving technology is not ready for prime time. In particular, sensors and cameras may not be able to react in real time to cope with humans who behave like, well, humans.

More choices or fewer?

“We’re moving to a future where people don’t own cars,” says Dr. Daniel Sperling, director of the Institute of Transportation Studies at the University of California, Davis. “You’ll have a subscription service, maybe, that emphasizes smaller vehicles, or you might want a cheaper service where it’s a van,” he adds.

Dr. Alain Kornhauser, director of the program in transportation at Princeton University, agrees to a point, saying privately owned cars are not likely to vanish completely — especially in rural areas, where getting a driverless taxi may be more challenging. Still, he says, the number of people who own cars — and the number of cars owned per family — will drop sharply.

In many cities with ridesharing services like Lyft or Uber, owning a vehicle has become less urgent. Autonomous vehicles could multiply ridesharing options.

But what if you’re in a rural area without these services? Should rural communities consider investing in self-driving vehicles as a form of public transport? What if you’re in a major city but can’t afford to either own an autonomous vehicle or even subscribe to the service?

There’s also the question of what happens to public transport as self-driving vehicles increase. Will we continue to support and improve the infrastructure for public transportation?

Public transport systems in the U.S. are not as robust as in some European nations. One of the main concerns for Seleta Reynolds, General Manager of the Los Angeles Department of Transportation, is managing access for people in different parts of the city, because she understands how much access can impact one’s financial well-being.

“You can get to about 12 times as many jobs in an hour in a car as you can by transit in L.A.,” she said.

If autonomous vehicles end up reducing access, the financial and social impact could ripple across communities.

Then there is the question of what autonomous vehicles will do to people who drive for a living. According to the U.S. Bureau of Labor Statistics, more than 2.5 million people earn their living from driving – employed as tractor-trailer truck drivers, taxi and delivery drivers, and as bus drivers. If those jobs disappear, that could represent a potential loss of employment equal to what we saw during the Great Recession of 2008.

Many of the people driving vehicles for a living are classified as low-skilled workers. It will be difficult for such unemployed workers to quickly find new work, and the cost of re-training them could be high.

Autonomous vehicles have the potential to spur a massive and exciting paradigm shift. But there are darker clouds on the horizon too. The question is: will we be able to manage the changes wrought by self-driving vehicles in a positive way?


Copyright © 2020 Godfrey & Kahn S.C.

For more on autonomous vehicle developments, see the National Law Review Utilities & Transport Law section.

Not So Fast And Furious – Executive Indicted for Stealing Self-Driving Car Trade Secrets

Back in March 2017, we posted about a civil lawsuit against Anthony Levandowski, who allegedly sped off with a trove of trade secrets after resigning from Waymo LLC, Google’s self-driving technology company. Waymo not only sued Levandowski, but also his new employer, Uber, and another co-conspirator, Lior Ron. Since our initial post, things have gotten progressively worse for the Not So Fast and Furious trio: (1) Levandowski was fired in May 2017; (2) Uber settled, giving up 5% of its stock, worth a total of $245 million; and (3) the case against Levandowski and Ron was sent to arbitration, where the arbitration panel reportedly issued a $128 million interim award to Waymo.

Just when things couldn’t seem to get any worse, they did.

On August 15, 2019, a federal grand jury indicted Levandowski on 33 counts relating to trade secret theft. Levandowski has pled not guilty, has been released on $2 million bail, and is currently wearing an ankle monitor.

This legal saga is a reminder that trade secret theft is serious: it has not only civil consequences, but criminal ones as well. Unfortunately, trade secret theft happens every day. And regardless of whether your company has trade secrets regarding self-driving car technology worth hundreds of millions of dollars or customer information worth less than a hundred thousand dollars, it’s important to make sure your company’s information is protected.

Equally important is knowing how to investigate potential trade secret theft. Some helpful tips as you launch your investigation:

1. Secure and preserve all relevant computing devices and email/file-sharing accounts.

2. Consider enlisting the help of outside computer forensic experts.

3. Analyze the employee’s computing activities on company computers and accounts.

4. Determine whether there is any abnormal file access, including during non-business hours.

5. Examine the employee’s use of external storage devices and whether those devices have been returned.

6. Review text message and call history from the employee’s company-issued cell phone (and never instruct anyone to factory reset cell phones).

7. Enlist the help of outside counsel to set the parameters of the investigation.
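As a rough illustration of step 4, an investigator might filter a file-access log for activity outside normal business hours. This sketch assumes a simple in-memory log of ISO timestamps; real investigations rely on endpoint or file-server audit logs, dedicated forensic tooling, and chain-of-custody procedures.

```python
# Illustrative off-hours access filter; log format and names are invented.
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # 8:00 a.m. through 5:59 p.m. local time

def off_hours_access(log, user):
    """Return (timestamp, path) entries where `user` accessed files off-hours.

    `log` is an iterable of (ISO-8601 timestamp, username, file path) tuples.
    """
    flagged = []
    for timestamp, username, path in log:
        when = datetime.fromisoformat(timestamp)
        if username == user and when.hour not in BUSINESS_HOURS:
            flagged.append((timestamp, path))
    return flagged
```

A 2:15 a.m. download of design files, for example, would be flagged while a mid-afternoon access of the same files would not — exactly the kind of anomaly step 4 asks you to look for.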


© 2019 Jones Walker LLP
For more on trade secret law, see the National Law Review Intellectual Property law page.

Who Benefits from Self-Driving Cars?

Everyone will benefit from self-driving cars, but to varying degrees. Society, from a safety standpoint, benefits from eliminating some or all of the 34,247 motor vehicle fatalities per year. The elderly and disabled can benefit by regaining independence. Commuters can benefit by turning their dreaded drive to work into a relaxing or productive session they look forward to. But what about car manufacturers?

Car manufacturers may potentially benefit the most from self-driving cars. Assuming that they develop safe, fully autonomous robotaxis, a car manufacturer may be able to operate a car as a robotaxi and potentially generate ten times the vehicle’s sale price over its service life. But before this can happen, a company has to produce a fully self-driving vehicle at a reasonable cost. From a hardware standpoint, a key challenge is sensor technology. Lidar, a critical sensor for autonomous cars that can bridge the deficiencies in today’s camera and radar systems, is a significant hurdle due to its cost (e.g., up to $75,000), size, and complexity.
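The "ten times the sale price" claim can be made concrete with back-of-the-envelope arithmetic. Every figure below is an illustrative assumption, not a number from the article, but it shows how modest per-mile fares compound over a high-utilization service life.

```python
# Illustrative robotaxi economics; all inputs are assumed, not sourced.
SALE_PRICE = 40_000      # assumed vehicle sale price, USD
FARE_PER_MILE = 1.00     # assumed net fare retained per mile, USD
MILES_PER_YEAR = 80_000  # assumed annual robotaxi utilization
SERVICE_YEARS = 5        # assumed service life

lifetime_revenue = FARE_PER_MILE * MILES_PER_YEAR * SERVICE_YEARS
multiple = lifetime_revenue / SALE_PRICE
print(f"Lifetime revenue: ${lifetime_revenue:,.0f} ({multiple:.0f}x sale price)")
```

Under these assumptions a $40,000 vehicle nets $400,000 over five years, a tenfold multiple; the point is sensitivity, since halving utilization or fares halves the multiple.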

Therefore, it comes as no surprise that lidar companies are benefiting from large investments and partnerships this year to develop advanced lidar solutions. For example, Sense Photonics recently emerged from stealth mode and made headlines with a $26 million round, advertising a whole new approach that allows for an ultra-wide field of view and flexible installation. Sense Photonics claims it has a “flash” lidar that can illuminate the entire scene with one giant flash, as opposed to the scanning or sweeping systems employed by early popular lidars, such as those from Velodyne. Luminar recently announced a new lidar sensor that weighs less than 2 pounds, is the size of a soda can, and will cost as little as $500. Another upstart, Lumotive, announced that it has a solid-state sensor with metamaterial (a non-naturally occurring material that can have a negative refractive index) containing tiny tunable components that slow down parts of the laser beam in order to steer it. Steering a laser beam in this manner, according to Lumotive, may eliminate the need for mechanically moving parts. Yet another lidar company, Quanergy, touts a fully solid-state, automotive-grade lidar based on optical phased arrays with no moving parts on any scale, while offering an unparalleled level of quality and reliability.

While the timeline is uncertain, it is likely that self-driving cars will be safer than human drivers, and that auto manufacturers and technology suppliers will find opportunities to increase profits. However, this will likely bring certain disadvantages. Some are obvious, such as the loss of transportation-related jobs due to automation, but there may be other, less obvious ones. If a car manufacturer can make more money by keeping its car, why would it sell it to consumers? Elon Musk thinks that is the case, and part of his “Master Plan” is to enable self-driving hardware to operate as autonomous robotaxis that generate revenue for Tesla itself. While autonomous robotaxis may have many benefits, the inability to buy a reasonably priced car because it is more profitable in the hands of the manufacturer does not benefit the car shopper!


© 2019 Foley & Lardner LLP

For more on self-driving cars, see the Utilities & Transport law page on the National Law Review.

Drive.ai Introduces External Communication Panels to Talk to Public

Self-driving cars are inherently presented with a challenge—communicating with their surroundings. However, Drive.ai has attempted to address that challenge by equipping its self-driving cars with external communication panels that convey a variety of messages for drivers, pedestrians, cyclists, and everyone else on the road. Drive.ai CEO Bijit Halder said, “Our external communication panels are intended to mimic what an interaction with a human driver would look like. Normally you’d make eye contact, wave someone along, or otherwise signal your intentions. With [these] panels everyone on the road is kept in the loop of the car’s intentions, ensuring maximum comfort and trust, even for people interacting with a self-driving car for the first time.” To help the company build its platform, one of the company’s founders recorded herself driving around normally and analyzed all the interactions she had with other drivers, including eye contact and hand motions.

Specifically, the panel uses lidar sensors, cameras, and radar to determine whether any pedestrians are in or near a crosswalk as the vehicle approaches. If the vehicle detects a pedestrian in its path, the car begins to slow down and displays the message “Stopping for you.” Once the vehicle comes to a complete stop, it displays the message “Waiting for you.” When no more pedestrians are detected, the vehicle displays the message “Going now, please wait” to signal other pedestrians to wait before crossing.
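The messaging sequence described is essentially a small state machine driven by two observations: whether a pedestrian is detected and whether the car has fully stopped. The sketch below is a minimal reconstruction of that logic; the messages come from the article, but the transition function itself is an assumption, since Drive.ai has not published its implementation.

```python
# Minimal sketch of the crosswalk messaging sequence described above.
# Message strings follow the article; the decision logic is invented.
def panel_message(pedestrian_detected: bool, fully_stopped: bool) -> str:
    """Choose the external panel message from the car's crosswalk state."""
    if pedestrian_detected and not fully_stopped:
        return "Stopping for you"    # slowing as a pedestrian is detected
    if pedestrian_detected and fully_stopped:
        return "Waiting for you"     # holding at the crosswalk
    return "Going now, please wait"  # no pedestrians detected; resuming
```

Modeling the display this way keeps the message a pure function of observable state, so the panel can never show a message that contradicts what the vehicle is actually doing.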

Drive.ai continues to conduct research to determine the best means of communication including the best location for such communications, which is currently right above the wheels based on its previous studies. Halder said, “The more you can effectively communicate how a self-driving car will act, the more confidence the public will have in the technology, and that trust will lead to adoption on a broader scale.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on vehicle technology advances, see the Utilities & Transport page on the National Law Review.

Steering Wheels Become Increasingly Optional

Florida is the latest state to allow vehicles to operate on the road without a steering wheel. In doing so, Florida became the third state, after Michigan and Texas, to allow vehicles on its roads without a human even having the ability to drive them. The legislation signed into law includes:

The bill authorizes operation of a fully autonomous vehicle on Florida roads regardless of whether a human operator is physically present in the vehicle. Under the bill, a licensed human operator is not required to operate a fully autonomous vehicle. The bill authorizes an autonomous vehicle or a fully autonomous vehicle equipped with a teleoperation system to operate without a human operator physically present in the vehicle when the teleoperation system is engaged. A remote human operator must be physically present in the United States and be licensed to operate a motor vehicle by a United States jurisdiction.

Florida is sure to become a hotbed of autonomous vehicle testing with this new law. Starsky Robotics is one of the companies expected to take advantage by putting driverless vehicles on the road in 2020. These would not be just any vehicles, but big rig trucks. These trucks have already hit 55 mph without a driver or crew. Most predict that this is just the beginning, with as many as eight million autonomous vehicles expected on the road by 2025 and 30 million by 2030. Of course, the devil is in some of the details. There are six levels of vehicle automation (levels 0 through 5), with level 5, Full Automation, being the highest.

Not everyone agrees that this is all happening so quickly. As the New York Times noted, “A growing consensus holds that driver-free transport will begin with a trickle, not a flood.” Of course, this makes sense. Outside of people with a vested interest (we are looking at you, Mr. Musk), few seem to truly believe that millions of level 5, completely driverless vehicles will be on the road.

But this does not mean that they will not make an impact. While vehicles may not be navigating complex systems in dense areas next year, they are likely to find plenty of uses. Gated communities with known road structures and limited traffic might be a good location for the first generation of fully autonomous vehicles. And think of the myriad shuttles at various locations that run the same route, over and over, day after day. That seems like a good use for a fully autonomous vehicle powered by something other than gasoline. How about college campuses, with autonomous vehicles running all day and all night, providing safe passage for vulnerable students at all hours? Suffice it to say, the day when people can wake up, get into a fully autonomous vehicle, and go to sleep while it takes them to work is perhaps not something the current workforce will enjoy (except, apparently, for the occasional Tesla rider taped sleeping behind the wheel).

But whatever generation comes after Generation Z is unlikely to know a driving experience like what exists today, if there is any driving at all. Will they even drive, or will they fly in autonomous flying cars? Project Vahana aims to offer just that. In its own words: “Project Vahana intends to open up urban airways by developing the first certified electric, self-piloted vertical take-off and landing (VTOL) passenger aircraft.” Getting to work will never be easier. Unless, of course, all this transportation runs into the fact that everyone works remotely.

 

© 2019 Foley & Lardner LLP
For more on Vehicle Legislation see the National Law Review page on Utilities & Transport.

Self-Driving Ride Sharing Cars on Road in 2016 (!)

You know how flying cars, jet packs and things like that always seem way off into the future? Many feel the same about self-driving cars being on the road in the United States. However, Volvo and Uber are out to prove them wrong. As Automotive News reported, the existing vehicles plus some Uber modifications “will enable the seven-seat SUV to drive itself…” Wow!

Pittsburgh gets to be the guinea pig for this project. Who had Pittsburgh high on its list of locations for self-driving cars to premiere? Volvo developed its self-driving hardware and software at its Pittsburgh tech center, so the location makes some sense. As reported all over, Uber will put two employees in the front seats when the vehicles debut. Volvo will provide tech support. These vehicles reportedly could be on the road in a matter of weeks – by the time you read this they could be rolling out.

Of course, “self-driving” with two employees in the front seat is not quite autonomous. Volvo is working toward a new version of its XC90 to enable level 4 autonomy, which still requires a driver in the driver’s seat. Consequently, while we are not quite at the moment of having the “Johnny Cab” found in Total Recall, before the end of the year, the automotive industry looks to be one large step closer.

© 2016 Foley & Lardner LLP

Introducing the New SmartExpert: Self-driving Car "Drivers"

The National Highway Traffic Safety Administration has deemed the artificial intelligence that controls Google’s self-driving car a qualified “driver” under federal regulations. So, if a computer can drive, must we have a computer testify as to whether this new “driver” was negligent? It sounds laughable: “Do you, computer, swear to tell the truth?” But, with so many new potential avenues of litigation opening up as a result of “machines at the wheel,” it made us wonder how smart the new expert will have to be?

With its heart beating in Silicon Valley and its position well-established as a proponent of computer invention and progress, it was surprising when California became the first state to suggest we need a human looking over the computer’s shoulder. That is essentially what the California Department of Motor Vehicles’ draft regulations for self-driving vehicles propose – that self-driving cars have a specially licensed driver prepared to take the wheel at all times. After years spent developing and testing self-driving cars in its hometown of Mountain View, California, Google may now be looking elsewhere for testing and production. The rule proposed by the California DMV would make Google’s car impossible in the state. Why? Because humans cannot drive the Google self-driving car. It has no steering wheel and no pedals. The Google car could not let a human take over the wheel. Does that thought make you pause?

It apparently didn’t give the National Highway Traffic Safety Administration any cause for concern, as the agency approved Google’s self-driving software, finding the artificial intelligence program could be considered a bona fide “driver” under federal regulations. In essence, Google’s driving and you are simply a passenger. If you would hesitate to get in, Google’s Chris Urmson, lead engineer on the self-driving car program, explains: “We need to be careful about the assumption that having a person behind the wheel will make the technology safer.” Urmson is basically saying computers are safer than humans. When you think about the number of automobile accident-related deaths in the United States alone, he may be right. If he is right, wouldn’t artificial intelligences sophisticated enough to drive a car more safely than humans be able to learn to do other things better as well? Couldn’t they drive a forklift, perform surgery on humans, manage a billion-dollar hedge fund? If that is where things are heading, who will testify as to the applicable standards of behavior for these machines? In the hedge fund example, will it be a former hedge fund manager with years of experience handling large, bundled securities, or a software developer with years of experience programming artificial intelligence?

Who do you think will be able to testify in cases where an artificially-intelligent machine plays a role? Liability at the hands of a machine is bound to emerge. Someone will have to speak to the standard of judgment, discretion, and care applicable to machines. Maybe Google will be allowed to text while driving. Who’s to say?

© Copyright 2002-2016 IMS ExpertServices, All Rights Reserved.
