Navigating the Data Privacy Landscape for Autonomous and Connected Vehicles: Implementing Effective Data Security

Autonomous vehicles can be vulnerable to cyber attacks. Identifying an appropriate framework of policies and procedures will help mitigate the risk of a potential attack.

The National Highway Traffic Safety Administration (NHTSA) recommends a layered approach to reduce the likelihood of an attack’s success and to mitigate the ramifications if one does occur. NHTSA’s approach builds on the NIST Cybersecurity Framework, which is structured around the five core functions of identify, protect, detect, respond, and recover, and can be used as a basis for developing comprehensive data security policies.

NHTSA goes on to describe how this approach “at the vehicle level” includes:

  • Protective/Preventive Measures and Techniques: These measures, such as isolation of safety-critical control systems networks or encryption, implement hardware and software solutions that lower the likelihood of a successful hack and diminish the potential impact of a successful hack.
  • Real-time Intrusion (Hacking) Detection Measures: These measures continually monitor signatures of potential intrusions in the electronic system architecture.
  • Real-time Response Methods: These measures mitigate the potential adverse effects of a successful hack, preserving the driver’s ability to control the vehicle.
  • Assessment of Solutions: This [analysis] involves methods such as information sharing and analysis of a hack by affected parties, development of a fix, and dissemination of the fix to all relevant stakeholders (such as through an ISAC). This layer ensures that once a potential vulnerability or a hacking technique is identified, information about the issue and potential solutions are quickly shared with other stakeholders.
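As an illustration only (this is not part of NHTSA’s guidance), the real-time intrusion detection layer described above can be sketched as a simple monitor over CAN bus traffic that applies two common heuristics: a whitelist of expected arbitration IDs and a per-ID rate limit, since message-injection attacks often manifest as unknown IDs or abnormally frequent frames. All class and parameter names here are hypothetical:

```python
from collections import defaultdict, deque

class CanIntrusionDetector:
    """Toy signature/anomaly detector for CAN frames.

    Two checks inspired by the 'real-time intrusion detection' layer:
    (1) reject frames whose arbitration ID is not on the whitelist,
    (2) flag IDs transmitting faster than an expected rate, a common
        symptom of message-injection attacks.
    """

    def __init__(self, allowed_ids, max_per_second):
        self.allowed_ids = set(allowed_ids)
        self.max_per_second = max_per_second   # per-ID frames allowed in any 1 s window
        self.history = defaultdict(deque)      # can_id -> recent frame timestamps

    def check(self, can_id, timestamp):
        """Return None if the frame looks benign, else an alert string."""
        if can_id not in self.allowed_ids:
            return f"unknown arbitration ID 0x{can_id:X}"
        window = self.history[can_id]
        window.append(timestamp)
        # Drop timestamps older than one second to keep a sliding window.
        while window and timestamp - window[0] > 1.0:
            window.popleft()
        if len(window) > self.max_per_second:
            return f"rate anomaly on ID 0x{can_id:X}: {len(window)} frames/s"
        return None

# Hypothetical usage: only IDs 0x100 and 0x200 are expected on this bus.
detector = CanIntrusionDetector({0x100, 0x200}, max_per_second=10)
alert = detector.check(0x7FF, timestamp=0.0)   # unexpected ID -> alert string
```

A production system would, of course, derive the whitelist and rate profiles from the vehicle’s actual network specification and feed alerts into the response layer; this sketch only shows the shape of the detection logic.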

Other industry associations are also weighing in on best practices, including the Automotive Information Sharing and Analysis Center’s (Auto-ISAC) seven Key Cybersecurity Functions and, from a technology development perspective, SAE International’s J3061, a Cybersecurity Guidebook for Cyber-Physical Vehicle Systems to help AV companies “[minimize] the exploitation of vulnerabilities that can lead to losses, such as financial, operational, privacy, and safety.”

© 2022 Varnum LLP

Trump Administration Issues New Guidance for Automated Driving Systems

The National Highway Traffic Safety Administration (NHTSA) announced yesterday the Trump administration’s first significant guidance concerning autonomous vehicles and Automated Driving Systems (ADS).

The new voluntary guidelines, titled Automated Driving Systems: A Vision for Safety, are intended to encourage innovation in the industry and are being touted as the administration’s “new, non-regulatory approach to promoting the safe testing and development of automated vehicles.” One of the most important aspects of these guidelines is the NHTSA’s clarification of its view of the delineation between the roles of the states and the federal government with respect to ADS technology.

The new guidelines replace the Federal Automated Vehicle Policy (FAVP), which was released by the Obama administration in 2016. A Vision for Safety comprises voluntary guidance for vehicle manufacturers, best practices for state legislatures when drafting ADS legislation, and a request for further comment.

Autonomous-vehicle manufacturers are asked to undertake a voluntary self-assessment addressing 12 safety elements discussed in the new guidance. That is a slight departure from the FAVP, which detailed a 15-point safety assessment. The safety self-assessment remains voluntary, and NHTSA emphasizes that there is no mechanism to compel manufacturers to participate. The agency also stated that the testing or deployment of new ADS technologies need not be delayed to complete a self-assessment.

In what may be the most significant component of the guidance, NHTSA made clear its role as the primary regulator of ADS technology by “strongly encourag[ing] States not to codify th[e] Voluntary Guidance . . . as a legal requirement for any phases of development, testing, or deployment of ADSs.”

Further acknowledging the potential problems associated with a patchwork of state laws, the agency expressed its belief that “[a]llowing NHTSA alone to regulate the safety design and performance aspects of ADS technology will help avoid conflicting Federal and State laws and regulations that could impede deployment.” States are instead tasked by A Vision for Safety with regulating licensing of human drivers, motor vehicle registration, traffic laws, safety inspections, and insurance.

The new guidance comes just one week after the House of Representatives passed the SELF-DRIVE Act designed to eliminate legal obstacles that could interfere with the deployment of autonomous vehicles. However, as NHTSA and Congress are seeking to speed up ADS development by removing regulatory and legal impediments, it is noteworthy that on the same day NHTSA announced A Vision for Safety, the National Transportation Safety Board (NTSB) called for NHTSA to require automakers to install “system safeguards to limit the use of automated vehicle systems to those conditions for which they were designed.”

In an abstract of its forthcoming final report on the 2016 fatal crash involving a Tesla Model S operating in semi-autonomous mode, the NTSB concluded that “operational limitations” in the Tesla’s system played a major role in the fatal crash and that the vehicle’s semi-autonomous system lacked the safeguards necessary to ensure that the system was not misused. These recent developments only underscore the uncertainty facing the industry as regulators attempt to keep pace with fast-developing technology.

This post was written by Neal Walters and Casey G. Watkins of Ballard Spahr LLP. Copyright © Ballard Spahr LLP.
For more legal analysis go to The National Law Review

First Reported Tesla Autopilot Fatality in Central Florida

In a recent blog post, Tesla revealed that the Model S involved in a fatal accident on May 7 in Williston, Florida was in Autopilot mode at the time of the collision. This marks the first known fatality in a Tesla vehicle in which Autopilot was active. The National Highway Traffic Safety Administration is currently investigating the cause of the collision, including whether the Autopilot system was working properly at the time of the accident.

According to various news sources, the accident occurred when a tractor trailer drove across a divided highway and in front of the Tesla vehicle. Due to the height of the trailer, the Model S passed under the trailer, with the initial impact occurring to the vehicle’s windshield. Tesla CEO Elon Musk stated on Twitter that the radar system used by the Autopilot feature did not help in this case because of the height of the trailer. According to Musk, the system “tunes out what looks like an overhead road sign to avoid false braking events.” Tesla believes that the Autopilot system would have prevented the accident if the impact had occurred to the front or rear of the trailer.

This accident represents the first of what will undoubtedly be many similar accidents raising questions about the safety of Autopilot systems. Tesla is one of the first automakers to deploy such technology, and it has reiterated that customers must sign an agreement acknowledging that the system is in a “public beta phase” before they can use it. Some driving experts have criticized Tesla for introducing the Autopilot feature too early, believing that the system gives drivers the false impression that the car can handle anything it encounters. By way of contrast, GM has tested its comparable feature only privately, and Volvo has indicated that it intends to take full liability for its cars when its autonomous feature is activated.

ARTICLE BY Ian S. Abovitz of Stark & Stark

COPYRIGHT © 2016, STARK & STARK