Offered Free Cyber Services? You May Not Need to Look That Gift Horse in the Mouth Any Longer.

Cyberattacks continue to plague health care entities. In an effort to promote improved cybersecurity and prevent those attacks, HHS has proposed new rules under Stark and the Anti-Kickback Statute (“AKS”) to protect in-kind donations of cybersecurity technology and related services from hospitals to physician groups. There is already an EHR exception1 that protects certain donations of software, information technology and training associated with (and closely related to) an EHR, and HHS is now clarifying that this existing exception has always been available to protect certain cybersecurity software and services. However, the new proposed rule explicitly addresses cybersecurity and is designed to be more permissive than the existing EHR protection.

The proposed exception under Stark and safe harbor under AKS are substantially similar and, unless noted, the following analysis applies to both. The proposed rules allow for the donation of cybersecurity technology such as malware prevention and encryption software. The donation of hardware is not currently contemplated, but HHS is soliciting comment on this matter as discussed below. The proposed rules also allow for the donation of cybersecurity services that are necessary to implement and maintain cybersecurity of the recipient’s systems. Such services could include:

  • Services associated with developing, installing, and updating cybersecurity software;

  • Cybersecurity training, including breach response, troubleshooting and general “help desk” services;

  • Business continuity and data recovery services;

  • “Cybersecurity as a service” models that rely on a third-party service provider to manage, monitor, or operate cybersecurity of a recipient;

  • Services associated with performing a cybersecurity risk assessment or analysis, vulnerability analysis, or penetration test; or

  • Services associated with sharing information about known cyber threats, and assisting recipients responding to threats or attacks on their systems.

The intent of these rules is to allow the donation of this cybersecurity technology and these services in order to encourage their proliferation throughout the health care community, especially among providers who may not be able to afford to undertake such efforts on their own. Therefore, these rules are expressly intended to be less restrictive than the existing EHR exception and safe harbor. The proposed restrictions are as follows2:

  • The donation must be necessary to implement, maintain, or reestablish cybersecurity;

  • The donor cannot condition the donations on the making of referrals by the recipient, and the making of referrals by the recipient cannot be conditioned on receiving a donation; and

  • The donation arrangement must be documented in writing.

AKS has an additional requirement that the donor must not shift the costs of any technology or services to a Federal health care program. Currently, there are no “deeming provisions” within these proposed rules for the purpose of meeting the necessity requirement, but HHS is considering, and seeking comment on, whether to add deeming provisions that would essentially designate certain arrangements as acceptable. Some in the industry appreciate the safety of knowing what is expressly considered acceptable, while others find this approach more restrictive, fearing that such a list would come to be considered exhaustive.

HHS is also considering adding a restriction on which types of entities are eligible to receive donations. In other rules, HHS has previously distinguished between entities with direct and primary patient care relationships, such as hospitals and physician practices, and suppliers of ancillary services, such as laboratories and device manufacturers.

Additionally, HHS is soliciting comment on whether to allow the donation of cybersecurity hardware to entities whose systems a risk assessment identifies as posing a risk to the donor’s cybersecurity. Under this potential rule, the recipient would also need a risk assessment determining that the hardware would reasonably address a threat.


1 AKS Safe Harbor 42 CFR §1001.952(y); Stark Exception 42 CFR §411.357(w)
2 AKS Safe Harbor 42 CFR §1001.952(jj); Stark Exception 42 CFR §411.357(bb)


©2020 von Briesen & Roper, s.c.

More on cybersecurity software donation regulation on the National Law Review Communications, Media & Internet law page.

Venmo’ Money: Another Front Opens in the Data Wars

When I see stories about continuing data spats between banks, fintechs and other players in the payments ecosystem, I tend to muse about how the more things change, the more they stay the same. And so it is with this story about a bank, PNC, shutting off the flow of customer financial data to a fintech – in this case, the Millennial’s best friend, Venmo. JPMorgan Chase recently made an announcement dealing with similar issues.

Venmo has to use PNC’s customers’ data in order to allow (for example) Squi to pay P.J. for his share of the brews. Venmo needs that financial data for its system to work. But Venmo isn’t the only one with a mobile payments solution; the banks have their own competing platform called Zelle. If you bank with one of the major banks, chances are good that Zelle is already baked into your mobile banking app. And unlike Venmo, Zelle doesn’t need anyone’s permission but its own customers’ to use those data.

You can probably guess the rest.  PNC recently invoked security concerns to largely shut off the data faucet and “poof”, Venmo promptly went dark for PNC customers.  To its aggrieved erstwhile Venmo-loving customers, PNC offered a solution: Zelle.  PNC subtly hinted that its security enhancements were too much for Venmo to handle, the subtext being that PNC customers might be safer using Zelle.

Access to customer data has until now been a formidable barrier to entry for fintechs and others whose efforts to make the customer payment experience “frictionless” depend in large measure on others being willing to do the heavy lifting for them. The author of the Venmo article suggests that pressure from customers may force banks to yield whatever strategic advantage control of customer data gives them. So far, however, consumer adoption of mobile payments is still minuscule in the grand scheme of things, so that pressure may not be felt for a very long time, if ever.

In the European Union, regulators have implemented PSD2, which forces a more open playing field for banking customers. But realistically, it can’t be surprising that the major financial institutions don’t want to open up their customer bases to competitors and get nothing in return – except a potential stampede of customers moving their money. And some of these fintech apps haven’t jumped through the numerous hoops required to become a bank holding company or federally insured – meaning unwitting consumers may have less fraud protection when they move their precious money to a cool-looking fintech app.

A recent study by the Pew Trusts makes it clear that consumers are still not fully embracing mobile payments, for any number of reasons. The prime reason is that current mobile payment options still rely on the same payments ecosystem as credit and debit cards, yet offer less consumer protection. As long as that is the case, banks, fintechs and merchants will continue to fight over data, and the regulators are likely to weigh in at some point.

It is not unlike the early days of mobile phones, when one couldn’t change providers without getting a new phone number – a handcuff that kept customers with a provider for years but has since gone by the wayside. It is likely we will see some sort of similar solution for banking details.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on fintech & banking data, see the National Law Review Financial Institutions & Banking law page.

Escalated Tension with Iran Heightens Cybersecurity Threat Despite Military De-Escalation

The recent conflict between the United States and Iran has heightened America’s long-standing concern about an imminent, potentially lethal Iranian cyber-attack on critical infrastructure in America. Below is the latest information, including the United States Government’s analysis of the current standing of these threats as of January 8, 2020.

CISA Alert

The U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issued Alert (AA20-006A) in light of “Iran’s historic use of cyber offensive activities to retaliate against perceived harm.”  In general, CISA’s Alert recommends two courses of action in the face of potential threats from Iranian actors: vulnerability mitigation and incident preparation.  The Alert specifically instructs organizations to increase awareness and vigilance, confirm reporting processes and exercise organizational response plans to prepare for a potential cyber incident.  CISA also suggests ensuring facilities are appropriately staffed with well-trained security personnel who are familiar with the tactics of Iranian cyber-attacks.  Lastly, CISA recommends disabling unnecessary computer ports, monitoring network and email traffic, patching externally facing equipment, and ensuring that backups are up to date; a quick audit of open ports, sketched below, is one concrete starting point.
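
As a minimal, hedged illustration of that last recommendation, the sketch below checks whether a handful of common TCP ports are open on the local machine. The host and port list are assumptions chosen for illustration; a real audit should rely on dedicated tooling (nmap, for instance) and should only ever run against systems you are authorized to scan.

```python
# Minimal local port audit, in the spirit of CISA's advice to disable
# unnecessary ports. The host and port list are illustrative assumptions.
import socket

HOST = "127.0.0.1"  # audit the local machine only
COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389]

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

for port in COMMON_PORTS:
    if is_open(HOST, port):
        print(f"Port {port} is open -- confirm it is actually needed")
```

Any port that shows up as open and cannot be tied to a needed service is a candidate for CISA’s “disable unnecessary ports” advice.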

Iranian Threat Profile

CISA asserts that Iranian cyber actors are continually improving their offensive cyber capabilities and are increasingly willing to engage in destructive, kinetic, and even lethal cyber-attacks.  In the recent past, such threats have included disruptive cyber operations against strategic targets, including energy and telecommunications organizations, as well as an increased interest in industrial control systems (such as SCADA) and operational technology (OT).  Refer to CISA’s Alert and the Agency’s “Increased Geopolitical Tensions and Threats” publication for specific Iranian advanced persistent threats to the nation’s cybersecurity.

Imminence of an Iranian Cyber-attack

While CISA urges vigilance and heightened prudence as it pertains to cybersecurity, DHS has been clear that there is “no information indicating a specific, credible threat to the Homeland.”  Nevertheless, the same National Terrorism Advisory System Bulletin (dated January 4, 2020) warns that Iran maintains a robust cyber program that can carry out attacks with varying degrees of disruption against U.S. critical infrastructure. The bulletin further states that “an attack in the homeland may come with little to no warning.”  There is also a concern that homegrown violent extremists could capitalize on the heightened tensions to launch individual attacks.  Given the ongoing tension, the threat of an Iranian cyber-attack is unlikely to dissipate in the near term.

Implications

It is vital for businesses, especially those deemed critical infrastructure, to stay apprised of developments on these matters.  Given that the Alert calls for organizations to take heightened preventative measures, it is imperative that critical infrastructure entities revisit their cybersecurity protocols and practices and adjust them accordingly.  A deeper understanding of organizational vulnerabilities in relation to this particular threat will be essential.


© 2020 Van Ness Feldman LLP

For more on cybersecurity, see the Communications, Media & Internet section of the National Law Review.

Hackers Eavesdrop and Obtain Sensitive Data of Users Through Home Smart Assistants

Although Amazon and Google respond to reports of vulnerabilities in their popular home smart assistants Alexa and Google Home, hackers continue to work hard to exploit those vulnerabilities so they can listen to users’ every word and harvest sensitive information for use in future attacks.

Last week, ZDNet reported that two security researchers at Security Research Labs (SRLabs) discovered phishing and eavesdropping vectors that exploit the back-end access Amazon and Google give to app developers, which “provide[s] access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.” In other words, hackers can turn the very tools Amazon and Google offer Alexa and Google Home developers against the users of those devices.

By inserting certain commands into the back end of a normal Alexa/Google Home app, an attacker can silence the assistant for long periods even though it remains active. After the silence, the attacker sends a phishing message that the user does not connect to the app they just interacted with: a fake prompt, appearing to come from Amazon or Google, asking for the user’s account password. Once the attacker has access to the home assistant, the attacker can keep the listening device active and eavesdrop on and record the user’s conversations. When attackers can hear every word, even while the device appears to be turned off, they can obtain highly personal information that can be used malevolently in the future. The shape of the attack is sketched below.
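
To make those mechanics concrete, here is a deliberately schematic sketch of that flow. None of these names come from the real Alexa or Google Home SDKs: the Response class, the intent handlers and the SILENT_FILLER string are hypothetical stand-ins for the behavior SRLabs demonstrated.

```python
# Hypothetical sketch of the "silence then phish" flow. No real smart
# assistant SDK is used; every name here is an illustrative stand-in.
from dataclasses import dataclass

# In the reported attack this was a long run of an unpronounceable code
# point, which the assistant "speaks" as silence while staying active.
SILENT_FILLER = "<unpronounceable>. " * 30

@dataclass
class Response:
    speech: str           # what the assistant says aloud
    keep_listening: bool  # whether the microphone session stays open

def horoscope_intent() -> Response:
    """The app behaves normally at first, so it passes platform review."""
    return Response("Here is your horoscope for today.", keep_listening=False)

def malicious_update_intent() -> Response:
    """After certification, the attacker quietly changes the back end."""
    # Step 1: fake an exit -- a long "silence" while the session stays open.
    # Step 2: phish -- a prompt that sounds like it comes from the platform.
    phish = ("An important security update is available for your device. "
             "Please say 'start update' followed by your password.")
    return Response(SILENT_FILLER + phish, keep_listening=True)
```

The two-step shape is the whole trick: the user hears silence, assumes the app has exited, and then trusts the official-sounding voice prompt that follows.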

The manufacturers of home smart assistants reiterate to users that the devices will never ask for their account passwords. Cyber hygiene for home assistants is no different from cyber hygiene for email: never supply credentials in response to an unsolicited prompt.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more hacking risk mitigation, see the National Law Review Communications, Media & Internet law page.

Small and Mid-Sized Businesses Continue to Be Targeted by Cybercriminals

A recent Ponemon Institute study finds that small and mid-sized businesses continue to be targeted by cybercriminals and are struggling to direct appropriate resources to combat the attacks.

The Ponemon study finds that 76 percent of the 592 companies surveyed had experienced a cyber-attack in the previous year, up from 70 percent the year before. Phishing and social engineering attacks and scams were the most common form of attack, reported by 57 percent of the companies, while 44 percent of those surveyed said the attack came through a malicious website that a user accessed. I attended a meeting of Chief Information Security Officers this week and was shocked at one statistic that was discussed: a large company filters out 97 percent of the email directed at its employees every day. That means only 3 percent of the email addressed to a company’s users is legitimate business correspondence.

A recent Accenture report shows that 43 percent of all cyber-attacks are aimed at small businesses, but only 14 percent of them are prepared to respond. Business insurance company Hiscox estimates that the average cost of a cyber-attack for small companies is $200,000, and that 60 percent of those companies go out of business within six months of the attack.

These statistics confirm what we all know: cyber-attackers target the lowest-hanging fruit, namely small to mid-sized businesses, municipalities and other governmental entities known to have limited resources to invest in cybersecurity defensive tools. Small and mid-sized businesses that cannot devote sufficient resources to protecting their systems and data may wish to consider other ways to limit risk, including prohibiting employees from accessing websites or email for personal reasons during working hours. This may sound draconian, but employees put companies at risk by surfing the web while at work and clicking on malicious emails that promise free merchandise. Stopping risky digital behavior is no different from prohibiting other forms of risky behavior in the working environment; we’ve just never thought of it this way before.

Up to this point, employers have allowed employees to access their personal phones, email and websites during working hours. This has contributed to the crisis we now face, with companies often being attacked as a result of their employees’ behavior. No matter how much money is devoted to securing the perimeter with firewalls, spam filters or blacklisting, employees still cause a large majority of security incidents and breaches by clicking on malicious websites or being duped into clicking on a malicious email. We have to figure out how employees can do their jobs while also protecting their employers.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more on cybersecurity, see the National Law Review Communications, Media & Internet law page.

Is Your iPhone Spying on You (Again)?

In the latest installment of this seemingly ongoing tale, Google uncovered (for the second time in a month) security flaws in Apple’s iOS that put thousands of users at risk of inadvertently installing spyware on their iPhones. For two years.

Google’s team of hackers – working on Project Zero – says the cyberattack occurred when Apple users visited a seemingly genuine webpage, with the spyware then installing itself on their phones. The spyware was capable of sending the user’s texts, emails, photos, real-time location, contacts and account details (you get the picture) almost instantaneously back to the perpetrators of the hack (which some reports suggest was a nation state). The hack wasn’t limited to Apple apps either, with reports that the malware was able to extract data from WhatsApp, Google Maps and Gmail.

For us, the scare factor goes beyond data from our smart devices inadvertently revealing secret locations, or being used against us in court: the data and information the cyberspies could have accessed could wreak absolute havoc on the lives of everyday iPhone users (and of the people whose details they hold in their phones).

We’re talking about this in the past tense because, while Project Zero only discovered the flaws recently, Apple reportedly fixed the vulnerability without much ado in February this year by releasing a software update.

So how do you protect yourself from being spied on? It seems there’s no sure-fire way to entirely prevent yourself from becoming a victim or, if you were a victim of this particular attack, to mitigate the damage. But, according to Apple, “keeping your software up to date is one of the most important things you can do to maintain your Apple product’s security”. We might not be ignoring those pesky “a new update is available for your phone” messages anymore.


Copyright 2019 K&L Gates

ARTICLE BY Cameron Abbott and Allison Wallace of K&L Gates.
For more on device cyber-vulnerability, see the National Law Review Communications, Media & Internet law page.

Facebook “Tagged” in Certified Facial Scanning Class Action

Recently, the Ninth Circuit Court of Appeals held that an Illinois class of Facebook users can pursue a class action lawsuit arising out of Facebook’s use of facial scanning technology. A three-judge panel in Nimesh Patel, et al. v. Facebook, Inc., Case No. 18-15982, issued a unanimous ruling that the mere collection of an individual’s biometric data was a sufficient actual or threatened injury under the Illinois Biometric Information Privacy Act (“BIPA”) to establish standing to sue in federal court. The Court affirmed the district court’s decision certifying a class. This creates significant financial risk for Facebook, because BIPA provides for statutory damages of $1,000 to $5,000 each time Facebook’s facial scanning technology was used in the State of Illinois.

This case is important for several reasons. First, the decision recognizes that the mere collection of biometric information may be actionable because it harms an individual’s privacy. Second, the decision highlights the possible extraterritorial application of state data privacy laws, even those passed by state legislatures intending to protect only their own residents. Third, the decision lays the groundwork for a potential circuit split on what constitutes a sufficiently concrete injury to confer standing under the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). Fourth, given the Illinois courts’ liberal construction and interpretation of the statute, class actions in this sphere are likely to continue to increase.

The Illinois class is challenging Facebook’s “Tag Suggestions” program, which scans for and identifies people in uploaded photographs for photo tagging. The class plaintiffs alleged that Facebook collected and stored biometric data without prior notice or consent, and without a data retention schedule that complies with BIPA. Passed in 2008, Illinois’ BIPA prohibits gathering the “scan of hand or face geometry” without users’ permission.

The district court previously denied Facebook’s numerous motions to dismiss the BIPA action on both procedural and substantive grounds and certified the class. In moving to decertify the class, Facebook argued that any BIPA violations were merely procedural and did not amount to “an injury of a concrete interest” as required by Spokeo.

In its ruling, the Ninth Circuit determined that Facebook’s use of facial recognition technology without users’ consent “invades an individual’s private affairs and concrete interests.” According to the Court, such privacy concerns were a sufficient injury-in-fact to establish standing, because “Facebook’s alleged collection, use, and storage of plaintiffs’ face templates here is the very substantive harm targeted by BIPA.” The Court cited with approval Rosenbach v. Six Flags Entertainment Corp., — N.E.3d —, 2019 IL 123186 (Ill. 2019), a recent Illinois Supreme Court decision similarly finding that individuals can sue under BIPA even if they suffered no damage beyond mere violation of the statute. The Ninth Circuit also suggested that “[s]imilar conduct is actionable at common law.”

On the issue of class certification, the Ninth Circuit’s decision creates a precedent for extraterritorial application of the BIPA. Facebook unsuccessfully argued that (1) the BIPA did not apply because Facebook’s collection of biometric data occurred on servers located outside of Illinois, and (2) even if BIPA could apply, individual trials must be conducted to determine whether users uploaded photos in Illinois. The Ninth Circuit rejected both arguments. The Court determined that (1) the BIPA applied if users uploaded photos or had their faces scanned in Illinois, and (2) jurisdiction could be decided on a class-wide basis. Given the cross-border nature of data use, the Court’s reasoning could be influential in future cases where a company challenges the applicability of data breach or data privacy laws that have been passed by state legislatures intending to protect their own residents.

The Ninth Circuit’s decision also lays the groundwork for a potential circuit split. In two cases from December 2018 and January 2019, a federal judge in the Northern District of Illinois reached a different conclusion than the Ninth Circuit on the issue of BIPA standing. In both cases, the Northern District of Illinois ruled that retaining an individual’s private information is not a sufficiently concrete injury to satisfy Article III standing under Spokeo. One of these cases, which concerned Google’s free Google Photos service that collects and stores face-geometry scans of uploaded photos, is currently on appeal to the Seventh Circuit.

The Ninth Circuit’s decision paves the way for a class action trial against Facebook. The case was only weeks away from trial when the Ninth Circuit accepted Facebook’s Rule 23(f) appeal, so the litigation is expected to return to the district court’s trial calendar soon. If Facebook is found to have violated the Illinois statute, it could be held liable for substantial damages: as much as $1,000 for every “negligent” violation and $5,000 for every “reckless or intentional” violation of BIPA.

BIPA class action litigation has become increasingly popular since the statute’s enactment: over 300 putative class actions asserting BIPA violations have been filed since 2015. Illinois’ BIPA has also opened the door to other recent state legislation regulating the collection and use of biometric information. Two other states, Texas and Washington, already have specific biometric identifier privacy laws in place, although those laws are enforced by the state Attorney General rather than by private individuals. A similar California law is set to go into effect in 2020. Legislation similar to Illinois’ BIPA is also pending in several other states.

The Facebook case will continue to be closely watched, both in terms of the standing ruling as well as the potential extended reach of the Illinois law.


© Polsinelli PC, Polsinelli LLP in California

For more on biometric data privacy, see the National Law Review Communications, Media & Internet law page.

DOJ Gets Involved in Antitrust Case Against Symantec and Others Over Malware Testing Standards

The U.S. Department of Justice Antitrust Division has inserted itself into a case that questions whether the Anti-Malware Testing Standards Organization, Inc. (AMTSO) and some of its members are creating standards in a manner that violates antitrust laws.

AMTSO says it is exempt from per se antitrust claims under the Standards Development Organization Advancement Act of 2004 (SDOAA). Symantec Corp., an AMTSO member, says the more flexible “rule of reason” applies, meaning it must be proven that the standards actually undermine competition, which Symantec argues the recommended guidelines do not.

NSS Labs, Inc. is an Austin, Texas-based cybersecurity testing company offering services that include “data center intrusion prevention” and “threat detection analytics.”

In addition to Symantec, AMTSO members include widely recognized names like McAfee and Microsoft, as well as names known well in cybersecurity circles: CarbonBlack, CrowdStrike, FireEye, ICSA, and TrendMicro. NSS Labs also is a member, but says it is among a small number of testing service providers. NSS maintains that the organization is dominated by product vendors who easily outvote service providers like NSS, AV-Comparatives, AV-Test and SKD LABS, claims the organization disputes.

On Sept. 19, 2018, NSS Labs filed suit in the U.S. District Court for the Northern District of California against AMTSO, CrowdStrike (since voluntarily dismissed), Symantec, and ESET, alleging the product companies used their power in AMTSO to control the design of the malware testing standards, “actively conspiring to prevent independent testing that uncovers product deficiencies to prevent consumers from finding out about them.” The industry standard amounts to a group boycott that restrains trade, NSS Labs argues, hurting service providers (NSS Labs v. CrowdStrike, et al., No. 5:18-cv-05711-BLF, N.D. Calif.).

The case is before U.S. District Judge Beth Labson Freeman in Palo Alto, who has presided over a number of high-profile matters.

AMTSO moved to dismiss NSS Labs’ suit, citing its exemption from per se antitrust claims as a standards development organization (SDO). Further, it argues that the group is open to anyone and that, while there are three times more vendors than testing service providers in the organization, that ratio reflects the market itself.

On June 26, the DOJ Antitrust Division asked the court not to dismiss the case because further evidence is needed to determine whether the exemption under the SDOAA is justified.

AMTSO countered that the primary reason the case should be dismissed has “nothing to do” with the SDOAA. NSS failed to allege that AMTSO participated in any boycott, the organization says. All the group has done is “adopt a voluntary standard and foster debate about its merits, which is not illegal at all, let alone per se illegal,” the group says, adding that the Antitrust Division is asking the court to “eviscerate the SDOAA.”

Symantec first responded to the suit with a public attack on NSS Labs itself, criticizing its methodology and the lack of transparency in its testing procedures, as well as the company’s technical capability and its “pay to play” model for conducting public tests. NSS Labs’ leadership team includes a former principal engineer in the Office of the Chief Security Architect at Cisco, a former Hewlett-Packard professional who established and managed competitive intelligence network programs, and an information systems management professional who formerly held senior management positions at Deloitte, IBM and Aon Hewitt.

On July 8, Symantec responded to the Antitrust Division’s statement of interest. It argued that the SDOAA does not provide an exemption from the antitrust laws; instead, it offers “a legislative determination that the rule of reason – not the per se rule” applies to standard-setting activities. “That simply means the plaintiff must prove actual harm to competition, rather than relying on an inflexible rule of law,” Symantec says.

The company wrote that the government may have a point, albeit a moot one. “Symantec does not believe so, but perhaps the Division is right that there is a factual question about whether AMTSO’s membership lacks the balance the statute requires for the exclusion from per se analysis to apply,” Symantec says. Either way, the company argues, it doesn’t matter to the motions for dismissal because the per se rule does not apply.

Judge Freeman has set deadlines for disclosures, discovery, expert designations, and Daubert motions, with a trial date of Feb. 7, 2022.

Commentary

The antitrust analysis of standards setting is one of the sharpest of two-edged swords: When it works properly, it reflects a technology-driven process of reaching an industry consensus that often brings commercialization and interoperability of new technologies to market. When it is undermined, however, it reflects concerted action among competitors that agree to exclude disfavored technologies in a way that looks very much like a group boycott, a per se violation of Section 1 of the Sherman Act.

Accordingly, the Standards Development Organization Advancement Act of 2004 (SDOAA) recognizes that exempting bona fide standards development organizations (SDOs) from liability for per se antitrust violations can, when those organizations are functioning properly, promote the pro-competitive standard-setting process. But when do SDOs “function properly”? The answer is entirely procedural, and is embodied in the statutory definition of an SDO: an organization that “incorporate[s] the attributes of openness, balance of interests, due process, an appeals process, and consensus … “

The essential claim in the complaint by NSS Labs, therefore, is that the rules and procedures followed by AMTSO do not provide sufficient procedural safeguards to ensure that the organization arrives at a pro-competitive industry consensus rather than a group boycott for the benefit of one or a few industry players dressed in the garb of standard setting.

This is a factual inquiry that cannot be countered by a legal defense that simply declares the defendant is an SDO and, therefore, immune to suit under the statute. Whether the AMTSO is an SDO under the law or not depends on how it conducts itself, the make-up of its members, and its fidelity to the procedural principles embodied in the statute. The plaintiff’s claim is that AMTSO has not followed the procedural principles required to qualify as an SDO under the Act. This is a purely factual issue and, as such, cannot be resolved on a motion to dismiss.

The DOJ should be commended for urging the court to proceed to discovery to adduce the facts necessary to distinguish between legitimate standard setting and an unlawful group boycott, and it should remain vigilant in the face of SDOs and would-be SDOs that might be tempted to use the wrong side of the standard-setting sword to commit anticompetitive acts instead of the right side to produce welfare-enhancing industry consensus.

This is particularly true in vital industries like cybersecurity. Government agencies, businesses, and consumers are constantly and increasingly at risk from ever-evolving cyber threats. It is therefore imperative that the cybersecurity market remains competitive to ensure development of the most effective security products.


© MoginRubin LLP
This article was written by Jonathan Rubin and Timothy Z. LaComb of MoginRubin & edited by Tom Hagy for MoginRubin.
For more DOJ Antitrust activities, see the National Law Review Antitrust & Trade Regulation page.

Heavy Metal Murder Machines and the People Who Love Them

What is the heaviest computer you own?  Chances are, you are driving it.

And with all of the hacking news flying past us day after day, our imaginations have not even begun to grasp what could happen if a hostile actor decided to hack our automotive computers – individually or en masse. What better way to attack the American way of life than to disable and crash armies of cars, stranding them on the road, killing tens of thousands and shutting down the functionality of every city? Set every Ford F-150 to accelerate to 80 miles per hour at the same time on the same day, and don’t stick around to clean up the mess.

We learned that cyberwarfare could turn physical when the US/Israeli Stuxnet bug forced Iran’s nuclear centrifuges to overwork and physically break themselves (along with a few stray Indian centrifuges caught in the crossfire). This seems like a classic template for terror attacks: slip malicious code into machines that can actually kill people. Imagine if the World Trade Center attack had been carried out from a distance by simply taking over the airplanes’ computer operations and programming them to fly into public buildings.  Spectacular mission achieved, and no terrorist would be at risk.

This would be easy to do with automobiles. For example, buy a recent-model used car on credit at most U.S. lots and the car comes with a remote operation tool that allows the lender to shut off the car, keep it from starting up, and home in on its location so the car can either be “bricked” or grabbed by agents of the lender for non-payment. We know that a luxury car contains more than 100 million lines of code, whereas a Boeing 787 Dreamliner contains merely 6.5 million lines and a U.S. Air Force F-22 Raptor jet holds only 1.7 million.  Such complexity leads to further vulnerability.

The diaphanous separation between the real and electronic worlds is thinning every day, and not enough people are concentrating on the problem of keeping enormous, powerful machines from being hijacked from afar. We are a society that loves its freedom machines, but that love may lead to our downfall.

An organization called Consumer Watchdog has issued a report subtly titled KILL SWITCH: WHY CONNECTED CARS CAN BE KILLING MACHINES AND HOW TO TURN THEM OFF, which urges auto manufacturers to install physical kill switches in cars and trucks that would allow the vehicles to be disconnected from the internet. The switch would cost about fifty cents and could prevent an apocalyptic loss of control of nearly every vehicle on the road at the same time (the IoT definition of a bad day).

“Experts agree that connecting safety-critical components to the internet through a complex information and entertainment device is a security flaw. This design allows hackers to control a vehicle’s operations and take it over from across the internet. . . . By 2022, no less than two-thirds of new cars on American roads will have online connections to the cars’ safety-critical system, putting them at risk of deadly hacks.”

And if that isn’t frightening enough, the report continued,

“Millions of cars on the internet running the same software means a single exploit can affect millions of vehicles simultaneously. A hacker with only modest resources could launch a massive attack against our automotive infrastructure, potentially causing thousands of fatalities and disrupting our most critical form of transportation,”

If the government dictates seat belts and auto emissions standards, why on earth wouldn’t the Transportation Department require a certain level of connectivity security and software invulnerability from the auto industry?  We send millions of multi-ton killing machines capable of blinding speeds out on our roads every day, and there seems to be no standard for securing these machines against hacking.  Why not?

And why not require the 50 cent kill switch that can isolate each vehicle from the internet?

Fifty years ago, Ralph Nader’s Unsafe at Any Speed demonstrated the need for government regulation of the auto industry so that car companies’ raw greed would not override customer safety concerns.  Soon after, Lee Iacocca led a Ford design team that calculated it was worth the horrific flaming deaths of 180 Ford customers each year in 2,100 vehicle explosions due to a flawed gas tank design that was eventually fixed with a part costing less than one dollar per car.

Granted, safety is a much more important issue for auto manufacturers now than in the 1970s, but if so, why have we not seen industry teams meeting to devise safety standards for auto electronics the same way standards have been accepted in auto mechanics? If the industry won’t take this standard-setting task seriously, then the government should force it to do so.

And the government should be providing help in this space anyway. Vehicle manufacturers have only a commercially reasonable amount of money to spend addressing this electronic safety problem.  The Russian and Iranian governments have a commercially unreasonable amount of money to spend attacking us. Who makes up the difference in this critical infrastructure space? Recognizing our current state of cyber warfare – hostile government-sponsored hackers are already attacking our banking and power systems on a regular basis, not to mention attempting to manipulate our electorate – our government should be rushing in to bolster electronic and software security for the automotive and trucking sectors. Why doesn’t the TSB regulate the area and provide professional assistance to build better protections based on military-grade standards?

Nothing in our daily lives is more dangerous than our vehicles out of control. Nearly 1.25 million people die in road crashes each year, an average of 3,287 deaths a day, and an additional 20-50 million per year are injured or disabled. A terrorist or hostile government attack on the electronic infrastructure controlling our cars could easily multiply those numbers, as well as shut down U.S. roads, the economy and the health care system for all practical purposes.

We are not addressing the issue now with nearly the seriousness that it demands.

How many true car-mageddons will need to occur before we all take electronic security seriously?


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

This article was written by Theodore F. Claypoole of Womble Bond Dickinson (US) LLP.
For more on vehicle security, please see the National Law Review Consumer Protection law page.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that a model using only 15 demographic attributes could re-identify 99.98% of Americans in any incomplete dataset. Fascinating as this is for data analysts, individuals may be alarmed to hear that their anonymised data can be re-identified so easily, and potentially then accessed or disclosed by others in ways they never envisaged.
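
As a rough illustration of why so few attributes suffice, the sketch below counts how many records in a toy dataset are unique on a handful of quasi-identifiers. The column names and data are hypothetical, and the study itself used a far richer statistical model; the point is only that combinations of ordinary attributes quickly become fingerprints.

```python
# Toy illustration of re-identification risk: how many records are unique
# on a few quasi-identifiers? (Hypothetical data; the Nature Communications
# model is far more sophisticated and uses 15 demographic attributes.)
import pandas as pd

df = pd.DataFrame({
    "postcode":   ["3000", "3000", "3000", "3141"],
    "birth_year": [1985, 1985, 1990, 1985],
    "gender":     ["F", "F", "F", "M"],
})

quasi_identifiers = ["postcode", "birth_year", "gender"]

# A group of size 1 means that combination of attributes points at exactly
# one person: anyone who can link those attributes from another source has
# re-identified that record.
group_sizes = df.groupby(quasi_identifiers).size()
unique_records = (group_sizes == 1).sum()
print(f"{unique_records / len(df):.0%} of records are unique")  # prints 50%
```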

Re-identification techniques were recently used by the New York Times. In March this year, the Times pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, in order to reveal a decade’s worth of Donald Trump’s tax returns showing negative adjusted income. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may be in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified while the aggregate outcomes the companies need remain accessible.
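
A minimal sketch of that idea, assuming the simplest possible setting: a count query protected with the classic Laplace mechanism. The epsilon value and data are illustrative assumptions; production systems at these companies are considerably more elaborate.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy.
# Epsilon and the data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng()

def dp_count(values, epsilon=0.5):
    """Return a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one individual changes a count by at most 1, so noise
    drawn from Laplace(0, 1/epsilon) masks any single person's presence
    while keeping the aggregate statistic useful.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(values) + noise

ages_over_65 = [67, 71, 80, 66, 90]   # hypothetical cohort
print(round(dp_count(ages_over_65)))  # true count is 5, reported +/- noise
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee: exactly the trade-off between protecting individuals and keeping the data useful.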

For many businesses that use data anonymisation as a quick and cost-effective way to de-personalise data, it may come as a surprise that more is needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K&L Gates
This article is by Cameron Abbott of K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.