Supreme Court to Decide Who Can Sue Under Privacy Law

Does a consumer, as an individual, have standing to sue a consumer reporting agency for a “knowing violation” of the Fair Credit Reporting Act (“FCRA”), even if the individual may not have suffered any “actual damages”?

The question will be decided by the U.S. Supreme Court in Spokeo, Inc. v. Robins, 742 F.3d 409 (9th Cir. 2014), cert. granted, 2015 U.S. LEXIS 2947 (U.S. Apr. 27, 2015) (No. 13-1339). The Court’s decision will have far-reaching implications for suits under the FCRA and other statutes that regulate privacy and consumer credit information.

FCRA

Enacted in 1970, the Fair Credit Reporting Act obligates consumer reporting agencies to maintain procedures to assure the “maximum possible accuracy” of the consumer reports they create. Under the statute, consumer reporting agencies are persons who regularly engage “in the practice of assembling or evaluating consumer credit information or other information on consumers for the purpose of furnishing consumer reports to third parties.” Information about a consumer is considered a consumer report when a consumer reporting agency communicates it to another party and the information “is used or expected to be used or collected” for certain purposes, such as extending credit, underwriting insurance, or considering an applicant for employment. The information in a consumer report must relate to a “consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living.”

Under the FCRA, consumers may bring a private cause of action for alleged violations of their FCRA rights resulting from a consumer reporting agency’s negligent or willful actions. For a negligent violation, the consumer may recover the actual damages he or she may have sustained. For a “willful” or “knowing” violation, a consumer may recover either actual damages or statutory monetary damages of $100 to $1,000.

Background

Spokeo operates a website that aggregates personal data from public records and sells it for many purposes, including employment screening. The information provided on the site may include an individual’s contact information, age, address, income, credit status, ethnicity, religion, photographs, and social media use.

Spokeo, Inc., has the dubious distinction of receiving the first fine ($800,000) from the Federal Trade Commission (“FTC”) for FCRA violations involving the sale of Internet and social media data in the employment screening context. The FTC alleged that the company was a consumer reporting agency and that it failed to comply with the FCRA’s requirements when it marketed consumer information to companies in the human resources, background screening, and recruiting industries.

Conflict in Circuit Courts

In Robins v. Spokeo, Inc., Thomas Robins alleged several FCRA violations, including the reckless production of false information to potential employers. Robins did not allege that he had suffered or was about to suffer any actual or imminent harm from the information produced, raising only the possibility of a future injury.

The U.S. Court of Appeals for the Ninth Circuit, based in San Francisco, held that allegations of willful FCRA violations are sufficient to confer Article III standing on a plaintiff who has suffered no concrete harm, and who therefore could not otherwise invoke the jurisdiction of a federal court, because the statute authorizes a private right of action based on a bare violation. In other words, the consumer need not allege any resulting damage caused by a violation; the “knowing violation” of a consumer’s FCRA rights alone, the Ninth Circuit held, injures the consumer. The Ninth Circuit’s holding is consistent with other circuits that have addressed the issue. See, e.g., Beaudry v. TeleCheck Servs., Inc., 579 F.3d 702, 705-07 (6th Cir. 2009). It refused to follow the U.S. Court of Appeals for the Eighth Circuit in finding that one “reasonable reading of the [FCRA] could still require proof of actual damages but simply substitute statutory rather than actual damages for the purpose of calculating the damage award.” Dowell v. Wells Fargo Bank, NA, 517 F.3d 1024, 1026 (8th Cir. 2008).

The constitutional question before the U.S. Supreme Court is the scope of Congress’ authority to confer Article III standing, particularly whether a violation of consumers’ statutory rights under the FCRA is the type of injury for which Congress may create a private cause of action to redress. In Beaudry, the Sixth Circuit identified two limitations on Congress’ ability to confer standing:

  1. the plaintiff must be “among the injured,” and

  2. the statutory right must protect against harm to an individual rather than a collective.

The defendant companies in Beaudry provided check-verification services. They had failed to account for a change in the numbering system for Tennessee driver’s licenses. This led to reports incorrectly identifying consumers as first-time check-writers.

The Sixth Circuit did not require the plaintiffs in Beaudry to allege the consequential damages resulting from the incorrect information. Instead, it held that the FCRA “does not require a consumer to wait for consequential harm” (such as the denial of credit) before bringing suit under FCRA for failure to implement reasonable procedures in the preparation of consumer reports. The Ninth Circuit endorsed this position, holding that the other standing requirements of causation and redressability are satisfied “[w]hen the injury in fact is the violation of a statutory right that [is] inferred from the existence of a private cause of action.”

Authored by: Jason C. Gavejian and Tyler Philippi of Jackson Lewis P.C.

Jackson Lewis P.C. © 2015

When Coworkers Invade Your Space re: Personal Privacy in the Workplace

Raymond Law Group LLC, a Connecticut and Boston law firm

Invasion of personal privacy in the workplace concerns all of us, but can you sue for it? A Connecticut trial court recently addressed this compelling privacy issue. A Board of Education employee sued her coworkers for intentional infliction of emotional distress and for invasion of privacy. The employee alleged that her coworkers had, for several months, gathered together without her knowledge or permission to open and read personal materials she had stored on her work computer. The Waterbury Superior Court held that the employee had not stated a claim for intentional infliction of emotional distress, as the alleged conduct was only “undesirable and inappropriate,” and thus did not meet the “extreme and outrageous” standard for an intentional infliction of emotional distress claim. However, the court held that the employee had stated a claim for invasion of privacy, since her coworkers’ uninvited intrusion into her personal material was behavior that a reasonable person would find highly offensive. Referencing a 2009 District of Connecticut case, in which the court held that employees have a reasonable expectation of privacy in their work emails, the Waterbury Superior Court noted that although the employee’s computer was a work computer, and not a personal device, this fact did not preclude her from bringing an invasion of privacy claim.

The right of privacy was first recognized by the Connecticut Supreme Court in 1982, when the Court adopted the standards for invasion of privacy listed in the Restatement (Second) of Torts. The Restatement explains that “[o]ne who intentionally intrudes, physically or otherwise, upon the solitude or seclusion of another or his private affairs or concerns, is subject to liability to the other for invasion of his privacy if the intrusion would be highly offensive to a reasonable person.” 3 Restatement (Second), Torts, Invasion of Privacy § 652B, p. 378 (1977). Following the Restatement, Connecticut law now recognizes four categories of invasion of privacy: 1) unreasonable intrusion upon the seclusion of another; 2) appropriation of the other’s name or likeness; 3) unreasonable publicity given to the other’s private life; or 4) publicity that unreasonably places the other in a false light before the public. Goodrich v. Waterbury Republican-Am., Inc., 188 Conn. 107, 127-28 (1982). A few years later, the Connecticut Appellate Court adopted the invasion of privacy damages listed in the Restatement (Second) of Torts. In that decision, the court held that a plaintiff who has established a cause of action for invasion of his privacy is entitled to recover damages for: 1) the harm to his interest in privacy resulting from the invasion; 2) his mental distress proved to have been suffered if it is of a kind that normally results from such an invasion; and 3) special damages of which the invasion is a legal cause. Jonap v. Silver, 1 Conn. App. 550, 557 (1984).

This raises an interesting legal question: if a plaintiff’s claim satisfies the standard for invasion of privacy but not the standard for intentional infliction of emotional distress, what damages can the plaintiff recover? An intentional infliction of emotional distress claim must involve “extreme and outrageous” conduct, while an invasion of privacy claim must involve conduct that is “highly offensive to a reasonable person.” It seems incongruous that a plaintiff cannot recover for emotional distress under an intentional infliction of emotional distress claim, yet can recover for “mental distress” arising from an invasion of privacy claim. Courts, however, appear to have determined that invasion of privacy requires a less stringent showing of distress than intentional infliction of emotional distress. If so, it makes sense that a plaintiff could fail on the more demanding emotional distress claim while still recovering for mental distress under the less stringent invasion of privacy claim.

The Data Security and Breach Notification Act of 2015

Jackson Lewis P.C.

On March 25, 2015, the U.S. House of Representatives Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade approved draft legislation that would replace state data breach notification laws with a national standard.  This draft legislation comes on the heels of the President’s call for a national data breach notification law.  The proposed legislation is identified as the “Data Security and Breach Notification Act of 2015.”

The overview of the draft provides that “Data breaches are a growing problem as e-commerce evolves and Americans spend more of their time and conduct more of their activities online. Technology has empowered consumers to purchase goods and services on demand, but it has also empowered criminals to target businesses and steal a host of personal data. This costs consumers tens of billions of dollars each year, imposes all kinds of hassles, and can have a lasting impact on their credit.”  Like many existing state laws, the proposal would require companies to secure the personal data they collect and maintain about consumers and to provide notice to individuals in the event of a breach of security involving personal information.

The draft legislation contains several key provisions:

  • Companies would be required to implement and maintain reasonable security measures and practices to protect and secure personal information;

  • The definition of personal information is more expansive than in most state breach notification laws, including home address, telephone number, mother’s maiden name, and date of birth as data elements;

  • Companies are not required to provide notice if there is no reasonable risk of identity theft, economic loss, economic harm, or financial harm;

  • Companies would be required to provide notice to affected individuals within 30 days after discovery of a breach;

  • The law would preempt all state data breach notification laws;

  • Enforcement would be by the Federal Trade Commission (FTC) or state attorneys general; and

  • No private right of action would be permitted.

The measure must now be formally introduced in the House of Representatives before further action can be taken.  Notably, similar measures introduced in the past in an effort to nationalize data breach response have all failed.  However, given the number of individuals affected by, or likely to be affected by, a data breach and the fact that identity theft has topped the FTC’s ranking of consumer complaints for the 15th consecutive year, support for a national data breach notification law has never been stronger.

Workplace Privacy Blog

New Data Security Bill Seeks Uniformity in Protection of Consumers’ Personal Information

Morgan, Lewis & Bockius LLP.

Last week, House lawmakers floated a bipartisan bill titled the Data Security and Breach Notification Act (the Bill). The Bill comes on the heels of legislation proposed by US President Barack Obama, which we recently discussed in a previous post. The Bill would require certain entities that collect and maintain consumers’ personal information to maintain reasonable data security measures in light of the applicable context, to promptly investigate a security breach, and to notify affected individuals of the breach in detail. In our Contract Corner series, we have examined contract provisions related to cybersecurity, including addressing a security incident if one occurs.

Some notable aspects of the Bill include the following:

  • Notification to individuals affected by a breach would generally be required within 30 days after a company has begun taking investigatory and corrective measures (rather than based on the date of the breach’s discovery).

  • Notification to the Federal Trade Commission (FTC) and the Secret Service or the Federal Bureau of Investigation would be required if the number of individuals whose personal information was (or there is a reasonable basis to conclude was) leaked exceeds 10,000.

  • To advance uniform and consistently applied standards throughout the United States, the Bill would preempt state data security and notification laws. However, the scope of preemption continues to be discussed, and certain entities would be excluded from the Bill’s requirements, including entities subject to existing data security regulatory regimes (e.g., entities covered by the Health Insurance Portability and Accountability Act).

  • Violations of the Bill would be enforced by the FTC or state attorneys general (and not by a private right of action).

IoT – It’s All About the Data, Right?

Foley and Lardner LLP

A few weeks ago, the FTC released a report on the Internet of Things (IoT). IoT refers to “things” such as devices or sensors – other than computers, smartphones, or tablets – that connect, communicate or transmit information with or between each other through the Internet. This year, there are estimated to be over 25 billion connected devices, and by 2020, 50 billion. With the ubiquity of IoT devices raising various concerns, the FTC has provided several recommendations.

Security

The report includes the following security recommendations for companies developing Internet of Things devices:

  • Build security into devices at the outset, rather than as an afterthought in the design process

  • Train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization

  • Ensure that, when outside service providers are hired, those providers are capable of maintaining reasonable security, and provide reasonable oversight of the providers

  • When a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk

  • Consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network

  • Monitor connected devices throughout their expected life cycle, and where feasible, provide security patches to cover known risks

Data Minimization

The report suggested companies consider data minimization – that is, limiting the collection of consumer data, and retaining that information only for a set period of time, and not indefinitely. Data minimization addresses two key privacy risks: first, the risk that a company with a large store of consumer data will become a more enticing target for data thieves or hackers, and second, that consumer data will be used in ways contrary to consumers’ expectations.

Notice and Choice

The FTC also made recommendations relating to notice and choice: companies should notify consumers and give them choices about how their information will be used, particularly when the data collection goes beyond consumers’ reasonable expectations.

What Does This Mean for Device Manufacturers?

It is evident from the FTC’s report that security and data governance are important considerations for IoT device manufacturers. Although the report suggests implementing data minimization protocols to limit the type and amount of data collected and stored, IoT device manufacturers should not be short-sighted when deciding what data to collect and store through their IoT devices. For many IoT device manufacturers, the data collected may be immensely valuable to them and to other stakeholders. It would be naïve to decide not to collect certain types of data simply because there is no clear use or application for the data, because the costs and risks of storing it seem prohibitive, or because the manufacturer wants to reduce its exposure in the event of a security breach. In fact, IoT device manufacturers quite often do not realize what types of data may be useful. They would be best served by analyzing who the stakeholders of their data may be.

For instance, an IoT device manufacturer that monitors soil conditions on farms may realize that the data it collects can be useful not only to farmers, but also to insurance companies seeking to understand water table levels; to produce suppliers, wholesalers, and retailers predicting produce inventory; and to farm equipment suppliers, among others. Because of this, IoT device manufacturers should identify the stakeholders of the data they collect early, and should revisit the data they collect to identify new stakeholders not previously apparent from trends that emerge in the data.

Moreover, IoT device manufacturers should constantly consider ways to monetize or otherwise leverage the data they gather and collect. IoT device manufacturers tend to shy away from owning the data they collect in an effort to respect their customers’ privacy. Instead of not collecting sensitive data at all, IoT device manufacturers would be best served by exploring and implementing data collection and storage techniques that reduce their exposure to security breaches while at the same time allaying the fears of customers.

Secure Sockets Layer (SSL) 3.0 Encryption Declared “No Longer Acceptable” to Protect Data

McDermott Will & Emery

On Friday, February 13, 2015, the Payment Card Industry (PCI) Security Standards Council (Council) posted a bulletin to its website, becoming the first standards body to publicly pronounce that Secure Sockets Layer (SSL) version 3.0 (and, by inference, any earlier version) is “no longer… acceptable for protection of data due to inherent weaknesses within the protocol” and, because of those weaknesses, “no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” The bulletin does not offer an alternative means that would be acceptable, but rather “urges organizations to work with [their] IT departments and/or partners to understand if [they] are using SSL and determine available options for upgrading to a strong cryptographic protocol as soon as possible.” The Council reports that it intends to publish soon updated versions of PCI-DSS and the related PA-DSS that will address this issue. These developments follow news of the Heartbleed and POODLE attacks in 2014, which exposed SSL vulnerabilities.

Although the PCI standards only apply to merchants and other companies involved in the payment processing ecosystem, the Council’s public pronouncement that SSL is vulnerable and weak is a wakeup call to any organization that still uses an older version of SSL to encrypt its data, regardless of whether these standards apply.

As a result, every company should consider taking the following immediate action:

  1. Work with your IT stakeholders and those responsible for website operation to determine whether your organization, or a vendor for your organization, uses SSL v. 3.0 or any earlier version (a minimal probing sketch follows this list);

  2. If it does, evaluate with those stakeholders how best to disable these older versions, while immediately upgrading to an acceptable, strong cryptographic protocol as needed;

  3. Review vendor obligations to ensure that a stronger encryption protocol is mandated, and audit vendors to confirm they are implementing that greater protection;

  4. If needed, consider retaining a reputable security firm to audit or evaluate your and your vendors’ encryption protocols and ensure vulnerabilities are properly remediated; and

  5. Ensure proper testing prior to rollout of any new protocol.
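
For teams working through step 1, one quick check is to attempt handshakes against an endpoint from a test client and see what it accepts. The following is a minimal sketch in Python using only the standard library; the hostname is a placeholder, and the legacy pass caps the client at TLS 1.0 because SSLv3 itself is typically compiled out of modern OpenSSL builds and cannot be probed this way.

    import socket
    import ssl

    def probe_tls(host: str, port: int = 443) -> None:
        """Report the protocol a server negotiates, and whether it still
        accepts a client capped at TLS 1.0."""
        # Pass 1: a default client context negotiates the best mutually
        # supported protocol, so this shows the server's preferred version.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: negotiates {tls.version()}")

        # Pass 2: cap the client at TLS 1.0. A successful handshake means the
        # server still accepts legacy protocol versions and should be upgraded.
        legacy = ssl.create_default_context()
        legacy.minimum_version = ssl.TLSVersion.TLSv1
        legacy.maximum_version = ssl.TLSVersion.TLSv1
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with legacy.wrap_socket(sock, server_hostname=host) as tls:
                    print(f"{host}: still accepts {tls.version()}")
        except (ssl.SSLError, OSError):
            # Either the server refused TLS 1.0 or the local OpenSSL build
            # disallows it; no legacy session was established either way.
            print(f"{host}: TLS 1.0-only handshake did not complete")

    if __name__ == "__main__":
        probe_tls("example.com")  # placeholder host; substitute your own endpoints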

Responding to the Anthem Cyber Attack

Proskauer Rose LLP

Anthem Inc. (Anthem), the nation’s second-largest health insurer, revealed late on Wednesday, February 4, that it was the victim of a significant cyber attack. According to Anthem, the attack exposed personal information of approximately 80 million individuals, including those insured by related Anthem companies. Anthem has reported that the exposed information includes member names, member health ID and Social Security numbers, dates of birth, addresses, telephone numbers, email addresses, and employment information. The investigation of the massive data breach is ongoing, and media outlets have reported that class action suits have already been filed against Anthem in California and Alabama, claiming that lax Anthem security measures contributed to this incident.

Employers, multiemployer health plans, and others responsible for employee health benefit programs should take note that the Health Insurance Portability and Accountability Act (HIPAA) and state data breach notification laws may hold them responsible for ensuring that certain notifications are made related to the incident. The nature of these obligations will depend on whether the benefits offered through Anthem are provided under an insurance policy, and so are considered to be “fully insured,” or whether the Anthem benefits are provided under a “self-insured” arrangement, where Anthem does not insure the benefits but instead administers them. The most significant legal obligations on the part of employers, multiemployer health plans, and others responsible for employee health benefit programs will apply to Anthem benefits that are self-insured.

Where notifications must be made, the notifications may be due to former and present employees and their dependents, government agencies, and the media.  Where HIPAA applies, the notifications will need to be made “without unreasonable delay” and in any event no later than 60 days after the employer or other responsible party becomes aware that the breach has affected its own health plan participants. Where state data breach laws apply, notifications generally must be made in the most expedient time possible and without unreasonable delay, subject to certain permitted delays. Some state laws impose outside timeframes as short as 30 days. Under the state laws, reporting obligations on the part of employers, multiemployer health plans, and others responsible for employee health benefit programs will generally turn on whether they, or Anthem, “own” the breached data. Since the state laws apply to breaches of data of their residents, regardless of the states in which the compromised entities and data owners are located, and since former employees and dependents could reside anywhere, a comprehensive state law analysis is required to determine the legal requirements arising from this data breach. Fortunately, depending on the circumstances, some (but not all) state data breach notification laws defer to HIPAA breach notification procedures, and do not require additional action where HIPAA applies and is followed.

As potentially affected parties wait for confirmation from Anthem as to whether any of their employees, former employees or their covered dependents has had their data compromised, we recommend that affected parties work with their legal counsel to determine what their responsibilities, if any, might be to respond to this incident. Among other things, for self-insured arrangements, HIPAA business associate agreements and other contracts with Anthem should be reviewed to assess how data breaches are addressed, whether data ownership has been addressed by contract, and whether indemnification provisions may apply. Consideration should also be given to promptly reaching out to Anthem to clarify the extent to which Anthem will be addressing notification responsibilities. Once parties are in a position to make required notifications, we also recommend that companies consult with legal counsel to review the notifications and the distribution plans for those notifications to assure that applicable legal requirements have been satisfied.

It’s Data Privacy Day 2015

Mintz Levin Law Firm

Today is Data Privacy Day, and as you might expect, we have a few bits and bytes for you.

Use the Opportunity

Data Privacy Day is another opportunity to push out a note to employees regarding their own privacy and security — and how that can help the company.

The Federal Trade Commission Issues IoT (Internet of Things) Report

Following up on its November 2013 workshop on the Internet of Things, the Federal Trade Commission (“FTC”) has released a staff report on privacy and security in the context of the Internet of Things (“IoT”), “Internet of Things: Privacy & Security in a Connected World,” along with a document that summarizes the best practices for businesses contained in the Report.  The primary focus of the Report is the application of four of the Fair Information Practice Principles (“FIPPs”) to the IoT – data security, data minimization, notice, and choice.

The report begins by defining IoT for the FTC’s purposes as “‘things’ such as devices or sensors – other than computers, smartphones, or tablets – that connect, communicate or transmit information with or between each other through the Internet,” but limits this to devices that are sold to or used by consumers, rather than businesses, in line with the FTC’s consumer protection mandate.  Before discussing the best practices, the FTC goes on to delineate several benefits and risks of the IoT.  Among the benefits are (1) improvements to health care, such as insulin pumps and blood-pressure cuffs that give people the tools to monitor their own vital signs from home and avoid trips to the doctor; (2) more efficient energy use at home, through smart meters and home automation systems; and (3) safer roadways, as connected cars can notify drivers of dangerous road conditions and offer real-time diagnostics of a vehicle.

The risks highlighted by the Report include, among others, (1) unauthorized access and misuse of personal information; (2) unexpected uses of personal information; (3) collection of unexpected types of information; (4) security vulnerabilities in IoT devices that could facilitate attacks on other systems; and (5) risks to physical safety, such as may arise from hacking an insulin pump.

In light of these risks, the FTC staff suggests a number of best practices based on four FIPPs. At the workshop from which this report was generated, all participants agreed on the importance of applying the data security principle.  However, participants disagreed concerning the suitability of applying the data minimization, notice, and choice principles to the IoT, arguing that minimization might limit potential opportunities for IoT devices, and notice and choice might not be practical depending on the device’s interface – for example, some do not have screens.  The FTC recognized these concerns but still proposed best practices based on these principles.

Recommendations

Data Security Best Practices:

  • Security by design.  This includes building in security from the outset and constantly reconsidering security at every stage of development. It also includes testing products thoroughly and conducting risk assessments throughout a product’s development.

  • Personnel practices.  Responsibility for product security should rest at an appropriate level within the organization.  This could be a Chief Privacy Officer, but the higher up the responsible party sits, the better off a product and company will be.

  • Oversee third party providers.  Companies should provide sufficient oversight of their service providers and require reasonable security by contract.

  • Defense-in-depth.  Security measures should be considered at each level at which data is collected, stored, and transmitted, including a customer’s home Wi-Fi network over which the collected data will travel.  Sensitive data should be encrypted (a minimal encryption sketch follows this list).

  • Reasonable access control.  Strong authentication and identity validation techniques will help to protect against unauthorized access to devices and customer data.
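
To make the encryption point in the defense-in-depth bullet concrete, the following is a minimal sketch using the third-party cryptography package’s Fernet recipe (authenticated symmetric encryption). The device reading and field names are illustrative assumptions rather than anything prescribed by the FTC, and a real deployment would keep the key in a secrets manager or hardware module instead of generating it alongside the data.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # In practice the key comes from a secrets manager or hardware module,
    # never from a file stored next to the encrypted data.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Illustrative sensitive reading from a hypothetical connected device.
    reading = b'{"device_id": "thermostat-42", "occupant_home": true}'

    ciphertext = fernet.encrypt(reading)     # authenticated encryption (AES-CBC + HMAC-SHA256)
    stored_record = {"payload": ciphertext}  # persist only the ciphertext

    # Later, an authorized service holding the key can recover the reading.
    plaintext = fernet.decrypt(stored_record["payload"])
    assert plaintext == reading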

Data Minimization Best Practices:

  • Carefully consider data collected.  Companies should be fully cognizant of why a given category of data is collected and how long that data should be stored.

  • Only collect necessary data.  Avoid collecting data that is not needed to serve the purpose for which a customer purchases the device. Establish a reasonable retention limit on data the device does collect.

  • Deidentify data where possible.  If deidentified data would be sufficient, companies should maintain such data only in deidentified form and work to prevent reidentification.

Notice and Choice Best Practices:  The FTC initially notes that the context in which data is collected may mean that notice and choice are not necessary, for example, when information is collected only to support the specific purpose for which the device was purchased.

When notice or choice is necessary, the FTC offers several suggestions for how a company might give or obtain it, including (1) offer choice at the point of sale; (2) direct customers to online tutorials; (3) print QR codes on the device that take customers to a website for notice and choice; (4) provide choices during initial set-up; (5) provide icons to convey important privacy-relevant information, such as a light that flashes when a device connects to the Internet; (6) provide notice through emails or texts when requested by consumers; and (7) make use of a user experience approach, such as personalizing privacy preferences based on the choices a customer has already made on another device.

Legislation.  The FTC staff recommends against IoT-specific legislation in the Report, citing the infancy of the industry and the potential for federal legislation to stifle innovation.  Instead, the FTC recommends technology-neutral privacy and data security legislation.  Although the Report does not say so explicitly, this appears to be a recommendation for something akin to the Consumer Privacy Bill of Rights recently proposed by the President, along with giving the FTC authority to enforce certain privacy protections, including notice and choice, even in the absence of a showing of deceptive or unfair acts or practices.

In the meantime, the FTC notes that it will continue to provide privacy and data security oversight of IoT as it has in other areas of privacy.  Specifically, the FTC would continue to enforce the FTC Act, the Children’s Online Privacy Protection Act, and other relevant statutes.  Other initiatives would include developing education materials, advocating on behalf of consumer privacy, and participating in multi-stakeholder groups to develop IoT guidelines for industry.

Three Lessons for Mitigating Network Security Risks in 2015: Bring Your Own Device

Risk Management Monitor

Not too long ago, organizations fell into one of two camps when it came to personal mobile devices in the workplace – these devices were either connected to their networks or they weren’t.

But times have changed. Mobile devices have become so ubiquitous that every business has to acknowledge that employees will connect their personal devices to the corporate network, whether there’s a bring-your-own-device (BYOD) policy in place or not. So really, those two camps we mentioned earlier have evolved – the devices are a given, and now, it’s just a question of whether or not you choose to regulate them.

This decision has significant implications for network security. If you aren’t regulating the use of these devices, you could be putting the integrity of your entire network at risk. As data protection specialist Vinod Banerjee told CNBC, “You have employees doing more on a mobile device and doing it ad hoc here and there and perhaps therefore not thinking about some of the risks that are apparent.” What’s worse, this has the potential to happen on a wide scale – Gartner predicted that, by 2018, more than half of all mobile users will turn first to their phone or tablet to complete online tasks. The potential for substantial remote access vulnerabilities is high.

So what can risk practitioners within IT departments do to regain control over company-related information stored on employees’ personal devices? Here are three steps to improve network security:

1. Focus on the Increasing Number of Endpoints, Not New Types

Employees are expected to have returned from holiday time off with all sorts of new gadgets they received as gifts, from fitness trackers to smart cameras and other connected devices.

Although these personal connected devices do pose some network security risk if they’re used in the workplace, securing different network-enabled mobile endpoints is really nothing special for an IT security professional. It doesn’t matter if it’s a smartphone, a tablet or a smart toilet that connects to the network – in the end, all of these devices are computers and enterprises will treat them as such.

The real problem for IT departments involves the number of new network-enabled endpoints. With each additional endpoint comes more network traffic and, subsequently, more risk. Together, a high number of endpoints has the potential to create more severe remote access vulnerabilities within corporate networks.

To mitigate the risk that accompanies these endpoints, IT departments will rely on centralized authentication and authorization functions to ensure user access control and network policy adherence. Appropriate filtering of all the traffic, data and information that is sent into the network by users is also very important. Just as drivers create environmental waste every time they get behind the wheel, network users constantly send waste – in this case, private web and data traffic, as well as malicious software – into the network through their personal devices. Enterprises need to prepare their networks for this onslaught.

2. Raise the Base Level of Security

Another way that new endpoints could chip away at a network security infrastructure is if risk practitioners fall into a trap where they focus so much on securing new endpoints, such as phones and tablets, that they lose focus on securing devices like laptops and desktops that have been in use for much longer.

It’s not difficult to see how this could happen – information security professionals know that attackers constantly change their modus operandi as they look for security vulnerabilities, often through new, potentially unprotected devices. So, in response, IT departments pour more resources into protecting these devices. In a worst-case scenario, enterprises could find themselves lacking the resources to both pivot and mitigate new vulnerabilities, while still adequately protecting remote endpoints that have been attached to the corporate network for years.

To offset this concern, IT departments need to maintain a heightened level of security across the entire network. It’s not enough to address devices ad hoc. It’s about raising the floor of network security, to protect all devices – regardless of their shape or operating system.

3. Link IT and HR When Deprovisioning Users

Another area of concern around mobile devices involves ex-employees. Employee termination procedures now need to account for BYOD and remote access, in order to prevent former employees from accessing the corporate network after their last day on the job. This is particularly important because IT staff have minimal visibility over ex-employees who could be abusing their remote access capabilities.

As IT departments know, generally the best approach to network security is to adopt policies that are centrally managed and strictly enforced. In this case, by connecting the human resources database with the user deprovisioning process, a company ensures all access to corporate systems is denied from devices, across-the-board, as soon as the employee is marked “terminated” in the HR database. This eliminates any likelihood of remote access vulnerabilities.
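
As a rough illustration of that HR-to-IT linkage, the sketch below defines a hypothetical polling step that reads newly “terminated” employee IDs from an HR system and revokes their access through an identity provider. Every interface and function name here is an assumption made for illustration; actual environments would use their own directory or identity-provider deprovisioning APIs, and would often drive the same step from an event feed rather than a poll.

    import datetime
    from typing import Iterable, List, Protocol

    class HRSystem(Protocol):
        # Hypothetical interface: yields employee IDs marked "terminated"
        # in the HR database since the given timestamp.
        def terminated_since(self, since: datetime.datetime) -> Iterable[str]: ...

    class IdentityProvider(Protocol):
        # Hypothetical interface: disables the accounts and revokes the VPN,
        # remote-access, and mobile tokens associated with one employee ID.
        def revoke_all_access(self, employee_id: str) -> None: ...

    def deprovision_terminated(hr: HRSystem, idp: IdentityProvider,
                               since: datetime.datetime) -> List[str]:
        """Central deprovisioning step: once HR marks an employee 'terminated',
        all corporate access tied to that employee is revoked across the board."""
        revoked = []
        for employee_id in hr.terminated_since(since):
            idp.revoke_all_access(employee_id)
            revoked.append(employee_id)
        return revoked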

Similarly, there also needs to be a process for removing all company data from an ex-employee’s personal mobile device. By implementing a mobile device management or container solution, which creates a distinct work environment on the device, you’ll have an easy-to-administer method of deleting all traces of corporate data whenever an employee leaves the company. This approach is doubly effective, as it also neatly handles situations when a device is lost or stolen.

New Risks, New Resolutions

As the network security landscape continues to shift, the BYOD and remote access policies and processes of yesterday will no longer be sufficient for IT departments to manage the personal devices of employees. The New Year brings with it new challenges, and risk practitioners need new approaches to keep their networks safe and secure.

President Obama Seeks to Strengthen and Clarify Cybercrime Law Enforcement

Covington & Burling LLP

On Tuesday, President Obama introduced a legislative proposal on privacy and data security that seeks to strengthen and clarify law enforcement’s ability to investigate and prosecute cybercrimes.

The first section of the proposed legislation would expand the definition of “racketeering activity” under the Racketeer Influenced and Corrupt Organizations (“RICO”) Act to include felony offenses under the Computer Fraud and Abuse Act (“CFAA”)—the federal anti-hacking statute.  The second section would amend existing law to deter “the development and sale of computer and cell phone spying devices.”  The third section proposes substantial changes intended to modernize the CFAA.  Finally, the proposal’s fourth section is aimed at strengthening the government’s ability to disrupt and shut down botnets—networks of computers often deployed to commit crimes, such as spreading malware.

Although much of the proposal is modeled on a similar proposal advanced by the White House in 2011, there are key differences, including making clear that it is a crime to access a computer in breach of a use restriction, while at the same time limiting the scope of liability for such access to cases that the Administration believes are serious enough to warrant prosecution under the CFAA.

Updating and Expanding the RICO Act to Include CFAA Offenses

The White House proposal would include felony violations of the CFAA in the definition of “racketeering activity” under the RICO Act.  This would provide for increased penalties for cybercrimes and afford prosecutors the ability to more easily charge certain members of organized criminal groups engaged in computer network attacks and related cybercrimes.

Deterring the Development and Sale of Computer and Cell Phone Spying Devices

The White House proposal seeks to deter the development and sale of computer and cell phone spying devices by instituting two changes.  First, the legislative proposal would amend 18 U.S.C. § 1956 to “enabl[e] appropriate charges for defendants who engage in money laundering to conceal profits from the sale of surreptitious interception devices.”  Second, it would amend 18 U.S.C. § 2513 “to allow for the criminal and civil forfeiture proceeds from the sale of surreptitious interception devices and property used to facilitate the crime.”  This would expand the scope of section 2513, which currently provides for the forfeiture of only the surreptitious devices themselves.

Modernizing the CFAA

According to the White House, the goal of the proposal’s third section is to “enhance [the CFAA’s] effectiveness against attackers on computers and computer networks, including those by insiders.”  The proposed legislation contains several key amendments to various CFAA provisions:

First, the proposal would make access in violation of certain use restrictions an illegal act under the CFAA by amending the definition of “exceeds authorized access” to include instances in which a user accesses a computer with authorization to obtain or alter information “for the purpose that the accessor knows is not authorized by the computer owner.”  Language of this sort would address, at least in part, an existing circuit split on the meaning of the language “exceeds authorized access,” as used in the CFAA.  Some commentators, however, have questioned whether the proposed language will resolve the current ambiguity over the CFAA’s reach.  For example, if an employee accessed a computer for a non-work-related purpose, it would be obvious that the employee would be violating the CFAA (as amended by the White House’s proposed language) if there were a written policy that states “company computers can be accessed only for work-related purposes.”  However, if a non-employee accessed the computer, there may not be a clear violation of the CFAA because the non-employee is not bound by—and thus would not be breaching—the employer’s policy.  As a result, the courts may still have disagreements about the scope of the phrase “exceeds authorized access” even with the new language.

The White House’s proposal would also add a new provision to the CFAA by amending 18 U.S.C. § 1030(a)—the subsection of the CFAA that lists the punishable offenses under the statute.  The added provision would provide new threshold requirements for criminal offenses resulting from users exceeding their authorized access.  The proposal would punish a user who “intentionally exceeds authorized access to a protected computer, and thereby obtains information from such computer” if one of three conditions is met: “(i) the value of the information obtained exceeds $5,000; (ii) the offense was committed in furtherance of any felony violation of the laws of the United States or of any State, unless such violation would be based solely on obtaining the information without authorization or in excess of authorization; or (iii) the protected computer is owned or operated by or on behalf of a governmental entity.”  While courts must still interpret the meaning of these conditions, they provide a clearer framework for prosecution of offenses under the statute and, in theory, would constrain the government’s ability to prosecute individuals under the CFAA for minor offenses.

Additionally, the White House proposal would amend the CFAA “to enable the prosecution of the sale of a ‘means of access’ such as a botnet.”  Further, instead of requiring the government to prove “intent to defraud” under this subsection (the intent standard applicable to violations motivated by financial gain), the legislation would require prosecutors only to establish “willfulness,” so as to criminalize unlawful trafficking in access for “other types of wrongdoing perpetrated using botnets” and not just in passwords and similar information.

The proposal would also enhance CFAA penalties and enforcement mechanisms by raising penalties for circumventing technological barriers to access a computer (e.g., hacking into or breaking into a computer), and by making such violations felonies carrying a prison term of up to ten years.  This is a significant change from the current law, which allows for either a misdemeanor or a felony carrying a maximum prison term of only five years.  The proposal would also create civil forfeiture procedures, “clarify that the ‘proceeds’ forfeitable [under the CFAA] are gross proceeds, as opposed to net proceeds,” and in appropriate circumstances, allow for the forfeiture of real property used to facilitate offenses under the statute.  And the proposal would clarify “that both conspiracy and attempt to commit a computer hacking offense are subject to the same penalties as completed, substantive offenses.”

Shutting Down Botnets

Finally, the legislative proposal would add to existing civil remedies by explicitly providing courts with the authority to issue injunctions aimed at disrupting or shutting down botnets.  Under the proposal, the Attorney General would be authorized to seek injunctive relief under 18 U.S.C. § 1345 if the government can show that the criminal conduct alleged would affect 100 or more protected computers during a one-year period.  Criminal conduct under the proposal would include “denying access to or operation of the computers [denial of service attacks], installing unwanted software on the computers [malware], using the computers without authorization, or obtaining information from the computers without authorization.”  The legislation would also protect from liability individuals or entities that comply with court orders and would allow courts to order the government to reimburse those individuals or entities for costs directly incurred in complying with such orders.

This post was written with contributions from Jim Garland.
