UNDER SURVEILLANCE: Police Commander and City of Pittsburgh Face Wiretap Lawsuit

Hi CIPAWorld! The Baroness here and I have an interesting filing that just came in the other day.

This one involves alleged violations of the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., and the Federal Wiretap Act, 18 U.S.C. § 2511, et seq.

Pursuant to the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., a person is guilty of a felony of the third degree if he:

(1) intentionally intercepts, endeavors to intercept, or procures any other person to intercept or endeavor to intercept any wire, electronic or oral communication;

(2) intentionally discloses or endeavors to disclose to any other person the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know that the information was obtained through the interception of a wire, electronic or oral communication; or

(3) intentionally uses or endeavors to use the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know, that the information was obtained through the interception of a wire, electronic or oral communication.

Seven police officers employed by the City of Pittsburgh Bureau of Police have teamed up to sue Matthew Lackner (a Commander) and the City of Pittsburgh.

Plaintiffs Colleen Jumba Baker, Brittany Mercer, Matthew O’Brien, Jonathan Sharp, Matthew Zuccher, Christopher Sedlak, and Devlyn Valencic Keller allege that from September 27, 2023 through October 4, 2023, Lackner used body-worn cameras to video and audio record Plaintiffs, and used the GPS component of the body-worn camera to track them.

Yes. To track them.

Plaintiffs allege they were unaware that Lackner was using a body-worn camera to video and audio record them, or that he was using the camera’s GPS function. Nor did they consent to having their conversations audio recorded by Lackner and/or the City of Pittsburgh.

Interestingly, Lackner has already been charged criminally with four (4) counts of Illegal Use of Wire or Oral Communication under the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703(1).

So now Plaintiffs seek compensatory damages, including actual or statutory damages, punitive damages, and reasonable attorneys’ fees.

This case was just filed, so it will be interesting to see how it progresses. But it is an important reminder that many states have their own privacy laws, and that businesses should take those laws seriously to avoid lawsuits like this one.

Case No.: 2:24-cv-00461

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.
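To make that commitment concrete, here is a minimal sketch, in Python, of what a structured disclosure record might look like. The class and field names are our own illustration, not drawn from any statute or standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Illustrative record of the minimum explainability disclosures:
    what AI technology is used, what data it uses, and how outputs
    are generated. Field names are hypothetical."""
    tool_name: str                # what AI technology you are using
    vendor_or_inhouse: str        # built in-house or sourced from a vendor
    data_sources: list[str] = field(default_factory=list)  # what data is used
    output_description: str = ""  # how deliverables/outputs are generated
    human_review: bool = True     # whether a human reviews outputs

example = AIDisclosure(
    tool_name="contract-summarizer",
    vendor_or_inhouse="vendor",
    data_sources=["customer contracts (PII redacted)"],
    output_description="extractive summaries, reviewed by counsel before delivery",
)
```

Even a record this simple answers the three baseline questions for each tool in use, and it gives auditors and stakeholders something concrete to review.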

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing data privacy and cyber risk policies you have in place so that they also address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you, and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s updates to the FCC data breach notification rules (the “Rules”) went into effect. They were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Courts of Appeals for the Sixth and Fifth Circuits, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the Judicial Panel on Multidistrict Litigation consolidated in the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.


Montana Passes 9th Comprehensive Consumer Privacy Law in the U.S.

On May 19, 2023, Montana’s Governor signed Senate Bill 384, the Consumer Data Privacy Act. Montana joins California, Colorado, Connecticut, Indiana, Iowa, Tennessee, Utah, and Virginia in enacting a comprehensive consumer privacy law. The law is scheduled to take effect on October 1, 2024.

When does the law apply?

The law applies to a person who conducts business in the state of Montana and:

  • Controls or processes the personal data of not less than 50,000 consumers (defined as Montana residents), excluding data controlled or processed solely to complete a payment transaction; or
  • Controls or processes the personal data of not less than 25,000 consumers and derives more than 25% of gross revenue from the sale of personal data.

Hereafter these covered persons are referred to as controllers.
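To make the applicability thresholds concrete, here is a minimal sketch in Python. The function and parameter names are illustrative only; the statute, not this sketch, controls:

```python
def is_covered_controller(conducts_business_in_montana: bool,
                          consumers_processed: int,
                          pct_revenue_from_data_sales: float) -> bool:
    """Illustrative applicability test under Montana's Consumer Data
    Privacy Act. `consumers_processed` is assumed to already exclude
    data controlled or processed solely to complete a payment
    transaction (relevant to the first threshold)."""
    if not conducts_business_in_montana:
        return False
    # Threshold 1: personal data of at least 50,000 Montana consumers.
    if consumers_processed >= 50_000:
        return True
    # Threshold 2: at least 25,000 consumers AND more than 25% of gross
    # revenue derived from the sale of personal data.
    return consumers_processed >= 25_000 and pct_revenue_from_data_sales > 25.0
```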

The following entities are exempt from coverage under the law:

  • Body, authority, board, bureau, commission, district, or agency of this state or any political subdivision of this state;
  • Nonprofit organization;
  • Institution of higher education;
  • National securities association that is registered under 15 U.S.C. 78o-3 of the federal Securities Exchange Act of 1934;
  • A financial institution or an affiliate of a financial institution governed by Title V of the Gramm-Leach-Bliley Act;
  • Covered entity or business associate as defined in the privacy regulations of the federal Health Insurance Portability and Accountability Act (HIPAA).

Who is protected by the law?

Under the law, a protected consumer is defined as an individual who resides in the state of Montana.

However, the term “consumer” does not include an individual acting in a commercial or employment context or as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency whose communications or transactions with the controller occur solely within the context of that individual’s role with the company, partnership, sole proprietorship, nonprofit, or government agency.

What data is protected by the law?

The statute protects personal data defined as information that is linked or reasonably linkable to an identified or identifiable individual.

There are several exemptions to protected personal data, including for data protected under HIPAA and other federal statutes.

What are the rights of consumers?

Under the new law, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data;
  • Access personal data processed by a controller;
  • Delete personal data;
  • Obtain a copy of personal data previously provided to a controller; and
  • Opt out of the processing of the consumer’s personal data for the purposes of targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects.

What obligations do businesses have?

The controller shall comply with requests by a consumer set forth in the statute without undue delay but no later than 45 days after receipt of the request.

If a controller declines to act on a consumer’s request, it shall inform the consumer of the reason for declining without undue delay, but no later than 45 days after receipt of the request.
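As a worked example of the 45-day clock, assuming the deadline is counted in calendar days from receipt (confirm the precise counting rule under the statute):

```python
from datetime import date, timedelta

def response_deadline(received: date, days: int = 45) -> date:
    """Latest date to respond to, or decline, a consumer request,
    counting calendar days from receipt. Illustrative only."""
    return received + timedelta(days=days)

# For example, a request received on the law's effective date:
print(response_deadline(date(2024, 10, 1)))  # 2024-11-15
```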

The controller shall also conduct and document a data protection assessment for each of its processing activities that presents a heightened risk of harm to a consumer.

How is the law enforced?

Under the statute, the state attorney general has exclusive authority to enforce violations of the statute. There is no private right of action under Montana’s statute.

Jackson Lewis P.C. © 2023

For more Privacy Legal News, click here to visit the National Law Review.

First BIPA Trial Results in $228M Judgment for Plaintiffs

Businesses defending class actions under the Illinois Biometric Information Privacy Act (BIPA) have struggled to defeat claims in recent years, as courts have rejected a succession of defenses.

We have been following this issue and have previously reported on this trend, which continued last week in the first BIPA class action to go to trial. The Illinois federal jury found that BNSF Railway Co. violated BIPA, resulting in a $228 million award to a class of more than 45,000 truck drivers.

Named plaintiff Richard Rogers filed suit in Illinois state court in April 2019, and BNSF removed the case to the US District Court for the Northern District of Illinois. Plaintiff alleged on behalf of a putative class of BNSF truck drivers that BNSF required the drivers to provide biometric identifiers in the form of fingerprints and hand geometry to access BNSF’s facilities. The lawsuit alleged BNSF violated BIPA by (i) failing to inform class members their biometric identifiers or information were being collected or stored prior to collection, (ii) failing to inform class members of the specific purpose and length of term for which the biometric identifiers or information were being collected, and (iii) failing to obtain informed written consent from class members prior to collection.

In October 2019, the court rejected BNSF’s legal defenses that the class’s BIPA claims were preempted by three federal statutes governing interstate commerce and transportation: the Federal Railroad Safety Act, the Interstate Commerce Commission Termination Act, and the Federal Aviation Administration Authorization Act. The court held that BIPA’s regulation of how BNSF obtained biometric identifiers or information did not unreasonably interfere with federal regulation of rail transportation, motor carrier prices, routes, or services, or safety and security of railroads.

Throughout the case, including at trial, BNSF also argued it should not be held liable where the biometric data was collected by its third-party contractor, Remprex LLC, which BNSF hired to process drivers at the gates of BNSF’s facilities. In March 2022, the court denied BNSF’s motion for summary judgment, pointing to evidence that BNSF employees were also involved in registering drivers in the biometric systems and that BNSF gave direction to Remprex regarding the management and use of the systems. The court concluded (correctly, as it turned out) that a jury could find that BNSF, not just Remprex, had violated BIPA.

The case proceeded to trial in October 2022 before US District Judge Matthew Kennelly. At trial, BNSF continued to argue it should not be held responsible for Remprex’s collection of drivers’ fingerprints. Plaintiff’s counsel argued BNSF could not avoid liability by pleading ignorance and pointing to a third-party contractor that BNSF controlled. Following a five-day trial and roughly one hour of deliberations, the jury returned a verdict in favor of the class, finding that BNSF recklessly or intentionally violated BIPA 45,600 times. The jury did not calculate damages. Rather, because BIPA provides for $5,000 in liquidated damages for every willful or reckless violation (and $1,000 for every negligent violation), Judge Kennelly applied BIPA’s damages provision, which resulted in a judgment of $228 million in damages. The judgment does not include attorneys’ fees, which plaintiff is entitled to and will inevitably seek under BIPA.
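The arithmetic behind the judgment is simple: 45,600 violations times $5,000 in liquidated damages per intentional or reckless violation equals $228 million. A quick illustrative check in Python:

```python
# BIPA liquidated damages per violation: $5,000 if intentional or
# reckless, $1,000 if negligent.
violations = 45_600
print(f"${violations * 5_000:,}")  # $228,000,000 (intentional/reckless tier)
print(f"${violations * 1_000:,}")  # $45,600,000 (had the jury found mere negligence)
```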

While an appeal will almost certainly follow, the BNSF case serves as a stark reminder of the potential exposure companies face under BIPA. Businesses that collect biometric data must ensure they do so in compliance with BIPA and other biometric privacy regulations. Where BIPA claims have been asserted, companies should promptly seek outside counsel to develop a legal strategy for a successful resolution.

For more Privacy and Cybersecurity Legal News, click here to visit the National Law Review.

© 2022 ArentFox Schiff LLP

Federal Privacy Law – Could It Happen in 2019?

This was a busy week for activity and discussions on the federal level regarding existing privacy laws – namely the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). But the real question is, could a federal privacy law actually happen in 2019? Cybersecurity issues and the possibility of a federal privacy law were in the spotlight at the recent Senate Judiciary Committee hearing. This week also saw the introduction of bipartisan federal legislation regarding Internet of Things (IoT)-connected devices.

Senate Judiciary Committee Hearing on GDPR and CCPA

Let’s start by discussing this week’s hearing before the Senate Judiciary Committee in Washington. On March 12, the Committee convened a hearing entitled GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation.  The Committee received testimony from several interested parties who discussed the pros and cons of both laws from various perspectives. One thing was clear – technology has outpaced the law, and several of those who provided testimony to the Committee argued strongly for one uniform federal privacy law rather than the collection of 50 different state laws.

Some of the testimony focused on the impact of the GDPR, both on businesses and economic concerns, and some felt it is too early yet to truly know the full impact. Others discussed ethical concerns regarding data use, competition, artificial intelligence, and the necessity for meaningful enforcement by the Federal Trade Commission (FTC).

One thing made clear by the testimony presented is that people want their data protected, and maybe even want to prevent it from being shared and sold, but the current landscape makes that difficult for consumers to navigate. The reality is that many of us simply can’t keep track of every privacy policy we read, or every “cookie” we consent to. It’s also increasingly clear that putting the burden on consumers to opt in or out, or to figure out the puzzle of where our data is going and how it’s used, may not be the most effective means of legislating privacy protections.

Model Federal Privacy Law

Several of the presenters at the Senate hearing included legislative proposals for a federal privacy law. (See the link included above to the Committee website with links to individual testimony). Recently, the U.S. Chamber of Commerce also released its version of a model federal privacy law. The model legislation proposal contains consumer opt-out rights and a deletion option, and would empower the FTC to enforce violations and impose civil penalties for violations.

IoT Federal Legislation Is Back – Sort of

In 2017, federal legislation regarding IoT was introduced but didn’t pass. This week, the Internet of Things Cybersecurity Improvement Act of 2019 was introduced in Congress in a bipartisan effort to impose cybersecurity standards on IoT devices purchased by the federal government. The new bipartisan bill’s supporters acknowledge the proliferation of internet-connected things and devices and the risks that IoT cybersecurity vulnerabilities pose to the federal government. This latest federal legislation applies to federal government purchases of IoT devices and not to a broader audience. We recently discussed the California IoT law that was enacted last year. Effective January 1, 2020, manufacturers of IoT devices sold in California must equip each device with “reasonable security feature or features” to “protect the device and any information contained therein from unauthorized access, destruction, use, modification or disclosure.”

The convergence of the new California law and the prospect of federal IoT legislation raises the question of whether changes in California and at the federal level will be enough to drive the industry to increase the security of all IoT devices. The even bigger question is whether there is the political will in 2019 to enact a comprehensive federal privacy law. That remains to be seen as the year progresses.

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This post was written by Deborah A. George of Robinson & Cole LLP.