Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are revolutionizing industries, bringing about significant changes in information governance practices. Over the next few years, it is anticipated that an increasing number of companies will harness the power of AI and automation to drive efficient data analysis, classification, and management. This transformative approach will not only enhance risk identification and compliance but also streamline workflows and alleviate administrative burdens, leading to improved overall operational efficiency and effectiveness. As organizations adapt and embrace these technological advancements, they will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not necessitate, however, that a controller makes every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.”[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Modern artificial intelligence models typically need data to be trained and fine-tuned; they use data – including personal information – to recognize patterns and predict results.

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:

Purpose of processing – why the AI is being trained
  • Indicative of a controller: If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Essential means – data types used in training
  • Indicative of a controller: If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Duration personal information is held within the training engine
  • Indicative of a controller: If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Recipients of the personal information
  • Indicative of a controller: If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Individuals whose information is included
  • Indicative of a controller: If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.
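Read together, these factors amount to a rough decision table. Purely as an illustration – not legal advice, and using hypothetical flag and function names of my own – the same logic can be sketched in a few lines of Python:

```python
from dataclasses import dataclass

@dataclass
class TrainingArrangement:
    # Hypothetical flags: does the organization itself decide each factor?
    decides_purpose: bool          # why the AI is being trained
    selects_data_types: bool       # which data fields are used in training
    controls_retention: bool       # how long the AI retains the training data
    controls_recipients: bool      # who may access the AI and its training data
    selects_data_subjects: bool    # whose personal information is used

def likely_gdpr_role(org: TrainingArrangement) -> str:
    """Heuristic only: deciding the purpose or the 'essential means' of
    processing points toward controller status; acting solely on a third
    party's instructions points toward processor status. Real determinations
    are fact-specific."""
    decides_essential_means = (
        org.selects_data_types
        or org.controls_retention
        or org.controls_recipients
        or org.selects_data_subjects
    )
    if org.decides_purpose or decides_essential_means:
        return "likely a controller"
    return "possibly a processor (acting on a third party's instructions)"

# Example: a vendor that trains a model strictly as instructed by its customer.
print(likely_gdpr_role(TrainingArrangement(False, False, False, False, False)))
```

The point of the sketch is simply that controller status turns on who makes the decisions about purpose and essential means, not on who physically runs the training job.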

 

[1] GDPR, Article 4(7).

[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.

For more Privacy Legal News, click here to visit the National Law Review.

What’s in the American Data Privacy and Protection Act?

Congress is considering omnibus privacy legislation, and it reportedly has bipartisan support. If passed, this would be a massive shake-up for American consumer privacy, which has been left to the states up to this point. So, how does the American Data Privacy and Protection Act (ADPPA) stack up against existing privacy legislation such as the California Consumer Privacy Act and the Virginia Consumer Data Protection Act?

The ADPPA includes a much broader definition of sensitive data than we’ve seen in state-level laws. Some notable inclusions are income level, voicemails and text messages, calendar information, data relating to a known child under the age of 17, and depictions of an individual’s “undergarment-clad” private area. These enumerated categories go much further than recent state laws, which tend to focus on health and demographic information. One asterisk though – unlike other state laws, the ADPPA only considers sexual orientation information to be sensitive when it is “inconsistent with the individual’s reasonable expectation” of disclosure. It’s unclear at this point, for example, if a member of the LGBTQ+ community who is out to friends would have a “reasonable expectation” not to be outed to their employer.

Like the European Union’s General Data Protection Regulation, the ADPPA includes a duty of data minimization on covered entities (the ADPPA borrows the term “covered entity” from HIPAA). There is a laundry list of exceptions to this rule, including one for using data collected prior to passage “to conduct internal research.” Companies used to kitchen-sink analytics practices may appreciate this savings clause as they adjust to making do with less access to consumer data.

Another innovation is tiered applicability, in which all commercial entities are “covered entities,” but “large data holders” – those with over $250,000,000 in gross revenue that process either 5,000,000 individuals’ data or 200,000 individuals’ sensitive data – are subject to additional requirements and limitations, while “small businesses” enjoy additional exemptions. Until now, state consumer privacy laws have made applicability an all-or-nothing proposition. All covered entities, though, would be required to comply with browser opt-out signals, following a trend started by the California Privacy Protection Agency’s recent draft regulations. Additionally, individuals would have a private right of action against covered entities to seek monetary and injunctive relief.
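As a rough illustration of this tiering – using the revenue and volume figures quoted above, hypothetical function and variable names, and a deliberately simplified reading of the thresholds – the “large data holder” test might look like this:

```python
def is_large_data_holder(gross_revenue_usd: int,
                         individuals_processed: int,
                         individuals_sensitive_processed: int) -> bool:
    """Simplified sketch of the ADPPA 'large data holder' tier described above:
    over $250,000,000 in gross revenue, plus processing of either 5,000,000
    individuals' data or 200,000 individuals' sensitive data."""
    return gross_revenue_usd > 250_000_000 and (
        individuals_processed >= 5_000_000
        or individuals_sensitive_processed >= 200_000
    )

print(is_large_data_holder(300_000_000, 6_000_000, 0))      # True
print(is_large_data_holder(300_000_000, 100_000, 50_000))   # False - below both volume thresholds
```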

Finally, and controversially, the ADPPA explicitly preempts all state privacy laws. It makes sense – the globalized nature of the internet means that any less-stringent state law would become the exception that kills the rule. Still, companies that only recently finalized CCPA- and CPRA-compliance programs won’t appreciate being sent back to the drawing board.

Read the bill for yourself here.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Italian Garante Bans Google Analytics

On June 23, 2022, Italy’s data protection authority (the “Garante”) determined that a website’s use of the audience measurement tool Google Analytics is not compliant with the EU General Data Protection Regulation (“GDPR”), as the tool transfers personal data to the United States, which does not offer an adequate level of data protection. In making this determination, the Garante joins other EU data protection authorities, including the French and Austrian regulators, that also have found use of the tool to be unlawful.

The Garante determined that websites using Google Analytics collected, via cookies, personal data including user interactions with the website, pages visited, browser information, operating system, screen resolution, selected language, date and time of page views and user device IP address. This information was transferred to the United States without the additional safeguards for personal data required under the GDPR following the Schrems II determination, and therefore faced the possibility of governmental access. In the Garante’s ruling, website operator Caffeina Media S.r.l. was ordered to bring its processing into compliance with the GDPR within 90 days, but the ruling has wider implications as the Garante commented that it had received many “alerts and queries” relating to Google Analytics. It also stated that it called upon “all controllers to verify that the use of cookies and other tracking tools on their websites is compliant with data protection law; this applies in particular to Google Analytics and similar services.”

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Thailand’s Personal Data Protection Act Enters into Force

On June 1, 2022, Thailand’s Personal Data Protection Act (“PDPA”) entered into force after three years of delays. The PDPA, originally enacted in May 2019, provides for a one-year grace period, with the main operative provisions of the law originally set to come into force in 2020. Due to the COVID-19 pandemic, however, the Thai government issued royal decrees to extend the compliance deadline to June 1, 2022. 

The PDPA mirrors the EU General Data Protection Regulation (“GDPR”) in many respects. Specifically, it requires data controllers and processors to have a valid legal basis for processing personal data (i.e., data that can identify living natural persons directly or indirectly). If such personal data is sensitive personal data (such as health data, biometric data, race, religion, sexual preference and criminal record), data controllers and processors must ensure that data subjects give explicit consent for any collection, use or disclosure of such data. Exemptions are granted for public interest, contractual obligations, vital interest or compliance with the law.

The PDPA applies to entities both in Thailand and abroad that process personal data for the provision of products or services in Thailand. Like the GDPR, the PDPA guarantees data subjects rights, including the right to be informed; to access, rectify and update data; to restrict and object to processing; and the right to data erasure and portability. Breaches may result in fines between THB500,000 (U.S.$14,432) and THB5 million, plus punitive compensation. Certain breaches involving sensitive personal data and unlawful disclosure also carry criminal penalties, including imprisonment of up to one year.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

GDPR Privacy Rules: The Other Shoe Drops

Four years after GDPR was implemented, we are seeing the pillars of the internet business destroyed. Given two new EU decisions affecting the practical management of data, all companies collecting consumer data in the EU are re-evaluating their business models and will soon be considering wholesale changes.

On one hand, the GDPR is creating the world its drafters intended – a world where personal data is less of a commodity exploited and traded by business. On the other hand, GDPR enforcement has taken the form of a wrecking ball, leading to data localization in Europe and substitution of government meddling for consumer choice.

For years we have watched the EU courts and enforcement agencies apply GDPR text to real-life cases, wondering if the legal application would be more of a nip and tuck operation on ecommerce or something more bloody and brutal. In 2022, we received our answer, and the bodies are dropping.

In January, the Austrian data protection authority decided that companies can’t use Google Analytics to study their own site’s web traffic. The same conclusion was reached last week by French regulators. While Google doesn’t announce statistics about product usage, website tracker BuiltWith reports that 29.3 million websites use Google Analytics, including 69.5 percent of Quantcast’s Top 10,000 sites – more than ten times the next most popular option. So vast numbers of companies operating in Europe will need to change their platform analytics provider – if the Euro-crats will allow them to use site analytics at all.

But these decisions were not based on the functionality of Google Analytics, a tool that does not even capture personally identifiable information – no names, no home or office address, no phone numbers. Instead, these decisions that will harm thousands of businesses were a result of the Schrems II decision, finding fault in the transfer of this non-identifiable data to a company based in the United States. The problem here for European decision-makers is that US law enforcement may have access to this data if courts allow them. I have written before about this illogical conclusion and won’t restate the many arguments here, other than to say that EU law enforcement behaves the same way.

The effects of this decision will be felt far beyond the huge customer base of Google Analytics.  The logic of this decision effectively means that companies collecting data from EU citizens can no longer use US-based cloud services like Amazon Web Services, IBM, Google, Oracle or Microsoft. I would anticipate that huge cloud player Alibaba Cloud could suffer the same proscription if Europe’s privacy panjandrums decide that China’s privacy protection is as threatening as the US.

The Austrians held that all the sophisticated measures taken by Google to encrypt analytic data meant nothing, because if Google could decrypt it, so could the US government. By this logic, no US cloud provider – the world’s primary business data support network – could “safely” hold EU data. Which means that the Euro-crats are preparing to fine any EU company that uses a US cloud provider. Max Schrems saw this decision in stark terms, stating, “The bottom line is: Companies can’t use US cloud services in Europe anymore.”

This decision will ultimately support the Euro-crats’ goal of data localization as companies try to organize local storage/processing solutions to avoid fines. Readers of this blog have seen coverage of the EU’s tilt toward data localization (for example, here and here) and away from the open internet that European politicians once held as the ideal. The Euro-crats are taking serious steps toward forcing localized data processing and cutting US businesses out of the ecommerce business ecosystem. The Google Analytics decision is likely to be seen as a tipping point in years to come.

In a second major practical online privacy decision, earlier this month the Belgian Data Protection Authority ruled that the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF), a widely used technical standard built for publishers, advertisers, and technology vendors to obtain user consent for data processing, does not comply with the GDPR. The TCF allows users to accept or reject cookie-based advertising, relieving websites of the need to create their own expensive technical solutions, and creating a consistent experience for consumers. Now the TCF is considered per se illegal under EU privacy rules, forcing thousands of businesses to search for or design their own alternatives and removing online choices for European residents.

The Belgian privacy authority reached this conclusion by holding that the Interactive Advertising Bureau was a “controller” of all the data managed under its proposed framework. As stated by the Center for Data Innovation, this decision implies “that any good-faith effort to implement a common data protection protocol by an umbrella organization that wants to uphold GDPR makes said organization liable for the data processing that takes place under this protocol.” No industry group will want to put itself in this position, leaving businesses to their own devices and making ecommerce data collection much less consistent and much more expensive – even if that data collection is necessary to fulfill the requests of consumers.

For years companies thought that informed consumer consent would be a way to personalize messaging and keep consumer costs low online, but the EU has thrown all online consent regimes into question. EU regulators have effectively decided that people can’t make their own decisions about allowing data to be collected. If TCF – the consent system used by 80% of the European internet and a system designed specifically to meet the demands of the GDPR – is now illegal, then, for a second time in a month, all online consumer commerce is thrown into confusion. Thousands were operating websites with TCF and Google Analytics, believing they were following the letter of the law.  That confidence has been smashed.

We are finally seeing the practical effects of the GDPR beyond its simple utility for fining US tech companies.  Those effects are leading to a closed-border internet around Europe and a costlier, less customizable internet for EU citizens. The EU is clearly harming businesses around the world and making its internet a more cramped place. I have trouble seeing the logic and benefit of these decisions, but the GDPR was written to shake the system, and privacy benefits may emerge.

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more articles about international privacy, visit the NLR Cybersecurity, Media & FCC section.

New Poll Underscores Growing Support for National Data Privacy Legislation

Over half of all Americans would support a federal data privacy law, according to a recent poll from Politico and Morning Consult. The poll found that 56 percent of registered voters would either strongly or somewhat support a proposal to “make it illegal for social media companies to use personal data to recommend content via algorithms.” Democrats were most likely to support the proposal at 62 percent, compared to 54 percent of Republicans and 50 percent of Independents. Still, the numbers may show that bipartisan action is possible.

The poll is indicative of Americans’ increasing data privacy awareness and concerns. Colorado, Virginia, and California all passed or updated data privacy laws within the last year, and nearly every state is considering similar legislation. Additionally, Congress held several high-profile hearings last year soliciting testimony from several tech industry leaders and whistleblower Frances Haugen. In the private sector, Meta CEO Mark Zuckerberg has come out in favor of a national data privacy standard similar to the EU’s General Data Protection Regulation (GDPR).

Politico and Morning Consult released the poll results days after Senator Ron Wyden (D-OR) accepted a 24,000-signature petition calling for Congress to pass a federal data protection law. Senator Wyden, who recently introduced his own data privacy proposal called the “Mind Your Own Business Act,” said it was “past time” for Congress to act.

He may be right: U.S./EU data flows have been on borrowed time since 2020. The GDPR prohibits data flows from the EU to countries with inadequate data protection laws, including the United States. The EU-U.S. Privacy Shield framework allowed data to keep flowing despite that rule, but an EU court invalidated the agreement in 2020, and data flows between the US and the EU have been in legal limbo ever since. Eventually, Congress and the EU will need to address the situation, and a federal data protection law would be a long-term solution.

This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law. Click here to read more about the Data Privacy and Cybersecurity practice at Robinson & Cole LLP.

For more data privacy and cybersecurity news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Trifecta of New Privacy Laws Protect Personal Data

Following California’s lead, two states recently enacted new privacy laws designed to protect consumers’ rights over their personal data. The Colorado Privacy Act and the Virginia Consumer Data Protection Act mimic California privacy laws and the EU General Data Protection Regulation (GDPR) by imposing stringent requirements on companies that collect or process personal data of state residents. Failure to comply may subject companies to enforcement actions and stiff fines and penalties by regulators.

Virginia Consumer Data Protection Act

On March 2, 2021, Virginia’s legislature passed the Consumer Data Protection Act (CDPA, the Act), which goes into effect on January 1, 2023.

Organizations Subject to the CDPA

The Act generally applies to entities that conduct business in the state of Virginia or that produce products or services targeted to residents of the state and meet one or both of the following criteria: (1) control or process personal data of at least 100,000 Virginia consumers annually, or (2) control or process personal data of at least 25,000 consumers (statute silent as to whether this is an annual requirement) and derive more than 50 percent of gross revenue from the sale of personal data. The processing of personal data includes the collection, use, storage, disclosure, analysis, deletion or modification of personal data.

Notably, certain organizations are exempt from compliance with the CDPA, including government agencies, financial institutions subject to the Gramm-Leach-Bliley Act (GLBA), entities subject to the Health Insurance Portability and Accountability Act (HIPAA), nonprofit organizations and institutions of higher education.

Broad Definition of Personal Data

The CDPA broadly defines personal data to include any information that is linked to an identifiable individual, but does not include de-identified or publicly available information. The Act distinguishes personal sensitive data, which includes specific categories of data such as race, ethnicity, religion, mental or physical health diagnosis, sexual orientation, citizenship or immigration status, genetic or biometric data, children’s data and geolocation data.

Consumers’ Data Protection Rights

The new Virginia privacy law recognizes certain data protection rights over consumers’ personal information, including the right to access their data, correct inaccuracies in their data, request deletion of their data, receive a copy of their data, and opt out of the processing of their personal data for purposes of targeted advertising, the sale of their data or profiling.

If a consumer exercises any of these rights under the CDPA, a company must respond within 45 days – subject to a one-time 45-day extension. If the company declines to take action in response to the consumer’s request, the company must notify the consumer within 45 days of receipt of the request. Any information provided in response to a consumer’s request shall be provided by the company free of charge, up to twice annually per consumer. The company must establish a procedure for a consumer to appeal the company’s refusal to take action on the consumer’s request. The company is required to provide the consumer with written notice of the decision on appeal within 60 days of receipt of an appeal.
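To make the CDPA clock concrete, here is a minimal sketch of the response windows described above, assuming calendar days and purely hypothetical dates:

```python
from datetime import date, timedelta

request_received = date(2023, 1, 10)                                # hypothetical consumer request
initial_response_due = request_received + timedelta(days=45)        # respond within 45 days
extended_response_due = initial_response_due + timedelta(days=45)   # one-time 45-day extension

appeal_received = date(2023, 3, 1)                                  # hypothetical appeal of a refusal
appeal_decision_due = appeal_received + timedelta(days=60)          # written decision within 60 days

print(initial_response_due, extended_response_due, appeal_decision_due)
# 2023-02-24 2023-04-10 2023-04-30
```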

Responsibilities of Data Controllers

The CDPA imposes several requirements on companies/data controllers, including limiting the collection of personal data, safeguarding personal data by implementing reasonable data security practices and obtaining a consumer’s consent prior to processing any sensitive data.

Moreover, data controllers should have a Privacy Notice that clearly explains the categories of personal data collected and processed; the purpose for processing personal data; how consumers can exercise their rights over their personal data; any categories of personal data shared with third parties; the categories of third parties with which personal data is shared; and consumers’ right to opt out of the processing of their personal data.

Importantly, all data controllers are required to conduct and document a data protection assessment (DPA). The DPA should identify and weigh the benefits and risks of processing consumers’ personal data and the safeguards that can reduce such risks. The Virginia Attorney General (VA AG) may require a controller to produce a copy of its DPA upon request.

Furthermore, data controllers must enter into a binding written contract with any third parties that process personal data (data processors) at the direction of the controller. This contract should address the following issues: instructions for processing personal data; nature and purpose of processing; type of data subject to processing; duration of processing; duty of confidentiality with respect to the data; and deletion or return of data to the data controller. In addition, the contract should include a provision that enables the data controller or a third party to conduct an assessment of the data processor’s policies and procedures for compliance with the protection of personal data.

Regulatory Enforcement

The VA AG has the exclusive authority to enforce the CDPA. Prior to initiating an enforcement action, the VA AG is required to provide the company/data controller with written notice identifying violations of the Act. If the company cures the violations within 30 days and provides the VA AG with express notice of the same, then no action will be taken against the company. The law permits the VA AG to impose statutory civil penalties of up to $7,500 for each violation of the Act. Moreover, the VA AG also may seek recovery of its attorneys’ fees and costs incurred in investigating and enforcing the resolution of violations of the Act.

Colorado Privacy Act

On July 7, 2021, Colorado passed the Colorado Privacy Act (CPA), which takes effect on July 1, 2023. In many respects, the CPA mirrors Virginia’s new privacy law.

Organizations Subject to the Law

The CPA applies to companies/data controllers that:

  • Conduct business in the state of Colorado or
  • Produce or deliver commercial products or services that are targeted to residents of Colorado and
  • Satisfy one or both of the following criteria:
    • Control or process personal data of 100,000 or more Colorado consumers annually
    • Derive revenue from the sale of personal data and process or control personal data of 25,000 or more Colorado consumers (statute silent as to whether this is an annual requirement).
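For comparison, here is a minimal sketch of the headline applicability thresholds of the two statutes described above – hypothetical function names, simplified to ignore the “conducts business in / targets residents” prong and the open “annual” question:

```python
def virginia_cdpa_applies(consumers: int, pct_revenue_from_data_sales: float) -> bool:
    """Simplified: 100,000+ Virginia consumers, OR 25,000+ consumers while deriving
    more than 50 percent of gross revenue from the sale of personal data."""
    return consumers >= 100_000 or (consumers >= 25_000 and pct_revenue_from_data_sales > 50)

def colorado_cpa_applies(consumers: int, derives_revenue_from_data_sales: bool) -> bool:
    """Simplified: 100,000+ Colorado consumers, OR deriving any revenue from the sale
    of personal data while processing data of 25,000+ consumers."""
    return consumers >= 100_000 or (derives_revenue_from_data_sales and consumers >= 25_000)

# A broker holding data on 30,000 residents of each state and selling some of it:
print(virginia_cdpa_applies(30_000, pct_revenue_from_data_sales=40.0))      # False - under the 50% revenue test
print(colorado_cpa_applies(30_000, derives_revenue_from_data_sales=True))   # True - any sale revenue suffices
```

The contrast in the second prong – Virginia’s “more than 50 percent of gross revenue” versus Colorado’s “derives revenue” – is the main practical difference in scope between the two laws.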

Notably, the CPA does not apply to personal data that is protected under certain other laws, including GLBA, HIPAA, the Fair Credit Reporting Act, the Driver’s Privacy Protection Act, Children’s Online Privacy Protection Act (COPPA), Family Educational Rights and Privacy Act (FERPA), customer data maintained by a public utility, employment records or data maintained by an institution of higher education. 

Broad Definition of Personal Data

The CPA broadly defines personal data as information that can be linked to an identifiable individual, but does not include de-identified or publicly available information. The law also distinguishes personal sensitive data that may include race, ethnicity, religion, mental or physical health condition or diagnosis, sexual orientation or citizenship. 

Consumers’ Data Protection Rights

The law sets forth consumers’ data protection rights, including the right to access their personal data; the right to correct inaccuracies in their data; the right to request deletion of their data; the right to obtain a copy of their data; and the right to opt out of the processing of their personal data for the purposes of targeted advertising, the sale of their data or profiling.

A company/data controller must respond to a consumer’s request within 45 days – subject to a single 45-day extension as reasonably required. The company must notify the consumer within 45 days if the company declines to take action in response to a consumer’s request. Information provided in response to a consumer request shall be provided by the company free of charge, once annually per consumer. The company must establish a procedure for a consumer to appeal the company’s refusal to take action on a consumer’s request. The company shall provide the consumer a written decision on an appeal within 45 days of receipt of the appeal. The company may extend the appeal response deadline by 60 additional days where reasonably necessary.

Responsibilities of Data Controllers

The CPA imposes a number of stringent requirements on companies, including limiting the collection of personal data to what is reasonably necessary; taking reasonable measures to secure personal data from unauthorized acquisition during both storage and use; and obtaining a consumer’s consent prior to processing any sensitive data.

The data controller should have a clear and conspicuous Privacy Notice that sets forth the categories of personal data processed by the company, the purpose for processing personal data and the means by which consumers can withdraw their consent to processing of their data. The Privacy Notice should identify the categories of personal data collected or processed, categories of personal data shared with third parties and the categories of third parties with which personal data is shared. The Privacy Notice also must disclose whether the company sells personal data or processes personal data for targeted advertising, and the means by which consumers can opt out of the sale or processing of their data. 

A data controller shall not process any personal data that represents a heightened risk of harm to a consumer without conducting a data protection assessment (DPA). The DPA must identify and weigh the benefits from the processing of personal data that may flow to the controller, the consumer and the public against the potential risks to the rights of the consumer. These risks may be mitigated by safeguards adopted by the company. The company may be required to produce its DPA to the Colorado Attorney General (CO AG) upon request.

A company/data controller must enter into a binding contract with any third parties (data processors) that process personal data at the direction of the data controller. This contract should address the following issues: data processing procedures, instructions for processing personal data, nature and purpose of processing, type of data subject to processing, duration of processing, and deletion or return of data by the data processor. The contract also should include a provision that allows the controller to perform audits and inspections of the processor at least once annually and at the processor’s expense. The audit should examine the processor’s policies and procedures regarding the protection of personal data. If an audit is performed by a third party, the processor shall provide a copy of the audit report to the controller upon request. 

Regulatory Enforcement

The CO AG has the exclusive authority to enforce the CPA by bringing an enforcement action on behalf of Colorado consumers. A violation of the CPA is considered to be a deceptive trade practice. Prior to initiating an enforcement action, the CO AG must issue a notice of violation to the company and provide an opportunity to cure the violation. If the company fails to cure the violation within 60 days of receipt of notice of the violation, the CO AG may commence an enforcement action. Civil penalties may be imposed for violations of the Act.

Conclusion

Companies that collect or process consumer data are well advised to heed these new privacy laws imposed by Virginia and Colorado, since more states are sure to adopt similar laws. Failure to adhere to these new stringent legal requirements summarized in the table below may subject companies to regulatory enforcement actions, in addition to fines and penalties.

| Requirements | Virginia | Colorado |
| --- | --- | --- |
| Consumer Data Protection Rights | | |
| Right to access personal data | X | X |
| Right to correct personal data | X | X |
| Right to delete personal data | X | X |
| Right to receive a copy of personal data | X | X |
| Right to opt out of processing personal data | X | X |
| Duty to Respond to Consumer Requests | | |
| Within 45 days (subject to one-time extension) | X | X |
| Notice of refusal to take action | X | X |
| Provide information free of charge | X | X |
| Appeal process | X | X |
| Privacy Notice | | |
| Categories of personal data collected or processed | X | X |
| Purpose for processing data | X | X |
| How consumers can exercise their rights | X | X |
| Categories of personal data shared with third parties | X | X |
| Categories of third parties with which personal data is shared | X | X |
| How consumers can opt out of the sale or processing of their personal data | X | X |
| Data Protection Assessment (DPA) | | |
| Documented DPA weighing the benefits and risks of processing consumers’ personal data, and the safeguards that can reduce such risks | X | X |
| Binding Contract Between Data Controller and Third-Party Data Processor | | |
| Instructions for processing personal data | X | X |
| Nature and purpose of the processing | X | X |
| Type of data subject to processing | X | X |
| Duration of processing | X | X |
| Duty of confidentiality | X | X |
| Deletion or return of data | X | X |
| Audits of data processor’s policies and procedures to safeguard data and comply with privacy laws | X | X |
| Enforcement | | |
| Enforcement by Attorney General | X | X |
| Fines and penalties | X | X |

© 2021 Wilson Elser


For more articles on data privacy legislation, visit the NLR Communications, Media, Internet and Privacy Law News section.

Can We Really Forget?

I expected this post would turn out differently.

I had intended to commend the European Court of Justice for placing sensible limits on the extraterritorial enforcement of the EU’s Right to be Forgotten. They did, albeit in a limited way,[1] and it was a good decision. There.  I did it. In 154 words.

Now for the remaining 1400 or so words.

But reading the decision pushes me back into frustration at the entire Right to be Forgotten regime and its illogical and destructive basis. The fact that a court recognizes the clear fact that the EU cannot (generally) force foreign companies to violate the laws of their own countries in internet sites that are intended for use within those countries (and NOT the EU), does not come close to offsetting the logical, practical and societal problems with the way the EU perceives and enforces the Right to be Forgotten.

As a lawyer, with all decisions grounded in the U.S. Constitution, I am comfortable with the First Amendment’s protection of Freedom of Speech – that nearly any truthful utterance or publication is inviolate, and that the foundation of our political and social system depends on open exposure of facts to sunlight. Intentionally shoving those true facts into the dark is wrong in our system and openness will be protected by U.S. courts.

Believe it or not, the European Union also has such a concept at the core of its foundation. Article 10 of the European Convention on Human Rights states that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

So we have the same values, right? In both jurisdictions the right to impart information can be exercised without interference by public authority. Not so fast. European law contains a litany of restrictions on this right, including limits on free speech for the purpose of protecting the reputation of others.

This seems like a complete evisceration of a right to open communication if a court can force obfuscation of facts just to protect someone’s reputation.  Does this person deserve a bad reputation? Has he or she committed a crime, failed to pay his or her debts, harmed animals or children, stalked an ex-lover, or violated an oath of office, marriage, priesthood or citizenship? It doesn’t much matter in the EU. The right of that person to hide his/her bad or dangerous behavior outweighs both the allegedly fundamental right to freedom to impart true information AND the public’s right to protect itself from someone who has proven himself/herself to be a risk to the community.

So how does this tension play out over the internet? In the EU, it is law that Google and other search engines must remove links to true facts about any wrongdoer who feels his/her reputation may be tarnished by the discovery of the truth about that person’s behavior. Get into a bar fight? Don’t worry, the EU will put the entire force of law behind your request to wipe that off your record. Stiff your painting contractors for tens of thousands of Euros despite their good performance? Don’t worry, the EU will make sure nobody can find out. Get fired, removed from office or defrocked for dishonesty? Don’t worry, the EU has your back.

And that undercutting of speech rights has now been codified in Article 17 of Regulation 2016/679, the Right to be Forgotten.

And how does this new decision affect the rule? In the past couple weeks, the Grand Chamber of the EU Court of Justice issued an opinion limiting the extraterritorial reach of the Right to be Forgotten. (Google vs CNIL, Case C‑507/17) The decision confirms that search engines must remove links to certain embarrassing instances of true reporting, but must only do so on the versions of the search engine that are intentionally servicing the EU, and not necessarily in versions of the search engines for non-EU jurisdictions.

The problems with appointing Google to be an extrajudicial magistrate enforcing vague EU-granted rights under a highly ambiguous set of standards and then fining them when you don’t like a decision you forced them to make, deserve a separate post.

Why did we even need this decision? Because the French data protection agency, known as CNIL, fined Google for not removing presumably true data from non-EU search results concerning, as Reuters described, “a satirical photomontage of a female politician, an article referring to someone as a public relations officer of the Church of Scientology, the placing under investigation of a male politician and the conviction of someone for sexual assaults against minors.” So, to be clear, while the official French agency believes it should enforce a right for people to obscure from the whole world that they have been convicted of sexual assault against children, the Grand Chamber of the European Court of Justice believes that people convicted of child sexual assault should be protected in their right to obscure these facts only from people in Europe. This is progress.

Of course, in the U.S., politicians and other public figures, under investigation or subject to satire or people convicted of sexual assault against children do not have a right to protect their reputations by forcing Google to remove links to public records or stories in news outlets. We believe both that society is better when facts are allowed to be reported and disseminated and that society is protected by reporting on formal allegations against public figures or criminal convictions of private ones.

I am glad that the EU Court of Justice is willing to restrict rules to remain within its jurisdiction where they openly conflict with the basic laws of other jurisdictions. The Court sensibly held,

“The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective. Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.[2] . . . [T]he operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.”

Any other decision would be wildly overreaching. Believe me, every country in the EU would be howling in protest if the US decided that its views of personal privacy must be enforced in Europe by European companies due to operations aimed only to affect Europe. It should work both ways. So this was a well-reasoned limitation.

But I just cannot bring myself to be complimentary of a regime that I find so repugnant – where nearly any bad action can be swept under the rug in the name of protecting a person’s reputation.

As I have written in books and articles in the past, government protection of personal privacy is crucial for the clean and correct operation of a democracy.  However, privacy is also the obvious refuge of scoundrels – people prefer to keep the bad things they do private. Who wouldn’t? But one can go overboard protecting this right, and it feels like the EU has institutionalized its leap overboard.

I would rather err on the side of sunshine, giving up some privacy in the service of revealing the truth, than err on the side of darkness, allowing bad deeds to be obscured so that those who commit them can maintain their reputations.  Clearly, the EU doesn’t agree with me.


[1] The Court, in this case, wrote, “The issues at stake therefore do not require that the provisions of Directive 95/46 be applied outside the territory of the European Union. That does not mean, however, that EU law can never require a search engine such as Google to take action at worldwide level. I do not exclude the possibility that there may be situations in which the interest of the European Union requires the application of the provisions of Directive 95/46 beyond the territory of the European Union; but in a situation such as that of the present case, there is no reason to apply the provisions of Directive 95/46 in such a way.”

[2] EU Court of Justice case C-136/17, which states, “While the data subject’s rights [to privacy] override, as a general rule, the freedom of information of internet users, that balance may, however, depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information. . . .”

 


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more EU’s GDPR enforcement, see the National Law Review Communications, Media & Internet law page.

Lessons in Becoming a Second Rate Intellectual Power – Through Privacy Regulation!

The EU’s endless regulation of data usage has spilled over into academia, providing another lesson in kneecapping your own society by overregulating it. And they wonder why none of the big internet companies arose from the EU (or ever will). This time, the European data regulators seem to be doing everything they can to hamstring clinical trials and drive the research (and the resulting tens of billions of dollars of annual spend) outside the EU. That’s bad for pharma and biotech companies, but it’s also bad for universities that want to attract, retain, and teach top-notch talent.

The European Data Protection Board’s Opinion 3/2019 (the “Opinion”) fires an early and self-wounding shot in the coming war over the meaning and application of “informed consent” under the GDPR. The EU Board insists on defining “informed consent” in a manner that would cripple most serious health research on humans and human tissue that would otherwise take place in European hospitals and universities.

As discussed in a US law review article from former Microsoft Chief Privacy Counsel Mike Hintze called Science and Privacy: Data Protection Laws and Their Impact on Research (14 Washington Journal of Law, Technology & Arts 103 (2019)), and noted in a recent IAPP story from Hintze and Gary LaFever, both the strict interpretation of “informed consent” and the GDPR’s right to withdraw consent can cripple serious clinical trials. Further, according to LaFever and Hintze, researchers have raised concerns that “requirements to obtain consent for accessing data for research purposes can lead to inadequate sample sizes, delays and other costs that can interfere with efforts to produce timely and useful research results.”

A clinical researcher must have a “legal basis” to use personal information, especially health information, in trials.  One of the primary legal basis options is simply gaining permission from the test subject for data use.  Only this is not so simple.

On its face, the GDPR requires clear affirmative consent for using personal data (including health data) to be “freely given, specific, informed and unambiguous.” The Opinion clarifies that nearly all operations of a clinical trial – start to finish – are considered regulated transactions involving use of personal information, and special “explicit consent” is required for use of health data. Explicit consent requirements are satisfied by written statements signed by the data subject.

That consent would need to include, among other things:

  • the purpose of each of the processing operations for which consent is sought,
  • what (type of) data will be collected and used, and
  • the existence of the right to withdraw consent.

The Opinion makes clear that the EDPB’s authors view clinical trials as inherently involving an imbalance of power between the data subject and the sponsor of the trial, so that consent for use of personal data would likely be coercive and not “freely given.” This raises the specter that not only can the data subject pull out of trials at any time (or insist his/her data be removed upon completion of the trial), but EU privacy regulators are likely to simply cancel the right to use personal health data because the signatures could not be freely given where the trial sponsor held an imbalance of power over the data subject. Imagine spending years and tens of millions of euros conducting clinical trials, only to have the results rendered meaningless because, suddenly, the trial participants are of an insufficient sample size.

Further, if the clinical trial operator does not get permission to use personal information for analytics, academic publication/presentation, or any other use of the trial results, then the trial operator cannot use the results in these manners. This means that either the trial sponsor insists on broad permissions to use clinical results for almost any purpose (which would raise the specter of coercive permissions), or the trial is hobbled by inability to use data in opportunities that might arise later. All in all, using subject permission as a basis for supporting legal use of personal data creates unnecessary problems for clinical trials.

That leaves the following legal bases for use of personal data in clinical trials:

  • a task carried out in the public interest under Article 6(1)(e) in conjunction with Article 9(2)(i) or (j) of the GDPR; or
  • the legitimate interests of the controller under Article 6(1)(f) in conjunction with Article 9(2)(j) of the GDPR.

Not every clinical trial will be able to establish it is being conducted in the public interest, especially where the trial doesn’t fall “within the mandate, missions and tasks vested in a public or private body by national law.”  Relying on this basis means that a trial could be challenged later as not supported by national law, and unless the researchers have legislators or regulators pass or promulgate a clear statement of support for the research, this basis is vulnerable to privacy regulators’ whims.

Further, as observed by Hintze and LaFever, relying on this legal basis “involves a balancing test between those legitimate interests pursued by the controller or by a third party and the risks to the interests or rights of the data subject.” So even the most controller-centric of legal supports can be reversed if the local privacy regulator feels that a legitimate use is outweighed by the interests of the data subject. I suppose the case of Henrietta Lacks, if it arose in the EU in the present day, would be a clear situation where a non-scientific regulator could squelch a clinical trial because the data subject’s rights to privacy were considered more important than any trial using her genetic material.

So none of the “legal basis” options is either easy or guaranteed not to be reversed later, once millions in resources have been spent on the clinical trial. Further, as Hintze observes, “The GDPR also includes data minimization principles, including retention limitations which may be in tension with the idea that researchers need to gather and retain large volumes of data to conduct big data analytics tools and machine learning.” Meaning that privacy regulators could step in and decide that a clinician has been too ambitious in her use of personal data in violation of data minimization rules and shut down further use of data for scientific purposes.

The regulators emphasize that “appropriate safeguards” will help protect clinical trials from interference, but I read such promises in the inverse.  If a hacker gains access to data in a clinical trial, or if some of this data is accidentally emailed to the wrong people, or if one of the 50,000 lost laptops each day contains clinical research, then the regulators will pounce with both feet and attack the academic institution (rarely paragons of cutting edge data security) as demonstrating a lack of appropriate safeguards.  Recent staggeringly high fines against Marriott and British Airways demonstrate the presumption of the ICO, at least, that an entity suffering a hack or losing data some other way will be viciously punished.

If clinicians choosing where to conduct human trials knew about this all-encompassing privacy law and how it throws the very nature of their trials into suspicion and possible jeopardy, I can’t see why they would risk holding trials with residents of the European Economic Area. The uncertainty and risk created by aggressively intrusive privacy regulators taking a specific interest in clinical trials may drive important academic work overseas. If we see a data breach in a European university or an academic enforcement action based on the laws cited above, it will drive home the risks.

In that case, this particular European shot in the privacy wars is likely to end up pushing serious researchers out of Europe, to the detriment of academic and intellectual life in the Union.

Damaging friendly fire indeed.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.