Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, its release suggests that the top competition enforcers in all three jurisdictions are focused both on AI’s effects on competition in general and on competition within the AI ecosystem, and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors and competition within the AI ecosystem. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated report on AI in April 2024. In June 2024, the French competition authority released a report on generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But as with any technological inflection point, which may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration could allow certain firms to stifle competition or disrupt innovation.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or coopt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement acknowledges that other competition and consumer risks are associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. AI may harm consumers as well. Because the CMA, DOJ, and FTC have consumer protection authority, these authorities will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address specific enforcement tools and actions the enforcers may take, its release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to monitor and analyze activity by the DOJ and FTC on AI competition issues.

Struck by CrowdStrike Outage? Your Business Loss Could Be Covered

Over the last week, organizations around the globe have struggled to bring operations back online following a botched software update from cybersecurity company CrowdStrike. As the dust settles, affected organizations should consider whether they are insured against losses or claims arising from the outage. The Wall Street Journal has already reported that insurers are bracing for claims arising from the outage and that according to one cyber insurance broker “[t]he insurance world was expecting to cover situations like this.” A cyber analytics firm has estimated that insured losses following the outage could reach $1.5 billion.

Your cyber insurance policy may cover losses resulting from the CrowdStrike outage. These policies often include “business interruption” or “contingent business interruption” insurance that protects against disruptions from a covered loss. Business interruption insurance covers losses from disruptions to your own operations. This insurance may cover losses if the outage affected your own computer systems. Contingent business interruption insurance, on the other hand, covers your losses when another entity’s operations are disrupted. This coverage could apply if the outage affected a supplier or cloud service provider that your organization relies on.

Cyber policies vary in the precise risks they cover, so evaluating potential coverage requires comparing your losses to the policy’s terms. Cyber policies also include limitations and exclusions. For example, many cyber policies contain a “waiting period” that requires affected systems to be disrupted for a certain period before the policy provides coverage. These waiting periods can be as short as one hour or as long as several days.
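To make the waiting-period mechanics concrete, here is a minimal sketch in Python. The function name is invented for illustration, and it assumes a common structure in which only downtime beyond the waiting period is covered; actual policy language controls and may work differently (some policies cover the full outage once the threshold is crossed).

```python
from datetime import timedelta

def covered_downtime(outage_duration: timedelta, waiting_period: timedelta) -> timedelta:
    """Return the portion of an outage that falls after the waiting period.

    Illustrative only: assumes coverage applies solely to disruption
    beyond the waiting period. Check the policy's actual terms.
    """
    remaining = outage_duration - waiting_period
    return max(remaining, timedelta(0))

# A 10-hour outage under a hypothetical 8-hour waiting period:
print(covered_downtime(timedelta(hours=10), timedelta(hours=8)))  # 2:00:00

# An outage shorter than the waiting period yields no covered downtime:
print(covered_downtime(timedelta(hours=5), timedelta(hours=8)))   # 0:00:00
```

The point of the sketch is simply that a multi-hour waiting period can eliminate coverage for short disruptions entirely, which is why the precise waiting period in your policy matters.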

Other commercial insurance policies could also provide coverage depending on the loss or claim and the policy endorsements and exclusions. For example, your organization may have procured liability insurance that protects against third-party claims or litigation. This insurance could protect you from claims made by customers or other businesses related to the outage.

If your operations have been impacted by the CrowdStrike outage, there are a few steps you can take now to maximize your potential insurance recovery.

First, read your policies to determine the available coverage. As you review your policies, pay careful attention to policy limits, endorsements, and exclusions. A policy endorsement may significantly expand policy coverage, even though it is located long after the relevant policy section. Keep in mind that courts generally interpret coverage provisions in a policy generously in favor of an insured and interpret exclusions or limitations narrowly against an insurance company.

Second, track your losses. The outage likely caused your organization lost profits or extra expenses. Common business interruption losses may also include overtime expenses to remedy the outage, expenses to hire third-party consultants or technicians, and penalties arising from the outage’s disruption to your operations. Whatever the nature of your loss, tracking and documenting your losses now will help you secure a full insurance recovery later.

Third, carefully review and comply with your policy’s notice requirements. If you have experienced a loss or a claim, you should immediately notify your insurer. Even if you are only aware of a potential claim, your policy may require you to provide notice to your insurer of the events that could ultimately lead to a claim or loss. Some notice requirements in cyber policies can be quite short. After providing notice, you may receive a coverage response or “reservation of rights” from your insurer. Be cautious in taking any unfavorable response at face value. Particularly in cases of widespread loss, an insurer’s initial coverage evaluation may not accurately reflect the available coverage.

If you are unsure of your policy’s notice obligations or available coverage, or if you suspect your insurer is not affording your organization the coverage that you purchased, coverage counsel can assist your organization in securing coverage. Above all, don’t hesitate to secure the coverage to which you are entitled.


FTC/FDA Send Letters to THC Edibles Companies Warning of Risks to Children

Earlier this week, the Federal Trade Commission (FTC) and Food and Drug Administration (FDA) sent cease-and-desist letters to several companies warning them that their products, which were marketed to mimic popular children’s snacks, ran the risk of unintended consumption of Delta-8 THC by children. In addition to the FDA’s concerns regarding marketing an unsafe food additive, the agencies warned that imitating non-THC-containing food products often consumed by children through advertising or labeling is misleading under Section 5 of the FTC Act. The FTC noted that “preventing practices that present unwarranted health and safety risks, particularly to children, is one of the Commission’s highest priorities.”

The FTC’s focus on these particular companies and products shouldn’t come as a surprise. One such company advertises edible products labelled as “Stoney Ranchers Hard Candy,” mimicking the common Jolly Ranchers candy, and “Trips Ahoy,” closely resembling the well-known “Chips Ahoy.” Another company advertises a product closely resembling a Nerds Rope candy, with similar background coloring and copycats of the Nerds logo and mascot. This is not the first time the FTC has warned companies about the dangers of advertising products containing THC in a way that could mislead consumers, particularly minors. In July 2023, the FTC sent cease-and-desist letters to six organizations for the same violations alleged this week; these companies copied popular snack brands such as Doritos and Cheetos, mimicking the brands’ color, mascot, font, bag style, and more.

This batch of warning letters orders the companies to stop marketing the edibles immediately, to review their products for compliance, and to inform the FTC within 15 days of the specific actions taken to address the FTC’s concerns. The companies also are required to report to the FDA on corrective actions taken.

The Economic Benefits of AI in Civil Defense Litigation

The integration of artificial intelligence (AI) into various industries has revolutionized the way we approach complex problems, and the field of civil defense litigation is no exception. As lawyers and legal professionals navigate the complex and often cumbersome landscape of civil defense, AI can offer transformative assistance that not only enhances efficiency but also significantly reduces client costs. In this blog, we’ll explore the economic savings associated with employing AI in civil defense litigation.

Streamlining Document Review
One of the most labor-intensive and costly aspects of civil defense litigation is the review of vast amounts of discovery documents. Traditionally, lawyers and legal teams spend countless hours sifting through documents to identify and categorize relevant information, a process that is both time-consuming and costly. AI-powered tools, such as large language models (LLMs), can automate and expedite this process.

By using AI to assist in closed-system document review, law firms can drastically cut down on the number of billable hours required for this task. AI assistance can quickly and accurately identify relevant documents, flagging pertinent information and reducing the risk of material oversight. This not only speeds up the review process, allowing a legal team to concentrate on analysis rather than document digestion and chronology, but also significantly lowers the overall cost of litigation to the client.

By way of example: a case in which 50,000 medical treatment records and bills must be analyzed, put into chronological order, and reviewed for patient complaints, diagnoses, treatment, medical history, and prescription medicine use could take a legal team weeks to complete. With AI assistance, the preliminary groundwork, such as organizing documents, chronologizing complaints and treatments, and compiling prescription drug lists, can be completed in a matter of minutes, allowing the lawyer to spend her time on verification, analysis, and defense development and strategy rather than on information translation and time-consuming data organization.
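As a rough illustration of the organizational step described above, the Python sketch below sorts hypothetical extracted records into a chronology and compiles a prescription list. The record fields and contents are invented for the example; in a real workflow, an AI tool would first extract these fields from the underlying documents, and a lawyer would verify every entry against the source records.

```python
from datetime import date

# Hypothetical minimal record structure, invented for illustration.
records = [
    {"date": date(2023, 5, 2), "type": "visit", "note": "knee pain", "rx": ["ibuprofen"]},
    {"date": date(2022, 11, 9), "type": "imaging", "note": "MRI right knee", "rx": []},
    {"date": date(2023, 6, 15), "type": "visit", "note": "follow-up", "rx": ["naproxen"]},
]

# Chronologize the records by treatment date.
chronology = sorted(records, key=lambda r: r["date"])

# Compile a deduplicated, alphabetized prescription list across all records.
prescriptions = sorted({drug for r in records for drug in r["rx"]})

for r in chronology:
    print(r["date"], r["type"], "-", r["note"])
print("Prescriptions:", prescriptions)
```

The mechanical steps (sorting, deduplicating, listing) are trivial once the data is structured; the labor the blog describes lies in extracting and verifying that structure from tens of thousands of pages, which is where AI assistance saves time.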

Enhanced Legal Research
Legal research is another growing area where AI can yield substantial economic benefits. Traditional legal research methods involve lawyers poring over case law, statutes, and legal precedents to find the cases that best fit the facts and legal issues at hand. This process can be incredibly time-intensive, driving up costs for clients. Closed AI-powered legal research platforms can rapidly analyze vast databases of verified legal precedent and information, providing attorneys with precise and relevant case law in a fraction of the time. Rather than conducting time-consuming, exhaustive searches for the right cases to analyze, a lawyer can now streamline the process with AI assistance that flags on-point cases for verification, review, analysis, and argument development.

The efficiency of AI-driven legal research can translate into significant cost savings for the client. Attorneys can now spend more time on argument development and drafting, rather than being bogged down in manual research. For clients, this means lower legal fees and faster resolution of cases, both of which contribute to overall economic savings.

Predictive Analytics and Case Strategy
AI’s evolving ability to analyze historical legal data and identify patterns is particularly valuable in the realm of predictive analytics. In civil defense litigation, AI can be used to assist in predicting the likely outcomes of cases based on jurisdictionally specific verdicts and settlements, helping attorneys formulate more effective strategies. By sharpening focus on probable outcomes, legal teams can make informed decisions about whether to settle a case or proceed to trial. Such predictive analytics allow clients to better manage their risk, thereby reducing the financial burden on defendants.
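As a deliberately simplified illustration of jurisdiction-specific outcome analysis, the sketch below computes a defense win rate from a handful of invented historical outcomes. The jurisdiction names and outcomes are fabricated for the example; real predictive analytics tools draw on far richer data (case type, damages, judge, counsel) and far more sophisticated models, and their outputs inform rather than replace a lawyer's judgment.

```python
from collections import defaultdict

# Invented historical outcomes, purely for illustration.
history = [
    ("County A", "plaintiff_verdict"),
    ("County A", "defense_verdict"),
    ("County A", "defense_verdict"),
    ("County B", "plaintiff_verdict"),
    ("County B", "plaintiff_verdict"),
]

# Tally outcomes per jurisdiction.
counts = defaultdict(lambda: {"defense_verdict": 0, "plaintiff_verdict": 0})
for county, outcome in history:
    counts[county][outcome] += 1

def defense_win_rate(county: str) -> float:
    """Historical defense win rate in a jurisdiction (0.0 if no data)."""
    c = counts[county]
    total = c["defense_verdict"] + c["plaintiff_verdict"]
    return c["defense_verdict"] / total if total else 0.0

print(defense_win_rate("County A"))  # 2 defense wins out of 3 cases
print(defense_win_rate("County B"))  # 0 defense wins out of 2 cases
```

Even this crude baseline shows how venue-specific history can inform a settle-versus-try decision; the value of commercial tools lies in doing this at scale with validated data.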

Automating Routine Tasks
Many routine tasks in civil defense litigation, such as preparation of document and pleading chronologies, scheduling, and case management, can now be automated using AI. Such automation reduces the need for manual intervention, allowing legal professionals to focus on more complex and value-added case tasks. By automating such routine tasks, law firms can operate more efficiently, reducing overhead costs and improving their bottom line. Clients benefit from quicker turnaround times and lower legal fees, resulting in overall economic savings.

Conclusion
The economic savings for clients associated with using AI in civil defense litigation can be substantial. From streamlining document review and enhancing legal research to automating routine tasks and reducing discovery costs, AI offers a powerful tool for improving efficiency and lowering case costs. As the legal industry continues to embrace technological advancements, the adoption of AI in civil defense litigation is poised to become a standard practice, benefiting both law firms and their clients economically. The future of civil defense litigation is undoubtedly intertwined with AI, promising a more cost-effective and efficient approach to resolving legal disputes.

A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding artificial intelligence (AI) and the possibility of hallucinations in a closed system is necessary for any lawyer using such technology. AI has made significant strides in recent years, demonstrating remarkable capabilities in various fields, from natural language processing to large language models to generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical, a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in a closed system, is crucial for improving AI reliability in the practice of law.

What Are AI Hallucinations?
AI hallucinations are instances where AI systems generate information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompts, fabricated case details, false medical analyses, or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In legal practice, this can include environments or legal AI tools that rely on a selected universe of information, such as a case file database, saved case-specific medical records, discovery responses, deposition transcripts, and pleadings.

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open-facing AI that can access the internet, rely entirely on the data they were trained on. If that data is incomplete, biased, or not representative of the real world, the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios not well represented in its training data. Similarly, if an AI tool is misused through poorly designed or misapplied prompts, even a closed system can produce incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.
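A toy illustration of the overfitting idea described above: the "model" below simply memorizes its fixed training data, noise included, so it faithfully repeats the noisy answer and fails outright on any input outside its dataset. The numbers are invented for the example, and real model overfitting is statistical rather than literal memorization, but the failure mode is analogous.

```python
# Invented training data: the true pattern is y = 2x,
# but one sample (3 -> 7) contains noise.
train = {1: 2, 2: 4, 3: 7}

def overfit_model(x):
    """Memorizes the training set exactly, noise and all.
    Raises KeyError for any input outside the fixed dataset."""
    return train[x]

def general_model(x):
    """Learned the underlying pattern instead of the noise."""
    return 2 * x

# On the noisy training point, the overfit model repeats the noise:
print(overfit_model(3), general_model(3))  # 7 vs 6

# On an unseen input, the overfit model has no answer at all:
try:
    overfit_model(4)
except KeyError:
    print("overfit model fails on unseen input")
```

The parallel for a closed legal AI system: a model that has latched onto peculiarities of a limited, static dataset can confidently reproduce those peculiarities, or break down, when a new matter differs even slightly from what it was trained on.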

Extrapolation Error
AI models generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible-sounding but factually incorrect information based on incomplete context.

Implications of Hallucinations for Lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI-generated content without verification could lead to the dissemination of, or reliance upon, false information, which can grievously affect both the client and the lawyer. Lawyers have a duty to provide accurate and reliable advice, information, and court filings. Using AI tools that can produce hallucinations without proper checks could breach a lawyer’s ethical duty to her client, and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safeguard against hallucinations. A lawyer should always verify AI-generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs, especially in critical areas such as legal research, document drafting, and case analysis, is an ethical requirement.

Notably, the lawyer’s choice of AI tool is critical. A well-vetted closed system allows the origin of an output to be traced and lets the lawyer maintain control over the source materials. For prompt-based data searches with multiple task prompts, a comprehensive understanding of how the prompts were designed to be used, and their proper use, is also essential to avoiding hallucinations in a closed system. Improper use of an AI tool, even in a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to utilize AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates can make for more effective use of AI tools and help safeguard against hallucinations.

Take Away
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management and training a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

Confused About the FCC’s New One-to-One Consent Rules? You’re Not Alone. Here Are Some FAQs Answered for You!

I’ve heard a lot about what folks in the industry are concerned about, and there still seems to be a lot of confusion. So let me help with some answers to critical questions.

None of this is legal advice. Absolutely critical you hire a lawyer–AND A GOOD ONE–to assist you here. But this should help orient.

What is the new FCC One-to-One Ruling?

The FCC’s one-to-one ruling is a new federal regulation that alters the TCPA’s express written consent definition to require consumers to individually select each “seller” (that is, the ultimate good or service provider) they choose to receive calls from.

The ruling also limits the scope of consent to matters “logically and topically” related to the transaction that led to the consent.

Under the TCPA, express written consent is required for any call made using regulated technology, which includes autodialers (ATDS), prerecorded or artificial voice calls, AI voice calls, and any form of outbound IVR or voicemail technology (including ringless) that uses prerecorded or artificial voice messages.

Why Does the FCC’s New One-to-One Ruling Matter?

Currently online webforms and comparison shopping websites are used to generate “leads” for direct to consumer marketers, insurance agents, real estate agents, and product sellers in numerous verticals.

Millions of leads a month are sold by tens of thousands of lead generation websites, leading to hundreds of millions of regulated marketing calls by businesses that rely on these websites to provide “leads”–consumers interested in hearing about their goods or services.

Prior to the new one-to-one ruling, website operators were free to include partner pages linking thousands of companies the consumer might be providing consent to receive calls from. And fine-print disclosures might allow a consumer to receive calls from businesses selling products unrelated to the consumer’s request. (For instance, a website offering information about a home for sale might include fine print allowing the consumer’s data to be sold to a mortgage lender or insurance broker to receive calls.)

The new one-to-one rule stops these practices. It requires website operators to specifically identify each good or service provider that might contact the consumer, and requires the consumer to select each such provider on a one-by-one basis in order for consent to be valid.
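To make the two requirements concrete, here is a schematic of what a compliant consent record might need to capture. The field names are illustrative assumptions, not drawn from the FCC's order, and whether a call is "logically and topically" related is ultimately a legal judgment, not a string comparison; the sketch only shows the structure of the check.

```python
# Hypothetical consent record; field names are invented for illustration.
consent = {
    "consumer": "jane@example.com",
    "sellers_selected": ["Acme Mortgage LLC"],  # must be chosen one by one
    "topic": "mortgage refinancing",
    "captured_at": "2025-02-01T10:15:00Z",
}

def may_call(record: dict, seller: str, call_topic: str) -> bool:
    """Rough screen: the seller must have been individually selected, and
    the call must relate to the consented topic. A real 'logically and
    topically related' analysis is a judgment call, not an equality test."""
    return (
        seller in record["sellers_selected"]
        and call_topic == record["topic"]
    )

print(may_call(consent, "Acme Mortgage LLC", "mortgage refinancing"))  # True
print(may_call(consent, "Acme Solar Inc", "mortgage refinancing"))     # False
```

The structural point is the one the rule makes: consent attaches to a specific, individually selected seller and a specific subject matter, so a lead that names neither cannot support the call.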

Will the FCC’s One-to-One Ruling Impact Me?

If you are buying or selling leads, YES this ruling will affect you.

If you are a BPO or call center that relies on leads– YES this ruling will affect you.

If you are a CPaaS or communication platform–YES this ruling will affect you.

If you are a telecom carrier–YES this ruling will affect you.

If you are a lead gen platform or service provider–YES this ruling will affect you.

If you generate first-party leads–YES this ruling will affect you.

When Does the Rule Go Into Effect?

The ruling applies to all calls made in reliance on leads beginning January 27, 2025.

However, the ruling applies regardless of the date the lead was generated. So compliance efforts need to begin early so as to assure a pipeline of available leads to contact on that date.

In other words, all leads NOT in compliance with the FCC’s one-to-one rule CANNOT be called beginning January 27, 2025.

What Do I have to Do to Comply?

Three things:

i) Comply with the rather complex, but navigable new one-to-one rule paradigm. (The Troutman Amin Fifteen is a handy checklist to assist you);

ii) Assure the lead is being captured in a manner that is “logically and topically” related to the calls that will be placed; and

iii) Assure the caller has possession of the consent record before the call is made.

The Privacy Patchwork: Beyond US State “Comprehensive” Laws

We’ve cautioned before about the danger of considering only US state “comprehensive” laws when assessing privacy and data security obligations in the United States. We’ve also mentioned that the US has a patchwork of privacy laws, and that patchwork extends, to a certain extent, outside the US as well. What laws in the patchwork relate to a company’s activities?

There are laws that apply when companies host websites, including the most well-known, the California Online Privacy Protection Act (CalOPPA). It has been in effect since July 2004, thus predating the CCPA by 14 years. Then there are laws that apply if a company is collecting and using biometric identifiers, like Illinois’ Biometric Information Privacy Act.

Companies are subject to specific laws both in the US and elsewhere when engaging in digital communications. These laws include the US federal laws TCPA and TCFAPA, as well as CAN-SPAM. Digital communication laws exist in countries as wide-ranging as Australia, Canada, and Morocco, among many others. Then we have laws that apply when collecting information during a credit card transaction, like the Song-Beverly Credit Card Act (California).

Putting It Into Practice: When assessing your company’s obligations under privacy and data security laws, keep activity specific privacy laws in mind. Depending on what you are doing, and in what jurisdictions, you may have more obligations to address than simply those found in comprehensive privacy laws.

Cybersecurity Crunch: Building Strong Data Security Programs with Limited Resources – Insights from Tech and Financial Services Sectors

In today’s digital age, cybersecurity has become a paramount concern for executives navigating the complexities of their corporate ecosystems. With resources often limited and the ever-present threat of cyberattacks, establishing clear priorities is essential to safeguarding company assets.

Building the right team of security experts is a critical step in this process, ensuring that the organization is well-equipped to fend off potential threats. Equally important is securing buy-in from all stakeholders, as a unified approach to cybersecurity fosters a robust defense mechanism across all levels of the company.

This insider’s look at cybersecurity will delve into the strategic imperatives for companies aiming to protect their digital frontiers effectively.

Where Do You Start on Cybersecurity?
Resources are limited, and pressures on corporate security teams are growing, both from internal stakeholders and outside threats. But the resources to do the job aren’t growing with them. So how can companies protect themselves in a real-world environment, where finances, employee time, and other resources are finite?

“You really have to understand what your company is in the business of doing,” said Brian Wilson, Chief Information Security Officer at SAS. “Every business will have different needs. Their risk tolerances will be different.”
For example, Tuttle said in the manufacturing sector, digital assets and data have become increasingly important in recent years. The physical product no longer is the end-all, be-all of the company’s success.

For cybersecurity professionals, this new reality leads to challenges and tough choices. Having a perfect cybersecurity system isn’t possible—not for a company doing business in a modern, digital world. Tuttle said, “If we’re going to enable this business to grow, we’re going to have to be forward-thinking.”

That means setting priorities for cybersecurity. Inskeep, who previously worked in cybersecurity for one of the world’s largest financial services institutions, said multi-factor authentication and controlling access is a good starting point, particularly against phishing and ransomware attacks. Also, he said companies need good back-up systems that enable them to recover lost data as well as robust incident response plans.

“Bad things are going to happen,” Wilson said. “You need to have logs and SIEMs to tell a story.”

Tuttle, Chief Information Security Officer at SPX Technologies, said one challenge in implementing an incident response plan is engaging team members who aren’t on the front lines of cybersecurity. “They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right,” she said. “They need to be thinking, ‘What should I be looking for and what’s my response?’”
Wilson said tabletop exercises and security awareness training “are a good feedback loop to have to make sure you’re including the right people. They have to know what to do when something bad happens.”

Building a Security Team
Hiring and retaining good people in a harrowing field can be a challenge. Companies should leverage their external and internal networks to find data privacy and cybersecurity team members.

Wilson said SAS uses an intern program to help ensure it has trained professionals already in-house. He also said a company’s help desk can be a good source of talent.

Remote work also allows companies to cast a wider net for hiring employees. The challenge becomes keeping remote workers engaged, and companies should consider how they can make these far-flung team members feel part of the team.

Inskeep said burnout is a problem in the cybersecurity field. “It’s a job that can feel overwhelming sometimes,” he said. “Interacting with people and protecting them from that burnout has become more critical than ever.”

“It’s a job that can feel overwhelming sometimes. Interacting with people and protecting them from that burnout has become more critical than ever.”

TODD INSKEEP, FOUNDER AND CYBERSECURITY ADVISOR, INCOVATE SOLUTIONS
Weighing Levels of Compliance
The first step, Claypoole said, is understanding the compliance obligations the company faces. These obligations include both regulatory requirements (which are tightening) and contract terms from customers.

“For a business, that can be scary, because your business may be agreeing to contract terms with customers and they aren’t asking you about the security requirements in those contracts,” Wilson said.

The panel also noted that “compliance” and “security” aren’t the same thing. Compliance is a minimum set of standards that must be met, while security is a more wide-reaching goal.

But company leaders must realize they can’t have a perfect cybersecurity system, even if they could afford it. It’s important to identify priorities—including which operations are the most important to the company and which would be most disruptive if they went offline.

Wilson noted that global privacy regulations are increasing and becoming stricter every year. In addition, federal officials have taken criminal action against CSOs in recent years.

“Everybody’s radar is kind of up,” Tuttle said. The increasing compliance pressure also means it’s important for cybersecurity teams to work collaboratively with other departments, rather than making key decisions in a vacuum. Inskeep said such decisions need to be carefully documented as well.

“If you get to a place where you are being investigated, you need your own lawyer,” Claypoole said.

“If you get to a place where you are being investigated, you need your own lawyer.”

TED CLAYPOOLE, PARTNER, WOMBLE BOND DICKINSON
Cyberinsurance is another consideration for data privacy teams, and it can help Chief Security Officers make the case for more resources (both financial and work hours). Inskeep said cyberinsurance questionnaires also can help companies identify areas of risk and where they need to prioritize their efforts. Such priorities can change, and he said companies need a committee or some other mechanism to regularly review and update cybersecurity priorities.

Wilson said one positive change he’s seen is that top executives now understand the importance of cybersecurity and are more willing to include cybersecurity team members in the up-front decision-making process.

Bringing in Outside Expertise
Consultants and vendors can be helpful to a cybersecurity team, particularly for smaller teams. Companies can move certain functions to third-party consultants, allowing their own teams to focus on core priorities.

“If we don’t have that internal expertise, that’s a situation where we’d call in third-party resources,” Wilson said.

Bringing in outside professionals also can help a company keep up with new trends and new technologies.

Ultimately, a proactive and well-coordinated cybersecurity strategy is indispensable for protecting a modern enterprise’s systems and data. With an ever-evolving threat landscape, companies must be agile in their approach and continuously review and update their security measures. At the core of any effective cybersecurity plan is a comprehensive risk management framework that identifies potential vulnerabilities and outlines steps to mitigate their impact. This framework should also include incident response protocols to minimize the damage of a cyberattack.

In addition to technology and processes, the human element is crucial in cybersecurity. Employees must be educated on how to spot potential threats, such as phishing emails or suspicious links, and know what steps to take if they encounter them.

Key Takeaways:
  • Identify your biggest risk areas and determine how to minimize those risks.
  • Know your external cyber footprint – this is what attackers see and will target.
  • Align with your team, your peers, and your executive staff.
  • Prioritize multi-factor authentication and access controls to protect against common threats like phishing and ransomware.
  • Develop reliable backup systems and robust incident response plans to recover lost data and respond quickly to cyber incidents.
  • Engage team members who are not on the front lines of cybersecurity to ensure quick identification and escalation of potential threats.
  • Conduct tabletop exercises and security awareness training regularly.
  • Leverage intern programs and help desk personnel to build a strong cybersecurity team internally.
  • Use remote work to widen the talent pool for hiring cybersecurity professionals, while keeping remote workers engaged and integrated.
  • Balance regulatory compliance with broader security goals, understanding that compliance is only a minimum standard.

Copyright © 2024 Womble Bond Dickinson (US) LLP All Rights Reserved.

by: Theodore F. Claypoole of Womble Bond Dickinson (US) LLP


American Privacy Rights Act Advances with Significant Revisions

On May 23, 2024, the U.S. House Committee on Energy and Commerce Subcommittee on Data, Innovation, and Commerce approved a revised draft of the American Privacy Rights Act (“APRA”), which was released just 36 hours before the markup session. With the subcommittee’s approval, the APRA will now advance to full committee consideration. The revised draft includes several notable changes from the initial discussion draft, including:

  • New Section on COPPA 2.0 – the revised APRA draft includes the Children’s Online Privacy Protection Act (COPPA 2.0) under Title II, which differs to a certain degree from the COPPA 2.0 proposal currently before the Senate (e.g., removal of the revised “actual knowledge” standard; removal of applicability to teens over age 12 and under age 17).
  • New Section on Privacy By Design – the revised APRA draft includes a new dedicated section on privacy by design. This section requires covered entities, service providers and third parties to establish, implement, and maintain reasonable policies, practices and procedures that identify, assess and mitigate privacy risks related to their products and services during the design, development and implementation stages, including risks to covered minors.
  • Expansion of Public Research Permitted Purpose – as an exception to the general data minimization obligation, the revised APRA draft adds another permissible purpose for processing data for public or peer-reviewed scientific, historical, or statistical research projects. These research projects must be in the public interest and comply with all relevant laws and regulations. If the research involves transferring sensitive covered data, the revised APRA draft requires the affirmative express consent of the affected individuals.
  • Expanded Obligations for Data Brokers – the revised APRA draft expands obligations for data brokers by requiring them to include a mechanism for individuals to submit a “Delete My Data” request. This mechanism, similar to the California Delete Act, requires data brokers to delete all covered data related to an individual that they did not collect directly from that individual, if the individual so requests.
  • Changes to Algorithmic Impact Assessments – while the initial APRA draft required large data holders to conduct and report a covered algorithmic impact assessment to the FTC if they used a covered algorithm posing a consequential risk of harm to individuals, the revised APRA requires such impact assessments for covered algorithms to make a “consequential decision.” The revised draft also allows large data holders to use certified independent auditors to conduct the impact assessments, directs the reporting mechanism to NIST instead of the FTC, and expands requirements related to algorithm design evaluations.
  • Consequential Decision Opt-Out – while the initial APRA draft allowed individuals to invoke an opt-out right against covered entities’ use of a covered algorithm making or facilitating a consequential decision, the revised draft now also allows individuals to request that consequential decisions be made by a human.
  • New and/or Revised Definitions – the revised APRA draft’s definition section includes new terms, such as “contextual advertising” and “first party advertising.” The revised APRA draft also redefines certain terms, including “covered algorithm,” “sensitive covered data,” “small business” and “targeted advertising.”

Mandatory Cybersecurity Incident Reporting: The Dawn of a New Era for Businesses

A significant shift in cybersecurity compliance is on the horizon, and businesses need to prepare. Organizations will soon face new requirements to report cybersecurity incidents and ransomware payments to the federal government. This change stems from the Notice of Proposed Rulemaking (NPRM) that the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issued on April 4, 2024, to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Essentially, this means that “covered entities” must report specific cyber incidents and ransom payments to CISA within defined timeframes.

Background

Back in March 2022, President Joe Biden signed CIRCIA into law. This was a big step towards improving America’s cybersecurity. The law requires CISA to create and enforce regulations mandating that covered entities report cyber incidents and ransom payments. The goal is to help CISA quickly assist victims, analyze trends across different sectors, and share crucial information with network defenders to prevent other potential attacks.

The proposed rule is open for public comments until July 3, 2024. CISA then has 18 months to finalize the rule, with the final rule expected around October 4, 2025, and reporting requirements likely taking effect in early 2026. This overview highlights the NPRM’s key points from the detailed Federal Register notice.

Cyber Incident Reporting Initiatives

CIRCIA includes several key requirements for mandatory cyber incident reporting:

  • Cyber Incident Reporting Requirements – CIRCIA mandates that CISA develop regulations requiring covered entities to report any covered cyber incidents within 72 hours from the time the entity reasonably believes the incident occurred.
  • Federal Incident Report Sharing – Any federal entity receiving a report on a cyber incident after the final rule’s effective date must share that report with CISA within 24 hours. CISA will also need to make information received under CIRCIA available to certain federal agencies within the same timeframe.
  • Cyber Incident Reporting Council – The Department of Homeland Security (DHS) must establish and chair an intergovernmental Cyber Incident Reporting Council to coordinate, deconflict, and harmonize federal incident reporting requirements.

Ransomware Initiatives

CIRCIA also authorizes or mandates several initiatives to combat ransomware:

  • Ransom Payment Reporting Requirements – CISA must develop regulations requiring covered entities to report to CISA within 24 hours of making any ransom payments due to a ransomware attack. These reports must be shared with federal agencies similarly to cyber incident reports.
  • Ransomware Vulnerability Warning Pilot Program – CISA must establish a pilot program to identify systems vulnerable to ransomware attacks and may notify the owners of these systems.
  • Joint Ransomware Task Force – CISA has announced the launch of the Joint Ransomware Task Force to build on existing efforts to coordinate a nationwide campaign against ransomware attacks. This task force will work closely with the Federal Bureau of Investigation and the Office of the National Cyber Director.

Scope of Applicability

The regulation targets many “covered entities” within critical infrastructure sectors. CISA clarifies that “covered entities” encompass more than just owners and operators of critical infrastructure systems and assets. Entities actively participating in these sectors might be considered “in the sector,” even if they are not critical infrastructure themselves. Entities uncertain about their status are encouraged to contact CISA.

Critical Infrastructure Sectors

CISA’s interpretation includes entities within one of the 16 sectors defined by Presidential Policy Directive 21 (PPD 21). These sectors are: Chemical; Commercial Facilities; Communications; Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial Services; Food and Agriculture; Government Facilities; Healthcare and Public Health; Information Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and Wastewater Systems.

Covered Entities

CISA aims to include small businesses that own and operate critical infrastructure by setting additional sector-based criteria. The proposed rule applies to organizations falling into one of two categories:

  1. Entities operating within critical infrastructure sectors, except small businesses
  2. Entities in critical infrastructure sectors that meet sector-based criteria, even if they are small businesses

Size-Based Criteria

The size-based criteria use Small Business Administration (SBA) standards, which vary by industry and are based on annual revenue and number of employees. Entities in critical infrastructure sectors exceeding these thresholds are “covered entities.” The SBA standards are updated periodically, so organizations must stay informed about the current thresholds applicable to their industry.

Sector-Based Criteria

The sector-based criteria target essential entities within a sector, regardless of size, based on the potential consequences of disruption. The proposed rule outlines specific criteria for nearly all 16 critical infrastructure sectors. For instance, in the information technology sector, the criteria include:

  • Entities providing IT services for the federal government
  • Entities developing, licensing, or maintaining critical software
  • Manufacturers, vendors, or integrators of operational technology hardware or software
  • Entities involved in election-related information and communications technology

In the healthcare and public health sector, the criteria include:

  • Hospitals with 100 or more beds
  • Critical access hospitals
  • Manufacturers of certain drugs or medical devices

Covered Cyber Incidents

Covered entities must report “covered cyber incidents,” which include significant loss of confidentiality, integrity, or availability of an information system, serious impacts on operational system safety and resiliency, disruption of business or industrial operations, and unauthorized access due to third-party service provider compromises or supply chain breaches.

Significant Incidents

This definition covers substantial cyber incidents regardless of their cause, such as third-party compromises, denial-of-service attacks, and vulnerabilities in open-source code. However, threats or activities responding to owner/operator requests are not included. Substantial incidents include encryption of core systems, exploitation causing extended downtime, and ransomware attacks on industrial control systems.

Reporting Requirements

Covered entities must report cyber incidents to CISA within 72 hours of reasonably believing an incident has occurred. Reports must be submitted via a web-based “CIRCIA Incident Reporting Form” on CISA’s website and include extensive details about the incident and ransom payments.

Report Types and Timelines

  • Covered Cyber Incident Reports – within 72 hours of identifying a covered incident
  • Ransom Payment Reports – within 24 hours of making a ransom payment due to a ransomware attack
  • Joint Covered Cyber Incident and Ransom Payment Reports – within 72 hours for incidents involving a ransom payment
  • Supplemental Reports – within 24 hours if new information arises or additional payments are made
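The reporting windows above lend themselves to a simple lookup. The sketch below is purely illustrative (the report-type keys and helper name are our own, not CISA terminology) and shows how a compliance team might compute the latest submission time for each report type under the proposed rule:

```python
from datetime import datetime, timedelta

# Proposed CIRCIA reporting windows in hours, per the NPRM summarized above.
REPORTING_WINDOWS = {
    "covered_cyber_incident": 72,        # from identifying a covered incident
    "ransom_payment": 24,                # from making the ransom payment
    "joint_incident_and_payment": 72,    # combined report for ransom-payment incidents
    "supplemental": 24,                  # from new information or additional payments
}

def report_deadline(report_type: str, trigger_time: datetime) -> datetime:
    """Return the latest submission time for a given report type."""
    if report_type not in REPORTING_WINDOWS:
        raise ValueError(f"Unknown report type: {report_type!r}")
    return trigger_time + timedelta(hours=REPORTING_WINDOWS[report_type])

# Example: an incident identified at 9:00 a.m. on June 2 must be
# reported by 9:00 a.m. on June 5 (72 hours later).
discovered = datetime(2025, 6, 2, 9, 0)
print(report_deadline("covered_cyber_incident", discovered))  # 2025-06-05 09:00:00
```

In practice these triggers ("reasonably believes an incident occurred," time of payment) are legal determinations, so a tool like this only tracks the clock once counsel has decided it has started.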

Entities must retain data used for reports for at least two years. They can authorize a third party to submit reports on their behalf but remain responsible for compliance.

Exemptions for Similar Reporting

Covered entities may be exempt from CIRCIA reporting if they have already reported to another federal agency, provided an agreement exists between CISA and that agency. This agreement must ensure the reporting requirements are substantially similar, and the agency must share information with CISA. Federal agencies that report to CISA under the Federal Information Security Modernization Act (FISMA) are exempt from CIRCIA reporting.

These agreements are still being developed. Entities reporting to other federal agencies should stay informed about their progress to understand how they will impact their reporting obligations under CIRCIA.

Enforcement and Penalties

The CISA director can issue a request for information (RFI) if an entity fails to submit a required report. Non-compliance can lead to civil action or court orders, including penalties such as debarment and restrictions on future government contracts. False statements in reports may result in criminal penalties.

Information Protection

CIRCIA protects reports and RFI responses, including immunity from enforcement actions based solely on report submissions and protections against legal discovery and use in proceedings. Reports are exempt from Freedom of Information Act (FOIA) disclosures, and entities can designate reports as “commercial, financial, and proprietary information.” Information can be shared with federal agencies for cybersecurity purposes or specific threats.

Business Takeaways

Although the rule is not expected to take effect before late 2025, companies should begin preparing now. Entities should review the proposed rule to determine whether they qualify as covered entities and understand the reporting requirements, then adjust their security programs and incident response plans accordingly. Creating a regulatory notification chart can help track various incident reporting obligations. Proactive measures, and potentially formal comments on the proposed rule, can aid in compliance once the rules are finalized.

These steps are designed to guide companies in preparing for CIRCIA, though each company must assess its own needs and procedures within its specific operational, business, and regulatory context.
