FCC Puts Another Carrier On Notice with Cease and Desist Letter

If you haven’t already figured it out, the FCC is serious about stopping carriers and providers from carrying robocalls.

The FCC sent a cease and desist letter to DigitalIPvoice directing it to investigate suspected illegal robocall traffic. The FCC reminded the company that failure to comply with the letter “may result in downstream voice service providers permanently blocking all of DigitalIPvoice’s traffic”.

For background, DigitalIPvoice is a gateway provider, meaning it accepts calls directly from foreign originating or intermediate providers. The Industry Traceback Group (ITG) investigated some questionable traffic back in December and identified DigitalIPvoice as the gateway provider for some of the calls. ITG informed DigitalIPvoice, and “DigitalIPvoice did not dispute that the calls were illegal.”

This is problematic because, as the FCC states, “gateway providers that transmit illegal robocall traffic face serious consequences, including blocking by downstream providers of all of the provider’s traffic.”

Emphasis in original. Yes. The FCC sent that in BOLD to DigitalIPvoice. I love aggressive formatting choices.

The FCC then gave DigitalIPvoice steps to take to mitigate the calls in response to the notice: it must investigate the traffic, block any identified illegal traffic, and report back to the FCC and the ITG on the outcome of the investigation.

The whole letter is worth reading but a few points for voice service providers and gateway providers:

  1. You have to know who your customers are and what they are doing on your network. The FCC is requiring voice service providers and gateway providers to include know-your-customer (KYC) procedures in their robocall mitigation plans.
  2. You have to work with the ITG. You have to have a traceback policy and procedures. All traceback requests have to be treated as a P0 priority.
  3. You have to be able to trace the traffic you are handling. From beginning to end.

The FCC is going after robocalls hard. Protect yourself by understanding what is going to be required of your network.

Keeping you in the loop.

For more news on FCC Regulations, visit the NLR Communications, Media & Internet section.

Global Regulatory Update for April 2024

WEBINAR – Registration Is Open For “Harmonizing TSCA Consent Orders with OSHA HCS 2012”: Register now to join The Acta Group (Acta®) and Bergeson & Campbell, P.C. (B&C®) for “Harmonizing TSCA Consent Orders with OSHA HCS 2012,” a complimentary webinar covering case studies and practical applications of merging the requirements for consent order language on the Safety Data Sheet (SDS). In this webinar, Karin F. Baron, MSPH, Director of Hazard Communication and International Registration Strategy, Acta, will explore two hypothetical examples and provide guidance on practical approaches to compliance. An industry perspective will be presented by Sara Glazier Frojen, Senior Product Steward, Hexion Inc., who will discuss the realities of managing this process day-to-day.

SAVE THE DATE – “TSCA Reform — 8 Years Later” On June 26, 2024: Save the date to join Acta affiliate B&C, the Environmental Law Institute (ELI), and the George Washington University Milken Institute School of Public Health for a day-long conference reflecting on the challenges and accomplishments since the implementation of the 2016 Lautenberg Amendments and where the Toxic Substances Control Act (TSCA) stands today. This year, the conference will be held in person at the George Washington University Milken Institute School of Public Health (and will be livestreamed via YouTube). Continuing legal education (CLE) credit will be offered in select states for in-person attendees only. Please check ELI’s event page in the coming weeks for more information, including an agenda, CLE information, registration, and more. If you have questions in the meantime, please contact Madison Calhoun (calhoun@eli.org).

AUSTRALIA

Changes To Categorization, Reporting, And Recordkeeping Requirements For Industrial Chemicals Will Take Effect April 24, 2024: The Australian Industrial Chemicals Introduction Scheme (AICIS) announced that regulatory changes to categorization, reporting, and recordkeeping requirements will start April 24, 2024. For the changes to take effect, the Industrial Chemicals (General) Rules 2019 (Rules) and Industrial Chemicals Categorisation Guidelines will be amended. According to AICIS, key changes to the Rules include:

  • Written undertakings replaced with records that will make compliance easier;
  • Greater acceptance of International Nomenclature of Cosmetic Ingredients (INCI) names for reporting and recordkeeping;
  • Changes to the categorization criteria to benefit:
    • Local soap makers;
    • Introducers of chemicals in flavor and fragrance blends; and
    • Introducers of hazardous chemicals where introduction and use are controlled; and
  • Strengthening criteria and/or reporting requirements for health and environmental protection.

AICIS announced final changes to the Industrial Chemicals Categorisation Guidelines that will take effect April 24, 2024. According to AICIS, the changes include:

  • Refinement of the requirement to check for hazardous esters and salts of chemicals on the “List of chemicals with high hazards for categorisation” (the List);
  • Provision to include highly hazardous chemicals to the List based on an AICIS assessment or evaluation;
  • Expanded options for introducers to demonstrate the absence of skin irritation and skin sensitization; and
  • More models for in silico predictions and an added test guideline for ready biodegradability.

AICIS states that it will publish a second update to the Guidelines in September 2024 due to industry stakeholders’ feedback that they need more time to prepare for some of the changes. It will include:

  • For the List: add chemicals based on current sources and add the European Commission (EC) Endocrine Disruptor List (List I) as a source; and
  • Refined requirements for introducers to show the absence of specific target organ toxicity after repeated exposure and bioaccumulation potential.

CANADA

Canada Provides Updates On Its Implementation Of The Modernized CEPA: As reported in our June 23, 2023, memorandum, Bill S-5, Strengthening Environmental Protection for a Healthier Canada Act, received Royal Assent on June 13, 2023. Canada is working to implement the bill through initiatives that include the development of various instruments, policies, strategies, regulations, and processes. In April 2024, Canada updated its list of public consultation opportunities:

  • Discussion document on the implementation framework for a right to a healthy environment under the Canadian Environmental Protection Act, 1999 (CEPA) (winter 2024);
  • Proposed Watch List approach (spring/summer 2024);
  • Proposed plan of chemicals management priorities (summer 2024);
  • Draft strategy to replace, reduce or refine vertebrate animal testing (summer/fall 2024);
  • Draft implementation framework for a right to a healthy environment under CEPA (summer/fall 2024);
  • Discussion document for toxic substances of highest risk regulations (winter 2025); and
  • Discussion document on the restriction and authorization of certain toxic substances regulations (winter/spring 2025).

EUROPEAN UNION (EU)

ECHA Checks More Than 20 Percent Of REACH Registration Dossiers For Compliance: The European Chemicals Agency (ECHA) announced on February 27, 2024, that between 2009 and 2023, it performed compliance checks of approximately 15,000 Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) registrations, representing 21 percent of full registrations. ECHA states that it met its legal target for dossier evaluation, which increased from five percent to 20 percent in 2019. ECHA notes that for substances registered at quantities of 100 metric tons or more per year, it has checked compliance for around 30 percent of the dossiers.

According to ECHA, in 2023, it conducted 301 compliance checks, covering more than 1,750 registrations and addressing 274 individual substances. ECHA “focused on registration dossiers that may have data gaps and aim to enhance the safety data of these substances.” ECHA sent 251 adopted decisions to companies, “requesting additional data to clarify long-term effects of chemicals on human health or the environment.” ECHA states that during the follow-up evaluation process, it will assess the incoming information for compliance. ECHA will share the outcome of the incoming data with the EU member states and the EC to enable prioritization of substances. ECHA will work closely with the member states for enforcement of non-compliant dossiers. Compliance of registration dossiers will remain a priority for ECHA. In 2024, ECHA will review the impact of the Joint Evaluation Action Plan, aimed at improving REACH registration compliance, and, together with stakeholders, develop new priority areas on which to focus. More information is available in our March 29, 2024, blog item.

Council Of The EU And EP Reach Provisional Agreement On Proposed Regulation On Packaging And Packaging Waste: The Council of the EU announced on March 4, 2024, that its presidency and the European Parliament’s (EP) representatives reached a provisional political agreement on a proposal for a regulation on packaging and packaging waste. The press release states that the proposal considers the full life-cycle of packaging and establishes requirements to ensure that packaging is safe and sustainable by requiring that all packaging is recyclable and that the presence of substances of concern is minimized. It also includes labeling harmonization requirements to improve consumer information. In line with the waste hierarchy, the proposal aims to reduce significantly the generation of packaging waste by setting binding re-use targets, restricting certain types of single-use packaging, and requiring economic operators to minimize the packaging used. The proposal would introduce a restriction on the placing on the market of food contact packaging containing per- and polyfluoroalkyl substances (PFAS) above certain thresholds. The press release notes that to avoid any overlap with other pieces of legislation, the co-legislators tasked the EC to assess the need to amend that restriction within four years of the date of application of the regulation.

EP Adopts Position On Establishing System To Verify And Pre-Approve Environmental Marketing Claims: The EP announced on March 12, 2024, that it adopted its first reading position on establishing a verification and pre-approval system for environmental marketing claims to protect citizens from misleading ads. According to the EP’s press release, the green claims directive would require companies to submit evidence about their environmental marketing claims before advertising products as “biodegradable,” “less polluting,” “water saving,” or having “biobased content.” Micro enterprises would be exempt from the new rules, and small and medium-sized enterprises (SME) would have an extra year to comply compared to larger businesses. The press release notes that the EP also decided that green claims about products containing hazardous substances should remain possible for now, but that the EC “should assess in the near future whether they should be banned entirely.” The new EP will follow up on the file after the European elections that will take place in June 2024.

On April 3, 2024, a coalition of industry associations issued a “Joint statement in reference to ‘the ban of green claims for products containing hazardous substances’ in the Green Claims Substantiation Directive (GCD).” The associations “fully support the principle that consumers should not be misled by false or unsubstantiated environmental claims and share the EU’s objective to establish a clear, robust and credible framework to enable consumers to make an informed choice.” The associations express concern that the proposed prohibition of environmental claims for products containing certain hazardous substances “will run contrary to the objective of the Directive to enable consumers to make sustainable purchase decisions and ensure proper substantiation of claims.” According to the associations, for a number of consumer products, “the reference to ‘products containing’ would encompass substances that would have intrinsic hazardous properties,” implying that there would be a ban of making any environmental claim(s), “even if such trace amounts of unavoidable and unintentional impurities and contaminants are present in these products.” The signatories include the International Association for Soaps, Detergents and Maintenance Products; the European Brands Association; APPLiA; the Association of Manufacturers and Formulators of Enzyme Products; CosmeticsEurope; the European Power Tool Association; the Federation of the European Sporting Goods Industry; the International Fragrance Association; LightingEurope; the International Natural and Organic Cosmetics Association; Toy Industries of Europe; Verband der Elektro- und Digitalindustrie; and the World Federation of Advertisers.

ECHA Clarifies Next Steps For PFAS Restriction Proposal: ECHA issued a press release on March 13, 2024, to outline how the Scientific Committees for Risk Assessment (RAC) and for Socio-Economic Analysis (SEAC) will progress in evaluating the proposal to restrict PFAS in Europe. As reported in our February 13, 2023, memorandum, the national authorities of Denmark, Germany, the Netherlands, Norway, and Sweden submitted a proposal to restrict more than 10,000 PFAS under REACH. The proposal suggests two restriction options — a full ban and a ban with use-specific derogations — to address the identified risks. Following the screening of thousands of comments received during the consultation, ECHA states that it is clarifying the next steps for the proposal. According to ECHA, RAC and SEAC will evaluate the proposed restriction together with the comments from the consultation in batches, focusing on the different sectors that may be affected.

In tandem, the five national authorities who prepared the proposal are updating their initial report to address the consultation comments. This updated report will be assessed by the committees and will serve as the foundation for their opinions. The sectors and elements that will be discussed in the next three committee meetings are:

March 2024 Meetings

  • Consumer mixtures, cosmetics, and ski wax;
  • Hazards of PFAS (only by RAC); and
  • General approach (only by SEAC).

June 2024 Meetings

  • Metal plating and manufacture of metal products; and
  • Additional discussion on hazards (only by RAC).

September 2024 Meetings

  • Textiles, upholstery, leather, apparel, carpets (TULAC);
  • Food contact materials and packaging; and
  • Petroleum and mining.

More information is available in our March 18, 2024, blog item.

ECHA Adopts And Publishes CoRAP For 2024-2026: On March 19, 2024, ECHA adopted and published the Community rolling action plan (CoRAP) for 2024-2026. The CoRAP lists 28 substances suspected of posing a risk to human health or the environment for evaluation by 11 Member State Competent Authorities. The CoRAP includes 11 newly allocated substances and 17 substances already included in the previous CoRAP 2023-2025 update, published on March 21, 2023. For 11 out of these 17 substances, ECHA notes that the evaluation year has been postponed, mainly to await submission of new information requested under dossier evaluation. Of the 28 substances to be evaluated, ten are to be evaluated in 2024, 13 in 2025, and five in 2026. The remaining substance of the 24 substances listed in the previous CoRAP was withdrawn as its evaluation is currently considered to be a low priority. According to ECHA, for this substance, a compliance check is needed first. ECHA states that the substance can be placed in the CoRAP list again, if after the conclusion of the dossier evaluation process, concerns remain beyond what can be clarified through dossier evaluation. ECHA has posted a guide for registrants that need to update their dossiers with new relevant information such as hazard, tonnages, use, and exposure.

Comments On Proposals To Identify New SVHCs Due April 15, 2024: A public consultation on proposals to identify two new substances of very high concern (SVHC) will close on April 15, 2024. The substances and examples of their uses are:

  • Bis(α,α-dimethylbenzyl) peroxide: This substance is used in products such as pH-regulators, flocculants, precipitants, and neutralization agents; and
  • Triphenyl phosphate: This substance is used as a flame retardant and plasticizer in polymer formulations, adhesives, and sealants.

UNITED KINGDOM (UK)

HSE Publishes UK REACH Work Programme For 2023/24: In February 2024, the Health and Safety Executive (HSE) published its UK REACH Work Programme 2023/24. The Work Programme sets out how HSE, with the support of the Environment Agency, will deliver its regulatory activities to meet the objectives and timescales set out in UK REACH. Alongside these activities, HSE and the Environment Agency will engage with stakeholders. The Work Programme includes the following deliverables and target deadlines:

  • Substance evaluation: Evaluate substances in the Rolling Action Plan (RAP) (target: evaluate one);
  • Authorization: Complete the processing of received applications within the statutory deadline, including comments from public consultation and REACH Independent Scientific Expert Pool (RISEP) input (target: 100 percent);
  • SVHC identification: Undertake an initial assessment of substances submitted for SVHC identification under EU REACH during 2022/23 and consider if they are appropriate for SVHC identification under UK REACH (target: assess up to five);
  • Regulatory management options analysis (RMOA): Complete RMOAs initiated in 2022/23 (target: up to ten) and initiate RMOAs for substances identified as priorities (target: up to five); and
  • Restriction: Complete ongoing restriction opinions (target: two), begin Annex 15 restriction dossiers (target: one), and initiate scoping work for restrictions (target: two).

HSE Opens Call For Evidence On PFAS In FFFs: HSE is working with the Environment Agency to prepare a restriction dossier that will assess the risks of PFAS in firefighting foams (FFF). HSE will propose restrictions, if necessary, to manage any significant risks identified. To help compile the dossier, HSE opened a call for evidence. HSE states that it would like stakeholders to identify themselves as willing to engage in further dialogue throughout the restrictions process. In particular, it would like to hear from stakeholders with relevant information on PFAS (or alternatives) in FFFs, especially information specific to Great Britain (GB). Regarding relevant information, HSE is interested in all aspects of FFFs, including:

  • Manufacture of FFFs: Substances used, process, quantities;
  • Import of FFF products of all types: Quantities, suppliers;
  • Use: Quantities, sector of use, frequency, storage on site, products used;
  • Alternatives to PFAS in FFF: Availability, cost, performance in comparison to PFAS-containing foams, barriers to switching;
  • Hazardous properties: SDSs, new studies on intrinsic properties and exposure, recommended risk management measures;
  • Environmental fate: What happens to the FFF after it is used, where does it go;
  • Waste: Disposal requirements, recycling opportunities, remediation; and
  • Standards: Including product-specific legislation, performance, certification.

HSE states that the call for evidence targets companies (manufacturers, importers, distributors, and retailers) and professional users of FFFs, trade associations, environmental organizations, consumer organizations, and any other organizations and members of the public holding relevant information. HSE intends to publish the final dossier, including any restriction proposals, on its website in March 2025. Interested parties will also then be able to submit comments on any proposed restriction.

New GB BPR Data Requirements Will Apply To Applications Submitted In October 2025: The Biocidal Products (Health and Safety) (Amendment and Transitional Provision etc.) Regulations 2024, which update the data requirements in Annexes II and III of the GB Biocidal Products Regulation (BPR), were laid in Parliament on March 13, 2024, and came into force on April 6, 2024. The legislation updates some of the data requirements to reflect developments in science and technology. These include the use of alternative testing approaches to determine some hazardous properties that previously relied on animal testing. HSE held a public consultation on the proposed changes in 2023 and has posted a report on the outcome of the consultation. The new data requirements will apply to applications received 18 months after the legislation came into force (October 6, 2025) and do not apply to existing applications. HSE will provide further guidance on the changes in the future.

Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower compared to some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE climbs nationally each year, including last year when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to provide a behavioral health expert in every school building. Additionally, the DC government’s website provides a list of available mental health services programs.

In conjunction with the mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced for using AI models to enhance mental health surveillance in school settings by implementing chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
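
To make the workflow concrete, here is a minimal sketch of the kind of supervised text-classification pipeline described above, assuming a small set of expert-labeled responses. The tiny dataset, labels, and model choice are invented for illustration and do not reflect any vendor’s actual system.

```python
# Hypothetical sketch: supervised classification of expert-labeled text responses.
# The tiny dataset and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Expert-labeled examples: 1 = flagged for follow-up, 0 = no concern indicated.
texts = [
    "I have been feeling really hopeless about school lately",
    "Excited about the science fair next week",
    "I can't sleep and nothing feels worth doing anymore",
    "Had a great time at practice with my friends",
    "Nobody would notice if I just disappeared",
    "Looking forward to the weekend trip with my family",
]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42, stratify=labels
)

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# A held-out, expert-labeled set validates the model before any deployment.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In practice, any such model would require far larger datasets, clinical validation, and the privacy safeguards discussed in this article (e.g., FERPA compliance) before it could responsibly be used with students.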

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions designed by monitoring user interaction.
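
As a rough illustration of lexicon-based sentiment analysis (not how Breathhh or any other specific product works), the sketch below uses NLTK’s VADER scorer to flag strongly negative text for human review; the example posts and the cutoff value are arbitrary assumptions.

```python
# Hypothetical sketch: lexicon-based sentiment screening of short posts.
# The threshold and example posts are arbitrary; real tools are far more involved.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "Everything is pointless and I feel completely alone",
    "Aced my math quiz today, feeling great!",
]

for post in posts:
    scores = sia.polarity_scores(post)  # keys: neg, neu, pos, compound
    flagged = scores["compound"] <= -0.5  # arbitrary cutoff for human review
    print(f"{scores['compound']:+.2f} flagged={flagged} :: {post}")
```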

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and other stakeholders before deploying such AI models. The use of AI models is always a topic of concern for policymakers because of the privacy issues that come with it. To deploy AI models safely, privacy protection policies need to be in place to safeguard sensitive information from being improperly used. There is currently no comprehensive legislation that addresses these concerns at either the national or local level.
Second, developers also need to consider and account for any bias ingrained in their algorithms through data testing and regular monitoring of data outputs before they reach the user. AI has the ability to detect early signs of mental health challenges. However, without proper safeguards in place, certain groups of students risk being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Third, effective policies should encourage the use of AI models that provide interpretable results, and policymakers need to understand how these decisions are made. Policies should also outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether through policy or the companies’ internal safeguards, and it should include guidelines for situations in which AI output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP

For more news on Artificial Intelligence and Mental Health, visit the NLR Communications, Media & Internet section.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, can ripple through downstream organizations, and can last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

In a supply chain cyberattack, a threat actor targets a business that offers third-party services to other companies, then leverages its access to that business to reach and damage the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open source logging library that developers use in software to record system activity. In late 2021, public reports surfaced of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate target software running outdated Log4j code versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, many applications are still running outdated (i.e., unpatched) versions of Log4j. A minimal sketch of how an organization might audit for such outdated components appears after this list.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack, which allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
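
As referenced in the first bullet above, here is a minimal sketch of one way an organization might audit a filesystem for outdated log4j-core components by parsing version numbers from JAR filenames. It is an illustration only: real software composition analysis tools inspect package metadata rather than filenames, and the 2.17.1 floor used here is an assumption based on commonly cited patch guidance.

```python
# Hypothetical sketch: flag log4j-core JARs older than an assumed patched version.
# Real audits use software composition analysis, not just filename parsing.
import os
import re
import sys

PATCHED = (2, 17, 1)  # assumed minimum patched version (commonly cited guidance)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: str) -> list[tuple[str, tuple[int, int, int]]]:
    """Walk a directory tree and return paths of log4j-core JARs below PATCHED."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = JAR_PATTERN.search(name)
            if match:
                version = tuple(int(part) for part in match.groups())
                if version < PATCHED:
                    findings.append((os.path.join(dirpath, name), version))
    return findings

if __name__ == "__main__":
    root_dir = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, version in scan(root_dir):
        print(f"Outdated log4j-core {'.'.join(map(str, version))}: {path}")
```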

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges developer education on creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to report attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. At the same time, some entities may also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply by establishing strong security strategies at the ground level.

Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet. Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.

II. Key Compliance Timelines (as of March 2024):

Time Frame  Anticipated Milestones
6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations will take effect for high-risk AI systems that are products, or safety components of products, covered by the EU harmonization legislation listed in Annex II of the AI Act (for example, machinery, medical devices, and toys).
By the end of 2030
  • AI systems that are components of large-scale EU information technology systems in the area of freedom, security, and justice, and that were placed on the market before the Act’s high-risk obligations apply, must be brought into compliance.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.
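
To illustrate the classification step, below is a small, hypothetical sketch of how a compliance team might track an AI-system inventory against the Act’s four risk tiers and surface the systems that carry the heaviest obligations. The example systems and their tier assignments are invented and are not legal conclusions.

```python
# Hypothetical sketch: tracking an AI-system inventory against the AI Act's risk tiers.
# Example systems and tier assignments are illustrative only, not legal conclusions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright in the EU
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier

inventory = [
    AISystem("resume-screener", "recruitment decision support", RiskTier.HIGH),
    AISystem("support-chatbot", "customer service chat", RiskTier.LIMITED),
    AISystem("spam-filter", "internal email filtering", RiskTier.MINIMAL),
]

# Flag anything needing immediate attention: banned systems and high-risk systems
# that must meet the full set of obligations before the relevant deadline.
for system in inventory:
    if system.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"ACTION NEEDED: {system.name} ({system.use_case}) is {system.tier.value} risk")
```

A real gap analysis would also record each system’s role under the Act (for example, provider versus deployer), the data it processes, and the specific obligations that attach to its tier.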

UNDER SURVEILLANCE: Police Commander and City of Pittsburgh Face Wiretap Lawsuit

Hi CIPAWorld! The Baroness here and I have an interesting filing that just came in the other day.

This one involves alleged violations of the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., and the Federal Wiretap Act, 18 U.S.C. § 2511, et seq.

Pursuant to the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., a person is guilty of a felony of the third degree if he:

(1) intentionally intercepts, endeavors to intercept, or procures any other person to intercept or endeavor to intercept any wire, electronic or oral communication;

(2) intentionally discloses or endeavors to disclose to any other person the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know that the information was obtained through the interception of a wire, electronic or oral communication; or

(3) intentionally uses or endeavors to use the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know, that the information was obtained through the interception of a wire, electronic or oral communication.

Seven police officers employed by the City of Pittsburgh Bureau of Police have teamed up to sue Matthew Lackner (Commander) and the City of Pittsburgh.

Plaintiffs, Colleen Jumba Baker, Brittany Mercer, Matthew O’Brien, Jonathan Sharp, Matthew Zuccher, Christopher Sedlak and Devlyn Valencic Keller, allege that beginning on September 27, 2023 through October 4, 2023, Matthew Lackner utilized body worn cameras to video and audio record Plaintiffs, along with utilizing the GPS component of the body worn camera to track them.

Yes. To track them.

Plaintiffs allege they were unaware that Lackner was utilizing a body worn camera to video and audio record them and utilizing the GPS function of the body worn camera. Nor did they consent to have their conversations audio recorded by Lackner and/or the City of Pittsburgh.

Interestingly, Lackner was already charged with four (4) counts of Illegal Use of Wire or Oral Communication pursuant to the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703(1), in a criminal suit.

So now Plaintiffs seek compensatory damages, including actual damages or statutory damages, punitive damages, and reasonable attorneys’ fees.

This case was just filed, so it will be interesting to see how it progresses. But it is an important reminder that many states have their own privacy laws, and companies should take these laws seriously to avoid lawsuits like this one.

Case No.: 2:24-cv-00461

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one and done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing policies you have in place addressing data privacy and cyber risk issues to address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to what risks will be borne by you, and what risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned For Using Fake ChatGPT Case In Legal Brief, Reuters (June 26, 2023).

In order to ensure litigators are striking the appropriate balance between using technological assistance in producing legal work product, while continuing to adhere to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could potentially be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system to ensure the safety of their client’s privileged, proprietary, and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an unsecure, and potentially harmful, online network, but it can also result in a violation of the duty to make reasonable efforts to prevent the disclosure of or unauthorized access to your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to ensure that he or she adheres to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements that were included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is not only incumbent on those preparing the briefs to ensure that any information included in a submission that was created with the assistance of an AI platform is accurate, but also that the lawyers responsible for oversight of a litigation team are diligent in understanding when and to what extent AI is being used to aid the work of that lawyer’s subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To appropriately abide by those rules, litigation team leaders should make it a point to discuss with their teams the appropriate use of AI at the outset of any matter, as well as to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise in the use of AI in litigation, many Courts across the country have responded by developing standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and development of work product.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s updates to the FCC data breach notification rules (the “Rules”) went into effect. They were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Courts of Appeals for the Sixth Circuit and Fifth Circuit, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the United States Judicial Panel on Multidistrict Litigation consolidated in the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.


U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by China-based ByteDance. However, if the app is divested from ByteDance, the Act would allow continued use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. This threat arises because ByteDance is required by Chinese law to share users’ data with the Chinese Communist government. When users download the app, TikTok obtains access to their microphones, cameras, and location services, essentially acting as spyware on over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of the country’s top adversaries and the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will now make its way to the Senate, and if passed, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP

For more news on Social Media Legislation, visit the NLR Communications, Media & Internet section.