The Race to Report: DOJ Announces Pilot Whistleblower Program

In recent years, the Department of Justice (DOJ) has rolled out a significant and increasing number of carrots and sticks aimed at deterring and punishing white collar crime. Speaking at the American Bar Association White Collar Conference in San Francisco on March 7, Deputy Attorney General Lisa Monaco announced the latest: a pilot program to provide financial incentives for whistleblowers.

While the program is not yet fully developed, the premise is simple: if an individual helps DOJ discover significant corporate or financial misconduct, she could qualify to receive a portion of the resulting forfeiture, consistent with the following predicates:

  • The information must be truthful and not already known to the government.
  • The whistleblower must not have been involved in the criminal activity itself.
  • Payments are available only in cases where there is not an existing financial disclosure incentive.
  • Payments will be made only after all victims have been properly compensated.

Money Motivates 

Harkening back to the “Wanted” posters of the Old West, Monaco observed that law enforcement has long offered rewards to incentivize tipsters. Since the passage of Dodd-Frank almost 15 years ago, the SEC and CFTC have relied on whistleblower programs that have been remarkably successful. In 2023, the SEC received more than 18,000 whistleblower tips (almost 50 percent more than the previous record, set in FY2022) and awarded nearly $600 million — the highest annual total by dollar value in the program’s history. Over the course of 2022 and 2023, the CFTC received more than 3,000 whistleblower tips and paid nearly $350 million in awards — including a record-breaking $200 million award to a single whistleblower. Programs at the IRS and FinCEN have been similarly fruitful, as have qui tam actions for fraud against the government. But, Monaco acknowledged, those programs are by their very nature limited. Accordingly, DOJ’s program will fill in the gaps and address the full range of corporate and financial misconduct that the Department prosecutes. And though only time will tell, it seems likely that this program will generate a similarly large number of tips.

The Attorney General already has authority to pay awards for “information or assistance leading to civil or criminal forfeitures,” but the Department has never exercised that power in any systematic way. Now, DOJ plans to leverage that authority to offer financial incentives to those who (1) disclose truthful and new information regarding misconduct (2) in which they were not involved (3) where there is no existing financial disclosure incentive and (4) after all victims have been compensated. The Department has begun a 90-day policy sprint to develop and implement the program, with a formal start date later this year. Acting Assistant Attorney General Nicole Argentieri explained that, because the statutory authority is tied to the Department’s forfeiture program, the Department’s Money Laundering and Asset Recovery Section will play a leading role in designing the program’s nuts and bolts, in close coordination with US Attorneys, the FBI and other DOJ offices.

Monaco spoke directly to potential whistleblowers, saying that while the Department will accept information about violations of any federal law, it is especially interested in information regarding

  • Criminal abuses of the US financial system;
  • Foreign corruption cases outside the jurisdiction of the SEC, including FCPA violations by non-issuers and violations of the recently enacted Foreign Extortion Prevention Act; and
  • Domestic corruption cases, especially involving illegal corporate payments to government officials.

Like the SEC and CFTC whistleblower programs, DOJ’s program will allow whistleblower awards only in cases involving penalties above a certain monetary threshold, but that threshold has yet to be determined.

Prior to Monaco’s announcement, the United States Attorney’s Office for the Southern District of New York launched its own pilot “whistleblower” program, which became effective February 13, 2024. Both the Department-wide pilot and the SDNY policy require that the government have been previously unaware of the misconduct, but they differ in a critical way: the Department-wide policy under development will explicitly apply only to reports by individuals who did not participate in the misconduct, while SDNY’s program offers incentives to “individual participants in certain non-violent offenses.” Thus, it appears that SDNY’s program is actually more akin to a voluntary self-disclosure (VSD) program, while DOJ’s Department-wide pilot program will target a new audience of potential whistleblowers.

Companies with an international footprint should also pay attention to non-US prosecutors. The new Director of the UK Serious Fraud Office recently announced that he would like to set up a similar program, no doubt noticing the effectiveness of current US programs.

Corporate Considerations

Though directed at whistleblowers, the pilot program is equally about incentivizing companies to voluntarily self-disclose misconduct in a timely manner. Absent aggravating factors, a qualifying VSD will result in a much more favorable resolution, including possibly avoiding a guilty plea and receiving a reduced financial penalty. But because the benefits under both programs go only to those who provide DOJ with new information, every day that a company sits on knowledge of misconduct is another day that a whistleblower might beat it to reporting that misconduct and reaping the reward for doing so.

“When everyone needs to be first in the door, no one wants to be second,” Monaco said. “With these announcements, our message to whistleblowers is clear: the Department of Justice wants to hear from you. And to those considering a voluntary self-disclosure, our message is equally clear: knock on our door before we knock on yours.”

By providing a cash reward for whistleblowing to DOJ, this program may present challenges for companies’ efforts to operate and maintain an effective compliance program. Such rewards may encourage employees to report misconduct to DOJ instead of through internal channels, such as a compliance hotline, which can leave compliance issues undiagnosed or untreated — for example, where DOJ is the only entity to receive the report but takes no further action. Companies must therefore ensure that internal compliance and whistleblower systems are clear, easy to use, and effective — actually addressing the employee’s concerns and, to the extent possible, following up with the whistleblower to make sure they understand the company’s response.

If an employee does elect to provide information to DOJ, companies must ensure that they do not take any action that could be construed as interfering with the disclosure. Companies already face potential regulatory sanctions for restricting employees from reporting misconduct to the SEC. Though it is too early to know, it seems likely that DOJ will adopt a similar position, and a company’s interference with a whistleblower’s communications potentially could be deemed obstruction of justice.

The False Claims Act in 2023: A Year in Review

In 2023, the government and whistleblowers were party to 543 False Claims Act (FCA) settlements and judgments, the highest number in a single year. As a result, collections under the FCA exceeded $2.68 billion, confirming that the FCA remains one of the government’s most important tools to root out fraud, safeguard government programs, and ensure that public funds are used appropriately. As in recent years, the healthcare industry was the primary focus of FCA enforcement, with over $1.8 billion recovered in matters involving hospitals, pharmacies, physicians, managed care providers, laboratories, and long-term acute care facilities. Other areas of focus in 2023 were government procurement fraud, pandemic fraud, and enforcement through the government’s new Civil Cyber-Fraud Initiative.


Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. In furtherance of that responsibility, the DOC convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will engage in the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to NIST.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including guidelines on red-team testing for AI and a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.

Form I-9 Software: Avoiding Unlawful Discrimination When Selecting and Using I-9 and E-Verify Software Systems

A recent employer fact sheet from the U.S. Department of Justice (DOJ) and U.S. Department of Homeland Security (DHS) provides guidance for avoiding unlawful discrimination and other violations when using private software products to complete Forms I-9 and E-Verify cases.

Quick Hits

  • Employers are responsible for selecting and using software products that avoid unlawful discrimination and comply with Form I-9 and E-Verify requirements.
  • Employers must not use software products that violate Form I-9 and E-Verify requirements or involve system limitations that unlawfully discriminate among workers.
  • DOJ and DHS advise employers to train staff on Form I-9 and E-Verify requirements, and to provide access to published government guidance on Form I-9 and E-Verify requirements.

Employer Compliance With Form I-9 Software Products

The fact sheet reminds employers to use the current Form I-9 and properly complete the Form I-9 for each new hire after November 6, 1986, with any acceptable employee documents. Form I-9 systems must comply with requirements for electronic signatures and document storage, including the ability to provide Form I-9 summary files containing all information fields on electronically stored Forms I-9. The fact sheet confirms required software capabilities and employer practices to properly complete the Form I-9 and avoid unlawful discrimination.

Employers must ensure that any software:

  • allows employees to leave non-required form fields blank (such as Social Security numbers, where not required for E-Verify cases);
  • allows workers with only one name to record “Unknown” in the first name field and to enter their names in the last name field on the Form I-9;
  • uniquely identifies “each person accessing, correcting, or changing a Form I-9”;
  • permits Form I-9 corrections in Section 1 and does not complete Section 1 corrections for workers, unless completing preparer/translator certifications in Supplement A;
  • retains all employee information and documents presented for form completion; and
  • permits Form I-9 corrections in Section 2 and allows completion of Supplement B reverifications with any acceptable employee documents.

Employer Compliance With E-Verify Software Products

The fact sheet reminds employers to comply with E-Verify program requirements when using software interfaces for E-Verify case completion. The fact sheet confirms required software capabilities and employer practices for completing E-Verify cases. Employers must still:

  • provide employees with current versions of Further Action Notices and Referral Date Confirmation letters in resolving Tentative Nonconfirmations (mismatches) in the E-Verify system;
  • provide English and non-English Further Action Notices and Referral Date Confirmation letters to employees with limited English proficiency;
  • display E-Verify notices confirming employer use of E-Verify;
  • “promptly notify employees in private” of E-Verify mismatches and provide Further Action Notices. If an employee who has been notified of a mismatch takes action to resolve the mismatch, provide the Referral Date Confirmation letter with case-specific information;
  • delay E-Verify case creation, when required. For example, when workers are awaiting Social Security numbers or have presented acceptable receipts for Form I-9 completion, employers must be able to delay E-Verify case creation; and
  • allow employees to resolve E-Verify mismatches prior to taking any adverse action, including suspensions or withholding pay.

Prohibited Employer Activity When Using Form I-9 Software

The fact sheet notes that an employer that uses private software products for Form I-9 or E-Verify compliance is prohibited from:

  • completing the Form I-9 on an employee’s behalf unless the employer is helping an employee complete Section 1 as a preparer or translator;
  • prepopulating employee information from other sources, providing auto-correct on employee inputs, or using predictive language for form completion;
  • requiring more or less information from employees for Form I-9 completion or preventing workers from using preparers/translators for form completion;
  • improperly correcting the Form I-9, improperly creating E-Verify cases, or failing to report corrections in the Form I-9 audit trail;
  • requesting more or different documentation than needed for Form I-9 completion, or failing to complete reverification in Supplement B of the Form I-9; and
  • imposing “unnecessary obstacles” in starting work or receiving pay, “such as by requiring a Social Security number to onboard or by not paying an employee who can complete the Form I-9 and is waiting for a Social Security number.” (Emphasis in the original.)

Staff Training and Technical Support

The fact sheet warns employers against using software products that do not provide technical support to workers, and it notes that employers are required to provide training to staff on Form I-9 and E-Verify compliance. Resources for staff members using software products for Form I-9 and E-Verify case completion include I-9 Central, the Handbook for Employers (M-274), the E-Verify User Manual (M-775), and DOJ publications.

Top Risks for Businesses in 2024

Just weeks into 2024, it is already clear that uncertainty will be the watchword. Will the economic soft landing of 2023 persist into 2024? Will labor unrest, strong in 2023, settle down as inflation cools? Will inflation remain tamed? Will the U.S. elections bring continuity or a new administration with very different views on the role of the U.S. in the world and in regulating business?

Uncertainty is also fueling a complex risk environment that will require monitoring global developments more so than in the past. As outlined below, geopolitical risks are present, multiple, interconnected and high impact. International relations have traditionally fallen outside the mandate of most C-Suites, but how the U.S. government responds to geopolitical challenges will impact business operations. Beyond additional disruptions to global trade, businesses in 2024 will face risks associated with expanding protectionist economic policies, climate change impacts, and AI-driven disruptors.

Geopolitical Tensions Disrupting Global Trade

The guardrails are coming off the international system that enshrines the ideals of preserving peace and security through diplomatic engagement, respecting international borders (not changing them through military might) and ensuring the free flow of global trade. In 2022, the world was shocked by Russia’s invasion of Ukraine, but it has taken time for the full impact to reverberate through the international system. While political analysts write of a “spillover of conflict,” the more insidious impact is that more leaders of countries and non-state groups are acting outside the guardrails, no longer deterred from using military force to achieve political goals. That makes 2024 ripe for new military conflicts disrupting global trade beyond the ongoing war in Europe.

In October 2023, Hamas launched a war from Gaza against Israel. Thus far, fighting has spread to the West Bank, between Israel and Lebanese Hezbollah in the north, and to the Red Sea, with Iranian-backed Houthis attacking shipping through the strategic Bab al Mandab strait. To avoid the risks, container ships and oil tankers are re-routing around the Cape of Good Hope, adding two weeks of extra sailing time and the associated costs. Insurance premiums for cargo ships sailing in the eastern Mediterranean have skyrocketed, and some carriers are no longer servicing Israeli ports. Companies and retailers with tight delivery schedules are switching to airfreight, which is expected to drive up airfreight rates.

Iran, emboldened by its blossoming relationship with Russia as one of Moscow’s new arms suppliers, is activating its proxy armies in Yemen, Iraq, Syria and Lebanon to attack Western targets. In a two-day period in January 2024, the Iran Revolutionary Guards directly launched strikes in Syria, Iraq and Pakistan. Nuclear-armed Pakistan retaliated with a cross-border strike in Iran. While there are many nuances to these incidents, it is evident that deterrence against cross-border military conflict is eroding in a region with deep, festering grievances among neighbors. Iran is in an escalatory mode and could resume harassing shipping in the Persian Gulf and the strategic Strait of Hormuz, through which about a fifth of the world’s daily oil consumption passes.

In East Asia, North Korea is also emboldened by the changing geopolitical environment. Pyongyang, too, has become a major supplier of weaponry to Moscow for use in Ukraine. While Russia (and China) in the past have constructively contained the North Korean predilection for aggression against its neighbors, Supreme Leader Kim Jong Un may believe the time is ripe to change the status quo. Ominously, in a Jan. 15 speech before the Supreme People’s Assembly (North Korea’s parliament), Kim rejected the policy of reunification with South Korea and proposed incorporating the country into North Korea “in the event of war.” While North Korean leaders frequently revert to brinksmanship and aggressive language, Kim’s speech reflects the confidence of a nuclear power aligned with Russia against a shared adversary – South Korea, which is firmly aligned with the G7 consensus on Russia. A war on the Korean peninsula would be felt around the world: East Asia is central to global shipping and manufacturing, and a conflict would disrupt supply chains as well as the regional economy.

China is also waiting for the right moment to “unite” Taiwan with the mainland. Beijing has seen the impact of Western sanctions on Russia over Ukraine and has been deterred from aiding the Russian war effort. In many ways, China has benefited from these sanctions and the reorientation of global trade. Also, Russia, with its far weaker economy, has proven surprisingly resilient to sanctions, another lesson for China. Meanwhile, the Taiwanese people voted in January and returned for a third time the ruling party that strongly rejects Chinese territorial claims. Tensions are high, with the Chinese military once again harassing Taiwanese defenses. For Beijing, the “right moment” could fall this year should conflict break out on the Korean peninsula, which would tie the U.S. down because of the Mutual Defense Treaty.

The uncertainty here is not that there are global tensions, but how the U.S. will respond as they develop and how U.S. businesses can navigate external shocks. Will the U.S. be drawn into a new war in the Middle East? Can the U.S. manage multiple conflicts, already deeply involved in supporting Ukraine? Is the U.S. economy resilient enough to withstand trade disruptions? How can businesses strengthen their own resiliency?

Economic Protectionism Increasing Costs and Risks

Geopolitical tensions, the global pandemic and the unequal benefits of globalization are impacting economic policies of the U.S. and the political discourse around the merits of unrestrained free trade. Protectionist economic policies are creeping in, under the nomenclature of “secure supply chains,” “friend-shoring” and “home-shoring.” The U.S. has imposed tariffs on countries (even allies) accused of unfair trade practices and has foreclosed access to certain technologies by unfriendly countries, namely China.

While the response to some of these trade restrictions is new trade agreements with “friends” to regulate access under preferred terms (in essence creating multiple “friends” trade blocs for specific sectors), other responses are retaliatory, including counter-tariffs and export restrictions or outright bans. In 2024, the U.S. economy will see the impact of these trade fragmentation policies in acute ways, with upside risks of new business opportunities and downside risks of supply chain disruptions, critical resource competition, increased input costs, compliance risks and increased reputational risks.

Trade with China, which remains significant and important to the stability of the U.S. economy, will pose new risks in 2024. While Washington and Beijing have agreed to some political and security guardrails to manage the relationship, economic competition is unrestrained and stability in the bilateral relationship is not guaranteed. The December 2023 bipartisan report by the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party offers 150 recommendations for fundamentally resetting economic and technological competition with China; if even partially adopted, those recommendations risk reigniting the trade war.

2024 is a presidential election year for the U.S. A change of control of the executive branch could result in many economic and regulatory policy reversals. The definition of “friend” could shift or narrow. Restrictions on trade with China could accelerate.

Impacts of Climate Change and Sustainability Policies

2023 was the hottest year on record, and El Niño conditions are expected to further boost the warming trend. Many regions experienced record-breaking wildfire activity in 2023, including Canada, where 18 million hectares of land burned. Extreme storms caused life-threatening flooding in Europe, Asia and the Americas. 2024 is expected to bring even more climate hazards. The impacts will be physical and financial, including growing insurance losses and adverse impacts on operations and value chains. Analysts expect that in 2024, the economic and financial costs of adverse health impacts from climate change will increase, with risks related to the spread of infectious disease, insufficient access to clean water, and physical harm to the elderly and vulnerable. The direct economic effects will fall on health systems, along with productivity losses from extreme weather incidents and the effects of epidemics.

Energy transition to low-carbon emissions is underway in the U.S., but it is uneven and still uncertain. The financial market is investing in an impressive number of startups and large-scale projects revolving around cleantech. Still, there is hesitancy about the opportunities and risks of sustainability. Thus far, progress towards sustainability goals has been private sector-led and government-enabled. There is a risk that government incentive programs encouraging the transition to low-carbon energy could be reversed or curtailed under a new administration.

In 2024, some companies will face more climate disclosure compliance requirements. The Securities and Exchange Commission (SEC) is expected to release its final rule on climate change disclosures. The final action has been delayed several times because of pushback by public companies on some of the requirements, including Scope 3 greenhouse gas emission disclosures (those linked to supply chains and end users). California has not waited for the SEC’s final rule: In October 2023, Gov. Gavin Newsom signed into law legislation that will require large companies to disclose greenhouse gas emissions. The California climate laws go into effect in 2026, but companies will need to start much earlier to build the capabilities to plan, track and report their carbon footprint. U.S. companies doing business in the European Union will also need to comply with the EU Corporate Sustainability Reporting Directive, with rules coming into force in mid-2024.

Disruptive Technology

In 2023, generative AI was the talk of the town; in 2024, it will be the walk. Companies are popping up with new tools for every imaginable sector, promising efficiency, task automation, customization, personalization and cost reduction. Business leaders are scrambling to integrate AI to gain a competitive edge, while navigating the everyday risks related to privacy, liability and security. While there are concerns that AI will displace humans, there is a growing consensus that although some jobs will disappear, people will focus on higher-value work. That said, new rounds of labor disruptions linked to workforce transition are likely in 2024.

2024 will also bring AI-generated misinformation and disinformation. Bad actors will spread “synthetic” content, such as sophisticated voice cloning, doctored images and counterfeit websites, seeking to manipulate people, damage companies and economies, and foment dissent.

In 2024, around 2 billion people in more than 50 countries will vote in elections at risk of manipulation by misinformation and disinformation, which could destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence, terrorism and erosion of democratic processes. Large democracies will hold elections in 2024, including the U.S., the EU, Mexico, South Korea, India, Pakistan, Indonesia and South Africa. Synthetic content can be very difficult to detect, while easy to produce with AI tools.

This is not a theoretical threat; synthetic content is already being disseminated in the U.S., targeting New Hampshire voters with robocalls sharing fake recorded messages from President Biden encouraging people not to vote in the primary election. The U.S. is already polarized, with citizens distrustful of the government and media, a ready vulnerability. Businesses are not immune. Notably, CEOs have stood apart with higher ratings for trustworthiness, and they risk being called upon to vouch for “truth” (and becoming collateral damage in the fray).

AI-powered malware will make 2023 cyber risks look like child’s play. Attackers can use AI algorithms to find and exploit software vulnerabilities, making attacks precise and effective. AI can help hackers quickly identify security measures and evade them. AI-created phishing attacks will be more sophisticated and difficult to detect because the algorithms can assess larger amounts of piecemeal information and craft messages that mimic communication styles.

The role of states backing cyber armies to spread disinformation or steal information is growing, part and parcel of the erosion of the existing international order. States face little deterrence from cross-border digital attacks because no established mechanisms yet exist to impose real costs.

CNN, BREAKING NEWS: CNN Targeted In Massive CIPA Case Involving A NEW Theory Under Section 638.51!

CNN is now facing a massive CIPA class action for violating CIPA Section 638.51 by allegedly installing “Trackers” on its website. In Lesh v. Cable News Network, Inc., filed in the Superior Court of the State of California by Bursor & Fisher, the plaintiff accuses the multinational news network of installing three pieces of tracking software to invade users’ privacy and track their browsing habits in violation of Section 638.51.

More on that in a bit…

As CIPAworld readers know, we predicted the 2023 privacy litigation trends for you.

We warned you of the risky CIPA Chat Box cases.

We broke the news on the evolution of CIPA Web Session recording cases.

We notified you of major CIPA class action lawsuits against some of the world’s largest brands facing millions of dollars in potential exposure.

Now – we are reporting on a lesser-known facet of CIPA – but one that might be even more dangerous for companies using new Internet technologies.

This new focus for plaintiffs’ attorneys appears to rely on the theory that website analytics tools are “pen register” or “trap and trace” devices under CIPA §638.51. These allegations also carry a massive $5,000-per-violation penalty.

First, let’s delve into the background.

The Evolution of the California Invasion of Privacy Act:

We know the California Invasion of Privacy Act is this weird little statute that was enacted decades ago and was designed to prevent eavesdropping and wiretapping because, of course, back then law enforcement was listening in on folks’ phone calls to find communists.

638.51 in particular was originally enacted back in the 1980s, and traditionally, “pen-traps” were employed by law enforcement to record outgoing and/or incoming telephone numbers on a telephone line.

Over the last two years, plaintiffs have been using these decades-old statutes against companies, claiming that the use of internet technologies such as website chat boxes, web session recording tools, JavaScript, pixels, cookies and other newfangled technologies constitutes “wiretapping” or “eavesdropping” on website users.

And California courts, which love to take old statutes and apply them to new technologies, have basically said internet communications are protected from being eavesdropped on.

Now California courts will have to address whether these newfangled technologies are also “pen-trap” “devices or processes” under 638.51. These new 638.51 cases involve technologies such as cookies, web beacons, JavaScript and pixels that obtain information about users and their devices as they browse websites and/or mobile applications. That information is then analyzed by the website operator or a third-party vendor to glean insights into users’ online activities.

Section 638.51:

Section 638.51 prohibits the use or installation, without first obtaining a court order, of “pen registers” (devices or processes that record or decode dialing, routing, addressing, or signaling information, commonly known as DRAS) and “trap and trace” devices (together, “pen-traps”), which law enforcement traditionally used to record the numbers dialed on outgoing calls or the numbers identifying incoming calls.

Unlike CIPA Section 631, which prohibits wiretapping (the real-time interception of the contents of communications without consent), Section 638.51 prohibits the collection of DRAS.

638.51 has limited exceptions, including where a service provider’s customer consents to the device’s use or where the device is used to protect the rights or property of the service provider.

Breaking Down the Terminology:

The term “pen register” means a device or process that records or decodes DRAS “transmitted by an instrument or facility from which a wire or electronic communication is transmitted, but not the contents of a communication.” §638.50(b).

The term “trap and trace” focuses on incoming, rather than outgoing numbers, and means a “device or process that captures the incoming electronic or other impulses that identify the originating number or other dialing, routing, addressing, or signaling information reasonably likely to identify the source of a wire or electronic communication, but not the contents of a communication.” §638.50(c).
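To make the DRAS-versus-contents distinction concrete in the web context, here is a minimal, purely illustrative sketch. The sample request fields and their grouping are our own assumptions for illustration, not a legal test:

```python
# Illustrative only: a loose analogy for the DRAS-vs-contents distinction
# applied to a web request. The field groupings are our own assumptions.

def split_request(request: dict) -> tuple[dict, dict]:
    """Separate addressing/routing-style fields from content-style fields."""
    addressing_keys = {"source_ip", "destination_host", "path"}  # who/where
    addressing = {k: v for k, v in request.items() if k in addressing_keys}
    contents = {k: v for k, v in request.items() if k not in addressing_keys}
    return addressing, contents

sample = {
    "source_ip": "203.0.113.7",         # identifies the source (DRAS-like)
    "destination_host": "example.com",  # routing/addressing (DRAS-like)
    "path": "/article/123",             # addressing (arguably content-adjacent)
    "body": "name=Jane&email=j@x.com",  # the substance of the communication
}

addressing, contents = split_request(sample)
```

The point of the analogy: a pen-register theory needs only the left-hand bucket, which is why no “contents” allegation is required.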

Lesh v. Cable News Network, Inc. (“CNN”) and Its Precedent:

This new wave of CIPA litigation stems from a single recent decision, Greenley v. Kochava, in which a California court allowed a “pen register” claim to move past the motion-to-dismiss stage. In Kochava, the plaintiff challenged the use of these new internet technologies, asserting that the defendant data broker’s software collected a variety of data such as geolocation, search terms, purchase decisions, and spending habits. Applying the plain meaning of the word “process,” the Kochava court concluded that “software that identifies consumers, gathers data, and correlates that data through unique ‘fingerprinting’ is a process that falls within CIPA’s pen register definition.”

The Kochava court noted that no other court had interpreted Section 638.51, and that while pen registers were traditionally physical machines used by law enforcement to record outbound calls from a telephone, “[t]oday pen registers take the form of software.” Accordingly, the court held that the plaintiff adequately alleged that the software could collect DRAS and was a “pen register.”

Kochava paved the way for 638.51 litigation, with hundreds of complaints filed since. The majority of these cases are being filed in Los Angeles County Superior Court by Pacific Trial Attorneys of Newport Beach.

In Lesh v. Cable News Network, Inc., the plaintiff accuses the multinational news network of installing three pieces of tracking software to invade users’ privacy and track their browsing habits in violation of CIPA Section 638.51(a), which proscribes any “person” from “install[ing] or us[ing] a pen register or a trap and trace device without first obtaining a court order.”

The plaintiff alleges CNN uses three “Trackers” (PubMatic, Magnite, and Aniview) on its website that constitute “pen registers.” According to the complaint, to make CNN’s website load, a user’s browser sends an “HTTP request” (or “GET” request) to CNN’s servers, where the data is stored. In response, CNN’s server sends an “HTTP response” back to the browser with a set of instructions on how to display the website: what images to load, what text should appear, or what music should play.

These instructions cause the Trackers to be installed on the user’s browser, which then causes the browser to send identifying information, including the user’s IP address, to the Trackers to analyze data, create and measure the performance of marketing campaigns, and target specific users for advertisements. Accordingly, the Trackers are “pen registers,” or so the complaint alleges.
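The mechanics the complaint describes can be sketched in a few lines. In this hypothetical example (the HTML and tracker domains are invented, not taken from CNN’s site), script tags in a server’s HTTP response direct the browser to contact third-party hosts, and each of those requests necessarily carries the user’s IP address:

```python
# Sketch of how an HTTP response can trigger third-party requests.
# The HTML and domain names below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcCollector(HTMLParser):
    """Collect the src attribute of every <script> tag in a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def third_party_hosts(html: str, first_party: str) -> list[str]:
    """Hosts other than the publisher's own that the browser will contact.
    Each such request transmits the requesting browser's IP address."""
    parser = ScriptSrcCollector()
    parser.feed(html)
    hosts = [urlparse(src).netloc for src in parser.sources]
    return [h for h in hosts if h and h != first_party]

page = """
<html><head>
  <script src="https://news.example/app.js"></script>
  <script src="https://adtech-one.example/pixel.js"></script>
  <script src="https://adtech-two.example/tag.js"></script>
</head></html>
"""
```

Under the plaintiffs’ theory, it is those automatic third-party loads, not anything the user knowingly types, that allegedly make the tools “pen registers.”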

On this basis, the plaintiff asks the court to certify the class and seeks statutory damages in addition to attorney fees. The alleged class is as follows:

“Pursuant to Cal. Code Civ. Proc. § 382, Plaintiff seeks to represent a class defined as all California residents who accessed the Website in California and had their IP address collected by the Trackers (the “Class”).

The following people are excluded from the Class: (i) any Judge presiding over this action and members of his or her family; (ii) Defendant, Defendant’s subsidiaries, parents, successors, predecessors, and any entity in which Defendant or their parents have a controlling interest (including current and former employees, officers, or directors); (iii) persons who properly execute and file a timely request for exclusion from the Class; (iv) persons whose claims in this matter have been finally adjudicated on the merits or otherwise released; (v) Plaintiff’s counsel and Defendant’s counsel; and (vi) the legal representatives, successors, and assigns of any such excluded persons.”

Under this expansive reading, plaintiffs are alleging that almost any technology that can track a user’s web session activity falls within the definition of a “pen register.”

We’ll keep an eye on this one, but until more helpful case law develops, the Kochava decision will keep the floodgates open for these new CIPA suits. Companies should keep in mind that unlike other CIPA cases under Sections 631 and 632.7, Section 638.51 allows a cause of action even where no “contents” are being “recorded,” making 638.51 easier to allege.

Additionally, companies should be mindful of CIPA’s consent exceptions and ensure they are obtaining consent for any technologies that may trigger CIPA.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability to both harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations. However, alongside day-to-day tasks, AI’s “ability to detect and respond to cyber threats and the need to secure AI-based application[s]” makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework that guides readers through the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz is considered a highly favorable platform for vulnerability searches compared with standard machine-learning AI, demonstrating the rapidly evolving nature of AI itself and of the products on offer.
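As a toy illustration of the behavior-based detection described above (the traffic numbers are invented, and real AI-driven tools use far richer models than this simple robust z-score):

```python
# Minimal sketch of behavior-based anomaly detection: flag samples that
# deviate sharply from the baseline using a median/MAD robust z-score.
# Illustrative only; production tools model behavior far more richly.
from statistics import median

def flag_anomalies(samples: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of samples whose modified z-score exceeds threshold."""
    med = median(samples)
    mad = median(abs(x - med) for x in samples)  # median absolute deviation
    if mad == 0:
        return []  # no spread in the baseline; nothing to compare against
    return [i for i, x in enumerate(samples)
            if 0.6745 * abs(x - med) / mad > threshold]

# Requests per minute from one host: steady traffic, then a burst that a
# signature-based scanner (matching only known bad payloads) would miss.
traffic = [52, 48, 50, 51, 49, 53, 47, 50, 400]
```

The point mirrors the paragraph above: the detector knows nothing about any specific malware signature, yet the behavioral outlier still stands out.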

A key recommendation is that AI systems should be used as an additional element alongside existing ICT and security systems and practices. Businesses must be aware of their continuous responsibility to have effective risk management in place, with AI assisting alongside for further mitigation. The reports do not set new standards or legislative parameters but instead emphasize the need for targeted guidelines, best practices and foundations that support cybersecurity and, in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist, but as ENISA highlights, they are usually designed to “address specific types of attack,” which “makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.” The report highlights that businesses need a pre-existing foundation of cybersecurity processes that AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no means a perfect solution. Machine-learning AI will act on what it has been taught through its programming, leaving the potential for its results to reflect unconscious bias in its interpretation of data. It is also important that businesses comply with applicable regulations such as the EU GDPR, the UK Data Protection Act 2018, the anticipated EU Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to shorten the average “breach lifecycle” by 108 days, saving time, cost and significant business resources. Further development of penetration-testing tools that specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models for managing cybersecurity. While AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, established governance frameworks and controls for the use of AI tools are critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool that could not only improve your business’s approach to cybersecurity and privacy but, through analysis of data, help assess behaviors and predict trends. AI should be used with caution, but done correctly it could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

2024: The Year of the Telehealth Cliff

What does December 31, 2024, mean to you? New Year’s Eve? Post-2024 election? Too far away to know?

Our answer: December 31, 2024, is when we will go over a “telehealth cliff” if Congress fails to act before that date, directly impacting care and access for Medicare beneficiaries. What is this telehealth cliff? Let’s back up a bit.

TELEHEALTH COVERAGE POLICIES

Current statute (1834(m) of the Social Security Act) lays out payment and coverage policies for Medicare telehealth services. As written, the provisions significantly limit Medicare providers’—and therefore patients’—ability to utilize telehealth services. Some examples:

  • If the patient is in their home when the telehealth service is being provided, telehealth is generally not eligible for reimbursement.
  • Providers cannot bill for telehealth services provided via audio-only communication.
  • There is a narrow list of providers who are eligible to seek reimbursement for telehealth services.

COVID-19-RELATED TELEHEALTH FLEXIBILITIES

When the COVID-19 pandemic hit in 2020, a public health emergency (PHE) was declared. Congress passed several laws, and the administration acted through its own authorities to provide flexibilities around these Medicare telehealth restrictions. In general, nearly all statutory limitations on telehealth were lifted during the PHE. As we all know, utilization of telehealth skyrocketed.

The PHE ended last year, and through subsequent congressional efforts and regulatory actions by the Centers for Medicare and Medicaid Services (CMS), many flexibilities were extended beyond the end of the PHE, through December 31, 2024. Congress and CMS continue to grapple with how to support the provision of Medicare telehealth services for the future.

CMS has taken steps through the annual payment rule, the Medicare Physician Fee Schedule (MPFS), to align many of the payment and coverage policies for which it has regulatory authority with congressional deadlines. CMS has also restructured its telehealth list, giving more clarity to stakeholders and Congress as to which pandemic-era telehealth services could continue if an extension is passed. But CMS can’t address the statutory limitations on its own. Congress must legislate. CMS highlighted this in the final calendar year (CY) 2024 MPFS rule released on November 2, 2023, noting that “while the CAA, 2023, does extend certain COVID-19 PHE flexibilities, including allowing the beneficiary’s home to serve as an originating site, such flexibilities are only extended through the end of CY 2024.”

THE TELEHEALTH CLIFF

This brings us to the telehealth cliff. CMS generally releases the annual MPFS proposed rule in July, with the final rule coming on or around November 1. If history is any indication, Congress is not likely to act on the extensions much before the current December 31 deadline. This sets up the potential for a high level of uncertainty headed into 2025.

If we go over it, the telehealth cliff would directly impact care and access for Medicare beneficiaries. The effects could be felt acutely in rural and underserved areas, where patients have been able to access, via telehealth, medical services that may have been out of reach for them in the past. The telehealth cliff would also impact how providers interact with their patients, and their collective ability to continue to utilize telehealth in a way that has benefited patients and providers alike. It could also influence how health plans choose to cover these services in the private marketplace beyond 2024. Such a dramatic change would impact business decisions for many providers and practices heading into 2025. And, at a time when provider shortages are still a significant issue, it would eliminate an option that has allowed many providers, practices and facilities to extend scarce resources for patient care.

TAKE ACTION

Stakeholders should be raising these concerns to Congress now. There are many ways to engage, including reaching out directly to key Members of Congress, looking for opportunities to testify or submit written testimony for relevant congressional hearings, and participating in organized events where Members of Congress will be present. This cliff can be avoided, but not without a concentrated effort and a lot of noise.

Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are revolutionizing industries, bringing about significant changes in information governance practices. Over the next few years, it is anticipated that an increasing number of companies will harness the power of AI and automation to drive efficient data analysis, classification, and management. This transformative approach will not only enhance risk identification and compliance but also streamline workflows and alleviate administrative burdens, leading to improved overall operational efficiency and effectiveness. As organizations adapt and embrace these technological advancements, they will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

FCC Adopts Updated Data Breach Notification Rules

On December 13, 2023, the Federal Communications Commission (FCC) voted to update its 16-year-old data breach notification rules (the “Rules”). Pursuant to the FCC update, providers of telecommunications, Voice over Internet Protocol (VoIP) and telecommunications relay services (TRS) are now required to notify the FCC of a data breach, in addition to existing obligations to notify affected customers, the FBI and the U.S. Secret Service.

The updated Rules introduce a new customer notification timing requirement, requiring notice of a data breach to affected customers without unreasonable delay after notification to the FCC and law enforcement agencies, and in no case more than 30 days after the reasonable determination of a breach. The new Rules also expand the definition of “breach” to include “inadvertent access, use, or disclosure of customer information, except in those cases where such information is acquired in good faith by an employee or agent of a carrier or TRS provider, and such information is not used improperly or further disclosed.” The updated Rules further introduce a harm threshold, whereby customer notification is not required if a carrier or TRS provider can “reasonably determine that no harm to customers is reasonably likely to occur as a result of the breach,” or where the breach solely involves encrypted data and the encryption key was not affected.
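For compliance teams tracking the new timing requirement, the 30-day outer limit for customer notice is simple calendar arithmetic from the reasonable determination of a breach. A minimal sketch (dates are illustrative only; this is calendar math, not legal advice):

```python
# Sketch of the updated Rules' 30-day outer limit for customer notice,
# measured from the reasonable determination of a breach. Illustrative
# dates; actual obligations also require notice "without unreasonable
# delay," which may fall well before this cap.
from datetime import date, timedelta

CUSTOMER_NOTICE_OUTER_LIMIT = timedelta(days=30)

def customer_notice_deadline(determination_date: date) -> date:
    """Latest permissible customer-notification date under the 30-day cap."""
    return determination_date + CUSTOMER_NOTICE_OUTER_LIMIT

# e.g., a breach reasonably determined on March 1, 2024
deadline = customer_notice_deadline(date(2024, 3, 1))
```

Note that the cap is an outer limit only: notice must still go out “without unreasonable delay” after notifying the FCC and law enforcement, which in practice may be much sooner.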