The Privacy Patchwork: Beyond US State “Comprehensive” Laws

We’ve cautioned before about the danger of thinking only about US state “comprehensive” laws when assessing legal privacy and data security obligations in the United States. We’ve also mentioned that the US has a patchwork of privacy laws. That patchwork extends, to a certain extent, outside of the US as well. What laws exist in the patchwork that relate to a company’s activities?

There are laws that apply when companies host websites, including the most well-known, the California Online Privacy Protection Act (CalOPPA). It has been in effect since July 2004, thus predating the CCPA by 14 years. Then there are laws that apply if a company is collecting and using biometric identifiers, like Illinois’ Biometric Information Privacy Act.

Companies are subject to specific laws both in the US and elsewhere when engaging in digital communications. These laws include the US federal laws TCPA and TCFAPA, as well as CAN-SPAM. Digital communication laws exist in countries as wide-ranging as Australia, Canada, and Morocco, among many others. Then we have laws that apply when collecting information during a credit card transaction, like the Song-Beverly Credit Card Act (California).

Putting It Into Practice: When assessing your company’s obligations under privacy and data security laws, keep activity-specific privacy laws in mind. Depending on what you are doing, and in what jurisdictions, you may have more obligations to address than simply those found in comprehensive privacy laws.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider how this technology has a twofold effect.

It’s a classic double-edged technology: in the right hands it can help detect fraud and bolster compliance, but in the wrong hands it can snuff out would-be whistleblowers and weaken accountability mechanisms. Employees should assume it is being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

There are two major downsides that experts foresee in the implementation of compliance algorithms: institutions using them to avoid culpability and to track whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it, and this same technology can be used against employees to monitor for and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to spot misconduct quickly and efficiently.

Vendors promise that AI compliance algorithms operate as follows:

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.

Yuktesh Kashyap, associate vice president of data science at Sigmoid, explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be an ineffective tool that allows institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and then evade enforcement action by calling it due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be used as a powerful tool for identifying fraud, there is still no method for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle, given society’s standards of due process. These algorithms should be used in conjunction with human judgment to determine compliance or the lack thereof. Due process is needed so that individuals can understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at the Brookings Institution’s Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

Nowadays most office jobs (whether remote or in person) are conducted fully online. Employees are required to use company computers and networks to do their jobs, and the data generated by each employee passes through these devices and networks. As a result, employees’ privacy rights are questionable at best.

Because of this, whistleblowing will get much harder: organizations can turn the technology they initially implemented to catch fraud toward catching whistleblowers instead. They can monitor employees via the capabilities built into everyday workplace tech: cameras, emails, keystroke detectors, online activity logs, download records, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they might find problematic (such as a whistleblower) and all the information they produce, including the keywords that might indicate fraud. Creators of these algorithms promise that their products will soon be able to detect patterns of emotion and sentiment as well.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way and makes it difficult to transmit confidential information. A whistleblower must, at some point, download information. Since you will be doing that on a company network, and that is easily detected these days.”

What becomes of the whistleblower depends on whether the compliance officers operate in support of the company or of the public interest: they will have an extraordinary amount of information about both the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations be far more transparent about their use of these tools. They should inform their employees what kind of information they are collecting, how they are monitoring employees, and what kind of software they use. Are they using detection software of any sort? Are they monitoring keystrokes?

Employees should want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which may be a good argument for collecting information, but it does not mean they should keep that information for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about the length of storage and what is done with the data once it has been used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. This ensures that detected fraud is not only identified but also reported and addressed, protecting taxpayer money and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation would hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect where potential whistleblowers would not act out of fear of detection.

Because these AI-driven compliance systems are already being employed in our institutions, experts believe they must have independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.

For more news on AI Compliance and Whistleblowing, visit the NLR Communications, Media & Internet section.

White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

And now the White House published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve workers’ job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

Five Compliance Best Practices for … Conducting a Risk Assessment

As an accompaniment to our biweekly series on “What Every Multinational Should Know About” various international trade, enforcement, and compliance topics, we are introducing a second series of quick-hit pieces on compliance best practices. Give us two minutes, and we will give you five suggested compliance best practices that will benefit your international regulatory compliance program.

Conducting an international risk assessment is crucial for identifying and mitigating potential risks associated with conducting business operations in foreign countries and complying with the expansive application of U.S. law. Because compliance is essentially an exercise in identifying, mitigating, and managing risk, the starting point for any international compliance program is to conduct a risk assessment. If your company has not done one within the last two years, then your organization probably should be putting one in motion.

Here are five compliance checks that are important to consider when conducting a risk assessment:

  1. Understand Business Operations: A good starting point is to gain a thorough understanding of the organization’s business operations, including products, services, markets, supply chains, distribution channels, and key stakeholders. You should pay special attention to new risk areas, including newly acquired companies and divisions, expansions into new countries, and new distribution patterns. Identifying the business profile of the organization, and how it raises systemic risks, is the starting point of developing the risk profile of the company.
  2. Assess Country- and Industry-Specific Risk Factors: Analyze the political, economic, legal, and regulatory landscape of each country where the organization operates or plans to operate. Consider factors such as political stability, corruption levels, regulatory environment, and cultural differences. You should also understand which countries raise indirect risks, such as for the transshipment of goods to sanctioned countries. You also should evaluate industry-specific risks and trends that may impact your company’s risk profile, such as the history of recent enforcement actions.
  3. Gather Risk-Related Data and Information: You should gather relevant data and information from internal and external sources to inform the risk-assessment process. Relevant examples include internal documentation, industry publications, reports of recent enforcement actions, and areas where government regulators are stressing compliance, such as the recent focus on supply chain factors. Use risk-assessment tools and methodologies to systematically evaluate and prioritize risks, such as risk matrices, risk heat maps, scenario analysis, and probability-impact assessments. (The Foley anticorruption, economic sanctions, and forced labor heat maps are found here.)
  4. Engage Stakeholders: Engage key stakeholders throughout the risk-assessment process to gather insights, perspectives, and feedback. Consult with local employees and business partners to gain feedback on compliance issues that are likely to arise while also seeking their aid in disseminating the eventual compliance dictates, internal controls, and other compliance measures that your organization ends up implementing or updating.
  5. Document Findings and Develop Risk-Mitigation Strategies: Document the findings of the risk assessment, including identified risks, their potential impact and likelihood, and recommended mitigation strategies. Ensure that documentation is clear, concise, and actionable. Use the documented findings to develop risk-mitigation strategies and action plans to address identified risks effectively while prioritizing mitigation efforts based on risk severity, urgency, and feasibility of implementation.
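The risk matrices and probability-impact assessments mentioned in step 3 reduce to a simple scoring exercise. A minimal sketch (the risk names and 1-5 ratings below are hypothetical illustrations, not a recommended taxonomy):

```python
# Hypothetical probability-impact scoring: each risk gets a 1-5 likelihood
# rating and a 1-5 impact rating; their product drives prioritization.
risks = {
    "sanctions/transshipment exposure": (4, 5),
    "third-party distributor bribery":  (3, 5),
    "forced-labor supply chain":        (2, 4),
    "export-control classification":    (2, 3),
}

def prioritize(risk_ratings):
    """Rank risks by probability x impact, highest score first."""
    scored = {name: p * i for name, (p, i) in risk_ratings.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in prioritize(risks):
    # Band thresholds are arbitrary cutoffs for a three-color heat map.
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    print(f"{score:>2}  {band:<6} {name}")
```

The same product-of-ratings logic underlies most risk heat maps; what matters is that the ratings are documented and revisited, not the particular scale chosen.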

Most importantly, you should recognize that assessing and addressing risk is an ongoing process. You should ensure your organization has established processes for the ongoing monitoring and review of risks to track changes in the risk landscape and evaluate the effectiveness of mitigation measures. Further, most multinational organizations should update their risk assessment at least once every two years to reflect evolving risks and business conditions as well as changing regulations and regulator enforcement priorities.

Justice Department has Opportunity to Revolutionize its Enforcement Efforts with Whistleblower Program

Over the past few decades, modern whistleblower award programs have radically altered the ability of numerous U.S. agencies to crack down on white-collar crime. This year, the Department of Justice (DOJ) may be joining their ranks, if it incorporates the key elements of successful whistleblower programs into the program it is developing.

On March 7, the Deputy Attorney General Lisa Monaco announced that the DOJ was launching a “90-day policy sprint” to develop “a DOJ-run whistleblower rewards program.” According to Monaco, the DOJ has taken note of the successes of the U.S.’s whistleblower award programs, such as those run by the Securities and Exchange Commission (SEC) and Internal Revenue Service (IRS), noting that they “have proven indispensable.”

Monaco understood that the SEC and IRS programs have been so successful because they “encourage individuals to report misconduct” by “rewarding whistleblowers.” But how an award program is administered is the key to whether or not it will work. There is a nearly 50-year history of what rules need to be implemented to transform these programs into highly effective law enforcement tools. The Justice Department needs to follow these well-defined rules.

The key element of all successful whistleblower award programs is very simple: if a whistleblower meets all of the requirements set forth by the government for compensation, the award must be mandatory and based on a percentage of the sanctions collected thanks to the whistleblower. A qualified whistleblower cannot be left out in the cold. Denying qualified whistleblowers compensation will destroy the trust necessary for a whistleblower program to work.

It is not the possibility of money that incentivizes individuals to report misconduct but the promise of money. Blowing the whistle is an immense risk, and individuals are only compelled to take such a risk when there is a real guarantee of an award.

This dynamic is laid bare in recent legislative history. There is a long track record of whistleblower laws and programs failing when awards are discretionary and then becoming immensely successful once awards are made mandatory.

For example, under the 1943 version of the False Claims Act, awards to whistleblowers were fully discretionary. After decades of ineffectiveness, Congress amended the law in 1986 to mandate that qualified whistleblowers receive awards of 15-30% of the proceeds collected by the government in the action connected with their disclosure.

The 1986 Senate Report explained why Congress was amending the law:

“The new percentages . . . create a guarantee that relators [i.e., whistleblowers] will receive at least some portion of the award if the litigation proves successful. Hearing witnesses who themselves had exposed fraud in Government contracting, expressed concern that current law fails to offer any security, financial or otherwise, to persons considering publicly exposing fraud.

“If a potential plaintiff reads the present statute and understands that in a successful case the court may arbitrarily decide to award only a tiny fraction of the proceeds to the person who brought the action, the potential plaintiff may decide it is too risky to proceed in the face of a totally unpredictable recovery.”

In the nearly four decades since awards were made mandatory, the False Claims Act has established itself as America’s premier anti-fraud law. The government has recovered over $75 billion of taxpayer money from fraudsters, the vast majority through whistleblower-initiated cases brought under the 1986 amendments that made awards mandatory.

Similar transformations occurred at both the IRS and SEC where ineffective discretionary award laws were replaced by laws which mandated that qualified whistleblowers receive a set percentage of the funds collected thanks to their whistleblowing. Since these reforms, the whistleblower programs have revolutionized these agencies’ enforcement efforts, leading directly to billions of dollars in sanctions and creating a massive deterrent effect on corporate wrongdoing.

Most recently, Congress reaffirmed the importance of mandatory whistleblower awards when it reformed the anti-money laundering whistleblower law. The original version of the law, which passed in January 2021, had no set minimum amount for awards, meaning that they were fully discretionary. After the AML Whistleblower Program struggled to take off, Congress listened to the feedback from whistleblower advocates and passed the AML Whistleblower Improvement Act to mandate that qualified money laundering whistleblowers are awarded.

Monaco states that the DOJ has long had the discretionary authority to pay whistleblower awards to individuals who report information leading to civil or criminal forfeitures and has “used this authority here and there — but never as part of a targeted program.”

The most important step in turning an underutilized and ineffective whistleblower award law into an “indispensable” whistleblower award program has been made clear over the past decades. Qualified whistleblowers must be guaranteed an award based on a percentage of the sanctions collected in connection with their disclosure.

By administering its whistleblower program in a way that mandates award payments, the DOJ would go a long way toward creating a whistleblower program that revolutionizes its ability to fight crime. The Justice Department has taken the most important first step: recognizing the importance of whistleblowers in reporting fraud. It now must follow through during its “90-day sprint” and make sure that its reform of the management of the Asset Forfeiture Fund works in practice. Whistleblowers who risk their jobs and careers need real, enforceable justice.

Using an LLC to Protect the Family Vacation Home

Vacation homes offer a retreat from daily life, providing a sanctuary to relax and create cherished family memories. Many owners envision passing down their vacation home for future generations to enjoy, but the lack of proper planning can often lead to intra-family disputes. Leaving a vacation home outright to children or other family members may be the easiest option, but the potential for discord over the control and usage of the property only increases as ownership is passed from one generation to the next. A limited liability company (LLC) can mitigate the risk of conflict and provide a tailored solution to meet the specific needs of a family.

When a vacation home is owned by an LLC, the membership interests in the LLC are passed down to younger generations, which allows for the continued use and enjoyment of the property by the family. The structure also provides a framework for management through an operating agreement, which governs the LLC. An operating agreement allows the original owner to create a plan for how the property will be used and managed as additional owners are added. The agreement can determine who is responsible for property management, how expenses should be apportioned and paid, and how decisions should be made, and it can provide guidelines for scheduling family usage. By establishing clear rules and procedures, an LLC can reduce the likelihood of disputes and encourage fairness among different generations.

Another benefit of an LLC is the ability to prevent unwanted transfers of ownership, thus ensuring that the property stays in the family. A well-drafted operating agreement can prohibit membership interests from being transferred to third parties, protecting the family as a whole from an individual’s divorce or creditor problems. The LLC can also hold additional assets, including rental income and deposits of other funds earmarked for property expenditures, which facilitates the proper management and use of resources to cover expenses.

An LLC offers an efficient structure to avoid intra-family turmoil and preserves the spirit of the family vacation home for generations to come.

For more news on Protecting Real Estate Ownership, visit the NLR Real Estate section.

Amendments to New York LLC Transparency Act Delay Effective Date, Among Other Changes

New York Governor Kathy Hochul last month signed into law amendments to the recently enacted New York LLC Transparency Act (as amended, the “NYLTA”), extending the NYLTA’s effective date from December 21, 2024, to January 1, 2026 (the “Effective Date”).

The NYLTA will require all limited liability companies (“LLCs”) formed under New York law, as well as foreign LLCs seeking authorization to do business in New York, to submit certain beneficial ownership information to the New York Department of State. LLCs will be required to disclose their beneficial owners unless the LLC qualifies for an exemption from the requirements. New York LLCs and foreign LLCs registered to do business in New York should evaluate their structure with counsel familiar with the NYLTA (and the federal Corporate Transparency Act (the “CTA”)) to determine whether they will have a filing obligation under the new law.

For New York LLCs formed on or prior to the Effective Date, and foreign LLCs authorized to do business in New York on or prior to the Effective Date, the deadline to file the required beneficial ownership report or the statement specifying the applicable exemption(s) from the filing requirement is January 1, 2027. For New York LLCs formed after the Effective Date, and foreign LLCs authorized in New York after the Effective Date, the NYLTA will require that beneficial ownership information be submitted within thirty days of filing the articles of organization for an LLC formed under New York law or the initial application for registration filed by a foreign LLC. Thereafter, the NYLTA (as amended) imposes an ongoing requirement to file an annual statement with the New York Department of State confirming or updating (1) the beneficial ownership disclosure information; (2) the street address of the entity’s principal executive office; (3) status as an exempt company, if applicable; and (4) such other information as may be designated by the New York Department of State.
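The two-tier initial deadline described above can be expressed as simple date arithmetic. A sketch (the function name is ours; the dates come from the amended statute as summarized here, and this is not legal advice):

```python
from datetime import date, timedelta

EFFECTIVE_DATE = date(2026, 1, 1)        # amended NYLTA effective date
GRANDFATHER_DEADLINE = date(2027, 1, 1)  # deadline for pre-existing LLCs

def initial_filing_deadline(formed_or_registered: date) -> date:
    """Deadline for the initial beneficial-ownership (or exemption) filing."""
    if formed_or_registered <= EFFECTIVE_DATE:
        # LLCs existing on or before the Effective Date file by Jan 1, 2027.
        return GRANDFATHER_DEADLINE
    # Later-formed or later-registered LLCs file within thirty days.
    return formed_or_registered + timedelta(days=30)

print(initial_filing_deadline(date(2026, 3, 1)))  # → 2026-03-31
```

After the initial filing, the annual-statement obligation recurs each year regardless of which tier applied.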

The definitions of important terms such as “exempt company,” “reporting company,” “applicant,” and “beneficial owner” used in the NYLTA refer to the equivalent definitions in the CTA but are limited in application only to LLCs. Correspondingly, the NYLTA shares the same 23 exemptions from the reporting requirements as the CTA. If an LLC falls within one or more of the available exemptions, however, in a departure from the CTA, the NYLTA requires the entity to submit a statement attested to under penalty of perjury indicating the specific exemption(s) for which the LLC qualifies.

Potential penalties for failing to comply with the NYLTA include monetary penalties of $500 for every day that a required filing is past due, as well as potential suspension or cancellation of the LLC.
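Because the monetary penalty accrues per day, exposure grows linearly with delay. A trivial sketch (the function name is ours; whether penalties are capped or subject to discretion is a question for counsel):

```python
def accrued_penalty(days_past_due: int, daily_rate: int = 500) -> int:
    """Monetary exposure for a late NYLTA filing at $500 per day past due."""
    return max(days_past_due, 0) * daily_rate

print(accrued_penalty(90))  # a filing 90 days late → 45000
```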

The amendments to the NYLTA also provide that the beneficial ownership information relating to natural persons will be deemed confidential except (1) by written consent of or request by the beneficial owner of the LLC; (2) by court order; (3) to federal, state, or local government agencies performing official duties as required by statute; or (4) for a valid law enforcement purpose. This is in contrast to the original New York statute, which provided for beneficial ownership information to be made publicly available in a searchable database.

The SEC Speaks–And Fails to Defend Mandatory Climate Disclosures

During the opening remarks of the two-day SEC Speaks Conference, Chairman Gensler failed to express any statement of support in connection with the SEC’s recently promulgated rule on mandatory climate disclosures. (Instead, his speech focused on a number of other topics, including clearinghouse rules and proposed regulations.) In contrast, Republican SEC Commissioner Uyeda devoted the entirety of his speech to offering critiques of the SEC’s newly enacted mandatory climate disclosure rule.

While most of Commissioner Uyeda’s criticisms had been previously voiced on other occasions, certain legal arguments achieved greater prominence in these remarks. In particular, Commissioner Uyeda emphasized the concept of materiality, noting that “[t]he significant changes in the final rule reflect a recognition that no disclosure rule that veers from materiality is likely to survive a court challenge,” and opining that “changes to selected portions of the rule text intended to mitigate legal risk do not necessarily convert a climate change activism rule to a material risk disclosure rule.” There was also a focus on procedural concerns, including a potential violation of the Administrative Procedure Act due to “the failure to repropose the rule” since “the changes were so significant,” and that “the fail[ure] to consider [the] rule’s economic consequences [renders] the adoption of the rule arbitrary and capricious.” Finally, Commissioner Uyeda compared the climate disclosure rule to the previously enacted conflict minerals rule (which was mandated by Congress), stating that “public companies and investors are stuck with a mandatory disclosure rule that deviates from financial materiality but fails to resolve the social purpose for which it was adopted.” Each of these arguments should be expected to feature in the upcoming litigation in the Eighth Circuit concerning the legality of the SEC’s climate disclosure rule.

Still, the decision by Chairman Gensler and his fellow Democratic Commissioners not to mount a robust public defense of the climate disclosure rule at the conference may simply reflect a shift in focus now that the rule has been adopted, rather than a retreat from it. Notably, just a few days earlier, on March 22, 2024, Chairman Gensler forcefully defended the SEC’s climate disclosure rule at a conference hosted by Columbia Law School, where his entire speech advocated the concept of mandatory disclosures and stated that the SEC’s climate disclosure rule “enhance[d] the consistency, comparability, and reliability of [climate-related] disclosures.” Moreover, it is altogether possible that a speech on the second day of the conference will offer a rejoinder to the varied critiques of the rule.

Commissioner Uyeda concluded: “Unlike the conflict minerals rule, which was mandated by Congress, the Commission has acted on its own volition to adopt a climate disclosure rule that seeks to exert societal pressure on companies to change their behavior. It is the Commission that determined to delve into matters beyond its jurisdiction and expertise. In my view, this action deviates from the Commission’s mission and contravenes established law.”

https://www.sec.gov/news/speech/uyeda-remarks-sec-speaks-040224

Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet.  Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.
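For companies taking inventory of their systems, the four tiers above can be sketched as a simple lookup. The mapping below is purely illustrative, drawn from the examples in this post; the function name, the category assignments, and the default-to-minimal behavior are our own assumptions, and actual classification requires legal analysis of the final Act text.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "banned outright in the EU"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of use cases to tiers, based on the examples above.
# Real classification requires legal review of the final Act text.
EXAMPLE_CLASSIFICATIONS = {
    "social credit scoring": RiskCategory.UNACCEPTABLE,
    "emotion recognition at work": RiskCategory.UNACCEPTABLE,
    "ai in critical infrastructure": RiskCategory.HIGH,
    "recruitment screening": RiskCategory.HIGH,
    "customer service chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}

def risk_tier(use_case: str) -> RiskCategory:
    """Look up an illustrative risk tier; unlisted uses default to MINIMAL
    here purely for the sketch, not as a legal conclusion."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case.lower(), RiskCategory.MINIMAL)
```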

II. Key Compliance Timelines (as of March 2024):

6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.
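The staged deadlines above lend themselves to a simple date calculation: entry into force is 20 days after publication in the EU’s Official Journal, and the later milestones run in whole calendar months from that date. The sketch below is a minimal illustration; the milestone names are our own labels, not terms from the Act, and the publication date in the usage note is hypothetical.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day
    (e.g., Jan 31 + 1 month -> Feb 28)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue

def milestones(publication: date) -> dict:
    """Sketch the staged timeline: entry into force is 20 days after
    publication; later deadlines run from the entry-into-force date."""
    entry = publication + timedelta(days=20)
    return {
        "entry_into_force": entry,
        "prohibitions_apply": add_months(entry, 6),          # unacceptable-risk bans
        "gpai_obligations": add_months(entry, 12),           # general-purpose AI models
        "post_market_monitoring_act": add_months(entry, 18), # Commission implementing act
        "high_risk_annex_iii": add_months(entry, 24),        # Annex III high-risk systems
    }
```

For example, with a hypothetical publication date of July 12, 2024, entry into force would fall on August 1, 2024, and the Annex III high-risk obligations would apply from August 1, 2026.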

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.
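The familiarize-classify-analyze sequence above can be tracked per system with a lightweight record. This is a minimal sketch under our own naming; none of these fields are terms defined by the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemReview:
    """Track one AI system through the preparatory steps described above.
    Field names are illustrative, not terms defined by the AI Act."""
    name: str
    risk_tier: str = "unclassified"  # unacceptable / high / limited / minimal
    gaps: list = field(default_factory=list)

    def record_gap(self, obligation: str) -> None:
        """Note an obligation that current practice does not yet satisfy."""
        self.gaps.append(obligation)

    @property
    def needs_remediation(self) -> bool:
        return bool(self.gaps)

# Usage: classify a system, then log gap-analysis findings against it.
review = AISystemReview(name="resume-screening model", risk_tier="high")
review.record_gap("human oversight procedures not documented")
```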

DOJ Plan to Offer Whistleblower Awards “A Good First Step”

The Department of Justice (DOJ) will launch a whistleblower rewards program later this year, Deputy Attorney General Lisa Monaco announced today. Monaco stated that other U.S. whistleblower award programs, such as the SEC, CFTC, IRS, and AML programs, “have proven indispensable” and that the DOJ plans to offer awards for tips not covered under these programs.

“This is a good first step, but the Justice Department has miles to go in creating a whistleblower program competitive with the programs managed by the U.S. Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC),” said Stephen M. Kohn.

“We hope that the DOJ will follow the lead of the SEC and CFTC and establish a central Whistleblower Office that can accept anonymous and confidential complaints. Such a program has been required under the anti-money laundering whistleblower law for over three years, but Justice has simply failed to follow the law,” added Kohn, who also serves as Chairman of the Board of the National Whistleblower Center.

According to Monaco, “under current law, the Attorney General is authorized to pay awards for information or assistance leading to civil or criminal forfeitures” but this authority has never been used “as part of a targeted program.” The DOJ is “launching a 90-day sprint to develop and implement a pilot program, with a formal start date later this year,” she stated.

While the specifics of the program have yet to be announced, Monaco did state that the DOJ will only offer awards to individuals who were not involved in the criminal activity itself.

“The Justice Department’s decision to exclude persons who may have had some involvement in the criminal activity is a step backwards and demonstrates a fundamental misunderstanding as to why the Dodd-Frank and False Claims Acts work so well,” continued Kohn. “When the False Claims Act was signed into law by President Abraham Lincoln in 1863, it was widely understood that the award laws worked best when they induced persons who were part of the conspiracy to turn in their former associates in crime. Justice needs to understand that by failing to follow the basic tenets of the most successful whistleblower laws ever enacted, their program is starting off on the wrong foot.”

Geoff Schweller also contributed to this article.