Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet.  Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.

II. Key Compliance Timelines (as of March 2024):

Time Frame  Anticipated Milestones
6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations will take effect for high-risk AI systems covered by Annex II of the AI Act (AI systems that are safety components of, or are themselves, products already subject to EU harmonization legislation, such as medical devices and machinery).
By the end of 2030
  • Compliance deadlines arrive for certain AI systems that are components of large-scale IT systems established by EU law in the areas of freedom, security, and justice, where those systems were placed on the market or put into service before the Act's general application date.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.
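Because every milestone above is keyed to the Entry into Force date (20 days after publication in the Official Journal), the staged deadlines can be translated into concrete calendar dates once publication occurs. The snippet below is a minimal sketch of that date arithmetic, assuming a purely hypothetical publication date; the actual dates will depend on when the final text is published, and the sketch uses the third-party python-dateutil package for month math.

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Hypothetical publication date in the EU Official Journal (illustrative only).
publication = date(2024, 7, 1)

# Entry into Force is 20 days after publication.
entry_into_force = publication + timedelta(days=20)

# Staged milestones, expressed in months after Entry into Force (see the table above).
milestones = {
    "Prohibitions on unacceptable-risk systems": 6,
    "General-purpose AI model obligations": 12,
    "Post-market monitoring implementing act": 18,
    "High-risk (Annex III) obligations; member-state penalty rules": 24,
    "High-risk (Annex II) obligations": 36,
}

print(f"Entry into Force: {entry_into_force}")
for label, months in milestones.items():
    print(f"{label}: {entry_into_force + relativedelta(months=months)}")
```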

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.

UNDER SURVEILLANCE: Police Commander and City of Pittsburgh Face Wiretap Lawsuit

Hi CIPAWorld! The Baroness here, and I have an interesting filing that just came in the other day.

This one involves alleged violations of the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., and the Federal Wiretap Act, 18 U.S.C. § 2511, et seq.

Pursuant to the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703, et seq., a person is guilty of a felony of the third degree if he:

(1) intentionally intercepts, endeavors to intercept, or procures any other person to intercept or endeavor to intercept any wire, electronic or oral communication;

(2) intentionally discloses or endeavors to disclose to any other person the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know that the information was obtained through the interception of a wire, electronic or oral communication; or

(3) intentionally uses or endeavors to use the contents of any wire, electronic or oral communication, or evidence derived therefrom, knowing or having reason to know, that the information was obtained through the interception of a wire, electronic or oral communication.

Seven police officers employed by the City of Pittsburgh Bureau of Police have teamed up to sue Commander Matthew Lackner and the City of Pittsburgh.

Plaintiffs Colleen Jumba Baker, Brittany Mercer, Matthew O’Brien, Jonathan Sharp, Matthew Zuccher, Christopher Sedlak, and Devlyn Valencic Keller allege that from September 27, 2023 through October 4, 2023, Lackner used a body-worn camera to video and audio record them, and used the GPS component of the body-worn camera to track them.

Yes. To track them.

Plaintiffs allege they were unaware that Lackner was using a body-worn camera to video and audio record them, or that he was using the GPS function of the body-worn camera. Nor did they consent to have their conversations audio recorded by Lackner and/or the City of Pittsburgh.

Interestingly, Lackner has already been charged in a separate criminal case with four (4) counts of Illegal Use of Wire or Oral Communication under the Pennsylvania Wiretapping and Electronic Surveillance Act, 18 Pa.C.S.A. § 5703(1).

So now Plaintiffs seek compensatory damages, including actual damages or statutory damages, punitive damages, and reasonable attorneys’ fees.

The case was just filed, so it will be interesting to see how it progresses. But it is an important reminder that many states have their own privacy laws, and that those laws should be taken seriously to avoid lawsuits like this one.

Case No.: 2:24-cv-00461

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing policies you have in place addressing data privacy and cyber risk issues to address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023).

To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and adhering to the ethical duties and professional responsibilities mandated by the legal profession, below are some immediate considerations any complex commercial litigator should bear in mind when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could potentially be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system to ensure the safety of the client’s privileged, proprietary, and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an insecure, and potentially harmful, online network, but may also result in a violation of the duty to make reasonable efforts to prevent the disclosure of or unauthorized access to your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to adhere to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is incumbent not only on those preparing the briefs to ensure that any information included in a submission created with the assistance of an AI platform is accurate, but also on the lawyers responsible for oversight of a litigation team to be diligent in understanding when and to what extent AI is being used to aid the work of that lawyer’s subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To abide by those rules, litigation team leaders should make it a point to discuss the appropriate use of AI with their teams at the outset of any matter, and to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise in the use of AI in litigation, many Courts across the country have responded by developing standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court, January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and the development of work product.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s (FCC) updates to its data breach notification rules (the “Rules”) went into effect. The updates were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Courts of Appeals for the Sixth Circuit and the Fifth Circuit, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the United States Judicial Panel on Multidistrict Litigation consolidated in the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.


U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by China-based ByteDance. However, if the app is divested from ByteDance, the Act would allow continued use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. The threat arises because ByteDance is required by Chinese law to share users’ data with the Chinese Communist government. When the app is downloaded, TikTok obtains access to users’ microphones, cameras, and location services, essentially functioning as spyware tracking over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of the country’s top adversaries and the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will now make its way to the Senate, and if it passes, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP


The Race to Report: DOJ Announces Pilot Whistleblower Program

In recent years, the Department of Justice (DOJ) has rolled out a significant and increasing number of carrots and sticks aimed at deterring and punishing white collar crime. Speaking at the American Bar Association White Collar Conference in San Francisco on March 7, Deputy Attorney General Lisa Monaco announced the latest: a pilot program to provide financial incentives for whistleblowers.

While the program is not yet fully developed, the premise is simple: if an individual helps DOJ discover significant corporate or financial misconduct, she could qualify to receive a portion of the resulting forfeiture, consistent with the following predicates:

  • The information must be truthful and not already known to the government.
  • The whistleblower must not have been involved in the criminal activity itself.
  • Payments are available only in cases where there is not an existing financial disclosure incentive.
  • Payments will be made only after all victims have been properly compensated.

Money Motivates 

Harkening back to the “Wanted” posters of the Old West, Monaco observed that law enforcement has long offered rewards to incentivize tipsters. Since the passage of Dodd-Frank almost 15 years ago, the SEC and CFTC have relied on whistleblower programs that have been incredibly successful. In 2023, the SEC received more than 18,000 whistleblower tips (almost 50 percent more than the previous record set in FY2022) and awarded nearly $600 million — the highest annual total by dollar value in the program’s history. Over the course of 2022 and 2023, the CFTC received more than 3,000 whistleblower tips and paid nearly $350 million in awards — including a record-breaking $200 million award to a single whistleblower. Programs at the IRS and FinCEN have been similarly fruitful, as have qui tam actions for fraud against the government. But, Monaco acknowledged, those programs are by their very nature limited. Accordingly, DOJ’s program will fill in the gaps and address the full range of corporate and financial misconduct that the Department prosecutes. And though only time will tell, it seems likely that this program will generate a similarly large number of tips.

The Attorney General already has authority to pay awards for “information or assistance leading to civil or criminal forfeitures,” but that power has never been used in any systematic way. Now, DOJ plans to leverage that authority to offer financial incentives to those who (1) disclose truthful and new information regarding misconduct, (2) in which they were not involved, (3) where there is no existing financial disclosure incentive, and (4) after all victims have been compensated. The Department has begun a 90-day policy sprint to develop and implement the program, with a formal start date later this year. Acting Assistant Attorney General Nicole Argentieri explained that, because the statutory authority is tied to the Department’s forfeiture program, the Department’s Money Laundering and Asset Recovery Section will play a leading role in designing the program’s nuts and bolts, in close coordination with US Attorneys, the FBI, and other DOJ offices.

Monaco spoke directly to potential whistleblowers, saying that while the Department will accept information about violations of any federal law, it is especially interested in information regarding

  • Criminal abuses of the US financial system;
  • Foreign corruption cases outside the jurisdiction of the SEC, including FCPA violations by non-issuers and violations of the recently enacted Foreign Extortion Prevention Act; and
  • Domestic corruption cases, especially involving illegal corporate payments to government officials.

Like the SEC and CFTC whistleblower programs, DOJ’s program will allow whistleblower awards only in cases involving penalties above a certain monetary threshold, but that threshold has yet to be determined.

Prior to Monaco’s announcement, the United States Attorney’s Office for the Southern District of New York (SDNY) launched its own pilot “whistleblower” program, which became effective February 13, 2024. Both the Department-wide pilot and the SDNY policy require that the government have been previously unaware of the misconduct, but they differ in a critical way: the Department-wide policy under development will explicitly apply only to reports by individuals who did not participate in the misconduct, while SDNY’s program offers incentives to “individual participants in certain non-violent offenses.” Thus, it appears that SDNY’s program is actually more akin to a voluntary self-disclosure (VSD) program, while DOJ’s Department-wide pilot program will target a new audience of potential whistleblowers.

Companies with an international footprint should also pay attention to non-US prosecutors. The new Director of the UK Serious Fraud Office recently announced that he would like to set up a similar program, no doubt noticing the effectiveness of current US programs.

Corporate Considerations

Though directed at whistleblowers, the pilot program is equally about incentivizing companies to voluntarily self-disclose misconduct in a timely manner. Absent aggravating factors, a qualifying VSD will result in a much more favorable resolution, including possibly avoiding a guilty plea and receiving a reduced financial penalty. But because the benefits under both programs go only to those who provide DOJ with new information, every day that a company sits on knowledge about misconduct is another day that a whistleblower might beat it to reporting that misconduct and reap the reward for doing so.

“When everyone needs to be first in the door, no one wants to be second,” Monaco said. “With these announcements, our message to whistleblowers is clear: the Department of Justice wants to hear from you. And to those considering a voluntary self-disclosure, our message is equally clear: knock on our door before we knock on yours.”

By providing a cash reward for whistleblowing to DOJ, this program may present challenges for companies’ efforts to operate and maintain an effective compliance program. Such rewards may encourage employees to report misconduct to DOJ instead of via internal channels, such as a compliance hotline, which can lead to compliance issues going undiagnosed or untreated — such as in circumstances where the DOJ is the only entity to receive the report but does not take any further action. Companies must therefore ensure that internal compliance and whistleblower systems are clear, easy to use, and effective — actually addressing the employee’s concerns and, to the extent possible, following up with the whistleblower to make sure they understand the company’s response.

If an employee does elect to provide information to DOJ, companies must ensure that they do not take any action that could be construed as interfering with the disclosure. Companies already face potential regulatory sanctions for restricting employees from reporting misconduct to the SEC. Though it is too early to know, it seems likely that DOJ will adopt a similar position, and a company’s interference with a whistleblower’s communications potentially could be deemed obstruction of justice.

The False Claims Act in 2023: A Year in Review

In 2023, the government and whistleblowers were party to 543 False Claims Act (FCA) settlements and judgments, the highest number in a single year. As a result, collections under the FCA exceeded $2.68 billion, confirming that the FCA remains one of the government’s most important tools to root out fraud, safeguard government programs, and ensure that public funds are used appropriately. As in recent years, the healthcare industry was the primary focus of FCA enforcement, with over $1.8 billion recovered from matters involving hospitals, pharmacies, physicians, managed care providers, laboratories, and long-term acute care facilities. Other areas of focus in 2023 were government procurement fraud, pandemic fraud, and enforcement through the government’s new Civil Cyber-Fraud Initiative.


Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. It is in pursuance of this responsibility that the DOC has convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will engage in the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to this email address.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including:

  • guidelines on red-team testing for AI; and
  • a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.

Form I-9 Software: Avoiding Unlawful Discrimination When Selecting and Using I-9 and E-Verify Software Systems

A recent employer fact sheet from the U.S. Department of Justice (DOJ) and U.S. Department of Homeland Security (DHS) provides guidance for avoiding unlawful discrimination and other violations when using private software products to complete Forms I-9 and E-Verify cases.

Quick Hits

  • Employers are responsible for selecting and using software products that avoid unlawful discrimination and comply with Form I-9 and E-Verify requirements.
  • Employers must not use software products that violate Form I-9 and E-Verify requirements or involve system limitations that unlawfully discriminate among workers.
  • DOJ and DHS advise employers to train staff on Form I-9 and E-Verify requirements, and to provide access to published government guidance on Form I-9 and E-Verify requirements.

Employer Compliance With Form I-9 Software Products

The fact sheet reminds employers to use the current Form I-9 and properly complete the Form I-9 for each new hire after November 6, 1986, with any acceptable employee documents. Form I-9 systems must comply with requirements for electronic signatures and document storage including the ability to provide Form I-9 summary files containing all information fields on electronically stored Forms I-9. The fact sheet confirms required software capabilities and employer practices to properly complete the Form I-9 and avoid unlawful discrimination.

Employers must ensure that any software (see the illustrative sketch following this list):

  • allows employees to leave form fields blank, if they’re not required fields (such as Social Security numbers, if not required on E-Verify cases);
  • allows workers with only one name to record “Unknown” in the first name field and to enter their names in the last name field on the Form I-9;
  • uniquely identifies “each person accessing, correcting, or changing a Form I-9”;
  • permits Form I-9 corrections in Section 1 and does not complete Section 1 corrections for workers, unless completing preparer/translator certifications in Supplement A;
  • retains all employee information and documents presented for form completion; and
  • permits Form I-9 corrections in Section 2 and allows completion of Supplement B reverifications with any acceptable employee documents.
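For teams evaluating such tools, the sketch below illustrates in simplified form how two of these capabilities might be modeled in software: allowing non-required fields to remain blank, and attributing every correction to a uniquely identified user in an audit trail. The class and field names are hypothetical illustrations only; they are not drawn from any actual product or government specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical field model: any field not listed here is optional and may be left blank
# (for example, a Social Security number when it is not required for an E-Verify case).
REQUIRED_FIELDS = {"last_name", "first_name", "date_of_birth", "attestation"}

@dataclass
class AuditEntry:
    """One entry in the audit trail, attributing a change to a specific person."""
    user_id: str            # uniquely identifies each person accessing or changing the form
    field_name: str
    old_value: str | None
    new_value: str | None
    timestamp: datetime

@dataclass
class I9Record:
    """Simplified electronic Form I-9 record with an append-only audit trail."""
    fields: dict[str, str | None]
    audit_trail: list[AuditEntry] = field(default_factory=list)

    def missing_required_fields(self) -> list[str]:
        # Flag only missing *required* fields; optional fields may remain blank.
        return [name for name in REQUIRED_FIELDS if not self.fields.get(name)]

    def correct(self, user_id: str, field_name: str, new_value: str | None) -> None:
        # Record who made the correction, what changed, and when, before applying it.
        self.audit_trail.append(AuditEntry(
            user_id=user_id,
            field_name=field_name,
            old_value=self.fields.get(field_name),
            new_value=new_value,
            timestamp=datetime.now(timezone.utc),
        ))
        self.fields[field_name] = new_value
```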

Employer Compliance With E-Verify Software Products

The fact sheet reminds employers to comply with E-Verify program requirements when using software interfaces for E-Verify case completion. The fact sheet confirms required software capabilities and employer practices for completing E-Verify cases. Employers must still:

  • provide employees with current versions of Further Action Notices and Referral Date Confirmation letters in resolving Tentative Nonconfirmations (mismatches) in the E-Verify system;
  • provide English and non-English Further Action Notices and Referral Date Confirmation letters to employees with limited English proficiency;
  • display E-Verify notices confirming employer use of E-Verify;
  • “promptly notify employees in private” of E-Verify mismatches and provide Further Action Notices. If an employee who has been notified of a mismatch takes action to resolve the mismatch, provide the Referral Date Confirmation letter with case-specific information;
  • delay E-Verify case creation, when required. For example, when workers are awaiting Social Security numbers or have presented acceptable receipts for Form I-9 completion, employers must be able to delay E-Verify case creation; and
  • allow employees to resolve E-Verify mismatches prior to taking any adverse action, including suspensions or withholding pay.

Prohibited Employer Activity When Using Form I-9 Software

The fact sheet notes that an employer that uses private software products for Form I-9 or E-Verify compliance is prohibited from:

  • completing the Form I-9 on an employee’s behalf unless the employer is helping an employee complete Section 1 as a preparer or translator;
  • prepopulating employee information from other sources, providing auto-correct on employee inputs, or using predictive language for form completion;
  • requiring more or less information from employees for Form I-9 completion or preventing workers from using preparers/translators for form completion;
  • improperly correcting the Form I-9, improperly creating E-Verify cases, or failing to report corrections in the Form I-9 audit trail;
  • requesting more or different documentation than needed for Form I-9 completion, or failing to complete reverification in Supplement B of the Form I-9; and
  • imposing “unnecessary obstacles” in starting work or receiving pay, “such as by requiring a Social Security number to onboard or by not paying an employee who can complete the Form I-9 and is waiting for a Social Security number.” (Emphasis in the original.)

Staff Training and Technical Support

The fact sheet warns employers against using software products that do not provide technical support to workers, and it notes that employers are required to provide training to staff on Form I-9 and E-Verify compliance. Resources for staff members using software products for Form I-9 and E-Verify case completion include I-9 Central, the Handbook for Employers (M-274), the E-Verify User Manual (M-775), and DOJ publications.