AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

Following laws enacted in jurisdictions such as ColoradoNew York CityTennessee, and the state’s own Artificial Intelligence Video Interview Act, on August 9, 2024, Illinois’ Governor signed House Bill (HB) 3773, also known as the “Limit Predictive Analytics Use” bill. The bill amends the Illinois Human Rights Act (Act) by adding certain uses of artificial intelligence (AI), including generative AI, to the long list of actions by covered employers that could constitute civil rights violations.

The amendments made by HB3773 take effect January 1, 2026, and add two new definitions to the law.

“Artificial intelligence” – which according to the amendments means:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of AI includes “generative AI,” which has its own definition:

an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following: (1) textual outputs, such as short answers, essays, poetry, or longer compositions or answers; (2) image outputs, such as fine art, photographs, conceptual art, diagrams, and other images; (3) multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and (4) other content that would be otherwise produced by human means.

The plethora of AI tools available for use in the workplace continues unabated as HR professionals and managers vie to adopt effective and efficient solutions for finding the best candidates, assessing their performance, and otherwise improving decision making concerning human capital. In addition to understanding whether an organization is covered by a regulation of AI, such as HB3773, it also is important to determine whether the technology being deployed also falls within the law’s scope. Assuming the tool or application is not being developed inhouse, this analysis will require, among other things, working closely with the third-party vendor providing the tool or application to understand its capabilities and risks.

According to the amendments, covered employers can violate the Act in two ways. First, an employer that uses AI with respect to – recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment – and which has the effect of subjecting employees to discrimination on the basis of protected classes under the Act may constitute a violation. The same may be true for employers that use zip codes as a proxy for protected classes under the Act.

Second, a covered employer that fails to provide notice to an employee that the employer is using AI for the purposes described above may be found to have violated the Act.

Unlike the Colorado or New York City laws, the amendments to the Act do not require a impact assessment or bias audit. They also do not provide any specifics concerning the notice requirement. However, the amendments require the Illinois Department of Human Rights (IDHR) to adopt regulations necessary for implementation and enforcement. These regulations will include rules concerning the notice, such as the time period and means for providing same.

We are sure to see more regulation in this space. While it is expected that some common threads will exist among the various rules and regulations concerning AI and generative AI, organizations leveraging these technologies will need to be aware of the differences and assess what additional compliance steps may be needed.

Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, the statement’s release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition in general and competition within the AI ecosystem—and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors as well as competition within the AI ecosystem. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated April 2024 report on AI. In June 2024, French competition authorities released a report on Generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundational models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But with any technological inflection point, which may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outside influence over the future development of these tools.” This concentration may stifle competition, disrupt innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or coopt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement also acknowledges that other competition and consumer risks are also associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. Consumers may be harmed, too, by AI. As the CMA, DOJ, and the FTC have consumer protection authority, these authorities will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictions contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address specific enforcement tools and actions the enforcers may take, the statement’s release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • EU published the final AI Act, setting it into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk and imposes obligations for their use and potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (or twenty days from the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that will subject AI applications to four different levels of increasing regulation: (1) “unacceptable risk,” which are banned; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication commences the timeline of implementation over the next three years, as well as outline when we should expect to see more guidance on how it will be applied. The most critical dates for employers are:

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2025 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus not high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect on August 1, 2027, a year later.

Next Steps

Adopting the EU AI Act will set consistent standards across the EU nations. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.

The Economic Benefits of AI in Civil Defense Litigation

The integration of artificial intelligence (AI) into various industries has revolutionized the way we approach complex problems, and the field of civil defense litigation is no exception. As lawyers and legal professionals navigate the complex and often cumbersome landscape of civil defense, AI can offer a transformative assistance that not only enhances efficiency but also significantly reduces client costs. In this blog, we’ll explore the economic savings associated with employing AI in civil defense litigation.

Streamlining Document Review
One of the most labor-intensive and costly aspects of civil defense litigation is the review of vast amounts of discovery documents. Traditionally, lawyers and legal teams spend countless hours sifting through documents to identify and categorize relevant information, a process that is both time-consuming and costly. AI-powered tools, such as Large Language Models (LLM) can automate and expedite this process.

By using AI to assist in closed system document review, law firms can drastically cut down on the number of billable hours required for this task. AI assistance can quickly and accurately identify relevant documents, flagging pertinent information and reducing the risk of material oversight. This not only speeds up the review process and allows a legal team to concentrate on analysis rather than document digest and chronology, but significantly lowers the overall cost of litigation to the client.

By way of example – a case in which 50,000 medical treatment record and bills must be analyzed, put in chronology and reviewed for patient complaints, diagnosis, treatment, medial history and prescription medicine use, could literally take a legal team weeks to complete. With AI assistance the preliminary ground work such as document organization, chronologizing complaints and treatments and compiling prescription drug lists can be completed in a matter of minutes, allowing the lawyer to spend her time in verification, analysis and defense development and strategy, rather than information translation and time consuming data organization.

Enhanced Legal Research
Legal research is another growing area where AI can yield substantial economic benefits. Traditional legal research methods involve lawyers poring over case law, statutes, and legal precedents to find those cases that best fit the facts and legal issues at hand. This process can be incredibly time-intensive, driving up costs for clients. Closed AI-powered legal research platforms can rapidly analyze vast databases of verified legal precedent and information, providing attorneys with precise and relevant case law in a fraction of the time. Rather than conducting time consuming exhaustive searches for the right cases to analysis, a lawyer can now stream line the process with AI assistance by flagging on-point cases for verification, review, analysis and argument development.

The efficiency of AI-driven legal research can translate into significant cost savings for the client. Attorneys can now spend more time on argument development and drafting, rather than bogged down in manual research. For clients, this means lower legal fees and faster resolution of cases, both of which contribute to overall economic savings.

Predictive Analytics and Case Strategy
AI’s evolving ability to analyze legal historical data and identify patterns is particularly valuable in the realm of predictive analytics. In civil defense litigation, AI can be used to assist in predicting the likely outcomes of cases based on jurisdictionally specific verdicts and settlements, helping attorneys to formulate more effective strategies. By sharpening focus on probable outcomes, legal teams can make informed decisions about whether to settle a case or proceed to trial. Such predictive analytics allow clients to better manage their risk, thereby reducing the financial burden on defendants.

Automating Routine Tasks
Many routine tasks in civil defense litigation, such as preparation of document and pleading chronologies, scheduling, and case management, can now be automated using AI. Such automation reduces the need for manual intervention, allowing legal professionals to focus on more complex and value-added case tasks. By automating such routine tasks, law firms can operate more efficiently, reducing overhead costs and improving their bottom line. Clients benefit from quicker turnaround times and lower legal fees, resulting in overall economic savings.

Conclusion
The economic savings for clients associated with using AI in civil defense litigation can be substantial. From streamlining document review and enhancing legal research to automating routine tasks and reducing discovery costs, AI offers a powerful tool for improving efficiency and lowering case costs. As the legal industry continues to embrace technological advancements, the adoption of AI in civil defense litigation is poised to become a standard practice, benefiting both law firms and their clients economically. The future of civil defense litigation is undoubtedly intertwined with AI, promising a more cost-effective and efficient approach to resolving legal disputes.

A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding Artificial Intelligence (AI) and the possibility of hallucinations in a closed system is necessary for the use of any such technology by a lawyer. AI has made significant strides in recent years, demonstrating remarkable capabilities in various fields, from natural language processing to large language models to generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical – a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in a closed systems, is crucial for improving AI reliability in the practice of law.

What are AI Hallucinations
AI hallucinations are instances where AI systems generate information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompt, fabricated case details, false medical analysis or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In the area of legal practice this can include environments or legal AI tools which rely upon a selected universe of information from which to access such information as a case file database, saved case specific medical records, discovery responses, deposition transcripts and pleadings.

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open facing AI which can access the internet, rely entirely on the data they were trained on. If the data is incomplete, biased, or not representative of the real world the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios not-well presented in its training data. Similarly, if an AI tool is used incorrectly by way of misused data prompts, a closed system could result in incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.

Extrapolation Error
AI models can generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible sounding but factually incorrect information based upon incomplete context.

Implication of Hallucination for lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI- generated content without verification could possibly lead to the dissemination or reliance upon false information, which can grievously effect both a client and the lawyer. Lawyers have a duty to provide accurate and reliable advise, information and court filings. Using AI tools that can possibly produce hallucinations without proper checks could very well breach a lawyer’s ethical duty to her client and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safe guard against hallucinations. A lawyer should always verify any AI generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs especially in critical areas such as legal research, document drafting and case analysis is an ethical requirement.

Notably, the lawyer’s chose of AI tool is critical. A well vetted closed system allows for the tracing of the origin of output and a lawyer to maintain control over the source materials. In the instance of prompt-based data searches, with multiple task prompts, a comprehensive understanding of how the prompts were designed to be used and the proper use of same is also essential to avoid hallucinations in a closed system. Improper use of the AI tool, even in a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to utilize AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates can provide a more effective use of AI tools and help to safeguard against hallucinations.

Take Away
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management and training a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider how this technology has a twofold effect.

It’s a classic double-edged technology: in the right hands it can help detect fraud and bolster compliance, but in the wrong it can snuff out would-be-whistleblowers and weaken accountability mechanisms. Employees should assume it is being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

There are two major downsides to the implementation of compliance algorithms that experts foresee: institutions avoiding culpability and tracking whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it. This same technology can be used against employees to monitor and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.

AI compliance algorithms are promised to operate as follows:

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.

Yuktesh Kashyap, associate Vice President of data science at Sigmoid explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be an ineffective tool that allow institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by calling it due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be used as a powerful tool for identifying fraud, there is still no method for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle, given societies standard due process. These algorithms should be used in conjunction with human judgment to determine compliance or lack thereof. Due process is needed so that individuals can understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at Brookings Institute’s Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

Nowadays most office jobs (whether remote or in person) conduct operations fully online. Employees are required to use company computers and networks to do their jobs. Data generated by each employee passes through these devices and networks. Meaning, your privacy rights are questionable.

Because of this, whistleblowing will get much harder – organizations can employ the technology they initially implemented to catch fraud to instead catch whistleblowers. They can monitor employees via the capabilities built into our everyday tech: cameras, emails, keystroke detectors, online activity logs, what is downloaded, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company is defining a whistleblower as a problem, they can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they might find problematic (such as a whistleblower) and all the information they produce, including the keywords that might indicate fraud. Creators of these algorithms promise that soon their products will be able to detect all sorts of patterns and feelings, such as emotion and sentiment.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way and makes it difficult to transmit confidential information. A whistleblower must, at some point, download information. Since you will be doing that on a company network, and that is easily detected these days.”

But the question of what becomes of the whistleblower is based on whether the compliance officers operate in support of the company or the public interest – they will have an extraordinary amount of information about the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations should just be a lot more transparent about their use of these tools. They should inform their employees what kind of information they are using, how they are monitoring employees, and what kind of software they use. Are they using detection? Software of any sort? Are they monitoring keystrokes?

Employees should want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which might be a good argument to collect information, but it does not mean they should keep that information for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about length of storage and what is done with this data and once it is used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. This ensures that detected fraud is not only identified, but also reported and addressed, protecting taxpayer money, and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation would hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect where potential whistleblowers would not act out of fear of detection.

Already being employed in our institutions, experts believe these AI-driven compliance systems must have independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.

For more news on AI Compliance and Whistleblowing, visit the NLR Communications, Media & Internet section.

White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

And now the White House published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine worker’s rights including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve worker’s job quality.
  • Supportive during transition – Employers support workers during job transitions related to AI.
  • Privacy and Security of Data – Worker’s data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.

Continuing Forward: Senate Leaders Release an AI Policy Roadmap

The US Senate’s Bipartisan AI Policy Roadmap is a highly anticipated document expected to shape the future of artificial intelligence (AI) in the United States over the next decade. This comprehensive guide, which complements the AI research, investigations, and hearings conducted by Senate committees during the 118th Congress, identifies areas of consensus that could help policymakers establish the ground rules for AI use and development across various sectors.

From intellectual property reforms and substantial funding for AI research to sector-specific rules and transparent model testing, the roadmap addresses a wide range of AI-related issues. Despite the long-awaited arrival of the AI roadmap, Sen. Chuck Schumer (D-NY), the highest-ranking Democrat in the Senate and key architect of the high-level document, is expected to strongly defer to Senate committees to continue drafting individual bills impacting the future of AI policy in the United States.

The Senate’s bipartisan roadmap is the culmination of a series of nine forums held last year by the same group, during which they gathered diverse perspectives and information on AI technology. Topics of the forums included:

  1. Inaugural Forum
  2. Supporting US Innovation in AI
  3. AI and the Workforce
  4. High Impact Uses of AI
  5. Elections and Democracy
  6. Privacy and Liability
  7. Transparency, Explainability, Intellectual Property, and Copyright
  8. Safeguarding
  9. National Security

The wide range of views and concerns expressed by over 150 experts including developers, startups, hardware and software companies, civil rights groups, and academia during these forums helped policymakers develop a thorough and inclusive document that reveals the areas of consensus and disagreement. As the 118th Congress continues, it’s expected that Sen. Schumer will reach out to his counterparts in the US House of Representatives to determine the common areas of interest. Those bipartisan and bicameral conversations will ultimately help Congress establish the foundational rules for AI use and development, potentially shaping not only the future of AI in the United States but also influencing global AI policy.

The final text of this guiding document focuses on several high-level categories. Below, we highlight a handful of notable provisions:

Publicity Rights (Name, Image, and Likeness)

The roadmap encourages senators to consider whether there is a need for legislation that would protect against the unauthorized use of one’s name, image, likeness, and voice, as it relates to AI. While state laws have traditionally recognized the right of individuals to control the commercial use of their so-called “publicity rights,” federal recognition of those rights would mark a major shift in intellectual property law and make it easier for musicians, celebrities, politicians, and other prominent public figures to prevent or discourage the unauthorized use of their publicity rights in the context of AI.

Disclosure and Transparency Requirements

Noting that the “black box” nature of some AI systems can make it difficult to assess compliance with existing consumer protection and civil rights laws, the roadmap encourages lawmakers to ensure that regulators are able to access information directly relevant to enforcing those laws and, if necessary, place appropriate transparency and “explainability” requirements on “high risk” uses of AI. The working group does not offer a definition of “high risk” use cases, but suggests that systems implicating constitutional rights, public safety, or anti-discrimination laws could be forced to disclose information about their training data and factors that influence automated or algorithmic decision making. The roadmap also encourages the development of best practices for when AI users should disclose that their products utilize AI, and whether developers should be required to disclose information to the public about the data sets used to train their AI models.

The document also pushes senators to develop sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Increased Funding for AI Innovation

On the heels of the findings included in the National Security Commission on Artificial Intelligence’s (NSCAI) final report, the roadmap encourages Senate appropriators to provide at least $32 billion for AI research funding at federal agencies, including the US Department of Energy, the National Science Foundation, and the National Institute of Standards and Technology. This request for a substantial investment underscores the government’s commitment to advancing AI technology and seeks to position federal agencies as “AI ready.” The roadmap’s innovation agenda includes funding the CHIPS and Science Act, support for semiconductor research and development to create high-end microchips, modernizing the federal government’s information technology infrastructure, and developing in-house supercomputing and AI capacity in the US Department of Defense.

Investments in National Defense

Many members of Congress believe that creating a national framework for AI will also help the United States compete on the global stage with China. Senators who see this as the 21st century space race believe investments in the defense and intelligence community’s AI capabilities are necessary to push back against China’s head start in AI development and deployment. The working group’s national security priorities include leveraging AI’s potential to build a digital armed services workforce, enhancing and accelerating the security clearance application process, blocking large language models from leaking intelligence or reconstructing classified information, and pushing back on perceived “censorship, repression, and surveillance” by Russia and China.

Addressing AI in Political Ads

Looking ahead to the 2024 election cycle, the roadmap’s authors are already paying attention to the threats posed by AI-generated election ads. The working group encourages digital content providers to watermark any political ads made with AI and include disclaimers in any AI-generated election content. These guardrails also align with the provisions of several bipartisan election-related AI bills that passed out of the Senate Rules Committee the same day of the roadmap’s release.

Privacy and Legal Liability for AI Usage

The AI Working Group recommends the passage of a federal data privacy law to protect personal information. The AI Working Group notes that the legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and the role of data brokers. Support for these principles is reflected in numerous state privacy laws enacted since 2018, and in bipartisan, bicameral draft legislation (the American Privacy Rights Act) supported by Rep. McMorris Rogers (D-WA), and Sen. Maria Cantwell (D-WA).

As we await additional legislative activity later this year, it is clear that these guidelines will have far-reaching implications for the AI industry and society at large.

CFTC Releases Artificial Intelligence Report

On 2 May 2024, the Commodity Futures Trading Commission’s (CFTC) Technology Advisory Committee (Committee) released a report entitled Responsible AI in Financial Markets: Opportunities, Risks & Recommendations. The report discusses the impact and future implications of artificial intelligence (AI) on financial markets and further illustrates the CFTC’s desire to oversee the AI space.

In the accompanying press release, Commissioner Goldsmith Romero highlighted the significance of the Committee’s recommendations, acknowledging decades of AI use in financial markets and proposing that new challenges will arise with the development of generative AI. Importantly, the report proposes that the CFTC develop a sector-specific AI Risk Management Framework addressing AI-associated risks.

The Committee opined that, without proper industry engagement and regulatory guardrails, the use of AI could “erode public trust in financial markets.” The report outlines potential risks associated with AI in financial markets such as the lack of transparency in AI decision processes, data handling errors, and the potential reinforcement of existing biases.

The report recommends that the CFTC host public roundtable discussions to foster a deeper understanding of AI’s role in financial markets and develop an AI Risk Management Framework for CTFC-registered entities aligned with the National Institute of Standards and Technology’s AI Risk Management Framework. This approach aims to enhance the transparency and reliability of AI systems in financial settings.

The report also calls for continued collaboration across federal agencies and stresses the importance of developing internal AI expertise within the CFTC. It advocates for responsible and transparent AI usage that adheres to ethical standards to ensure the stability and integrity of financial markets.