A Look at the Evolving Scope of Transatlantic AI Regulations

Regulation of artificial intelligence (AI) has changed significantly on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s upcoming regulations, the European Union’s AI Act, and emerging developments in the United Kingdom, all of which contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI both aim to ensure that AI is developed and used safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, applies directly to businesses in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order functions more as a guideline while federal agencies develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, relying instead on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. approach encourages the adoption of new standards and international cooperation but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to divergent AI governance standards between the two regions.

There has also been some collaboration on an international scale. Recently, antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority agreed to monitor AI and its risks to competition. The agencies issued a joint statement in which all four antitrust enforcers pledged “to remain vigilant for potential competition issues” and to use the powers of their agencies to safeguard against the use of AI to undermine competition or enable unfair or deceptive practices.

The regulatory landscape for AI is evolving in real time across the globe as the technology develops at a record pace. As regulations strive to keep up with the technology, real challenges and risks exist for companies involved in the development or use of AI. It is therefore critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid what could be significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. The order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies, and calls for collaboration among federal agencies, private companies, academia, and international partners to advance AI capabilities and realize its myriad benefits. It emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. In addition, the order directs a range of sweeping actions: establishing new standards for AI safety and security, calling on Congress to pass bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI, promoting the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, and ensuring responsible government deployment of AI and modernization of the federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive state-level AI regulation with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, requiring developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The law takes effect on February 1, 2026, giving companies over a year to adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB 2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post on the developer’s website documentation of the datasets used to train the system or service.

SB 970, another bill introduced in January 2024, would require any person or entity that sells or provides access to AI technology designed to create synthetic images, video, or voice to give consumers a warning that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which would regulate AI models based on their complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a framework and represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems in the EU, and to support innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions on certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on small and medium-sized enterprises (SMEs), including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are banned outright. High-risk applications, including CV-scanning tools, face stringent regulations to ensure safety and accountability. Limited-risk applications are those where the use of AI may not be apparent to the user, and the AI Act imposes transparency obligations on them: for example, people should be informed when they are interacting with an AI system (such as a chatbot) rather than a human, so they can make an informed decision about whether to continue. The AI Act allows the free use of minimal-risk AI, such as AI-enabled video games or spam filters; the vast majority of AI systems currently used in the EU fall into this category.
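For organizations that keep an internal inventory of their AI systems, a minimal sketch of how that inventory might be tagged against the Act’s four tiers is shown below. The tier assignments, system names, and code structure are purely illustrative assumptions for compliance tracking, not legal classifications under the Act.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (e.g., social scoring)
    HIGH = "high"                   # stringent obligations (e.g., CV screening)
    LIMITED = "limited"             # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"             # free use (e.g., spam filters)

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical internal inventory tagged by risk tier for compliance tracking.
inventory = [
    AISystem("resume-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answers customer questions", RiskTier.LIMITED),
    AISystem("spam-filter", "filters inbound email", RiskTier.MINIMAL),
]

# Surface the systems that carry the heaviest obligations first.
for system in sorted(inventory, key=lambda s: list(RiskTier).index(s.tier)):
    print(f"{system.name}: {system.tier.value} risk - {system.purpose}")
```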

The adoption of the AI Act has not come without criticism from major European companies. In an open letter signed by 150 executives, signatories raised concerns over the heavy regulation of generative AI and foundation models, fearing that increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and it would be wise for companies to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies can adopt strategies that both ensure compliance and proactively mitigate risk. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conduct thorough risk assessments of AI systems to align with the EU’s classification scheme and the U.S.’s focus on safety and security, paying particular attention to systems categorized as high-risk under the EU’s AI Act. This proactive approach will not only help meet regulatory requirements but also protect the business from potential sanctions as the legal landscape evolves.
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions in which your company operates to adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: To meet ethical and regulatory expectations, transparency and accountability should be prioritized in AI development. This means ensuring AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionalities. There should also be accountability measures in place, such as regular audits and impact assessments.
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices help ensure compliance, build public trust, and enhance brand reputation.

AI-Generated Content and Trademarks

The rapid evolution of artificial intelligence has undeniably transformed the digital landscape, with AI-generated content becoming increasingly common. This shift has profound implications for brand owners, introducing both challenges and opportunities.

One of the most pressing concerns is trademark infringement. In a recent example, the Walt Disney Company, a company fiercely protective of its intellectual property, raised concerns about AI-generated content potentially infringing on its trademarks. Social media users were having fun using Microsoft’s Bing AI imaging tool, powered by DALL-E 3 technology, to create images of pets in a “Pixar” style. However, Disney’s concern was not the artwork itself, but the possibility of the AI inadvertently generating the iconic Disney-Pixar logo within the images, which could constitute trademark infringement. This incident highlights the potential for AI-generated content to unintentionally infringe upon established trademarks, requiring brand owners to stay vigilant in protecting their intellectual property in the digital age.

Dilution of trademarks is another critical issue. A recent lawsuit filed by Getty Images against Stability AI sheds light on this concern. Getty Images, a leading provider of stock photos, accused Stability AI of using millions of its copyrighted images to train its AI image generation software. According to Getty Images, this alleged use involved Stability AI incorporating Getty Images’ marks into low-quality, unappealing, or offensive images, diluting those marks in further violation of federal and state trademark laws. The lawsuit highlights the potential for AI, through the sheer volume of content it generates, to blur the lines between inspiration and infringement, weakening the association between a trademark and its source.

In addition, the ownership of copyrights in AI-generated marketing content can cause problems. While AI tools can create impressive content, questions about who owns the intellectual property rights persist. Recent disputes over AI-generated artwork and music have highlighted the challenges of determining ownership and copyright in this new digital frontier.

However, AI also presents opportunities for trademark owners. For example, AI can be employed to monitor online platforms for trademark infringements, providing an early warning system. Luxury brands have used AI to authenticate products and combat counterfeiting. For instance, Entrupy has developed a mobile device-based authentication system that uses AI and microscopy to analyze materials and detect subtle irregularities indicative of counterfeit products. Brands can integrate Entrupy’s technology into their retail stores or customer-facing apps.

Additionally, AI can be a powerful tool for brand building. By analyzing consumer data and preferences, AI can help create highly targeted marketing campaigns. For example, cosmetic brands have successfully leveraged AI to personalize product recommendations, enhancing customer engagement and loyalty.

The intersection of AI and trademarks is a dynamic and evolving landscape. As technology continues to advance, so too will the challenges and opportunities for trademark owners. Proactive measures, such as robust trademark portfolios, AI-powered monitoring tools, and clear internal guidelines, are essential for safeguarding brand integrity in this new era.

American Bar Association Issues Formal Opinion on Use of Generative AI Tools

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers are required to consider when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, client communication, raising only meritorious claims, candor toward the tribunal, supervisory responsibilities of others, and setting of fees.

Competence

The opinion reiterates previous ABA opinions stating that lawyers are required to have a reasonable understanding of the capabilities and limitations of the specific technologies they use, including remaining “vigilant” about the benefits and risks of technology such as GenAI tools. It specifically notes that attorneys must be aware of the risk of inaccurate output, or hallucinations, from GenAI tools and that independent verification is necessary when using them. According to the opinion, users must evaluate the tool being used, analyze its output, avoid relying solely on the tool’s conclusions, and never replace their own judgment with that of the tool.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access to, or disclosure of, information relating to the representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm, but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stresses that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. Client data input into the GenAI tool may also be used for self-learning or to teach an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, they are required to obtain informed client consent before using the tool. The lawyer must inform the client of the use of the GenAI tool and the risks of its use, and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient to meet this requirement.

Communication

With regard to lawyers’ duty to effectively communicate information that is in the best interest of their client, the opinion notes that—depending on the circumstances—it may be in the best interest of the client to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client, or the output of the GenAI tool will influence a significant decision in the representation of the client. This communication can be included in the engagement letter, though it may be appropriate to communicate directly with the client before including it in the engagement letter.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which such claims are presented. In the context of the use of GenAI tools, as stated above, there is a risk that without appropriate evaluation and supervision (including the use of independent professional judgment), the output of a GenAI tool can sometimes be erroneous or considered a “hallucination.” Therefore, to reiterate the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer can’t bill a client to learn how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools are permitted to be billed to the client, with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained by the use of GenAI tools, with the client’s consent, should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, an understanding of ABA Opinion 512 is important as GenAI tools become more ubiquitous. It is clear that there will be additional opinions on the use of GenAI tools from the ABA as well as from state bar associations, and that the topic will remain of interest in the context of adherence to ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.

AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

Following laws enacted in jurisdictions such as Colorado, New York City, and Tennessee, and the state’s own Artificial Intelligence Video Interview Act, on August 9, 2024, Illinois’ Governor signed House Bill (HB) 3773, also known as the “Limit Predictive Analytics Use” bill. The bill amends the Illinois Human Rights Act (Act) by adding certain uses of artificial intelligence (AI), including generative AI, to the long list of actions by covered employers that could constitute civil rights violations.

The amendments made by HB3773 take effect January 1, 2026, and add two new definitions to the law.

“Artificial intelligence” – which according to the amendments means:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of AI includes “generative AI,” which has its own definition:

an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following: (1) textual outputs, such as short answers, essays, poetry, or longer compositions or answers; (2) image outputs, such as fine art, photographs, conceptual art, diagrams, and other images; (3) multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and (4) other content that would be otherwise produced by human means.

The plethora of AI tools available for use in the workplace continues to grow unabated as HR professionals and managers vie to adopt effective and efficient solutions for finding the best candidates, assessing their performance, and otherwise improving decision-making concerning human capital. In addition to understanding whether an organization is covered by an AI regulation such as HB 3773, it is also important to determine whether the technology being deployed falls within the law’s scope. Assuming the tool or application is not being developed in-house, this analysis will require, among other things, working closely with the third-party vendor providing the tool or application to understand its capabilities and risks.

According to the amendments, covered employers can violate the Act in two ways. First, an employer’s use of AI with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment may constitute a violation if it has the effect of subjecting employees to discrimination on the basis of classes protected under the Act. The same may be true for employers that use zip codes as a proxy for protected classes under the Act.

Second, a covered employer that fails to provide notice to an employee that the employer is using AI for the purposes described above may be found to have violated the Act.

Unlike the Colorado or New York City laws, the amendments to the Act do not require an impact assessment or bias audit. They also do not provide any specifics concerning the notice requirement. However, the amendments require the Illinois Department of Human Rights (IDHR) to adopt regulations necessary for implementation and enforcement, including rules concerning the notice, such as the time period and means for providing it.

We are sure to see more regulation in this space. While it is expected that some common threads will exist among the various rules and regulations concerning AI and generative AI, organizations leveraging these technologies will need to be aware of the differences and assess what additional compliance steps may be needed.

Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, the statement’s release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition in general and competition within the AI ecosystem—and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors as well as competition within the AI ecosystem. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated April 2024 report on AI. In June 2024, French competition authorities released a report on Generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers describe the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and to benefit the lives of citizens around the world.

But with any technological inflection point, which may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a small number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration could stifle competition, disrupt innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or co-opt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement also acknowledges that other competition and consumer risks are associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and price fixing. Consumers may also be harmed by AI. Because the CMA, DOJ, and FTC have consumer protection authority, these agencies will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address specific enforcement tools and actions the enforcers may take, the statement’s release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • The EU published the final AI Act, which enters into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk and imposes obligations on its use, with potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (or twenty days from the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that will subject AI applications to four different levels of increasing regulation: (1) “unacceptable risk,” which are banned; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication starts the implementation timeline for the next three years and outlines when we should expect to see more guidance on how the law will be applied. The most critical dates for employers are:

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus non-high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect on August 2, 2027, a year later.

Next Steps

Adoption of the EU AI Act will set consistent standards across EU member states. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.

The Economic Benefits of AI in Civil Defense Litigation

The integration of artificial intelligence (AI) into various industries has revolutionized the way we approach complex problems, and the field of civil defense litigation is no exception. As lawyers and legal professionals navigate the complex and often cumbersome landscape of civil defense, AI can offer transformative assistance that not only enhances efficiency but also significantly reduces client costs. In this blog, we’ll explore the economic savings associated with employing AI in civil defense litigation.

Streamlining Document Review
One of the most labor-intensive and costly aspects of civil defense litigation is the review of vast amounts of discovery documents. Traditionally, lawyers and legal teams spend countless hours sifting through documents to identify and categorize relevant information, a process that is both time-consuming and costly. AI-powered tools, such as large language models (LLMs), can automate and expedite this process.

By using AI to assist in closed-system document review, law firms can drastically cut down on the number of billable hours required for this task. AI assistance can quickly and accurately identify relevant documents, flagging pertinent information and reducing the risk of material oversight. This not only speeds up the review process, allowing a legal team to concentrate on analysis rather than document digestion and chronology, but also significantly lowers the overall cost of litigation to the client.

By way of example, a case in which 50,000 medical treatment records and bills must be analyzed, put in chronological order, and reviewed for patient complaints, diagnoses, treatment, medical history, and prescription medicine use could take a legal team weeks to complete. With AI assistance, the preliminary groundwork, such as organizing documents, chronologizing complaints and treatments, and compiling prescription drug lists, can be completed in a matter of minutes, allowing the lawyer to spend her time on verification, analysis, and defense development and strategy rather than on information translation and time-consuming data organization.
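As a rough sketch of how that preliminary groundwork might be structured in software, the example below extracts a handful of fields from each record and sorts them into a draft chronology. The `llm_complete` stub, prompt wording, and field names are hypothetical placeholders rather than any particular vendor’s API, and every extracted value would still require attorney verification.

```python
import json
from dataclasses import dataclass

def llm_complete(prompt: str) -> str:
    # Toy stand-in for a firm-vetted, closed-system LLM so the sketch runs;
    # a real tool would send the prompt to that model and return its reply.
    return json.dumps({
        "date_of_service": "2020-03-05",
        "provider": "Dr. Smith",
        "complaints": ["left knee pain"],
        "diagnoses": ["knee strain"],
        "prescriptions": ["ibuprofen 800 mg"],
    })

EXTRACTION_PROMPT = """From the medical record below, return JSON with keys:
date_of_service, provider, complaints, diagnoses, prescriptions.
Use null for anything not stated. Record:
{record_text}"""

@dataclass
class RecordSummary:
    date_of_service: str | None
    provider: str | None
    complaints: list[str]
    diagnoses: list[str]
    prescriptions: list[str]

def summarize_record(record_text: str) -> RecordSummary:
    raw = llm_complete(EXTRACTION_PROMPT.format(record_text=record_text))
    data = json.loads(raw)  # a production tool would validate this output
    return RecordSummary(**data)

def build_chronology(records: list[str]) -> list[RecordSummary]:
    # Extract each record, then sort by date of service to form a draft
    # chronology that the lawyer verifies against the underlying documents.
    summaries = [summarize_record(r) for r in records]
    return sorted(summaries, key=lambda s: s.date_of_service or "")

print(build_chronology(["<record one text>", "<record two text>"]))
```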

Enhanced Legal Research
Legal research is another growing area where AI can yield substantial economic benefits. Traditional legal research methods involve lawyers poring over case law, statutes, and legal precedents to find the cases that best fit the facts and legal issues at hand. This process can be incredibly time-intensive, driving up costs for clients. Closed, AI-powered legal research platforms can rapidly analyze vast databases of verified legal precedent and information, providing attorneys with precise and relevant case law in a fraction of the time. Rather than conducting time-consuming, exhaustive searches for the right cases to analyze, a lawyer can now streamline the process with AI assistance that flags on-point cases for verification, review, analysis, and argument development.
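As a hedged illustration of how a closed research platform might flag on-point cases, the sketch below ranks entries from a firm-approved case database by similarity to the research question. The `embed` function and the two-case database are toy stand-ins invented for the example, and every flagged case still requires attorney review.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words embedding so the example runs end to end; a real
    # platform would use its own model over its verified case database.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

case_database = {
    "Smith v. Jones (2019)": "summary judgment standard for premises liability slip and fall",
    "Doe v. Acme Corp. (2021)": "spoliation sanctions for lost surveillance video evidence",
}

def flag_on_point_cases(question: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Rank cases by cosine similarity to the research question."""
    q = embed(question)
    scored = []
    for name, summary in case_database.items():
        c = embed(summary)
        score = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-9))
        scored.append((name, score))
    # Highest-similarity cases are flagged for the attorney to verify and analyze.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

print(flag_on_point_cases("what sanctions apply when surveillance video is lost"))
```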

The efficiency of AI-driven legal research can translate into significant cost savings for the client. Attorneys can spend more time on argument development and drafting rather than being bogged down in manual research. For clients, this means lower legal fees and faster resolution of cases, both of which contribute to overall economic savings.

Predictive Analytics and Case Strategy
AI’s evolving ability to analyze historical legal data and identify patterns is particularly valuable in the realm of predictive analytics. In civil defense litigation, AI can assist in predicting the likely outcomes of cases based on jurisdiction-specific verdicts and settlements, helping attorneys formulate more effective strategies. By sharpening the focus on probable outcomes, legal teams can make informed decisions about whether to settle a case or proceed to trial. Such predictive analytics allow clients to better manage their risk, thereby reducing the financial burden on defendants.
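A toy sketch of the idea, assuming the firm has structured data on past jurisdiction-specific outcomes, might look like the following. The feature names and numbers are invented for illustration; any real model would need far more data, validation, and attorney judgment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented illustrative features for past closed cases in one jurisdiction:
# [claimed damages in $000s, number of treating providers, months of treatment]
X_train = np.array([
    [50, 2, 3],
    [250, 6, 18],
    [120, 4, 9],
    [400, 8, 24],
    [80, 3, 6],
])
# 1 = plaintiff verdict or above-reserve settlement, 0 = defense-favorable outcome
y_train = np.array([0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Estimate the probability of an adverse outcome for a new matter, which can
# inform (not replace) the settle-versus-try discussion with the client.
new_case = np.array([[150, 5, 12]])
print(f"Estimated adverse-outcome probability: {model.predict_proba(new_case)[0, 1]:.2f}")
```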

Automating Routine Tasks
Many routine tasks in civil defense litigation, such as preparation of document and pleading chronologies, scheduling, and case management, can now be automated using AI. Such automation reduces the need for manual intervention, allowing legal professionals to focus on more complex and value-added case tasks. By automating such routine tasks, law firms can operate more efficiently, reducing overhead costs and improving their bottom line. Clients benefit from quicker turnaround times and lower legal fees, resulting in overall economic savings.

Conclusion
The economic savings for clients associated with using AI in civil defense litigation can be substantial. From streamlining document review and enhancing legal research to automating routine tasks and reducing discovery costs, AI offers a powerful tool for improving efficiency and lowering case costs. As the legal industry continues to embrace technological advancements, the adoption of AI in civil defense litigation is poised to become a standard practice, benefiting both law firms and their clients economically. The future of civil defense litigation is undoubtedly intertwined with AI, promising a more cost-effective and efficient approach to resolving legal disputes.

A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding Artificial Intelligence (AI) and the possibility of hallucinations in a closed system is necessary before a lawyer uses any such technology. AI has made significant strides in recent years, demonstrating remarkable capabilities in various fields, from natural language processing to large language models to generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical – a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in a closed system, is crucial for improving AI reliability in the practice of law.

What Are AI Hallucinations?
AI hallucinations are instances where AI systems generate information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompts, fabricated case details, false medical analyses, or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In legal practice, this can include environments or legal AI tools that rely on a selected universe of information, such as a case file database, saved case-specific medical records, discovery responses, deposition transcripts, and pleadings.

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open-facing AI that can access the internet, rely entirely on the data they were trained on. If that data is incomplete, biased, or not representative of the real world, the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios not well represented in its training data. Similarly, if an AI tool is used incorrectly through misused data prompts, a closed system can produce incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.
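A small, self-contained numerical illustration of overfitting (not drawn from any legal tool) is sketched below: a high-degree polynomial fit to a handful of noisy points scores almost perfectly on the data it has seen but noticeably worse on new points from the same range.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)

# A simple underlying relationship (y = 2x) observed with noise.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.25, size=x_train.size)
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test

# A degree-7 polynomial has enough freedom to memorize the noise in 8 points.
coeffs = P.polyfit(x_train, y_train, deg=7)

train_mse = np.mean((P.polyval(x_train, coeffs) - y_train) ** 2)
test_mse = np.mean((P.polyval(x_test, coeffs) - y_test) ** 2)

# Near-zero training error paired with larger test error is the overfitting signature.
print(f"train MSE: {train_mse:.4f}, test MSE: {test_mse:.4f}")
```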

Extrapolation Error
AI models must generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible-sounding but factually incorrect information based on incomplete context.
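Continuing the toy numerical theme (again, not taken from any legal product), the sketch below fits a model only on a narrow input range and then queries it well outside that range, showing how confident-looking outputs can be badly wrong beyond the data the system has actually seen.

```python
import numpy as np

# True relationship is quadratic, but the model only ever sees inputs in [0, 2].
x_train = np.linspace(0, 2, 20)
y_train = x_train ** 2

# A straight-line fit looks adequate on the narrow training range.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

for x in (1.0, 5.0, 10.0):
    prediction = slope * x + intercept
    truth = x ** 2
    # Inside the training range the error is small; far outside it, it grows badly.
    print(f"x={x:>4}: predicted {prediction:6.1f}, actual {truth:6.1f}")
```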

Implications of Hallucinations for Lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI-generated content without verification could lead to the dissemination of, or reliance upon, false information, which can grievously affect both the client and the lawyer. Lawyers have a duty to provide accurate and reliable advice, information, and court filings. Using AI tools that can produce hallucinations without proper checks could breach a lawyer’s ethical duty to her client, and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safeguard against hallucinations. A lawyer should always verify any AI-generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs, especially in critical areas such as legal research, document drafting, and case analysis, is an ethical requirement.

Notably, the lawyer’s choice of AI tool is critical. A well-vetted closed system allows the origin of an output to be traced and lets the lawyer maintain control over the source materials. For prompt-based data searches with multiple task prompts, a comprehensive understanding of how the prompts were designed to be used, and their proper use, is also essential to avoiding hallucinations in a closed system. Improper use of an AI tool, even in a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to utilize AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates can promote more effective use of AI tools and help safeguard against hallucinations.
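One way to picture the traceability point: a closed tool can be built so that no answer leaves the system without citations back to the specific source documents it drew from, which the lawyer then checks. The sketch below is a hypothetical illustration of that pattern, with invented document IDs and a hard-coded answer, not a description of any actual product.

```python
from dataclasses import dataclass

@dataclass
class SourcedAnswer:
    text: str
    source_ids: list[str]  # e.g., Bates numbers or exhibit IDs from the case file

# Hypothetical closed corpus keyed by document ID.
case_file = {
    "EX-014": "Deposition of Dr. Smith, p. 42: plaintiff reported knee pain in 2020.",
    "EX-102": "ER record dated 2020-03-05 noting left knee strain.",
}

def answer_with_citations(question: str) -> SourcedAnswer:
    # A real tool would retrieve and summarize; here we only show the contract:
    # no answer leaves the system without the IDs of the documents behind it.
    relevant = [doc_id for doc_id, text in case_file.items() if "knee" in text.lower()]
    return SourcedAnswer(
        text="Records indicate a left knee injury first documented in March 2020.",
        source_ids=relevant,
    )

answer = answer_with_citations("When was the knee injury first documented?")
# The lawyer verifies the statement against each cited source before relying on it.
print(answer.text, "| sources:", ", ".join(answer.source_ids))
```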

Takeaway
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management, and training, a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes, and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider how this technology has a twofold effect.

It’s a classic double-edged technology: in the right hands, it can help detect fraud and bolster compliance, but in the wrong hands, it can snuff out would-be whistleblowers and weaken accountability mechanisms. Employees should assume it is being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

Experts foresee two major downsides to the implementation of compliance algorithms: institutions using them to avoid culpability and to track whistleblowers. AI can uncover fraud but cannot guarantee that it is properly reported, and the same technology can be used against employees to monitor for and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.

AI compliance algorithms are promised to operate as follows:

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.
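As a hedged, self-contained illustration of the real-time anomaly flagging and pattern recognition described above (and not of any specific vendor’s system), a compliance team might start with something as simple as an isolation forest over a few transaction features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Illustrative transaction features: [amount in $000s, hour of day, transfers that day]
normal = np.column_stack([
    rng.normal(5, 2, 500),      # typical amounts
    rng.integers(9, 18, 500),   # business hours
    rng.poisson(2, 500),        # a couple of transfers per day
])
suspicious = np.array([[250, 3, 40]])  # large, off-hours, high-frequency

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# -1 flags an anomaly for a human compliance analyst to investigate;
# the model does not decide whether misconduct actually occurred.
print("flagged as anomaly:", model.predict(suspicious)[0] == -1)
```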

Yuktesh Kashyap, associate Vice President of data science at Sigmoid explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be ineffective tools that allow institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by calling it due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be a powerful tool for identifying fraud, there is still no mechanism for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle, consistent with society’s standards of due process. These algorithms should be used in conjunction with human judgment to determine compliance or the lack thereof, and due process is needed so that individuals can understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at the Brookings Institution’s Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

Nowadays, most office jobs (whether remote or in person) conduct operations fully online. Employees are required to use company computers and networks to do their jobs, and the data each employee generates passes through those devices and networks. As a result, employees’ privacy rights are questionable.

Because of this, whistleblowing will get much harder – organizations can employ the technology they initially implemented to catch fraud to instead catch whistleblowers. They can monitor employees via the capabilities built into everyday workplace technology: cameras, emails, keystroke detectors, online activity logs, downloads, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they might find problematic (such as a whistleblower) and all the information they produce, including the keywords that might indicate fraud. Creators of these algorithms promise that soon their products will be able to detect all sorts of patterns and feelings, such as emotion and sentiment.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way and makes it difficult to transmit confidential information. A whistleblower must, at some point, download information. Since you will be doing that on a company network, and that is easily detected these days.”

But what becomes of the whistleblower depends on whether the compliance officers operate in support of the company or of the public interest – they will have an extraordinary amount of information about both the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations be much more transparent about their use of these tools. They should inform their employees what kind of information they are using, how they are monitoring employees, and what kind of software they use. Are they using detection software of any sort? Are they monitoring keystrokes?

Employees should want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which may be a good argument for collecting the information, but it does not mean they should keep that information for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about the length of storage and what is done with this data once it has been used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. They ensure that detected fraud is not only identified but also reported and addressed, protecting taxpayer money and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation would hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect where potential whistleblowers would not act out of fear of detection.

Because these AI-driven compliance systems are already being employed in our institutions, experts believe they must have independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.


White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

Now the White House has published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve workers’ job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.