The Imperatives of AI Governance

If your enterprise doesn’t yet have a policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information in a manner consistent with societal and governmental values and norms, however, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update your existing data privacy and cyber risk policies to address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you, and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a build-it, get-it-running, and step-back process.

Accountability, verifiability, and compliance require hands on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned For Using Fake ChatGPT Cases In Legal Brief, Reuters (June 26, 2023).

To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and continuing to adhere to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should bear in mind when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could potentially be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system, to ensure the safety of the client’s privileged, proprietary and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an insecure, and potentially harmful, online network, but may also result in a violation of the duty to make reasonable efforts to prevent the disclosure of or unauthorized access to your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to ensure that he or she adheres to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements that were included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is not only incumbent on those preparing the briefs to ensure that any information included in a submission that was created with the assistance of an AI platform is accurate, but also that the lawyers responsible for oversight of a litigation team are diligent in understanding when and to what extent AI is being used to aid the work of that lawyer’s subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To appropriately abide by those rules, litigation team leaders should make it a point to discuss with their teams the appropriate use of AI at the outset of any matter, as well as to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise in the use of AI in litigation, many Courts across the country have responded by developing standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court, January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and development of work product.

Recent Healthcare-Related Artificial Intelligence Developments

AI is here to stay. The development and use of artificial intelligence (“AI”) is rapidly growing in the healthcare landscape with no signs of slowing down.

From a governmental perspective, many federal agencies are embracing the possibilities of AI. The Centers for Disease Control and Prevention is exploring the ability of AI to estimate sentinel events and combat disease outbreaks, and the National Institutes of Health is using AI for priority research areas. The Centers for Medicare and Medicaid Services is also assessing whether algorithms used by plans and providers to identify high-risk patients and manage costs can introduce bias and restrictions. Additionally, as of December 2023, the U.S. Food & Drug Administration had cleared more than 690 AI-enabled devices for market use.

From a clinical perspective, payers and providers are integrating AI into daily operations and patient care. Hospitals and payers are using AI tools to assist in billing. Physicians are using AI to take notes and a wide range of providers are grappling with which AI tools to use and how to deploy AI in the clinical setting. With the application of AI in clinical settings, the standard of patient care is evolving and no entity wants to be left behind.

From an industry perspective, the legal and business spheres are transforming as a result of new national and international regulations focused on establishing the safe and effective use of AI, as well as commercial responses to those regulations. Three such regulatory developments are top of mind: (i) President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; (ii) the U.S. Department of Health and Human Services’ (“HHS”) Final Rule on Health Data, Technology, and Interoperability; and (iii) the World Health Organization’s (“WHO”) Guidance for Large Multi-Modal Models of Generative AI. In response to the introduction of regulations and the general advancement of AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to a shared goal of responsible AI use.

U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”). Though long-awaited, the Executive Order was a major development and is one of the most ambitious attempts to regulate this burgeoning technology. The Executive Order has eight guiding principles and priorities, which include (i) Safety and Security; (ii) Innovation and Competition; (iii) Commitment to U.S. Workforce; (iv) Equity and Civil Rights; (v) Consumer Protection; (vi) Privacy; (vii) Government Use of AI; and (viii) Global Leadership.

Notably for healthcare stakeholders, the Executive Order directs the National Institute of Standards and Technology to establish guidelines and best practices for the development and use of AI and directs HHS to develop an AI Task Force that will engineer policies and frameworks for the responsible deployment of AI and AI-enabled tech in healthcare. In addition to those directives, the Executive Order highlights the duality of AI with the “promise” that it brings and the “peril” that it has the potential to cause. This duality is reflected in HHS directives to establish an AI safety program to prioritize the award of grants in support of AI development while ensuring standards of nondiscrimination are upheld.

U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule

In the wake of the Executive Order, the HHS Office of the National Coordinator finalized its rule to increase algorithm transparency, widely known as HT-1, on December 13, 2023. With respect to AI, the rule establishes transparency requirements for AI and other predictive algorithms that are part of certified health information technology. The rule also:

  • implements requirements to improve equity, innovation, and interoperability;
  • supports the access, exchange, and use of electronic health information;
  • addresses concerns around bias, data collection, and safety;
  • modifies the existing clinical decision support certification criteria and narrows the scope of impacted predictive decision support intervention; and
  • adopts requirements for certification of health IT through new Conditions and Maintenance of Certification requirements for developers.

Voluntary Commitments from Leading Healthcare Companies for Responsible AI Use

Immediately on the heels of the release of HT-1 came voluntary commitments from leading healthcare companies on responsible AI development and deployment. On December 14, 2023, the Biden Administration announced that 28 healthcare provider and payer organizations signed up to move toward the safe, secure, and trustworthy purchasing and use of AI technology. Specifically, the provider and payer organizations agreed to:

  • develop AI solutions to optimize healthcare delivery and payment;
  • work to ensure that the solutions are fair, appropriate, valid, effective, and safe (“F.A.V.E.S.”);
  • deploy trust mechanisms to inform users if content is largely AI-generated and not reviewed or edited by a human;
  • adhere to a risk management framework when utilizing AI; and
  • research, investigate, and develop AI swiftly but responsibly.

WHO Guidance for Large Multi-Modal Models of Generative AI

On January 18, 2024, the WHO released guidance for large multi-modal models (“LMM”) of generative AI, which can simultaneously process and understand multiple types of data modalities such as text, images, audio, and video. The 98-page guidance contains over 40 recommendations for tech developers, providers and governments on LMMs, and names five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses the liability issues that may arise out of the use of LMMs.

Closely related to the WHO guidance, the European Council’s agreement to move forward with a European Union AI Act (“Act”) was a significant milestone in AI regulation in the European Union. As previewed in December 2023, the Act will inform how AI is regulated across the European Union, and other nations will likely take note and follow suit.

Conclusion

There is no question that AI is here to stay. But how the healthcare industry will look when AI is more fully integrated still remains to be seen. The framework for regulating AI will continue to evolve as AI and the use of AI in healthcare settings changes. In the meantime, healthcare stakeholders considering or adopting AI solutions should stay abreast of developments in AI to ensure compliance with applicable laws and regulations.

Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. It is in pursuance of this responsibility that the DOC has convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will engage in the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to this email address.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including guidelines on red-team testing for AI and a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.

WHO Publishes Guidance for Ethics and Governance of AI for Healthcare Sector

The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of [LMMs] for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focuses on one type of generative AI, large multi-modal models (LMMs), “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the healthcare sector.

The report proposes solutions to address the risks of using LMMs in health care during their development, provision, and deployment, as well as for the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely and provides concrete steps and solutions for tackling the risks of using LMMs.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations and more. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework which guides readers on the operational processes to be followed by coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning based program NeuFuzz is considered a highly favorable platform for vulnerability searches in comparison to standard machine-learning AI, demonstrating the rapidly evolving nature of AI itself and of the products offered.
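
As a minimal, purely illustrative sketch of that behavioral approach (a hypothetical example only, with invented feature names and simulated data, and not a description of any particular vendor’s product), an unsupervised model can be trained on “normal” activity and asked to flag departures from it:

    # Illustrative only: unsupervised anomaly detection over simple activity features.
    # The feature choices and numbers are assumptions made for this sketch.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Simulated "normal" telemetry per session: [MB transferred, failed logins, new processes]
    normal_activity = rng.normal(loc=[50, 1, 5], scale=[10, 1, 2], size=(1000, 3))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

    # Score two new observations: one routine session, one unusual burst of transfers and logins
    new_sessions = np.array([[55.0, 0.0, 6.0], [400.0, 30.0, 40.0]])
    print(detector.predict(new_sessions))  # 1 = consistent with normal behavior, -1 = flagged as anomalous

The point of the sketch is simply that the model learns a baseline of normal behavior rather than matching known malicious signatures, which is why this style of tool can surface previously unseen threats.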

A key recommendation is that AI systems should be used as an additional element to existing ICT, security systems and practices. Businesses must be aware of the continuous responsibility to have effective risk management in place with AI assisting alongside for further mitigation. The reports do not set new standards or legislative perimeters but instead emphasize the need for targeted guidelines, best practices and foundations which help cybersecurity and in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist, but as ENISA highlights, they are usually designed to ‘address specific types of attack’, which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no-means a perfect solution. Machine-learning AI will act on what it has been told under its programming, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with regulations (where applicable) such as the EU GDPR, Data Protection Act 2018, the anticipated Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resources. Further development of penetration testing tools that specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models of managing cybersecurity. While AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool that could not only improve your business’s approach to cybersecurity and privacy but, through analysis of data, could also help assess behaviors and predict trends. The use of AI should be exercised with caution, but if done correctly it could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023, and the popularity of this technology will continue to grow. The importance of the art of crafting nuanced and effective prompts will heighten, and there will be greater adoption across a wider variety of industries. Advancements in algorithms and more user-friendly platforms should increase accessibility, which can lead to greater focus on minimizing algorithmic biases and the establishment of guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on steroids.” The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect to see further refining and personalizing of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals can help make EVs more competitive with traditional vehicles. AI may provide new opportunities in optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely encourage effective human-robot collaboration, and even non-technical users will find it easy to employ robotics to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people. It is no longer limited to monotonous activities and repetitive tasks.
  5. Unified Defense in Battle Against Cyber-Attacks. Digital threats are expected to only increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers rages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. Given the risks to national security and economic growth, there should be increased collaboration between industries and governments to establish standardized cybersecurity frameworks to protect data and privacy.

5 Trends to Watch: 2024 Artificial Intelligence

  1. Banner Year for Artificial Intelligence (AI) in Health – With AI-designed drugs entering clinical trials, growing adoption of generative AI tools in medical practices, increasing FDA approvals for AI-enabled devices, and new FDA guidance on AI usage, 2023 was a banner year for advancements in AI for medtech, healthtech, and techbio—even with the industry-wide layoffs that also hit digital and AI teams. The coming year should see continued innovation and investment in AI in areas from drug design to new devices to clinical decision support to documentation and revenue cycle management (RCM) to surgical augmented reality (AR) and more, together with the arrival of more new U.S. government guidance on and best practices for use of this fast-evolving technology.
  2. Congress and AI Regulation – Congress continues to grapple with the proper regulatory structure for AI. At a minimum, expect Congress in 2024 to continue funding AI research and the development of standards required under the Biden Administration’s October 2023 Executive Order. Congress will also debate legislation relating to the use of AI in elections, intelligence operations, military weapons systems, surveillance and reconnaissance, logistics, cybersecurity, health care, and education.
  3. New State and City Laws Governing AI’s Use in HR Decisions – Look for additional state and city laws to be enacted governing an employer’s use of AI in hiring and performance software, similar to New York City’s Local Law 144, known as the Automated Employment Decisions Tools law. More than 200 AI-related laws have been introduced in state legislatures across the country, as states move forward with their own regulation while debate over federal law continues. GT expects 2024 to bring continued guidance from the EEOC and other federal agencies, mandating notice to employees regarding the use of AI in HR-function software as well as restricting its use absent human oversight.
  4. Data Privacy Rules Collide with Use of AI – Application of existing laws to AI, both within the United States and internationally, will be a key issue as companies apply transparency, consent, automated decision making, and risk assessment requirements in existing privacy laws to AI personal information processing. U.S. states will continue to propose new privacy legislation in 2024, with new implementing regulations for previously passed laws also expected. Additionally, there’s a growing trend towards the adoption of “privacy by design” principles in AI development, ensuring privacy considerations are integrated into algorithms and platforms from the ground up. These evolving legal landscapes are not only shaping AI development but also compelling organizations to reevaluate their data strategies, balancing innovation with the imperative to protect individual privacy rights, all while trying to “future proof” AI personal information processing from privacy regulatory changes.
  5. Continued Rise in AI-Related Copyright & Patent Filings, Litigation – Expect the Patent and Copyright Offices to develop and publish guidance on issues at the intersection of AI and IP, including patent eligibility and inventorship for AI-related innovations, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training, as mandated in the Biden Administration Executive Order. IP holders are likely to become more sophisticated in how they integrate AI into their innovation and authorship workflows. And expect to see a surge in litigation around AI-generated IP, particularly given the ongoing denial of IP protection for AI-generated content and the lack of precedent in this space in general.

Algorithmic Pricing Agents and Price-Fixing Facilitators: Antitrust Law’s Latest Conundrum

Are machines doing the collaborating that competitors may not?

It is an application of artificial intelligence (“AI”) that many businesses, agencies, legislators, lawyers, and antitrust law enforcers around the world are only beginning to confront. It is also among the top concerns of in-house counsel across industries. Competitors are increasingly setting prices through the use of communal, AI-enhanced algorithms that analyze data that are private, public, or a mix of both.

Allegations in private and public litigation describe “algorithmic price fixing” in which the antitrust violation occurs when competitors feed and access the same database platform and use the same analytical tools. Then, as some allege, the violations continue when competitors agree to the prices produced by the algorithms. Right now, renters and prosecutors are teeing off on the poster child for algorithmic pricing, RealPage Inc., and the many landlords and property managers who use it.

PRIVATE AND PUBLIC LITIGATION

A Nov. 1, 2023 complaint filed by the Washington, DC, Attorney General’s office described RealPage’s offerings this way: “[A] variety of technology-based services to real estate owners and property managers including revenue management products that employ statistical models that use data—including non-public, competitively sensitive data—to estimate supply and demand for multifamily housing that is specific to particular geographic areas and unit types, and then generate a ‘price’ to charge for renting those units that maximizes the landlord’s revenue.”
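
As a stylized illustration only (not a description of RealPage’s actual models), the core of such a revenue-management calculation reduces to estimating how demand falls as the asking rent rises and then selecting the rent that maximizes expected revenue. The demand curve and numbers below are assumptions invented for this sketch:

    # Hypothetical demand curve: expected monthly lease-ups fall as the asking rent rises.
    def expected_lease_ups(rent):
        return max(0.0, 100 - 0.025 * rent)

    # Pick the rent on a grid of candidates that maximizes expected revenue.
    candidate_rents = range(1500, 3001, 25)
    best_rent = max(candidate_rents, key=lambda r: r * expected_lease_ups(r))
    print(best_rent, best_rent * expected_lease_ups(best_rent))

A real system would estimate the demand curve from data; the complaint’s core allegation is that the data feeding that estimate includes competitors’ non-public, competitively sensitive information, which is what raises the antitrust concern.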

The complaint alleges that more than 30% of apartments in multifamily buildings and 60% of units in large multifamily buildings nationwide are priced using the RealPage software. In the Washington-Arlington-Alexandria Metropolitan Area that number leaps to more than 90% of units in large buildings. The complaint alleges that landlords have agreed to set their rates using RealPage.

Private actions against RealPage have also been filed in federal courts across the country and have been centralized in multi-district litigation in the Middle District of Tennessee (In re: RealPage, Inc., Rental Software Antitrust Litigation [NO. II], Case No. 3:23-md-3071, MDL No. 3071). The Antitrust Division of the Department of Justice filed a Statement of Interest and a Memorandum in Support in the case urging the court to deny the defendants’ motion to dismiss.

Even before the MDL, RealPage had attracted the Antitrust Division’s attention when the company acquired its largest competitor, Lease Rent Options, for $300 million, along with Axiometrics for $75 million and On-Site Manager, Inc. for $250 million.

The Antitrust Division has been pursuing the use of algorithms in other industries, including airlines and online retailers. The DOJ and FTC are both studying the issue and reaching out to experts to learn more.

JOURNALISTS AND SENATORS

Additionally, three senators urged DOJ to investigate RealPage after reporters at ProPublica wrote an investigative report in October 2022. The journalists claim that RealPage’s price-setting software “uses nearby competitors’ nonpublic rent data to feed an algorithm that suggests what landlords should charge for available apartments each day.” ProPublica speculated that the algorithm is enabling landlords to coordinate prices and in the process push rents above competitive levels in violation of the antitrust laws.

Senators Amy Klobuchar (D-MN), Dick Durbin (D-IL) and Cory Booker (D-NJ) wrote to the DOJ, concerned that RealPage enables “a cartel to artificially inflate rental rates in multifamily residential buildings.”

Sen. Sherrod Brown (D-OH) also wrote to the Federal Trade Commission with concerns “about collusion in the rental market,” urging the FTC to “review whether rent setting algorithms that analyze rent prices through the use of competitors’ private data … violate antitrust laws.” The Ohio senator specifically mentioned RealPage’s YieldStar and AI Revenue Management programs.

THE EUROPEANS

The European Union has adopted the Artificial Intelligence Act, which includes provisions on algorithmic pricing, requiring that algorithmic pricing systems be transparent, explainable, and non-discriminatory with regard to consumers. Companies that use algorithmic pricing systems will be required to implement compliance procedures, including audits, data governance, and human oversight.

THE LEGAL CONUNDRUM

An essential element of any claimed case of price-fixing under the U.S. antitrust laws is the element of agreement: a plaintiff alleging price-fixing must prove the existence of an agreement between two or more competitors who should be setting their prices independently but aren’t. Consumer harm from collusion occurs when competitors set prices to achieve their maximum joint profit instead of setting prices to maximize individual profits. To condemn algorithmic pricing as collusion, therefore, requires proof of agreement.

It may be difficult for the RealPage plaintiffs to prove that RealPage’s users agreed among themselves to adhere to any particular price or pricing formula, but not impossible. End users are likely to argue that RealPage’s pricing recommendations are merely aggregate market signals that RealPage is collecting and disseminating. The use of the same information service, their argument will go, does not prove the existence of an agreement for purposes of Section 1 of the Sherman Act.

The parties and courts embroiled in the RealPage litigation are constrained to live under the law as it presently exists, so the solution proposed by Michal Gal, Professor and Director of the Forum on Law and Markets at the University of Haifa, is out of reach. In her 2018 paper, “Algorithms as Illegal Agreements,” Professor Gal confronts the agreement problem when algorithms set prices and concludes that it is time to “rethink our laws and focus on reducing harms to social welfare rather than on what constitutes an agreement.” Academics have been critical of the agreement element of Section 1 for years, but it is unlikely to change anytime soon, even with the added inconvenience it poses where competitors rely on a common vendor of machine-generated pricing recommendations.

Nonetheless, there is some evidence that autonomous machines, just like humans, can learn that collusion allows sellers to charge monopoly prices. In their December 2019 paper, “Artificial Intelligence, Algorithmic Pricing and Collusion,” Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolo, and Sergio Pastorello at the Department of Economics at the University of Bologna showed with computer simulations that machines autonomously analyzing prices can develop collusive strategies “from scratch, engaging in active experimentation and adapting to changing environments.” The authors say indications from their models “suggest that algorithmic collusion is more than a remote theoretical possibility.” They find that “relatively simple [machine learning] pricing algorithms systematically learn to play collusive strategies.” The authors claim to be the first to “clearly document the emergence of collusive strategies among autonomous pricing agents.”
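
Only as a minimal, hypothetical sketch of the kind of experiment the Bologna economists describe (not their actual model; the price grid, demand function, and parameters are invented for illustration), the snippet below lets two independent Q-learning agents repeatedly choose prices against one another with no instruction to coordinate:

    # Illustrative simulation: two Q-learning sellers pricing in a stylized repeated duopoly.
    import random

    PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]      # discrete price grid (hypothetical)
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05     # learning rate, discount factor, exploration rate

    def profit(p_own, p_rival):
        # Stylized demand: the cheaper seller wins a larger share; total demand shrinks as prices rise.
        share = 0.5 if p_own == p_rival else (0.7 if p_own < p_rival else 0.3)
        total = max(0.0, 4.0 - (p_own + p_rival) / 2)
        return p_own * share * total

    # Each agent's state is the rival's last price; one Q-table per agent.
    Q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
    last = [random.choice(PRICES), random.choice(PRICES)]

    for _ in range(200000):
        acts = []
        for i in range(2):
            state = last[1 - i]
            if random.random() < EPS:                    # occasionally explore a random price
                acts.append(random.choice(PRICES))
            else:                                        # otherwise exploit the current estimate
                acts.append(max(Q[i][state], key=Q[i][state].get))
        for i in range(2):
            state, next_state = last[1 - i], acts[1 - i]
            reward = profit(acts[i], acts[1 - i])
            best_next = max(Q[i][next_state].values())
            Q[i][state][acts[i]] += ALPHA * (reward + GAMMA * best_next - Q[i][state][acts[i]])
        last = acts

    # Prices at the end of training; Calvano et al. report that agents of this kind
    # often settle above the competitive level in their (far richer) simulations.
    print("Prices the agents settle into:", last)

No one programs the agents to collude; each simply maximizes its own reward, which is why the authors’ finding that such agents can nonetheless learn supra-competitive pricing is so significant for the agreement question discussed below.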

THE AGREEMENT ELEMENT IN THE MACHINE PRICING CASE

For three main reasons, the element of agreement need not be an obstacle to successfully prosecuting a price-fixing claim against competitors that use a common or similar vendor of algorithmic pricing data and software.

First, there is significant precedent for inferring the existence of an agreement among parties that knowingly participate in a collusive arrangement even if they do not directly interact, sometimes imprecisely referred to as a “rimless wheel hub-and-spoke” conspiracy. For example, in Toys “R” Us, Inc. v. F.T.C., 221 F.3d 928 (7th Cir. 2000), the court inferred the necessary concerted action from a series of individual agreements between toy manufacturers and Toys “R” Us in which each manufacturer promised that the toys it sold to Toys “R” Us and other toy stores would not be sold to big box stores in the same packaging. The FTC found that each of the manufacturers entered into the restraint on the condition that the others also did so. The court found that Toys “R” Us had engineered a horizontal boycott against a competitor in violation of Section 1, despite the absence of evidence of any “privity” between the boycotting manufacturers.

The Toys “R” Us case relied on the Supreme Court’s decision in Interstate Circuit v. United States, 306 U.S. 208 (1939), in which movie theater chains sent an identical letter to eight movie studios asking them to restrict secondary runs of certain films. The letter disclosed that each of the eight were receiving the same letter. The Court held that a direct agreement was not a prerequisite for an unlawful conspiracy. “It was enough that, knowing that concerted action was contemplated and invited, the distributors gave their adherence to the scheme and participated in it.”

The analogous issue in the algorithmic pricing scenario is whether the vendor’s end users know that their competitors are also end users. If so, the inquiry can consider the agreement element satisfied if the algorithm does, in fact, jointly maximize the end users’ profits.

The second factor overcoming the agreement element is related to the first. Whether software that recommends prices has interacted with the prices set by competitors to achieve joint profit maximization—that is, whether the machines have learned to collude without human intervention—is an empirical question. The same techniques used to uncover machine-learned collusion by simulation can be used to determine the extent of interdependence in historical price setting. If statistical evidence of collusive pricing is available, it is enough that the end users knowingly accepted the offer to set their prices guided by the algorithm. The economics underlying the agreement element lies in the prohibition of joint rather than individual profit maximization, so direct evidence that market participants are jointly maximizing profits should obviate the need for further evidence of agreement.

A third reason the agreement element need not stymie a Section 1 action against defendants engaged in algorithmic pricing is based on the Supreme Court’s decision in American Needle v. NFL, 560 U.S. 183 (2010). In that case the Court made clear that arrangements that remove independent centers of decision-making from the market run afoul of Section 1. If the net effect of the algorithm is to displace individual decision-making with decisions outsourced to a centralized pricing agent, the mechanism for doing so should be immaterial.

The rimless wheel of the so-called hub-and-spoke conspiracy is an inadequate analogy because the wheel in these cases does have a rim, i.e., a connection between the conspirators. In the scenarios above in which the courts have found Section 1 liability, i) each of the participants knew that its rivals were also entering into the same or similar arrangements, ii) the participants devolved pricing authority away from themselves down to an algorithmic pricing agent, and iii) historical prices could be shown statistically to have exceeded the competitive level in a way consistent with collusive pricing. These elements connect the participants in the scheme, supplying the “rim” to the spokes of the wheel. If the plaintiffs in the RealPage litigation can establish these elements, they will have met their burden of establishing the requisite element of agreement in their Section 1 claim.

What Employers Need to Know about the White House’s Executive Order on AI

President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on the rights and safety of people and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance and enforcement actions that apply to their workers.

Executive Order Requirements Impacting Employers

Specifically, the executive order requires the Department of Justice and federal civil rights offices to coordinate on ‘best practices’ for investigating and prosecuting civil rights violations related to AI. The ‘best practices’ will address: job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and ‘best practices’ are focused on benefitting workers and “preventing employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.”

The executive order also calls for a report on AI’s potential labor-market impacts and for a study identifying options for strengthening federal support for workers facing labor disruptions, including from AI. Specifically, the president has directed the Chairman of the Council of Economic Advisers to “prepare and submit a report to the President on the labor-market effects of AI.” In addition, the Secretary of Labor is required to submit “a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.” This report will include principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Employers should expect more direction once this report is completed in April 2024.

Increasing International Employment?

Developing and using generative AI inherently requires skilled workers, which President Biden recognizes. One of the goals of his executive order is to “[u]se existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.” While work visas have been historically difficult for employers to navigate, this executive order may make it easier for US employers to access skilled workers from overseas.

Looking Ahead

In light of the focus of this executive order, employers using AI for recruiting or for decisions about applicants (and even current employees) must be aware of the consequences of not putting a human check on the potential for bias. Working closely with employment lawyers at Sheppard Mullin and having multiple checks and balances on recruiting practices are essential when using generative AI.

While this executive order is quite limited in scope, it is only a first step. As these actions are implemented in the coming months, be sure to check back for updates.
