WHO Publishes Guidance for Ethics and Governance of AI for Healthcare Sector

The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focused on one type of generative AI, large multi-modal models (LMMs), “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the healthcare sector.

The report proposes solutions to address the risks of using LMMs in health care across their development, provision, and deployment, as well as the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely and provides concrete steps and solutions for tackling the risks of using LMMs.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations and more. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based applications’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework which guides readers on the operational processes to be followed by coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz is considered a highly favorable platform for vulnerability searches compared with standard machine-learning approaches, demonstrating the rapidly evolving nature of AI itself and the products offered.
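
As an illustration of the anomaly-detection approach described above (this sketch is ours, not ENISA’s; the feature set, data, and contamination threshold are purely illustrative assumptions), an unsupervised model can be trained on behavioral features and asked to flag outliers:

    # Minimal illustrative sketch: unsupervised anomaly detection over synthetic
    # "behavioral" features (bytes sent, connection count, failed logins).
    # All data and thresholds here are assumptions, not recommended settings.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Baseline traffic: mostly normal behavior.
    normal = rng.normal(loc=[500, 20, 1], scale=[50, 5, 1], size=(1000, 3))

    # A handful of suspicious observations (large transfers, many failed logins).
    suspicious = rng.normal(loc=[5000, 200, 30], scale=[500, 20, 5], size=(10, 3))

    traffic = np.vstack([normal, suspicious])

    # Fit an isolation forest and flag outliers (-1 = anomaly, 1 = normal).
    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(traffic)

    print("Flagged anomalies:", int((labels == -1).sum()), "of", len(traffic))

Because the model learns what “normal” looks like rather than matching known signatures, the same approach can surface previously unseen behavior, which is the advantage the ENISA reports attribute to AI-assisted detection.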

A key recommendation is that AI systems should be used as an additional element to existing ICT security systems and practices. Businesses must be aware of their continuing responsibility to have effective risk management in place, with AI assisting alongside for further mitigation. The reports do not set new standards or legislative parameters but instead emphasize the need for targeted guidelines, best practices and foundations which support cybersecurity and, in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for the prevention, detection and mitigation of threats such as spam, intrusions and malware. Traditional models do exist, but as ENISA highlights they are usually designed to target or address ‘specific types of attack’ which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need to have a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no means a perfect solution. A machine-learning system will act on what it has been taught through its programming and training data, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with regulations (where applicable) such as the EU GDPR, the Data Protection Act 2018, the anticipated EU Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resources. Further development of penetration-testing tools which specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as a wholesale replacement for existing or traditional models of managing cybersecurity. While AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool which could not only improve your business’s approach to cybersecurity and privacy but, through analysis of data, could also help to assess behaviors and predict trends. The use of AI should be approached with caution, but done correctly it could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023, and the popularity of this technology will continue to grow. The art of crafting nuanced and effective prompts will grow in importance, and there will be greater adoption across a wider variety of industries. There should be advancements in algorithms, increasing accessibility through more user-friendly platforms. These can lead to an increased focus on minimizing algorithmic biases and the establishment of guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on steroids” – The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect to see further refining and personalizing of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals can better help make EVs more competitive with traditional vehicles. AI may provide new opportunities in optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely encourage effective human-robot collaboration, and even non-technical users will find it easy to employ robotics to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people. It is no longer limited to monotonous activities and repetitive tasks.
  5. Unified Defense in Battle Against Cyber-Attacks. Digital threats are expected to only increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers rages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. Given the risks to national security and economic growth, there should be increased collaboration between industries and governments to establish standardized cybersecurity frameworks to protect data and privacy.

5 Trends to Watch: 2024 Artificial Intelligence

  1. Banner Year for Artificial Intelligence (AI) in Health – With AI-designed drugs entering clinical trials, growing adoption of generative AI tools in medical practices, increasing FDA approvals for AI-enabled devices, and new FDA guidance on AI usage, 2023 was a banner year for advancements in AI for medtech, healthtech, and techbio—even with the industry-wide layoffs that also hit digital and AI teams. The coming year should see continued innovation and investment in AI in areas from drug design to new devices to clinical decision support to documentation and revenue cycle management (RCM) to surgical augmented reality (AR) and more, together with the arrival of more new U.S. government guidance on and best practices for use of this fast-evolving technology.
  2. Congress and AI Regulation – Congress continues to grapple with the proper regulatory structure for AI. At a minimum, expect Congress in 2024 to continue funding AI research and the development of standards required under the Biden Administration’s October 2023 Executive Order. Congress will also debate legislation relating to the use of AI in elections, intelligence operations, military weapons systems, surveillance and reconnaissance, logistics, cybersecurity, health care, and education.
  3. New State and City Laws Governing AI’s Use in HR Decisions – Look for additional state and city laws to be enacted governing an employer’s use of AI in hiring and performance software, similar to New York City’s Local Law 144, known as the Automated Employment Decisions Tools law. More than 200 AI-related laws have been introduced in state legislatures across the country, as states move forward with their own regulation while debate over federal law continues. GT expects 2024 to bring continued guidance from the EEOC and other federal agencies, mandating notice to employees regarding the use of AI in HR-function software as well as restricting its use absent human oversight.
  4. Data Privacy Rules Collide with Use of AI – Application of existing laws to AI, both within the United States and internationally, will be a key issue as companies apply transparency, consent, automated decision making, and risk assessment requirements in existing privacy laws to AI personal information processing. U.S. states will continue to propose new privacy legislation in 2024, with new implementing regulations for previously passed laws also expected. Additionally, there’s a growing trend towards the adoption of “privacy by design” principles in AI development, ensuring privacy considerations are integrated into algorithms and platforms from the ground up. These evolving legal landscapes are not only shaping AI development but also compelling organizations to reevaluate their data strategies, balancing innovation with the imperative to protect individual privacy rights, all while trying to “future proof” AI personal information processing from privacy regulatory changes.
  5. Continued Rise in AI-Related Copyright & Patent Filings, Litigation – Expect the Patent and Copyright Offices to develop and publish guidance on issues at the intersection of AI and IP, including patent eligibility and inventorship for AI-related innovations, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training, as mandated in the Biden Administration Executive Order. IP holders are likely to become more sophisticated in how they integrate AI into their innovation and authorship workflows. And expect to see a surge in litigation around AI-generated IP, particularly given the ongoing denial of IP protection for AI-generated content and the lack of precedent in this space in general.

Algorithmic Pricing Agents and Price-Fixing Facilitators: Antitrust Law’s Latest Conundrum

Are machines doing the collaborating that competitors may not?

It is an application of artificial intelligence (“AI”) that many businesses, agencies, legislators, lawyers, and antitrust law enforcers around the world are only beginning to confront. It is also among the top concerns of in-house counsel across industries. Competitors are increasingly setting prices through the use of communal, AI-enhanced algorithms that analyze data that are private, public, or a mix of both.

Allegations in private and public litigation describe “algorithmic price fixing” in which the antitrust violation occurs when competitors feed and access the same database platform and use the same analytical tools. Then, as some allege, the violations continue when competitors agree to the prices produced by the algorithms. Right now, renters and prosecutors are teeing off on the poster child for algorithmic pricing, RealPage Inc., and the many landlords and property managers who use it.

PRIVATE AND PUBLIC LITIGATION

A Nov. 1, 2023 complaint filed by the Washington, DC, Attorney General’s office described RealPage’s offerings this way: “[A] variety of technology-based services to real estate owners and property managers including revenue management products that employ statistical models that use data—including non-public, competitively sensitive data—to estimate supply and demand for multifamily housing that is specific to particular geographic areas and unit types, and then generate a ‘price’ to charge for renting those units that maximizes the landlord’s revenue.”

The complaint alleges that more than 30% of apartments in multifamily buildings and 60% of units in large multifamily buildings nationwide are priced using the RealPage software. In the Washington-Arlington-Alexandria Metropolitan Area that number leaps to more than 90% of units in large buildings. The complaint alleges that landlords have agreed to set their rates using RealPage.

Private actions against RealPage have also been filed in federal courts across the country and have been centralized in multi-district litigation in the Middle District of Tennessee (In re: RealPage, Inc., Rental Software Antitrust Litigation [NO. II], Case No. 3:23-md-3071, MDL No. 3071). The Antitrust Division of the Department of Justice filed a Statement of Interest and a Memorandum in Support in the case urging the court to deny the defendants’ motion to dismiss.

Even before the MDL, RealPage had attracted the Antitrust Division’s attention through its acquisitions, including of its largest competitor, Lease Rent Options, for $300 million, as well as Axiometrics for $75 million and On-Site Manager, Inc. for $250 million.

The Antitrust Division has been pursuing the use of algorithms in other industries, including airlines and online retailers. The DOJ and FTC are both studying the issue and reaching out to experts to learn more.

JOURNALISTS AND SENATORS

Additionally, three senators urged DOJ  to investigate RealPage after reporters at ProPublica wrote an investigative report in October 2022. The journalists claim that RealPage’s price-setting software “uses nearby competitors’ nonpublic rent data to feed an algorithm that suggests what landlords should charge for available apartments each day.” ProPublica speculated that the algorithm is enabling landlords to coordinate prices and in the process push rents above competitive levels in violation of the antitrust laws.

Senators Amy Klobuchar (D-MN), Dick Durbin (D-IL) and Cory Booker (D-NJ) wrote to the DOJ, concerned that RealPage enables “a cartel to artificially inflate rental rates in multifamily residential buildings.”

Sen. Sherrod Brown (D-OH) also wrote to the Federal Trade Commission with concerns “about collusion in the rental market,” urging the FTC to “review whether rent setting algorithms that analyze rent prices through the use of competitors’ private data … violate antitrust laws.” The Ohio senator specifically mentioned RealPage’s YieldStar and AI Revenue Management programs.

THE EUROPEANS

The European Union has enacted the Artificial Intelligence Act, which includes provisions on algorithmic pricing, requiring that algorithmic pricing systems be transparent, explainable, and non-discriminatory with regard to consumers. Companies that use algorithmic pricing systems will be required to implement compliance procedures, including audits, data governance, and human oversight.

THE LEGAL CONUNDRUM

An essential element of any claimed case of price-fixing under the U.S. antitrust laws is the element of agreement: a plaintiff alleging price-fixing must prove the existence of an agreement between two or more competitors who should be setting their prices independently but aren’t. Consumer harm from collusion occurs when competitors set prices to achieve their maximum joint profit instead of setting prices to maximize individual profits. To condemn algorithmic pricing as collusion, therefore, requires proof of agreement.

It may be difficult, but not impossible, for the RealPage plaintiffs to prove that RealPage’s users agreed among themselves to adhere to any particular price or pricing formula. End users are likely to argue that RealPage’s pricing recommendations are merely aggregate market signals that RealPage is collecting and disseminating. The use of the same information service, their argument will go, does not prove the existence of an agreement for purposes of Section 1 of the Sherman Act.

The parties and courts embroiled in the RealPage litigation are constrained to live under the law as it presently exists, so the solution proposed by Michal Gal, Professor and Director of the Forum on Law and Markets at the University of Haifa, is out of reach. In her 2018 paper, “Algorithms as Illegal Agreements,” Professor Gal confronts the agreement problem when algorithms set prices and concludes that it is time to “rethink our laws and focus on reducing harms to social welfare rather than on what constitutes an agreement.” Academics have been critical of the agreement element of Section 1 for years, but it is unlikely to change anytime soon, even with the added inconvenience it poses where competitors rely on a common vendor of machine-generated pricing recommendations.

Nonetheless, there is some evidence that autonomous machines, just like humans, can learn that collusion allows sellers to charge monopoly prices. In their December 2019 paper, “Artificial Intelligence, Algorithmic Pricing and Collusion,” Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolo, and Sergio Pastorello at the Department of Economics at the University of Bologna showed with computer simulations that machines autonomously analyzing prices can develop collusive strategies “from scratch, engaging in active experimentation and adapting to changing environments.” The authors say indications from their models “suggest that algorithmic collusion is more than a remote theoretical possibility.” They find that “relatively simple [machine learning] pricing algorithms systematically learn to play collusive strategies.” The authors claim to be the first to “clearly document the emergence of collusive strategies among autonomous pricing agents.”
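
As a rough illustration of the kind of experiment the Bologna authors describe (this is not their code; the demand curve, price grid, and learning parameters below are invented for illustration, and whether supra-competitive prices emerge depends heavily on those choices), two tabular Q-learning agents can be simulated repeatedly setting prices in a symmetric duopoly:

    # Illustrative sketch only: two tabular Q-learning pricing agents in a
    # symmetric duopoly with an assumed linear demand curve.
    import itertools
    import random

    random.seed(1)
    PRICES = [round(1.0 + 0.1 * k, 1) for k in range(11)]  # candidate prices 1.0..2.0
    COST = 1.0
    ALPHA, GAMMA = 0.1, 0.95       # learning rate, discount factor
    PERIODS = 200_000

    def profit(p_own, p_rival):
        # Assumed demand: falls in own price, rises in the rival's price.
        demand = max(0.0, 2.0 - 1.5 * p_own + 1.0 * p_rival)
        return (p_own - COST) * demand

    # State = last period's pair of price indices; each agent keeps its own Q-table.
    states = list(itertools.product(range(len(PRICES)), repeat=2))
    Q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]

    state = (0, 0)
    for t in range(PERIODS):
        eps = max(0.02, 1.0 - t / (0.8 * PERIODS))  # decaying exploration rate
        acts = []
        for i in range(2):
            if random.random() < eps:
                acts.append(random.randrange(len(PRICES)))   # explore
            else:
                q = Q[i][state]
                acts.append(q.index(max(q)))                 # exploit
        rewards = (profit(PRICES[acts[0]], PRICES[acts[1]]),
                   profit(PRICES[acts[1]], PRICES[acts[0]]))
        nxt = (acts[0], acts[1])
        for i in range(2):
            target = rewards[i] + GAMMA * max(Q[i][nxt])
            Q[i][state][acts[i]] += ALPHA * (target - Q[i][state][acts[i]])
        state = nxt

    print("Prices in the final period:", PRICES[acts[0]], PRICES[acts[1]])

The legal significance of such simulations is that no human ever exchanges a price or an assurance; any elevation above the competitive price level emerges from the agents’ repeated interaction alone.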

THE AGREEMENT ELEMENT IN THE MACHINE PRICING CASE

For three main reasons, the element of agreement need not be an obstacle to successfully prosecuting a price-fixing claim against competitors that use a common or similar vendor of algorithmic pricing data and software.

First, there is significant precedent for inferring the existence of an agreement among parties that knowingly participate in a collusive arrangement even if they do not directly interact, sometimes imprecisely referred to as a “rimless wheel” hub-and-spoke conspiracy. For example, in Toys “R” Us, Inc. v. F.T.C., 221 F.3d 928 (7th Cir. 2000), the court inferred the necessary concerted action from a series of individual agreements between toy manufacturers and Toys “R” Us in which each manufacturer promised that the toys it sold to Toys “R” Us and other toy stores would not be sold to big box stores in the same packaging. The FTC found that each of the manufacturers entered into the restraint on the condition that the others also did so. The court found that Toys “R” Us had engineered a horizontal boycott against a competitor in violation of Section 1, despite the absence of evidence of any “privity” between the boycotting manufacturers.

The Toys “R” Us case relied on the Supreme Court’s decision in Interstate Circuit v. United States, 306 U.S. 208 (1939), in which movie theater chains sent an identical letter to eight movie studios asking them to restrict secondary runs of certain films. The letter disclosed that each of the eight were receiving the same letter. The Court held that a direct agreement was not a prerequisite for an unlawful conspiracy. “It was enough that, knowing that concerted action was contemplated and invited, the distributors gave their adherence to the scheme and participated in it.”

The analogous issue in the algorithmic pricing scenario is whether the vendor’s end users know that their competitors are also end users. If so, the inquiry can consider the agreement element satisfied if the algorithm does, in fact, jointly maximize the end users’ profits.

The second factor overcoming the agreement element is related to the first. Whether software that recommends prices has interacted with the prices set by competitors to achieve joint profit maximization—that is, whether the machines have learned to collude without human intervention—is an empirical question. The same techniques used to uncover machine-learned collusion by simulation can be used to determine the extent of interdependence in historical price setting. If statistical evidence of collusive pricing is available, it is enough that the end users knowingly accepted the offer to set their prices guided by the algorithm. The economic rationale underlying the agreement element lies in the prohibition of joint rather than individual profit maximization, so direct evidence that market participants are jointly maximizing profits should obviate the need for further evidence of agreement.
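
To make the empirical point concrete, the following heavily simplified sketch (our illustration only, not an accepted econometric or legal standard; the rent figures and demand proxy are synthetic) shows one way to measure interdependence in historical price setting, by regressing one seller’s prices on a rival’s prices while controlling for a common demand proxy:

    # Illustrative sketch only: estimate how strongly one seller's prices track
    # a rival's prices after controlling for a shared demand proxy.
    # All data below are synthetic and the specification is purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 48                                              # months of hypothetical data
    demand = rng.normal(0, 1, T)                        # demand proxy (e.g., occupancy)
    rival_rent = 1500 + 30 * demand + rng.normal(0, 10, T)
    own_rent = 1480 + 0.8 * (rival_rent - 1500) + 25 * demand + rng.normal(0, 10, T)

    # Ordinary least squares of own rent on rival rent and the demand proxy.
    X = np.column_stack([np.ones(T), rival_rent, demand])
    beta, *_ = np.linalg.lstsq(X, own_rent, rcond=None)
    print(f"Sensitivity to rival rent, controlling for demand: {beta[1]:.2f}")

A real analysis would of course require far richer controls and data, but the basic idea is the same: if prices move together beyond what demand conditions explain, that is statistical evidence bearing on joint profit maximization.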

A third reason the agreement element need not stymie a Section 1 action against defendants engaged in algorithmic pricing is based on the Supreme Court’s decision in American Needle v. NFL, 560 U.S. 183 (2010). In that case the Court made clear that arrangements that remove independent centers of decision-making from the market run afoul of Section 1. If the net effect of the algorithm is to displace individual decision-making with decisions outsourced to a centralized pricing agent, the mechanism should be immaterial.

The rimless wheel of the so-called hub-and-spoke conspiracy is an inadequate analogy because the wheel in these cases does have a rim, i.e., a connection between the conspirators. In the scenarios above in which the courts have found Section 1 liability: i) each of the participants knew that its rivals were also entering into the same or similar arrangements, ii) the participants devolved pricing authority away from themselves down to an algorithmic pricing agent, and iii) historical prices could be shown statistically to have exceeded the competitive level in a way consistent with collusive pricing. These elements connect the participants in the scheme, supplying the “rim” to the spokes of the wheel. If the plaintiffs in the RealPage litigation can establish these elements, they will have met their burden of establishing the requisite element of agreement in their Section 1 claim.

What Employers Need to Know about the White House’s Executive Order on AI

President Joe Biden recently issued an executive order devised to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on the rights and safety of people and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance and enforcement actions that apply to their workers.

Executive Order Requirements Impacting Employers

Specifically, the executive order requires the Department of Justice and federal civil rights offices to coordinate on ‘best practices’ for investigating and prosecuting civil rights violations related to AI. The ‘best practices’ will address: job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and ‘best practices’ are focused on benefitting workers and “preventing employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.”

The executive order also calls for a report on AI’s potential labor-market impacts and for a study identifying options for strengthening federal support for workers facing labor disruptions, including from AI. Specifically, the president has directed the Chairman of the Council of Economic Advisers to “prepare and submit a report to the President on the labor-market effects of AI”. In addition, there is a requirement for the Secretary of Labor to submit “a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.” This report will include principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Employers should expect more direction once this report is completed in April 2024.

Increasing International Employment?

Developing and using generative AI inherently requires skilled workers, which President Biden recognizes. One of the goals of his executive order is to “[u]se existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.” While work visas have been historically difficult for employers to navigate, this executive order may make it easier for US employers to access skilled workers from overseas.

Looking Ahead

In light of the focus of this executive order, employers using AI for recruiting or for decisions about applicants (and even current employees) must be aware of the consequences of not putting a human check on potential bias. Working closely with employment lawyers at Sheppard Mullin and having multiple checks and balances on recruiting practices are essential when using generative AI.

While this executive order is quite limited in scope, it is only a first step. As these actions are implemented in the coming months, be sure to check back for updates.


The FCC Approves an NOI to Dive Deeper into AI and its Effects on Robocalls and Robotexts

AI is on the tip of everyone’s tongue these days, it seems. The Dame brought you a recap of President Biden’s orders addressing AI at the beginning of the month. This morning at the FCC’s open meeting, the commissioners were presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use alongside robocalls and robotexts. The five areas of interest are as follows:

  • First, the NOI seeks comment on whether, and if so how, the Commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the Commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts made using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how AI technologies may impact consumers who receive robocalls and robotexts, including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the Commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodial calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the Commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other forms of labeling when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message.
  • Lastly, the NOI seeks comment on what next steps the Commission should consider to further the inquiry.

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggests “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical because while the future of AI remains uncertain, one thing is clear — it has the potential to impact if not transform every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks, opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate, lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed to be particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning and how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the discussion of the AI NOI wrapped up with Chairwoman Rosenworcel commenting, “… I think we make a mistake if we only focus on the potential for harm. We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic, unlike anything we have today. They can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves but we also want to harness the benefits.”

Biden Executive Order Calls for HHS to Establish Health Care-Specific Artificial Intelligence Programs and Policies

On October 30, 2023, the Biden Administration released and signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) that articulates White House priorities and policies related to the use and development of artificial intelligence (AI) across different sectors, including health care.

The Biden Administration acknowledged the various competing interests related to AI, including weighing significant technological innovation against unintended societal consequences. Our Mintz and ML Strategies colleagues broadly covered the Executive Order in this week’s issue of AI: The Washington Report. Some sections of the Executive Order are sector-agnostic but will be especially relevant in health care, such as the requirement that agencies use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the improper collection and use of individuals’ data.

The Biden Administration only recently announced the Executive Order, but the discussion of regulating AI in health care is certainly not novel. For example, the U.S. Food and Drug Administration (FDA) has already incorporated artificial intelligence and machine learning-based medical device software into its medical device and software regulatory regime. The Office of the National Coordinator for Health Information Technology (ONC) also included AI and machine learning proposals under the HTI-1 Proposed Rule, including proposals to increase algorithmic transparency and allow users of clinical decision support (CDS) to determine if predictive Decision Support Interventions (DSIs) are fair, appropriate, valid, effective, and safe.

We will focus this post on the Executive Order’s health care-specific directives for the U.S. Department of Health and Human Services (HHS) and other relevant agencies.

HHS AI Task Force and Quality Assurance

To address how AI should be used safely and effectively in health care, the Executive Order requires HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an “HHS AI Task Force” by January 28, 2024. Once created, the HHS AI Task Force has 365 days to develop a regulatory action plan for predictive and generative AI-enabled technologies in health care that includes:

  • use of AI in health care delivery and financing and the need for human oversight where necessary and appropriate;
  • long-term safety and real-world performance monitoring of AI-enabled technologies;
  • integration of equity principles in AI-enabled technologies, including monitoring for model discrimination and bias;
  • assurance that safety, privacy, and security standards are baked into the software development lifecycle;
  • prioritization of transparency and making model documentation available to users to ensure AI is used safely;
  • collaboration with state, local, Tribal, and territorial health and human services agencies to communicate successful AI use cases and best practices; and
  • use of AI to make workplaces more efficient and reduce administrative burdens where possible.

HHS also has until March 28, 2024 to take the following steps:

  • consult with other relevant agencies to determine whether AI-enabled technologies in health care “maintain appropriate levels of quality”;
  • develop (along with other agencies) AI assurance policies to evaluate the performance of AI-enabled health care tools and assess AI-enabled health care-technology algorithmic system performance against real-world data; and
  • consult with other relevant agencies to reconcile the uses of AI in health care against federal non-discrimination and privacy laws, including providing technical assistance to and communicating potential consequences of noncompliance to health care providers and payers.

AI Safety Program and Drug Development

The Executive Order also directs HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to organize and implement an AI Safety Program by September 30, 2024. In partnership with federally listed Patient Safety Organizations, the AI Safety Program will be tasked with creating a common framework that organizations can use to monitor and track clinical errors resulting from AI used in health care settings. The program will also create a central repository to track complaints from patients and caregivers who report discrimination and bias related to the use of AI.

Additionally, by September 30, 2024, HHS must develop a strategy to regulate the use of AI or AI-enabled tools in the various phases of the drug development process, including determining opportunities for future regulation, rulemaking, guidance, and use of additional statutory authority.

HHS Grant and Award Programs and AI Tech Sprint

The Executive Order also directs HHS to use existing grant and award programs to support ethical AI development by health care technology developers by:

  • leveraging existing HHS programs to work with private sector actors to develop AI-enabled tools that can create personalized patient immune-response profiles safely and securely;
  • allocating 2024 Leading Edge Acceleration Projects (LEAP) in Health Information Technology funding for the development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and
  • accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and demonstrating successful AIM-AHEAD activities in underserved communities.

The Secretary of Veterans Affairs must also host two 3-month nationwide AI Tech Sprint competitions by September 30, 2024, with the goal of further developing AI systems to improve the quality of health care for veterans.

Key Takeaways

The Executive Order will spark the cross-agency development of a variety of AI-focused working groups, programs, and policies, including possible rulemaking and regulation, across the health care sector in the coming months. While the law has not yet caught up with the technology, the Executive Order provides helpful insight into the areas that will be topics of new legislation and regulation, such as drug development, as well as what may be the new enforcement priorities under existing law such as non-discrimination and data privacy and security. Health care technology developers and users will want to review their current policies and practices in light of the Biden Administration’s priorities to determine possible areas of improvement in the short term in connection with developing, implementing, and using AI.

Additionally, the National Institute of Standards and Technology (NIST) released the voluntary AI Risk Management Framework in January 2023 that, among other things, organizations can use to analyze and manage risks, impacts, and harms while responsibly designing, developing, deploying, and using AI systems over time. The Executive Order calls for NIST to develop a companion resource to the AI Risk Management Framework for generative AI. In preparation for the new AI programs and possible associated rulemaking from HHS, organizations in health care will want to familiarize themselves with the NIST AI Risk Management Framework and its generative AI companion as well as the AI Bill of Rights published by the Biden Administration in October 2022 to better understand what the federal government sees as characteristics of trustworthy AI systems.

Madison Castle contributed to this article.

Proposed Amendments to NY Film Production Tax Credit Would Disallow Costs for Artificial Intelligence

Since 2004, New York has provided tax credits to encourage film and television productions located in the state. In its adopted budget for fiscal year 2024, the tax credit program was extended to 2034, and the amount available for the tax credit increased to $700 million. The credit is 30% of “qualified costs” incurred in the production. This tax credit is one of the reasons that New York has remained one of the top filming locations in the United States notwithstanding stiff competition from other states to lure television and film projects.

Subsequently, legislation (S7422A) was introduced that would exclude from the “qualified costs” used to calculate the tax credit the costs of any production that “uses artificial intelligence in a manner which results in the displacement of employees whose salaries are qualified expenses, unless such replacement is permitted by a current collective bargaining agreement in force covering such employees.”

Given that the purpose of the tax credit is to incentivize production and the creation of jobs in the state, and with the increasing use of artificial intelligence (AI), there is scrutiny of how AI will impact employment in film and television productions. The legislators were also aware that the use of AI was a major issue in the recent negotiations for contracts with the writers (now settled) and actors (still ongoing as of this date). Consequently, the idea of disincentivizing the use of AI that supplants employment by removing the cost of AI from the calculation of the tax credit provided motivation to pursue the proposed legislation in New York’s Legislature.

The goal of removing AI costs from the credit is protecting employment from encroachment by AI, but how the disallowance would be implemented is unclear. For example, if instead of using costumed characters or extensive make-up, a production used computer generated images (CGI), would the cost of the CGI be disallowed? Or if AI were used to write or supplement dialogue, would that call into question those costs for computing the tax credit? How would an auditor reviewing the film credit know and understand where AI is used and whether it actually displaced a human employee? In addition, auditors would have to examine collective bargaining agreements to determine whether “such replacement is permitted by a current collective bargaining agreement in force covering such employees.”

Whether or not S. 7422-A is enacted, the proposal may pique the interest of the other 37 states that have some type of credit for film production. See Film Industry Tax Incentives: State-by-State (2023) | Wrapbook.

The Generative AI Revolution: Key Legal Considerations for the Fashion & Retail Industry

For better or worse, generative artificial intelligence (AI) is already transforming the way we live and work. Retail and fashion companies that fail to embrace AI likely risk losing their current market share or, worse, going out of business altogether. This paradigm shift is existential, and businesses that recognize and leverage AI will gain a significant competitive advantage.

For instance, some of our clients are using AI to streamline product design processes, reducing the costs and time necessary to generate designs, while others employ virtual models to circumvent issues related to adult and child modeling. Additionally, AI can provide valuable market intelligence to inform sales and distribution strategies. This alert will address these benefits, as well as other significant commercial advantages, and delve into the legal risks associated with utilizing AI in the fashion and retail industry.

There are significant commercial advantages to using AI for retail and fashion companies, including:

1. Product Design

From fast fashion to luxury brands, AI is set to revolutionize the fashion and retail industry. It enables the generation of innovative designs by drawing inspiration from a designer’s existing works and incorporating the designer’s unique style into new creations. For instance, in March 2023, G-Star Raw created its first denim couture piece designed by AI. We also worked with a client who utilized an AI tool to analyze its footwear designs from the previous two years and generate new designs for 2024. Remarkably, the AI tool produced 50 designs in just four minutes, with half of them being accepted by the company. Typically, this process would have required numerous designers and taken months to complete. While it is unlikely that AI tools will entirely replace human designers, the cost savings and efficiency gained from using such technology are undeniable and should not be overlooked.

2. Virtual Models

2023 marks a groundbreaking year with the world’s first AI Fashion Week and the launch of AI-generated campaigns, such as Valentino’s Maison Valentino Essentials collection, which combined AI-generated models with actual product photography. Fashion companies allocate a significant portion of their budget to model selection and hiring, necessitating entire departments and grappling with legal concerns such as royalties, SAG, moral issues, and child labor. By leveraging AI tools to create lifelike virtual models, these companies can eliminate the associated challenges and expenses, as AI models are not subject to labor laws — including child entertainment regulations — or collective bargaining agreements.

3. Advertising Campaigns

AI can also be used to create entire advertising campaigns from print copy to email blasts, blog posts, and social media. Companies traditionally invest substantial time and resources in these efforts, but AI can generate such content in mere moments. While human involvement remains essential, AI allows businesses to reduce the manpower required. Retailers can also benefit from AI-powered chatbots, which provide 24/7 customer support while reducing overhead expenses linked to in-person customer service. Moreover, AI’s predictive capabilities enable businesses to anticipate trends across various demographics in real-time, driving customer engagement. By processing and analyzing vast amounts of consumer data and preferences, brands can create hyper-personalized and bespoke content, enhancing customer acquisition, engagement, and retention. Furthermore, AI facilitates mass content creation at an impressively low cost, making it an invaluable tool in today’s competitive market.

4. ESG – Virtual Mirrors and Apps

From an environmental, social, and corporate governance (ESG) standpoint, the use of AI-powered technology can eliminate the need for retail stores to carry excess inventory, thereby reducing online returns and exchanges. AI smart mirrors can enhance in-store experiences for shoppers by enabling them to virtually try on outfits in various sizes and colors. Furthermore, customers can now enjoy the virtual try-on experience from the comfort of their homes, as demonstrated by Amazon’s “Virtual Try-On for Shoes,” which allows users to visualize how selected shoes will appear on their feet using their smartphone cameras.

5. Product Distribution and Logistics

Fashion companies rely on their C-level executives to make informed predictions about product quantities, potential sales in specific markets or stores, and the styles that will perform best in each market. In terms of logistics, AI models can be employed to forecast a business’s future sales by analyzing historical inventory and sales data. This ability to anticipate supply chain requirements can lead to increased profits and support the industry’s initiatives to reduce waste.
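
As a simple illustration of the forecasting idea (the sales figures, lag structure, and model choice below are assumptions, not a recommended production approach), historical sales can be turned into lagged features and fed to a regression model:

    # Illustrative sketch only: forecast next-period sales from historical sales
    # using lagged features and linear regression. The figures are invented
    # monthly unit sales for a single hypothetical SKU.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    sales = np.array([120, 135, 128, 150, 160, 155, 170, 180, 175, 190, 205, 198],
                     dtype=float)

    LAGS = 3  # predict next month from the previous three months
    X = np.array([sales[i:i + LAGS] for i in range(len(sales) - LAGS)])
    y = sales[LAGS:]

    model = LinearRegression().fit(X, y)
    forecast = model.predict(sales[-LAGS:].reshape(1, -1))[0]
    print(f"Forecast for next month: {forecast:.0f} units")

In practice, retailers would feed in far richer signals (seasonality, promotions, store-level inventory), but even a sketch like this shows how historical data can inform quantity and distribution decisions.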

Legal and Ethical Risks

Although AI has some major advantages, it also comes with a number of legal and ethical risks that should be considered, including:

1. Accuracy and Reliability

For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous recorded instances in which generative AI tools have created works that arguably infringe the copyrights of existing works, make up facts, or cite phantom sources. It is also important to note that works created by generative AI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. Like anything else, companies should carefully vet any content produced by generative AI before using it for commercial purposes.

2. Data Security and Confidentiality

Before utilizing generative AI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. Like any third-party software, the security and data processing practices for these tools vary. Some tools may store and use prompts and other information submitted by users. Other tools offer assurances that prompts and other information will be deleted or anonymized. Enterprise AI solutions, such as Azure’s OpenAI Service, can also potentially help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters required by the enterprise.

Before authorizing the use of generative AI tools, organizations and their legal counsel should (i) carefully review the applicable terms of use, (ii) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (iii) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.

3. Software Development and Open-Source Software

One of the most popular use cases for generative AI has been computer coding and software development. But the proliferation of AI tools like GitHub Copilot, as well as a pending lawsuit against its developers, has raised a number of questions for legal counsel about whether use of such tools could expose companies to legal claims or license obligations.

These concerns stem in part from the use of open-source code libraries in the data sets for Copilot and similar tools. While open-source code is generally freely available for use, that does not mean that it may be used without condition or limitation. In fact, open-source code licenses typically impose a variety of obligations on individuals and entities that incorporate open-source code into their works. This may include requiring an attribution notice in the derivative work, providing access to source code, and/or requiring that the derivative work be made available on the same terms as the open-source code.

Many companies, particularly those that develop valuable software products, cannot risk having open-source code inadvertently included in their proprietary products or inadvertently disclosing proprietary code through insecure generative AI coding tools. That said, some AI developers are now providing tools that allow coders to exclude AI-generated code that matches code in large public repositories (in other words, making sure the AI assistant is not directly copying other public code), which would reduce the likelihood of an infringement claim or inclusion of open-source code. As with other AI generated content, users should proceed cautiously, while carefully reviewing and testing AI-contributed code.

4. Content Creation and Fair Compensation

In a recent interview, Billy Corgan, the lead singer of Smashing Pumpkins, predicted that “AI will change music forever” because once young artists figure out they can use generative AI tools to create new music, they won’t spend 10,000 hours in a basement the way he did. The same could be said for photography, visual art, writing, and other forms of creative expression.

This challenge to the notion of human authorship has ethical and legal implications. For example, generative AI tools have the potential to significantly undermine the IP royalty and licensing regimes that are intended to ensure human creators are fairly compensated for their work. Consider the recent example of the viral song, “Heart on My Sleeve,” which sounded like a collaboration between Drake and the Weeknd, but was in fact created entirely by AI. Before being removed from streaming services, the song racked up millions of plays — potentially depriving the real artists of royalties they would otherwise have earned from plays of their copyrighted songs. In response, some have suggested that human artists should be compensated when generative AI tools create works that mimic or are closely inspired by copyrighted works and/or that artists should be compensated if their works are used to train the large language models that make generative AI possible. Others have suggested that works should be clearly labeled if they are created by generative AI, so as to distinguish works created by humans from those created by machine.

5. Intellectual Property Protection and Enforcement

Content produced without significant human control and involvement is not protectable by US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way to mitigating IP risk: generative AI tools should aid human creation, not replace it. Provided that generative AI tools are used merely to help with drafting or the creative process, then it is more likely that the resulting work product will be protectable under copyright or patent laws. In contrast, asking generative AI tools to create a finished work product, such as asking it to draft an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention the professional responsibility and ethical implications.

6. Labor and Employment

When Hollywood writers went on strike, one issue in particular generated headlines: a demand by the union to regulate the use of artificial intelligence on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train AI large language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives.

Employers are also utilizing automated systems to target job advertisements, recruit applicants, and make hiring decisions. Such systems expose employers to liability if they intentionally or unintentionally exclude or adversely impact protected groups. According to the Equal Employment Opportunity Commission (EEOC), that’s precisely what happened with iTutorGroup, Inc.

7. Future Regulation

Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the United States, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency that would be responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for harm they create. Whether US legislators and regulators can overcome partisan divisions and enact a comprehensive framework seems unlikely, but as is becoming increasingly clear, these are unprecedented times.
