President Biden Signs Executive Order Directing Agencies to Prioritize Pro-Union and Union Neutrality Policies

On September 6, 2024, President Biden signed an Executive Order on Investing in America and Investing in American Workers (the “Order”), which, among other things, aims to provide “incentives for federally assisted projects with high labor standards – including collective bargaining agreements, project labor agreements, and certain community benefits agreements.” Specifically, the Order directs federal agencies, when awarding “Federal financial assistance,” to prioritize projects that provide “high labor standards.” Federal financial assistance is defined as “funds obtained from or borrowed on the credit of the Federal Government pursuant to grants (whether formula or discretionary), loans, or rebates, or projects undertaken pursuant to any Federal program involving such grants, loans, or rebates.”

The Order expressly instructs agencies to prioritize projects that “provide a clear plan for efficient project delivery by promoting positive labor-management relations.” This includes project labor agreements, collective bargaining agreements, community benefits agreements, and other “agreements designed to facilitate first collective bargaining agreements, voluntary union recognition, and neutrality by the employer with respect to union organizing.”

In addition, the Order directs agencies to prioritize projects that: (i) “enhance worker productivity by promoting family-sustaining wages”; (ii) supply particular benefits, including paid leave (e.g., paid sick, family, and medical leave), healthcare benefits, retirement benefits, and child, dependent, and elder care; (iii) enact policies designed to combat discrimination that impacts workers from underserved communities; (iv) expand worker access to high-quality training and credentials that will “lead to good jobs” and strengthen workforce development; and (v) promote and protect worker health and safety. Per the Order, projects that use, among other things, union pattern wage scales, joint labor-management partnerships to invest in “union-affiliated training programs, registered apprenticeships, and pre-apprenticeship programs,” or policies that encourage worker and union participation in the design and implementation of workplace safety and health management systems, will assist in satisfying the goal of achieving “high labor standards” and should be prioritized.

To effectuate the Order’s priorities, agencies are instructed to consider including application evaluation criteria or selection factors that will prioritize those applicants for federal assistance that adopt or provide a specific plan to adopt the priorities set forth in the Order. Agencies also must consider, among other things, publishing relevant guidance, such as best practice guides, engaging more deeply with applicants prior to any award of federal assistance “to ensure that applicants understand the benefits of [the Order’s] priorities for key programs and projects,” and collecting relevant data to evaluate and monitor the progress of funding recipients in satisfying the Order’s goals.

The “implementing agencies,” or the agencies subject to the Order, are the Department of the Interior, the Department of Agriculture, the Department of Commerce, the Department of Labor, the Department of Housing and Urban Development, the Department of Transportation, the Department of Energy, the Department of Education, the Department of Homeland Security, and the Environmental Protection Agency.

Finally, the Order creates a task force, referred to as the Investing in Good Jobs Task Force, that will be co-chaired by the Secretary of Labor and the Director of the National Economic Council, or their designees, and will oversee implementation of the Order’s labor standards in funding decisions by the implementing agencies.

The White House also issued a Fact Sheet (available here) discussing the Order and President Biden’s motivation for its enactment. It remains to be seen what impact the Order will have on the implementing agencies or how those agencies may alter their funding programs to comply with the Order. We will continue to monitor these developments and will keep you informed as to any new updates.

President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

On February 28, 2024, President Biden issued an Executive Order (EO) restricting access by countries of concern to Americans’ sensitive personal data. The EO and forthcoming regulations will impact the use of genomic data, biometric data, personal health care data, geolocation data, financial data, and certain other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by countries of concern accessing US persons’ sensitive data – data that could then be used to surveil, scam, and blackmail US persons, to support counterintelligence efforts, or to be exploited by artificial intelligence (AI) or used to further develop AI. The EO, however, does not call for restrictive personal data localization and aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with protection of security, privacy, and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules that will address these risks, and it provides an opportunity for businesses and other stakeholders, including labor and human rights organizations, to offer critical input to agency officials as they draft these regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions, and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The regime is contemplated to include licensing and advisory opinion processes. DOJ expects companies to develop and implement compliance procedures in response to the EO and its subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action – action that could include civil and criminal penalties. Companies should consider taking action today to evaluate risk, engage in the rulemaking process, and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before; data that is often obtained by data brokers and other third parties. Concerns have grown around perceived foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing availability and use of sensitive personal information digitally, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms, including data brokerage, vendor and employment contracts and investment agreements, to obtain and exploit American data. The EO does not immediately establish new rules or requirements for protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations – but these restrictions will not enter into effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation due to large-scale transfer to countries of concern or certain related covered persons, and to issue regulations establishing greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

Also on February 28, DOJ issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons: data brokerage transactions and genomic data transactions
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. These restricted transactions would be subject to security requirements developed by DHS’s Cybersecurity and Infrastructure Security Agency, focused on preventing access to the data by countries of concern

The ANPRM also proposes general and specific licensing processes that would give DOJ considerable flexibility for certain categories of transactions, as well as narrower exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State, and the Department of Commerce. Companies and individuals contemplating data transactions will also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article 

Recent Healthcare-Related Artificial Intelligence Developments

Artificial intelligence (“AI”) is here to stay. The development and use of AI is rapidly growing in the healthcare landscape with no signs of slowing down.

From a governmental perspective, many federal agencies are embracing the possibilities of AI. The Centers for Disease Control and Prevention is exploring the ability of AI to estimate sentinel events and combat disease outbreaks, and the National Institutes of Health is using AI for priority research areas. The Centers for Medicare and Medicaid Services is also assessing whether algorithms used by plans and providers to identify high-risk patients and manage costs can introduce bias and restrictions. Additionally, as of December 2023, the U.S. Food & Drug Administration had cleared more than 690 AI-enabled devices for market use.

From a clinical perspective, payers and providers are integrating AI into daily operations and patient care. Hospitals and payers are using AI tools to assist in billing. Physicians are using AI to take notes and a wide range of providers are grappling with which AI tools to use and how to deploy AI in the clinical setting. With the application of AI in clinical settings, the standard of patient care is evolving and no entity wants to be left behind.

From an industry perspective, the legal and business spheres are transforming as a result of new national and international regulations focused on establishing the safe and effective use of AI, as well as commercial responses to those regulations. Three such developments are top of mind: (i) President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; (ii) the U.S. Department of Health and Human Services’ (“HHS”) Final Rule on Health Data, Technology, and Interoperability; and (iii) the World Health Organization’s (“WHO”) Guidance for Large Multi-Modal Models of Generative AI. In response to the introduction of these regulations and the general advancement of AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to a shared goal of responsible AI use.

U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”). Though long-awaited, the Executive Order was a major development and is one of the most ambitious attempts to regulate this burgeoning technology. The Executive Order has eight guiding principles and priorities, which include (i) Safety and Security; (ii) Innovation and Competition; (iii) Commitment to U.S. Workforce; (iv) Equity and Civil Rights; (v) Consumer Protection; (vi) Privacy; (vii) Government Use of AI; and (viii) Global Leadership.

Notably for healthcare stakeholders, the Executive Order directs the National Institute of Standards and Technology to establish guidelines and best practices for the development and use of AI and directs HHS to develop an AI Task Force that will engineer policies and frameworks for the responsible deployment of AI and AI-enabled technologies in healthcare. In addition to those directives, the Executive Order highlights the duality of AI with the “promise” that it brings and the “peril” that it has the potential to cause. This duality is reflected in HHS directives to establish an AI safety program to prioritize the award of grants in support of AI development while ensuring standards of nondiscrimination are upheld.

U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule

In the wake of the Executive Order, the HHS Office of the National Coordinator for Health Information Technology finalized its rule to increase algorithm transparency, widely known as HTI-1, on December 13, 2023. With respect to AI, the rule establishes transparency requirements for AI and other predictive algorithms that are part of certified health information technology. The rule also:

  • implements requirements to improve equity, innovation, and interoperability;
  • supports the access, exchange, and use of electronic health information;
  • addresses concerns around bias, data collection, and safety;
  • modifies the existing clinical decision support certification criteria and narrows the scope of impacted predictive decision support interventions; and
  • adopts requirements for certification of health IT through new Conditions and Maintenance of Certification requirements for developers.

Voluntary Commitments from Leading Healthcare Companies for Responsible AI Use

Immediately on the heels of the release of HTI-1 came voluntary commitments from leading healthcare companies on responsible AI development and deployment. On December 14, 2023, the Biden Administration announced that 28 healthcare provider and payer organizations had committed to moving toward the safe, secure, and trustworthy purchasing and use of AI technology. Specifically, the provider and payer organizations agreed to:

  • develop AI solutions to optimize healthcare delivery and payment;
  • work to ensure that the solutions are fair, appropriate, valid, effective, and safe (“F.A.V.E.S.”);
  • deploy trust mechanisms to inform users if content is largely AI-generated and not reviewed or edited by a human;
  • adhere to a risk management framework when utilizing AI; and
  • research, investigate, and develop AI swiftly but responsibly.

WHO Guidance for Large Multi-Modal Models of Generative AI

On January 18, 2024, the WHO released guidance for large multi-modal models (“LMM”) of generative AI, which can simultaneously process and understand multiple types of data modalities such as text, images, audio, and video. The 98-page guidance contains over 40 recommendations for tech developers, providers, and governments on LMMs, and names five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses the liability issues that may arise out of the use of LMMs.

Closely related to the WHO guidance, the European Council’s agreement to move forward with a European Union AI Act (“Act”) was a significant milestone in AI regulation in the European Union. As previewed in December 2023, the Act will inform how AI is regulated across the European Union, and other nations will likely take note and follow suit.

Conclusion

There is no question that AI is here to stay. But how the healthcare industry will look when AI is more fully integrated still remains to be seen. The framework for regulating AI will continue to evolve as AI and the use of AI in healthcare settings changes. In the meantime, healthcare stakeholders considering or adopting AI solutions should stay abreast of developments in AI to ensure compliance with applicable laws and regulations.

Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. In furtherance of this responsibility, the DOC convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will engage in the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to this email address.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including guidelines on red-team testing for AI and a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.

What Employers Need to Know about the White House’s Executive Order on AI

President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on people’s rights and safety and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance, and enforcement actions that apply to their workers.

Executive Order Requirements Impacting Employers

Specifically, the executive order requires the Department of Justice and federal civil rights offices to coordinate on ‘best practices’ for investigating and prosecuting civil rights violations related to AI. The ‘best practices’ will address: job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and ‘best practices’ are focused on benefitting workers and “preventing employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.”

The executive order also calls for reports on AI’s potential labor-market impacts and on options for strengthening federal support for workers facing labor disruptions, including from AI. Specifically, the president has directed the Chairman of the Council of Economic Advisers to “prepare and submit a report to the President on the labor-market effects of AI.” In addition, the Secretary of Labor must submit “a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.” This report will include principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Employers should expect more direction once this report is completed in April 2024.

Increasing International Employment?

Developing and using generative AI inherently requires skilled workers, which President Biden recognizes. One of the goals of his executive order is to “[u]se existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.” While work visas have been historically difficult for employers to navigate, this executive order may make it easier for US employers to access skilled workers from overseas.

Looking Ahead

In light of this executive order’s focus, employers using AI for recruiting or for decisions about applicants (and even current employees) must be aware of the consequences of not putting a human check on potential bias. Working closely with employment lawyers at Sheppard Mullin and having multiple checks and balances on recruiting practices are essential when using generative AI.

While this executive order is quite limited in scope, it is only a first step. As these actions are implemented in the coming months, be sure to check back for updates.

Biden Executive Order Calls for HHS to Establish Health Care-Specific Artificial Intelligence Programs and Policies

On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) that articulates White House priorities and policies related to the use and development of artificial intelligence (AI) across different sectors, including health care.

The Biden Administration acknowledged the various competing interests related to AI, including weighing significant technological innovation against unintended societal consequences. Our Mintz and ML Strategies colleagues broadly covered the Executive Order in this week’s issue of AI: The Washington Report. Some sections of the Executive Order are sector-agnostic but will be especially relevant in health care, such as the requirement that agencies use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the improper collection and use of individuals’ data.

The Biden Administration only recently announced the Executive Order, but the discussion of regulating AI in health care is certainly not novel. For example, the U.S. Food and Drug Administration (FDA) has already incorporated artificial intelligence and machine learning-based medical device software into its medical device and software regulatory regime. The Office of the National Coordinator for Health Information Technology (ONC) also included AI and machine learning proposals under the HTI-1 Proposed Rule, including proposals to increase algorithmic transparency and allow users of clinical decision support (CDS) to determine if predictive Decision Support Interventions (DSIs) are fair, appropriate, valid, effective, and safe.

We will focus this post on the Executive Order health care-specific directives for the U.S. Department of Health and Human Services (HHS) and other relevant agencies.

HHS AI Task Force and Quality Assurance

To address how AI should be used safely and effectively in health care, the Executive Order requires HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an “HHS AI Task Force” by January 28, 2024. Once created, the HHS AI Task Force has 365 days to develop a regulatory action plan for predictive and generative AI-enabled technologies in health care that includes:

  • use of AI in health care delivery and financing and the need for human oversight where necessary and appropriate;
  • long-term safety and real-world performance monitoring of AI-enabled technologies;
  • integration of equity principles in AI-enabled technologies, including monitoring for model discrimination and bias;
  • assurance that safety, privacy, and security standards are baked into the software development lifecycle;
  • prioritization of transparency and making model documentation available to users to ensure AI is used safely;
  • collaboration with state, local, Tribal, and territorial health and human services agencies to communicate successful AI use cases and best practices; and
  • use of AI to make workplaces more efficient and reduce administrative burdens where possible.

HHS also has until March 28, 2024 to take the following steps:

  • consult with other relevant agencies to determine whether AI-enabled technologies in health care “maintain appropriate levels of quality”;
  • develop (along with other agencies) AI assurance policies to evaluate the performance of AI-enabled health care tools and assess AI-enabled health care-technology algorithmic system performance against real-world data; and
  • consult with other relevant agencies to reconcile the uses of AI in health care against federal non-discrimination and privacy laws, including providing technical assistance to and communicating potential consequences of noncompliance to health care providers and payers.

AI Safety Program and Drug Development

The Executive Order also directs HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to organize and implement an AI Safety Program by September 30, 2024. In partnership with federally listed Patient Safety Organizations, the AI Safety Program will be tasked with creating a common framework that organizations can use to monitor and track clinical errors resulting from AI used in health care settings. The program will also create a central tracking repository to track complaints from patients and caregivers who report discrimination and bias related to the use of AI.

Additionally, by September 30, 2024, HHS must develop a strategy to regulate the use of AI or AI-enabled tools in the various phases of the drug development process, including determining opportunities for future regulation, rulemaking, guidance, and use of additional statutory authority.

HHS Grant and Award Programs and AI Tech Sprint

The Executive Order also directs HHS to use existing grant and award programs to support ethical AI development by health care technology developers by:

  • leveraging existing HHS programs to work with private sector actors to develop AI-enabled tools that can create personalized patient immune-response profiles safely and securely;
  • allocating 2024 Leading Edge Acceleration Projects (LEAP) in Health Information Technology funding for the development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and
  • accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and demonstrating successful AIM-AHEAD activities in underserved communities.

The Secretary of Veterans Affairs must also host two 3-month nationwide AI Tech Sprint competitions by September 30, 2024, with the goal of further developing AI systems to improve the quality of health care for veterans.

Key Takeaways

The Executive Order will spark the cross-agency development of a variety of AI-focused working groups, programs, and policies, including possible rulemaking and regulation, across the health care sector in the coming months. While the law has not yet caught up with the technology, the Executive Order provides helpful insight into the areas that will be topics of new legislation and regulation, such as drug development, as well as what may be the new enforcement priorities under existing laws, such as those governing non-discrimination and data privacy and security. Health care technology developers and users will want to review their current policies and practices in light of the Biden Administration’s priorities to determine possible areas of improvement in the short term in connection with developing, implementing, and using AI.

Additionally, the National Institute of Standards and Technology (NIST) released the voluntary AI Risk Management Framework in January 2023 that, among other things, organizations can use to analyze and manage risks, impacts, and harms while responsibly designing, developing, deploying, and using AI systems over time. The Executive Order calls for NIST to develop a companion resource to the AI Risk Management Framework for generative AI. In preparation for the new AI programs and possible associated rulemaking from HHS, organizations in health care will want to familiarize themselves with the NIST AI Risk Management Framework and its generative AI companion as well as the Blueprint for an AI Bill of Rights published by the Biden Administration in October 2022 to better understand what the federal government sees as characteristics of trustworthy AI systems.

Madison Castle contributed to this article.

Federal Agencies Announce Investments and Resources to Advance National Biotechnology and Biomanufacturing Initiative

As reported in our September 13, 2022, blog item, on September 12, 2022, President Joseph Biden signed an Executive Order (EO) creating a National Biotechnology and Biomanufacturing Initiative “that will ensure we can make in the United States all that we invent in the United States.” The White House hosted a Summit on Biotechnology and Biomanufacturing on September 14, 2022. According to the White House fact sheet on the summit, federal departments and agencies, with funding of more than $2 billion, will take the following actions:

  • Leverage biotechnology for strengthened supply chains: The Department of Health and Human Services (DHHS) will invest $40 million to expand the role of biomanufacturing for active pharmaceutical ingredients (API), antibiotics, and the key starting materials needed to produce essential medications and respond to pandemics. The Department of Defense (DOD) is launching the Tri-Service Biotechnology for a Resilient Supply Chain program with a more than $270 million investment over five years to turn research into products more quickly and to support the advanced development of biobased materials for defense supply chains, such as fuels, fire-resistant composites, polymers and resins, and protective materials. Through the Sustainable Aviation Fuel Grand Challenge, the Department of Energy (DOE) will work with the Department of Transportation and the U.S. Department of Agriculture (USDA) to leverage the estimated one billion tons of sustainable biomass and waste resources in the United States to provide domestic supply chains for fuels, chemicals, and materials.
  • Expand domestic biomanufacturing: DOD will invest $1 billion in bioindustrial domestic manufacturing infrastructure over five years to catalyze the establishment of the domestic bioindustrial manufacturing base that is accessible to U.S. innovators. According to the fact sheet, this support will provide incentives for private- and public-sector partners to expand manufacturing capacity for products important to both commercial and defense supply chains, such as critical chemicals.
  • Foster innovation across the United States: The National Science Foundation (NSF) recently announced a competition to fund Regional Innovation Engines that will support key areas of national interest and economic promise, including biotechnology and biomanufacturing topics such as manufacturing life-saving medicines, reducing waste, and mitigating climate change. In May 2022, USDA announced $32 million for wood innovation and community wood grants, leveraging an additional $93 million in partner funds to develop new wood products and enable effective use of U.S. forest resources. DOE also plans to announce new awards of approximately $178 million to advance innovative research efforts in biotechnology, bioproducts, and biomaterials. In addition, the U.S. Economic Development Administration’s $1 billion Build Back Better Regional Challenge will invest more than $200 million to strengthen America’s bioeconomy by advancing regional biotechnology and biomanufacturing programs.
  • Bring bioproducts to market: DOE will provide up to $100 million for research and development (R&D) for conversion of biomass to fuels and chemicals, including R&D for improved production and recycling of biobased plastics. DOE will also double efforts, adding an additional $60 million, to de-risk the scale-up of biotechnology and biomanufacturing that will lead to commercialization of biorefineries that produce renewable chemicals and fuels that significantly reduce greenhouse gas emissions from transportation, industry, and agriculture. The new $10 million Bioproduct Pilot Program will support scale-up activities and studies on the benefits of biobased products. Manufacturing USA institutes BioFabUSA and BioMADE (launched by DOD) and the National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL) (launched by the Department of Commerce (DOC)) will expand their industry partnerships to enable commercialization across regenerative medicine, industrial biomanufacturing, and biopharmaceuticals.
  • Train the next generation of biotechnologists: The National Institutes of Health (NIH) is expanding the Innovation Corps (I-Corps™), a biotech entrepreneurship bootcamp. NIIMBL will continue to offer a summer immersion program, the NIIMBL eXperience, in partnership with the National Society for Black Engineers, which connects underrepresented students with biopharmaceutical companies, and support pathways to careers in biotechnology. In March 2022, USDA announced $68 million through the Agriculture and Food Research Initiative to train the next generation of research and education professionals.
  • Drive regulatory innovation to increase access to products of biotechnology: The Food and Drug Administration (FDA) is spearheading efforts to support advanced manufacturing through regulatory science, technical guidance, and increased engagement with industry seeking to leverage these emerging technologies. For agricultural biotechnologies, USDA is building new regulatory processes to promote safe innovation in agriculture and alternative foods, allowing USDA to review more diverse products.
  • Advance measurements and standards for the bioeconomy: DOC plans to invest an additional $14 million next year at the National Institute of Standards and Technology for biotechnology research programs to develop measurement technologies, standards, and data for the U.S. bioeconomy.
  • Reduce risk through investing in biosecurity innovations: DOE’s National Nuclear Security Administration plans to initiate a new $20 million bioassurance program that will advance U.S. capabilities to anticipate, assess, detect, and mitigate biotechnology and biomanufacturing risks, and will integrate biosecurity into biotechnology development.
  • Facilitate data sharing to advance the bioeconomy: Through the Cancer Moonshot, NIH is expanding the Cancer Research Data Ecosystem, a national data infrastructure that encourages data sharing to support cancer care for individual patients and enables discovery of new treatments. USDA is working with NIH to ensure that data on persistent poverty can be integrated with cancer surveillance. NSF recently announced a competition for a new $20 million biosciences data center to increase our understanding of living systems at small scales, which will produce new biotechnology designs to make products in agriculture, medicine and health, and materials.

A recording of the White House summit is available online.

©2022 Bergeson & Campbell, P.C.

Biden Administration Seeks to Clarify Patient Privacy Protections Post-Dobbs, Though Questions Remain

On July 8, two weeks following the Supreme Court’s ruling in Dobbs v. Jackson that invalidated the constitutional right to abortion, President Biden signed Executive Order 14076 (E.O.). The E.O. directed federal agencies to take various actions to protect access to reproductive health care services,[1] including directing the Secretary of the U.S. Department of Health and Human Services (HHS) to “consider actions” to strengthen the protection of sensitive healthcare information, including data on reproductive healthcare services like abortion, by issuing new guidance under the Health Insurance Portability and Accountability Act of 1996 (HIPAA).[2]

The directive bolstered efforts already underway by the Biden Administration. A week before the E.O. was signed, HHS Secretary Xavier Becerra directed the HHS Office for Civil Rights (OCR) to take steps to ensure privacy protections for patients who receive, and providers who furnish, reproductive health care services, including abortions.[3] The following day, OCR issued two guidance documents to carry out this order, which are described below.

Although the guidance issued by OCR clarifies the privacy protections as they exist under current law post-Dobbs, it does not offer patients or providers new or strengthened privacy rights. Indeed, the guidance illustrates the limitations of HIPAA regarding protection of health information of individuals related to abortion services.

A.  HHS Actions to Safeguard PHI Post-Dobbs

Following Secretary Becerra’s press announcement, OCR issued two new guidance documents outlining (1) when the HIPAA Privacy Rule may prevent the unconsented disclosure of reproductive health-related information; and (2) best practices for consumers to protect sensitive health information collected by personal cell phones, tablets, and apps.

(1) HIPAA Privacy Rule and Disclosures of Information Relating to Reproductive Health Care

In the “Guidance to Protect Patient Privacy in Wake of Supreme Court Decision on Roe,”[4] OCR addresses three existing exceptions in the HIPAA Privacy Rule to the disclosure of PHI without an individual’s authorization and provides examples of how those exceptions may be applied post-Dobbs.

The three exceptions discussed in the OCR guidance are the exceptions for disclosures required by law,[5]  for purposes of law enforcement,[6] or to avert a serious threat to health or safety.[7]

While the OCR guidance reiterates that the Privacy Rule permits, “but does not require,” disclosure of PHI under each of these exceptions,[8] this offers limited protection because it leaves the decision whether to disclose to the provider. Although these exceptions are highlighted as “protections,” they expressly permit the disclosure of protected health information. Further, while it is true that the HIPAA Privacy Rule itself may not compel disclosure (but merely permits it), the guidance fails to mention that in many situations in which these exceptions apply, the provider will have other legal authority (such as state law) mandating the disclosure; thus, a refusal to disclose the PHI may be unlawful under a law other than HIPAA.

Two of the exceptions discussed in the guidance – the required-by-law exception and the law enforcement exception – apply in the first place only when valid legal authority requires disclosure. In those situations, the fact that HIPAA does not compel disclosure is of no relevance. Certainly, when there is no valid legal authority requiring disclosure of PHI, HIPAA prohibits disclosure, as noted in the OCR guidance. However, in states with restrictive abortion laws, the state legal authorities are likely to be designed to require disclosure – which HIPAA does not prevent.

For instance, if a health care provider receives a valid subpoena from a Texas court that is ordering the disclosure of PHI as part of a case against an individual suspected of aiding and abetting an abortion, in violation of Texas’ S.B. 8, then that provider could be held in contempt of court for failing to comply with the subpoena, despite the fact that HIPAA does not compel disclosure.[9] For more examples on when a covered entity may be required to disclose PHI, please see EBG’s prior blog: The Pendulum Swings Both Ways: State Responses to Protect Reproductive Health Data, Post-Roe.[10]

Notably, the OCR guidance does provide a new interpretation of the application of the exception for disclosures to avert a serious threat to health or safety. Under this exception, covered entities may disclose PHI, consistent with applicable law and standards of ethical conduct, if the covered entity, in good faith, believes the use or disclosure is necessary to prevent or lessen a serious and imminent threat to the health or safety of a person or the public. OCR states that it would be inconsistent with professional standards of ethical conduct to make such a disclosure of PHI to law enforcement or others regarding an individual’s interest, intent, or prior experience with reproductive health care. Thus, in the guidance, OCR takes the position that if a patient in a state where abortion is prohibited informs a health care provider of the patient’s intent to seek an abortion that would be legal in another state, this would not fall into the exception for disclosures to avert a serious threat to health or safety.  Covered entities should be aware of OCR’s position and understand that presumably OCR would view any such disclosure as a HIPAA violation.

(2) Protecting the Privacy and Security of Individuals’ Health Information When Using Personal Cell Phones or Tablets

OCR also issued guidance on how individuals can best protect their PHI on their own personal devices. HIPAA does not generally protect the privacy or security of health information when it is accessed through or stored on personal cell phones or tablets. Rather, HIPAA only applies when PHI is created, received, maintained, or transmitted by covered entities and business associates. As a result, it is not unlawful under HIPAA for information collected by devices or apps – including data pertaining to reproductive healthcare – to be disclosed without the consumer’s knowledge.[11]

In an effort to clarify HIPAA’s limitations in protecting such information, OCR issued guidance on protecting sensitive consumer information stored on personal devices and in apps.[12] This includes step-by-step guidance on how consumers can control data collection, including location data, and how to securely dispose of old devices.[13]

Further, some states have taken steps to fill the legal gaps, with varying degrees of success. For example, California’s Confidentiality of Medical Information Act (“CMIA”) extends to “any business that offers software or hardware to consumers, including a mobile application or other related device that is designed to maintain medical information.”[14] As applied, a direct-to-consumer period tracker app provided by a technology company, for example, would fall under the CMIA’s data privacy protections, but not under HIPAA. Regardless, gaps remain, as the CMIA does not protect against a Texas prosecutor subpoenaing information from the direct-to-consumer app. Conversely, Connecticut’s new reproductive health privacy law[15] does prevent a Connecticut covered entity from disclosing reproductive health information based on a subpoena, but Connecticut’s law does not apply to non-covered entities, such as a period tracker app. Therefore, even the U.S.’s most protective state privacy laws do not fill in all of the privacy gaps.

Alongside OCR’s guidance, the Federal Trade Commission (FTC) published a blog post warning companies with access to confidential consumer information to consider FTC’s enforcement powers under Section 5 of the FTC Act, as well as the Safeguards Rule, the Health Breach Notification Rule, and the Children’s Online Privacy Protection Rule.[16] Consistent with OCR’s guidance, the FTC’s blog post reiterates the Biden Administration’s goal of protecting reproductive health data post-Dobbs, but does not go so far as to create new privacy protections relative to current law.

B.  Despite the Biden Administration’s Guidance, Questions Remain Regarding the Future of Reproductive Health Privacy Protections Post-Dobbs

Through E.O. 14076, Secretary Becerra’s press conference, OCR’s guidance, and the FTC’s blog, the Biden Administration is signaling that it intends to use the full force of its authorities – including those vested by HIPAA – to protect patient privacy in the wake of Dobbs.

However, it remains unclear how this messaging will translate to affirmative executive actions, and how successful such executive actions would be. How far is the executive branch willing to push reproductive rights? Would more aggressive executive actions be upheld by a Supreme Court that just struck down decades of precedent permitting access to abortion? Will the Biden Administration’s executive actions persist if the administration changes in the next Presidential election?

Attorneys at Epstein Becker & Green are well-positioned to assist covered entities, business associates, and other companies holding sensitive reproductive health data in understanding how to navigate HIPAA’s exemptions and their interactions with emerging guidance, regulations, and statutes at both the state and Federal levels.

Ada Peters, a 2022 Summer Associate (not admitted to the practice of law) in the firm’s Washington, DC office and Jack Ferdman, a 2022 Summer Associate (not admitted to the practice of law) in the firm’s Boston office, contributed to the preparation of this post. 



[1] 87 Fed. Reg. 42053 (Jul. 8, 2022), https://bit.ly/3b4N4rp.

[2] Id.

[3] HHS, Remarks by Secretary Xavier Becerra at the Press Conference in Response to President Biden’s Directive following Overturning of Roe v. Wade (June 28, 2022), https://bit.ly/3zzGYsf.

[4] HHS, Guidance to Protect Patient Privacy in Wake of Supreme Court Decision on Roe (June 29, 2022),  https://bit.ly/3PE2rWK.

[5] 45 CFR 164.512(a)(1).

[6] 45 CFR 164.512(f)(1).

[7] 45 CFR 164.512(j).

[8] Id.

[9] See Texas S.B. 8; e.g., Fed. R. Civ. P. 37 (outlining available sanctions associated with the failure to make disclosures or to cooperate in discovery in Federal courts), https://bit.ly/3BjX4I2.

[10] EBG Health Law Advisor, The Pendulum Swings Both Ways: State Responses to Protect Reproductive Health Data, Post-Roe (June 17, 2022), https://bit.ly/3oPDegl.

[11] A 2019 Kaiser Family Foundation survey concluded that almost one third of female respondents used a smartphone app to monitor their menstrual cycles and other reproductive health data. Kaiser Family Foundation, Health Apps and Information Survey (Sept. 2019), https://bit.ly/3PC9Gyt.

[12] HHS, Protecting the Privacy and Security of Your Health Information When Using Your Personal Cell Phone1 or Tablet (last visited Jul. 26, 2022), https://bit.ly/3S2MNWs.

[13] Id.

[14] Cal. Civ. Code § 56.10, Effective Jan. 1, 2022, https://bit.ly/3J5iDxM.

[15] 2022 Conn. Legis. Serv. P.A. 22-19 § 2 (S.B. 5414), Effective July 1, 2022, https://bit.ly/3zwn95c.

[16] FTC, Location, Health, and Other Sensitive Information: FTC Committed To Fully Enforcing the Law Against Illegal Use and Sharing of Highly Sensitive Data (July 11, 2022), https://bit.ly/3BjrzNV.

©2022 Epstein Becker & Green, P.C. All rights reserved.

Governor Rolls Back California COVID-19 Executive Orders & Cal/OSHA Releases Draft Permanent COVID-19 Standard

On June 17, 2022, Governor Newsom issued an executive order terminating certain provisions of prior executive orders related to Cal/OSHA’s COVID-19 Emergency Temporary Standards (ETS). Some of the terminated orders were no longer necessary due to changes in the ETS. For example, previously the Governor had issued an executive order stating exclusion periods could not be longer than California Department of Public Health (CDPH) guidelines or local ordinances. However, since the ETS now defers to CDPH guidance on isolation and quarantine, the Governor has rescinded his prior executive order on this issue. Moreover, Cal/OSHA has issued guidance for employers on COVID-19 Isolation and Quarantine that aligns with CDPH requirements.

The current version of the ETS remains in effect until the end of 2022. However, Cal/OSHA will not be done with COVID-19 regulations in 2023: the agency is currently working on a permanent COVID-19 Standard, and a draft of the proposed regulation was recently released.

The draft regulation carries over many of the employer obligations from the current ETS. The following are some of the proposed requirements:

  • Employers would be required to maintain COVID-19 procedures, either included in their Injury and Illness Prevention Program (IIPP) or in a separate document.
  • Employers would continue to have exclusion and prevention requirements for positive employees and close contacts.
  • Employers would continue to be required to provide testing to employees who have a close contact in the workplace.
  • Employers would continue to have notice requirements for COVID-19 exposure.
  • Employers would continue to have to provide face coverings to employees.
  • Employers would continue to have reporting and recordkeeping requirements for COVID-19 cases and outbreaks in the workplace.

Currently, no public hearing has been set for the proposed permanent COVID-19 Standard, so it is uncertain how soon the regulations may be implemented.

Jackson Lewis P.C. © 2022

DOL Publishes Final Rule Implementing President Biden’s $15 Federal Contractor Minimum Wage Executive Order 14026

The Department of Labor (DOL) has published its Final Rule implementing President Biden’s April 27, 2021, Executive Order 14026 raising the minimum wage from $10.95 an hour to $15 an hour (with increases to be published annually). The new wage rate will take effect January 30, 2022, though as discussed below, the rate increases will not be applied to contracts automatically on that date.

The Final Rule is substantially similar to the DOL’s Notice of Proposed Rulemaking (NPRM) issued in July 2021 and is more expansive in coverage than the current federal contractor minimum wage requirements in effect under former President Obama’s Executive Order 13658.

$15 Wage Rate Does Not Apply to All Federal Contractors, All Federal Contracts, or All Workers

Covered Contracts

The $15 wage rate will apply to workers on four specific types of federal contracts that are performed in the U.S. (including the District of Columbia, Puerto Rico, and certain U.S. territories):

  • Procurement contracts for construction covered by the Davis-Bacon Act (DBA), but not the Davis-Bacon Related Acts
  • Service Contract Act (SCA) covered contracts
  • Concessions contracts – meaning a contract under which the federal government grants a right to use federal property, including land or facilities, for furnishing services. The term “concessions contract” includes, but is not limited to, a contract the principal purpose of which is to furnish food, lodging, automobile fuel, souvenirs, newspaper stands, or recreational equipment, regardless of whether the services are of direct benefit to the government, its personnel, or the general public
  • Contracts related to federal property and the offering of services to the general public, federal employees, and their dependents

The Executive Order does not apply to contracts or other funding instruments, including:

  • Contracts for the manufacturing or furnishing of materials, supplies, articles, or equipment to the federal government
  • Grants
  • Contracts or agreements with Indian Tribes under the Indian Self-Determination and Education Assistance Act
  • Contracts excluded from coverage under the SCA or DBA and specifically excluded in the implementing regulations and
  • Other contracts specifically excluded (See NPRM Section 23.40)

Effective Date; Definition of “New” Contracts Expanded

The Final Rule specifies that the wage requirement will apply to new contracts and contract solicitations as of January 30, 2022. Despite the “new contract” limitation, the regulations, consistent with the language of the Biden Executive Order, strongly encourage federal agencies to require the $15 wage for all existing contracts and solicitations issued between the date of the Executive Order and the effective date of January 30, 2022.

Similarly, agencies are “strongly encouraged” to require the new wage where they have issued a solicitation before the effective date and entered into a new contract resulting from the solicitation within 60 days of such effective date.

Pursuant to the Final Rule, the new minimum wage will apply to new contracts; new contract-like instruments; new solicitations; extensions or renewals of existing contracts or contract-like instruments; and exercises of options on existing contracts or contract-like instruments on or after January 30, 2022.

Geographic Limitations Expanded

The Final Rule applies coverage to workers outside the 50 states and expands the definition of “United States” to include the 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, Outer Continental Shelf lands as defined in the Outer Continental Shelf Lands Act, American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, Wake Island, and Johnston Island.

Workers Performing Work “On or In Connection With” a Covered Contract

Only workers who are non-exempt under the Fair Labor Standards Act and performing work on or in connection with a covered contract must be paid $15 per hour. The wage requirement applies only to hours worked on or in connection with a covered contract.

A worker performs “on” a contract if the worker directly performs the specific services called for by the contract. A worker performs “in connection with” a contract if the worker’s work activities are necessary to the performance of a contract but are not the specific services called for by the contract.

The Final Rule includes a “less-than-20% exception” for those workers who only perform work “in connection with” a covered contract, but do not perform any direct work on the contract. For workers who spend less than 20% of their hours in a workweek working indirectly in connection with a covered contract, the contractor need not pay the $15 wage for any hours for that workweek.
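As a rough arithmetic illustration only (not legal guidance), the workweek test described above can be sketched as follows. The function name and inputs are hypothetical and assume hours can be cleanly classified as direct ("on") or indirect ("in connection with") work:

```python
def covered_wage_applies(direct_hours: float, indirect_hours: float,
                         total_hours: float) -> bool:
    """Hypothetical sketch of the less-than-20% exception.

    Any direct work "on" a covered contract triggers the contractor
    minimum wage for covered hours.  A worker performing only indirect
    work "in connection with" the contract falls under the exception
    for the workweek when that work is less than 20% of total hours.
    """
    if direct_hours > 0:
        return True  # direct work on the contract always triggers the wage
    return indirect_hours / total_hours >= 0.20


# Example: 7 indirect hours out of a 40-hour week is 17.5%, so the
# exception applies; 10 out of 40 is 25%, so the wage requirement does.
print(covered_wage_applies(0, 7, 40))   # False – exception applies
print(covered_wage_applies(0, 10, 40))  # True – 25% of the workweek
```

Note that even when the threshold is met, the wage obligation attaches only to hours worked on or in connection with the covered contract, not to all hours in the workweek.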

Tipped Employees

Under the Final Rule, DOL is phasing out lower wages and tip credits for tipped employees on covered contracts. Employers must pay tipped employees $10.50 per hour in 2022 and increase those wages incrementally, under a proposed formula in the NPRM. Beginning in 2024, tipped employees must receive the full federal contractor wage rate.

$15 Wage Contract Clause Requirements, Enforcement Obligations

The Final Rule provides that a Minimum Wage contract clause will appear in covered prime contracts, except that procurement contracts subject to the Federal Acquisition Regulation (FAR) will include an applicable FAR Clause (to be issued by the Federal Acquisition Regulation Council) providing notice of the wage requirement.

In addition, covered prime contractors and subcontractors must include the Contract Clause in covered subcontracts and, as will be in the applicable FAR Clause, procurement prime contractors and subcontractors will be required to include the FAR clause in covered subcontracts.

In addition, the Final Rule provides that contractors and subcontractors:

“… shall require, as a condition of payment, that the subcontractor include the minimum wage contract clause in any lower-tier subcontracts … [and] shall be responsible for the compliance by any subcontractor or lower-tier subcontractor with the Executive Order minimum wage requirements, whether or not the contract clause was included in the subcontract.”

The DOL will investigate complaints and enforce the requirements but under the Final Rule, contracting agencies may also enforce the minimum wage requirements and take actions including contract termination, suspension and debarment for violations.

Preparation for the $15 wage

To prepare, contractors and subcontractors of covered contracts should consider taking the following steps:

  • Review existing multi-year contracts with options or extensions that may be exercised on or after January 30, 2022, to plan for wage increases at the exercise of the option or extension; also review any contract modifications to see if an agency is including the requirement earlier than required, as is allowed under the Final Rule
  • Identify job titles that typically perform work directly on covered contracts and those that perform indirect work above 20% in a workweek
  • Plan for wage increases for covered workers who are not already making $15 per hour
  • Determine impact on existing collective bargaining agreements particularly on SCA-covered contracts
  • Prepare for submission of price/equitable adjustments based on wage increases if allowed under the contract terms

Article By Leslie A. Stout-Tabackman of Jackson Lewis P.C.

For more labor and employment legal news, read more at the National Law Review.

Jackson Lewis P.C. © 2021