President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

The EO and forthcoming regulations will impact the use of genomic data, biometric data, personal health care data, geolocation data, financial data and certain other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by countries of concern gaining access to US persons’ sensitive data – data that could be used to surveil, scam or blackmail US persons, to support counterintelligence efforts, or to be exploited by artificial intelligence (AI) or used to further develop AI. The EO, however, does not call for restrictive personal data localization; it aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with the protection of security, privacy and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules to address these risks and provides an opportunity for businesses and other stakeholders, including labor and human rights organizations, to provide critical input to agency officials as they draft these regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The regime is contemplated to include licensing and advisory opinion processes. DOJ expects companies to develop and implement compliance procedures in response to the EO and its subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action – action that could include civil and criminal penalties. Companies should consider acting today to evaluate risk, engage in the rulemaking process and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before, and that data is often obtained by data brokers and other third parties. Concerns have grown around foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing digital availability and use of sensitive personal information, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms – including data brokerage, vendor and employment contracts and investment agreements – to obtain and exploit American data. The EO does not immediately establish new rules or requirements for the protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations; the restrictions will not take effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation through large-scale transfers to countries of concern or certain related covered persons, and to issue regulations establishing greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

Also on February 28, DOJ issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons: data brokerage transactions and genomic data transactions
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. The security requirements for these restricted transactions would be developed by DHS’s Cybersecurity and Infrastructure Security Agency (CISA) and would focus on preventing access by countries of concern

The ANPRM also proposes general and specific licensing processes that would give DOJ considerable flexibility for certain categories of transactions and narrower exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State and the Department of Commerce. Companies and individuals contemplating data transactions would also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article.

Recent Healthcare-Related Artificial Intelligence Developments

AI is here to stay. The development and use of artificial intelligence (“AI”) in the healthcare landscape is growing rapidly, with no signs of slowing down.

From a governmental perspective, many federal agencies are embracing the possibilities of AI. The Centers for Disease Control and Prevention is exploring the ability of AI to estimate sentinel events and combat disease outbreaks, and the National Institutes of Health is using AI for priority research areas. The Centers for Medicare and Medicaid Services is also assessing whether algorithms used by plans and providers to identify high-risk patients and manage costs can introduce bias and restrictions. Additionally, as of December 2023, the U.S. Food & Drug Administration had cleared more than 690 AI-enabled devices for market use.

From a clinical perspective, payers and providers are integrating AI into daily operations and patient care. Hospitals and payers are using AI tools to assist in billing. Physicians are using AI to take notes and a wide range of providers are grappling with which AI tools to use and how to deploy AI in the clinical setting. With the application of AI in clinical settings, the standard of patient care is evolving and no entity wants to be left behind.

From an industry perspective, the legal and business spheres are transforming as a result of new national and international regulations focused on establishing the safe and effective use of AI, as well as commercial responses to those regulations. Three such regulations are top of mind: (i) President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; (ii) the U.S. Department of Health and Human Services’ (“HHS”) Final Rule on Health Data, Technology, and Interoperability; and (iii) the World Health Organization’s (“WHO”) Guidance for Large Multi-Modal Models of Generative AI. In response to the introduction of these regulations and the general advancement of AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to a shared goal of responsible AI use.

U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”). Though long-awaited, the Executive Order was a major development and is one of the most ambitious attempts to regulate this burgeoning technology. The Executive Order has eight guiding principles and priorities, which include (i) Safety and Security; (ii) Innovation and Competition; (iii) Commitment to U.S. Workforce; (iv) Equity and Civil Rights; (v) Consumer Protection; (vi) Privacy; (vii) Government Use of AI; and (viii) Global Leadership.

Notably for healthcare stakeholders, the Executive Order directs the National Institute of Standards and Technology to establish guidelines and best practices for the development and use of AI, and directs HHS to develop an AI Task Force that will engineer policies and frameworks for the responsible deployment of AI and AI-enabled technology in healthcare. In addition to those directives, the Executive Order highlights the duality of AI, with the “promise” that it brings and the “peril” that it has the potential to cause. This duality is reflected in directives for HHS to establish an AI safety program and to prioritize the award of grants in support of AI development while ensuring standards of nondiscrimination are upheld.

U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule

In the wake of the Executive Order, the HHS Office of the National Coordinator for Health Information Technology finalized its rule to increase algorithm transparency, widely known as HT-1, on December 13, 2023. With respect to AI, the rule establishes transparency requirements for AI and other predictive algorithms that are part of certified health information technology. The rule also:

  • implements requirements to improve equity, innovation, and interoperability;
  • supports the access, exchange, and use of electronic health information;
  • addresses concerns around bias, data collection, and safety;
  • modifies the existing clinical decision support certification criteria and narrows the scope of impacted predictive decision support interventions; and
  • adopts requirements for certification of health IT through new Conditions and Maintenance of Certification requirements for developers.

Voluntary Commitments from Leading Healthcare Companies for Responsible AI Use

Immediately on the heels of the release of HT-1 came voluntary commitments from leading healthcare companies on responsible AI development and deployment. On December 14, 2023, the Biden Administration announced that 28 healthcare provider and payer organizations signed up to move toward the safe, secure, and trustworthy purchasing and use of AI technology. Specifically, the provider and payer organizations agreed to:

  • develop AI solutions to optimize healthcare delivery and payment;
  • work to ensure that the solutions are fair, appropriate, valid, effective, and safe (“F.A.V.E.S.”);
  • deploy trust mechanisms to inform users if content is largely AI-generated and not reviewed or edited by a human;
  • adhere to a risk management framework when utilizing AI; and
  • research, investigate, and develop AI swiftly but responsibly.

WHO Guidance for Large Multi-Modal Models of Generative AI

On January 18, 2024, the WHO released guidance for large multi-modal models (“LMMs”) of generative AI, which can simultaneously process and understand multiple types of data modalities such as text, images, audio, and video. The 98-page guidance offers over 40 recommendations for tech developers, providers, and governments on LMMs, and names five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses the liability issues that may arise out of the use of LMMs.

Closely related to the WHO guidance, the European Council’s agreement to move forward with a European Union AI Act (“Act”) was a significant milestone in AI regulation in the European Union. As previewed in December 2023, the Act will inform how AI is regulated across the European Union, and other nations will likely take note and follow suit.

Conclusion

There is no question that AI is here to stay. But how the healthcare industry will look when AI is more fully integrated still remains to be seen. The framework for regulating AI will continue to evolve as AI and the use of AI in healthcare settings changes. In the meantime, healthcare stakeholders considering or adopting AI solutions should stay abreast of developments in AI to ensure compliance with applicable laws and regulations.

Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. It is in pursuance of this responsibility that the DOC has convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will pursue the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to this email address.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including guidelines on red-team testing for AI and a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.

Executive Order Increases the Minimum Wage for Federal Contractors to $15

On April 27, 2021, President Biden signed Executive Order 14026, which increases the minimum wage for workers on or in connection with a federal government contract to $15.00 per hour as of January 30, 2022.  This Executive Order increases the minimum wage level set by President Obama’s 2014 Executive Order 13658, which has been set at $10.95 per hour since January 1, 2021.

The new minimum wage applies to most new federal contracts, contract-like instruments, solicitations, extensions or renewals of existing contracts or contract-like instruments, and exercises of options on existing contracts or contract-like instruments that are entered into or exercised on or after January 30, 2022.  However, the Executive Order “strongly encourage[s]” agencies to ensure, to the extent permitted by law, that the wages paid under existing contracts are consistent with the Executive Order’s requirements.  The Executive Order provides that compliance with the increased minimum wage will be a condition of payment on the government contract, raising the potential for False Claims Act liability if a government contractor accepts payment on a federal contract while failing to pay covered workers the required wage.  The Executive Order’s requirements must, in many circumstances, be included in subcontracts.

Although the Executive Order does not elaborate on which employees work “on or in connection” with a federal contract, it is likely that the Department of Labor’s forthcoming regulations implementing the Executive Order will follow the lead of its previous regulations implementing Executive Order 13658.  Under those regulations, workers perform services “on” a contract if they directly perform the services called for by the contract’s terms, and they perform services “in connection with” a contract if they perform work activities that, although not specifically called for by the contract, are necessary to the contract’s performance.

The Executive Order also addresses the cash portion of the minimum wage for covered workers who qualify as tipped employees.  The cash wage for those workers will increase to $10.50 per hour as of January 30, 2022.  The cash wage will then increase to 85% of the general minimum wage as of January 30, 2023, and to 100% of the general minimum wage as of January 30, 2024, at which point the tip credit will be eliminated.
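The phase-in schedule above can be sketched as a simple lookup. This is an illustrative calculation only: the function name is our own, and the $15.00 general wage used in the example is an assumption for demonstration, since the actual general minimum wage in 2023 and 2024 will reflect inflation adjustments set by the Department of Labor.

```python
from datetime import date

# Hypothetical helper (name and structure are our own) sketching the
# EO 14026 cash-wage phase-in for tipped workers. The caller supplies the
# general contractor minimum wage in effect on the given date, because the
# 2023 and 2024 figures are defined as percentages of that (adjusted) wage.
def tipped_cash_wage(on: date, general_wage: float) -> float:
    if on < date(2022, 1, 30):
        raise ValueError("EO 14026 tipped-wage schedule begins January 30, 2022")
    if on < date(2023, 1, 30):
        return 10.50                            # fixed cash wage for the first year
    if on < date(2024, 1, 30):
        return round(0.85 * general_wage, 2)    # 85% of the general minimum wage
    return general_wage                         # tip credit eliminated: 100%

# Illustration only: assumes a $15.00 general wage in all three periods.
print(tipped_cash_wage(date(2022, 6, 1), 15.00))  # 10.5
print(tipped_cash_wage(date(2023, 6, 1), 15.00))  # 12.75
print(tipped_cash_wage(date(2024, 6, 1), 15.00))  # 15.0
```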

The Department of Labor is required to issue regulations implementing the Executive Order by November 24, 2021.  Federal contractors and subcontractors should consider beginning preparations for the increased minimum wage now, in advance of the regulations, by identifying potentially covered workers whose wages may require adjustment.  Polsinelli will continue to update the contractor community when regulations are issued.

© Polsinelli PC, Polsinelli LLP in California


ARTICLE BY Jack Blum of Polsinelli PC

Executive Order Extends Workplace Anti-Discrimination Protections to LGBT Workers of Federal Contractors

Jackson Lewis Law Firm

Though it took longer than expected, President Barack Obama has signed an Executive Order extending protections against workplace discrimination to members of the lesbian, gay, bisexual, and transgender (“LGBT”) community. Signed July 21, 2014, the Executive Order prohibits discrimination by federal contractors on the basis of sexual orientation or gender identity, adding those characteristics to the list of protected categories. It does not contain any exemptions for religiously affiliated federal contractors, as some had hoped; religiously affiliated federal contractors still may favor individuals of a particular religion when making employment decisions.

The President directed the Secretary of Labor to prepare regulations within 90 days (by October 19, 2014) implementing the new requirements as they relate to federal contractors under Executive Order 11246, which requires covered government contractors and subcontractors to undertake affirmative action to ensure that equal employment opportunity is afforded in all aspects of their employment processes. Executive Order 11246 is enforced by the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP).

The Executive Order will apply to federal contracts entered into on or after the effective date of the forthcoming regulations. OFCCP likely will be charged with enforcement authority.

We recommend that employers impacted by this Executive Order review their equal employment opportunity and harassment policies for compliance. For example, employers who are government contractors should add both sexual orientation and gender identity as protected categories under these policies and put mechanisms in place so that discrimination against LGBT employees is not tolerated.

We will provide additional information and insights into the proposed regulations when they are available.
