White House Publishes Steps to Protect Workers from the Risks of AI

Last year, the White House weighed in on the use of artificial intelligence (AI) in business, issuing an executive order on the safe, secure, and trustworthy development and use of AI.

Since the executive order, several government entities, including the Department of Labor, have released guidance on the use of AI.

And now the White House has published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist workers and improve their job quality.
  • Supportive during transition – Employers should support workers during AI-related job transitions.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used only to support legitimate business aims.

Continuing Forward: Senate Leaders Release an AI Policy Roadmap

The US Senate’s Bipartisan AI Policy Roadmap is a highly anticipated document expected to shape the future of artificial intelligence (AI) in the United States over the next decade. This comprehensive guide, which complements the AI research, investigations, and hearings conducted by Senate committees during the 118th Congress, identifies areas of consensus that could help policymakers establish the ground rules for AI use and development across various sectors.

From intellectual property reforms and substantial funding for AI research to sector-specific rules and transparent model testing, the roadmap addresses a wide range of AI-related issues. Despite the roadmap’s long-awaited arrival, Sen. Chuck Schumer (D-NY), the Senate’s highest-ranking Democrat and a key architect of the high-level document, is expected to defer largely to Senate committees to continue drafting individual bills shaping the future of AI policy in the United States.

The Senate’s bipartisan roadmap is the culmination of a series of nine forums held last year by the Bipartisan Senate AI Working Group, during which the group gathered diverse perspectives and information on AI technology. Topics of the forums included:

  1. Inaugural Forum
  2. Supporting US Innovation in AI
  3. AI and the Workforce
  4. High Impact Uses of AI
  5. Elections and Democracy
  6. Privacy and Liability
  7. Transparency, Explainability, Intellectual Property, and Copyright
  8. Safeguarding
  9. National Security

The wide range of views and concerns expressed by over 150 experts, including developers, startups, hardware and software companies, civil rights groups, and academics, during these forums helped policymakers develop a thorough and inclusive document that reveals the areas of consensus and disagreement. As the 118th Congress continues, Sen. Schumer is expected to reach out to his counterparts in the US House of Representatives to identify common areas of interest. Those bipartisan and bicameral conversations will ultimately help Congress establish the foundational rules for AI use and development, potentially shaping not only the future of AI in the United States but also global AI policy.

The final text of this guiding document focuses on several high-level categories. Below, we highlight a handful of notable provisions:

Publicity Rights (Name, Image, and Likeness)

The roadmap encourages senators to consider whether there is a need for legislation that would protect against the unauthorized use of one’s name, image, likeness, and voice as it relates to AI. While state laws have traditionally recognized the right of individuals to control the commercial use of their so-called “publicity rights,” federal recognition of those rights would mark a major shift in intellectual property law and make it easier for musicians, celebrities, politicians, and other prominent public figures to prevent or discourage the unauthorized, AI-driven exploitation of their name, image, likeness, and voice.

Disclosure and Transparency Requirements

Noting that the “black box” nature of some AI systems can make it difficult to assess compliance with existing consumer protection and civil rights laws, the roadmap encourages lawmakers to ensure that regulators are able to access information directly relevant to enforcing those laws and, if necessary, to place appropriate transparency and “explainability” requirements on “high risk” uses of AI. The working group does not offer a definition of “high risk” use cases, but suggests that operators of systems implicating constitutional rights, public safety, or anti-discrimination laws could be required to disclose information about their training data and the factors that influence automated or algorithmic decision-making. The roadmap also encourages the development of best practices for when AI users should disclose that their products utilize AI, and for whether developers should be required to disclose information to the public about the data sets used to train their AI models.

The document also pushes senators to develop sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Increased Funding for AI Innovation

On the heels of the findings included in the National Security Commission on Artificial Intelligence’s (NSCAI) final report, the roadmap encourages Senate appropriators to provide at least $32 billion for AI research funding at federal agencies, including the US Department of Energy, the National Science Foundation, and the National Institute of Standards and Technology. This request for a substantial investment underscores the government’s commitment to advancing AI technology and seeks to position federal agencies as “AI ready.” The roadmap’s innovation agenda includes funding the CHIPS and Science Act, support for semiconductor research and development to create high-end microchips, modernizing the federal government’s information technology infrastructure, and developing in-house supercomputing and AI capacity in the US Department of Defense.

Investments in National Defense

Many members of Congress believe that creating a national framework for AI will also help the United States compete on the global stage with China. Senators who see this as the 21st-century space race believe investments in the defense and intelligence community’s AI capabilities are necessary to push back against China’s head start in AI development and deployment. The working group’s national security priorities include leveraging AI’s potential to build a digital armed services workforce, enhancing and accelerating the security clearance application process, preventing large language models from leaking intelligence or reconstructing classified information, and pushing back on perceived “censorship, repression, and surveillance” by Russia and China.

Addressing AI in Political Ads

Looking ahead to the 2024 election cycle, the roadmap’s authors are already paying attention to the threats posed by AI-generated election ads. The working group encourages digital content providers to watermark any political ads made with AI and to include disclaimers in any AI-generated election content. These guardrails align with the provisions of several bipartisan election-related AI bills that passed out of the Senate Rules Committee on the same day as the roadmap’s release.

Privacy and Legal Liability for AI Usage

The AI Working Group recommends the passage of a federal data privacy law to protect personal information, noting that the legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and the role of data brokers. Support for these principles is reflected in numerous state privacy laws enacted since 2018 and in bipartisan, bicameral draft legislation (the American Privacy Rights Act) supported by Rep. Cathy McMorris Rodgers (R-WA) and Sen. Maria Cantwell (D-WA).

As we await additional legislative activity later this year, it is clear that these guidelines will have far-reaching implications for the AI industry and society at large.

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things and bad at others. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it is possible to imagine scenarios “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information in a manner consistent with societal and governmental values and norms, however, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

  • How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?
  • Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update your existing data privacy and cyber risk policies to address AI risks as well.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on your industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require ongoing diligence: the technology is changing rapidly even as the regulatory landscape evolves week to week.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective while avoiding cyber risks and vulnerabilities, as well as societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.