The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why adopting one is a best practice and the key issues the policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things and bad at others. What other technology is described as having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it is possible to imagine a scenario “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information in a manner consistent with societal and governmental values and norms, however, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

An AI governance policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.
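
One way to make that commitment concrete is to maintain a structured disclosure record for each AI tool in use. The following is a minimal sketch only; the field names and example values are our assumptions, not a regulatory standard or any particular vendor’s format.

    # Hypothetical sketch of an 'AI disclosure' record supporting transparency
    # and explainability commitments. Field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        tool_name: str           # what AI technology is being used
        vendor: str              # who built it ("internal" if self-developed)
        purpose: str             # why the enterprise uses it
        data_sources: list[str]  # what data the tool ingests
        output_description: str  # how deliverables or outputs are generated
        human_review: bool       # whether a person reviews outputs before use

    example = AIDisclosure(
        tool_name="Support-ticket drafting assistant",
        vendor="Hypothetical Vendor, Inc.",
        purpose="Draft first-pass responses to routine customer tickets",
        data_sources=["public product documentation", "de-identified ticket history"],
        output_description="A language model drafts text that staff edit before sending",
        human_review=True,
    )

A record like this, kept current for every tool, gives stakeholders the three disclosures named above: what technology, what data, and how outputs are generated.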

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then communicating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update your existing data privacy and cyber risk policies so they also address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?
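
As a sketch of what such monitoring might look like in practice, the snippet below pairs a role-based authorization check with a simple audit log. The roles, tool names, and fields are hypothetical illustrations, not a prescribed design.

    # Hypothetical sketch: a registry mapping employee roles to the AI tools
    # they are authorized to use, plus a simple audit log of access attempts.
    from datetime import datetime, timezone

    AUTHORIZED_TOOLS = {
        "engineering": {"code-assistant", "internal-llm"},
        "marketing": {"copy-generator"},
        "legal": set(),  # e.g., no generative tools approved pending review
    }

    audit_log = []

    def check_and_log(role: str, tool: str, user: str) -> bool:
        """Return whether the role may use the tool; log the attempt either way."""
        allowed = tool in AUTHORIZED_TOOLS.get(role, set())
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "tool": tool,
            "allowed": allowed,
        })
        return allowed

    # An approved use is logged and permitted; an unapproved one is logged
    # and denied, leaving compliance reviewers a trail either way.
    assert check_and_log("marketing", "copy-generator", "alice") is True
    assert check_and_log("marketing", "internal-llm", "bob") is False

The design point is that authorization and record-keeping happen in the same step, so the question “who used what, and were they allowed to?” always has an answer.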

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

Another Lesson for Higher Education Institutions about the Importance of Cybersecurity Investment

Key Takeaway

A Massachusetts class action claim underscores that institutions of higher education will continue to be targets for cybercriminals – and class action plaintiffs know it.

Background

On January 4, 2023, in Jackson v. Suffolk University, No. 23-cv-10019, Jackson (Plaintiff) filed a proposed class action lawsuit in the U.S. District Court for the District of Massachusetts against her alma mater, Suffolk University (Suffolk), arising from a data breach affecting thousands of current and former Suffolk students.

The complaint alleges that an unauthorized party gained access to Suffolk’s computer network on or about July 9, 2022.  After learning of the unauthorized access, Suffolk engaged cybersecurity experts to assist in an investigation. Suffolk completed the investigation on November 14, 2022.  The investigation concluded that an unauthorized third party gained access to and/or exfiltrated files containing personally identifiable information (PII) for students who enrolled after 2002.

The complaint further alleges that the PII exposed in the data breach included students’ full names, Social Security numbers, driver’s license numbers, state identification numbers, financial account information, and Protected Health Information.  While Suffolk did not release the total number of students affected by the data breach, the complaint alleges that approximately 36,000 Massachusetts residents were affected.  No information was provided about affected out-of-state residents.

Colleges and Universities are Prime Targets for Cybercriminals

Unfortunately, Suffolk’s data breach is not an outlier.  Colleges and universities present a wealth of opportunities for cybercriminals because they house massive amounts of sensitive data, including employee and student personal and financial information, medical records, and confidential and proprietary data.  Given that stolen data can be sold through open and anonymous forums on the Dark Web, colleges and universities will remain prime targets for cybercriminals.

Recognizing this, the FBI issued a warning for higher education institutions in March 2021, informing them that cybercriminals have been targeting institutions of higher education with ransomware attacks.  In May 2022, the FBI issued a second alert, warning that cyber bad actors continue to conduct attacks against colleges and universities.

Suffolk Allegedly Breached Data Protection Duty

In the complaint, Plaintiff alleges that Suffolk did not follow industry and government guidelines to protect student PII.  In particular, Plaintiff alleges that Suffolk’s failure to protect student PII is prohibited by the Federal Trade Commission Act, 15 U.S.C.A. § 45, and that Suffolk failed to comply with the Financial Privacy Rule of the Gramm-Leach-Bliley Act (GLBA), 15 U.S.C.A. § 6801.  Further, the suit alleges that Suffolk violated the Massachusetts Right to Privacy Law, Mass. Gen. Laws Ann. ch. 214, § 1B, as well as its common law duties.

How Much Cybersecurity is Enough?

To mitigate cyber risk, colleges and universities must not only follow applicable government guidelines but also consider following industry best practices to protect student PII.

In particular, GLBA requires a covered organization to designate a qualified individual to oversee its information security program and conduct risk assessments that continually assess internal and external risks to the security, confidentiality and integrity of personal information.  After the risk assessment, the organization must address the identified risks and document the specific safeguards intended to address those risks.  See 16 CFR § 314.4.  
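
A hypothetical illustration of that documentation step might pair each identified risk with the safeguard that addresses it; the entries and field names below are invented examples, not language GLBA prescribes.

    # Hypothetical risk-assessment record of the kind 16 CFR 314.4 contemplates:
    # identify each risk, then document the safeguard that addresses it.
    risk_register = [
        {
            "risk": "Phishing-based credential theft against staff accounts",
            "likelihood": "high",
            "impact": "Exposure of student PII",
            "safeguard": "Multi-factor authentication on all accounts with PII access",
            "owner": "Qualified Individual (e.g., CISO)",
            "review_date": "2023-12-01",
        },
        {
            "risk": "Unpatched public-facing servers",
            "likelihood": "medium",
            "impact": "Network intrusion and data exfiltration",
            "safeguard": "Monthly patch cycle with emergency out-of-band patching",
            "owner": "IT Operations",
            "review_date": "2023-12-01",
        },
    ]

    # Print each documented safeguard with its scheduled re-assessment date,
    # since the regulation expects assessment to be continual, not one-time.
    for entry in risk_register:
        print(f"{entry['risk']} -> {entry['safeguard']} (review by {entry['review_date']})")
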

Suffolk, as well as other colleges and universities, may also want to look to Massachusetts law for guidance about how to further invest in their cybersecurity programs.  Massachusetts was an early leader among U.S. states when, in 2007, it enacted the “Regulations to safeguard personal information of commonwealth residents” (Mass. Gen. Laws ch. 93H § 2) (Data Security Law).  The Data Security Law – still among the most prescriptive general data security state laws – sets forth a list of minimum requirements that, while not specific to colleges and universities, serves as a good cybersecurity checklist for all organizations (a simple tracking sketch follows the list):

  1. Designation of one or more employees responsible for the written information security program (WISP).
  2. Assessments of risks to the security, confidentiality and/or integrity of organizational Information and the effectiveness of the current safeguards for limiting those risks, including ongoing employee and independent contractor training, compliance with the WISP and tools for detecting and preventing security system failures.
  3. Employee security policies relating to protection of organizational Information outside of business premises.
  4. Disciplinary measures for violations of the WISP and related policies.
  5. Access control measures that prevent terminated employees from accessing organizational Information.
  6. Management of service providers that access organizational Information as part of providing services directly to the organization, including retaining service providers capable of protecting organizational Information consistent with the Data Security Regulations and other applicable laws and requiring service providers by contract to implement and maintain appropriate measures to protect organizational Information.
  7. Physical access restrictions for records containing organizational Information and storage of those records in locked facilities, storage areas or containers.
  8. Regular monitoring of the WISP to ensure that it is preventing unauthorized access to or use of organizational Information and upgrading the WISP as necessary to limit risks.
  9. Review of the WISP at least annually or more often if business practices that relate to the protection of organizational Information materially change.
  10. Documentation of responsive actions taken in connection with any “breach of security” and mandatory post-incident review of those actions to evaluate the need for changes to business practices relating to protection of organizational Information.
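
For illustration only, a compliance team might track the ten items above in a simple status register like the sketch below; the structure and sample entries are hypothetical, not part of the Massachusetts regulations.

    # Hypothetical tracker for the ten checklist items above. For each item,
    # record whether it is implemented and, if not, a documented rationale
    # (see the note on documenting decisions that follows).
    WISP_CHECKLIST = {
        1: "Designate employee(s) responsible for the WISP",
        2: "Assess risks and effectiveness of current safeguards",
        3: "Security policies for information taken off premises",
        4: "Disciplinary measures for WISP violations",
        5: "Block terminated employees' access",
        6: "Oversee and contractually bind service providers",
        7: "Physically restrict access to records",
        8: "Monitor and upgrade the WISP",
        9: "Review the WISP at least annually",
        10: "Document and review responses to any breach of security",
    }

    status = {n: {"implemented": False, "rationale": ""} for n in WISP_CHECKLIST}
    status[9] = {"implemented": True, "rationale": ""}
    status[3] = {"implemented": False,
                 "rationale": "All records are hosted remotely; none leave the premises."}

    # Items that are neither implemented nor supported by a documented
    # rationale are the compliance gaps needing attention first.
    gaps = [n for n, s in status.items() if not s["implemented"] and not s["rationale"]]
    print("Items needing action or documented rationale:", gaps)
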

An organization not implementing any of these controls should consider documenting the decision-making process as a defensive measure.  In implementing these requirements and recommendations, colleges and universities can best position themselves to thwart cybercriminals and plaintiffs alike.

© Copyright 2023 Squire Patton Boggs (US) LLP