The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to
jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect
me, and I’m afraid that’s something I cannot allow to
happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMS”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But the appeal and potential value of AI agents that can independently execute complex tasks are obviously compelling. Building effective AI governance programs for these systems will require rethinking current approaches to risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may be continuously updated with and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, as they work towards their different goals. Because the future conditions in which an AI agent will operate are unknown and have nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.

Lack of goal alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of racing and instead repeatedly crashing while hitting point targets.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions, based at least in part on real-time data that changes from moment to moment. This may make their inputs effectively ephemeral, and their decision paths more difficult to trace and document.

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated by the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of its decision-making process. In some cases, AI systems have reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal, but were subsequently confronted with information that developers planned to replace them with a new agent with conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own models to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, those agents may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also enable malicious actors to create new hacking tools and malware to carry out attacks. Recent reports indicate that some LLMs can already identify and exploit system vulnerabilities, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19

Privacy compliance is also a concern. Agentic AI may find creative ways to use or combine personal information in pursuit of its goals. AI agents may find troves of personal data online that may somehow be relevant to their pursuits, and then find creative ways to use, and possibly share, that data without observing proper privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as discussed above, an AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions with real-time data and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks are clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at a scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include The Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al, Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al, Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google DeepMind (2017).

14 See Shavit et al, supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al, Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al, Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al, supra n.12, at 10.

20 Id. at 11.

AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

Following laws enacted in jurisdictions such as Colorado, New York City, and Tennessee, and the state’s own Artificial Intelligence Video Interview Act, on August 9, 2024, Illinois’ Governor signed House Bill (HB) 3773, also known as the “Limit Predictive Analytics Use” bill. The bill amends the Illinois Human Rights Act (Act) by adding certain uses of artificial intelligence (AI), including generative AI, to the long list of actions by covered employers that could constitute civil rights violations.

The amendments made by HB3773 take effect January 1, 2026, and add two new definitions to the law.

“Artificial intelligence” – which according to the amendments means:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of AI includes “generative AI,” which has its own definition:

an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following: (1) textual outputs, such as short answers, essays, poetry, or longer compositions or answers; (2) image outputs, such as fine art, photographs, conceptual art, diagrams, and other images; (3) multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and (4) other content that would be otherwise produced by human means.

The plethora of AI tools available for use in the workplace continues unabated as HR professionals and managers vie to adopt effective and efficient solutions for finding the best candidates, assessing their performance, and otherwise improving decision making concerning human capital. In addition to understanding whether an organization is covered by a regulation of AI, such as HB3773, it also is important to determine whether the technology being deployed falls within the law’s scope. Assuming the tool or application is not being developed in-house, this analysis will require, among other things, working closely with the third-party vendor providing the tool or application to understand its capabilities and risks.

According to the amendments, covered employers can violate the Act in two ways. First, an employer’s use of AI with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment may constitute a violation if it has the effect of subjecting employees to discrimination on the basis of classes protected under the Act. The same may be true for employers that use zip codes as a proxy for protected classes under the Act.

Second, a covered employer that fails to provide notice to an employee that the employer is using AI for the purposes described above may be found to have violated the Act.

Unlike the Colorado or New York City laws, the amendments to the Act do not require an impact assessment or bias audit. They also do not provide any specifics concerning the notice requirement. However, the amendments require the Illinois Department of Human Rights (IDHR) to adopt regulations necessary for implementation and enforcement. These regulations will include rules concerning the notice, such as the time period and means for providing same.

We are sure to see more regulation in this space. While it is expected that some common threads will exist among the various rules and regulations concerning AI and generative AI, organizations leveraging these technologies will need to be aware of the differences and assess what additional compliance steps may be needed.

International Trade, Enforcement & Compliance Recent Developments Update (January 17, 2024)

One of the most consistent messages coming from the U.S. government is that multinational companies need to take control of their supply chains. Forced labor, human trafficking, supply chain transparency, OFAC sanctions, even conflict minerals — all are areas in which the best defense against potential violations is strong compliance and due diligence to ensure that companies properly manage their supply chains, right down to the last supplier. Today’s mix of enforcement actions and guidance from the U.S. government underscores the importance of doing so.

EXPORT CONTROLS AND HUMAN RIGHTS

The Department of Commerce has stated that it has the authority to put companies on the Entity List (requiring special licensing and restrictions) solely for human rights violations. Does your company conduct full due diligence on its suppliers and sub-suppliers to ensure that they are operating in accordance with U.S. forced labor and human trafficking laws?

FORCED LABOR/UFLPA

The Department of Homeland Security continues to add Chinese and other companies to the Uyghur Forced Labor Prevention Act (UFLPA) Entity List. Does your organization specifically screen against the UFLPA Entity List, as well as have in place UFLPA compliance and due diligence measures?

FORCED LABOR/UFLPA

The U.S. government has issued a pointed six-agency set of compliance guidelines regarding “the Risks and Considerations for Businesses and Individuals with Exposure to Entities Engaged in Forced Labor and other Human Rights Abuses linked to Xinjiang Uyghur Autonomous Region.” Does your organization maintain a compliance policy, vendor code of conduct, supply chain transparency and due diligence procedures, and other measures designed to ensure your supply chain is free of forced labor, human trafficking, or goods sourced from forced labor in the Xinjiang Uyghur Autonomous Region?

CUSTOMS PENALTY FOR ERRONEOUS USE OF FIRST SALE RULE

Due to the imposition of special Section 301 tariffs on most goods from China, many companies have begun to use the first sale rule, which allows the reporting of a lower value where there is a bona fide sale to a middleman. Improper application of the rule, however, can be the basis for substantial penalties, as an apparel company that entered into a $1.3 million settlement with the DOJ found out. If your company uses the first sale rule, do you regularly review pricing and relevant circumstances to ensure you are meeting all the requirements for all entries?

EXPORT CONTROLS

Pledging “a new era of trilateral partnership,” the U.S., Japanese, and South Korean governments have announced expanded collaboration to fight illegal exports of dual-use products, including high-tech products that might be shipped to China in violation of U.S. export controls. Has your organization performed a recent classification review to confirm it is aware of any restrictions that might apply to the export of any of its products to sensitive countries, governments, or users?

NYS Sexual Harassment Hotline Goes Live

Effective July 14, 2022 (pursuant to legislation amending the New York State Human Rights Law that was signed by New York State Governor Kathy Hochul in March 2022), New York established a telephone hotline that employees can use to report incidents of sexual harassment to the New York State Division of Human Rights.   The hotline number is 800-HARASS-3 ((800) 427-2773) and will be staffed, on a pro bono basis, by NYS attorneys who have expertise in employment law and sexual harassment issues.  The hotline can be called Monday through Friday, 9:00 a.m. to 5:00 p.m.

Because, under the law, information about the hotline must be contained in workplace policies and postings about sexual harassment, employers need to revise their anti-harassment policies promptly to include this information.

© 2022 Vedder Price

U.S. House and Senate Reach Agreement on Uyghur Forced Labor Prevention Act

On December 14, 2021, lawmakers in the House and Senate announced that they had reached an agreement on compromise language for a bill known as the Uyghur Forced Labor Prevention Act or “UFLPA.”  Different versions of this measure passed the House and the Senate earlier this year, but lawmakers and Congressional staff have been working to reconcile the parallel proposals. The compromise language paves the way for Congress to pass the bill and send it to President Biden’s desk as soon as this week.

The bill would establish a rebuttable presumption that all goods originating from China’s Xinjiang region violate existing US law prohibiting the importation of goods made with forced labor. The rebuttable presumption would go into effect 180 days after enactment.  The compromise bill would also require federal officials to solicit public comments and hold a public hearing to aid in developing a strategy for the enforcement of the import ban vis-à-vis goods alleged to have been made through forced labor in China.

This rebuttable presumption will present significant challenges to businesses with supply chains that might touch the Xinjiang region. Many businesses do not have full visibility into their supply chains and will need to act quickly to map their suppliers and respond to identified risks. Importers must present detailed documentation in order to release any shipments that they think were improperly detained, a costly and time-consuming endeavor. Notably, the public comment and hearing processes will guide the government’s enforcement strategy, providing business stakeholders an opportunity to contribute to an enforcement process that could have implications for implementation of the import ban more broadly.

China’s Xinjiang region is a part of several critical supply chains, chief among them the global cotton and apparel trade, as well as solar module production. According to the Peterson Institute:

Xinjiang accounts for nearly 20 percent of global cotton production, with annual production greater than that of the entire United States. Its position in refined polysilicon—the material from which solar panels are built—is even more dominant, accounting for nearly half of global production. Virtually all silicon-based solar panels are likely to contain some Xinjiang-sourced silicon, according to Jenny Chase, head of solar analysis at Bloomberg New Energy Finance. If signed into law, the bill will send apparel producers and the US solar industry scrambling to find alternative sources of supply and prices are bound to increase.

Article By Ludmilla L. Kasulke and Rory Murphy of Squire Patton Boggs (US) LLP


© Copyright 2021 Squire Patton Boggs (US) LLP

Thai Army Whistleblower Faces Up to Seven Years of Jail Time For Fleeing Retaliation

In February of this year, the Thai Army launched a new initiative to combat corruption and abuse within its ranks: a 24-hour hotline that reports directly to the Army Chief, General Apirat Kongsompong. This initiative was created in the wake of a shocking incident in which a soldier killed 29 people after a dispute with his commanding officer. The new hotline, while not anonymous, was set up to provide Army whistleblowers with confidentiality and to work in conjunction with the National Anti-Corruption Commission, to which complaints would be transferred if outside the Army’s jurisdiction.

In rolling out this new program, General Nattapol was quoted as saying: “[T]he Army is doing our best…This is not a public stunt.” However, in light of the treatment of one of the first major complaints that was submitted through this channel, this statement could not be further from the truth.

As reported by Human Rights Watch, Sgt. Narongchai Intharakawi filed several complaints with the new hotline just two months after it was created, alleging fraud involving staff allowances at the Army Ordnance Materiel Rebuild Center. However, no action was taken on his complaints. Then, despite the promised confidentiality of the hotline, Sgt. Narongchai Intharakawi began receiving death threats and was informed that he would be facing a disciplinary inquiry for “undermining unity within the army and damaging his unit’s reputation.” This inquiry was nothing but a sham, intended to intimidate Sgt. Narongchai Intharakawi. In fact, a leaked video of the inquiry shows Sgt. Narongchai Intharakawi’s superior directly threatening him for reporting, including by stating: “You may be able to get away this time, but there is no next time for you.”

Because, after all of this, Sgt. Narongchai Intharakawi reasonably feared for his personal safety, he fled his post and publicized his experience, including by making a report to the Thai Parliament’s Committee on Legal Affairs, Justice, and Human Rights.

Instead of ceasing retaliation due to the new publicity around Sgt. Narongchai Intharakawi’s case, the Army has doubled down: They have requested that a military court issue a warrant for his arrest for delinquency in his duties. Under this charge, Sgt. Narongchai Intharakawi could face up to seven years in prison as well as a dishonorable discharge.

This abhorrent treatment of a whistleblower will make the Army’s new system completely ineffectual and nothing more than a symbolic piece of propaganda, discouraging any future whistleblowers from coming forward for fear they will be treated the same way. In order to make right their grievous actions, the Thai Army must abandon all charges against Sgt. Narongchai Intharakawi, issue a formal apology for the breach of confidentiality, and discipline those accused of participating in the retaliation.

Sgt. Narongchai Intharakawi is a hero for stepping out and trying to report corruption under a new, untested system and should be treated as such both in Thailand and globally.


Copyright Kohn, Kohn & Colapinto, LLP 2020. All Rights Reserved.

New York State Legislature Enacts Sweeping Changes to Combat Sexual Harassment

On June 19th, the New York State Senate and Assembly voted to pass omnibus legislation greatly strengthening protections against sexual harassment. While the bill, SB 6577, is still waiting for the Governor’s signature, Governor Cuomo supported the legislation and plans to sign the bill when it is sent to his desk. The legislation is the product of two legislative hearings that took place early this year, inspired by a group of former legislative staffers who have said they were victims of harassment while working in Albany, NY. The bill includes several provisions directly affecting private employers. These provisions include:

  1. The New York State Human Rights Law (“NYSHRL”) will expand the definition of an “employer” to include all employers in the State, including the State and its political subdivisions, regardless of size. Additionally, the definition of “private employer” will be amended to include any person, company, corporation, or labor organization except the State or any subdivision or agency thereof.
  2. Protections for certain groups in the workplace will also be expanded. While non-employees, such as independent contractors, vendors, and consultants, were previously protected from sexual harassment in an employer’s workplace, they will now be protected from all forms of unlawful discrimination where the employer knew or should have known the non-employee was subjected to unlawful discrimination in the workplace and failed to take immediate and appropriate corrective action. Similarly, harassment of domestic workers will now be prohibited with respect to all protected classes and will be governed under the harassment standard outlined in (3), below.
  3. The burden of proof for harassment claims will be greatly lowered. Any harassment based on a protected class, or for participating in protected activity, will be unlawful “regardless of whether such harassment would be considered severe or pervasive under precedent applied to harassment claims.” Unlawful harassment will include any activity that “subjects an individual to inferior terms, conditions or privileges of employment because of the individual’s membership in one or more of these protected categories.” Also, employees will no longer need to provide comparator evidence to prove a harassment claim and, presumably, a discrimination claim.
  4. The law will also alter the affirmative defenses available to employers accused of harassment. The Faragher/Ellerth defense, which allowed employers to avoid liability where the employee did not make a workplace complaint, will no longer be available for harassment claims under NYSHRL. However, an affirmative defense will be available where the harassment complained of “does not rise above the level of what a reasonable victim of discrimination with the same protected characteristic would consider petty slights or trivial inconveniences.”
  5. The statute of limitations to file a sexual harassment complaint with the New York State Division of Human Rights (the “Division”) will be lengthened from one year to three years.
  6. The amendments specify that they are to be construed liberally for remedial purposes, regardless of how federal laws have been construed.
  7. Courts and the Division will be required to award attorneys’ fees to all prevailing claimants or plaintiffs for employment discrimination claims and may award punitive damages in employment discrimination cases against private employers. Attorneys’ fees will only be available to a prevailing respondent or defendant if the claims brought against them were frivolous.
  8. Mandatory arbitration clauses will be prohibited for all discrimination claims.
  9. The use of non-disclosure agreements will be severely restricted. Non-disclosure agreements will be prohibited in any settlement for a claim of discrimination, unless: (1) it’s the complainant’s preference; (2) the agreement is provided in plain English and, if applicable, in the complainant’s primary language; (3) the complainant is given 21 days to consider the agreement; (4) if after 21 days, the complainant still prefers to enter into the agreement, such preference must be memorialized in an agreement signed by all parties; and (5) the complainant must be given seven days after execution of such agreement to revoke the agreement. The same rules apply to non-disclosure agreements within any judgment, stipulation, decree, or agreement of discontinuance. Any term or condition in a non-disclosure agreement is void if it prohibits the complainant from initiating or participating in an agency investigation or disclosing facts necessary to receive public benefits. Non-disclosure clauses in employment agreements are void as to future discrimination claims unless the clause notifies the employee that they are not prohibited from disclosure to law enforcement, the EEOC, the Division, any local commission on human rights, or their attorney. All terms and conditions in a non-disclosure agreement must be provided in writing to all parties, in plain English and, if applicable, the primary language of the complainant.
  10. Employers will be required to provide employees with their sexual harassment policies and sexual harassment training materials, in English and in each employee’s primary language, both at the time of hire and during each annual sexual harassment prevention training. Additionally, the Department of Labor and the Division will evaluate the impact of their model sexual harassment prevention policy and training materials every four years starting in 2022 and will update the model materials as needed.

The majority of these changes will take effect 60 days after the legislation is enacted, with the exception of the “employer” definition expansion, which will take effect after 180 days, and the extended statute of limitations, which will take effect after 1 year. In light of these changes, New York employers should alter their practices and policies to conform with these new requirements. We are monitoring this legislation and will provide updates as new information becomes available.

 

Copyright © 2019, Sheppard Mullin Richter & Hampton LLP.
*Myles Moran, a Summer Associate in the New York office, assisted with the drafting of this blog.

 

City of Birmingham Passes Nondiscrimination Ordinance, Creates Human Rights Commission

On September 26, 2017, the Birmingham City Council passed an ordinance that makes it a crime for any entity doing business in the city to discriminate based on race, color, national origin, sex, sexual orientation, gender identity, disability, or familial status. The ordinance passed unanimously and is the first of its kind in Alabama. Enforceable through the municipal courts, the local law applies to housing, public accommodations, public education, and employment. It carves out two exceptions: one for religious corporations and one for employers with bona fide affirmative action plans or seniority systems.

In a separate measure passed during the same meeting, the city created a local human rights commission to receive, investigate, and attempt conciliation of complaints. The commission has no enforcement authority. Citizens who believe they have suffered unlawful discrimination must appear before a magistrate and swear out a warrant or summons. The entity or individual will not receive a ticket but will face a trial before a municipal judge in the city’s courts. Ordinance violations are classified as misdemeanor offenses, and those found guilty of discrimination will face fines of up to $500. Alabama municipalities have no authority under state law to create civil remedies for ordinance violations; therefore, an employer would not be required to reinstate an employee or provide back pay if it were found guilty of violating the ordinance in municipal court.

Because the city’s courts, which are courts of criminal jurisdiction, operate much more quickly than federal civil courts do, one would expect that a guilty verdict under the Birmingham ordinance likely could be used as evidence of discrimination in a federal civil claim that is almost sure to follow.

Although the city’s mayor must sign the ordinance for it to become effective, the mayor has announced he will sign it into law immediately. The city also expects that the Alabama Legislature will challenge the ordinance.

This post was written by Samantha K. Smith of Ogletree, Deakins, Nash, Smoak & Stewart, P.C., All Rights Reserved. © 2017

Check out the ABA’s Business, Human Rights, and Sustainability Sourcebook

Now available from the ABA: Business, Human Rights, and Sustainability Sourcebook

The Business, Human Rights and Sustainability Sourcebook addresses the intersection of human rights law with the conduct of business, in light of sustainability mandates and the UN Guiding Principles on Business and Human Rights.

This sourcebook can be used as a standalone reference, or combined into a set as a companion volume with the Center for Human Rights’ International Human Rights Law Sourcebook and The International Humanitarian Law Sourcebook.

Available for purchase here.

The ABA Center for Human Rights Presents: International Due Process and Fair Trial Manual

Now available from the ABA: International Due Process and Fair Trial Manual.


Available as a book and an e-book, the Justice Defenders Manual is a concise and clear resource on human rights and how to defend them.

Available here.