Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech), is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after they left their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with the Character.AI character – whether for human-like conversations, such as to answer questions, write a story, translate or write code – based on Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would use extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone; it is personal and movable; it is a “good” rather than an idea; copies of Character.AI are uniform and not customized; an unlimited number of copies can be obtained; and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services-or-products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s use of highly toxic and sexual data sets to train itself, the industry-recognized risk that tactics designed to convince users a chatbot is human manipulate users’ emotions and exploit their vulnerability, and minors’ heightened susceptibility to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and intentionally allowing minors to use Character.AI.

The defective design claims rest on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” According to the Complaint, these representations constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress, i.e., the loss of her son.

  • Other Claims

The Plaintiff also asserts claims for negligence per se and unjust enrichment, as well as a survival action and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. Currently, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates “High-Risk Artificial Intelligence Systems” and is focused on preventing “algorithmic discrimination” against Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” That law was vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to take shape – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

How to Develop an Effective Cybersecurity Incident Response Plan for Businesses

Data breaches have become more frequent and costly than ever. In 2021, the average data breach cost companies more than $4 million. Threat actors are increasingly sophisticated, and the emergence of ransomware-as-a-service (RaaS) has allowed even unsophisticated, inexperienced parties to execute harmful, disruptive, costly attacks. In this atmosphere, what can businesses do to best prepare for a cybersecurity incident?

One fundamental aspect of preparation is to develop a cyber incident response plan (IRP). The National Institute of Standards and Technology (NIST) identified five basic cybersecurity functions to manage cybersecurity risk:

  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

In the NIST framework, anticipatory response planning is considered part of the “respond” function, indicating how integral proper planning is to an effective response. Indeed, NIST notes that “investments in planning and exercises support timely response and recovery actions, resulting in reduced impact to the delivery of services.”

But what makes an effective IRP? And what else goes into quality response planning?

A proper IRP requires several considerations. The primary elements include the following (a brief structured sketch follows the list):

  • Assigning accountability: identify an incident response team
  • Securing assistance: identify key external vendors including forensic, legal and insurance
  • Introducing predictability: standardize crucial response, remediation and recovery steps
  • Creating readiness: identify legal obligations and information to facilitate the company’s fulfillment of those obligations
  • Mandating experience: develop periodic training, testing and review requirements
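
For readers who want to operationalize these elements, below is a minimal, hypothetical sketch of an IRP skeleton captured as structured data so that gaps can be spotted before an incident occurs. The field names and entries are illustrative assumptions, not a prescribed format:

```python
# Minimal illustrative sketch: an IRP skeleton as structured data, so the
# plan's core elements can be reviewed, versioned, and checked for gaps.
# All field names and entries are assumptions for illustration only.
incident_response_plan = {
    "response_team": ["CISO", "Legal counsel", "Communications lead"],
    "external_vendors": {"forensics": "TBD", "legal": "TBD", "insurance": "TBD"},
    "standard_steps": ["Contain", "Remediate", "Recover"],
    "legal_obligations": ["Breach notification deadlines", "Contract notice terms"],
    "training": {"tabletop_exercise": "semiannual", "plan_review": "annual"},
}

def find_gaps(plan: dict) -> list[str]:
    """Flag any core IRP element that is empty or left as a placeholder."""
    return [key for key, value in plan.items()
            if not value or "TBD" in str(value)]

print(find_gaps(incident_response_plan))  # -> ['external_vendors']
```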

After developing an IRP, a business must ensure the plan remains current and effective through regular reviews: at least annually, and whenever the business undergoes a material change that could alter either the IRP’s operation or the cohesion of the incident response team leading those operations.

An effective IRP is one of several integrated tools that can strengthen your business’s data security prior to an attack, facilitate an effective response to any attack, speed your company’s recovery from an attack and help shield it from legal exposure in the event of follow-on litigation.

The Corporate Transparency Act Requires Reporting of Beneficial Owners

The Corporate Transparency Act (the “CTA”) became effective on January 1, 2024, requiring many corporations, limited liability companies, limited partnerships, and other entities to register with and report certain information to the Financial Crimes Enforcement Network (“FinCEN”) of the U.S. Department of Treasury (“Treasury”). The CTA marks a substantial increase in the reporting obligations for many U.S. companies, as well as for non-U.S. companies doing business in the United States.

IN SHORT:
Most corporate entities are now required to file a beneficial ownership information report (“BOI Report”) with FinCEN disclosing certain information about the entity and those persons who are “beneficial owners” or who have “substantial control.” BOI Reports for companies owned by trusts and estates may require significant analysis to determine beneficial ownership and substantial control.

The CTA imposes potential penalties on entities that fail to file BOI Reports with FinCEN by the prescribed deadline. For entities formed prior to January 1, 2024, a BOI Report must be filed by January 1, 2025. For entities formed on or after January 1, 2024, but prior to January 1, 2025, a BOI Report must be filed within 90 days of the entity’s formation. For entities formed on or after January 1, 2025, a BOI Report must be filed within 30 days of the entity’s formation.

Entities formed after January 1, 2024, must also report information regarding “company applicants” to FinCEN. If certain information within a BOI Report changes, entities are required to file a supplemental BOI Report within 30 days of such change.

While Winstead’s Wealth Preservation Practice Group will not be directly filing BOI Reports with FinCEN, our attorneys and staff will be available this year, by appointment, to answer questions regarding reporting requirements if scheduled by Friday, November 22, 2024. We strongly recommend that company owners begin analyzing what reporting obligations they may have under the CTA and schedule appointments with their professional advisors now to ensure availability.

BACKGROUND:
Congress passed the CTA in an effort to combat money laundering, fraud, and other illicit activities accomplished through anonymous shell companies. To achieve this objective, most entities operating in the United States will now be required to file BOI Reports with FinCEN.

The CTA applies to U.S. companies and non-U.S. companies registered to operate in the United States that fall within the definition of a “reporting company.” There are certain exceptions specifically enumerated in the CTA, which generally cover entities that are already subject to anti-money laundering requirements, entities registered with the Securities and Exchange Commission or other federal regulatory bodies, and entities that pose a low risk of the illicit activities targeted by the CTA.

REPORTING OBLIGATIONS:
Entity Information. Each reporting company is required to provide FinCEN with the following information:

  1. the legal name of the reporting company;
  2. the mailing address of the reporting company;
  3. the state of formation (or foreign country in which the entity was formed, if applicable) of the reporting company; and
  4. the employer identification number of the reporting company.

Beneficial Owner and Applicant Information. Absent an exemption, each reporting company is also required to provide FinCEN with the following information regarding each beneficial owner and each company applicant:

  1. full legal name;
  2. date of birth;
  3. current residential or business address; and
  4. unique identifying number from a U.S. passport or U.S. state identification (e.g., state-issued driver’s license), a foreign passport, or a FinCEN identifier (i.e., the unique number issued by FinCEN to an individual).

DEFINITIONS:
Reporting Company. A “reporting company” is defined as any corporation, limited liability company, or any other entity created by the filing of a document with a secretary of state or any similar office under the law of a State. Certain entities are exempt from these filing requirements, including, but not limited to:

  1. financial institutions and regulated investment entities;
  2. utility companies;
  3. entities that are described in Section 501(c) of the Internal Revenue Code;
  4. inactive, non-foreign owned entities with no assets; and
  5. sizeable operating companies that employ more than 20 full-time employees in the United States that have filed a United States federal income tax return in the previous year demonstrating more than $5,000,000 in gross receipts or sales.

A reporting company that is not exempt must register with and report all required information to FinCEN by the applicable deadline.

Beneficial Owner. A “beneficial owner” is defined as any individual who, directly or indirectly, (i) exercises substantial control over such reporting company or (ii) owns or controls at least 25% of the ownership interests of such reporting company.

Substantial Control. An individual exercises “substantial control” over a reporting company if the individual (i) serves as a senior officer of the reporting company, (ii) has authority over the appointment or removal of any senior officer or a majority of the board of directors (or the similar body governing such reporting company), or (iii) directs, determines, or has substantial influence over important decisions made by the reporting company, including by reason of such individual’s representation on the board (or other governing body of the reporting company) or control of a majority of the reporting company’s voting power.

Company Applicant. A “company applicant” is any individual who (i) files an application to form the reporting company under U.S. law or (ii) registers or files an application to register a reporting company formed under the laws of a foreign country to do business in the United States by filing a document with the secretary of state or similar office under U.S. law.

DEADLINES:
Entities Formed Before January 1, 2024. A reporting company that was formed prior to the effective date of the CTA (January 1, 2024) is required to register with FinCEN and file its initial BOI Report by January 1, 2025.

Entities Formed After January 1, 2024, but Before January 1, 2025. A reporting company that was formed after the effective date of the CTA (January 1, 2024), but before January 1, 2025, must register with FinCEN and file its initial BOI Report within 90 calendar days of formation.

Entities Formed After January 1, 2025. A reporting company formed after January 1, 2025, will be required to register with FinCEN and file its initial BOI Report within 30 calendar days of formation.
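
The deadlines described above reduce to a simple date computation. The sketch below is an illustrative summary of those rules only, not legal advice or an official FinCEN tool:

```python
# Illustrative sketch of the initial BOI Report deadlines described above.
# Not legal advice; FinCEN's rules and guidance control.
from datetime import date, timedelta

def initial_boi_deadline(formed: date) -> date:
    if formed < date(2024, 1, 1):       # formed before the CTA's effective date
        return date(2025, 1, 1)
    if formed < date(2025, 1, 1):       # formed during 2024
        return formed + timedelta(days=90)
    return formed + timedelta(days=30)  # formed in 2025 or later

print(initial_boi_deadline(date(2020, 6, 15)))  # 2025-01-01
print(initial_boi_deadline(date(2024, 3, 1)))   # 2024-05-30
print(initial_boi_deadline(date(2025, 2, 1)))   # 2025-03-03
```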

Supplemental BOI Reports. If any information included in a BOI Report changes, a reporting company must file a supplemental report with FinCEN within 30 days of such change. This includes minor changes, such as an address change or an updated driver’s license for a beneficial owner or someone who has substantial control over the reporting company.

PENALTIES FOR NON-COMPLIANCE:
The CTA and Treasury regulations impose potential civil and criminal liability on reporting companies and company applicants that fail to comply with the CTA’s reporting requirements. Civil penalties for reporting violations include a monetary fine of up to $500 per day that the violation continues unresolved, adjusted for inflation. Criminal penalties include a fine of up to $10,000 and/or two years in prison.

REPORTING REQUIREMENTS RELATED TO TRUSTS AND ESTATES:
When a trust or estate owns at least 25% of a reporting company or exercises substantial control over the reporting company, the BOI Report must generally include (i) the fiduciaries of the trust or estate (i.e., the trustee or executor), (ii) certain individual beneficiaries, and (iii) the settlor or creator of the trust. If the trust agreement gives other individuals certain rights and powers, however, such as a distribution advisor, trust protector, or trust committee member, the reporting company may also be required to disclose such individuals’ information in the BOI Report. Similarly, if a corporate trustee or executor is serving, the BOI Report must contain the names and information of the employees who actually administer the trust or estate on behalf of the corporation. Due to these nuances, it is often necessary to engage in additional analysis when a trust or estate is a beneficial owner of or has substantial control over a reporting company.

CONCLUDING REMARKS:
The CTA and its BOI Report filing requirement are still relatively new, and although FinCEN continues to publish additional guidance, many open questions remain. All companies formed or operating in the United States should carefully review whether they are required to file an initial BOI Report in accordance with the CTA, and take further steps to identify all individuals who must be included in such BOI Report.

Triggers That Require Reporting Companies to File Updated Beneficial Ownership Interest Reports

The Corporate Transparency Act (the “CTA”), which Congress enacted as part of the Anti-Money Laundering Act of 2020 within the annual National Defense Authorization Act, took effect on January 1, 2024. Every entity that meets the definition of a “reporting company” under the CTA and does not qualify for an exemption must file a beneficial ownership information report (a “BOI Report”) with the US Department of the Treasury’s Financial Crimes Enforcement Network (“FinCEN”). Reporting companies include any entity that is created by the filing of a document with a secretary of state or any similar office under the law of a state or Indian tribe (this includes corporations, LLCs, and limited partnerships).

In most circumstances, a reporting company only has to file an initial BOI Report to comply with the CTA’s reporting requirements. However, when the required information reported by an individual or reporting company changes after a BOI Report has been filed or when either discovers that the reported information is inaccurate, the individual or reporting company must update or correct the reporting information.

Deadline: If an updated BOI Report is required, the reporting company has 30 calendar days after the change to file an updated report.

What triggers an updated BOI Report? There is no materiality threshold as to what warrants an updated report. According to FinCEN, any change to the required information about the reporting company or its beneficial owners in its BOI Report triggers a responsibility to file an updated BOI Report (a brief illustrative sketch appears at the end of this section).

Some examples that trigger an updated BOI Report:

  • Any change to the information reported for the reporting company, such as registering a new DBA, new principal place of business, or change in legal name.
  • A change in the beneficial owners exercising substantial control over the reporting company, such as a new CEO, a sale (whether as a result of a new equity issuance or transfer of equity) that changes who meets the ownership interest threshold of 25%, or the death of a beneficial owner listed in the BOI Report.
  • Any change to any listed beneficial owner’s name, address, or unique identifying number provided in a BOI report.
  • Any other change to existing ownership information that was previously listed in the BOI Report.

Below is a reminder of the information reported on the BOI Report:

  • (1) For a reporting company, any change to the following information triggers an updated report:
    • Full legal name;
    • Any trade or “doing business as” name;
    • A complete current address (cannot be a post office box);
    • The state, territory, possession, tribal or foreign jurisdiction of formation; and
    • TIN (taxpayer identification number).
  • (2) For the beneficial owners and company applicants, any change to the following information triggers an updated report:
    • Full legal name of the individual;
    • Date of the birth of the individual;
    • A complete current address;
    • A unique identifying number and the issuing jurisdiction from a non-expired identification document (e.g., a U.S. passport or state-issued driver’s license); and
    • An image of the document.

It is important to note that if a beneficial owner or company applicant has a FinCEN ID and any change is made to the required information for either individual, then such individual is responsible for updating their information with FinCEN directly. This is not the responsibility of the reporting company.
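
To illustrate the trigger analysis, below is a minimal, hypothetical sketch comparing a company’s last-filed BOI Report against its current information. The field names are assumptions for illustration, and any detected change would still need to be reported to FinCEN within the 30-day window:

```python
# Hypothetical sketch: diff the last-filed BOI Report against current data
# to see whether an updated report is triggered (30-day deadline).
# Field names are illustrative assumptions.
def changed_fields(filed: dict, current: dict) -> list[str]:
    """Any differing field triggers an updated BOI Report; there is no
    materiality threshold."""
    keys = set(filed) | set(current)
    return sorted(k for k in keys if filed.get(k) != current.get(k))

filed = {"legal_name": "Acme LLC", "dba": None,
         "address": "100 Main St, Dallas, TX"}
current = {"legal_name": "Acme LLC", "dba": "Acme Widgets",
           "address": "200 Elm St, Dallas, TX"}

triggers = changed_fields(filed, current)
print(triggers)  # ['address', 'dba'] -> updated BOI Report due within 30 days
```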

Digging for Trouble: The Double-Edged Sword of Decisions to Report Misconduct

On May 10, 2024, Romy Andrianarisoa, former Chief of Staff to the President of Madagascar, was convicted of soliciting bribes from Gemfields Group Ltd (Gemfields), a UK-based mining company specializing in rubies and emeralds. Andrianarisoa, along with her associate Philippe Tabuteau, was charged after requesting significant sums of money and a five percent equity stake in a mining venture in exchange for facilitating exclusive mining rights in Madagascar.

The investigation, spearheaded by the UK’s National Crime Agency (NCA), began when Gemfields reported their suspicions of corruption. Using covert surveillance, the NCA recorded Andrianarisoa and Tabuteau requesting 250,000 Swiss Francs (approximately £215,000) and a five percent equity stake, potentially worth around £4 million, as payments for their services. Gemfields supported the investigation and prosecution throughout.

During the investigation, six covertly recorded audio clips were released, suggesting Andrianarisoa had significant influence over Madagascar’s leadership and her expectation of substantial financial rewards. The arrests in August 2023 and subsequent trial at Southwark Crown Court culminated in prison sentences of three and a half years for Andrianarisoa and two years and three months for Tabuteau.

Comment

Gemfields has, quite rightly, been praised for reporting this conduct to the NCA and supporting their investigation and prosecution. In doing so, they made a strong ethical decision and went above and beyond their legal obligations: there is no legal requirement on Gemfields to report solicitations of this kind.

Such a decision will also have been difficult. Reporting misconduct and supporting the investigation is likely to have exposed Gemfields to significant risk and costs:

  • First, in order to meet their obligations as prosecutors, put together the best case, and comply with disclosure requirements, the NCA likely required Gemfields employees to attend interviews and provide documents. These activities require significant legal support and can be very costly both in time and money.
  • Secondly, such disclosures and interviews might identify unrelated matters of interest to the NCA. It is not uncommon in these cases for corporates reporting misconduct to become the subject of unrelated allegations of misconduct and separate investigations themselves.
  • Furthermore, to the extent that Gemfields supported the covert surveillance aspects of the NCA’s investigation, there may have been significant safety risks to both the employees participating, and unrelated employees in Madagascar. Such risks can be extremely difficult to mitigate.
  • Finally, the willingness to publicly and voluntarily report Andrianarisoa is likely to have created a chilling effect on Gemfields’ ability to do legitimate business in Madagascar and elsewhere. Potential partners may be dissuaded from working with Gemfields for fear of being dragged into similar investigations whether warranted or not.

Organisations in these situations face difficult decisions. Many will, quite rightly, want to be good corporate citizens, but in doing so, must recognise the potential costs and risks to their business and, ultimately, their obligations to shareholders and owners. In circumstances where there is no obligation to report, the safest option may be to walk away and carefully record the decision to do so. No doubt, Gemfields carefully considered these risks prior to reporting Andrianarisoa’s misconduct.

Businesses facing similar challenges should:

  • Ensure they understand their legal obligations. Generally, there is no obligation to report a crime. However, particularly for companies and firms operating in the financial services or other regulated sectors, this is not universally the case.
  • Carefully consider the risks and benefits associated with any decision to report another’s misconduct, including not only financial costs, but time and safety costs too.
  • Develop a compliance programme that assists and educates teams on how to correctly identify misconduct, escalate appropriately, and decide whether to report.

Office Tenants: Do Due Diligence on Your Landlord

Office markets from coast to coast are struggling mightily, especially in major urban downtowns. Chicago’s downtown business district (i.e., the Loop) is no exception. Right now, Chicago’s Loop office vacancy rates are the highest ever recorded.

In April of this year, Crain’s Chicago Business reported that downtown office vacancy broke 25% for the first time on record, landing at 25.1%. This number reflects seven consecutive quarters of increasing vacancy.

What does this mean for tenants? Well…a lot.

It means opportunity as landlords feel pressure to fill vacant office space. Lease concessions that never would have been considered three years ago might be available now. These days, on most office deals, tenants enjoy considerable leverage. While this market brings tenants many benefits, it also brings significant risks. Here are a few risks for tenants to consider before signing a lease:

1. Is your landlord in financial distress? Landlords will always vet an incoming tenant’s financial condition. The same often does not happen in reverse. Many office landlords face financial pressure now. If the landlord is at risk of foreclosure, or otherwise in financial peril, the tenant should have a number of concerns, ranging from how well the building will be maintained to whether they will soon be staying in a bank-owned building. Tenants should fully inquire into the landlord’s financial condition, especially if meaningful tenant allowances have been agreed to.

2. Subordination and Non-Disturbance Agreements are more important now than ever. “SNDAs” can go a long way towards protecting a tenant’s lease rights in the event of a foreclosure.

3. Will “creative” uses come to the building? Never underestimate the ingenuity of the commercial real estate industry. All kinds of ideas have sprouted up as to what could be done to fill empty downtown office space. Indoor dog parks, pickleball courts and the often-discussed notion of converting vacant office space into residential apartments are good examples. Tenants should find out before signing whether the landlord has any designs on filling vacant space with uses that the tenant might find objectionable.

4. Co-tenancy provisions and the careful review of how operating costs will be allocated are critical. Who bears the risk of vacancy as to operating expenses? Tenants need to know if fewer tenants means they will have a higher share of operating costs. Tenants also need to know if they have any way out of the lease if the building really struggles. No one wants to be alone in an empty tower.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider how this technology has a twofold effect.

It’s a classic double-edged technology: in the right hands it can help detect fraud and bolster compliance, but in the wrong hands it can snuff out would-be whistleblowers and weaken accountability mechanisms. Employees should assume it is being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

There are two major downsides to the implementation of compliance algorithms that experts foresee: institutions avoiding culpability and tracking whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it. This same technology can be used against employees to monitor and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.

AI compliance algorithms are promised to operate as follows (a brief illustrative sketch appears after this list):

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.
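
To make the promised capabilities concrete, below is a minimal, hypothetical sketch of the kind of statistical outlier flagging such systems build upon. The field names and the three-standard-deviation threshold are illustrative assumptions, not any vendor’s actual implementation:

```python
# Minimal illustrative sketch: flag transactions whose amounts deviate
# sharply from an account's historical baseline (z-score test).
# Field names and the 3-sigma threshold are assumptions for illustration.
from statistics import mean, stdev

def flag_anomalies(transactions: list[dict], threshold: float = 3.0) -> list[dict]:
    """Return transactions whose amount lies more than `threshold`
    standard deviations from the mean amount."""
    amounts = [t["amount"] for t in transactions]
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [t for t in transactions
            if abs(t["amount"] - mu) / sigma > threshold]

history = [{"id": i, "amount": 100 + i} for i in range(30)]
history.append({"id": 99, "amount": 25_000})  # an obvious outlier
print(flag_anomalies(history))  # -> [{'id': 99, 'amount': 25000}]
```

Real compliance systems layer many such signals (communication logs, relationship graphs, timing patterns) on top of this basic idea, but the flag-and-refer structure is the same: the algorithm surfaces anomalies, and humans decide what they mean.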

Yuktesh Kashyap, associate vice president of data science at Sigmoid, explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be an ineffective tool that allows institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by calling it due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be used as a powerful tool for identifying fraud, there is still no method for it to contact authorities with its discoveries. Compliance personnel are still required to blow the whistle, consistent with society’s standard of due process. These algorithms should be used in conjunction with human judgment to determine compliance or the lack thereof. Due process is needed so that individuals can understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at the Brookings Institution’s Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

Nowadays most office jobs (whether remote or in person) involve operations conducted fully online. Employees are required to use company computers and networks to do their jobs. Data generated by each employee passes through these devices and networks, meaning employees’ privacy rights are questionable.

Because of this, whistleblowing will get much harder – organizations can employ the technology they initially implemented to catch fraud to instead catch whistleblowers. They can monitor employees via the capabilities built into our everyday tech: cameras, emails, keystroke detectors, online activity logs, what is downloaded, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they might find problematic (such as a whistleblower) and all the information they produce, including the keywords that might indicate fraud. Creators of these algorithms promise that soon their products will be able to detect all sorts of patterns and feelings, such as emotion and sentiment.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way, and it makes it difficult to transmit confidential information. A whistleblower must, at some point, download information, and since you will be doing that on a company network, that is easily detected these days.”

But the question of what becomes of the whistleblower is based on whether the compliance officers operate in support of the company or the public interest – they will have an extraordinary amount of information about the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations be a lot more transparent about their use of these tools. They should inform their employees what kind of information they are using, how they are monitoring employees, and what kind of software they use. Are they using detection software of any sort? Are they monitoring keystrokes?

Employees should want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which might be a good argument for collecting information, but it does not mean they should keep that information for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about the length of storage and what is done with this data once it is used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. This ensures that detected fraud is not only identified, but also reported and addressed, protecting taxpayer money, and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation would hinder these programs. Companies will undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect where potential whistleblowers would not act out of fear of detection.

These AI-driven compliance systems are already being employed in our institutions, and experts believe they must have independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.


Cybersecurity Crunch: Building Strong Data Security Programs with Limited Resources – Insights from Tech and Financial Services Sectors

In today’s digital age, cybersecurity has become a paramount concern for executives navigating the complexities of their corporate ecosystems. With resources often limited and the ever-present threat of cyberattacks, establishing clear priorities is essential to safeguarding company assets.

Building the right team of security experts is a critical step in this process, ensuring that the organization is well-equipped to fend off potential threats. Equally important is securing buy-in from all stakeholders, as a unified approach to cybersecurity fosters a robust defense mechanism across all levels of the company.

This insider’s look at cybersecurity will delve into the strategic imperatives for companies aiming to protect their digital frontiers effectively.

Where Do You Start on Cybersecurity?
Pressures on corporate security teams are growing, both from internal stakeholders and outside threats. But the resources to do the job aren’t. So how can companies protect themselves in a real-world environment, where finances, employee time, and other resources are finite?

“You really have to understand what your company is in the business of doing,” said Brian Wilson, Chief Information Security Officer at SAS. “Every business will have different needs. Their risk tolerances will be different.”
For example, Tuttle said that in the manufacturing sector, digital assets and data have become increasingly important in recent years. The physical product is no longer the end-all, be-all of the company’s success.

For cybersecurity professionals, this new reality leads to challenges and tough choices. Having a perfect cybersecurity system isn’t possible—not for a company doing business in a modern, digital world. Tuttle said, “If we’re going to enable this business to grow, we’re going to have to be forward-thinking.”

That means setting priorities for cybersecurity. Inskeep, who previously worked in cybersecurity for one of the world’s largest financial services institutions, said multi-factor authentication and controlling access are a good starting point, particularly against phishing and ransomware attacks. He also said companies need good back-up systems that enable them to recover lost data, as well as robust incident response plans.

“Bad things are going to happen,” Wilson said. “You need to have logs and SIEMs to tell a story.”

Tuttle said one challenge in implementing an incident response plan is engaging team members who aren’t on the front lines of cybersecurity. “They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right,” said Lisa Tuttle, Chief Information Security Officer at SPX Technologies. “They need to be thinking, ‘What should I be looking for and what’s my response?’”
Wilson said tabletop exercises and security awareness training “are a good feedback loop to have to make sure you’re including the right people. They have to know what to do when something bad happens.”

Building a Security Team
Hiring and retaining good people in a harrowing field can be a challenge. Companies should leverage their external and internal networks to find data privacy and cybersecurity team members.

Wilson said SAS uses an intern program to help ensure they have trained professionals already in-house. He also said a company’s Help Desk can be a good source of talent.

Remote work also allows companies to cast a wider net for hiring employees. The challenge becomes keeping remote workers engaged, and companies should consider how they can make these far-flung team members feel part of the team.

Inskeep said burnout is a problem in the cybersecurity field. “It’s a job that can feel overwhelming sometimes,” said Todd Inskeep, Founder and Cybersecurity Advisor at Incovate Solutions. “Interacting with people and protecting them from that burnout has become more critical than ever.”
Weighing Levels of Compliance
The first step, said Ted Claypoole, a partner at Womble Bond Dickinson, is understanding the compliance obligations the company faces. These obligations include both regulatory requirements (which are tightening) and contract terms from customers.

“For a business, that can be scary, because your business may be agreeing to contract terms with customers and they aren’t asking you about the security requirements in those contracts,” Wilson said.

The panel also noted that “compliance” and “security” aren’t the same thing. Compliance is a minimum set of standards that must be met, while security is a more wide-reaching goal.

But company leaders must realize they can’t have a perfect cybersecurity system, even if they could afford it. It’s important to identify priorities—including which operations are the most important to the company and which would be most disruptive if they went offline.

Wilson noted that global privacy regulations are increasing and becoming stricter every year. In addition, federal officials have taken criminal action against CSOs in recent years.

“Everybody’s radar is kind of up,” Tuttle said. The increasing compliance pressure also means it’s important for cybersecurity teams to work collaboratively with other departments, rather than making key decisions in a vacuum. Inskeep said such decisions need to be carefully documented as well.

“If you get to a place where you are being investigated, you need your own lawyer,” Claypoole said.
Cyberinsurance is another consideration for data privacy teams, and it can help Chief Security Officers make the case for more resources (both financial and work hours). Inskeep said cyberinsurance questions also can help companies identify areas of risk and where they need to prioritize their efforts. Such priorities can change, and he said companies need to have a committee or some other mechanism to regularly review and update cybersecurity priorities.

Wilson said one positive change he’s seen is that top executives now understand the importance of cybersecurity and are more willing to include cybersecurity team members in the up-front decision-making process.

Bringing in Outside Expertise
Consultants and vendors can be helpful to a cybersecurity team, particularly for smaller teams. Companies can move certain functions to third-party consultants, allowing their own teams to focus on core priorities.

“If we don’t have that internal expertise, that’s a situation where we’d call in third-party resources,” Wilson said.

Bringing in outside professionals also can help a company keep up with new trends and new technologies.

Ultimately, a proactive and well-coordinated cybersecurity strategy is indispensable for safeguarding the digital landscape of modern enterprises. With an ever-evolving threat landscape, companies must be agile in their approach and continuously review and update their security measures. At the core of any effective cybersecurity plan is a comprehensive risk management framework that identifies potential vulnerabilities and outlines steps to mitigate their impact. This framework should also include incident response protocols to minimize the damage in case of a cyberattack.

In addition to technology and processes, the human element is crucial in cybersecurity. Employees must be educated on how to spot potential threats, such as phishing emails or suspicious links, and know what steps to take if they encounter them.

Key Takeaways:
  • What are the biggest risk areas and how do you minimize those risks?
  • Know your external cyber footprint. This is what attackers see and will target.
  • Align with your team, your peers, and your executive staff.
  • Prioritize implementing multi-factor authentication and controlling access to protect against common threats like phishing and ransomware.
  • Develop reliable backup systems and robust incident response plans to recover lost data and respond quickly to cyber incidents.
  • Engage team members who are not on the front lines of cybersecurity to ensure quick identification and escalation of potential threats.
  • Conduct tabletop exercises and security awareness training regularly.
  • Leverage intern programs and help desk personnel to build a strong cybersecurity team internally.
  • Explore remote work options to widen the talent pool for hiring cybersecurity professionals, while keeping remote workers engaged and integrated.
  • Balance regulatory compliance with overall security goals, understanding that compliance is just a minimum standard.


by: Theodore F. Claypoole of Womble Bond Dickinson (US) LLP


Paperless Power: Exploring the Legal Landscape of E-Signatures and eNotes

In an era characterized by rapid technological advancements and the profound shift towards remote work, the traditional concept of signing documents with pen and paper has evolved. Electronic signatures, or e-signatures, have emerged as a convenient and efficient alternative, promising to streamline processes, reduce paperwork, and enhance accessibility. Organizations are increasingly embracing e-signatures for a wide range of transactions, prompting a closer examination of their legal validity.

WHAT IS AN “E-SIGNATURE”?

An e-signature encompasses any electronic sound, symbol, or process associated with a record and executed with the intent to sign. These can range from scanned images of handwritten signatures to digital representations generated by specialized software.

GOVERNING LAW:

The governing law for e-signatures in the United States includes both state-specific laws, like those based on the Uniform Electronic Transactions Act (UETA), and the federal Electronic Signatures in Global and National Commerce Act (ESIGN). ESIGN applies to interstate and foreign transactions, harmonizing electronic transactions across state lines. Many states, including Massachusetts, have adopted UETA, reinforcing the legal standing of e-signatures within their jurisdictions (the Massachusetts version is referred to here as MUETA).

VALIDITY AND REQUIREMENTS:

Generally, e-signatures are legally binding in the Commonwealth of Massachusetts. However, certain documents like wills, adoption papers, and divorce decrees are excluded from the scope of ESIGN and MUETA to safeguard consumer rights and maintain traditional legal practices.

The following components must be present for e-signatures to be fully protected and upheld under ESIGN and MUETA:

  • Intent: each party intended to execute the document;
  • Consent: there must be express or implied consent from the parties to do business electronically (under MUETA, consumer consent disclosures may also be required). Signers should also have the option to opt out;
  • Association: the e-signature must be “associated” with the document it is intended to authenticate; and
  • Record Retention: records of the transaction and e-signature must be retained electronically.

Meeting these requirements ensures that e-signatures have the same legal validity and enforceability as traditional handwritten, wet-ink signatures in Massachusetts.

ENFORCEABILITY OF E-NOTES AND CONCERNS FOR FINANCIAL INSTITUTIONS:

An eNote is an electronically created, signed, and stored promissory note. It differs from scanned signatures on paper or PDF copies. Governed by Article 3 of the Uniform Commercial Code (UCC), eNotes are considered negotiable instruments and therefore require special treatment. ESIGN provides a framework for their use, emphasizing the concept of a “transferable record.” This electronic record, meeting UCC standards, grants the same legal rights as a traditional paper note to the person in “control.” The objective of “control” is for there to be a single authoritative copy of the promissory note that is unique, identifiable, and unalterable. Therefore, proving authenticity and lender control over eNotes can be complex.
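
To illustrate the “control” concept, below is a minimal, hypothetical sketch of how a custodian might verify that a stored copy remains the single authoritative version by comparing cryptographic hashes. It is not a description of any actual eNote vault or registry system:

```python
# Minimal hypothetical sketch: a registry records the SHA-256 hash of the
# single authoritative copy of an eNote; any alteration changes the hash,
# so verification against the registry fails. Illustrative only.
import hashlib

registry: dict[str, str] = {}  # note_id -> hash of the authoritative copy

def register_authoritative_copy(note_id: str, note_bytes: bytes) -> None:
    registry[note_id] = hashlib.sha256(note_bytes).hexdigest()

def is_authoritative(note_id: str, note_bytes: bytes) -> bool:
    """True only if this copy matches the registered authoritative hash."""
    return registry.get(note_id) == hashlib.sha256(note_bytes).hexdigest()

original = b"Promissory note: borrower promises to pay $250,000 ..."
register_authoritative_copy("note-001", original)
print(is_authoritative("note-001", original))                  # True
print(is_authoritative("note-001", original + b" [altered]"))  # False
```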

In Massachusetts, specific foreclosure laws require the presentation of the original note. Thus, lenders should be cautious with eNotes; possessing an original, physical note greatly reduces enforceability risks.

Further, financial institutions often face heightened scrutiny when using e-signatures, given the sensitive nature of financial transactions, and must take additional steps to ensure security, compliance, and consumer protection.

RECORDABLE DOCUMENTS:

E-signatures have become widely accepted for recording purposes, including in real estate transactions, due to their convenience and efficiency. The implementation of e-signatures for recording has been facilitated and standardized by legislation such as the Uniform Real Property Electronic Recording Act (URPERA). While URPERA offers a comprehensive framework for electronic recording, its adoption varies from state to state. In Massachusetts, URPERA has not yet been formally adopted, leaving recording procedures subject to individual county regulations.

BEST PRACTICES:

Although e-signatures are legally recognized under both ESIGN and MUETA, organizations should adopt the following best practices to ensure compliance:

  1. Obtain Consent: Obtain (and retain) affirmative consent from parties to conduct transactions electronically.
  2. Association: Establish a clear and direct connection between an electronic signature and the electronic record it is intended to authenticate.
    • Embedding: One common method of meeting the association requirement is embedding e-signatures directly within electronic documents.
    • Metadata and Audit Trails: Another method is using metadata and audit trails. Metadata captures signature details such as the signing date and time, signer identity, and transaction specifics. Audit trails chronicle all document actions, reinforcing the link between signatures and records (a short illustrative sketch follows this list).
  3. Ensure the Integrity of Electronic Records:
    • Authenticity and Integrity: Use secure methods to authenticate the identity of signatories and ensure the integrity of the electronic records. This can include digital signatures, encryption, and secure access controls.
    • Single Authoritative Copy: For transferable records (eNotes), ensure that there is a single authoritative copy that is unique, identifiable, and unalterable except through authorized actions.
  4. Maintain Accessibility and Retainability: Ensure that electronic records are retained in a format that is accessible and readable for the required retention period. This includes being able to accurately reproduce the record in its original form.
  5. Security Measures: Implement robust cybersecurity measures to protect against unauthorized access, alteration, or destruction of electronic records. This includes using firewalls, encryption, and secure user authentication methods.
  6. Provide Consumer Protections: Ensure that consumers have the option to receive paper records and can withdraw their consent to electronic records at any time.
  7. Legal and Regulatory Updates: Keep abreast of any updates or changes in the legal and regulatory landscape regarding electronic transactions and records. Adjust policies and practices accordingly to remain compliant.
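
By way of illustration only, the sketch below shows one way signing metadata and a hash-chained audit trail might associate an e-signature with a specific record (practices 2 and 3 above). The structures, field names, and the record_event helper are hypothetical; commercial e-signature platforms implement their own schemes, and this sketch is not a substitute for one.

  # Illustrative sketch: associating an e-signature with one exact record via
  # metadata and a hash-chained audit trail. All names here are hypothetical.
  import hashlib
  import json
  from datetime import datetime, timezone

  def _chain_hash(payload: dict, prev_hash: str) -> str:
      # Chain each event to the previous one so later alteration is detectable.
      data = json.dumps(payload, sort_keys=True) + prev_hash
      return hashlib.sha256(data.encode()).hexdigest()

  audit_trail = []

  def record_event(action: str, document_hash: str, signer: str) -> None:
      event = {
          "action": action,                  # e.g., "viewed", "signed"
          "document_sha256": document_hash,  # ties the event to one exact record
          "signer": signer,
          "timestamp": datetime.now(timezone.utc).isoformat(),
      }
      prev = audit_trail[-1]["event_hash"] if audit_trail else ""
      audit_trail.append({**event, "event_hash": _chain_hash(event, prev)})

  document = b"Agreement text ..."
  doc_hash = hashlib.sha256(document).hexdigest()

  record_event("viewed", doc_hash, "jane.doe@example.com")
  record_event("signed", doc_hash, "jane.doe@example.com")

  print(json.dumps(audit_trail, indent=2))

Because each event records the document’s hash and each entry incorporates the hash of the entry before it, the trail both links the signature to a specific record and resists after-the-fact editing, which is the substance of the association and integrity requirements.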

CONCLUSION:

While e-signatures offer significant benefits for modern commerce, including efficiency and convenience, their adoption requires careful consideration, especially regarding legal and regulatory compliance. By adhering to best practices and remaining vigilant, businesses and individuals can leverage e-signatures effectively in today’s digital economy.

Five Compliance Best Practices for … Conducting a Risk Assessment

As an accompaniment to our biweekly series on “What Every Multinational Should Know About” various international trade, enforcement, and compliance topics, we are introducing a second series of quick-hit pieces on compliance best practices. Give us two minutes, and we will give you five suggested compliance best practices that will benefit your international regulatory compliance program.

Conducting an international risk assessment is crucial for identifying and mitigating potential risks associated with conducting business operations in foreign countries and complying with the expansive application of U.S. law. Because compliance is essentially an exercise in identifying, mitigating, and managing risk, the starting point for any international compliance program is to conduct a risk assessment. If your company has not done one within the last two years, then your organization probably should be putting one in motion.

Here are five compliance checks that are important to consider when conducting a risk assessment:

  1. Understand Business Operations: A good starting point is to gain a thorough understanding of the organization’s business operations, including products, services, markets, supply chains, distribution channels, and key stakeholders. You should pay special attention to new risk areas, including newly acquired companies and divisions, expansions into new countries, and new distribution patterns. Identifying the business profile of the organization, and how it raises systemic risks, is the starting point of developing the risk profile of the company.
  2. Assess Country- and Industry-Specific Risk Factors: Analyze the political, economic, legal, and regulatory landscape of each country where the organization operates or plans to operate. Consider factors such as political stability, corruption levels, regulatory environment, and cultural differences. You should also understand which countries raise indirect risks, such as the transshipment of goods to sanctioned countries, and evaluate industry-specific risks and trends that may impact your company’s risk profile, such as the history of recent enforcement actions.
  3. Gather Risk-Related Data and Information: You should gather relevant data and information from internal and external sources to inform the risk-assessment process. Relevant examples include internal documentation, industry publications, reports of recent enforcement actions, and areas where government regulators are stressing compliance, such as the recent focus on supply chain factors. Use risk-assessment tools and methodologies to systematically evaluate and prioritize risks, such as risk matrices, risk heat maps, scenario analysis, and probability-impact assessments (a simple scoring sketch follows this list). (The Foley anticorruption, economic sanctions, and forced labor heat maps are found here.)
  4. Engage Stakeholders: Engage key stakeholders throughout the risk-assessment process to gather insights, perspectives, and feedback. Consult with local employees and business partners to gain feedback on compliance issues that are likely to arise while also seeking their aid in disseminating the eventual compliance dictates, internal controls, and other compliance measures that your organization ends up implementing or updating.
  5. Document Findings and Develop Risk-Mitigation Strategies: Document the findings of the risk assessment, including identified risks, their potential impact and likelihood, and recommended mitigation strategies. Ensure that documentation is clear, concise, and actionable. Use the documented findings to develop risk-mitigation strategies and action plans to address identified risks effectively while prioritizing mitigation efforts based on risk severity, urgency, and feasibility of implementation.
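
As a rough illustration of the probability-impact assessment mentioned in item 3 above, the short sketch below scores and ranks a few invented risks. The risks, the 1-to-5 scales, and the threshold are hypothetical examples, not a prescribed compliance methodology.

  # Illustrative probability-impact scoring for risk prioritization.
  # The risks, 1-5 scales, and threshold below are hypothetical examples.
  risks = [
      {"risk": "Third-party transshipment to a sanctioned country", "probability": 3, "impact": 5},
      {"risk": "Gifts and hospitality in a high-corruption market", "probability": 4, "impact": 4},
      {"risk": "Forced labor exposure in tier-2 suppliers",         "probability": 2, "impact": 5},
  ]

  # Score = probability x impact; higher scores get mitigated first.
  for r in risks:
      r["score"] = r["probability"] * r["impact"]

  for r in sorted(risks, key=lambda r: r["score"], reverse=True):
      flag = "HIGH" if r["score"] >= 15 else "MODERATE"
      print(f"{flag:8} {r['score']:2}  {r['risk']}")

Even a simple scoring pass like this forces the prioritization by severity, urgency, and feasibility that item 5 calls for when allocating mitigation resources.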

Most importantly, you should recognize that assessing and addressing risk is an ongoing process. Ensure your organization has established processes for the ongoing monitoring and review of risks to track changes in the risk landscape and evaluate the effectiveness of mitigation measures. Further, most multinational organizations should update their risk assessments at least once every two years to reflect evolving risks and business conditions, as well as changing regulations and regulator enforcement priorities.