Top Risks for Businesses in 2024

Just weeks into 2024, it is already clear that uncertainty will be the watchword. Will the economic soft landing of 2023 persist into 2024? Will labor unrest, strong in 2023, settle down as inflation cools? Will inflation remain tamed? Will the U.S. elections bring continuity or a new administration with very different views on the role of the U.S. in the world and in regulating business?

Uncertainty is also fueling a complex risk environment that will require closer monitoring of global developments than in the past. As outlined below, geopolitical risks are present, multiple, interconnected and high impact. International relations have traditionally fallen outside the mandate of most C-Suites, but how the U.S. government responds to geopolitical challenges will impact business operations. Beyond additional disruptions to global trade, businesses in 2024 will face risks associated with expanding protectionist economic policies, climate change impacts, and AI-driven disruptors.

Geopolitical Tensions Disrupting Global Trade

The guardrails are coming off the international system that enshrines the ideals of preserving peace and security through diplomatic engagement, respecting international borders (not changing them through military might) and ensuring the free flow of global trade. In 2022, the world was shocked by Russia’s invasion of Ukraine, but it has taken time for the full impact to reverberate through the international system. While political analysts write of a “spillover of conflict,” the more insidious impact is that more leaders of countries and non-state groups are acting outside the guardrails, no longer deterred from using military force to achieve political goals. That makes 2024 ripe for new military conflicts that disrupt global trade beyond the ongoing war in Europe.

In October 2023, Hamas launched a war from Gaza against Israel. Thus far, fighting has spread to the West Bank, between Israel and Lebanese Hezbollah in the north, and to the Red Sea, with Iranian-backed Houthis attacking shipping through the strategic Bab al Mandab strait. To avoid the risks, container ships and oil tankers are re-routing around the Cape of Good Hope, adding two weeks of sailing time and the associated costs. Insurance premiums for cargo ships sailing in the eastern Mediterranean have skyrocketed, and some carriers are no longer servicing Israeli ports. Companies and retailers with tight delivery schedules are switching to airfreight, which is expected to drive up airfreight rates.

Iran, emboldened by its blossoming relationship with Russia as one of Moscow’s new arms suppliers, is activating its proxy armies in Yemen, Iraq, Syria and Lebanon to attack Western targets. In a two-day period in January 2024, the Iran Revolutionary Guards directly launched strikes in Syria, Iraq and Pakistan. Nuclear-armed Pakistan retaliated with a cross-border strike in Iran. While there are many nuances to these incidents, it is evident that deterrence against cross-border military conflict is eroding in a region with deep, festering grievances among neighbors. Iran is in an escalatory mode and could resume harassing shipping in the Persian Gulf and the strategic Strait of Hormuz, through which about a fifth of the world’s daily oil consumption passes.

In East Asia, North Korea is also emboldened by the changing geopolitical environment. Pyongyang, too, has become a major supplier of weaponry to Moscow for use in Ukraine. While Russia (and China) have in the past constructively contained the North Korean predilection for aggression against its neighbors, Supreme Leader Kim Jong Un may believe the time is ripe to change the status quo. Ominously, in a Jan. 15 speech before the Supreme People’s Assembly (North Korea’s parliament), Kim rejected the policy of reunification with South Korea and proposed incorporating the country into North Korea “in the event of war.” While North Korean leaders frequently revert to brinksmanship and aggressive language, Kim’s speech reflects the confidence of a nuclear power aligned with Russia against a shared adversary: South Korea, which is firmly aligned with the G7 consensus on Russia. A war on the Korean peninsula would be felt around the world: East Asia is central to global shipping and manufacturing, so conflict would disrupt supply chains as well as the regional economy.

China is also waiting for the right moment to “unite” Taiwan with the mainland. Beijing has seen the impact of Western sanctions on Russia over Ukraine and has so far been deterred from aiding the Russian war effort. In many ways, China has benefited from these sanctions and the reorientation of global trade. Also, Russia, with its far weaker economy, has proven surprisingly resilient to sanctions, another lesson for China. Meanwhile, the Taiwanese people voted in January and returned for a third time the ruling party that strongly rejects Chinese territorial claims. Tensions are high, with the Chinese military once again harassing Taiwanese defenses. For Beijing, the “right moment” could fall this year should conflict break out on the Korean peninsula, which would tie the U.S. down because of its Mutual Defense Treaty with South Korea.

The uncertainty here is not that there are global tensions, but how the U.S. will respond as they develop and how U.S. businesses can navigate external shocks. Will the U.S. be drawn into a new war in the Middle East? Can the U.S. manage multiple conflicts, already deeply involved in supporting Ukraine? Is the U.S. economy resilient enough to withstand trade disruptions? How can businesses strengthen their own resiliency?

Economic Protectionism Increasing Costs and Risks

Geopolitical tensions, the global pandemic and the unequal benefits of globalization are impacting economic policies of the U.S. and the political discourse around the merits of unrestrained free trade. Protectionist economic policies are creeping in, under the nomenclature of “secure supply chains,” “friend-shoring” and “home-shoring.” The U.S. has imposed tariffs on countries (even allies) accused of unfair trade practices and has foreclosed access to certain technologies by unfriendly countries, namely China.

While the responses to some of these trade restrictions are new trade agreements with “friends” to regulate access under preferred terms, in essence creating multiple “friends” trade blocs for specific sectors, other responses are retaliatory, including counter tariffs and export restrictions or outright bans. In 2024, the U.S. economy will see the impact of these trade fragmentation policies in acute ways, with upside risks of new business opportunities and downside risks of supply chain disruptions, critical resource competition, increased input costs, compliance risks and increased reputational risks.

Trade with China, which remains significant and important to the stability of the U.S. economy, will pose new risks in 2024. While Washington and Beijing have agreed to some political and security guardrails to manage the relationship, economic competition is unrestrained and stability in the bilateral relationship is not guaranteed. The December 2023 bipartisan report by the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party made 150 recommendations on fundamentally resetting economic and technological competition with China; if even partially adopted, those recommendations risk reigniting the trade war.

2024 is a presidential election year for the U.S. A change of control of the executive branch could result in many economic and regulatory policy reversals. The definition of “friend” could shift or narrow. Restrictions on trade with China could accelerate.

Impacts of Climate Change and Sustainability Policies

2023 was the hottest year on record, and El Niño conditions are expected to further boost the warming trend. Many regions experienced record-breaking wildfire activity in 2023, including Canada, where 18 million hectares of land burned. Extreme storms caused life-threatening flooding in Europe, Asia and the Americas. 2024 is expected to bring even more climate hazards. The impacts will be physical and financial, including growing insurance losses and adverse impacts on operations and value chains. Analysts expect that in 2024 the economic and financial costs of adverse health impacts from climate change will increase, with risks related to the spread of infectious disease, insufficient access to clean water, and physical harm to the elderly and vulnerable. The direct economic effects will fall on health systems, along with productivity losses due to extreme weather incidents and the effects of epidemics.

Energy transition to low-carbon emissions is underway in the U.S., but it is uneven and still uncertain. The financial market is investing in an impressive number of startups and large-scale projects revolving around cleantech. Still, there is hesitancy around the opportunities and risks of sustainability. Thus far, progress towards sustainability goals has been private sector-led and government-enabled. There is a risk that government incentive programs encouraging the transition to low-carbon energy could be reversed or curtailed under a new administration.

In 2024, some companies will face more climate disclosure compliance requirements. The Securities and Exchange Commission (SEC) is expected to release its final rule on climate change disclosures. The final action has been delayed several times because of pushback by public companies on some of the requirements, including Scope 3 greenhouse gas emission disclosures (those linked to supply chains and end users). California has not waited for the SEC’s final rule: In October 2023, Gov. Gavin Newsom signed into law legislation that will require large companies to disclose greenhouse gas emissions. The California climate laws go into effect in 2026, but companies will need to start much earlier to build the capabilities to plan, track and report their carbon footprint. U.S. companies doing business in the European Union will also need to comply with the EU Corporate Sustainability Reporting Directive, with rules coming into force in mid-2024.

Disruptive Technology

In 2023, generative AI was the talk of the town; in 2024, it will be the walk. Companies are popping up with new tools for every imaginable sector, to increase efficiency, task automation, customization, personalization and cost reduction. Business leaders are scrambling to integrate AI to gain a competitive edge, while navigating the everyday risks related to privacy, liability and security. While there are concerns that AI will displace humans, there is a growing consensus that while some jobs will disappear, people will focus on higher value work. That said, new rounds of labor disruptions linked to workforce transition are likely in 2024.

2024 will also bring AI-generated misinformation and disinformation. Bad actors will spread “synthetic” content, such as sophisticated voice cloning, doctored images and counterfeit websites, seeking to manipulate people, damage companies and economies, and foment dissent.

In 2024, around 2 billion people in more than 50 countries will vote in elections at risk of manipulation by misinformation and disinformation, which could destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence, terrorism and erosion of democratic processes. Large democracies will hold elections in 2024, including the U.S., the EU, Mexico, South Korea, India, Pakistan, Indonesia and South Africa. Synthetic content can be very difficult to detect, while easy to produce with AI tools.

This is not a theoretical threat; synthetic content is already being disseminated in the U.S., targeting New Hampshire voters with robocalls carrying a fake recorded message from President Biden encouraging people not to vote in the primary election. The U.S. is already polarized, with citizens distrustful of the government and media, a ready vulnerability. Businesses are not immune. Notably, CEOs have stood apart with higher ratings for trustworthiness, and they risk being called upon to vouch for “truth” (and becoming collateral damage in the fray).

AI-powered malware will make 2023 cyber risks look like child’s play. Attackers can use AI algorithms to find and exploit software vulnerabilities, making attacks precise and effective. AI can help hackers quickly identify security measures and evade them. AI-created phishing attacks will be more sophisticated and difficult to detect because the algorithms can assess larger amounts of piecemeal information and craft messages that mimic communication styles.

The role of states backing cyber armies to spread disinformation or steal information is growing and is part and parcel of the erosion of the existing international order. States face little deterrence from digital cross-border attacks because there are yet to be established mechanisms to impose real costs.

CNN, BREAKING NEWS: CNN Targeted In Massive CIPA Case Involving A NEW Theory Under Section 638.51!

CNN is now facing a massive CIPA class action for allegedly violating CIPA Section 638.51 by installing “Trackers” on its website. In Lesh v. Cable News Network, Inc., filed in the Superior Court of the State of California by Bursor & Fisher, plaintiff accuses the multinational news network of installing three tracking tools to invade users’ privacy and track their browsing habits in violation of Section 638.51.

More on that in a bit…

As CIPAworld readers know, we predicted the 2023 privacy litigation trends for you.

We warned you of the risky CIPA Chat Box cases.

We broke the news on the evolution of CIPA Web Session recording cases.

We notified you of major CIPA class action lawsuits against some of the world’s largest brands facing millions of dollars in potential exposure.

Now – we are reporting on a lesser-known facet of CIPA – but one that might be even more dangerous for companies using new Internet technologies.

This new focus for plaintiffs’ attorneys appears to rely on the theory that website analytics tools are “pen register” or “trap and trace” devices under CIPA §638.51. These allegations also come with a massive $5,000-per-violation penalty.

First, let’s delve into the background.

The Evolution of California Invasion of Privacy Act:

We know the California Invasion of Privacy Act is this weird little statute that was enacted decades ago and was designed to prevent eavesdropping and wiretapping because, of course, back then law enforcement was listening in on folks’ phone calls to find communists.

Section 638.51 in particular was originally enacted back in the 1980s, and traditionally, “pen-traps” were employed by law enforcement to record outgoing and/or incoming telephone numbers from a telephone line.

Over the last two years, plaintiffs have been using these decades-old statutes against companies, claiming that the use of internet technologies such as website chat boxes, web session recording tools, JavaScript, pixels, cookies and other newfangled technologies constitutes “wiretapping” or “eavesdropping” on website users.

And California courts, which love to take old statutes and apply them to new technologies, have basically said internet communications are protected from being eavesdropped on.

Now California courts will have to address whether these newfangled technologies are also “pen-trap” “devices or processes” under 638.51. These new 638.51 cases involve technologies such as cookies, web beacons, JavaScript and pixels that obtain information about users and their devices as they browse websites and/or mobile applications. The collected data is then analyzed by the website operator or a third-party vendor to gather relevant information about users’ online activities.

Section 638.51:

Section 638.51 prohibits the usage or installation of “pen registers” – a device or process that records or decodes dialing, routing, addressing, or signaling information (commonly known as DRAS) and “trap and trace” (pen-traps) – devices or processes traditionally used by law enforcement that allow one to record all numbers dialed on outgoing calls or numbers identifying incoming calls — without first obtaining a court order.

Unlike CIPA’s Section 631, which prohibits wiretapping (the real-time interception of the contents of communications without consent), CIPA Section 638.51 prohibits the collection of DRAS.

Section 638.51 has limited exceptions, including where a service provider’s customer consents to the device’s use or where the device is used to protect the rights or property of a service provider.

Breaking Down the Terminology:

The term “pen register” means a device or process that records or decodes DRAS “transmitted by an instrument or facility from which a wire or electronic communication is transmitted, but not the contents of a communication.” §638.50(b).

The term “trap and trace” focuses on incoming, rather than outgoing numbers, and means a “device or process that captures the incoming electronic or other impulses that identify the originating number or other dialing, routing, addressing, or signaling information reasonably likely to identify the source of a wire or electronic communication, but not the contents of a communication.” §638.50(c).

Lesh v. Cable News Network, Inc “CNN” and its precedent:

This new wave of CIPA litigation stems from a single recent decision, Greenley v. Kochava, where the California court allowed a “pen register” claim to move past the motion-to-dismiss stage. In Kochava, plaintiff challenged the use of these new internet technologies, asserting that the defendant data broker’s software was able to collect a variety of data such as geolocation, search terms, purchase decisions, and spending habits. Applying the plain meaning of the word “process,” the Kochava court concluded that “software that identifies consumers, gathers data, and correlates that data through unique ‘fingerprinting’ is a process that falls within CIPA’s pen register definition.”

The Kochava court noted that no other court had interpreted Section 638.51, and while pen registers were traditionally physical machines used by law enforcement to record outbound calls from a telephone, “[t]oday pen registers take the form of software.” Accordingly, the court held that the plaintiff adequately alleged that the software could collect DRAS and was a “pen register.”

Kochava paved the way for 638.51 litigation, with hundreds of complaints filed since. The majority of these cases are being filed in Los Angeles County Superior Court by the Pacific Trial Attorneys in Newport Beach.

In Lesh v. Cable News Network, Inc., plaintiff accuses the multinational news network of installing three tracking tools to invade users’ privacy and track their browsing habits in violation of CIPA Section 638.51(a), which proscribes any “person” from “install[ing] or us[ing] a pen register or a trap and trace device without first obtaining a court order.”

Plaintiff alleges CNN uses three “Trackers” (PubMatic, Magnite, and Aniview) on its website, which constitute “pen registers.” The complaint alleges that to make CNN’s website load in a user’s browser, the browser sends an “HTTP request” or “GET” request to CNN’s servers, where the data is stored. In response to the request, CNN’s server sends an “HTTP response” back to the browser with a set of instructions on how to display the website, i.e., what images to load, what text should appear, or what music should play.

These instructions cause the Trackers to be installed on a user’s browser, which then causes the browser to send identifying information, including the user’s IP address, to the Trackers to analyze data, create and analyze the performance of marketing campaigns, and target specific users for advertisements. Accordingly, the Trackers are “pen registers,” or so the complaint alleges.
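The request-and-response mechanics alleged in the complaint can be sketched in a few lines of code. This is an illustrative toy only: the sample HTML, the first-party domain and the tracker hostnames below are invented, and the script simply flags third-party hosts referenced by a page, which is the route by which analytics tools typically end up receiving a visitor's IP address.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical HTTP response body a server might return; the tracker
# domains here are invented stand-ins for third-party tools.
SAMPLE_HTML = """
<html><head>
<script src="https://cdn.example-publisher.com/app.js"></script>
<script src="https://ads.tracker-one.example/pixel.js"></script>
<img src="https://beacon.tracker-two.example/i.gif">
</head><body>Hello</body></html>
"""

FIRST_PARTY = "example-publisher.com"  # hypothetical site operator's domain


class TrackerFinder(HTMLParser):
    """Collects third-party hosts referenced by <script> and <img> tags."""

    def __init__(self):
        super().__init__()
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname or ""
        # Anything not served from the first-party domain is a candidate
        # "Tracker" in the complaint's terminology: loading it makes the
        # browser contact that host, disclosing the visitor's IP address.
        if not host.endswith(FIRST_PARTY):
            self.third_party_hosts.add(host)


finder = TrackerFinder()
finder.feed(SAMPLE_HTML)
print(sorted(finder.third_party_hosts))
# → ['ads.tracker-one.example', 'beacon.tracker-two.example']
```

A real analysis would inspect live HTTP responses; the point here is only that the returned instructions, not the user, determine which third-party servers the browser contacts.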

On this basis, the Plaintiff is asking the court for an order certifying the class, plus statutory damages and attorney fees. The alleged class is as follows:

“Pursuant to Cal. Code Civ. Proc. § 382, Plaintiff seeks to represent a class defined as all California residents who accessed the Website in California and had their IP address collected by the Trackers (the “Class”).

The following people are excluded from the Class: (i) any Judge presiding over this action and members of her or her family; (ii) Defendant, Defendant’s subsidiaries, parents, successors, predecessors, and any entity in which Defendant or their parents have a controlling interest (including current and former employees, officers, or directors); (iii) persons who properly execute and file a timely request for exclusion from the Class; (iv) persons whose claims in this matter have been finally adjudicated on the merits or otherwise released; (v) Plaintiff’s counsel and Defendant’s counsel; and (vi) the legal representatives, successors, and assigns of any such excluded persons.”

Under this expansive reading, plaintiffs are alleging that almost any device or process that can track a user’s web session activity falls within the definition of a “pen register.”

We’ll keep an eye on this one, but until more helpful case law develops, the Kochava decision will keep open the floodgates of these new CIPA suits. Companies should keep in mind that unlike the other CIPA cases under Sections 631 and 632.7, 638.51 allows for a cause of action even where no “contents” are being “recorded,” making 638.51 easier to allege.

Additionally, companies should be mindful of CIPA’s consent exceptions and ensure they are obtaining consent to any technologies that may trigger CIPA.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability to both harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports earlier last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework that guides readers on the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. The step-by-step approach to good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz is considered a highly favorable platform for vulnerability searches in comparison to standard machine-learning AI, demonstrating the rapidly evolving nature of AI itself and of the products offered.
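The behavior-based detection idea can be illustrated with a minimal sketch: flag activity whose rate deviates sharply from a learned baseline. The request-rate numbers and the z-score threshold below are invented for demonstration; production systems use far richer behavioral models than a single statistic.

```python
from statistics import mean, stdev

# Hypothetical baseline: requests per minute observed during normal operation.
baseline = [110, 98, 105, 102, 95, 108, 100, 97, 103, 99]

mu = mean(baseline)      # learned "normal" level
sigma = stdev(baseline)  # learned variability


def is_anomalous(requests_per_minute: float, z_threshold: float = 3.0) -> bool:
    """Flag traffic whose z-score against the baseline exceeds the threshold."""
    z = abs(requests_per_minute - mu) / sigma
    return z > z_threshold


print(is_anomalous(104))  # → False: within normal variation
print(is_anomalous(900))  # → True: burst, e.g. malware beaconing
```

The design choice this illustrates is that the detector needs no signature of a specific malicious code; anything sufficiently unlike the learned baseline is surfaced for review, which is how such systems can catch previously unknown threats.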

A key recommendation is that AI systems should be used as an additional element to existing ICT, security systems and practices. Businesses must be aware of the continuous responsibility to have effective risk management in place with AI assisting alongside for further mitigation. The reports do not set new standards or legislative perimeters but instead emphasize the need for targeted guidelines, best practices and foundations which help cybersecurity and in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist, but as ENISA highlights, they are usually designed to ‘address specific types of attack’ which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need to have a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no means a perfect solution. Machine-learning AI will act on what it has been told under its programming, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with applicable regulations such as the EU GDPR, the Data Protection Act 2018, the anticipated Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resources. Further development of penetration-testing tools that specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models to manage cybersecurity. While AI is an excellent long-term assistant to save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool that could not only improve your business’s approach to cybersecurity and privacy but, through analysis of data, help anticipate behaviors and predict trends. The use of AI should be exercised with caution, but done correctly it could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

2024: The Year of the Telehealth Cliff

What does December 31, 2024, mean to you? New Year’s Eve? Post-2024 election? Too far away to know?

Our answer: December 31, 2024, is when we will go over a “telehealth cliff” if Congress fails to act before that date, directly impacting care and access for Medicare beneficiaries. What is this telehealth cliff? Let’s back up a bit.

TELEHEALTH COVERAGE POLICIES

Current statute (1834(m) of the Social Security Act) lays out payment and coverage policies for Medicare telehealth services. As written, the provisions significantly limit Medicare providers’—and therefore patients’—ability to utilize telehealth services. Some examples:

  • If the patient is in their home when the telehealth service is being provided, telehealth is generally not eligible for reimbursement.
  • Providers cannot bill for telehealth services provided via audio-only communication.
  • There is a narrow list of providers who are eligible to seek reimbursement for telehealth services.

COVID-19-RELATED TELEHEALTH FLEXIBILITIES

When the COVID-19 pandemic hit in 2020, a public health emergency (PHE) was declared. Congress passed several laws, and the administration acted through its own authorities to provide flexibilities around these Medicare telehealth restrictions. In general, nearly all statutory limitations on telehealth were lifted during the PHE. As we all know, utilization of telehealth skyrocketed.

The PHE ended last year, and through subsequent congressional efforts and regulatory actions by the Centers for Medicare and Medicaid Services (CMS), many flexibilities were extended beyond the end of the PHE, through December 31, 2024. Congress and CMS continue to grapple with how to support the provision of Medicare telehealth services for the future.

CMS has taken steps through the annual payment rule, the Medicare Physician Fee Schedule (MPFS), to align many of the payment and coverage policies for which it has regulatory authority with congressional deadlines. CMS has also restructured its telehealth list, giving more clarity to stakeholders and Congress as to which pandemic-era telehealth services could continue if an extension is passed. But CMS can’t address the statutory limitations on its own. Congress must legislate. CMS highlighted this in the final calendar year (CY) 2024 MPFS rule released on November 2, 2023, noting that “while the CAA, 2023, does extend certain COVID-19 PHE flexibilities, including allowing the beneficiary’s home to serve as an originating site, such flexibilities are only extended through the end of CY 2024.”

THE TELEHEALTH CLIFF

This brings us to the telehealth cliff. CMS generally releases the annual MPFS proposed rule in July, with the final rule coming on or around November 1. If history is any indication, Congress is not likely to act on the extensions much before the current December 31 deadline. This sets up the potential for a high level of uncertainty headed into 2025.

Going over this telehealth cliff would directly impact care and access for Medicare beneficiaries. The effects could be felt acutely in rural and underserved areas, where patients have been able to access, via telehealth, medical services that may have been out of reach for them in the past. The telehealth cliff would also impact how providers interact with their patients, and their collective ability to continue to utilize telehealth in a way that has benefited patients and providers alike. It could also influence how health plans choose to cover these services in the private marketplace beyond 2024. Such a dramatic change would impact business decisions for many providers and practices heading into 2025. And, at a time when provider shortages are still a significant issue, it would eliminate an option that has allowed many providers, practices and facilities to extend scarce resources for patient care.

TAKE ACTION

Stakeholders should be raising these concerns to Congress now. There are many ways to engage, including reaching out directly to key Members of Congress, looking for opportunities to testify or submit written testimony for relevant congressional hearings, and participating in organized events where Members of Congress will be present. This cliff can be avoided, but not without a concentrated effort and a lot of noise.

Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are reshaping information governance practices. Over the next few years, more companies are expected to harness AI and automation for data analysis, classification, and management. This approach will enhance risk identification and compliance while streamlining workflows and reducing administrative burdens, improving overall operational efficiency. Organizations that adapt to these technological advancements will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.
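The "decentralized and immutable" property behind the blockchain prediction above can be illustrated with a toy hash-chained ledger. This is a minimal Python sketch for intuition only, not any particular blockchain implementation; every name in it is illustrative:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute every link; any edit to an earlier block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "record A")
add_block(chain, "record B")
assert verify(chain)

chain[0]["data"] = "tampered"  # altering history...
assert not verify(chain)       # ...is detected on verification
```

Because each block commits to the hash of the one before it, silently editing an earlier record is detectable, which is the data-integrity property the prediction relies on.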

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

FCC Adopts Updated Data Breach Notification Rules

On December 13, 2023, the Federal Communications Commission (FCC) voted to update its 16-year-old data breach notification rules (the “Rules”). Pursuant to the FCC update, providers of telecommunications, Voice over Internet Protocol (VoIP) and telecommunications relay services (TRS) are now required to notify the FCC of a data breach, in addition to existing obligations to notify affected customers, the FBI and the U.S. Secret Service.

The updated Rules introduce a new customer notification timing requirement, requiring notice of a data breach to affected customers without unreasonable delay after notification to the FCC and law enforcement agencies, and in no case more than 30 days after the reasonable determination of a breach. The new Rules also expand the definition of “breach” to include “inadvertent access, use, or disclosure of customer information, except in those cases where such information is acquired in good faith by an employee or agent of a carrier or TRS provider, and such information is not used improperly or further disclosed.” The updated Rules further introduce a harm threshold, whereby customer notification is not required if a carrier or TRS provider can “reasonably determine that no harm to customers is reasonably likely to occur as a result of the breach,” or where the breach solely involves encrypted data and the encryption key was not affected.
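As a reading aid only, the harm-threshold logic summarized above can be restated as a small decision function. This is an illustrative sketch of the summary, not legal advice; the function name and inputs are hypothetical, and the rule text itself controls:

```python
def customer_notice_required(harm_reasonably_likely: bool,
                             encrypted_only: bool,
                             key_affected: bool) -> bool:
    """Sketch of the customer-notification harm threshold in the updated Rules.

    Notice to affected customers is not required when the carrier or TRS
    provider can reasonably determine that no harm to customers is likely,
    or when the breach solely involves encrypted data and the encryption
    key was not affected.
    """
    if not harm_reasonably_likely:
        return False            # harm threshold not met
    if encrypted_only and not key_affected:
        return False            # encrypted-data carve-out applies
    return True                 # otherwise, notify without unreasonable delay
```

When notice is required, it must follow notification to the FCC and law enforcement, and in no case come more than 30 days after the reasonable determination of a breach.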

The FCC Approves an NOI to Dive Deeper into AI and its Effects on Robocalls and Robotexts

AI is on the tip of everyone’s tongue these days, it seems. The Dame brought you a recap of President Biden’s executive order addressing AI at the beginning of the month. This morning, at the FCC’s open meeting, the commissioners were presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use in connection with robocalls and robotexts. The NOI focuses on five areas of interest:

  • First, the NOI seeks comment on whether, and if so how, the commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts made using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how technologies may impact consumers who receive robocalls and robotexts including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodialed calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other markings when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message.
  • Lastly, the NOI seeks comment on what next steps the commission should consider to further the inquiry.

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggests, “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical because, while the future of AI remains uncertain, one thing is clear: it has the potential to impact if not transform every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks and opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate, lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed to be particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning and how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the discussion of the AI NOI wrapped up with Chairwoman Rosenworcel commenting, “…I think we make a mistake if we only focus on the potential for harm. We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic unlike anything we have today. It can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves but we also want to harness the benefits.”

40 Countries Including US Vow Not to Pay Ransomware

The United States joined 39 other countries this week in the International Counter Ransomware Initiative, an effort to stem the flow of ransom payments to cybercriminals. The initiative aims to eliminate criminals’ funding through better information sharing about ransom payment accounts. Member states will develop two information-sharing platforms, one created by Lithuania and another jointly by Israel and the United Arab Emirates. Members of the initiative will share a “black list” through the U.S. Department of the Treasury, including information on digital wallets being used to move ransomware payments. Finally (in an interesting coming together of the two biggest-ticket items in technology), the initiative will utilize AI to analyze cryptocurrency blockchains to identify criminal transactions.

While government officials near-unanimously counsel against paying ransoms, organizations caught in a ransomware attack often pay to avoid embarrassment and to lower the cost of incident response and mitigation. In the aggregate, however, paying ransoms leads to ballooning ransom demands and escalating ransomware activity. This initiative may address these long-term trends.

Blair Robinson (Law Clerk – Not yet admitted to practice) authored this article.

Cybersecurity Awareness Dos and Don’ts Refresher

As we have adjusted to a combination of hybrid, in-person and remote work conditions, bad actors continue to exploit the vulnerabilities associated with our work and home environments. Below are a few tips to help employers and employees address the security threats and challenges of our new normal:

  • Monitoring and awareness of cybersecurity threats as well as risk mitigation;
  • Use of secure Wi-Fi networks, strong passwords, secure VPNs, network infrastructure devices and other remote working devices;
  • Use of company-issued or approved laptops and sandboxed virtual systems instead of personal computers and accounts, as well as careful handling of sensitive and confidential materials; and
  • Preparing to handle security incidents while remote.

Be on the lookout for phishing and other hacking attempts.

Be on high alert for cybersecurity attacks, as cybercriminals are always searching for security vulnerabilities to exploit. A malicious hacker could target employees working remotely by creating a fake coronavirus notice or a phony request for charitable contributions, or even by impersonating someone from the company’s Information Technology (IT) department. Employers should educate employees on the red flags of phishing emails and continuously remind employees to remain vigilant of potential scams, exercise caution when handling emails and report any suspicious communications.

Maintain a secure Wi-Fi connection.

Information transmitted over public and unsecured networks (such as a free café, store or building Wi-Fi) can be viewed or accessed by others. Employers should configure VPN for telework and enable multi-factor authentication for remote access. To increase security at home, employers should advise employees to take additional precautions, such as using secure Wi-Fi settings and changing default Wi-Fi passwords.

Change and create strong passwords.

Passwords that use pet or children’s names, birthdays or any other information that can be found on social media can be easily guessed by hackers. Employers should require account and device passwords to be sufficiently long and complex, including uppercase and lowercase letters, numbers and special characters. As an additional precaution, employees should consider changing their passwords before transitioning to remote work.
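The kind of length-and-complexity check described above can be sketched in a few lines of Python. The 12-character minimum and the specific character classes here are assumptions for illustration; actual policy requirements vary by organization:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Check the basic complexity rules described above: sufficient
    length plus uppercase, lowercase, digit, and special characters.
    The thresholds are illustrative, not a recommended standard."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),        # uppercase letter
        re.search(r"[a-z]", password),        # lowercase letter
        re.search(r"\d", password),           # number
        re.search(r"[^A-Za-z0-9]", password), # special character
    ]
    return all(bool(c) for c in checks)

assert not meets_policy("fluffy2015")          # pet name + year: too weak
assert meets_policy("C0rrect-Horse-Battery!")  # long mixed passphrase passes
```

Note that complexity checks like this are a floor, not a ceiling: a long random passphrase from a password manager will outperform any minimally compliant password.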

Update and secure devices. 

To reduce system flaws and vulnerabilities, employers should regularly update VPNs, network infrastructure devices and devices used for remote work, as well as advise employees to promptly accept updates to operating systems, software and applications on personal devices. When feasible, employers should consider implementing additional safeguards, such as keystroke encryption and mobile device management (MDM) on employee personal devices.

Use of personal devices and deletion of electronic files.

Home computers may not have deployed critical security updates, may not be password protected and may not have an encrypted hard drive. To the extent possible, employers should urge employees to use company-issued laptops or sandboxed virtual systems. Where this is not possible, employees should use secure personal computers, and employers should advise employees to create a separate user account on personal computers designated for work purposes and to regularly empty trash or recycle bins and download folders.

Prohibit use of personal email for work purposes.

To avoid unauthorized access, personal email accounts should not be used for work purposes. Employers should remind employees to avoid forwarding work emails to personal accounts and to promptly delete emails in personal accounts as they may contain sensitive information.

Secure collaboration tools.

Employees and teams working from home need to stay connected and often rely on instant-messaging and web-conferencing tools (e.g., Slack and Zoom). Employers should ensure company-provided collaboration tools, if any, are secure and should restrict employees from downloading any non-company approved tools. If new collaboration tools are required, IT personnel should review the settings of such tools (as they may not be secure or may record conversations by default), and employers should consider training employees on appropriate use of such tools.

Handle physical documents with care.

Remote work arrangements may require employees to take sensitive or confidential materials offsite that they would not otherwise. Employees should be advised to handle these documents with the appropriate levels of care and avoid printing sensitive or confidential materials on public printers. These documents should be securely shredded or returned to the office for proper disposal.

Develop clear guidelines and train employees on cyberhygiene.

To ensure employees are aware of remote work responsibilities and obligations, employers should prepare clear telework guidelines (and incorporate any standards required by applicable regulatory schemes) and post the guidelines on the organization’s intranet and/or circulate the guidelines to employees via email. A list of key company contacts, including Human Resources and IT security personnel, should be distributed to employees in the event of an actual or suspected security incident.

Prepare for remote activation of incident response and crisis management plans.

Employers should review existing incident response, crisis management and business continuity plans, as well as ensure relevant stakeholders are prepared for remote activation of these plans, such as having hard copies of relevant plans and contact information at home.

DO

  • DO create complex passphrases
  • DO change home Wi-Fi passwords
  • DO create a separate Wi-Fi network for guests
  • DO install anti-malware and anti-virus software for internet-enabled devices
  • DO keep software (including anti-virus/anti-malware software), web browsers, and operating systems up-to-date
  • DO delete files from download folders and trash bins
  • DO immediately report lost or stolen devices
  • DO log off accounts and close windows and browsers on shared devices
  • DO review mobile app settings on shared devices
  • DO handle physical documents with sensitive and/or confidential information in a secure manner

DON’T

  • Do NOT use public or unsecure Wi-Fi networks without using VPN
  • Do NOT access or send confidential information over unsecured Wi-Fi networks
  • Do NOT leave electronic or paper documents out in the open
  • Do NOT allow family or friends to use company-provided devices
  • Do NOT leave devices logged-in
  • Do NOT select “remember me” on shared devices
  • Do NOT share passwords with family members
  • Do NOT use names or birthdays in passwords
  • Do NOT save work documents locally on shared devices
  • Do NOT store confidential information on portable storage devices, such as USB or hard drives

Navigating the Updated Federal Trade Commission Guidelines for Social Media Influencer Marketing

The Federal Trade Commission (FTC) recently updated its Guides Concerning Use of Endorsements and Testimonials in Advertising (Guidelines). There has not been an update to the Guidelines since 2009, before TikTok even existed and Facebook was still the hip new kid on the block.

Clearly, a lot has changed since then, and being aware of and understanding the updates to these Guidelines is crucial for companies, influencers, brand ambassadors, and marketing professionals who engage in influencer marketing campaigns. The Guidelines take into account the evolving nature of influencer marketing and provide more specific guidance on how influencers can make clear and conspicuous disclosures to their followers. This summary provides a basic overview of the key changes and important points to consider in the wake of the updated Guidelines.

Background:

Anyone who has access to the internet is aware that social media influencer marketing has been a rapidly growing industry over the past decade, and the FTC recognizes the need for adequate transparency concerning this area of marketing to protect consumers from deceptive advertising practices.

The general aim of the updated Guidelines is to ensure consumers can clearly identify when a social media post, blog post, video, or other similar media is sponsored or contains affiliate links. The updated Guidelines seek to develop or make clear guidance concerning specifically: (1) who is considered an endorser; (2) what is considered an “endorsement”; (3) who can be liable for a deceptive endorsement; (4) what is considered “clear and conspicuous” for purposes of disclosure; (5) practices of consumer reviews; and (6) when and how paid or material connections need to be disclosed.

Key Changes and Considerations:

  1. Clear and Conspicuous Disclosure: Influencers must make disclosures clear and conspicuous. This means disclosures should be easily noticed, not buried within a long caption or hidden among a sea of hashtags. The Guidelines require that disclosure be “unavoidable” when posts are made through electronic mediums. The FTC suggests placing disclosures at the beginning of a post, especially on platforms where the full content can be cut off (e.g., Instagram). In broad terms, a disclosure will be deemed “clear and conspicuous” when “it is difficult to miss (i.e. easily noticeable) and easily understandable by ordinary consumers.”
  2. Updated Definition of “Endorsements”: The FTC has broadened its definition of “endorsements” and what it deems to be deceptive endorsement practices to include fake positive or negative reviews, tags on social media platforms, and virtual (AI) influencers.
  3. Use of Hashtags: The Guidelines still hold that commonly used disclosure hashtags such as #ad, #sponsored, and #paidpartnership are acceptable, but they must be displayed in a manner that is easily perceptible by consumers. Influencers should avoid vague or ambiguous hashtags that may not clearly indicate a paid relationship. Keep in mind, however, that whether a specific social media tag counts as an endorsement disclosure is subject to fact-specific review.
  4. In-Platform Tools: Social media platforms increasingly provide built-in tools for influencers to mark their posts as sponsored. Be aware, however, that the Guidelines emphasize that while these tools can be helpful in disclosing partnerships, they are not always sufficient to ensure that disclosures are clear and conspicuous. Parties using these tools should carefully evaluate whether they are clearly and conspicuously disclosing material connections.
  5. Affiliate Marketing: If influencers include affiliate links in their content, they must disclose this relationship. Simply using affiliate links is considered a material connection and requires disclosure. Phrases such as “affiliate link” or “commission earned” can be used to disclose affiliate relationships.
  6. Endorsements and Testimonials: The FTC guidelines apply not only to sponsored content but also to endorsements and testimonials. Influencers must disclose material connections with the brands whose products they endorse, whether they received compensation or discounted/free products. Beyond such financial relationships, influencers also need to disclose non-financial relationships, such as being friends with a brand’s owners or employees.
  7. Ongoing Relationships: Disclosures should be made in every post or video in which a material connection or benefit exists, even in cases of ongoing or long-term partnerships.
  8. Endorsements Directed at Children: The updated Guidelines add a new section specifically addressing advertising focused on reaching children. The FTC states that such advertising “may be of special concern because of the character of the audience.” While the Guidelines do not offer specific guidance on how to address advertisements intended for children, those targeting children as the intended audience should pay special attention to the “clear and conspicuous” requirements espoused by the FTC.

Enforcement and Penalties:

The FTC takes non-compliance with these guidelines seriously and can impose significant fines and penalties on brands, marketers, and influencers who fail to make proper disclosures. Significantly, the updated Guidelines make it clear that influencers who fail to make proper disclosures may be personally liable to consumers who are misled by their endorsements. Furthermore, brands and marketers may also be held responsible for ensuring that influencers with whom they have paid relationships adhere to these guidelines.

Conclusion:

Bear in mind, the Guidelines themselves are not the law, but they serve as a vital guide to avoid breaking it. Overall, the updated Guidelines on influencer disclosures emphasize transparency and consumer protection. To stay compliant and maintain consumer trust, it is imperative that all parties involved in influencer marketing familiarize themselves with these Guidelines and ensure that disclosures are clear, conspicuous, and consistently made in every relevant post or video. Furthermore, as this marketing industry continues to develop and evolve, it will be increasingly important to monitor ongoing developments and changes in the FTC guidelines to stay current with best practices.