Checklist for Transitioning Founder-Owned Law Firms

When transitioning from a founder-owned law firm, it’s essential to establish a clear plan to ensure the firm’s continued growth and stability. A successful transition depends on strategic priorities that enhance operational efficiency, improve client satisfaction, and secure long-term success.

Below, we outline the key areas to analyze and implement for a seamless shift in leadership and operations.

  1. Work-Life Timelines

Work-life timelines act as a roadmap for planning the future of the firm. They provide a structured planning horizon that helps leadership forecast and prepare for critical milestones, such as retirements or leadership transitions. For instance, mapping out partner retirement dates allows the firm to identify when leadership gaps may occur and develop succession plans proactively.

  2. Marketing Effectiveness

Effective marketing strategies are the backbone of a firm’s revenue growth. Assessing your marketing effectiveness involves analyzing the ability to meet revenue goals while considering the business risks associated with exiting partners. For example, if a founder has historically been a key rainmaker, your marketing plan must address how to replace their client development efforts with targeted campaigns and new initiatives, such as digital outreach or niche practice area marketing.

  3. Attorney Development

Attorney development ensures that the firm maintains a continuous and adaptable skill set. As founders exit, having a pipeline of well-trained attorneys is critical to sustaining client relationships and maintaining institutional knowledge. Regular mentorship programs, skill-building workshops, and tailored career growth plans help prepare attorneys to take on leadership roles in the future.

  4. Recruiting Effectiveness

Strong recruiting processes are essential for addressing capability and capacity gaps created by departing founders. Recruiting effectiveness goes beyond hiring; it involves attracting and retaining top legal talent who align with the firm’s culture and goals. Offering competitive benefits, a clear career trajectory, and a supportive environment can position the firm as a destination for top-tier candidates.

  5. Compensation and Incentives

A well-designed compensation and incentive structure is vital to the firm’s profitability and transition success. Attracting high-profit lateral hires, ensuring partners are practicing profitably, and facilitating smooth transitions for senior partners require thoughtful compensation planning. For example, implementing performance-based bonuses tied to billable hours or collections can motivate both current attorneys and incoming talent.

  6. Policy Development

Clear and consistent policies build trust and promote a culture of fairness among partners, associates, and staff. Whether it’s defining work-from-home expectations or delineating the decision-making process, policy development ensures that the firm operates smoothly during and after the leadership transition.

  7. Partnership or Operating Agreements

A robust partnership or operating agreement ensures that decision-making processes are clear and actions carry appropriate weight. These agreements provide a framework for resolving disputes, allocating equity, and governing major decisions—such as onboarding new partners or adjusting compensation structures. This clarity helps reduce friction during transitional periods.

  8. Equity Transfer Processes

Equity transfer is one of the most sensitive aspects of transitioning a founder-owned firm. Establishing clear processes for equity transfer ensures that the firm can perpetuate itself without unnecessary controversy. By structuring buyouts or equity redistribution in advance, the firm avoids disruptions that could harm operations or morale.

  9. Technology

Investing in technology is critical for maintaining efficiency and gaining a competitive edge. Technology tools, such as practice management systems, client portals, and AI-driven analytics, streamline operations and strengthen client relationships. For instance, adopting cloud-based platforms allows for seamless collaboration among team members and improves data security during the transition.

  10. Supportive Platforms

Creating a supportive platform that elevates the success of lawyers and staff is key to a smooth transition. This might include mentorship programs, robust professional development opportunities, and fostering a collaborative work culture. A supportive platform not only helps retain existing talent but also enhances the firm’s reputation as a desirable place to work.

  11. Trained and Motivated Staff

A well-trained and motivated staff is essential for maintaining operational continuity during a leadership transition. Cross-training employees on various roles and responsibilities ensures that knowledge is retained and transferred effectively. For example, ensuring paralegals are familiar with new practice management systems or administrative protocols reduces the risk of disruption.

  12. Implementation

Strategic planning is only as good as its implementation. Moving from the planning phase to actionable steps is vital for securing the firm’s long-term interests. By setting clear timelines, assigning responsibilities, and tracking progress, the firm can ensure that the transition plans lead to tangible outcomes.

Conclusion

By focusing on these critical areas, your firm can develop a comprehensive, thoroughly analyzed, and ready-to-implement set of priorities. These steps will help your firm thrive in the post-founder era while ensuring smooth transitions, client retention, and operational excellence. Transitioning a founder-owned law firm may seem daunting, but with careful planning and execution, your firm can secure a prosperous future.

A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding Artificial Intelligence (AI) and the possibility of hallucinations in a closed system is necessary for any lawyer who uses such technology. AI has made significant strides in recent years, demonstrating remarkable capabilities in various fields, from natural language processing to large language models to generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical – a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in a closed system, is crucial for improving AI reliability in the practice of law.

What Are AI Hallucinations?
AI hallucinations are instances where AI systems generate information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompts, fabricated case details, a false medical analysis, or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In legal practice, this can include environments or legal AI tools that rely on a selected universe of information, such as a case file database, saved case-specific medical records, discovery responses, deposition transcripts, and pleadings.

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open-facing AI that can access the internet, rely entirely on the data they were trained on. If that data is incomplete, biased, or not representative of the real world, the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios not well represented in its training data. Similarly, if an AI tool is used incorrectly by way of misused data prompts, even a closed system can produce incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.
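
To make the concept concrete outside the legal context, here is a minimal Python sketch using synthetic data invented purely for illustration: a high-degree polynomial memorizes a small, fixed training set, noise and all, and its predictions degrade on new inputs drawn from the same range.

    import numpy as np
    from numpy.polynomial import polynomial as P

    # A small, static "training set": 8 noisy samples of the line y = 2x.
    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 1.0, 8)
    y_train = 2.0 * x_train + rng.normal(scale=0.2, size=8)

    # A degree-7 polynomial has enough freedom to memorize all 8 points,
    # noise included -- the analogue of overfitting a limited, static dataset.
    coeffs = P.polyfit(x_train, y_train, deg=7)

    # Near-zero residuals on the training data: the noise was memorized.
    print(P.polyval(x_train, coeffs) - y_train)

    # Much larger errors on new inputs from the same range.
    x_new = np.linspace(0.05, 0.95, 6)
    print(P.polyval(x_new, coeffs) - 2.0 * x_new)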

Extrapolation Error
AI models must generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible-sounding but factually incorrect information based on incomplete context.
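
The same failure mode can be shown numerically. In this minimal sketch, again with invented data, a simple model fit only on a narrow input range gives a confident but badly wrong answer outside that range.

    import numpy as np

    # Fit a quadratic to data sampled only on the narrow range [0, 1],
    # where the true relationship (a sine wave) happens to look simple.
    x_train = np.linspace(0.0, 1.0, 20)
    y_train = np.sin(x_train)
    coeffs = np.polyfit(x_train, y_train, deg=2)  # highest-degree term first

    # Inside the training range, the fit is excellent...
    print(np.polyval(coeffs, 0.5), np.sin(0.5))  # nearly identical values
    # ...but far outside it, the extrapolation is confidently wrong.
    print(np.polyval(coeffs, 6.0), np.sin(6.0))  # very different values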

Implications of Hallucinations for Lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI-generated content without verification could lead to the dissemination of, or reliance upon, false information, which can gravely harm both a client and the lawyer. Lawyers have a duty to provide accurate and reliable advice, information, and court filings. Using AI tools that can produce hallucinations without proper checks could well breach a lawyer’s ethical duty to her client, and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safeguard against hallucinations. A lawyer should always verify any AI-generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs, especially in critical areas such as legal research, document drafting, and case analysis, is an ethical requirement.

Notably, the lawyer’s choice of AI tool is critical. A well-vetted closed system allows a lawyer to trace the origin of outputs and maintain control over the source materials. For prompt-based data searches with multiple task prompts, a comprehensive understanding of how the prompts were designed to be used, and their proper use, is also essential to avoid hallucinations in a closed system. Improper use of an AI tool, even in a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to utilize AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates can make the use of AI tools more effective and help safeguard against hallucinations.

Takeaway
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management, and training, a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes, and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

Cybersecurity Crunch: Building Strong Data Security Programs with Limited Resources – Insights from Tech and Financial Services Sectors

In today’s digital age, cybersecurity has become a paramount concern for executives navigating the complexities of their corporate ecosystems. With resources often limited and the ever-present threat of cyberattacks, establishing clear priorities is essential to safeguarding company assets.

Building the right team of security experts is a critical step in this process, ensuring that the organization is well-equipped to fend off potential threats. Equally important is securing buy-in from all stakeholders, as a unified approach to cybersecurity fosters a robust defense mechanism across all levels of the company.

This insider’s look at cybersecurity will delve into the strategic imperatives for companies aiming to protect their digital frontiers effectively.

Where Do You Start on Cybersecurity?
Pressures on corporate security teams are growing, both from internal stakeholders and outside threats. But the resources to do the job aren’t growing with them. So how can companies protect themselves in a real-world environment, where finances, employee time, and other resources are finite?

“You really have to understand what your company is in the business of doing,” Wilson said. “Every business will have different needs. Their risk tolerances will be different.”

“You really have to understand what your company is in the business of doing. Every business will have different needs. Their risk tolerances will be different.”

BRIAN WILSON, CHIEF INFORMATION SECURITY OFFICER, SAS
For example, Tuttle said that in the manufacturing sector, digital assets and data have become increasingly important in recent years. The physical product is no longer the be-all and end-all of the company’s success.

For cybersecurity professionals, this new reality leads to challenges and tough choices. Having a perfect cybersecurity system isn’t possible—not for a company doing business in a modern, digital world. Tuttle said, “If we’re going to enable this business to grow, we’re going to have to be forward-thinking.”

That means setting priorities for cybersecurity. Inskeep, who previously worked in cybersecurity for one of the world’s largest financial services institutions, said multi-factor authentication and access control are a good starting point, particularly against phishing and ransomware attacks. He also said companies need good back-up systems that enable them to recover lost data, as well as robust incident response plans.

“Bad things are going to happen,” Wilson said. “You need to have logs and SIEMs to tell a story.”

Tuttle said one challenge in implementing an incident response plan is engaging team members who aren’t on the front lines of cybersecurity. “They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right,” she said. “They need to be thinking, ‘What should I be looking for and what’s my response?’”

“They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right. They need to be thinking, ‘What should I be looking for and what’s my response?’”

LISA TUTTLE, CHIEF INFORMATION SECURITY OFFICER, SPX TECHNOLOGIES
Wilson said tabletop exercises and security awareness training “are a good feedback loop to have to make sure you’re including the right people. They have to know what to do when something bad happens.”

Building a Security Team
Hiring and retaining good people in a harrowing field can be a challenge. Companies should leverage their external and internal networks to find data privacy and cybersecurity team members.

Wilson said SAS uses an intern program to help ensure they have trained professionals already in-house. He also said a company’s Help Desk can be a good source of talent.

Remote work also allows companies to cast a wider net for hiring employees. The challenge becomes keeping remote workers engaged, and companies should consider how they can make these far-flung team members feel part of the team.

Inskeep said burnout is a problem in the cybersecurity field. “It’s a job that can feel overwhelming sometimes,” he said. “Interacting with people and protecting them from that burnout has become more critical than ever.”

“It’s a job that can feel overwhelming sometimes. Interacting with people and protecting them from that burnout has become more critical than ever.”

TODD INSKEEP, FOUNDER AND CYBERSECURITY ADVISOR, INCOVATE SOLUTIONS
Weighing Levels of Compliance
The first step, Claypoole said, is understanding the compliance obligations the company faces. These obligations include both regulatory requirements (which are tightening) and contract terms from customers.

“For a business, that can be scary, because your business may be agreeing to contract terms with customers and they aren’t asking you about the security requirements in those contracts,” Wilson said.

The panel also noted that “compliance” and “security” aren’t the same thing. Compliance is a minimum set of standards that must be met, while security is a more wide-reaching goal.

But company leaders must realize they can’t have a perfect cybersecurity system, even if they could afford it. It’s important to identify priorities—including which operations are the most important to the company and which would be most disruptive if they went offline.

Wilson noted that global privacy regulations are increasing and becoming stricter every year. In addition, federal officials have taken criminal action against CSOs in recent years.

“Everybody’s radar is kind of up,” Tuttle said. The increasing compliance pressure also means it’s important for cybersecurity teams to work collaboratively with other departments, rather than making key decisions in a vacuum. Inskeep said such decisions need to be carefully documented as well.

“If you get to a place where you are being investigated, you need your own lawyer,” Claypoole said.

“If you get to a place where you are being investigated, you need your own lawyer.”

TED CLAYPOOLE, PARTNER, WOMBLE BOND DICKINSON
Cyberinsurance is another consideration for data privacy teams, and it can help Chief Security Officers make the case for more resources (both financial and work hours). Inskeep said cyberinsurance questions also can help companies identify areas of risk and where they need to prioritize their efforts. Such priorities can change, and he said companies need a committee or some other mechanism to regularly review and update cybersecurity priorities.

Wilson said one positive change he’s seen is that top executives now understand the importance of cybersecurity and are more willing to include cybersecurity team members in the up-front decision-making process.

Bringing in Outside Expertise
Consultants and vendors can be helpful to a cybersecurity team, particularly for smaller teams. Companies can move certain functions to third-party consultants, allowing their own teams to focus on core priorities.

“If we don’t have that internal expertise, that’s a situation where we’d call in third-party resources,” Wilson said.

Bringing in outside professionals also can help a company keep up with new trends and new technologies.

Ultimately, a proactive and well-coordinated cybersecurity strategy is indispensable for safeguarding the digital landscape of modern enterprises. With an ever-evolving threat landscape, companies must be agile in their approach and continuously review and update their security measures. At the core of any effective cybersecurity plan is a comprehensive risk management framework that identifies potential vulnerabilities and outlines steps to mitigate their impact. This framework should also include incident response protocols to minimize the damage in case of a cyberattack.

In addition to technology and processes, the human element is crucial in cybersecurity. Employees must be educated on how to spot potential threats, such as phishing emails or suspicious links, and know what steps to take if they encounter them.

Key Takeaways:
Identify the biggest risk areas and how to minimize those risks.
Know your external cyber footprint. This is what attackers see and will target.
Align with your team, your peers, and your executive staff.
Prioritize implementing multi-factor authentication and controlling access to protect against common threats like phishing and ransomware.
Develop reliable backup systems and robust incident response plans to recover lost data and respond quickly to cyber incidents.
Engage team members who are not on the front lines of cybersecurity to ensure quick identification and escalation of potential threats.
Conduct tabletop exercises and security awareness training regularly.
Leverage intern programs and help desk personnel to build a strong cybersecurity team internally.
Explore remote work options to widen the talent pool for hiring cybersecurity professionals, while keeping remote workers engaged and integrated.
Balance regulatory compliance with overall security goals, understanding that compliance is just a minimum standard.

Copyright © 2024 Womble Bond Dickinson (US) LLP All Rights Reserved.

by: Theodore F. Claypoole of Womble Bond Dickinson (US) LLP


AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned For Using Fake ChatGPT Cases In Legal Brief, Reuters (June 26, 2023).

To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and continuing to adhere to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform can be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system, to ensure the safety of the client’s privileged, proprietary, and confidential information. A failure to do so may not only expose your client’s sensitive information to an insecure, and potentially harmful, online network; it may also violate the duty to make reasonable efforts to prevent the disclosure of, or unauthorized access to, your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to adhere to the duty of candor when making representations to the Court. As mentioned, violations of that duty have already arisen from statements included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier on larger litigation teams. As a result, it is incumbent not only on those preparing the briefs to ensure that any information created with the assistance of an AI platform is accurate, but also on the lawyers responsible for oversight of a litigation team to be diligent in understanding when and to what extent AI is being used to aid the work of their subordinates. As with confidentiality considerations, many courts’ rules of professional conduct include rules on senior lawyers’ responsibilities and oversight of subordinate lawyers. To abide by those rules, litigation team leaders should discuss with their teams the appropriate use of AI at the outset of any matter, and put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing to understand before litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. In response to the rise of AI use in litigation, many Courts across the country have developed standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court, January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and the development of work product.

5 Trends to Watch: 2024 Artificial Intelligence

  1. Banner Year for Artificial Intelligence (AI) in Health – With AI-designed drugs entering clinical trials, growing adoption of generative AI tools in medical practices, increasing FDA approvals for AI-enabled devices, and new FDA guidance on AI usage, 2023 was a banner year for advancements in AI for medtech, healthtech, and techbio—even with the industry-wide layoffs that also hit digital and AI teams. The coming year should see continued innovation and investment in AI in areas from drug design to new devices to clinical decision support to documentation and revenue cycle management (RCM) to surgical augmented reality (AR) and more, together with the arrival of more new U.S. government guidance on and best practices for use of this fast-evolving technology.
  2. Congress and AI Regulation – Congress continues to grapple with the proper regulatory structure for AI. At a minimum, expect Congress in 2024 to continue funding AI research and the development of standards required under the Biden Administration’s October 2023 Executive Order. Congress will also debate legislation relating to the use of AI in elections, intelligence operations, military weapons systems, surveillance and reconnaissance, logistics, cybersecurity, health care, and education.
  3. New State and City Laws Governing AI’s Use in HR Decisions – Look for additional state and city laws to be enacted governing an employer’s use of AI in hiring and performance software, similar to New York City’s Local Law 144, known as the Automated Employment Decisions Tools law. More than 200 AI-related laws have been introduced in state legislatures across the country, as states move forward with their own regulation while debate over federal law continues. GT expects 2024 to bring continued guidance from the EEOC and other federal agencies, mandating notice to employees regarding the use of AI in HR-function software as well as restricting its use absent human oversight.
  4. Data Privacy Rules Collide with Use of AI – Application of existing laws to AI, both within the United States and internationally, will be a key issue as companies apply transparency, consent, automated decision making, and risk assessment requirements in existing privacy laws to AI personal information processing. U.S. states will continue to propose new privacy legislation in 2024, with new implementing regulations for previously passed laws also expected. Additionally, there’s a growing trend towards the adoption of “privacy by design” principles in AI development, ensuring privacy considerations are integrated into algorithms and platforms from the ground up. These evolving legal landscapes are not only shaping AI development but also compelling organizations to reevaluate their data strategies, balancing innovation with the imperative to protect individual privacy rights, all while trying to “future proof” AI personal information processing from privacy regulatory changes.
  5. Continued Rise in AI-Related Copyright & Patent Filings, Litigation – Expect the Patent and Copyright Offices to develop and publish guidance on issues at the intersection of AI and IP, including patent eligibility and inventorship for AI-related innovations, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training, as mandated in the Biden Administration Executive Order. IP holders are likely to become more sophisticated in how they integrate AI into their innovation and authorship workflows. And expect to see a surge in litigation around AI-generated IP, particularly given the ongoing denial of IP protection for AI-generated content and the lack of precedent in this space in general.

Nineteen States Have Banned TikTok on Government-Issued Devices

Governors of numerous states have issued Executive Orders in the past several weeks banning TikTok from government-issued devices, and many states have already implemented a ban, with others considering similar measures. There is also bipartisan support for a ban in the Senate, which unanimously approved a bill last week that would ban the app from devices issued by federal agencies. There is already a ban prohibiting military personnel from downloading the app on government-issued devices.

The bans are in response to the national security concerns that TikTok poses to U.S. citizens.

To date, 19 states have issued some sort of ban on the use of TikTok on government-issued devices, including some Executive Orders banning the use of TikTok statewide on all government-issued devices. Other state officials have implemented a ban within an individual state department, such as the Louisiana Secretary of State’s Office. In 2020, Nebraska was the first state to issue a ban. Other states that have banned TikTok use in some way are: South Dakota, North Dakota, Maryland, South Carolina, Texas, New Hampshire, Utah, Louisiana, West Virginia, Georgia, Oklahoma, Idaho, Iowa, Tennessee, Alabama, Virginia, and Montana.

Indiana’s Attorney General filed suit against TikTok alleging that the app collects and uses individuals’ sensitive and personal information, but deceives consumers into believing that the information is secure. We anticipate that both the federal government and additional state governments will continue to assess the risk and issue bans on its use in the next few weeks.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Following the Recent Regulatory Trends, NLRB General Counsel Seeks to Limit Employers’ Use of Artificial Intelligence in the Workplace

On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”).  The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.”  Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.

Under the General Counsel’s proposed framework, an employer can avoid a violation of the Act if it can demonstrate that its business needs require the electronic monitoring and management practices and the practices “outweigh” employees’ Section 7 rights.  Not only must the employer be able to make this showing, it must also demonstrate that it provided the employees advance notice of the technology used, the reason for its use, and how it uses the information obtained.  An employer is relieved of this obligation, according to the General Counsel, only if it can show “special circumstances” justifying “covert use” of the technology.

In GC 23-02, the General Counsel signaled to NLRB Regions that they should scrutinize a broad range of “automated management” and “algorithmic management” technologies, defined as “a diverse set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making.” Technologies subject to this scrutiny include those used during working time, such as wearable devices, security cameras, and radio-frequency identification badges that record workers’ conversations and track the movements of employees; GPS tracking devices and cameras that track the productivity and location of employees who are out on the road; and computer software that takes screenshots, webcam photos, or audio recordings. Also subject to scrutiny are technologies employers may use to track employees while they are off duty, such as employer-issued phones and wearable devices, and applications installed on employees’ personal devices. Finally, the General Counsel noted that technologies an employer uses to hire employees, such as online cognitive assessments and reviews of social media, “pry into job applicants’ private lives.” Thus, these pre-hire practices may also violate the Act. Technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02.

GC 23-02 follows the wave of recent federal guidance from the White House, the Equal Employment Opportunity Commission, and local laws that attempt to define, regulate, and monitor the use of artificial intelligence in decision-making capacities.  Like these regulations and guidance, GC 23-02 raises more questions than it answers.  For example, GC 23-02 does not identify the standards for determining whether business needs “outweigh” employees’ Section 7 rights, or what constitutes “special circumstances” that an employer must show to avoid scrutiny under the Act.

While GC 23-02 sets forth the General Counsel’s proposal and thus is not legally binding, it does signal that there will likely be disputes in the future over artificial intelligence in the employment context.

©2022 Epstein Becker & Green, P.C. All rights reserved.

White House Office of Science and Technology Policy Releases “Blueprint for an AI Bill of Rights”

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems.

The Blueprint comprises five key principles:

  1. The first Principle seeks to protect individuals from unsafe or ineffective AI systems, and encourages consultation with diverse communities, stakeholders, and experts in developing and deploying AI systems, as well as rigorous pre-deployment testing, risk identification and mitigation, and ongoing monitoring of AI systems.

  2. The second Principle seeks to establish safeguards against discriminative results stemming from the use of algorithmic decision-making, and encourages developers of AI systems to take proactive measures to protect individuals and communities from discrimination, including through equity assessments and algorithmic impact assessments in the design and deployment stages.

  3. The third Principle advocates for building privacy protections into AI systems by default, and encourages AI systems to respect individuals’ decisions regarding the collection, use, access, transfer, and deletion of personal information where possible (and, where not possible, to use privacy-by-design safeguards by default).

  4. The fourth Principle emphasizes the importance of notice and transparency, and encourages developers of AI systems to provide a plain language description of how the system functions and the role of automation in the system, as well as when an algorithmic system is used to make a decision impacting an individual (including when the automated system is not the sole input determining the decision).

  5. The fifth Principle encourages the development of opt-out mechanisms that provide individuals with the option to access a human decisionmaker as an alternative to the use of an AI system.

In 2019, the European Commission published a similar set of automated systems governance principles, called the Ethics Guidelines for Trustworthy AI. The European Parliament currently is in the process of drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of the Commission’s Ethics Guidelines. The current draft of the EU Artificial Intelligence Act requires developers of open-source AI systems to adhere to detailed guidelines on cybersecurity, accuracy, transparency, and data governance, and provides for a private right of action.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

A Paralegal’s Guide to Legal Calendar Management

Law firms of all sizes are increasingly relying on legal technology to address their day-to-day responsibilities. From family law to criminal law to personal injury law, law practice management software can help law firms run smoothly and efficiently.

The benefits of this legal technology aren’t limited to lawyers; they extend to the paralegals they work closely with.

The demand for paralegals is growing at an average of 12% each year, and paralegal technology can support their efficiency and workflows. Many of the manual tasks that paralegals do, such as creating, organizing, and filing court documents, can be automated to free up time to focus on more critical tasks.

What Do Paralegals Do?

Working under the supervision of an attorney, a paralegal’s work is merged with and used as part of the attorney’s work for the client. Paralegals cannot give legal advice or perform any legal duties that fall under the scope of the licensed attorney, and they must be clear in their non-lawyer status with clients and the public.

The typical duties of a paralegal may include:

  • Conducting client interviews and maintaining client contact

  • Locating and interviewing witnesses

  • Conducting investigations and statistical and documentary research

  • Performing legal research

  • Drafting legal documents, correspondence, and pleadings

  • Summarizing depositions, interrogatories, and testimony

  • Attending executions of wills, real estate closings, depositions, court or administrative hearings, and trials with the attorney

  • Authoring and signing correspondence, as long as the paralegal’s status is clearly indicated and the correspondence does not contain independent legal advice or opinions

In a law firm, a paralegal’s time for legal work — not clerical or administrative work — may be billed to clients the same way as an attorney’s time, but at a lower hourly rate.

The paralegal profession originated in law firms, but now, paralegals may be employed by government organizations, banks, insurance companies, and healthcare providers.

Aside from basic technology tools for sending emails, making calls, or creating documents, there are resources specifically designed for paralegal work. Some of these include:

  • Case management software: One of the responsibilities of a paralegal is helping firms track client case information. Case management software enables paralegals and other staff to collaborate on cases in real time.

  • Billing software: Client billing is a time-consuming process at the end of the billing period. Paralegals may use billing software to help automate bill generation, collection, and review. Online billing allows clients to receive bills directly and gets the firm paid faster.

  • Client intake software: With manual client intake, clients fill out paperwork and the information must be transcribed digitally. This process is inefficient and error-prone, even with a fillable PDF. Automated client intake technology captures vital details for paralegals, and forms can be shared with a link. The information can be synced with other technologies to avoid duplicate data entry.

  • eSignature software: Signatures are required for most legal documents. Instead of hand-signing and scanning documents, e-signature technology allows paralegals to collect, sign, and store documents with a click of a button.

Paralegals may use some or all of these legal technologies, depending on the size of the firm and its practice areas.

What Is Legal Calendar Management?

Calendar management is the systematic process of organizing tasks, meetings, and events with the goal of maximizing the return on the time invested. The work can be time-consuming, but it’s essential to the functioning of the firm.

A well-managed calendar supports attorneys and sets them up for success. It has the power to make or break an attorney’s daily workflow and long-term success, which is why effective calendar management is one of the most important skills for a paralegal.

Legal calendar management is a resource that manages deadlines, meetings, and events in a centralized location. Paralegals, attorneys, and other staff can have shared access and individual alerts or notifications to ensure that crucial tasks never fall through the cracks.

Prior to digital legal calendar management, attorneys had to calculate deadlines manually — a time-consuming and error-prone process. Legal calendar management automatically calculates deadlines to expedite the process and ensure accuracy.
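
As a simplified sketch of what such software does under the hood, consider the following Python example. The 30-day response period and the holiday list are hypothetical, and real deadline rules vary by court, case type, and jurisdiction.

    from datetime import date, timedelta

    # Hypothetical court holidays, for illustration only.
    COURT_HOLIDAYS = {date(2024, 7, 4), date(2024, 9, 2)}

    def next_court_day(d: date) -> date:
        """Roll a date forward past weekends and court holidays."""
        while d.weekday() >= 5 or d in COURT_HOLIDAYS:
            d += timedelta(days=1)
        return d

    def response_deadline(served_on: date, days: int = 30) -> date:
        """Deadline N calendar days out, rolled forward to the next court day."""
        return next_court_day(served_on + timedelta(days=days))

    # A complaint served June 4, 2024 has a 30-day deadline of July 4,
    # a listed holiday, so the deadline rolls to Friday, July 5.
    print(response_deadline(date(2024, 6, 4)))  # 2024-07-05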

With automated workflows, legal calendar management allows legal professionals to build workflows for each type of case or practice area of the firm.

For busy professionals juggling multiple responsibilities and clients, this ensures that important deadlines are not missed.

Just as they would schedule a meeting or task, paralegals should block focus time to manage and organize their calendars. Use these best practices to simplify how you manage your calendar.

Use a Coding System

Color coding creates an organizational schematic for the calendar. For example, using colors for different categories like client, internal, recurring, reminder, and travel helps everyone quickly identify the tasks that are relevant.

Implement a Centralized, Firm-Wide Calendar

Law firms should have a centralized calendar that’s used throughout the firm and managed by an experienced paralegal. This ensures that the firm staff has access to crucial information and deadlines from anywhere.

The calendar should be flexible and allow for different departments to toggle their view of desired information.

Create Guidelines for Updating the Calendar

Legal calendars have a lot of moving parts and may involve multiple parties. This is why it’s important to create guidelines or rules for everyone in the firm when updating the calendar. For example, who submits case information? Who verifies the deadlines and completes follow-ups?

Incorporating this information in your firm’s workflows will ensure all staff members understand what they’re responsible for, and when. This process should be standardized to alleviate bottlenecks and to help with onboarding and training new staff.

Get The Entire Firm On Board

A new process takes time to implement and may come with learning curves. However, an efficient, organized legal calendar can’t be accomplished without buy-in across the firm.

There can be friction among staff when implementing new technology, especially if the firm has been more traditional. Take a top-down approach that begins with senior partners and managers. They can take the lead to bring everyone on board and get them excited about the capabilities of the new technology. No one likes change, but preparing the team can reduce friction and make the implementation process more efficient.

But remember, the best technology in the world is still just technology. It’s up to your firm and staff to use it to its fullest. Establishing clear roles and responsibilities for leaders and staff, providing training, and both giving and receiving feedback ensure that the legal calendar management software’s features and tools are used appropriately for your firm’s needs.

© Copyright 2022 PracticePanther

All Federal Research Agencies to Update Public Access Policies

On 25 August 2022, the Office of Science and Technology Policy (OSTP) released a guidance memorandum instructing federal agencies with research and development expenditures to update their public access policies. Notably, OSTP is retracting prior guidance that gave agencies discretion to allow a 12-month embargo on the free and public release of peer-reviewed publications, so that federally funded research results will be timely and equitably available at no cost. The memo also directs affected agencies to develop policies that:

  1. Ensure public access to scientific data, even if not associated with peer-reviewed publications;
  2. Ensure scientific and research integrity in the agency’s public access by requiring publication of the metadata, including the unique digital persistent identifier; and
  3. Coordinate with OSTP to ensure equitable delivery of federally funded research results and data.

KEY COMPONENTS OF GUIDANCE:

Updating Public Access Policies

Federal agencies will need to develop new, or update existing, public access plans, and submit them to OSTP and the Office of Management and Budget (OMB). Deadlines for submission are within 180 days for federal agencies with more than US$100 million in annual research and development expenditures, and within 360 days for those with less than US$100 million in expenditures.

Agencies will need to ensure that any peer-reviewed scholarly publication is free and available by default in agency-designated repositories without any embargo or delay following publication. Similarly, OSTP expects the access policies to address publication of any other federally funded scientific data, even if not associated with peer-reviewed scholarly publications. As a concession, federal agencies are being asked to allow researchers to include the “reasonable publication costs and costs associated with submission, curation, management of data, and special handling instructions as allowable expenses in all research budgets.”1

Ensuring Scientific Integrity

To strengthen trust in governmentally funded research, the new or updated policies must transparently communicate information designed to promote OSTP’s research integrity goals. Accordingly, agencies are instructed to collect and make appropriate metadata available in their public access repositories, including (i) all author and co-author names, affiliations, and source of funding, referencing their digital persistent identifiers, as appropriate; (ii) date of publication; and (iii) a unique digital persistent identifier for research output. Agencies should submit to OSTP and OMB (by 31 December 2024) a second update to their policies specifying the approaches taken to implement this transparency, and publish such policy updates by 31 December 2026, with an effective date no later than one year after publication of the updated plan.
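
As a purely hypothetical illustration of these metadata elements, a repository record might look like the following Python dictionary; the field names are invented for this sketch and are not drawn from any agency schema.

    # Hypothetical public access repository record. Field names are
    # illustrative only; each agency will define its own schema.
    publication_metadata = {
        "title": "Example Federally Funded Study",
        "authors": [
            {
                "name": "Jane Researcher",
                "affiliation": "Example University",
                # Digital persistent identifier for the author (e.g., an ORCID iD).
                "author_id": "https://orcid.org/0000-0000-0000-0000",
            },
        ],
        "funding_source": "Example Federal Agency, Award No. XX-0000",
        "publication_date": "2026-01-15",
        # Unique digital persistent identifier for the research output itself.
        "output_id": "https://doi.org/10.0000/example",
    }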

IMPLICATIONS FOR THE NATIONAL INSTITUTES OF HEALTH (NIH), OTHER FEDERAL AGENCIES, AND THEIR GRANTEES

The NIH is expected to update its Public Access Policy, potentially along with its Data Management and Sharing Policy to conform with the new OSTP guidance. Universities, academic medical centers, research institutes, and federally funded investigators should monitor agency publications of draft and revised policies in order to update their processes to ensure continued compliance.

In doing so, affected stakeholders may want to consider and comment to relevant federal agencies on the following issues in their respective public access policy development:

  • Federal agency security practices to prevent foreign misappropriation of research data;
  • Implications for research misconduct investigations and research integrity;
  • Any intellectual property considerations without a 12-month embargo, especially to the extent this captures scientific data not yet published in a peer-review journal; and
  • Costs allowable in research budgets to support these data management and submission expectations.

1 Office of Science and Technology Policy, Memorandum for the Heads of Executive Departments and Agencies: Ensuring Free, Immediate, and Equitable Access to Federally Funded Research at p. 5 (25 August 2022) available at https://www.whitehouse.gov/wp-content/uploads/2022/08/08-2022-OSTP-Public-Access-Memo.pdf

Copyright 2022 K&L Gates