With the US Copyright Office (USCO) continuing its stance that copyright protection extends only to human authorship, what will this mean for artificial intelligence (AI)-generated works — and artists — in the future?

Almost overnight, the once-limited field of machine learning and AI has become nearly as accessible as a search engine. Tools like Midjourney and OpenAI’s ChatGPT and DALL-E 2 allow users to input a prompt, and a bot will generate virtually whatever the user asks for. Microsoft recently announced a multibillion-dollar investment in OpenAI, betting on the hottest technology in the industry to transform the internet as we know it.[1]

However, as this technology becomes more accessible, questions of authorship and copyright ownership are rising as well. Multiple questions remain open: who is the author of the work — the user, the bot, or the software that produces it? And where is this new generative technology pulling its information from?

AI and Contested Copyrights

As groundbreaking as these products are, there has been ample backlash regarding copyright infringement and artistic expression. The stock image company Getty Images is suing Stability AI, the company behind the artificial intelligence art tool Stable Diffusion, alleging that Stability AI did not seek a license from Getty Images to train its system. Stability AI’s founder counters that art makes up only 0.1% of the dataset and is generated only when called by a user’s prompt. In contrast, Shutterstock, one of Getty Images’ largest competitors, has taken an alternative approach, partnering with OpenAI and planning to compensate artists for their contributions.

Artists and image suppliers are not the only ones unhappy about the popularity of machine learning. Creators of open-source code have targeted Microsoft and its subsidiary GitHub, along with OpenAI, in a proposed class-action lawsuit. The lawsuit alleges that the AI-powered coding assistant GitHub Copilot relies on software piracy on an enormous scale, claiming that GitHub trained it on copyrighted code with no attribution and no licenses. This could be the first class-action lawsuit challenging the training and output of AI systems. Whether artists, image companies, and open-source coders choose to embrace or fight the wave of machine learning, the question of authorship and ownership is still up for debate.

The USCO made clear last year that the Copyright Act applies only to human authorship; however, it has recently signaled that in 2023 the office will focus on the legal grey areas surrounding the copyrightability of works generated in conjunction with AI. The USCO has previously denied multiple applications to register AI-authored works, stating that the “human authorship” element was lacking. Pointing to earlier decisions, such as the 2018 ruling that a monkey taking a selfie could not sue for copyright infringement, the USCO reiterated that “non-human expression is ineligible for copyright protection.” While the agency stands by its conclusion that a work cannot be registered if it is created exclusively by an AI, the office is considering the issue of copyright registration for works co-created by humans and AI.

Patent Complexities  

The US Patent and Trademark Office (USPTO) will also have to rethink fundamental patent policies with the rise of sophisticated AI systems. As the USPTO has yet to speak on the issue, experts are speculating about the routes the office could take: declaring AI inventions unpatentable, which could invite disputes and undercut the incentive to innovate, or concluding that the use of AI should not render otherwise patentable inventions unpatentable, which would raise complex questions of inventorship. The latter route would require the USPTO to rethink its existing framework of determining inventorship by who conceived the invention.

Takeaway

The degree of human involvement will likely determine whether an AI work can be protected by copyright, and potentially by patents. Before incorporating this type of machine learning into their business practices, companies should carefully consider the extent of human input in the AI creation and whether the final work product will be protectable. For example:

  • An apparel company that uses generative AI to create a design for new fabric may not have a protectable copyright in the resulting fabric design.

  • An advertising agency that uses generative AI to develop advertising slogans and a pitch deck for a client may not be able to prevent the client from freely utilizing the AI-created work product.

  • A game studio that uses generative AI to create scenes in a video game may not be able to prevent unlicensed distribution of those scenes.

  • A logo created for a business endeavor may not be protected unless there are substantial human alterations and input.

  • Code that is edited or created by AI may be freely copied and replicated.

Although the philosophical debate over what “makes” an artist is only beginning, 2023 may be a uniquely litigious year defining the extent to which AI artwork is protectable under existing intellectual property laws.


FOOTNOTES

[1] https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html

NIST Releases New Framework for Managing AI and Promoting Trustworthy and Responsible Use and Development

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released the Artificial Intelligence Risk Management Framework (“AI RMF 1.0”), which provides a set of guidelines for organizations that design, develop, deploy or use AI to manage its many risks and promote trustworthy and responsible use and development of AI systems.

The AI RMF 1.0 provides guidance as to how organizations may evaluate AI risks (e.g., intellectual property, bias, privacy and cybersecurity) and trustworthiness. The AI RMF 1.0 outlines the characteristics of trustworthy AI systems, which are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced and fair, with their harmful biases managed. It also describes four high-level functions, with associated actions and outcomes, to help organizations better understand and manage AI:

  • The Govern function addresses evaluation of AI technologies’ policies, processes and procedures, including their compliance with legal and regulatory requirements and transparent and trustworthy implementation.
  • The Map function provides context for organizations to frame risks relating to AI systems, including AI system impacts and interdependencies.
  • The Measure function uses quantitative, qualitative or mixed-method tools, techniques and methodologies to analyze, benchmark and monitor AI risk and related impacts, including tracking metrics to determine trustworthy characteristics, social impact and human-AI configurations.
  • The Manage function entails allocating risk resources to mapped and measured risks consistent with the Govern function. The Manage function includes determining how to treat risks and develop plans to respond to, recover from and communicate about incidents and events.

NIST released a draft AI Risk Management Framework Playbook to accompany the AI RMF 1.0. NIST plans to release an updated version of the Playbook in the Spring of 2023 and launch a new Trustworthy and Responsible AI Resource Center to help organizations put AI RMF 1.0 into practice. NIST has also provided a Roadmap of its priorities to advance the AI RMF.

Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved.

Artists Are Selling AI-Generated Images of Mickey Mouse to Provoke a Test Case

Several artists, frustrated with artificial intelligence (AI) image generators skirting copyright laws, are using those same generators to produce images of Mickey Mouse and other copyrighted characters to challenge the current legal status of AI art. While an artist’s copyright in a work typically vests at the moment of fixation, including the right to sue for infringement, AI-generated work complicates the issue by removing humans from the creative process. Courts have ruled that AI cannot hold copyright, which by corollary means that AI-generated art sits in the public domain. This legal loophole has angered many professional artists whose art is used to train the AI. Many AI generators, such as DALL-E 2 and Midjourney, can render pieces in the style of a human artist, effectively automating the artist’s job.

Given Disney’s reputation for vigorously defending its intellectual property, these artists hope that monetizing these public-domain AI Mickeys on mugs and T-shirts will prompt a lawsuit. Ironically, provoking and losing a case in this vein may set a favorable precedent for the independent artist community. As AI becomes more advanced, society will likely need to address how increasingly intelligent and powerful AI can complicate and undermine existing law.

Blair Robinson also contributed to this article.


Following the Recent Regulatory Trends, NLRB General Counsel Seeks to Limit Employers’ Use of Artificial Intelligence in the Workplace

On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02, urging the Board to interpret existing Board law to adopt a new legal framework under which electronic monitoring and automated or algorithmic management practices are illegal if they interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”). The General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where its electronic monitoring and management practices, viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.

Under the General Counsel’s proposed framework, an employer can avoid a violation of the Act if it can demonstrate that its business needs require the electronic monitoring and management practices and that those needs “outweigh” employees’ Section 7 rights. Not only must the employer make this showing, it must also demonstrate that it gave employees advance notice of the technology used, the reason for its use, and how it uses the information obtained. An employer is relieved of this obligation, according to the General Counsel, only if it can show “special circumstances” justifying “covert use” of the technology.

In GC 23-02, the General Counsel signaled to NLRB Regions that they should scrutinize a broad range of “automated management” and “algorithmic management” technologies, defined as “a diverse set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making.” Technologies subject to this scrutiny include those used during working time, such as wearable devices, security cameras, and radio-frequency identification badges that record workers’ conversations and track the movements of employees; GPS tracking devices and cameras that track the productivity and location of employees who are out on the road; and computer software that takes screenshots, webcam photos, or audio recordings. Also subject to scrutiny are technologies employers may use to track employees while they are off duty, such as employer-issued phones and wearable devices, and applications installed on employees’ personal devices. Finally, the General Counsel noted that employers that use such technologies in hiring, such as online cognitive assessments and reviews of social media, “pry into job applicants’ private lives.” Thus, these pre-hire practices may also violate the Act. Technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02.

GC 23-02 follows the wave of recent federal guidance from the White House, the Equal Employment Opportunity Commission, and local laws that attempt to define, regulate, and monitor the use of artificial intelligence in decision-making capacities.  Like these regulations and guidance, GC 23-02 raises more questions than it answers.  For example, GC 23-02 does not identify the standards for determining whether business needs “outweigh” employees’ Section 7 rights, or what constitutes “special circumstances” that an employer must show to avoid scrutiny under the Act.

While GC 23-02 sets forth the General Counsel’s proposal and thus is not legally binding, it does signal that there will likely be disputes in the future over artificial intelligence in the employment context.

©2022 Epstein Becker & Green, P.C. All rights reserved.

Chamber of Commerce Challenges CFPB Anti-Bias Focus Concerning AI

At the end of last month, the U.S. Chamber of Commerce, the American Bankers Association and other industry groups (collectively, “Plaintiffs”) filed suit in Texas federal court challenging the Consumer Financial Protection Bureau’s (“CFPB”) update this year to the Unfair, Deceptive, or Abusive Acts or Practices section of its examination manual to include discrimination. Chamber of Commerce of the United States of America, et al. v. Consumer Financial Protection Bureau, et al., Case No. 6:22-cv-00381 (E.D. Tex.).

By way of background, the Consumer Financial Protection Act, which is Title X of the 2010 Dodd-Frank Act (the “Act”), prohibits providers of consumer financial products or services, and their service providers, from engaging in any unfair, deceptive or abusive act or practice (“UDAAP”). The Act also provides the CFPB with rulemaking and enforcement authority to “prevent unfair, deceptive, or abusive acts or practices in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.” See, e.g., https://files.consumerfinance.gov/f/documents/cfpb_unfair-deceptive-abusive-acts-practices-udaaps_procedures.pdf. In general, the Act provides that an act or practice is unfair when it causes or is likely to cause substantial injury to consumers, the injury is not reasonably avoidable by consumers, and the injury is not outweighed by countervailing benefits to consumers or to competition.

Earlier this spring, the CFPB published revised examination guidelines on UDAAPs. Importantly, these set forth a new position: that discrimination in the provision of consumer financial products and services can itself be a UDAAP. This development surprised many providers of financial products and services. The CFPB also released an updated exam manual outlining its position on how discriminatory conduct may qualify as a UDAAP in consumer finance. In May 2022, the CFPB additionally published a Consumer Financial Protection Circular to remind the public of creditors’ adverse action notice requirements under the Equal Credit Opportunity Act (“ECOA”). In the CFPB’s view, creditors cannot use technologies (including algorithmic decision-making) if doing so means they are unable to provide the explanations required under the ECOA.

In July 2022, the Chamber and others called on the CFPB to rescind the update to the manual. Among other arguments raised in a white paper supporting their position, they contended that in conflating the concepts of “unfairness” and “discrimination,” the CFPB ignores the Act’s text, structure, and legislative history, which treat “unfairness” and “discrimination” as two separate concepts and define “unfairness” without mentioning discrimination.

The Complaint filed this fall raises three claims under the Administrative Procedure Act (“APA”) in relation to the updated manual, along with other claims. The Complaint contends that it is ultimately consumers who will suffer as a result of the CFPB’s new position, as “[t]hese amendments to the manual harm Plaintiffs’ members by imposing heavy compliance costs that are ultimately passed down to consumers in the form of higher prices and reduced access to products.”

The litigation process started by Plaintiffs in this case will be time-consuming (a response to the Complaint is not expected from Defendants until December). In the meantime, entities in the financial sector should be cognizant of the CFPB’s new approach and ensure that their compliance practices appropriately mitigate risk, including in relation to algorithmic decision-making and AI. As always, we will keep you up to date with the latest news on this litigation.


© Copyright 2022 Squire Patton Boggs (US) LLP

White House Office of Science and Technology Policy Releases “Blueprint for an AI Bill of Rights”

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems.

The Blueprint comprises five key principles:

  1. The first Principle seeks to protect individuals from unsafe or ineffective AI systems, and encourages consultation with diverse communities, stakeholders and experts in developing and deploying AI systems, as well as rigorous pre-deployment testing, risk identification and mitigation, and ongoing monitoring of AI systems.

  2. The second Principle seeks to establish safeguards against discriminatory outcomes stemming from the use of algorithmic decision-making, and encourages developers of AI systems to take proactive measures to protect individuals and communities from discrimination, including through equity assessments and algorithmic impact assessments in the design and deployment stages.

  3. The third Principle advocates for building privacy protections into AI systems by default, and encourages AI systems to respect individuals’ decisions regarding the collection, use, access, transfer and deletion of personal information where possible (and, where not possible, to use default privacy-by-design safeguards).

  4. The fourth Principle emphasizes the importance of notice and transparency, and encourages developers of AI systems to provide a plain language description of how the system functions and the role of automation in the system, as well as when an algorithmic system is used to make a decision impacting an individual (including when the automated system is not the sole input determining the decision).

  5. The fifth Principle encourages the development of opt-out mechanisms that provide individuals with the option to access a human decisionmaker as an alternative to the use of an AI system.

In 2019, the European Commission published a similar set of automated systems governance principles, called the Ethics Guidelines for Trustworthy AI. The European Parliament currently is in the process of drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of the Commission’s Ethics Guidelines. The current draft of the EU Artificial Intelligence Act requires developers of open-source AI systems to adhere to detailed guidelines on cybersecurity, accuracy, transparency, and data governance, and provides for a private right of action.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Protection for Voice Actors is Artificial in Today’s Artificial Intelligence World

As we all know, social media has taken the world by storm. Unsurprisingly, it has had an impact on trademark and copyright law, as well as the related right of publicity. A recent case involving an actor’s voice being used on the popular app TikTok is emblematic of the times. The actor, Bev Standing, sued TikTok for using her voice, simulated via artificial intelligence (AI) without her permission, to serve as “the female computer-generated voice of TikTok.” The case, which was settled last year, illustrates how the law is being adapted to protect artists’ rights in the face of exploitation through AI, as well as the limits of current law in protecting AI-created works.

Standing explained that she thinks of her voice “as a business,” and she is looking to protect her “product.” Apps like TikTok are taking these “products” and feeding them into an algorithm without the original speaker’s permission, thus impairing creative professionals’ ability to profit in an age of widespread use of the Internet and social media platforms.

Someone’s voice (and aspects of their persona such as their photo, image, or other likeness) can be protected by what’s called the “right of publicity.” That right prevents others from appropriating one’s persona – but only when the appropriation is for commercial purposes. In the TikTok case, there was commercial use, as TikTok was benefiting from the use of Standing’s voice to “narrate” its users’ videos (with some user videos apparently involving “foul and offensive language”). In her Complaint, Standing alleged TikTok had violated her right of publicity in using her voice to create the AI voice used by TikTok, and relied upon two other claims: false designation of origin under the Lanham Act and copyright infringement, as well as related state law claims. The false designation of origin claim turned on whether Standing’s voice was so recognizable that another party’s misappropriation of it could confuse consumers as to whether Standing authorized the TikTok use. The copyright infringement claim was possible because Standing created the original voice files for a company that hired her to record Chinese language translations. TikTok subsequently acquired the files but failed to get a license from Standing to use them, as it was legally obligated to do because Standing was the original creator (and therefore copyright owner) of the voice files.

As with other historical technological innovations (one of the earliest being the printing press), the law often plays catch-up, but has proven surprisingly adaptable to new technology. Here, Standing was able to plead three legal theories (six if you count the state statutory and common law unfair competition claims), so it seems artists are well-protected by existing law, at least if they are alleging AI was used to copy their work or persona.

On the other hand, the case for protecting creative expression produced in whole or in part by AI is much more difficult. Some believe AI deserves its own form of copyright, since innovative technology has increasingly made its own music and sounds. Currently, protection for these sounds is limited, since only humans can be identified as authors for purposes of copyright. Ryan Abbott, a professor of law and health science at the University of Surrey in Britain, is attempting to bring a legal case against the U.S. Copyright Office to register a digital artwork made by a computer, with AI as its author. The fear, says Abbott, is that without rights over these works, innovation will be stifled — individuals will have no incentive to create AI works if they cannot protect them from unauthorized exploitation.

EEOC and the DOJ Issue Guidance for Employers Using AI Tools to Assess Job Applicants and Employees

Employers are more frequently relying on the use of Artificial Intelligence (“AI”) tools to automate employment decision-making, such as software that can review resumes and “chatbots” that interview and screen job applicants. We have previously blogged about the legal risks attendant to the use of such technologies, including here and here.

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) issued long-awaited guidance on the use of such AI tools (the “Guidance”), examining how employers can seek to prevent AI-related disability discrimination. More specifically, the Guidance identifies a number of ways in which employment-related use of AI can, even unintentionally, violate the Americans with Disabilities Act (“ADA”), including if:

  • (i) “[t]he employer does not provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately by” the AI;
  • (ii) “[t]he employer relies on an algorithmic decision-making tool that intentionally or unintentionally ‘screens out’ an individual with a disability, even though that individual is able to do the job with a reasonable accommodation”; or
  • (iii) “[t]he employer adopts an [AI] tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.”

The Guidance further states that “[i]n many cases” employers are liable under the ADA for use of AI even if the tools are designed and administered by a separate vendor, noting that “employers may be held responsible for the actions of their agents . . . if the employer has given them authority to act on [its] behalf.”

The Guidance also identifies various best practices for employers, including:

  • Announcing generally that employees and applicants subject to an AI tool may request reasonable accommodations and providing instructions as to how to ask for accommodations.
  • Providing information about the AI tool, how it works, and what it is used for to the employees and applicants subjected to it. For example, an employer that uses keystroke-monitoring software may choose to disclose this software as part of new employees’ onboarding and explain that it is intended to measure employee productivity.
  • If the software was developed by a third party, asking the vendor whether: (i) the AI software was developed to accommodate people with disabilities, and if so, how; (ii) there are alternative formats available for disabled individuals; and (iii) the AI software asks questions likely to elicit medical or disability-related information.
  • If an employer is developing its own software, engaging experts to analyze the algorithm for potential biases at different steps of the development process, such as a psychologist if the tool is intended to test cognitive traits.
  • Using only AI tools that directly measure traits that are actually necessary for performing the job’s duties.
  • Additionally, it is always a best practice to train staff, especially supervisors and managers, how to recognize requests for reasonable accommodations and to respond promptly and effectively to those requests. If the AI tool is used by a third party on the employer’s behalf, that third party’s staff should also be trained to recognize requests for reasonable accommodation and forward them promptly to the employer.

Finally, also on May 12th, the U.S. Department of Justice (“DOJ”) released its own guidance on AI tools’ potential for inadvertent disability discrimination in the employment context. The DOJ guidance is largely in accord with the EEOC Guidance.

Employers utilizing AI tools should carefully audit them to ensure that this technology is not creating discriminatory outcomes.  Likewise, employers must remain closely apprised of any new developments from the EEOC and local, state, and federal legislatures and agencies as the trend toward regulation continues.

© 2022 Proskauer Rose LLP.

Patentability of COVID-19 Software Inventions: Artificial Intelligence (AI), Data Storage & Blockchain

The coronavirus pandemic revved up previously scarce funding for scientific research. Part one of this series addressed the patentability of COVID-19-related Biotech, Pharma & Personal Protective Equipment (PPE) inventions and whether inventions related to fighting COVID-19 should be patentable. Both economists and lawmakers are critical of the exclusivity period granted by patents, especially in the case of vaccines and drugs. Recently, several members of Congress requested “no exclusivity” for any “COVID-19 vaccine, drug, or other therapeutic.”[i]

This segment addresses the unique intellectual property issues raised by coronavirus-related software inventions, specifically artificial intelligence (AI), data storage and blockchain.

Digital Innovations

Historically, Americans have adhered to personalized healthcare and lacked the incentive to set up a digital infrastructure similar to Taiwan’s, which has fared far better in combating the spread of a fast-moving virus.[ii] But as hospitals continue to operate at maximum capacity and with prolonged social distancing, the software sector is teeming with digital solutions for increasing the virtual supply of healthcare to a wider network of patients,[iii] particularly as HHS scales back HIPAA regulations.[iv] COVID-19 has also spurred other types of digital innovation, such as using AI to predict the next outbreak and electronic hospital bed management.[v]

One area of particular interest is the use of blockchain and data storage in a COVID/post-COVID world.  Blockchains can serve as secure ledgers for the global supply of medical equipment, including respirators, ventilators, dialysis machines, and oxygen masks.[vi]  The Department of Homeland Security has also deemed blockchain managers in food and agricultural distribution as “critical infrastructure workers”.[vii]

Patentability

Many of these digital inventions will face an uphill battle on patentability, especially those related to data storage such as blockchains. In 2014, in Alice v. CLS Bank, the Supreme Court found that computer-implemented inventions of this kind were “abstract ideas” ineligible for patent protection.[viii] Because computer-implemented programs execute steps that can theoretically be performed by a human being and are merely automated by a machine, the Supreme Court concluded that patenting software would be patenting human activity. This type of patent protection has long been considered by the Court to be too broad and dangerous.

Confusion

The aftermath of Alice was widespread confusion among members of the patent bar, as well as at the USPTO, as to how computer-related software patents were to be treated henceforth.[ix] The USPTO attempted to clarify some of this confusion with a series of Guidelines in 2019.[x] Although well received by the IP community, the USPTO’s Guidelines are not binding outside the agency, meaning they have little dispositive effect when parties must bring their cases to the Federal Circuit and other courts.[xi] Indeed, the Federal Circuit has made clear that it is not bound by the USPTO’s guidance.[xii] The Supreme Court will not provide further clarification, having denied cert on all patent eligibility petitions in January of this year.[xiii]

The Future

Before the coronavirus outbreak, Congress was working on patent reform.[xiv] But the long-awaited legislation was set aside further still as legislators focused on measures needed to address the pandemic. On top of that, Senator Tillis and Senator Coons, who have spearheaded the efforts for patent reform, are now facing reelection battles, making future congressional leadership on patent reform uncertain.

Conclusion

Patents receive a lot of flak for being company assets, and like many assets, patents are subject to abuse.[xv] But patents are necessary for innovation, particularly for small and medium-sized companies, because they carve out a safe haven in the marketplace against encroachment by larger companies.[xvi] American leadership in medical innovation had been declining for some time prior to the pandemic[xvii] due to the cumbersome US regulatory and legal environments, particularly for tech start-ups seeking private funding.[xviii]

Not all data storage systems should receive a patent, and no vaccine should receive a patent so broad that it snuffs out public access to alternatives. The USPTO considers novelty, obviousness and breadth when dispensing patent exclusivity, and it revisits the issue of patent validity downstream with inter partes review. There are measures in place for ensuring good patents, so let that system take its course. A sweeping prohibition of patents is not the right answer.

The opinions stated herein are the sole opinions of the author and do not reflect the views or opinions of the National Law Review or any of its affiliates.


[i] Congressional Progressive Leaders Announce Principles On COVID-19 Drug Pricing for Next Coronavirus Response Package, (2020), https://schakowsky.house.gov/media/press-releases/congressional-progressive-leaders-announce-principles-COVID-19-drug-pricing (last visited May 10, 2020).

[ii] Christina Farr, Why telemedicine has been such a bust so far, CNBC (June 30, 2018), https://www.cnbc.com/2018/06/29/why-telemedicine-is-a-bust.html and Nick Aspinwall, Taiwan Is Exporting Its Coronavirus Successes to the World, Foreign Policy (April 9, 2020), https://foreignpolicy.com/2020/04/09/taiwan-is-exporting-its-coronavirus-successes-to-the-world/.

[iii] Joe Harpaz, 5 Reasons Why Telehealth Is Here To Stay (COVID-19 And Beyond), Forbes (May 4, 2020), https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-COVID19/#7c4d941753fb.

[iv] Jessica Davis, OCR Lifts HIPAA Penalties for Telehealth Use During COVID-19, Health IT Security (March 18, 2020), https://healthitsecurity.com/news/ocr-lifts-hipaa-penalties-for-telehealth-use-during-COVID-19.

[v] Charles Alessi, The effect of the COVID-19 epidemic on health and care – is this a portent of the ‘new normal’?, HealthcareITNews (March 31, 2020), https://www.healthcareitnews.com/blog/europe/effect-COVID-19-epidemic-health-and-care-portent-new-normal and COVID-19 and AI: Tracking a Virus, Finding a Treatment, Wall Street Journal (April 17, 2020), https://www.wsj.com/podcasts/wsj-the-future-of-everything/COVID-19-and-ai-tracking-a-virus-finding-a-treatment/f064ac83-c202-40f9-8259-426780b36f2c.

[vi] Sara Castellenos, A Cryptocurrency Technology Finds New Use Tackling Coronavirus, Wall Street Journal (April 23, 2020), https://www.wsj.com/articles/a-cryptocurrency-technology-finds-new-use-tackling-coronavirus-11587675966?mod=article_inline.

[vii] Christopher C. Krebs, MEMORANDUM ON IDENTIFICATION OF ESSENTIAL CRITICAL INFRASTRUCTURE WORKERS DURING COVID-19 RESPONSE, Cybersecurity and Infrastructure Security Agency (March 19, 2020), available at https://www.cisa.gov/sites/default/files/publications/CISA-Guidance-on-Essential-Critical-Infrastructure-Workers-1-20-508c.pdf.

[viii] Alice v. CLS Bank, 573 U.S. 208 (2014), available at https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf.

[ix] David O. Taylor, Confusing Patent Eligibility, 84 Tenn. L. Rev. 157 (2016), available at https://scholar.smu.edu/cgi/viewcontent.cgi?article=1221&context=law_faculty.

[x] 2019 Revised Patent Subject Matter Eligibility Guidance, United States Patent Office (January 7, 2019), available at https://www.federalregister.gov/documents/2019/01/07/2018-28282/2019-revised-patent-subject-matter-eligibility-guidance.

[xi] Steve Brachmann, Latest CAFC Ruling in Cleveland Clinic Case Confirms That USPTO’s 101 Guidance Holds Little Weight, IPWatchDog (April 7, 2019), https://www.ipwatchdog.com/2019/04/07/latest-cafc-ruling-cleveland-clinic-confirms-uspto-101-guidance-holds-little-weight/id=107998/.

[xii] Id.

[xiii] U.S. Supreme Court Denies Pending Patent Eligibility Petitions, Holland and Knight LLP (January 14, 2020), https://www.jdsupra.com/legalnews/u-s-supreme-court-denies-pending-patent-55501/.

[xiv] Tillis and Coons: What We Learned At Patent Reform Hearings, (June 24, 2019), available at https://www.tillis.senate.gov/2019/6/tillis-and-coons-what-we-learned-at-patent-reform-hearings.

[xv] Gene Quinn, Twisting Facts to Capitalize on COVID-19 Tragedy: Fortress v. bioMerieux, IPWatchDog (March 18, 2020), https://www.ipwatchdog.com/2020/03/18/twisting-facts-capitalize-COVID-19-tragedy-fortress-v-biomerieux/id=119941/.

[xvi] Paul R. Michel, To prepare for the next pandemic, Congress should restore patent protections for diagnostic tests, Roll Call (April 28, 2020), https://www.rollcall.com/2020/04/28/to-prepare-for-the-next-pandemic-congress-should-restore-patent-protections-for-diagnostic-tests/.

[xvii] Medical Technology Innovation Scorecard_The race for global leadership, PwC (January 2011), https://www.pwc.com/il/en/pharmaceuticals/assets/innovation-scorecard.pdf.

[xviii] Elizabeth Snell, How Health Privacy Regulations Hinder Telehealth Adoption, HealthITSecurity (May 5, 2015), https://healthitsecurity.com/news/how-health-privacy-regulations-hinder-telehealth-adoption.


Copyright © GLOBAL IP Counselors, LLP


Legal Marketing and SEO Trends for 2020 Part 2: Dwell Time, EAT and Law Firm Branding

John McDougall discussed creating deep content, LSI (Latent Semantic Indexing) and topic clusters with us yesterday, detailing how these SEO concepts present great opportunities for law firms looking to position their attorneys as subject matter experts. John explained how Google’s recent algorithm changes, such as BERT, which is designed to help users find true topic experts, provide a bounty of opportunities for legal marketers who properly position their lawyers’ expertise to achieve top search results. Today John goes into more detail on the concepts of webpage dwell time; expertise, authority and trustworthiness (EAT); and law firm branding.

NLR:  In your book, you talk about the intersection of “dwell time” and the idea of the “long click” as ways Google is using AI (Artificial Intelligence) to try to figure out the relationship between the search term and the webpage that term led the user to. Do you see any other areas on the horizon where AI will impact SEO?

JM:  Google has been modifying its search engine for some time to improve its ability to understand complex queries.

Hummingbird in 2013 was a rebuild of their main “engine” partially in response to there being more searches via voice.

RankBrain in 2015 added more machine learning to improve Hummingbird even further (for searches Google had never seen before and complex long-tail queries). Google said it was the third most important ranking factor, alongside content and links.

Now with BERT in 2019/2020, they can already understand the intent of a search much better.

Considering they keep increasing the ability to provide relevant results that match the searcher’s intent, I would assume it will change SEO, yet again…

I would expect writing tools to get much more robust. This might be based on “big data” from social profiles, and through analyzing massive volumes of the world’s information written by experts that can be given to a writer/attorney on a silver platter. That might help in one part of SEO.

It is exciting to watch as long as you can stay nimble, follow the “algorithm weather channel” and adjust quickly when new updates are released.

NLR:  Another core theme of your book is the role of brands, and the idea of EAT, or expertise, authority, and trustworthiness. How do these ideas enter into a keyword strategy for law firms?

JM:  As an expert in a particular field of law, you should be associated with certain keywords which show you are a thought leader in that topical area. With SEO being MUCH more competitive and complex than ever, you may need to be more realistic and pick keywords that better match what you can write about comprehensively.

This can also affect the design of law firm websites and brand positioning. If you have fifty practice areas on your home page, you might consider featuring ones where you will be doing extensive writing and SEO work.

NLR:  Can you explain the idea behind the Eric Schmidt quote: “Brands are how you sort out the cesspool,” which you discuss in your book?

JM:  There are “black hat” SEO people that are the cesspool. They do sketchy things to try to trick Google into “liking” websites. Those tactics used to work on small law firms’ websites that did not deserve rankings. Thankfully, using brand signals like how many times people search for your brand and mention or link to your brand, Google is better able to rank sites that have real-world value beyond SEO tactics. The book, Content Marketing and SEO for Law Firms, offers several examples of brand signals and how they apply in a law firm context.

NLR:  What audience did you write your book for and who do you think will be the best audience for your January 15th webinar? 

JM:  Anyone trying to improve their law firm website and marketing will benefit greatly from Content Marketing and SEO for Law Firms, but firms that take action on it will get the most out of it. These content and SEO actions can be small to start but the key is to be consistent.

The content marketing and SEO guide is primarily written for law firm marketers, but it’s also for attorneys because they need to have an idea of how marketing strategy can directly affect the growth of their firm. The sections the attorneys should consider as “must-reads” are marked with a gavel icon.

This webinar will have enough insight on strategy that both law firm marketers and attorneys/department heads should attend.

 

Thanks, John, for your time and insight. For those who haven’t had the opportunity to hear John speak at legal marketing events or read his previous publications drawing on his 20+ years of experience, the following webinar and his new book are great opportunities to get actionable advice on how to build an SEO roadmap for legal marketers in 2020:

Register for the January 15th complimentary webinar:  How to Develop an Effective Law Firm Content Marketing and SEO Action Plan for 2020.

Receive a sample chapter of John’s new book: Content Marketing and SEO for Law Firms.

 


Copyright ©2020 National Law Forum, LLC
