Artists Are Selling AI-Generated Images of Mickey Mouse to Provoke a Test Case

Several artists, frustrated with artificial intelligence (AI) image generators skirting copyright law, are using those same generators to produce images of Mickey Mouse and other copyrighted characters to challenge the current legal status of AI art. While an artist’s copyright in a work typically vests at the moment of fixation, including the right to sue for infringement, AI-generated work complicates the issue by removing humans from the creative process. Courts have ruled that AI cannot hold copyright, which, as a corollary, means that AI-generated art sits in the public domain. This legal loophole has angered many professional artists whose work is used to train the AI. Many AI generators, such as DALL-E 2 and Midjourney, can render pieces in the style of a human artist, effectively automating the artist’s job.

Given Disney’s reputation for vigorously defending its intellectual property, these artists hope that monetizing these public-domain AI Mickeys on mugs and T-shirts will prompt a lawsuit. Ironically, provoking and losing a case in this vein may set a favorable precedent for the independent artist community. As AI becomes more advanced, society will likely need to address how increasingly intelligent and powerful AI can complicate and undermine existing law.

Blair Robinson also contributed to this article.

For more Intellectual Property Legal News, click here to visit the National Law Review.

Following the Recent Regulatory Trends, NLRB General Counsel Seeks to Limit Employers’ Use of Artificial Intelligence in the Workplace

On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”).  The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.”  Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.

Under the General Counsel’s proposed framework, an employer can avoid a violation of the Act if it can demonstrate that its business needs require the electronic monitoring and management practices and the practices “outweigh” employees’ Section 7 rights.  Not only must the employer be able to make this showing, it must also demonstrate that it provided the employees advance notice of the technology used, the reason for its use, and how it uses the information obtained.  An employer is relieved of this obligation, according to the General Counsel, only if it can show “special circumstances” justifying “covert use” of the technology.

In GC 23-02, the General Counsel signaled to NLRB Regions that they should scrutinize a broad range of “automated management” and “algorithmic management” technologies, defined as “a diverse set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making.”  Technologies subject to this scrutiny include those used during working time, such as wearable devices, security cameras, and radio-frequency identification badges that record workers’ conversations and track the movements of employees; GPS tracking devices and cameras that keep track of the productivity and location of employees who are out on the road; and computer software that takes screenshots, webcam photos, or audio recordings.  Also subject to scrutiny are technologies employers may use to track employees while they are off duty, such as employer-issued phones and wearable devices, and applications installed on employees’ personal devices.  Finally, the General Counsel noted that technologies employers use to screen job applicants, such as online cognitive assessments and reviews of social media, “pry into job applicants’ private lives.”  Thus, these pre-hire practices may also violate the Act.  Technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02.

GC 23-02 follows the wave of recent federal guidance from the White House, the Equal Employment Opportunity Commission, and local laws that attempt to define, regulate, and monitor the use of artificial intelligence in decision-making capacities.  Like these regulations and guidance, GC 23-02 raises more questions than it answers.  For example, GC 23-02 does not identify the standards for determining whether business needs “outweigh” employees’ Section 7 rights, or what constitutes “special circumstances” that an employer must show to avoid scrutiny under the Act.

While GC 23-02 sets forth the General Counsel’s proposal and thus is not legally binding, it does signal that there will likely be disputes in the future over artificial intelligence in the employment context.

©2022 Epstein Becker & Green, P.C. All rights reserved.

Chamber of Commerce Challenges CFPB Anti-Bias Focus Concerning AI

At the end of last month, the U.S. Chamber of Commerce, the American Bankers Association and other industry groups (collectively, “Plaintiffs”) filed suit in Texas federal court challenging the Consumer Financial Protection Bureau’s (“CFPB”) update this year to the Unfair, Deceptive, or Abusive Acts or Practices section of its examination manual to include discrimination.  Chamber of Commerce of the United States of America, et al. v. Consumer Financial Protection Bureau, et al., Case No. 6:22-cv-00381 (E.D. Tex.).

By way of background, the Consumer Financial Protection Act, which is Title X of the 2010 Dodd-Frank Act (the “Act”), prohibits providers of consumer financial products or services or a service provider from engaging in any unfair, deceptive or abusive act or practice (“UDAAP”).  The Act also provides the CFPB with rulemaking and enforcement authority to “prevent unfair, deceptive, or abusive acts or practices in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.”  See, e.g., https://files.consumerfinance.gov/f/documents/cfpb_unfair-deceptive-abusive-acts-practices-udaaps_procedures.pdf.  In general, the Act provides that an act or practice is unfair when it causes or is likely to cause substantial injury to consumers, which is not reasonably avoidable by consumers, and the injury is not outweighed by countervailing benefits to consumers or to competition.

The CFPB earlier this spring published revised examination guidelines on unfair, deceptive, or abusive acts and practices, or UDAAPs.  Importantly, this set forth a new position from the CFPB: that discrimination in the provision of consumer financial products and services can itself be a UDAAP.  This development surprised many providers of financial products and services.  The CFPB also released an updated exam manual that outlined its position regarding how discriminatory conduct may qualify as a UDAAP in consumer finance.  In May 2022, the CFPB additionally published a Consumer Financial Protection Circular to remind the public of creditors’ adverse action notice requirements under the Equal Credit Opportunity Act (“ECOA”).  In the view of the CFPB, creditors cannot use technologies (including algorithmic decision-making) if doing so means they are unable to provide the explanations required under the ECOA.

In July 2022, the Chamber and others called on the CFPB to rescind the update to the manual.  Among other arguments raised in a white paper supporting their position, they contended that, in conflating the concepts of “unfairness” and “discrimination,” the CFPB ignores the Act’s text, structure, and legislative history, which discuss “unfairness” and “discrimination” as two separate concepts and define “unfairness” without mentioning discrimination.

The Complaint filed this fall raises three claims under the Administrative Procedure Act (“APA”) challenging the updated manual, as well as other claims.  The Complaint contends that ultimately it is consumers who will suffer as a result of the CFPB’s new position, as “[t]hese amendments to the manual harm Plaintiffs’ members by imposing heavy compliance costs that are ultimately passed down to consumers in the form of higher prices and reduced access to products.”

The litigation process started by Plaintiffs in this case will be time consuming (a response to the Complaint is not expected from Defendants until December).  In the meantime, entities in the financial sector should be cognizant of the CFPB’s new approach and ensure that their compliance practices appropriately mitigate risk, including in relation to algorithmic decision making and AI.  As always, we will keep you up to date with the latest news on this litigation.

For more Consumer Finance Legal News, click here to visit the National Law Review.

© Copyright 2022 Squire Patton Boggs (US) LLP

White House Office of Science and Technology Policy Releases “Blueprint for an AI Bill of Rights”

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems.

The Blueprint comprises five key principles:

  1. The first Principle calls for protecting individuals from unsafe or ineffective AI systems, and encourages consultation with diverse communities, stakeholders and experts in developing and deploying AI systems, as well as rigorous pre-deployment testing, risk identification and mitigation, and ongoing monitoring of AI systems.

  2. The second Principle seeks to establish safeguards against discriminatory outcomes stemming from the use of algorithmic decision-making, and encourages developers of AI systems to take proactive measures to protect individuals and communities from discrimination, including through equity assessments and algorithmic impact assessments in the design and deployment stages.

  3. The third Principle advocates for building privacy protections into AI systems by default, and encourages AI systems to respect individuals’ decisions regarding the collection, use, access, transfer and deletion of personal information where possible (and where not possible, use default privacy by design safeguards).

  4. The fourth Principle emphasizes the importance of notice and transparency, and encourages developers of AI systems to provide a plain language description of how the system functions and the role of automation in the system, as well as when an algorithmic system is used to make a decision impacting an individual (including when the automated system is not the sole input determining the decision).

  5. The fifth Principle encourages the development of opt-out mechanisms that provide individuals with the option to access a human decisionmaker as an alternative to the use of an AI system.

In 2019, the European Commission published a similar set of automated systems governance principles, called the Ethics Guidelines for Trustworthy AI. The European Parliament currently is in the process of drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of the Commission’s Ethics Guidelines. The current draft of the EU Artificial Intelligence Act requires developers of open-source AI systems to adhere to detailed guidelines on cybersecurity, accuracy, transparency, and data governance, and provides for a private right of action.

For more Technology Legal News, click here to visit the National Law Review.
Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Protection for Voice Actors is Artificial in Today’s Artificial Intelligence World

As we all know, social media has taken the world by storm. Unsurprisingly, it has had an impact on trademark and copyright law, as well as on the related right of publicity. A recent case involving an actor’s voice being used on the popular app TikTok is emblematic of the times. The actor, Bev Standing, sued TikTok for using her voice, simulated via artificial intelligence (AI) without her permission, to serve as “the female computer-generated voice of TikTok.” The case, which was settled last year, illustrates how the law is being adapted to protect artists’ rights in the face of exploitation through AI, as well as the limits of current law in protecting AI-created works.

Standing explained that she thinks of her voice “as a business,” and she is looking to protect her “product.” Apps like TikTok are taking these “products” and feeding them into an algorithm without the original speaker’s permission, thus impairing creative professionals’ ability to profit in an age of widespread use of the Internet and social media platforms.

Someone’s voice (and aspects of their persona such as their photo, image, or other likeness) can be protected by what’s called the “right of publicity.” That right prevents others from appropriation of one’s persona – but only when the appropriation is for commercial purposes. In the TikTok case, there was commercial use, as TikTok was benefiting from use of Standing’s voice to “narrate” its users’ videos (with some user videos apparently involving “foul and offensive language”). In her Complaint, Standing alleged TikTok had violated her right of publicity in using her voice to create the AI voice used by TikTok, and relied upon two other claims: false designation of origin under the Lanham Act and copyright infringement, as well as related state law claims. The false designation of origin claim turned on whether Standing’s voice was so recognizable that another party’s misappropriation of it could confuse consumers as to whether Standing authorized the TikTok use. The copyright infringement claim was possible because Standing created the original voice files for a company that hired her to record Chinese language translations. TikTok subsequently acquired the files but failed to get a license from Standing to use them, as TikTok was legally obligated to do because Standing was the original creator (and therefore copyright owner) of the voice files.

As with other historical technological innovations (one of the earliest being the printing press), the law often plays catch-up, but has proven surprisingly adaptable to new technology. Here, Standing was able to plead three legal theories (six if you count the state statutory and common law unfair competition claims), so it seems artists are well-protected by existing law, at least if they are alleging AI was used to copy their work or persona.

On the other hand, the case for protecting creative expression produced in whole or in part by AI is much more difficult. Some believe AI deserves its own form of copyright, since innovative technology has increasingly made its own music and sounds. Currently, protection for these sounds is limited, since only humans can be identified as authors for the purposes of copyright. Ryan Abbott, a professor of law and health science at the University of Surrey in Britain, is attempting to bring a legal case against the U.S. Copyright Office to register a digital artwork made by a computer with AI as its author. The fear, says Abbott, is that without rights over these sounds, innovation will be stifled: individuals will not have incentive to create AI works if they cannot protect them from unauthorized exploitation.

EEOC and the DOJ Issue Guidance for Employers Using AI Tools to Assess Job Applicants and Employees

Employers are more frequently relying on the use of Artificial Intelligence (“AI”) tools to automate employment decision-making, such as software that can review resumes and “chatbots” that interview and screen job applicants. We have previously blogged about the legal risks attendant to the use of such technologies, including here and here.

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) issued long-awaited guidance on the use of such AI tools (the “Guidance”), examining how employers can seek to prevent AI-related disability discrimination. More specifically, the Guidance identifies a number of ways in which employment-related use of AI can, even unintentionally, violate the Americans with Disabilities Act (“ADA”), including if:

  • (i) “[t]he employer does not provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately by” the AI;
  • (ii) “[t]he employer relies on an algorithmic decision-making tool that intentionally or unintentionally ‘screens out’ an individual with a disability, even though that individual is able to do the job with a reasonable accommodation”; or
  • (iii) “[t]he employer adopts an [AI] tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.”

The Guidance further states that “[i]n many cases” employers are liable under the ADA for use of AI even if the tools are designed and administered by a separate vendor, noting that “employers may be held responsible for the actions of their agents . . . if the employer has given them authority to act on [its] behalf.”

The Guidance also identifies various best practices for employers, including:

  • Announcing generally that employees and applicants subject to an AI tool may request reasonable accommodations and providing instructions as to how to ask for accommodations.
  • Providing information about the AI tool, how it works, and what it is used for to the employees and applicants subjected to it. For example, an employer that uses keystroke-monitoring software may choose to disclose this software as part of new employees’ onboarding and explain that it is intended to measure employee productivity.
  • If the software was developed by a third party, asking the vendor whether: (i) the AI software was developed to accommodate people with disabilities, and if so, how; (ii) there are alternative formats available for disabled individuals; and (iii) the AI software asks questions likely to elicit medical or disability-related information.
  • If an employer is developing its own software, engaging experts to analyze the algorithm for potential biases at different steps of the development process, such as a psychologist if the tool is intended to test cognitive traits.
  • Only using AI tools that measure, directly, traits that are actually necessary for performing the job’s duties.
  • Additionally, it is always a best practice to train staff, especially supervisors and managers, how to recognize requests for reasonable accommodations and to respond promptly and effectively to those requests. If the AI tool is used by a third party on the employer’s behalf, that third party’s staff should also be trained to recognize requests for reasonable accommodation and forward them promptly to the employer.

Finally, also on May 12th, the U.S. Department of Justice (“DOJ”) released its own guidance on AI tools’ potential for inadvertent disability discrimination in the employment context. The DOJ guidance is largely in accord with the EEOC Guidance.

Employers utilizing AI tools should carefully audit them to ensure that this technology is not creating discriminatory outcomes.  Likewise, employers must remain closely apprised of any new developments from the EEOC and local, state, and federal legislatures and agencies as the trend toward regulation continues.

© 2022 Proskauer Rose LLP.

Patentability of COVID-19 Software Inventions: Artificial Intelligence (AI), Data Storage & Blockchain

The coronavirus pandemic revved up previously scarce funding for scientific research.  Part one of this series addressed the patentability of COVID-19-related Biotech, Pharma & Personal Protective Equipment (PPE) inventions and whether inventions related to fighting COVID-19 should be patentable.  Both economists and lawmakers are critical of the exclusivity period granted by patents, especially in the case of vaccines and drugs.  Recently, several members of Congress requested “no exclusivity” for any “COVID-19 vaccine, drug, or other therapeutic.”[i]

This segment addresses the unique intellectual property issues raised by coronavirus-related software inventions, specifically artificial intelligence (AI), data storage, and blockchain.

Digital Innovations

Historically, Americans have adhered to personalized healthcare and lacked the incentive to set up a digital infrastructure similar to Taiwan’s, which has fared far better in combating the spread of a fast-moving virus.[ii]  But as hospitals continue to operate at maximum capacity and with prolonged social distancing, the software sector is teeming with digital solutions for increasing the virtual supply of healthcare to a wider network of patients,[iii] particularly as HHS scales back HIPAA regulations.[iv]  COVID-19 has also spurred other types of digital innovation, such as using AI to predict the next outbreak and electronic hospital bed management.[v]

One area of particular interest is the use of blockchain and data storage in a COVID/post-COVID world.  Blockchains can serve as secure ledgers for the global supply of medical equipment, including respirators, ventilators, dialysis machines, and oxygen masks.[vi]  The Department of Homeland Security has also deemed blockchain managers in food and agricultural distribution as “critical infrastructure workers”.[vii]

Patentability

Many of these digital inventions will have a hard time with respect to patentability, especially those related to data storage such as blockchains.  In 2014, in Alice v. CLS Bank, the Supreme Court found that certain computer-related inventions were “abstract ideas” ineligible for patent protection.[viii]  Because computer-implemented programs execute steps that can theoretically be performed by a human being and are merely automated by a machine, the Supreme Court concluded that patenting such software would amount to patenting human activity.  The Court has long considered that type of patent protection too broad and dangerous.

Confusion

The aftermath of Alice has been widespread confusion among members of the patent bar, as well as the USPTO, as to how computer-related software patents are to be treated.[ix]  The USPTO attempted to clarify some of this confusion with a series of Guidelines in 2019.[x]  Although well received by the IP community, the USPTO’s Guidelines are not binding outside of the agency, meaning they have little dispositive effect when parties must bring their cases to the Federal Circuit and other courts.[xi]  Indeed, the Federal Circuit has made clear that it is not bound by the USPTO’s guidance.[xii]  The Supreme Court declined to provide further clarification, denying cert on all patent eligibility petitions in January of this year.[xiii]

The Future

Before the coronavirus outbreak, Congress was working on patent reform.[xiv]  But the long-awaited legislation was set aside further still as legislators focused on measures needed to address the pandemic.  On top of that, Senators Tillis and Coons, who have spearheaded the efforts for patent reform, are now facing reelection battles, making future congressional leadership on patent reform uncertain.

Conclusion

Patents receive a lot of flak for being company assets, and like many assets, patents are subject to abuse.[xv]  But patents are necessary for innovation, particularly for small and medium-sized companies, because they carve out a safe haven in the marketplace from the encroachment of larger companies.[xvi]  American leadership in medical innovation had been declining for some time prior to the pandemic[xvii] due to the cumbersome US regulatory and legal environments, particularly for tech start-ups seeking private funding.[xviii]

Not all data storage systems should receive a patent, and no vaccine should receive a patent so broad that it snuffs out public access to alternatives.  The USPTO considers novelty, obviousness, and breadth when dispensing patent exclusivity, and it revisits the issue of patent validity downstream through inter partes review.  There are measures in place for ensuring good patents, so let that system take its course.  A sweeping prohibition of patents is not the right answer.

The opinions stated herein are the sole opinions of the author and do not reflect the views or opinions of the National Law Review or any of its affiliates.


[i] Congressional Progressive Leaders Announce Principles On COVID-19 Drug Pricing for Next Coronavirus Response Package, (2020), https://schakowsky.house.gov/media/press-releases/congressional-progressive-leaders-announce-principles-COVID-19-drug-pricing (last visited May 10, 2020).

[ii] Christina Farr, Why telemedicine has been such a bust so far, CNBC (June 30, 2018), https://www.cnbc.com/2018/06/29/why-telemedicine-is-a-bust.html and Nick Aspinwall, Taiwan Is Exporting Its Coronavirus Successes to the World, Foreign Policy (April 9, 2020), https://foreignpolicy.com/2020/04/09/taiwan-is-exporting-its-coronavirus-successes-to-the-world/.

[iii] Joe Harpaz, 5 Reasons Why Telehealth Is Here To Stay (COVID-19 And Beyond), Forbes (May 4, 2020), https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-COVID19/#7c4d941753fb.

[iv] Jessica Davis, OCR Lifts HIPAA Penalties for Telehealth Use During COVID-19, Health IT Security (March 18, 2020), https://healthitsecurity.com/news/ocr-lifts-hipaa-penalties-for-telehealth-use-during-COVID-19.

[v] Charles Alessi, The effect of the COVID-19 epidemic on health and care – is this a portent of the ‘new normal’?, HealthcareITNews (March 31, 2020), https://www.healthcareitnews.com/blog/europe/effect-COVID-19-epidemic-health-and-care-portent-new-normal and COVID-19 and AI: Tracking a Virus, Finding a Treatment, Wall Street Journal (April 17, 2020), https://www.wsj.com/podcasts/wsj-the-future-of-everything/COVID-19-and-ai-tracking-a-virus-finding-a-treatment/f064ac83-c202-40f9-8259-426780b36f2c.

[vi] Sara Castellenos, A Cryptocurrency Technology Finds New Use Tackling Coronavirus, Wall Street Journal (April 23, 2020), https://www.wsj.com/articles/a-cryptocurrency-technology-finds-new-use-tackling-coronavirus-11587675966?mod=article_inline.

[vii] Christopher C. Krebs, MEMORANDUM ON IDENTIFICATION OF ESSENTIAL CRITICAL INFRASTRUCTURE WORKERS DURING COVID-19 RESPONSE, Cybersecurity and Infrastructure Security Agency (March 19, 2020), available at https://www.cisa.gov/sites/default/files/publications/CISA-Guidance-on-Essential-Critical-Infrastructure-Workers-1-20-508c.pdf.

[viii] Alice v. CLS Bank, 573 U.S. 208 (2014), available at https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf.

[ix] David O. Taylor, Confusing Patent Eligibility, 84 Tenn. L. Rev. 157 (2016), available at https://scholar.smu.edu/cgi/viewcontent.cgi?article=1221&context=law_faculty.

[x] 2019 Revised Patent Subject Matter Eligibility Guidance, United States Patent Office (January 7, 2019), available at https://www.federalregister.gov/documents/2019/01/07/2018-28282/2019-revised-patent-subject-matter-eligibility-guidance.

[xi] Steve Brachmann, Latest CAFC Ruling in Cleveland Clinic Case Confirms That USPTO’s 101 Guidance Holds Little Weight, IPWatchDog (April 7, 2019), https://www.ipwatchdog.com/2019/04/07/latest-cafc-ruling-cleveland-clinic-confirms-uspto-101-guidance-holds-little-weight/id=107998/.

[xii] Id.

[xiii] U.S. Supreme Court Denies Pending Patent Eligibility Petitions, Holland and Knight LLP (January 14, 2020), https://www.jdsupra.com/legalnews/u-s-supreme-court-denies-pending-patent-55501/.

[xiv] Tillis and Coons: What We Learned At Patent Reform Hearings, (June 24, 2019), available at https://www.tillis.senate.gov/2019/6/tillis-and-coons-what-we-learned-at-patent-reform-hearings.

[xv] Gene Quinn, Twisting Facts to Capitalize on COVID-19 Tragedy: Fortress v. bioMerieux, IPWatchDog (March 18, 2020), https://www.ipwatchdog.com/2020/03/18/twisting-facts-capitalize-COVID-19-tragedy-fortress-v-biomerieux/id=119941/.

[xvi] Paul R. Michel, To prepare for the next pandemic, Congress should restore patent protections for diagnostic tests, Roll Call (April 28, 2020), https://www.rollcall.com/2020/04/28/to-prepare-for-the-next-pandemic-congress-should-restore-patent-protections-for-diagnostic-tests/.

[xvii] Medical Technology Innovation Scorecard_The race for global leadership, PwC (January 2011), https://www.pwc.com/il/en/pharmaceuticals/assets/innovation-scorecard.pdf.

[xviii] Elizabeth Snell, How Health Privacy Regulations Hinder Telehealth Adoption, HealthITSecurity (May 5, 2015),https://healthitsecurity.com/news/how-health-privacy-regulations-hinder-telehealth-adoption.


Copyright (C) GLOBAL IP Counselors, LLP

For more on patentability, see the National Law Review Intellectual Property law section.

Legal Marketing and SEO Trends for 2020 Part 2: Dwell Time, EAT and Law Firm Branding

John McDougall discussed creating Deep Content, LSI (Latent Semantic Indexing), and topic clusters with us yesterday, detailing how these SEO concepts present great opportunities for law firms looking to position their attorneys as subject matter experts.  John explained how Google’s recent algorithm changes, such as BERT, which is designed to help users find true topic experts, provide a bounty of opportunities for legal marketers who properly position their lawyers’ expertise to achieve top search results. Today John goes into more detail on the concepts of webpage dwell time; expertise, authority, and trustworthiness (EAT); and law firm branding.

NLR:  In your book, you talk about the intersection of “dwell time” and the idea of the “long click” as ways Google is using AI (Artificial Intelligence) to try to figure out the relationship between the search term and the webpage that term led the user to.  Do you see any other areas AI will impact SEO on the horizon?  

JM:  Google has been modifying its search engine to improve its ability to understand complex queries for some time.

Hummingbird in 2013 was a rebuild of Google’s main “engine,” partially in response to the growth of searches via voice.

RankBrain in 2015 added more machine learning to improve Hummingbird even further (for searches Google had never seen before and for complex long-tail queries). Google said it was the third most important ranking factor, alongside content and links.

Now with BERT in 2019/2020, they can already understand the intent of a search much better.

Considering they keep increasing the ability to provide relevant results that match the searcher’s intent, I would assume it will change SEO, yet again…

I would expect writing tools to get much more robust. This might be based on “big data” from social profiles, and through analyzing massive volumes of the world’s information written by experts that can be given to a writer/attorney on a silver platter. That might help in one part of SEO.

It is exciting to watch as long as you can stay nimble, follow the “algorithm weather channel” and adjust quickly when new updates are released.

NLR:  Another core theme of your book is the role of brands, and the idea of EAT, or expertise, authority, and trustworthiness. How do these ideas enter into a keyword strategy for law firms?

JM:  As an expert in a particular field of law, you should be associated with keywords that show you are a thought leader in that topical area. With SEO far more competitive and complex than ever, you may need to be realistic and pick keywords that better match what you can write about comprehensively.

This can also affect the design of law firm websites and brand positioning. If you have fifty practice areas on your home page, you might consider featuring ones where you will be doing extensive writing and SEO work.

NLR:  Can you explain the idea behind the Eric Schmidt quote: “Brands are how you sort out the cesspool,” which you discuss in your book?

JM:  The cesspool is the world of “black hat” SEO: people doing sketchy things to try to trick Google into “liking” websites. Those tactics used to win rankings for small law firm websites that did not deserve them. Thankfully, by using brand signals, such as how often people search for your brand and mention or link to it, Google is better able to rank sites that have real-world value beyond SEO tactics.  The book, Content Marketing and SEO for Law Firms, offers several examples of brand signals and how they apply in a law firm context.

NLR:  What audience did you write your book for and who do you think will be the best audience for your January 15th webinar? 

JM:  Anyone trying to improve their law firm website and marketing will benefit greatly from Content Marketing and SEO for Law Firms, but the firms that take action on it will get the most out of it. These content and SEO actions can be small to start, but the key is consistency.

The content marketing and SEO guide is primarily written for law firm marketers, but it’s also for attorneys because they need to have an idea of how marketing strategy can directly affect the growth of their firm. The sections the attorneys should consider as “must-reads” are marked with a gavel icon.

The webinar will offer enough strategic insight that both law firm marketers and attorneys/department heads should attend.

 

Thanks, John, for your time and insight.  For those who haven’t had the opportunity to hear John speak at legal marketing events or read his previous publications drawing on his 20+ years of experience, the following webinar and his new book are great opportunities to get actionable advice on building an SEO roadmap for legal marketers in 2020:

Register for the January 15th complimentary webinar:  How to Develop an Effective Law Firm Content Marketing and SEO Action Plan for 2020.

Receive a sample chapter of John’s new book: Content Marketing and SEO for Law Firms.

 


Copyright ©2020 National Law Forum, LLC

Read more about marketing for law firms in the Law Office Management section of the National Law Review.

2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Retailers will offer technology-infused shopping experiences in their stores.  Even today, without using my phone, I can experience a retailer’s products and services with store-provided technology, without needing to open an app.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store by offering an experience that we have to try.  This experiential retail technology is a bit ahead of the Amazon checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering—for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) will meaningfully change consumers’ perceptions about giving their information to companies—usually lured by financial incentives (like loyalty programs, coupons, or a free app).  I tend to think that we will continue to download apps and give out information if it is convenient or cheaper for us, and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still give up their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided (trained with lots of data) decision making.  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming the radiologists who reviewed the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they drive.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with them.  These kinds of advances create tension among open innovation, ethics, and the law.  I do not predict that we will resolve that tension in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail & health & more, see the National Law Review Communications, Media & Internet law page.

The U.S. Patent and Trademark Office Takes on Artificial Intelligence

If the hallmark of intelligence is problem solving, then it should be no surprise that artificial intelligence is being called on to solve complex problems that human intelligence alone cannot. Intellectual property laws exist to reward intelligence, creativity and problem solving; yet, as society adapts to a world immersed in artificial intelligence, the nation’s intellectual property laws have yet to do the same. The Constitution seems to only contemplate human inventors when it says, in Article I, Section 8, Clause 8,

The Congress shall have Power … To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

The Patent Act similarly seems to limit patents to humans when it says, at 35 U.S.C. § 100(f),

The term ‘inventor’ means the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.

In fact, as far back as 1956, the U.S. Copyright Office refused registration for a musical composition created by a computer on the basis that copyright laws only applied to human authors.

Recognizing the need to adapt, the U.S. Patent and Trademark Office (PTO) recently issued notices seeking public comments on intellectual property protection related to artificial intelligence. In August 2019, the PTO issued a Federal Register Notice, 84 Fed. Reg. 166 (Aug. 27, 2019), entitled “Request for Comments on Patenting Artificial Intelligence Inventions.” On October 30, the PTO broadened its inquiry by issuing another Notice, 84 Fed. Reg. 210 (Oct. 30, 2019), entitled “Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation.” Finally, on December 3, 2019, the PTO issued a third notice, extending the comment period on the earlier notices to January 10, 2020. All of the notices can be downloaded from the PTO’s website.

The January 10, 2020 deadline for public comments on the issues raised in the notices is fast approaching. This is an important topic for the future of technology and intellectual property, and the government is plainly looking at these important issues with a clean slate.


© 2020 Vedder Price

For more on patentable inventions, see the Intellectual Property law section of the National Law Review.