EEOC and the DOJ Issue Guidance for Employers Using AI Tools to Assess Job Applicants and Employees

Employers are increasingly relying on Artificial Intelligence (“AI”) tools to automate employment decision-making, such as software that can review resumes and “chatbots” that interview and screen job applicants. We have previously blogged about the legal risks attendant to the use of such technologies, including here and here.

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) issued long-awaited guidance on the use of such AI tools (the “Guidance”), examining how employers can seek to prevent AI-related disability discrimination. More specifically, the Guidance identifies a number of ways in which employment-related use of AI can, even unintentionally, violate the Americans with Disabilities Act (“ADA”), including if:

  • (i) “[t]he employer does not provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately by” the AI;
  • (ii) “[t]he employer relies on an algorithmic decision-making tool that intentionally or unintentionally ‘screens out’ an individual with a disability, even though that individual is able to do the job with a reasonable accommodation”; or
  • (iii) “[t]he employer adopts an [AI] tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.”

The Guidance further states that “[i]n many cases” employers are liable under the ADA for use of AI even if the tools are designed and administered by a separate vendor, noting that “employers may be held responsible for the actions of their agents . . . if the employer has given them authority to act on [its] behalf.”

The Guidance also identifies various best practices for employers, including:

  • Announcing generally that employees and applicants subject to an AI tool may request reasonable accommodations and providing instructions as to how to ask for accommodations.
  • Providing information about the AI tool, how it works, and what it is used for to the employees and applicants subjected to it. For example, an employer that uses keystroke-monitoring software may choose to disclose this software as part of new employees’ onboarding and explain that it is intended to measure employee productivity.
  • If the software was developed by a third party, asking the vendor whether: (i) the AI software was developed to accommodate people with disabilities, and if so, how; (ii) there are alternative formats available for disabled individuals; and (iii) the AI software asks questions likely to elicit medical or disability-related information.
  • If an employer is developing its own software, engaging experts to analyze the algorithm for potential biases at different steps of the development process, such as a psychologist if the tool is intended to test cognitive traits.
  • Using only AI tools that directly measure traits actually necessary for performing the job’s duties.
  • Additionally, it is always a best practice to train staff, especially supervisors and managers, how to recognize requests for reasonable accommodations and to respond promptly and effectively to those requests. If the AI tool is used by a third party on the employer’s behalf, that third party’s staff should also be trained to recognize requests for reasonable accommodation and forward them promptly to the employer.

Finally, also on May 12th, the U.S. Department of Justice (“DOJ”) released its own guidance on AI tools’ potential for inadvertent disability discrimination in the employment context. The DOJ guidance is largely in accord with the EEOC Guidance.

Employers utilizing AI tools should carefully audit them to ensure that this technology is not creating discriminatory outcomes.  Likewise, employers must remain closely apprised of any new developments from the EEOC and local, state, and federal legislatures and agencies as the trend toward regulation continues.

© 2022 Proskauer Rose LLP.

Patentability of COVID-19 Software Inventions: Artificial Intelligence (AI), Data Storage & Blockchain

The Coronavirus pandemic revved up previously scarce funding for scientific research.  Part one of this series addressed the patentability of COVID-19 related Biotech, Pharma & Personal Protective Equipment (PPE) Inventions and whether inventions related to fighting COVID-19 should be patentable.  Both economists and lawmakers are critical of the exclusivity period granted by patents, especially in the case of vaccines and drugs.  Recently, several members of Congress requested “no exclusivity” for any “COVID-19 vaccine, drug, or other therapeutic.”[i]

This segment addresses the unique intellectual property issues raised by Coronavirus-related software inventions, specifically Artificial Intelligence (AI), data storage, and blockchain.

Digital Innovations

Historically, Americans have adhered to personalized healthcare and lacked the incentive to set up a digital infrastructure similar to Taiwan’s, which has fared far better in combating the spread of a fast-moving virus.[ii]  But as hospitals continue to operate at maximum capacity and with prolonged social distancing, the software sector is teeming with digital solutions for increasing the virtual supply of healthcare to a wider network of patients,[iii] particularly as HHS scales back HIPAA regulations.[iv]  COVID-19 has also spurred other types of digital innovation, such as using AI to predict the next outbreak and electronic hospital bed management.[v]

One area of particular interest is the use of blockchain and data storage in a COVID/post-COVID world.  Blockchains can serve as secure ledgers for the global supply of medical equipment, including respirators, ventilators, dialysis machines, and oxygen masks.[vi]  The Department of Homeland Security has also deemed blockchain managers in food and agricultural distribution as “critical infrastructure workers”.[vii]

Patentability

Many of these digital inventions will face significant patentability hurdles, especially those related to data storage such as blockchains.  In 2014, in Alice v. CLS Bank, the Supreme Court found computer-related inventions directed to “abstract ideas” ineligible for patent protection.[viii]  Because computer-implemented programs execute steps that can theoretically be performed by a human being and are merely automated by a machine, the Supreme Court reasoned that patenting such software would amount to patenting human activity.  This type of patent protection has long been considered by the Court to be too broad and dangerous.

Confusion

The aftermath of Alice was widespread confusion, among members of the patent bar as well as the USPTO, as to how computer-related software patents should be treated.[ix]  The USPTO attempted to resolve some of this confusion through a series of Guidelines in 2019.[x]  Although well received by the IP community, the USPTO’s Guidelines are not binding outside the agency, meaning they have little dispositive effect when parties must bring their cases to the Federal Circuit and other courts.[xi]  Indeed, the Federal Circuit has made clear that it is not bound by the USPTO’s guidance.[xii]  The Supreme Court declined to provide further clarification, denying cert on all patent eligibility petitions in January of this year.[xiii]

The Future

Before the coronavirus outbreak, Congress was working on patent reform.[xiv]  But the long-awaited legislation was set aside as legislators focused on measures needed to address the pandemic.  On top of that, Senators Tillis and Coons, who have spearheaded the patent reform effort, are now facing reelection battles, making future congressional leadership on patent reform uncertain.

Conclusion

Patents receive a lot of flak for being company assets, and like many assets, patents are subject to abuse.[xv]  But patents are necessary for innovation, particularly for small and medium-sized companies, because they carve out a safe haven in the marketplace against encroachment by larger companies.[xvi]  American leadership in medical innovation had been declining for some time prior to the pandemic[xvii] due to the cumbersome US regulatory and legal environments, particularly for tech start-ups seeking private funding.[xviii]

Not all data storage systems should receive a patent, and no vaccine should receive a patent so broad that it snuffs out public access to alternatives.  The USPTO considers novelty, obviousness, and breadth when dispensing patent exclusivity, and it revisits the issue of patent validity downstream through inter partes review.  There are measures in place to ensure good patents, so let that system take its course.  A sweeping prohibition of patents is not the right answer.

The opinions stated herein are the sole opinions of the author and do not reflect the views or opinions of the National Law Review or any of its affiliates.


[i] Congressional Progressive Leaders Announce Principles On COVID-19 Drug Pricing for Next Coronavirus Response Package, (2020), https://schakowsky.house.gov/media/press-releases/congressional-progressive-leaders-announce-principles-COVID-19-drug-pricing (last visited May 10, 2020).

[ii] Christina Farr, Why telemedicine has been such a bust so far, CNBC (June 30, 2018), https://www.cnbc.com/2018/06/29/why-telemedicine-is-a-bust.html and Nick Aspinwall, Taiwan Is Exporting Its Coronavirus Successes to the World, Foreign Policy (April 9, 2020), https://foreignpolicy.com/2020/04/09/taiwan-is-exporting-its-coronavirus-successes-to-the-world/.

[iii] Joe Harpaz, 5 Reasons Why Telehealth Is Here To Stay (COVID-19 And Beyond), Forbes (May 4, 2020), https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-COVID19/#7c4d941753fb.

[iv] Jessica Davis, OCR Lifts HIPAA Penalties for Telehealth Use During COVID-19, Health IT Security (March 18, 2020), https://healthitsecurity.com/news/ocr-lifts-hipaa-penalties-for-telehealth-use-during-COVID-19.

[v] Charles Alessi, The effect of the COVID-19 epidemic on health and care – is this a portent of the ‘new normal’?, HealthcareITNews (March 31, 2020), https://www.healthcareitnews.com/blog/europe/effect-COVID-19-epidemic-health-and-care-portent-new-normal and COVID-19 and AI: Tracking a Virus, Finding a Treatment, Wall Street Journal (April 17, 2020), https://www.wsj.com/podcasts/wsj-the-future-of-everything/COVID-19-and-ai-tracking-a-virus-finding-a-treatment/f064ac83-c202-40f9-8259-426780b36f2c.

[vi] Sara Castellenos, A Cryptocurrency Technology Finds New Use Tackling Coronavirus, Wall Street Journal (April 23, 2020), https://www.wsj.com/articles/a-cryptocurrency-technology-finds-new-use-tackling-coronavirus-11587675966?mod=article_inline.

[vii] Christopher C. Krebs, MEMORANDUM ON IDENTIFICATION OF ESSENTIAL CRITICAL INFRASTRUCTURE WORKERS DURING COVID-19 RESPONSE, Cybersecurity and Infrastructure Security Agency (March 19, 2020), available at https://www.cisa.gov/sites/default/files/publications/CISA-Guidance-on-Essential-Critical-Infrastructure-Workers-1-20-508c.pdf.

[viii] Alice v. CLS Bank, 573 U.S. 208 (2014), available at https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf.

[ix] David O. Taylor, Confusing Patent Eligibility, 84 Tenn. L. Rev. 157 (2016), available at https://scholar.smu.edu/cgi/viewcontent.cgi?article=1221&context=law_faculty.

[x] 2019 Revised Patent Subject Matter Eligibility Guidance, United States Patent Office (January 7, 2019), available at https://www.federalregister.gov/documents/2019/01/07/2018-28282/2019-revised-patent-subject-matter-eligibility-guidance.

[xi] Steve Brachmann, Latest CAFC Ruling in Cleveland Clinic Case Confirms That USPTO’s 101 Guidance Holds Little Weight, IPWatchDog (April 7, 2019), https://www.ipwatchdog.com/2019/04/07/latest-cafc-ruling-cleveland-clinic-confirms-uspto-101-guidance-holds-little-weight/id=107998/.

[xii] Id.

[xiii] U.S. Supreme Court Denies Pending Patent Eligibility Petitions, Holland and Knight LLP (January 14, 2020), https://www.jdsupra.com/legalnews/u-s-supreme-court-denies-pending-patent-55501/.

[xiv] Tillis and Coons: What We Learned At Patent Reform Hearings, (June 24, 2019), available at https://www.tillis.senate.gov/2019/6/tillis-and-coons-what-we-learned-at-patent-reform-hearings.

[xv] Gene Quinn, Twisting Facts to Capitalize on COVID-19 Tragedy: Fortress v. bioMerieux, IPWatchDog (March 18, 2020), https://www.ipwatchdog.com/2020/03/18/twisting-facts-capitalize-COVID-19-tragedy-fortress-v-biomerieux/id=119941/.

[xvi] Paul R. Michel, To prepare for the next pandemic, Congress should restore patent protections for diagnostic tests, Roll Call (April 28, 2020), https://www.rollcall.com/2020/04/28/to-prepare-for-the-next-pandemic-congress-should-restore-patent-protections-for-diagnostic-tests/.

[xvii] Medical Technology Innovation Scorecard: The race for global leadership, PwC (January 2011), https://www.pwc.com/il/en/pharmaceuticals/assets/innovation-scorecard.pdf.

[xviii] Elizabeth Snell, How Health Privacy Regulations Hinder Telehealth Adoption, HealthITSecurity (May 5, 2015), https://healthitsecurity.com/news/how-health-privacy-regulations-hinder-telehealth-adoption.


Copyright (C) GLOBAL IP Counselors, LLP

For more on patentability, see the National Law Review Intellectual Property law section.

Legal Marketing and SEO Trends for 2020 Part 2: Dwell Time, EAT and Law Firm Branding

John McDougall discussed creating Deep Content, LSI (Latent Semantic Indexing), and topic clusters with us yesterday, detailing how these SEO concepts present great opportunities for law firms looking to position their attorneys as subject matter experts.  John explained how Google’s recent algorithm changes, such as BERT, which is designed to help users find true topic experts, provide a bounty of opportunities for legal marketers who properly position their lawyers’ expertise to achieve top search results. Today John goes into more detail on the concepts of webpage dwell time; expertise, authority and trustworthiness (EAT); and law firm branding.

NLR:  In your book, you talk about the intersection of “dwell time” and the idea of the “long click” as ways Google is using AI (Artificial Intelligence) to try to figure out the relationship between the search term and the webpage that term led the user to.  Do you see any other areas AI will impact SEO on the horizon?  

JM:  Google has been modifying its search engine for some time to improve its ability to understand complex queries.

Hummingbird in 2013 was a rebuild of Google’s main “engine,” partly in response to the growth of voice search.

RankBrain in 2015 added more machine learning to improve Hummingbird even further (for searches Google had never seen before and complex long-tail queries). Google said it was the third most important ranking factor, alongside content and links.

Now, with BERT in 2019/2020, Google can already understand the intent of a search much better.

Considering they keep increasing the ability to provide relevant results that match the searcher’s intent, I would assume it will change SEO, yet again…

I would expect writing tools to get much more robust. They might draw on “big data” from social profiles and on analysis of massive volumes of the world’s expert-written information, which could then be handed to a writer/attorney on a silver platter. That might help with one part of SEO.

It is exciting to watch as long as you can stay nimble, follow the “algorithm weather channel” and adjust quickly when new updates are released.

NLR:  Another core theme of your book is the role of brands, and the idea of EAT, or expertise, authority, and trustworthiness. How do these ideas enter into a keyword strategy for law firms?

JM:  As an expert in a particular field of law, you should be associated with certain keywords which show you are a thought leader in that topical area. With SEO being MUCH more competitive and complex than ever, you may need to be more realistic and pick keywords that better match what you can write about comprehensively.

This can also affect the design of law firm websites and brand positioning. If you have fifty practice areas on your home page, you might consider featuring ones where you will be doing extensive writing and SEO work.

NLR:  Can you explain the idea behind the Eric Schmidt quote: “Brands are how you sort out the cesspool,” which you discuss in your book?

JM:  There are “black hat” SEO people that are the cesspool. They do sketchy things to try to trick Google into “liking” websites. Those tactics used to work on small law firms’ websites that did not deserve rankings. Thankfully, using brand signals such as how many times people search for your brand and mention or link to your brand, Google is better able to rank sites that have real-world value beyond SEO tactics.  The book, Content Marketing and SEO for Law Firms, offers several examples of brand signals and how they apply in a law firm context.

NLR:  What audience did you write your book for and who do you think will be the best audience for your January 15th webinar? 

JM:  Anyone trying to improve their law firm website and marketing will benefit greatly from Content Marketing and SEO for Law Firms, but firms that take action on it will get the most out of it. These content and SEO actions can be small to start but the key is to be consistent.

The content marketing and SEO guide is primarily written for law firm marketers, but it’s also for attorneys because they need to have an idea of how marketing strategy can directly affect the growth of their firm. The sections the attorneys should consider as “must-reads” are marked with a gavel icon.

This webinar will have enough insight on strategy that both law firm marketers and attorneys/department heads should attend.

 

Thanks, John, for your time and insight.  For those who haven’t had the opportunity to hear John speak at various legal marketing events or read his previous publications to gain insight from his 20+ years of experience, the following webinar and his new book are great opportunities to get actionable advice on how to build an SEO roadmap for legal marketers in 2020:

Register for the January 15th complimentary webinar:  How to Develop an Effective Law Firm Content Marketing and SEO Action Plan for 2020.

Receive a sample chapter of John’s new book: Content Marketing and SEO for Law Firms.

 


Copyright ©2020 National Law Forum, LLC

Read more about marketing for law firms in the Law Office Management section of the National Law Review.

2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Retailers will offer a technology-infused shopping experience in their stores.  Even today, without using my phone or opening an app, I can experience a retailer’s products and services with store-provided technology.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store by offering an experience that we have to try.  This experiential retail technology is a bit ahead of the Amazon checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering—for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) meaningfully changes consumers’ perceptions about giving their information to companies—usually lured by financial incentives (like loyalty programs, coupons, or a free app).  I tend to think that we will continue to download apps and give information if it is convenient or cheaper for us, and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still give up their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided (trained with lots of data) decision making.  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming radiologists who looked at the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they drive.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with it.  Those kinds of advances create tension among open innovation, ethics and the law.  I do not predict that we will solve this in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail & health & more, see the National Law Review Communications, Media & Internet law page.

The U.S. Patent and Trademark Office Takes on Artificial Intelligence

If the hallmark of intelligence is problem solving, then it should be no surprise that artificial intelligence is being called on to solve complex problems that human intelligence alone cannot. Intellectual property laws exist to reward intelligence, creativity and problem solving; yet, as society adapts to a world immersed in artificial intelligence, the nation’s intellectual property laws have yet to do the same. The Constitution seems to only contemplate human inventors when it says, in Article I, Section 8, Clause 8,

The Congress shall have Power … To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

The Patent Act similarly seems to limit patents to humans when it says, at 35 U.S.C. § 100(f),

The term ‘inventor’ means the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.

In fact, as far back as 1956, the U.S. Copyright Office refused registration for a musical composition created by a computer on the basis that copyright laws only applied to human authors.

Recognizing the need to adapt, the U.S. Patent and Trademark Office (PTO) recently issued notices seeking public comments on intellectual property protection related to artificial intelligence. In August 2019, the PTO issued a Federal Register Notice, 84 Fed. Reg. 166 (Aug. 27, 2019) entitled, “Request for Comments on Patenting Artificial Intelligence Inventions.” On October 30, the PTO broadened its inquiry by issuing another Notice, 84 Fed. Reg. 210 (Oct. 30, 2019) entitled, “Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation.” Finally, on December 3, 2019, the PTO issued a third notice, extending the comment period on the earlier notices to January 10, 2020. All of the notices can be downloaded from the PTO’s web site.

The January 10, 2020 deadline for public comments on the issues raised in the notices is fast approaching. This is an important topic for the future of technology and intellectual property, and the government is plainly looking at these important issues with a clean slate.


© 2020 Vedder Price

For more on patentable inventions, see the Intellectual Property law section of the National Law Review.

Reflections on 2019 in Technology Law, and a Peek into 2020

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020.

Data: The Issues of the Year

Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention.

  • Clearly, privacy and data security-related data issues were dominant in 2019. The GDPR, CCPA and other privacy regulations garnered much consideration and resources, and with GDPR enforcement ongoing and CCPA enforcement right around the corner, the coming year will be an important one to watch. As data generation and collection technologies continued to evolve, privacy issues evolved as well.  In 2019, we saw many novel issues involving mobile, biometric, and connected car data.  Facial recognition technology generated a fair amount of litigation and presented concerns regarding the possibility of intrusive governmental surveillance (prompting some municipalities, such as San Francisco, to ban its use by government agencies).

  • Because data has proven to be so valuable, innovators continue to develop new and sometimes controversial technological approaches to collecting data. The legal issues abound.  For example, in the past year, we have been advising on the implications of an ongoing dispute between the City Attorney of Los Angeles and an app operator over geolocation data collection, as well as a settlement between the FTC and a personal email management service over access to “e-receipt” data.  We have entertained multiple questions from clients about the unsettled legal terrain surrounding web scraping and have been closely following developments in this area, including the blockbuster hiQ Ninth Circuit ruling from earlier this year. As usual, the pace of technological innovation has outpaced the law’s ability to keep up.

  • Data security is now regularly a boardroom and courtroom issue, with data breaches, phishing, ransomware attacks and identity theft (and cyberinsurance) the norm. Meanwhile, consumers are experiencing deeper and deeper “breach fatigue” with every breach notice they receive. While the U.S. government has not yet been able to put into place general national data security legislation, states and certain regulators are acting to compel data collectors to take reasonable measures to protect consumer information (e.g., New York’s newly-enacted SHIELD Act) and IoT device manufacturers to equip connected devices with security features appropriate to the nature and function of the devices (e.g., California’s IoT security law, which becomes effective January 1, 2020). Class actions over data breaches and security lapses are filed regularly, with mixed results.

  • Many organizations have focused on the opportunistic issues associated with new and emerging sources of data. They seek to use “big data” – either sourced externally or generated internally – to advance their operations.  They are focused on understanding the sources of the data and their lawful rights to use such data.  They are examining new revenue opportunities offered by the data, including the expansion of existing lines, the identification of customer trends or the creation of new businesses (including licensing anonymized data to others).

  • Moreover, data was a key asset in many corporate transactions in 2019. Across the board in M&A, private equity, capital markets, finance and some real estate transactions, data was the subject of key deal points, sometimes intensive diligence, and often difficult negotiations. Consumer data has even become a national security issue, as the Committee on Foreign Investment in the United States (CFIUS), expanded under a 2018 law, began to scrutinize more and more technology deals involving foreign investment, including those involving sensitive personal data.

I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in “big data,” privacy and data security.

Social Media under Fire

Social media platforms experienced an interesting year. The power of the medium came into even clearer focus, and not necessarily in the most flattering light. In addition to privacy issues, fake news, hate speech, bullying, political interference, revenge porn, defamation and other problems came to light. Executives of the major platforms have been on the hot seat in Washington, and there is clearly bipartisan unease with the influence of social media in our society.  Many believe that the status quo cannot continue. Social media platforms are working to build self-regulatory systems to address these thorny issues, but the work continues.  Still, amidst the bluster and criticism, it remains to be seen whether the calls to “break up” the big tech companies will come to pass or whether Congress’s ongoing debate of comprehensive data privacy reform will lead to legislation that would alter the basic practices of the major technology platforms (and in turn, many of the data collection and sharing done by today’s businesses).  We have been working with clients, advising them of their rights and obligations as platforms, as contributors to platforms, and in a number of other ways in which they may have a connection to such platforms or the content or advertising appearing on such platforms.

What does 2020 hold? Will Washington’s withering criticism of the tech world translate into any tangible legislation or regulatory efforts?  Will Section 230 of the Communications Decency Act – the law that underpins user generated content on social media and generally the availability of user generated content on the internet and apps – be curtailed? Will platforms be asked to accept more responsibility for third party content appearing on their services?

While these issues are playing out in the context of the largest social media platforms, any legislative solutions to these problems could in fact extend to others that do not have the same level of compliance resources available. Unless a legislative solution includes some type of “size of person” test or room to adapt technical safeguards to the nature and scope of a business’s activities or the sensitivity of the personal information collected, smaller providers could be shouldered with a difficult and potentially expensive compliance burden. Thus, it remains to be seen how the focus on social media, and any attempt to solve the issues it presents, may affect online communications more generally.

Quantum Leaps

Following the momentum of the passage of the National Quantum Initiative at the close of 2018, a significant level of resources has been invested into quantum computing in 2019.  This bubble of activity culminated in Google announcing a major milestone in quantum computing.  Interestingly, IBM suggests that it wasn’t quite as significant as Google claimed.  In any case, the development of quantum computing in the U.S. has progressed a great deal in 2019, and many organizations will continue to focus on it as a priority in 2020.

  • Reports state that China has dedicated billions to build a Chinese national laboratory for quantum computing, among other related R&D products, a development that has gotten the attention of Congress and the Pentagon. This may be the beginning of the 21st century’s great technological race.

  • What is at stake? The implications are huge. It is expected that ultimately, quantum computers will be able to solve complex computations exponentially faster – as much as 100 million times faster — than classic computers. The opportunities this could present are staggering.  As are the risks and dangers.  For example, for all its benefits, the same technology could quickly crack the digital security that protects online banking and shopping and secure online communications.

  • Many organizations are concerned about the advent of quantum computing. But given that it will be a reality in the future, what should you be thinking about now? While not a real threat for 2020 or the near-term thereafter, it would be wise to consider quantum computing when investing in long-term infrastructure solutions. Will quantum computing render the investment obsolete? Or will quantum computing present a security threat to that infrastructure?  It is not too early to think about these issues, and, for example, technologists have been hard at work developing quantum-proof blockchain protocols. It would at least be prudent to understand the long-term roadmap of technology suppliers to see if they have thought about quantum computing, and if so, how they see quantum computing impacting their solutions and services.

Artificial Intelligence

We have seen a significant level of deployment in the Artificial Intelligence/Machine Learning landscape this past year.  According to the Artificial Intelligence Index Report 2019, AI adoption by organizations (of at least one function or business unit) is increasing globally. Many businesses across many industries are deploying some level of AI into their businesses.  However, the same report notes that many companies employing AI solutions might not be taking steps to mitigate the risks from AI, beyond cybersecurity. We have advised clients on those risks, and in certain cases have been able to apportion exposure amongst multiple parties involved in the implementation.  In addition, we have also seen the beginning of regulation in AI, such as California’s chatbot law, New York’s recent passage of a law (S.2302) prohibiting consumer reporting agencies and lenders from using the credit scores of people in a consumer’s social network to determine that individual’s credit worthiness, and the efforts of a number of regulators to regulate the use of AI in hiring decisions.

We expect 2020 to be a year of increased adoption of AI, coupled with an increasing sense of apprehension about the technology. There is a growing concern that AI and related technologies will continue to be “weaponized” in the coming year, as the public and the government express concern over “deepfakes” (including the use of voice deepfakes of CEOs to commit fraud).  And, of course, the warnings of people like Elon Musk and Bill Gates, as they discuss AI, cannot be ignored.

Blockchain

We have been very busy in 2019 helping clients learn about blockchain technologies, including issues related to smart contracts and cryptocurrency. 2019 was largely characterized by pilots, trials, tests and other limited applications of blockchain in enterprise and infrastructure settings, as well as a significant level of activity in tokenization of assets, cryptocurrency investments, and the building of businesses related to the trading and custody of digital assets. Our blog, www.blockchainandthelaw.io, keeps readers abreast of key new developments, and we hope our readers have found our published articles on blockchain and smart contracts helpful.

Looking ahead to 2020, regulators such as the SEC, FinCEN, IRS and CFTC are still watching the cryptocurrency space closely. Gone are the days of ill-fated “initial coin offerings”; today, security token offerings, made in compliance with the securities laws, are increasingly common. Regulators are beginning to be more receptive to cryptocurrency, as exemplified by the New York State Department of Financial Services revisiting the oft-maligned “bitlicense” requirement in New York.

Beyond virtual currency, I believe some of the most exciting developments of blockchain solutions in 2020 will be in supply chain management and other infrastructure uses of blockchain. 2019 was characterized by experimentation and trial. We have seen many successes and some slower starts. In 2020, we expect to see an increase in adoption. Of course, the challenge for businesses is to really understand whether blockchain is an appropriate solution for the particular need. Contrary to some of the hype out there, blockchain is not the right fit for every technology need, and there are many circumstances where a traditional client-server model is the preferred approach. For help in evaluating whether blockchain is in fact a potential fit for a technology need, this article may be helpful.

Other 2020 Developments

Interestingly, one of the companies that has served as a form of leading indicator in the adoption of emerging technologies is Walmart.  Walmart was one of the first major companies to embrace supply chain uses of blockchain, so what is Walmart looking at for 2020? A recent Wall Street Journal article discusses its interest and investment in 5G communications and edge computing. We too have been assisting clients in those areas and expect them to be areas of significant activity in 2020.

Edge computing, which is related to “fog” computing and, in turn, to cloud computing, is, simply put, the idea of storing and processing information at the point of capture, rather than communicating that information to the cloud or a central data processing location for storage and processing. According to the WSJ article, Walmart plans on building edge computing capability for other businesses to hire (following to some degree Amazon’s model for AWS).  The article also discusses Walmart’s interest in 5G technology, which would work hand-in-hand with its edge computing network.
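To make the concept concrete, here is a minimal sketch of that idea in Python. It is purely illustrative, with made-up sensor readings and field names rather than anything drawn from Walmart or any vendor: the device reduces raw readings to a compact summary at the point of capture and transmits only that summary upstream.

```python
# Illustrative sketch of edge-side processing: summarize raw sensor
# readings locally and transmit only the compact result, rather than
# streaming every sample to the cloud. All values here are made up.
from statistics import mean

def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce raw samples to a small summary at the point of capture."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }

raw_samples = [21.4, 21.6, 22.1, 35.0, 21.5]  # e.g., a minute of sensor data
payload = summarize_at_edge(raw_samples)
print(payload)  # only this small payload is sent upstream
```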

Our experience with clients suggests that Walmart may be onto something.  Edge and fog computing, 5G and the growth of the “Internet of Things” are converging and will offer the ability for businesses to be faster, cheaper and more profitable. Of course, this convergence also ties back to the issues we discussed earlier, such as data, privacy and data security, and artificial intelligence and machine learning. In general, this convergence will further increase the technical ability to process and use data (which would conceivably require regulation featuring privacy and data security protections that are consumer-friendly, yet balanced so they do not stifle the economic and technological benefits of 5G).

This past year has presented a host of fascinating technology-based legal issues, and 2020 promises to hold more of the same.  We will continue to keep you posted!

We hope you had a good 2019, and we want to wish all of our readers a very happy and safe holiday season and a great New Year!


© 2019 Proskauer Rose LLP.

For more on technology developments, see the National Law Review Intellectual Property or Communications, Media & Internet law sections.

AI and Evidence: Let’s Start to Worry

When researchers at University of Washington pulled together a clip of a faked speech by President Obama using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like they are alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should expect to see examples of deepfakes offered on television news soon as additional support for the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI-created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays in court. Clearly, we have rules to authenticate evidence.  New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who was wrong, fooled or simply lying about the provenance of an AI-generated video, a false digital file can easily be introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules as cheap and easy deepfakes become ubiquitous. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence was a deepfake, and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the simple existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make and this is clearly one of them.” While real direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true.  Ask John DeLorean.

So I now place a call to the legal technology community.  As the software to make deepfakes continues to improve, please help us develop parallel technology to identify them. Lawyers and litigants need to be able to authenticate genuine video evidence and to strike deepfaked video as such.  I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies have already been mastered and are being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more artificial intelligence, see the National Law Review Communications, Media & Internet law page.

CMS’s Request for Information Provides Additional Signal That AI Will Revolutionize Healthcare

On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations.  CMS’s objectives for leveraging AI chiefly include identifying and preventing fraud, waste, and abuse.  The RFI specifically states CMS’s aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.”  The RFI follows last month’s White House Summit on Artificial Intelligence in Government, where over 175 government leaders and industry experts gathered to discuss how the Federal government can adopt AI “to achieve its mission and improve services to the American people.”

Advances in AI technologies have made the possibility of automated fraud detection at exponentially greater speed and scale a reality. A 2018 study by consulting firm McKinsey & Company estimated that machine learning could help US health insurance companies reduce fraud, waste, and abuse by $20-30 billion.  Indeed, in 2018 alone, improper payments accounted for roughly $31 billion of Medicare’s net costs. CMS is now looking to AI to prevent improper payments, rather than the current “pay and chase” approach to detection.

CMS currently relies on its records system to detect fraud, and humans remain the predominant detectors of fraud in the CMS system. This has resulted in inefficient detection capabilities, and these traditional fraud detection approaches have been decreasingly successful in light of the changing health care landscape.  The problem is particularly prevalent as CMS transitions to value-based payment arrangements.  In a recent blog post, CMS Administrator Seema Verma revealed that reliance on humans to detect fraud resulted in reviews of less than one percent of medical records associated with items and services billed to Medicare.  This lack of scale and speed arguably allows many improper payments to go undetected.

Fortunately, AI manufacturers and developers have been leveraging AI to detect fraud for some time in various industries. For example, the financial and insurance industries already leverage AI to detect fraudulent patterns. However, leveraging AI technology involves more than simply obtaining the technology. Before AI can be used for fraud detection, the time-consuming process of amassing large quantities of high quality, interoperable data must occur. Further, AI algorithms need to be optimized through iterative human quality reviews. Finally, testing the accuracy of the trained AI is crucial before it can be relied upon in a production system.
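As a rough illustration of that train-review-test loop (a generic sketch with synthetic stand-in data, not CMS’s system or any vendor’s product), a claims-fraud classifier might be built and evaluated along these lines:

```python
# Generic sketch of the fraud-detection workflow described above:
# amass labeled claims data, train a model, and test its accuracy on
# held-out data before any production reliance. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                   # stand-in engineered claim features
y = (rng.random(1000) < 0.05).astype(int)   # stand-in fraud labels (~5%)

# Held-out test set: accuracy must be verified before production use.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# In practice, flagged claims would go to human reviewers, whose
# determinations feed back into the next training iteration.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```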

In the RFI, CMS poses many questions to AI vendors, healthcare providers and suppliers that likely would be addressed by regulation.  Before the Federal government relies on AI to detect fraud, CMS must gain assurances that AI technologies will not return inaccurate or incorrect outputs that could negatively impact providers and patients. One key question raised involves how to assess the effectiveness of AI technology and how to measure and maintain its accuracy. The answer to this question should factor heavily into the risk calculation of CMS using AI in its fraud detection activities. Interestingly, companies seeking to automate revenue cycle management processes using AI must grapple with the same concerns.  Without adequate compliance mechanisms in place around the development, implementation and use of AI tools for these purposes, companies could face a high risk of legal liability under the Federal False Claims Act or similar fraud and abuse laws and regulations.

In addition to fraud detection, the RFI seeks advice as to whether new technology could help CMS identify “potentially problematic affiliations” in terms of business ownership and registration.  Similarly, CMS is interested in feedback on whether AI and machine learning could speed up current expensive and time-consuming Medicare claim review processes and Medicare Advantage audits.

This RFI is likely one of many signals that AI will revolutionize how healthcare is covered and paid for moving forward.  We encourage you to weigh in on this ongoing debate to help shape this new world.

Comments are due to CMS by November 20, 2019.


©2019 Epstein Becker & Green, P.C. All rights reserved.

For more CMS activities, see the National Law Review Health Law & Managed Care page.

Are Your AI Selection Tools Validated? OFCCP Provides Guidance for Validation of AI-Based Algorithms

We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”).  Our advice has been primarily based on minimizing legal risk and complying with best practices.  A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further support for validating AI-based selection procedures in compliance with the Uniform Guidelines.

On July 23, 2019, the OFCCP updated the FAQ section on its website to provide guidance on the validation of employee selection procedures.  Under the Uniform Guidelines, any selection procedure resulting in a “selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally be regarded by Federal enforcement agencies as evidence of adverse impact,” which in turn requires the validation of the selection procedure.  These validation requirements are equally applicable to any AI-based selection procedure used to make any employment decision, including hiring, termination, promotion, and demotion.

As stated in the Uniform Guidelines, and emphasized in the FAQ, the OFCCP recognizes three methods of validation:

  1. Content validation – a showing that the content of the selection procedure is representative of important aspects of performance on the job in question;

  2. Criterion-related validation – production of empirical data demonstrating that the selection procedure is predictive or significantly correlated with important aspects of job performance; and

  3. Construct validation – a showing that the procedure measures the degree to which candidates possess identifiable characteristics that have been determined to be important in successful performance on the job.

With the exception of criterion-related validation studies, which can be “transported” from other entities under certain circumstances, the Uniform Guidelines require local validation at the employer’s own facilities.

If a selection procedure adversely impacts a protected group, the employer must provide evidence of validity for the selection procedure(s) that caused the adverse impact. Thus, it is crucial that employers considering the implementation of AI-based algorithms in the selection process both conduct adverse impact studies and be prepared to produce one or more validation studies.

The new FAQ also provides important guidance on the statistical methods utilized by OFCCP in evaluating potential adverse impact.  In accordance with the Uniform Guidelines, OFCCP will analyze the Impact Ratio – the disfavored group’s selection rate divided by the favored group’s selection rate.  Any Impact Ratio of less than 0.80 (referred to as the “Four-Fifths Rule”) constitutes an initial indication of adverse impact, but OFCCP will not pursue enforcement without evidence of statistical and practical significance.  For statistical significance, the OFCCP’s standard statistical tests are the Fisher’s Exact Test (for groups with fewer than 30 subjects) and the Two Independent-Sample Binomial Z-Test (for groups with 30 or more subjects).
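As a back-of-the-envelope illustration (with hypothetical headcounts, not OFCCP tooling), the Impact Ratio and a Fisher’s Exact Test can be computed as follows; note that a ratio below 0.80 is only an initial indication, and small groups may not show statistical significance:

```python
# Sketch of the adverse-impact math described above, with hypothetical
# numbers: 5 of 20 women selected versus 10 of 20 men.
from scipy.stats import fisher_exact

selected_disfavored, total_disfavored = 5, 20
selected_favored, total_favored = 10, 20

# Impact Ratio: disfavored group's selection rate over the favored group's.
ratio = ((selected_disfavored / total_disfavored)
         / (selected_favored / total_favored))
print(f"Impact ratio: {ratio:.2f}")  # 0.50 < 0.80 -> initial indication

# Fisher's Exact Test, the OFCCP standard for groups under 30 subjects.
# The 2x2 table rows are (selected, not selected) for each group.
table = [[selected_disfavored, total_disfavored - selected_disfavored],
         [selected_favored, total_favored - selected_favored]]
_, p_value = fisher_exact(table)
print(f"Fisher's exact p-value: {p_value:.3f}")
```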

With the publication of this new FAQ, employers – and particularly federal contractors – should be sure to evaluate their use of AI-based algorithms and properly validate all selection procedures under the Uniform Guidelines.  Moreover, although not addressed in the OFCCP’s new FAQ, employers should also ensure that their AI-based algorithms are compliant with all other state and federal laws and regulations.

©2019 Epstein Becker & Green, P.C. All rights reserved.

Drive.ai Introduces External Communication Panels to Talk to Public

Self-driving cars are inherently presented with a challenge—communicating with their surroundings. However, Drive.ai has attempted to address that challenge by equipping its self-driving cars with external communication panels that convey a variety of messages for drivers, pedestrians, cyclists and everyone else on the road. Drive.ai CEO Bijit Halder said, “Our external communication panels are intended to mimic what an interaction with a human driver would look like. Normally you’d make eye contact, wave someone along, or otherwise signal your intentions. With [these] panels everyone on the road is kept in the loop of the car’s intentions, ensuring maximum comfort and trust, even for people interacting with a self-driving car for the first time.” To help the company build its platform, one of the company’s founders recorded herself driving around normally and analyzed all the interactions she had with other drivers, including eye contact and hand motions.

Specifically, the vehicle uses lidar sensors, cameras, and radar to determine whether any pedestrians are in or near a crosswalk as it approaches. If the vehicle detects a pedestrian in its path, the car begins to slow down and displays the message “Stopping for you.” Once the vehicle comes to a complete stop, it displays the message “Waiting for you.” When no more pedestrians are detected, the vehicle displays the message “Going now, please wait” to signal other pedestrians to wait before crossing.
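The reported messaging sequence reads naturally as a small state machine. Below is a hypothetical sketch of that logic (the state names, message mapping, and detection flag are our own inventions for illustration, not Drive.ai’s code):

```python
# Hypothetical state machine for the crosswalk messaging sequence
# described above; not Drive.ai's actual software.
from enum import Enum, auto

class PanelState(Enum):
    DRIVING = auto()
    SLOWING = auto()
    STOPPED = auto()
    DEPARTING = auto()

MESSAGES = {
    PanelState.SLOWING:   "Stopping for you",
    PanelState.STOPPED:   "Waiting for you",
    PanelState.DEPARTING: "Going now, please wait",
}

def next_state(state: PanelState, pedestrian_detected: bool,
               speed_mps: float) -> PanelState:
    """Advance the panel state from fused lidar/camera/radar detections."""
    if state is PanelState.DRIVING and pedestrian_detected:
        return PanelState.SLOWING        # begin braking, post the message
    if state is PanelState.SLOWING and speed_mps == 0.0:
        return PanelState.STOPPED        # full stop at the crosswalk
    if state is PanelState.STOPPED and not pedestrian_detected:
        return PanelState.DEPARTING      # warn others before pulling away
    return state

# Example: a pedestrian enters the crosswalk as the car approaches.
state = next_state(PanelState.DRIVING, pedestrian_detected=True, speed_mps=8.0)
print(MESSAGES.get(state, ""))  # -> "Stopping for you"
```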

Drive.ai continues to conduct research to determine the best means of communication, including the best location for such displays, which is currently right above the wheels based on its previous studies. Halder said, “The more you can effectively communicate how a self-driving car will act, the more confidence the public will have in the technology, and that trust will lead to adoption on a broader scale.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on vehicle technology advances, see the Utilities & Transport page on the National Law Review.