2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Retailers will offer technology-infused shopping experiences in their stores.  Even today, I can experience a retailer’s products and services with store-provided technology, without pulling out my phone or opening an app.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store by offering us an experience we have to try.  This experiential retail technology is a bit ahead of the Amazon checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering, for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) will meaningfully change consumers’ perception about giving their information to companies, usually lured by financial incentives (like loyalty programs, coupons, or a free app).  I tend to think that we will continue to download apps and give information if it is convenient or cheaper for us, and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still give up their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided decision making (trained with lots of data).  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming radiologists who looked at the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they drive.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with them.  Those kinds of advances create tension among open innovation, ethics and the law.  I do not predict that we will solve this in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail, health, and more, see the National Law Review Communications, Media & Internet law page.

The U.S. Patent and Trademark Office Takes on Artificial Intelligence

If the hallmark of intelligence is problem solving, then it should be no surprise that artificial intelligence is being called on to solve complex problems that human intelligence alone cannot. Intellectual property laws exist to reward intelligence, creativity and problem solving; yet, as society adapts to a world immersed in artificial intelligence, the nation’s intellectual property laws have yet to do the same. The Constitution seems to only contemplate human inventors when it says, in Article I, Section 8, Clause 8,

The Congress shall have Power … To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

The Patent Act similarly seems to limit patents to humans when it says, at 35 U.S.C. § 100(f),

The term ‘inventor’ means the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.

In fact, as far back as 1956, the U.S. Copyright Office refused registration for a musical composition created by a computer on the basis that copyright laws only applied to human authors.

Recognizing the need to adapt, the U.S. Patent and Trademark Office (PTO) recently issued notices seeking public comments on intellectual property protection related to artificial intelligence. In August 2019, the PTO issued a Federal Register Notice, 84 Fed. Reg. 166 (Aug. 27, 2019) entitled, “Request for Comments on Patenting Artificial Intelligence Inventions.” On October 30, the PTO broadened its inquiry by issuing another Notice, 84 Fed. Reg. 210 (Oct. 30, 2019) entitled, “Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation.” Finally, on December 3, 2019, the PTO issued a third notice, extending the comment period on the earlier notices to January 10, 2020. All of the notices can be downloaded from the PTO’s web site.

The January 10, 2020 deadline for public comments on the issues raised in the notices is fast approaching. This is an important topic for the future of technology and intellectual property, and the government is plainly looking at these important issues with a clean slate.


© 2020 Vedder Price

For more on patentable inventions, see the Intellectual Property law section of the National Law Review.

Reflections on 2019 in Technology Law, and a Peek into 2020

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020.

Data: The Issues of the Year

Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention.

  • Clearly, privacy and data security-related data issues were dominant in 2019. The GDPR, CCPA and other privacy regulations garnered much consideration and resources, and with GDPR enforcement ongoing and CCPA enforcement right around the corner, the coming year will be an important one to watch. As data generation and collection technologies continued to evolve, privacy issues evolved as well.  In 2019, we saw many novel issues involving mobile, biometric and connected car data.  Facial recognition technology generated a fair amount of litigation, and presented concerns regarding the possibility of intrusive governmental surveillance (prompting some municipalities, such as San Francisco, to ban its use by government agencies).

  • Because data has proven to be so valuable, innovators continue to develop new and sometimes controversial technological approaches to collecting data. The legal issues abound.  For example, in the past year, we have been advising on the implications of an ongoing dispute between the City Attorney of Los Angeles and an app operator over geolocation data collection, as well as a settlement between the FTC and a personal email management service over access to “e-receipt” data.  We have entertained multiple questions from clients about the unsettled legal terrain surrounding web scraping and have been closely following developments in this area, including the blockbuster hiQ Ninth Circuit ruling from earlier this year. As usual, the pace of technological innovation has outpaced the ability of the law to keep up.

  • Data security is now regularly a boardroom and courtroom issue, with data breaches, phishing, ransomware attacks and identity theft (and cyberinsurance) the norm. Meanwhile, consumers are experiencing deeper and deeper “breach fatigue” with every breach notice they receive. While the U.S. government has not yet been able to put into place general national data security legislation, states and certain regulators are acting to compel data collectors to take reasonable measures to protect consumer information (e.g., New York’s newly-enacted SHIELD Act) and IoT device manufacturers to equip connected devices with security features appropriate to the nature and function of the devices (e.g., California’s IoT security law, which becomes effective January 1, 2020). Class actions over data breaches and security lapses are filed regularly, with mixed results.

  • Many organizations have focused on the opportunities associated with new and emerging sources of data. They seek to use “big data” – either sourced externally or generated internally – to advance their operations.  They are focused on understanding the sources of the data and their lawful rights to use such data.  They are examining new revenue opportunities offered by the data, including the expansion of existing lines, the identification of customer trends or the creation of new businesses (including licensing anonymized data to others).

  • Moreover, data was a key asset in many corporate transactions in 2019. Across the board in M&A, private equity, capital markets, finance and some real estate transactions, data was the subject of key deal points, sometimes intensive diligence, and often difficult negotiations. Consumer data has even become a national security issue, as the Committee on Foreign Investment in the United States (CFIUS), expanded under a 2018 law, began to scrutinize more and more technology deals involving foreign investment, including those involving sensitive personal data.

I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in “big data,” privacy and data security.

Social Media under Fire

Social media platforms experienced an interesting year. The power of the medium came into even clearer focus, and not necessarily in the most flattering light. In addition to privacy issues, fake news, hate speech, bullying, political interference, revenge porn, defamation and other problems came to light. Executives of the major platforms have been on the hot seat in Washington, and there is clearly bipartisan unease with the influence of social media in our society.  Many believe that the status quo cannot continue. Social media platforms are working to build self-regulatory systems to address these thorny issues, but the work continues.  Still, amidst the bluster and criticism, it remains to be seen whether the calls to “break up” the big tech companies will come to pass or whether Congress’s ongoing debate over comprehensive data privacy reform will lead to legislation that would alter the basic practices of the major technology platforms (and, in turn, much of the data collection and sharing done by today’s businesses).  We have been working with clients, advising them of their rights and obligations as platforms, as contributors to platforms, and in a number of other ways in which they may have a connection to such platforms or to the content or advertising appearing on them.

What does 2020 hold? Will Washington’s withering criticism of the tech world translate into any tangible legislation or regulatory efforts?  Will Section 230 of the Communications Decency Act – the law that underpins user-generated content on social media and, more generally, the availability of user-generated content on the internet and in apps – be curtailed? Will platforms be asked to accept more responsibility for third-party content appearing on their services?

While these issues are playing out in the context of the largest social media platforms, any legislative solutions to these problems could in fact extend to others that do not have the same level of compliance resources available. Unless a legislative solution includes some type of “size of person” test or room to adapt technical safeguards to the nature and scope of a business’s activities or the sensitivity of the personal information collected, smaller providers could be shouldered with a difficult and potentially expensive compliance burden. Thus, it remains to be seen how the focus on social media, and any attempt to solve the issues it presents, may affect online communications more generally.

Quantum Leaps

Following the momentum of the passage of the National Quantum Initiative at the close of 2018, a significant level of resources was invested in quantum computing in 2019.  This flurry of activity culminated in Google announcing a major milestone in quantum computing.  Interestingly, IBM suggests that it wasn’t quite as significant as Google claimed.  In any case, the development of quantum computing in the U.S. progressed a great deal in 2019, and many organizations will continue to focus on it as a priority in 2020.

  • Reports state that China has dedicated billions to building a Chinese national laboratory for quantum computing, among other related R&D projects, a development that has gotten the attention of Congress and the Pentagon. This may be the beginning of the 21st century’s great technological race.

  • What is at stake? The implications are huge. It is expected that ultimately, quantum computers will be able to solve complex computations exponentially faster – as much as 100 million times faster — than classic computers. The opportunities this could present are staggering.  As are the risks and dangers.  For example, for all its benefits, the same technology could quickly crack the digital security that protects online banking and shopping and secure online communications.

  • Many organizations are concerned about the advent of quantum computing. But given that it will be a reality in the future, what should you be thinking about now? While not a real threat for 2020 or the near term thereafter, it would be wise to consider quantum computing if you are anticipating investing in long-term infrastructure solutions. Will quantum computing render the investment obsolete? Or will quantum computing present a security threat to that infrastructure?  It is not too early to think about these issues; for example, technologists have been hard at work developing quantum-proof blockchain protocols. It would at least be prudent to understand the long-term roadmap of technology suppliers to see if they have thought about quantum computing, and if so, how they see quantum computing impacting their solutions and services.

Artificial Intelligence

We have seen a significant level of deployment in the Artificial Intelligence/Machine Learning landscape this past year.  According to the Artificial Intelligence Index Report 2019, AI adoption by organizations (of at least one function or business unit) is increasing globally. Many businesses across many industries are deploying some level of AI into their businesses.  However, the same report notes that many companies employing AI solutions might not be taking steps to mitigate the risks from AI, beyond cybersecurity. We have advised clients on those risks, and in certain cases have been able to apportion exposure amongst the multiple parties involved in the implementation.  In addition, we have also seen the beginning of regulation of AI, such as California’s chatbot law, New York’s recent passage of a law (S.2302) prohibiting consumer reporting agencies and lenders from using the credit scores of people in a consumer’s social network to determine that individual’s creditworthiness, or the efforts of a number of regulators to regulate the use of AI in hiring decisions.

We expect 2020 to be a year of increased adoption of AI, coupled with an increasing sense of apprehension about the technology. There is a growing concern that AI and related technologies will continue to be “weaponized” in the coming year, as the public and the government express concern over “deepfakes” (including the use of voice deepfakes of CEOs to commit fraud).  And, of course, the warnings of people like Elon Musk and Bill Gates, as they discuss AI, cannot be ignored.

Blockchain

We have been very busy in 2019 helping clients learn about blockchain technologies, including issues related to smart contracts and cryptocurrency. 2019 was largely characterized by pilots, trials, tests and other limited applications of blockchain in enterprise and infrastructure settings, as well as a significant level of activity in tokenization of assets, cryptocurrency investments, and the building of businesses related to the trading and custody of digital assets. Our blog, www.blockchainandthelaw.io, keeps readers abreast of key new developments, and we hope our readers have found our published articles on blockchain and smart contracts helpful.

Looking ahead to 2020, regulators such as the SEC, FinCEN, IRS and CFTC are still watching the cryptocurrency space closely. Gone are the days of ill-fated “initial coin offerings”; today, security token offerings made in compliance with the securities laws are increasingly common. Regulators are beginning to be more receptive to cryptocurrency, as exemplified by the New York State Department of Financial Services revisiting the oft-maligned “BitLicense” requirement in New York.

Beyond virtual currency, I believe some of the most exciting developments of blockchain solutions in 2020 will be in supply chain management and other infrastructure uses of blockchain. 2019 was characterized by experimentation and trial. We have seen many successes and some slower starts. In 2020, we expect to see an increase in adoption. Of course, the challenge for businesses is to really understand whether blockchain is an appropriate solution for the particular need. Contrary to some of the hype out there, blockchain is not the right fit for every technology need, and there are many circumstances where a traditional client-server model is the preferred approach. For help in evaluating whether blockchain is in fact a potential fit for a technology need, this article may be helpful.

Other 2020 Developments

Interestingly, one of the companies that has served as a form of leading indicator in the adoption of emerging technologies is Walmart.  Walmart was one of the first major companies to embrace supply chain uses of blockchain, so what is Walmart looking at for 2020? A recent Wall Street Journal article discusses its interest and investment in 5G communications and edge computing. We too have been assisting clients in those areas, and expect them to be active areas in 2020.

Edge computing, which is related to “fog” computing and, in turn, to cloud computing, is, simply put, the idea of storing and processing information at the point of capture, rather than communicating that information to the cloud or a central data processing location for storage and processing. According to the WSJ article, Walmart plans on building edge computing capability for other businesses to hire (following, to some degree, Amazon’s model for AWS).  The article also talks about Walmart’s interest in 5G technology, which would work hand-in-hand with its edge computing network.

Our experience with clients suggests that Walmart may be onto something.  Edge and fog computing, 5G and the growth of the “Internet of Things” are converging and will offer businesses the ability to be faster, cheaper and more profitable. Of course, this convergence also ties back to the issues we discussed earlier, such as data, privacy and data security, artificial intelligence and machine learning. In general, this convergence will further increase the technical ability to process and use data (which would conceivably require regulation featuring privacy and data security protections that are consumer-friendly, yet balanced so they do not stifle the economic and technological benefits of 5G).

This past year has presented a host of fascinating technology-based legal issues, and 2020 promises to hold more of the same.  We will continue to keep you posted!

We hope you had a good 2019, and we want to wish all of our readers a very happy and safe holiday season and a great New Year!


© 2019 Proskauer Rose LLP.

For more on technology developments, see the National Law Review Intellectual Property or Communications, Media & Internet law sections.

AI and Evidence: Let’s Start to Worry

When researchers at University of Washington pulled together a clip of a faked speech by President Obama using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like they are alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should expect to see examples of deepfakes offered on television news soon as additional support of the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays out in court. Clearly, we have rules to authenticate evidence.  New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who is wrong, fooled or simply lying about the provenance of an AI-generated video, a false digital file can easily be introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules as cheap and easy deepfakes become ubiquitous. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence was a deepfake, and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the simple existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make and this is clearly one of them.” While real direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true.  Ask John DeLorean.

So I now place a call to the legal technology community.  As the software to make deepfakes continues to improve, please help us develop parallel technology to identify them. Lawyers and litigants need to be able to authenticate genuine video evidence and to strike deepfaked video as such.  I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies have already been mastered and are being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on artificial intelligence, see the National Law Review Communications, Media & Internet law page.

CMS’s Request for Information Provides Additional Signal That AI Will Revolutionize Healthcare

On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations.  CMS’ objectives to leverage AI chiefly include identifying and preventing fraud, waste, and abuse.  The RFI specifically states CMS’ aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.”  The RFI follows last month’s White House Summit on Artificial Intelligence in Government, where over 175 government leaders and industry experts gathered to discuss how the Federal government can adopt AI “to achieve its mission and improve services to the American people.”

Advances in AI technologies have made the possibility of automated fraud detection at exponentially greater speed and scale a reality. A 2018 study by consulting firm McKinsey & Company estimated that machine learning could help US health insurance companies reduce fraud, waste, and abuse by $20-30 billion.  Indeed, in 2018 alone, improper payments accounted for roughly $31 billion of Medicare’s net costs. CMS is now looking to AI to prevent improper payments, rather than the current “pay and chase” approach to detection.

CMS currently relies on its records system to detect fraud, and humans remain the predominant detectors of fraud in the CMS system. This has resulted in inefficient detection capabilities, and these traditional fraud detection approaches have been decreasingly successful in light of the changing health care landscape.  This problem is particularly prevalent as CMS transitions to value-based payment arrangements.  In a recent blog post, CMS Administrator Seema Verma revealed that reliance on humans to detect fraud resulted in reviews of less than one percent of medical records associated with items and services billed to Medicare.  This lack of scale and speed arguably allows many improper payments to go undetected.

Fortunately, AI manufacturers and developers have been leveraging AI to detect fraud for some time in various industries. For example, the financial and insurance industries already leverage AI to detect fraudulent patterns. However, leveraging AI technology involves more than simply obtaining the technology. Before AI can be used for fraud detection, the time-consuming process of amassing large quantities of high quality, interoperable data must occur. Further, AI algorithms need to be optimized through iterative human quality reviews. Finally, testing the accuracy of the trained AI is crucial before it can be relied upon in a production system.

In the RFI, CMS poses many questions to AI vendors, healthcare providers and suppliers that likely would be addressed by regulation.  Before the Federal government relies on AI to detect fraud, CMS must gain assurances that AI technologies will not return inaccurate or incorrect outputs that could negatively impact providers and patients. One key question raised involves how to assess the effectiveness of AI technology and how to measure and maintain its accuracy. The answer to this question should factor heavily into the risk calculation of CMS using AI in its fraud detection activities. Interestingly, companies seeking to automate revenue cycle management processes using AI have to grapple with the same concerns.  Without adequate compliance mechanisms in place around the development, implementation and use of AI tools for these purposes, companies could be subject to a high risk of legal liability under the Federal False Claims Act or similar fraud and abuse laws and regulations.

In addition to fraud detection, the RFI seeks advice as to whether new technology could help CMS identify “potentially problematic affiliations” in terms of business ownership and registration.  Similarly, CMS is interested in feedback on whether AI and machine learning could speed up the current expensive and time-consuming Medicare claim review processes and Medicare Advantage audits.

It is likely that this RFI is one of many signals that AI will revolutionize how healthcare is covered and paid for moving forward.  We encourage you to weigh in on this on-going debate to help shape this new world.

Comments are due to CMS by November 20, 2019.


©2019 Epstein Becker & Green, P.C. All rights reserved.

For more on CMS activities, see the National Law Review Health Law & Managed Care page.

Are Your AI Selection Tools Validated? OFCCP Provides Guidance for Validation of AI-Based Algorithms

We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”).  Our advice has been primarily based on minimizing legal risk and complying with best practices.  A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further support for validating AI-based selection procedures in compliance with the Uniform Guidelines.

On July 23, 2019, the OFCCP updated the FAQ section on its website to provide guidance on the validation of employee selection procedures.  Under the Uniform Guidelines, any selection procedure resulting in a “selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally be regarded by Federal enforcement agencies as evidence of adverse impact,” which in turn requires the validation of the selection procedure.  These validation requirements are equally applicable to any AI-based selection procedure used to make any employment decision, including hiring, termination, promotion, and demotion.

As stated in the Uniform Guidelines, and emphasized in the FAQ, the OFCCP recognizes three methods of validation:

  1. Content validation – a showing that the content of the selection procedure is representative of important aspects of performance on the job in question;

  2. Criterion-related validation – production of empirical data demonstrating that the selection procedure is predictive or significantly correlated with important aspects of job performance; and

  3. Construct validation – a showing that the procedure measures the degree to which candidates possess identifiable characteristics that have been determined to be important in successful performance on the job.

With the exception of criterion-related validation studies, which can be “transported” from other entities under certain circumstances, the Uniform Guidelines require local validation at the employer’s own facilities.

If a selection procedure adversely impacts a protected group, the employer must provide evidence of validity for the selection procedure(s) that caused the adverse impact. Thus, it is crucial that employers considering the implementation of AI-based algorithms in the selection process both conduct adverse impact studies and be prepared to produce one or more validation studies.

The new FAQ also provides important guidance on the statistical methods utilized by OFCCP in evaluating potential adverse impact.  In accordance with the Uniform Guidelines, OFCCP will analyze the Impact Ratio – the disfavored group’s selection rate divided by the favored group’s selection rate.  Any Impact Ratio of less than 0.80 (referred to as the “Four-Fifths Rule”) constitutes an initial indication of adverse impact, but OFCCP will not pursue enforcement without evidence of statistical and practical significance.  For statistical significance, the OFCCP’s standard statistical tests are Fisher’s Exact Test (for groups with fewer than 30 subjects) and the Two Independent-Sample Binomial Z-Test (for groups with 30 or more subjects).
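
For illustration only, the Impact Ratio and the two significance tests named in the FAQ can be computed directly from selection counts.  The sketch below is a hypothetical Python example (the group sizes, selection counts, and helper function names are invented for illustration, and it assumes SciPy is available); it is not OFCCP-supplied tooling or a substitute for a proper validation study.

```python
# Illustrative sketch of the adverse-impact math described above; all numbers are hypothetical.
from math import sqrt
from scipy.stats import fisher_exact, norm

def impact_ratio(sel_disfavored, n_disfavored, sel_favored, n_favored):
    """Disfavored group's selection rate divided by the favored group's rate."""
    return (sel_disfavored / n_disfavored) / (sel_favored / n_favored)

def two_sample_z(sel_a, n_a, sel_b, n_b):
    """Two independent-sample binomial Z-test on selection rates (pooled variance)."""
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (sel_a / n_a - sel_b / n_b) / se
    return z, 2 * norm.sf(abs(z))  # two-tailed p-value

# Hypothetical example: 18 of 40 applicants selected in one group vs. 30 of 45 in another.
ir = impact_ratio(18, 40, 30, 45)
print(f"Impact Ratio: {ir:.2f}")  # below 0.80 is an initial indication of adverse impact

# Fisher's Exact Test is OFCCP's standard for groups with fewer than 30 subjects;
# the Z-test applies to groups of 30 or more.  Both are shown here for completeness.
_, p_fisher = fisher_exact([[18, 40 - 18], [30, 45 - 30]])
z, p_z = two_sample_z(18, 40, 30, 45)
print(f"Fisher exact p = {p_fisher:.3f}; Z = {z:.2f}, p = {p_z:.3f}")
```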

With the publication of this new FAQ, employers – and particularly federal contractors – should be sure to evaluate their use of AI-based algorithms and properly validate all selection procedures under the Uniform Guidelines.  Moreover, although not addressed in the OFCCP’s new FAQ, employers should also ensure that their AI-based algorithms are compliant with all other state and federal laws and regulations.

©2019 Epstein Becker & Green, P.C. All rights reserved.

Drive.ai Introduces External Communication Panels to Talk to Public

Self-driving cars are inherently presented with a challenge—communicating with their surroundings. However, Drive.ai has attempted to address that challenge by equipping its self-driving cars with external communication panels that convey a variety of messages for drivers, pedestrians, cyclists and everyone else on the road. Drive.ai CEO Bijit Halder said, “Our external communication panels are intended to mimic what an interaction with a human driver would look like. Normally you’d make eye contact, wave someone along, or otherwise signal your intentions. With [these] panels everyone on the road is kept in the loop of the car’s intentions, ensuring maximum comfort and trust, even for people interacting with a self-driving car for the first time.” To help the company build its platform, one of the company’s founders recorded herself driving around normally and analyzed all the interactions she had with other drivers, including eye contact and hand motions.

Specifically, the panel uses lidar sensors, cameras, and radar to determine if any pedestrians are in or near a crosswalk as the vehicle approaches it. If the vehicle detects a pedestrian in its path, the car begins to slow down and displays the message “Stopping for you.” Once the vehicle comes to a complete stop, it displays the message “Waiting for you.” When no more pedestrians are detected, the vehicle displays the message “Going now, please wait” to let other pedestrians know to wait before crossing.
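
The message behavior described above amounts to a small state machine keyed off pedestrian detection and vehicle motion.  The following is a minimal, hypothetical sketch of that logic (the function name and inputs are invented for illustration; only the displayed message strings come from the article), not Drive.ai’s actual implementation.

```python
# Hypothetical sketch of the crosswalk message selection described above.
def panel_message(pedestrian_detected: bool, vehicle_stopped: bool) -> str:
    if pedestrian_detected and not vehicle_stopped:
        return "Stopping for you"        # slowing for a detected pedestrian
    if pedestrian_detected and vehicle_stopped:
        return "Waiting for you"         # fully stopped at the crosswalk
    return "Going now, please wait"      # no pedestrians detected; resuming travel

# Example: vehicle approaching a crosswalk with a pedestrian present.
print(panel_message(pedestrian_detected=True, vehicle_stopped=False))
```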

Drive.ai continues to conduct research to determine the best means of communication including the best location for such communications, which is currently right above the wheels based on its previous studies. Halder said, “The more you can effectively communicate how a self-driving car will act, the more confidence the public will have in the technology, and that trust will lead to adoption on a broader scale.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on vehicle technology advances, see the Utilities & Transport page on the National Law Review.

Forget About Fake News, How About Fake People? California Starts Regulating Bots as of July 1, 2019

California SB 1001, Cal. Bus. & Prof. Code § 17940, et seq., takes effect July 1, 2019. The law regulates the online use of “bots” – computer programs that interact with a human being and give the appearance of being an actual person – by requiring disclosure when bots are being used.

The law applies in limited cases of online communications to (a) sell commercial goods or services, or (b) influence a vote in an election. Specifically, the law prohibits using a bot in those circumstances, “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” Disclosure of the existence of the bot avoids liability.

As more and more companies use bots, artificial intelligence, and voice recognition technology to provide customer service in online transactions, businesses will need to consider carefully how and when to disclose that their helpful (and often anthropomorphized) digital “assistants” are not really human beings.  In a true customer-service situation where the bot is fielding questions about warranty service, product returns, etc., there may be no duty. But a line could be crossed if any upsell is included, such as “Are you interested to learn about our latest line of products?”

Fortunately, the law doesn’t expressly create a private cause of action against violators. However, it remains to be seen whether lawsuits will nevertheless be brought under general laws prohibiting unfair or deceptive trade practices, alleging failure to disclose the existence of a bot.

Also, an exemption applies for online “platforms,” defined as: “any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.”  Accordingly, operators of very large online sites or services are exempt.

For marketers who use bots in customer communications – and who are not large enough to take advantage of the “platform” exemption – the time is now to review those practices and decide whether disclosures may be appropriate.

©2019 Greenberg Traurig, LLP. All rights reserved.
For more on Internet & Communications, see the National Law Review page on Communications, Media & Internet.