Are Tech Workers Considering Unionizing In The Wake Of COVID-19?

Big tech companies by and large have remained union-free over the years unlike their peers in other industries such as retail and manufacturing. However, earlier this year – and before the COVID-19 pandemic upended workplaces across America – unions scored their first major organizing victory in the tech sector when employees at Kickstarter voted to form a union. According to at least one recent report, more tech company workers may soon be following suit.

The Teamsters, the Communications Workers of America, and the Office and Professional Employees International Union all reported an uptick in inquiries from non-union employees about the prospect of unionizing the companies they work for, including in the tech and gig economy sectors. One of the reasons cited by these workers was a feeling that not enough is being done to protect employees against the spread of COVID-19, particularly those who work in e-commerce fulfillment centers or drive for ride-sharing apps. There also was concern among employees who were, at least at one point, denied remote work arrangements they believed their jobs were well suited for.

It remains to be seen whether organized labor will be able to augment its numbers based on these workers’ concerns. Several things may complicate any such efforts, including unprecedented layoffs and an almost singular focus by people across the nation on the ongoing pandemic itself.

To the extent unions try to capitalize on the unrest, there are many reasons employers facing organizing attempts should be concerned. For example, one of the most effective tools a company can use to stave off a unionization attempt is the large, all-employee meeting, where leaders of the organization communicate directly to the workforce why forming a union isn’t in the company’s or employees’ best interests. In an era where social distancing is a necessity, such meetings – at least in person – likely won’t be a viable option. In addition, mail-in ballot union elections – which are less preferred than live, secret-ballot voting booths – may become the standard as long as social distancing requirements remain in effect.

Accordingly, employers desiring to remain union-free should give thought to what talking points, materials, and strategies – as well as communications channels – they have available to them now around this issue. Waiting to do so until after a union petition hits may place them at a significant disadvantage.


© 2020 BARNES & THORNBURG LLP

For more industries impacted by COVID-19, see the National Law Review Coronavirus News section.

Patent Trial and Appeal Board Provides Guidance on Timing of Requests for Certificates of Correction During PTAB Proceedings

The Patent Trial and Appeal Board (“Board”) recently issued a decision in Emerson Electric Co. v. Sipco, LLC (IPR2016-00984) (Jan. 24, 2020) that illustrates some important points for patent practitioners to consider when requesting a certificate of correction for a patent subject to a Petition for Inter Partes Review before the United States Patent and Trademark Office (USPTO).

The Board’s recent decision was the result of numerous proceedings before both the Board and the Federal Circuit, which began with Emerson Electric Co. filing a Petition for Inter Partes Review against Sipco, LLC (Patent Owner) on April 29, 2016.  The Board issued a final written decision that found all challenged claims unpatentable under at least one ground on October 25, 2017.  This decision was appealed by the Patent Owner to the Federal Circuit, and on appeal the Patent Owner requested that the Federal Circuit remand the case to the Board based on a certificate of correction that had issued for the patent in question (U.S. Patent No. 8,754,780; the “’780 patent”) after the date of the Board’s final written decision.  (Id.)  The Federal Circuit granted this request, and remanded the matter with an Order requesting the Board to address the issue of “what, if any, impact the certificate of correction had” on the Board’s final written decision.

On remand, the Board found that the earliest priority date to which the challenged claims of the ’780 patent were entitled was April 2, 2013, and refused to recognize the belatedly issued certificate of correction that would have changed the earliest priority to an earlier date in favor of the Patent Owner.  This was because the certificate of correction did not issue until March 27, 2018, which was five months after the Board’s final written decision and three months after the Patent Owner appealed to the Federal Circuit.  (Id. at 5.)

A patent owner is permitted to request a certificate of correction in accordance with 37 C.F.R. § 1.323, which allows patent owners to ask the Director to make corrections to “mistakes” in a patent.  See 35 U.S.C. § 255.  In this case, the Board noted a series of mistakes and oversights by the Patent Owner in seeking a proper certificate of correction.  Although the Patent Owner filed a request for a certificate of correction with the USPTO Petitions Branch about one month after the filing date of the Petition for Inter Partes Review, the Patent Owner failed to seek permission from the Board to do so, either before or after this filing.  The USPTO dismissed the Patent Owner’s first request for correction, and although the Patent Owner’s second request was thereafter granted, the second request had no chain of priority to the first, and so the USPTO Petitions Branch treated it as a new request for correction.  (Id. at 7-8.)

The Patent Owner then made a third request for a certificate of correction, and this time made its request to the Board, but this request was ultimately denied.  (Id. at 8.)  The Board later permitted the Patent Owner to submit a request for a certificate of correction to the USPTO Petitions Branch, and the Patent Owner did so.  However, the USPTO Petitions Branch found there was no chain of priority to the first certificate of correction request, because the Patent Owner had again failed to “make a reference to the first (earliest) application and every intermediate application.”  (Id. at 9.)  Finally, and without any motion to the Board, the Patent Owner submitted a final request for a certificate of correction to the USPTO Petitions Branch, and this request was granted – leading to a certificate of correction issuing on March 27, 2018 that set forth a priority claim material to the final written decision of October 25, 2017.

The Board determined that the belatedly issued certificate of correction came well after its final written decision and should not be given retroactive effect so as to alter that decision.  In considering both sides’ arguments, the Board turned to analyzing the language of 35 U.S.C. § 255, and whether or not it permits retroactive effect of a certificate of correction in an Inter Partes Review proceeding.  The Board ultimately found that section 255 does not authorize such a retroactive effect.  In other words, under the facts presented, the Patent Owner’s corrections of its mistakes in priority claims through a certificate of correction issued after the date of the Board’s final written decision did not apply back to the time when the Petition for Inter Partes Review was filed.  Further, the Board made clear that “once a petition for inter partes review of a patent has been filed, the Board may exercise jurisdiction over a request for a certificate of correction, and may stay the request,” citing to 35 U.S.C. § 315(d), 37 C.F.R. §§ 42.3 and 42.122.  (Id. at 22.)  The Board noted that its decision that the finally issued certificate of correction had no impact on its earlier final written decision was consistent with the Board’s exercise of exclusive USPTO jurisdiction over a patent once Inter Partes Review is instituted.  (Id. at 22-23.)

Key Takeaway

Although non-precedential, the Board’s decision illustrates that it is best to file a request for a certificate of correction of a patent before Inter Partes Review is instituted.  After institution, the Board has discretion to stay, and effectively deny, a patent owner’s ability to request a certificate of correction that can determine the outcome of the Inter Partes Review proceeding before the USPTO.


© 2020 Brinks Gilson Lione. All Rights Reserved.

For more PTAB decisions, see the National Law Review Intellectual Property law section.

Union Launches National Organizing Effort in Gaming and Tech Industries

The Communications Workers of America (CWA) has begun a nationwide union-organizing campaign targeting game and tech industry employees, in partnership with Game Workers Unite! (GWU), a so-called “grass-roots” worker group founded in Southern California in 2018 to spur unionization in the gaming industry. As here, such groups typically are founded and funded by established labor organizations.

The idea for the organizing effort is the result of discussions between the CWA and GWU over the past months. In addition, CWA Canada is partnering with the GWU chapter in Toronto. The CWA has used similar partnerships with other activist groups, most recently teaming up with the Committee for Better Banks to attempt to organize banking sector employees.

Organizing is being spearheaded by Emma Kinema, a co-founder of GWU, and Wes McEnany, a former organizer with the Service Employees International Union and leader of the “Fight for 15” effort. Kinema will lead the organizing on the West Coast, while McEnany will focus on the East Coast. Organizers from CWA locals across the country will populate the teams. According to Kinema, the issues on which the union will focus are: “crunch,” or long hours for weeks or months to meet launch deadlines; cyclical layoffs; harassment; misogyny; gender-based pay discrimination; values and ethical issues, such as working with Immigration and Customs Enforcement (ICE); climate change; AI ethics; and pay, severance, and benefits. According to Tom Smith, CWA’s lead organizer, “For a lot of folks, that’s what led them to do this work in the first place, and people are feeling a disconnect between their personal values and what they’re seeing every day in the working lives.”

With the moniker CODE – Campaign to Organize Digital Employees – the ambitious initiative seeks to organize employees across the industry, typically at individual shops or employers. According to Kinema, “We believe workers are strongest when they’re together in one shop in one union, so the disciplines can’t be pitted against each other – none of that’s good for the workers. I think in games and tech, the wall-to-wall industrial model is the best fit.” Smith said the CWA would be open to craft-based organizing – where the focus is industry-wide bargaining units composed of employees performing similar work at different employers – if that is what employees want. In an industry where workers frequently move from employer to employer, portable benefits can be attractive.

An annual survey by the International Game Developers Association, an industry group, found that gaming worker interest in unions had increased to 47 percent by 2019. Indeed, a representation petition is pending at the Brooklyn office of the National Labor Relations Board on behalf of the employees at a gaming company. About 220,000 employees work in the two-billion-dollar gaming industry.

The union has established a website – www.code-cwa.org – as well as a presence on social media platforms such as Facebook and Twitter.

As most union organizing is based on the presence in the workplace of unresolved employee issues, a comprehensive analysis of such matters may be valuable to employers. Also, supervisors and managers often interact frequently with employees when organizing is afoot or underway. Training regarding their rights and responsibilities under the labor laws often is essential.


Jackson Lewis P.C. © 2020

For more on unionizing news, see the National Law Review Labor & Employment law page.

Reflections on 2019 in Technology Law, and a Peek into 2020

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020.

Data: The Issues of the Year

Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention.

  • Clearly, privacy and data security-related issues were dominant in 2019. The GDPR, CCPA and other privacy regulations garnered much consideration and resources, and with GDPR enforcement ongoing and CCPA enforcement right around the corner, the coming year will be an important one to watch. As data generation and collection technologies continued to evolve, privacy issues evolved as well.  In 2019, we saw many novel issues involving mobile, biometric, and connected car data.  Facial recognition technology generated a fair amount of litigation, and presented concerns regarding the possibility of intrusive governmental surveillance (prompting some municipalities, such as San Francisco, to ban its use by government agencies).

  • Because data has proven to be so valuable, innovators continue to develop new and sometimes controversial technological approaches to collecting data. The legal issues abound.  For example, in the past year, we have been advising on the implications of an ongoing dispute between the City Attorney of Los Angeles and an app operator over geolocation data collection, as well as a settlement between the FTC and a personal email management service over access to “e-receipt” data.  We have entertained multiple questions from clients about the unsettled legal terrain surrounding web scraping and have been closely following developments in this area, including the blockbuster hiQ Ninth Circuit ruling from earlier this year. As usual, the pace of technological innovation has outpaced the ability for the law to keep up.

  • Data security is now regularly a boardroom and courtroom issue, with data breaches, phishing, ransomware attacks and identity theft (and cyberinsurance) the norm. Meanwhile, consumers are experiencing deeper and deeper “breach fatigue” with every breach notice they receive. While the U.S. government has not yet been able to put into place general national data security legislation, states and certain regulators are acting to compel data collectors to take reasonable measures to protect consumer information (e.g., New York’s newly-enacted SHIELD Act) and IoT device manufacturers to equip connected devices with certain security features appropriate to the nature and function of the devices (e.g., California’s IoT security law, which becomes effective January 1, 2020). Class actions over data breaches and security lapses are filed regularly, with mixed results.

  • Many organizations have focused on the opportunistic issues associated with new and emerging sources of data. They seek to use “big data” – either sourced externally or generated internally – to advance their operations.  They are focused on understanding the sources of the data and their lawful rights to use such data.  They are examining new revenue opportunities offered by the data, including the expansion of existing lines, the identification of customer trends or the creation of new businesses (including licensing anonymized data to others).

  • Moreover, data was a key asset in many corporate transactions in 2019. Across the board in M&A, private equity, capital markets, finance and some real estate transactions, data was the subject of key deal points, sometimes intensive diligence, and often difficult negotiations. Consumer data has even become a national security issue, as the Committee on Foreign Investment in the United States (CFIUS), expanded under a 2018 law, began to scrutinize more and more technology deals involving foreign investment, including those involving sensitive personal data.

I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in “big data,” privacy and data security.

Social Media under Fire

Social media platforms experienced an interesting year. The power of the medium came into even clearer focus, and not necessarily in the most flattering light. In addition to privacy issues, fake news, hate speech, bullying, political interference, revenge porn, defamation and other problems came to light. Executives of the major platforms have been on the hot seat in Washington, and there is clearly bipartisan unease with the influence of social media in our society.  Many believe that the status quo cannot continue. Social media platforms are working to build self-regulatory systems to address these thorny issues, but the work continues.  Still, amidst the bluster and criticism, it remains to be seen whether the calls to “break up” the big tech companies will come to pass or whether Congress’s ongoing debate of comprehensive data privacy reform will lead to legislation that would alter the basic practices of the major technology platforms (and, in turn, much of the data collection and sharing done by today’s businesses).  We have been working with clients, advising them of their rights and obligations as platforms, as contributors to platforms, and in a number of other ways in which they may have a connection to such platforms or the content or advertising appearing on such platforms.

What does 2020 hold? Will Washington’s withering criticism of the tech world translate into any tangible legislation or regulatory efforts?  Will Section 230 of the Communications Decency Act – the law that underpins user generated content on social media and generally the availability of user generated content on the internet and apps – be curtailed? Will platforms be asked to accept more responsibility for third party content appearing on their services?

While these issues are playing out in the context of the largest social media platforms, any legislative solutions to these problems could in fact extend to others that do not have the same level of compliance resources available. Unless a legislative solution includes some type of “size of person” test or room to adapt technical safeguards to the nature and scope of a business’s activities or sensitivity of the personal information collected, smaller providers could be shouldered with a difficult and potentially expensive compliance burden. Thus, it remains to be seen how the focus on social media and any attempt to solve the issues it presents may affect online communications more generally.

Quantum Leaps

Following the momentum of the passage of the National Quantum Initiative at the close of 2018, a significant level of resources has been invested into quantum computing in 2019.  This bubble of activity culminated in Google announcing a major milestone in quantum computing.  Interestingly, IBM suggests that it wasn’t quite as significant as Google claimed.  In any case, the development of quantum computing in the U.S. has progressed a great deal in 2019, and many organizations will continue to focus on it as a priority in 2020.

  • Reports state that China has dedicated billions to build a Chinese national laboratory for quantum computing, among other related R&D projects, a development that has gotten the attention of Congress and the Pentagon. This may be the beginning of the 21st century’s great technological race.

  • What is at stake? The implications are huge. It is expected that ultimately, quantum computers will be able to solve complex computations exponentially faster – as much as 100 million times faster — than classic computers. The opportunities this could present are staggering.  As are the risks and dangers.  For example, for all its benefits, the same technology could quickly crack the digital security that protects online banking and shopping and secure online communications.

  • Many organizations are concerned about the advent of quantum computing. But given that it will be a reality in the future, what should you be thinking about now? While not a real threat for 2020 or the near term thereafter, it would be wise to think about it if one is anticipating investing in long-term infrastructure solutions. Will quantum computing render the investment obsolete? Or will quantum computing present a security threat to that infrastructure?  It is not too early to think about these issues, and for example, technologists have been hard at work developing quantum-proof blockchain protocols. It would at least be prudent to understand the long-term roadmap of technology suppliers to see if they have even thought about quantum computing, and if so, how they expect quantum computing to impact their solutions and services.

Artificial Intelligence

We have seen a significant level of deployment in the Artificial Intelligence/Machine Learning landscape this past year.  According to the Artificial Intelligence Index Report 2019, AI adoption by organizations (of at least one function or business unit) is increasing globally. Many businesses across many industries are deploying some level of AI into their businesses.  However, the same report notes that many companies employing AI solutions might not be taking steps to mitigate the risks from AI, beyond cybersecurity. We have advised clients on those risks, and in certain cases have been able to apportion exposure amongst multiple parties involved in the implementation.  In addition, we have also seen the beginning of regulation in AI, such as California’s chatbot law, New York’s recent passage of a law (S.2302) prohibiting consumer reporting agencies and lenders from using the credit scores of people in a consumer’s social network to determine that individual’s credit worthiness, or the efforts of a number of regulators to regulate the use of AI in hiring decisions.

We expect 2020 to be a year of increased adoption of AI, coupled with an increasing sense of apprehension about the technology. There is a growing concern that AI and related technologies will continue to be “weaponized” in the coming year, as the public and the government express concern over “deepfakes” (including the use of voice deepfakes of CEOs to commit fraud).  And, of course, the warnings of people like Elon Musk and Bill Gates, as they discuss AI, cannot be ignored.

Blockchain

We have been very busy in 2019 helping clients learn about blockchain technologies, including issues related to smart contracts and cryptocurrency. 2019 was largely characterized by pilots, trials, tests and other limited applications of blockchain in enterprise and infrastructure settings, as well as a significant level of activity in tokenization of assets, cryptocurrency investments, and the building of businesses related to the trading and custody of digital assets. Our blog, www.blockchainandthelaw.io, keeps readers abreast of key new developments, and we hope our readers have found our published articles on blockchain and smart contracts helpful.

Looking ahead to 2020, regulators such as the SEC, FinCEN, IRS and CFTC are still watching the cryptocurrency space closely. Gone are the days of ill-fated “initial coin offerings,” and today security token offerings, made in compliance with the securities laws, are increasingly common. Regulators are beginning to be more receptive to cryptocurrency, as exemplified by the New York State Department of Financial Services’ revisiting of the oft-maligned “bitlicense” requirement in New York.

Beyond virtual currency, I believe some of the most exciting developments of blockchain solutions in 2020 will be in supply chain management and other infrastructure uses of blockchain. 2019 was characterized by experimentation and trial. We have seen many successes and some slower starts. In 2020, we expect to see an increase in adoption. Of course, the challenge for businesses is to really understand whether blockchain is an appropriate solution for the particular need. Contrary to some of the hype out there, blockchain is not the right fit for every technology need, and there are many circumstances where a traditional client-server model is the preferred approach. For help in evaluating whether blockchain is in fact a potential fit for a technology need, this article may be helpful.

Other 2020 Developments

Interestingly, one of the companies that has served as a form of leading indicator in the adoption of emerging technologies is Walmart.  Walmart was one of the first major companies to embrace supply chain uses of blockchain, so what is Walmart looking at for 2020? A recent Wall Street Journal article discusses its interest and investment in 5G communications and edge computing. We too have been assisting clients in those areas, and expect them to be areas of significant activity in 2020.

Edge computing, which is related to “fog” computing and, in turn, to cloud computing, is, simply put, the idea of storing and processing information at the point of capture, rather than communicating that information to the cloud or a central data processing location for storage and processing. According to the WSJ article, Walmart plans on building edge computing capability for other businesses to hire (following to some degree Amazon’s model for AWS).  The article also talks about Walmart’s interest in 5G technology, which would work hand-in-hand with its edge computing network.
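To make the edge computing concept concrete, here is a minimal, hypothetical sketch in Python of the difference between shipping every raw sensor reading to the cloud and processing readings at the point of capture, transmitting only a small summary. The device name, threshold, and readings are invented for illustration and are not drawn from the WSJ article or Walmart’s plans.

```python
# Minimal, hypothetical sketch of edge vs. cloud processing.
# Device names, thresholds, and readings are illustrative assumptions.

from statistics import mean

# Pretend these are raw readings captured by an in-store sensor.
raw_readings = [21.7, 21.9, 35.2, 22.0, 21.8]  # e.g., cooler temperatures in deg C

def cloud_payload(readings):
    """'Cloud' approach: every raw reading is transmitted upstream for central processing."""
    return {"device_id": "cooler-17", "readings": readings}

def edge_payload(readings, alert_threshold=30.0):
    """'Edge' approach: process locally, transmit only a compact summary and any alert."""
    return {
        "device_id": "cooler-17",
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "alert": max(readings) > alert_threshold,  # decision made at the point of capture
    }

if __name__ == "__main__":
    print("cloud payload:", cloud_payload(raw_readings))  # large, raw
    print("edge payload: ", edge_payload(raw_readings))   # small, already processed
```

The design point is simply that the edge device sends far less data and can act on it immediately, which is what makes the model attractive when paired with 5G and large fleets of connected devices.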

Our experience with clients suggests that Walmart may be onto something.  Edge and fog computing, 5G and the growth of the “Internet of Things” are converging and will offer the ability for businesses to be faster, cheaper and more profitable. Of course, this convergence also will tie back to the issues we discussed earlier, such as data, privacy and data security, artificial intelligence and machine learning. In general, this convergence will further increase the technical ability to process and use data (which would conceivably require regulation that would feature privacy and data security protections that are consumer-friendly, yet balanced so they do not stifle the economic and technological benefits of 5G).

This past year has presented a host of fascinating technology-based legal issues, and 2020 promises to hold more of the same.  We will continue to keep you posted!

We hope you had a good 2019, and we want to wish all of our readers a very happy and safe holiday season and a great New Year!


© 2019 Proskauer Rose LLP.

For more in technology developments, see the National Law Review Intellectual Property or Communications, Media & Internet law sections.

CCPA Notice of Collection – Are You Collecting Geolocation Data, But Do Not Know It?

Businesses subject to the California Consumer Privacy Act (“CCPA”) are working diligently to comply with the CCPA’s numerous mandates, although final regulatory guidance has yet to be issued. Many of these businesses are learning that AB25, passed in October, requires employees, applicants, and certain other California residents to be provided a notice of collection at least for the next 12 months. These businesses need to think about what must be included in these notices.

A Business Insider article explains that iPhones maintain a detailed list of every location the user of the phone frequents, including how long it took to get to that location, and how long the user stayed there. The article provides helpful information about where that information is stored on the phone, how the data can be deleted, and, perhaps more importantly, how to stop the tracking of that information. This information may be important for users, as well as for companies that provide iPhones to their employees to use in connection with their work.

AB25 excepted natural persons acting as job applicants, employees, owners, directors, officers, medical staff members, and contractors of a CCPA-covered business from all of the CCPA protections except two: (i) providing them a notice of collection under Cal. Civ. Code Sec. 1798.100(b), and (ii) the right to bring a private civil action against a business in the event of a data breach caused by the business’s failure to maintain reasonable safeguards to protect personal information. The notice of collection must inform these persons as to the categories of personal information collected by the business and how those categories are used.

The CCPA’s definition of personal information includes eleven categories of personal information, one of which is geolocation data. As many businesses think about the categories of personal information they collect from employees, applicants, etc. for this purpose, geolocation may be the last thing that comes to mind. This is especially true for businesses with workforces that come into the office every day and that, unlike transportation, logistics, and home health care businesses, have no business need to know where their employees are. But they still may provide their workforce members a company-owned iPhone or other smart device with similar capabilities, without realizing all of its capabilities or configurations.

As many who have gone through compliance with the European Union’s General Data Protection Regulation know, the CCPA and other laws that may come after it in the U.S. will require businesses to think more carefully about the personal information they collect. They likely will find such information is being collected without their knowledge and not at their express direction, and they may have to communicate that collection (and use) to their employees.


Jackson Lewis P.C. © 2019

AI and Evidence: Let’s Start to Worry

When researchers at the University of Washington pulled together a clip of a faked speech by President Obama using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at the University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like they are alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should expect to see examples of deepfakes offered on television news soon as additional support of the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays in court. Clearly, we have rules to authenticate evidence.  New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who was wrong, fooled or simply lying about the provenance of an AI generated video, the false digital file can be easily introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules in the coming days of cheap and easy deepfakes being present everywhere. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence was a deepfake, and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the simple existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make and this is clearly one of them.” While real direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true.  Ask John DeLorean.

So I now place a call to the legal technology community.  As the software to make deepfakes continues to improve, please help us develop parallel technology to be able to identify them. Lawyers and litigants need to be able to authenticate genuine video evidence and to strike deepfaked video as such.  I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies are already mastered and being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more artificial intelligence, see the National Law Review Communications, Media & Internet law page.

CMS’s Request for Information Provides Additional Signal That AI Will Revolutionize Healthcare

On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations.  CMS’ objectives to leverage AI chiefly include identifying and preventing fraud, waste, and abuse.  The RFI specifically states CMS’ aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.”  The RFI follows last month’s White House Summit on Artificial Intelligence in Government, where over 175 government leaders and industry experts gathered to discuss how the Federal government can adopt AI “to achieve its mission and improve services to the American people.”

Advances in AI technologies have made the possibility of automated fraud detection at exponentially greater speed and scale a reality. A 2018 study by consulting firm McKinsey & Company estimated that machine learning could help US health insurance companies reduce fraud, waste, and abuse by $20-30 billion.  Indeed, in 2018 alone, improper payments accounted for roughly $31 billion of Medicare’s net costs. CMS is now looking to AI to prevent improper payments, rather than the current “pay and chase” approach to detection.

CMS currently relies on its records system to detect fraud, and humans remain the predominant detectors of fraud in the CMS system. This has resulted in inefficient detection capabilities, and these traditional fraud detection approaches have been decreasingly successful in light of the changing health care landscape.  This problem is particularly prevalent as CMS transitions to value-based payment arrangements.  In a recent blog post, CMS Administrator Seema Verma revealed that reliance on humans to detect fraud resulted in reviews of less than one percent of medical records associated with items and services billed to Medicare.  This lack of scale and speed arguably allows many improper payments to go undetected.

Fortunately, AI manufacturers and developers have been leveraging AI to detect fraud for some time in various industries. For example, the financial and insurance industries already leverage AI to detect fraudulent patterns. However, leveraging AI technology involves more than simply obtaining the technology. Before AI can be used for fraud detection, the time-consuming process of amassing large quantities of high quality, interoperable data must occur. Further, AI algorithms need to be optimized through iterative human quality reviews. Finally, testing the accuracy of the trained AI is crucial before it can be relied upon in a production system.
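As a rough, purely illustrative sketch of that lifecycle – not CMS’s system or any vendor’s actual pipeline – the Python example below walks through the three steps described above on synthetic claims data: assembling labeled examples, fitting a model, and measuring accuracy on held-out data before any production use. Every field name and number is an assumption made for illustration.

```python
# Hypothetical sketch of a claims fraud classifier lifecycle: data -> train -> evaluate.
# Features, labels, and numbers are synthetic; this is not CMS's or any vendor's model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Step 1: amass labeled, interoperable data (here: fabricated claim amounts,
# procedures billed per claim, and days between service and claim submission).
n = 5_000
X = np.column_stack([
    rng.gamma(2.0, 400.0, n),   # claim amount in dollars
    rng.integers(1, 15, n),     # procedures per claim
    rng.integers(0, 120, n),    # days from service to submission
])
# Synthetic "ground truth": flag unusually large, procedure-heavy claims as improper.
y = ((X[:, 0] > 2_000) & (X[:, 1] > 10)).astype(int)

# Step 2: fit the model; in practice this step is iterated with human quality review.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Step 3: test accuracy on held-out data before relying on the model in production.
print(classification_report(y_test, model.predict(X_test)))
```

The point of the sketch is the order of operations: data quality and labeling come first, and no output is trusted until the model’s accuracy has been measured against examples it has never seen.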

In the RFI, CMS poses many questions to AI vendors, healthcare providers and suppliers that likely would be addressed by regulation.  Before the Federal government relies on AI to detect fraud, CMS must gain assurances that AI technologies will not return inaccurate or incorrect outputs that could negatively impact providers and patients. One key question raised involves how to assess the effectiveness of AI technology and how to measure and maintain its accuracy. The answer to this question should factor heavily into the risk calculation of CMS using AI in its fraud detection activities. Interestingly, companies seeking to automate revenue cycle management processes using AI have to grapple with the same concerns.  Without adequate compliance mechanisms in place around the development, implementation and use of AI tools for these purposes, companies could be subject to a high risk of legal liability under the federal False Claims Act or similar fraud and abuse laws and regulations.

In addition to fraud detection, the RFI seeks advice as to whether new technology could help CMS identify “potentially problematic affiliations” in terms of business ownership and registration.  Similarly, CMS is interested in gaining feedback on whether AI and machine learning could speed up the current expensive and time-consuming Medicare claim review processes and Medicare Advantage audits.

It is likely that this RFI is one of many signals that AI will revolutionize how healthcare is covered and paid for moving forward.  We encourage you to weigh in on this on-going debate to help shape this new world.

Comments are due to CMS by November 20, 2019.


©2019 Epstein Becker & Green, P.C. All rights reserved.

For more CMS activities, see the National Law Review Health Law & Managed Care page.

LinkedIn Petitions Circuit Court for En Banc Review of hiQ Scraping Decision

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of Computer Fraud and Abuse Act (CFAA) liability for unwanted web scraping of publicly available social media profile data, and whether, once hiQ Labs, Inc. (“hiQ”), a data analytics firm, received LinkedIn’s cease-and-desist letter demanding that it stop scraping public profiles, any further scraping of such data was “without authorization” within the meaning of the CFAA. The appeals court affirmed the lower court’s order granting a preliminary injunction barring LinkedIn from blocking hiQ from accessing and scraping publicly available LinkedIn member profiles to create competing business analytic products. Most notably, the Ninth Circuit held that hiQ had shown a likelihood of success on the merits in its claim that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access “without authorization” under the CFAA.
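For readers less familiar with the technology at the center of the case, automated scraping of a publicly available page is, mechanically, a scripted HTTP request followed by parsing of the returned HTML. The Python sketch below is a generic, hypothetical illustration – the URL and the CSS selector are placeholders, not anything hiQ or LinkedIn actually uses – and any real-world scraping raises exactly the authorization, terms-of-service, and privacy questions this litigation addresses.

```python
# Generic illustration of scraping a publicly viewable web page.
# The URL and selector are placeholders; real scraping raises the legal issues discussed here.

import requests
from bs4 import BeautifulSoup

def scrape_public_page(url: str) -> list[str]:
    """Fetch a public page and extract text from (hypothetical) 'headline' elements."""
    response = requests.get(url, headers={"User-Agent": "example-research-bot/0.1"}, timeout=10)
    response.raise_for_status()  # surface HTTP errors instead of parsing an error page
    soup = BeautifulSoup(response.text, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select(".headline")]

if __name__ == "__main__":
    print(scrape_public_page("https://example.com/public-profile"))
```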

In its petition for en banc rehearing, LinkedIn advanced several arguments, including:

  • The hiQ decision conflicts with the Ninth Circuit’s Power Ventures precedent, where the appeals court held that a commercial entity that accesses a website after permission has been explicitly revoked can, under certain circumstances, be civilly liable under the CFAA. Power Ventures involved password-protected Facebook user data (which users had initially given a data aggregator permission to access). LinkedIn argued that the hiQ court’s logic in distinguishing Power Ventures was flawed and that the manner in which a user classifies his or her profile data should have no bearing on a website owner’s right to protect its physical servers from trespass.

“Power Ventures thus holds that computer owners can deny authorization to access their physical servers within the meaning of the CFAA, even when users have authorized access to data stored on the owner’s servers. […] Nothing about a data owner’s decision to place her data on a website changes LinkedIn’s independent right to regulate who can access its website servers.”

  • The language of the CFAA should not be read to allow for “authorization” to be assumed (and unable to be revoked) for publicly available website data, either under Ninth Circuit precedent or under the CFAA-related case law of other circuits.

“Nothing in the CFAA’s text or the definition of ‘authorization’ that the panel employed – ‘[o]fficial permission to do something; sanction or warrant’ – suggests that enabling websites to be publicly viewable is not ‘authorization’ that can be revoked.”

  • The privacy interests enunciated by LinkedIn on behalf of its users are “of exceptional importance,” and the court discounted the fact that hiQ is “unaccountable” and has no contractual relationship with LinkedIn users, such that hiQ could conceivably share the scraped data or aggregate it with other data.

“Instead of recognizing that LinkedIn members share their information on LinkedIn with the expectation that it will be viewed by a particular audience (human beings) in a particular way (by visiting their pages)—and that it will be subject to LinkedIn’s sophisticated technical measures designed to block automated requests—the panel assumed that LinkedIn members expect that their data will be ‘accessed by others, including for commercial purposes,’ even purposes antithetical to their privacy setting selections. That conclusion is fundamentally wrong.”

Both website operators and open internet advocates will be watching closely to see if the full Ninth Circuit decides to rehear the appeal, given the importance of the CFAA issue and the prevalence of data scraping of publicly available website content. We will keep a close watch on developments.


© 2019 Proskauer Rose LLP.

Second Circuit Confirms Arbitration Awards That Are (Literally) Out of This World

Arbitration over whether a South Korean company or a Bermuda company headquartered in Hong Kong owns a geostationary satellite in light of an order from a South Korean regulatory agency can be complicated. The Second Circuit recently affirmed a decision confirming an arbitration award that adjudicated ownership of the satellite in question and awarded damages related to a party’s failure to obtain the regulatory approvals necessary to complete the sale, rejecting claims that the arbitration panel exceeded its power, disregarded the law, and violated public policy.

KT Corp., a Korean company, agreed to sell a satellite to ABS Holdings Ltd., a Bermuda company headquartered in Hong Kong. The companies signed a purchase agreement to convey the title to the satellite and an operations agreement under which KT agreed to operate the satellite on behalf of ABS. Both agreements contained New York choice-of-law provisions and mandatory arbitration clauses. The purchase agreement required KT to obtain and maintain all necessary licenses and authorizations for the sale and the continued operation of the satellite.

The sale was completed and title to the satellite was transferred.

Nearly two years later, a South Korean regulatory agency issued an order declaring the purchase agreement null and void because KT had failed to obtain a required export permit. The agency canceled KT’s permission to use certain frequencies to operate the satellite.

KT and ABS arbitrated who held title to the satellite and whether KT had violated the purchase agreement before a panel of the International Chamber of Commerce. In two awards, the panel concluded that ABS held title to the satellite because title had lawfully passed when the conditions precedent to the purchase agreement were completed, as there was no requirement that KT obtain an export permit. And even if that was not the case, the panel concluded, the regulatory order had no effect because it was issued retroactively without notice to the parties in violation of New York law, and KT breached its obligations by failing to obtain all the approvals necessary for the continued operation of the satellite (even though an export permit may not have been required for the sale of the satellite, one was necessary to maintain the satellite’s operations).

KT petitioned the Southern District of New York to vacate the award, and ABS petitioned the court to confirm it. The district court granted ABS’ petition and confirmed the panel’s award.

The Second Circuit affirmed. KT argued that the panel had exceeded its authority and that the award disregarded the law and violated public policy. KT claimed that the panel’s conclusion that the regulatory order was without effect violated due process principles. The court disagreed, noting that KT had not challenged the order, its counsel had questioned its validity, and the panel did not rest on the validity of the order; the panel referenced the propriety of the order as an alternate basis for its primary conclusion that title to the satellite properly changed hands. The court also rejected KT’s argument that the panel had disregarded New York contract law. Regarding public policy, although the court recognized that it is the public policy of the United States to enforce foreign judgments that are not repugnant to U.S. policy, it was unclear whether that public policy extended to foreign regulatory orders, and it was not even clear that the regulatory order in this case was enforceable under South Korean law according to KT’s expert.

KT Corp. v. ABS Holdings, Ltd., No. 18-2300 (2d Cir. Sept. 12, 2019).


©2011-2019 Carlton Fields, P.A.

For more arbitration decisions, see the National Law Review ADR / Arbitration / Mediation page.

Ubers of the Future will Monitor Your Vital Signs

Uber has announced that it is considering developing self-driving cars that monitor passengers’ vital signs and ask the passengers how they feel during the ride, in order to provide a stress-free and satisfying trip. This concept was outlined in a patent filed by the company in July 2019. Uber envisions passengers connecting their own health-monitoring devices (e.g., smart watches, activity trackers, heart monitors, etc.) to the vehicle to measure the passenger’s reactions. The vehicle would then synthesize that information, along with other measurements taken by the car itself (e.g., thermometers, vehicle speed sensors, driving logs, infrared cameras, microphones, etc.). This type of biometric monitoring could potentially allow the vehicle to assess whether it might be going too fast, getting too close to another vehicle on the road, or applying the brakes too hard.  The goal is to use artificial intelligence to create a more ‘satisfying’ experience for the riders in the autonomous vehicle.
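As a purely illustrative sketch of the kind of data fusion the filing contemplates – the signals, thresholds, and adjustment rule below are invented and are not drawn from Uber’s patent – a vehicle might combine a wearable’s heart-rate reading with its own speed and braking telemetry to decide whether to smooth out its driving.

```python
# Hypothetical sketch of fusing passenger biometrics with vehicle telemetry.
# Signals, thresholds, and the adjustment rule are invented for illustration only.

from dataclasses import dataclass

@dataclass
class RideSnapshot:
    heart_rate_bpm: float   # from the passenger's connected wearable
    speed_kph: float        # from the vehicle's speed sensor
    brake_g_force: float    # from the vehicle's accelerometer

def comfort_adjustment(snapshot: RideSnapshot, resting_hr: float = 70.0) -> str:
    """Return a (made-up) driving adjustment based on signs of passenger stress."""
    stressed = snapshot.heart_rate_bpm > resting_hr * 1.3
    if stressed and snapshot.speed_kph > 100:
        return "reduce speed"
    if stressed and snapshot.brake_g_force > 0.4:
        return "brake more gradually"
    return "no change"

if __name__ == "__main__":
    print(comfort_adjustment(RideSnapshot(heart_rate_bpm=105, speed_kph=115, brake_g_force=0.2)))
    print(comfort_adjustment(RideSnapshot(heart_rate_bpm=72, speed_kph=60, brake_g_force=0.1)))
```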

This proposed technology presents yet another way that ride-sharing companies such as Uber can collect more data from their passengers. Of course, passengers would have the choice about whether to use this feature, but this is another consideration for passengers in this data-driven industry.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more about self-driving cars, see the National Law Review Communications, Media & Internet law page.