To AI or Not to AI: U.S. Copyright Office Clarifies Options

The U.S. Copyright Office has weighed in with formal guidance on the copyrightability of works whose generation included the use of artificial intelligence (AI) tools. The good news for technology-oriented human creative types: using AI doesn’t automatically disqualify your work from copyright protection. The bad news for independent-minded AIs: you still don’t qualify for copyright protection in the United States.

On March 16, 2023, the Copyright Office issued a statement of policy (“Policy”) to clarify its practices for examining and registering works that contain AI-generated material, and to explain how copyright law’s human authorship requirement will be applied when AI has been used. This Policy is not itself legally binding or a guarantee of a particular outcome, but many copyright applicants may breathe a sigh of relief that the Copyright Office has formally embraced AI-assisted human creativity.

The Policy is just the latest step in an ongoing debate over the copyrightability of machine-assisted products of human creativity. Nearly 150 years ago, the Supreme Court ruled that photographs are copyrightable. See Burrow-Giles Lithographic Company v. Sarony, 111 U.S. 53 (1884). The case involved a photographer’s claim against a lithographer for making 85,000 unauthorized copies of a photograph of Oscar Wilde. The photo, Sarony’s “Oscar Wilde No. 18,” is shown below:

Sarony’s “Oscar Wilde No. 18”

The argument against copyright protection was that a photograph is “a reproduction, on paper, of the exact features of some natural object or of some person” and is therefore not a product of human creativity. Id. at 56. The Supreme Court disagreed, ruling that there was sufficient human creativity involved in making the photo, including posing the subject, evoking the desired expression, arranging the clothing and setting, and managing the lighting.

In the mid-1960s, the Copyright Office rejected a musical composition, Push Button Bertha, that was created by a computer, reasoning that it lacked the “traditional elements of authorship,” as those elements were not created by a human.

In 2018, the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a crested macaque (represented by a group of friendly humans), lacked standing under the Copyright Act to hold a copyright in the “monkey selfie” case. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The “monkey selfie” is below:

Monkey Selfie

In February 2022, the Copyright Office rejected a registration (filed by interested humans) for a visual image titled “A Recent Entrance to Paradise,” generated by Stephen Thaler’s “Creativity Machine” AI. (Thaler is also behind DABUS, the AI whose claimed fractal-based inventions are the subject of patent applications around the world.) The image is below:

“A Recent Entrance to Paradise”

Litigation over this rejected application remains pending.

And last month, the Copyright Office ruled that a graphic novel consisting of human-authored text and images generated using the AI tool Midjourney could, as a whole, be copyrighted, but that the images, standing alone, could not. See U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023).

The Policy was necessitated by the rapid and remarkable improvements in generative AI tools over even the past several months. In December 2022, the generative AI tool DALL-E generated the following images in response to nothing more than the prompt, “portrait of a musician with a hat in the style of Rembrandt”:

Four portraits generated by the AI tool DALL-E from the prompt, "portrait of a musician with a hat in the style of Rembrandt."

If these were human-created paintings, or even photographs, there would be no doubt that they were copyrightable. But given that all four images were generated in mere seconds, with a single, general prompt from a human user, do they meet the Copyright Office’s criteria for copyrightability? Under the Policy, the answer is now a clear “no.”

However, the Policy opens the door to registering AI-assisted human creativity. The pivotal questions will be:

“…whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” 

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” 

The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This will necessarily be a case-by-case inquiry.

See Policy (citations omitted).

Machine-produced authorship alone will remain unregistrable in the United States, but under the Policy, human selection and arrangement of AI-produced content could lead to a different result. The Policy provides select examples to help guide registrants, who are encouraged to study them carefully. The Policy, along with the Copyright Office’s upcoming registration determinations, will be critical to watch as the Office continues to assess the impact of new technology on the creative process. Not all AI tools should be viewed as the “same” or fungible: the Copyright Office will specifically consider the type of AI involved and how it was used.

In the short term, the Policy provides practical guidance to applicants on how to describe the role of AI in a new copyright application, as well as how to amend a prior application in that regard if needed. While some may view the Policy as breaking “new” ground for the Copyright Office, it is consistent with the Office’s long-standing efforts to protect the fruits of human creativity even if the backdrop (AI technologies) is “new.”

As a closing note, it bears observing that copyright law in the United Kingdom does permit limited copyright protection for computer-generated works – and has done so since 1988. Even under the U.K. law, substantial questions remain; the author of a computer-generated work is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” See Copyright, Designs and Patents Act (1988) §§ 9(3), 12(7) and 178. In the case of images generated by a consumer’s interaction with a generative AI tool, would that be the consumer or the generative AI provider?

Copyright © 2023 Womble Bond Dickinson (US) LLP All Rights Reserved.

Lawyer Bot Short-Circuited by Class Action Alleging Unauthorized Practice of Law

Many of us are wondering how long it will take for ChatGPT, the revolutionary chatbot by OpenAI, to take our jobs. The answer: perhaps not as soon as we fear!

On March 3, 2023, Chicago law firm Edelson P.C. filed a complaint against DoNotPay, self-described as “the world’s first robot lawyer.” Edelson may have short-circuited the automated barrister by alleging the unauthorized practice of law.

DoNotPay is marketed as an AI program intended to assist users who need legal services but do not wish to hire a lawyer. The organization was founded in 2015 to help users dispute parking tickets. Since then, DoNotPay’s services have expanded significantly. The company’s website offers to help users fight corporations, overcome bureaucratic obstacles, locate cash and “sue anyone.”

In spite of those lofty promises, Edelson’s complaint points to certain deficiencies, stating that “[u]nfortunately for its customers, DoNotPay is not actually a robot, a lawyer, or a law firm. DoNotPay does not have a law degree, is not barred in any jurisdiction and is not supervised by any lawyer.”

The suit was brought by plaintiff Jonathan Faridian, who claims to have used DoNotPay for legal drafting projects, including demand letters, one small claims court filing and an employment discrimination complaint. Faridian’s complaint explains that he was under the impression he was purchasing legal documents from an attorney, only to discover later that the “substandard” results did not meet his expectations.

When asked for comment, DoNotPay’s representative denied Faridian’s allegations, explaining the organization intends to defend itself “vigorously.”

© 2023 Wilson Elser

With the US Copyright Office (USCO) continuing its stance that protection extends only to human authorship, what will this mean for artificial intelligence (AI)-generated works — and artists — in the future?

Almost overnight, the once-specialized field of machine learning and AI has become nearly as accessible as a search engine. Tools like Midjourney and OpenAI’s ChatGPT and DALL-E 2 allow users to input a prompt, and a bot will generate virtually whatever the user asks for. Microsoft recently announced its decision to make a multibillion-dollar investment in OpenAI, betting on the hottest technology in the industry to transform the internet as we know it.[1]

However, as this technology becomes more accessible, questions of authorship and copyright ownership are mounting as well. Multiple questions remain open, such as: who is the author of the work — the user, the bot, or the software that produces it? And where is this new generative technology pulling its information from?

AI and Contested Copyrights

As groundbreaking as these products are, there has been ample backlash regarding copyright infringement and artistic expression. The stock image company Getty Images is suing Stability AI, the artificial intelligence company behind the art tool Stable Diffusion. Getty Images alleges that Stability AI did not seek a license from Getty Images to train its system. The founder of Stability AI, however, argues that art makes up only 0.1% of the dataset and is only drawn upon when called by a user’s prompt. In contrast, Shutterstock, one of Getty Images’ largest competitors, has taken an alternative approach and instead partnered with OpenAI, with plans to compensate artists for their contributions.

Artists and image suppliers are not the only ones unhappy about the popularity of machine learning. Creators of open-source code have targeted Microsoft, its subsidiary GitHub, and OpenAI in a proposed class-action lawsuit. The lawsuit alleges that GitHub Copilot, an AI-powered coding assistant, relies on software piracy on an enormous scale. Further, the complaint claims that GitHub Copilot relies on copyrighted code with no attribution and no licenses. This could be the first class-action lawsuit challenging the training and output of AI systems. Whether artists, image companies, and open-source coders choose to embrace or fight the wave of machine learning, the question of authorship and ownership is still up for debate.

The USCO made clear last year that the Copyright Act applies only to human authorship; however, it has recently signaled that in 2023 the office will focus on the legal grey areas surrounding the copyrightability of works generated in conjunction with AI. The USCO has previously denied multiple applications to protect AI-authored works, stating that the “human authorship” element was lacking. Pointing to previous decisions, such as the 2018 ruling that a monkey taking a selfie could not sue for copyright infringement, the USCO reiterated that “non-human expression is ineligible for copyright protection.” While the agency stands by its conclusion that works cannot be registered if they are created exclusively by an AI, the office is considering the issue of copyright registration for works co-created by humans and AI.

Patent Complexities  

The US Patent and Trademark Office (USPTO) will also have to rethink fundamental patent policies with the rise of sophisticated AI systems. As the USPTO has yet to speak on the issue, experts are speculating about the routes the office could take: declaring AI inventions unpatentable, which could lead to disputes and undermine the incentive to innovate, or concluding that the use of AI should not render otherwise patentable inventions unpatentable, which would raise complex questions of inventorship. The latter route would require the USPTO to rethink its existing framework, which determines inventorship by asking who conceived the invention.

Takeaway

The degree of human involvement will likely determine whether an AI work can be protected by copyright, and potentially by patents. Before incorporating this type of machine learning into business practices, companies should carefully consider the extent of human input in the AI creation and whether the final work product will be protectable. For example:

  • An apparel company that uses generative AI to create a design for new fabric may not have a protectable copyright in the resulting fabric design.

  • An advertising agency that uses generative AI to develop advertising slogans and a pitch deck for a client may not be able to prevent the client from freely using the AI-created work product.

  • A game studio that uses generative AI to create scenes in a video game may not be able to prevent unlicensed distribution of those scenes.

  • A logo created for a business endeavor may not be protected unless there are substantial human alterations and input.

  • Code that is edited or created by AI may be freely copied and replicated.

Although the philosophical debate over what “makes” an artist is only beginning, 2023 may be a uniquely litigious year for defining the extent to which AI artwork is protectable under existing intellectual property laws.


FOOTNOTES

[1] https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html

Chamber of Commerce Challenges CFPB Anti-Bias Focus Concerning AI

At the end of last month, the U.S. Chamber of Commerce, the American Bankers Association and other industry groups (collectively, “Plaintiffs”) filed suit in Texas federal court challenging the Consumer Financial Protection Bureau’s (“CFPB”) update this year to the Unfair, Deceptive, or Abusive Acts or Practices section of its examination manual to include discrimination. Chamber of Commerce of the United States of America, et al. v. Consumer Financial Protection Bureau, et al., Case No. 6:22-cv-00381 (E.D. Tex.).

By way of background, the Consumer Financial Protection Act, which is Title X of the 2010 Dodd-Frank Act (the “Act”), prohibits providers of consumer financial products or services, and their service providers, from engaging in any unfair, deceptive or abusive act or practice (“UDAAP”). The Act also provides the CFPB with rulemaking and enforcement authority to “prevent unfair, deceptive, or abusive acts or practices in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.” See, e.g., https://files.consumerfinance.gov/f/documents/cfpb_unfair-deceptive-abusive-acts-practices-udaaps_procedures.pdf. In general, the Act provides that an act or practice is unfair when it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers, and the injury is not outweighed by countervailing benefits to consumers or to competition.

Earlier this spring, the CFPB published revised examination guidelines on unfair, deceptive, or abusive acts and practices, or UDAAPs. Importantly, the guidelines set forth a new position from the CFPB: that discrimination in the provision of consumer financial products and services can itself be a UDAAP. This development surprised many providers of financial products and services. The CFPB also released an updated exam manual that outlined its position regarding how discriminatory conduct may qualify as a UDAAP in consumer finance. In May 2022, the CFPB additionally published a Consumer Financial Protection Circular to remind the public of creditors’ adverse action notice requirements under the Equal Credit Opportunity Act (“ECOA”). In the view of the CFPB, creditors cannot use technologies (including algorithmic decision-making) if doing so means they are unable to provide the explanations required under the ECOA.

In July 2022, the Chamber and others called on the CFPB to rescind the update to the manual. They argued, among other points raised in a white paper supporting their position, that in conflating the concepts of “unfairness” and “discrimination,” the CFPB ignores the Act’s text, structure, and legislative history, which treat “unfairness” and “discrimination” as two separate concepts and define “unfairness” without mentioning discrimination.

The Complaint filed this fall raises three claims under the Administrative Procedure Act (“APA”) in relation to the updated manual, along with other claims. The Complaint contends that it is ultimately consumers who will suffer as a result of the CFPB’s new position, as “[t]hese amendments to the manual harm Plaintiffs’ members by imposing heavy compliance costs that are ultimately passed down to consumers in the form of higher prices and reduced access to products.”

The litigation process started by Plaintiffs in this case will be time-consuming (a response to the Complaint is not expected from Defendants until December). In the meantime, entities in the financial sector should be cognizant of the CFPB’s new approach and ensure that their compliance practices appropriately mitigate risk, including in relation to algorithmic decision-making and AI. As always, we will keep you up to date with the latest news on this litigation.


© Copyright 2022 Squire Patton Boggs (US) LLP

White House Office of Science and Technology Policy Releases “Blueprint for an AI Bill of Rights”

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems.

The Blueprint comprises five key principles:

  1. The first Principle seeks to protect individuals from unsafe or ineffective AI systems, and encourages consultation with diverse communities, stakeholders and experts in developing and deploying AI systems, as well as rigorous pre-deployment testing, risk identification and mitigation, and ongoing monitoring of AI systems.

  2. The second Principle seeks to establish safeguards against discriminatory outcomes stemming from the use of algorithmic decision-making, and encourages developers of AI systems to take proactive measures to protect individuals and communities from discrimination, including through equity assessments and algorithmic impact assessments in the design and deployment stages.

  3.  The third Principle advocates for building privacy protections into AI systems by default, and encourages AI systems to respect individuals’ decisions regarding the collection, use, access, transfer and deletion of personal information where possible (and where not possible, use default privacy by design safeguards).

  4. The fourth Principle emphasizes the importance of notice and transparency, and encourages developers of AI systems to provide a plain language description of how the system functions and the role of automation in the system, as well as when an algorithmic system is used to make a decision impacting an individual (including when the automated system is not the sole input determining the decision).

  5. The fifth Principle encourages the development of opt-out mechanisms that provide individuals with the option to access a human decisionmaker as an alternative to the use of an AI system.

In 2019, the European Commission published a similar set of automated systems governance principles, called the Ethics Guidelines for Trustworthy AI. The European Parliament currently is in the process of drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of the Commission’s Ethics Guidelines. The current draft of the EU Artificial Intelligence Act requires developers of high-risk AI systems to adhere to detailed guidelines on cybersecurity, accuracy, transparency, and data governance, and provides for a private right of action.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Protection for Voice Actors is Artificial in Today’s Artificial Intelligence World

As we all know, social media has taken the world by storm. Unsurprisingly, it has had an impact on trademark and copyright law, as well as the related right of publicity. A recent case involving an actor’s voice being used on the popular app TikTok is emblematic of the times. The actor, Bev Standing, sued TikTok for using her voice, simulated via artificial intelligence (AI) without her permission, to serve as “the female computer-generated voice of TikTok.” The case, which was settled last year, illustrates how the law is being adapted to protect artists’ rights in the face of exploitation through AI, as well as the limits of current law in protecting AI-created works.

Standing explained that she thinks of her voice “as a business,” and she is looking to protect her “product.” Apps like TikTok are taking these “products” and feeding them into an algorithm without the original speaker’s permission, thus impairing creative professionals’ ability to profit in an age of widespread use of the Internet and social media platforms.

Someone’s voice (and aspects of their persona such as their photo, image, or other likeness) can be protected by what’s called the “right of publicity.” That right prevents others from appropriating one’s persona, but only when the appropriation is for commercial purposes. In the TikTok case, there was commercial use, as TikTok was benefiting from the use of Standing’s voice to “narrate” its users’ videos (with some user videos apparently involving “foul and offensive language”). In her Complaint, Standing alleged that TikTok had violated her right of publicity by using her voice to create the AI voice used by TikTok, and relied upon two other claims: false designation of origin under the Lanham Act and copyright infringement, as well as related state law claims. The false designation of origin claim turned on whether Standing’s voice was so recognizable that another party’s misappropriation of it could confuse consumers as to whether Standing authorized the TikTok use. The copyright infringement claim was possible because Standing created the original voice files for a company that hired her to record Chinese language translations. TikTok subsequently acquired the files but failed to get a license from Standing to use them, as it was legally obligated to do because Standing was the original creator (and therefore copyright owner) of the voice files.

As with other historical technological innovations (one of the earliest being the printing press), the law often plays catch-up, but has proven surprisingly adaptable to new technology. Here, Standing was able to plead three legal theories (six if you count the state statutory and common law unfair competition claims), so it seems artists are well-protected by existing law, at least if they are alleging AI was used to copy their work or persona.

On the other hand, the case for protecting creative expression produced in whole or in part by AI is much more difficult. Some believe AI deserves its own form of copyright, since innovative technology has increasingly made its own music and sounds. Currently, protection for these works is limited, since only humans can be identified as authors for purposes of copyright. Ryan Abbott, a professor of law and health sciences at the University of Surrey in Britain, is attempting to bring a legal case against the U.S. Copyright Office to register a digital artwork made by a computer, with AI as its author. The fear, says Abbott, is that without rights over these works, innovation will be stifled — individuals will not have an incentive to create AI works if they cannot protect them from unauthorized exploitation.

2020 In Review: An AI Roundup

There has been much scrutiny of artificial intelligence tools this year. From NIST to the FTC to the EU Parliament, many bodies have issued recommendations and requirements for companies that want to use AI tools. Key concerns include being transparent about the use of the tools, ensuring accuracy, not discriminating against individuals when using AI technologies, and not using the technologies in situations where they may not give reliable results (i.e., for purposes for which they were not designed). Additional requirements for the use of these tools exist under the GDPR as well.

Legal counsel may feel uncomfortable with business teams that are moving forward in deploying AI tools. It is not likely, however, that lawyers will be able to slow down the inevitable and widespread use of AI. We anticipate more developments in this area into 2021.

Putting It Into Practice: Companies can use “privacy by design” principles to help them get a handle on business teams’ AI efforts. Taking time to fully understand the ways in which an AI tool will be used (both immediately and in any future phases of a project) can be critical to ensuring that regulator concerns and legal requirements are addressed.


Copyright © 2020, Sheppard Mullin Richter & Hampton LLP.
For more, visit the NLR Communications, Media & Internet section.

Emerging Technologies Update

Our present era is one characterized by rapid technological change, marked by an influx of advancements aimed at enhancing productivity, reducing labor costs, and providing companies with previously unforeseen efficiencies and insights. These emerging technologies—a broad collection of hardware and software that includes artificial intelligence (AI), autonomous vehicles (AVs), biotechnology, robotics, and unmanned aerial systems (drones)—are being incorporated into everyday operations by seemingly every industry and sector.

A number of emerging technologies are finding particular value in the energy, natural resources, and transportation spaces.  A brief survey of these sectors reveals that companies are incorporating emerging technologies in a number of novel ways, including:

  • Use of drones to detect leaks along pipelines and to survey the structural integrity of offshore rigs;
  • Integration of machine learning-empowered connected devices by electric, gas, and water utilities to better serve communities by identifying ways to manage resources more efficiently;
  • Application of predictive analytics for refinery/gas plant optimization to mitigate un-programmed plant shutdowns, improve yields, and enhance safety awareness;
  • Incorporation of machine learning and computer vision into AV systems which have the capability to significantly improve road safety, reduce traffic fatalities, and improve vehicle efficiency;
  • Adoption of machine learning and data analytics by oil and gas companies into planning processes for drilling by hydraulic fracturing; and
  • Utilization of autonomous delivery systems—including aerial and sidewalk drones—in an effort to significantly reduce the cost of deliveries and environmental impacts over the “last mile.”

While these and other technologies show great promise, they also create a host of new challenges for governments, companies, and individuals.  In particular, emerging technologies could usher in an era of massive disruption that dramatically alters and upsets traditional notions of consumer safety and privacy, national security, job security, and environmental quality.  Federal and state regulators and legislators are already starting to tackle the challenges arising from emerging technologies—with mixed results. These actions risk generating unintended consequences that could stifle innovation and/or forestall the incorporation of emerging technologies into various industry operations.

This inaugural VNF Emerging Technology Update is intended to identify recent executive and legislative branch developments in the emerging technology space that may impact the deployment of these technologies, which in turn could impact client operations. If you have a question about these or any other developments in the emerging technology space, please contact the authors of this alert.

Recent Emerging Tech Developments

DOT Announces New Measures to Facilitate Drone Deployment

On January 14, 2019, Secretary of Transportation Elaine Chao announced several significant regulatory developments that should—in time—provide drone companies and operators with more operational flexibility.

First, Secretary Chao announced that the Federal Aviation Administration (FAA) had unveiled a proposed rule entitled, “Operation of Small Unmanned Aircraft Systems over People.” Among other things, the proposed rule would allow a small drone to “pass[] over any part of any person who is not directly participating in the operation and who is not located under a covered structure or inside a stationary vehicle”—provided that the drone meets certain operational constraints related to drone weight, design, and risk of injury to people.  The proposed rule would also permit drones to operate at night provided that (i) the drone is equipped with an anti-collision light that is visible for at least three statute miles, and (ii) the operator has completed relevant knowledge training and testing.

While the proposed rule is a good first step in facilitating further innovation in small drone use cases, it is unlikely that the rule would have any immediate impact because it is contingent on the FAA implementing remote identification and tracking regulations, which the FAA is expected to promulgate in proposed form later this year.  Moreover, remote ID and tracking rules are necessary to stymie nefarious and nuisance operations that could target critical systems and infrastructure, including events similar to those that occurred at London’s Gatwick and Heathrow airports late in 2018 and early in 2019, and at Newark International Airport on January 22, 2019. Thus, while the proposed rule is a welcome step toward facilitating drone innovation, regulators still have a lot of work to do before companies (and consumers) realize the potential benefits of commercial drones.

In addition to the proposed rule, the FAA also announced an advance notice of proposed rulemaking (ANPR) seeking comments on the “Safe and Secure Operations of Small Unmanned Aircraft Systems.” The ANPR recognizes the potential national security threat that drones pose to critical infrastructure, and the FAA acknowledges that it is continually assessing the ability of the Part 107 regulations to address these concerns. In addition, the ANPR notes that the FAA is working to develop a process to allow certain fixed-site facility owners to petition the agency to prohibit or restrict drone operations in close proximity to, e.g., critical infrastructure sites. The ANPR further recognizes public safety and national security concerns arising from loss of control of a drone. The agency seeks comment on the need to promulgate regulations establishing design requirements (such as redundancy) for systems critical to flight safety.

It is important to note that the current government shutdown has impacted the publication of these regulatory actions in the Federal Register. Therefore, the FAA is not yet accepting public comment on these actions. The FAA has not indicated when it will publish these actions in the Federal Register, but simply says both will be published “at a later date.”

FCC Proposed Rule on Unlicensed Use of 6 GHz Band

On December 17, 2018, the Federal Communications Commission (FCC) published a proposed rule to expand unlicensed use of the 5.925-7.125 GHz band (6 GHz band). Specifically, the FCC would allow unlicensed access points to operate in the 5.925-6.425 GHz and 6.525-6.875 GHz sub-bands only on frequencies determined by an automated frequency coordination (AFC) system. For the 6.425-6.525 GHz and 6.875-7.125 GHz sub-bands, the FCC would not mandate an AFC system and would permit unlicensed access points to operate at lower transmit power.

The FCC’s press release on the proposed rule notes that “[u]nlicensed devices that employ Wi-Fi and other unlicensed standards have become indispensable for providing low-cost wireless connectivity in countless products used by American consumers.” The proposed rule represents one element of the FCC’s broader objective to facilitate and ensure that adequate spectrum exists to accommodate the proliferation of connected devices in the internet of things (IoT).

While the FCC asserted its commitment to “protecting the incumbent licensed services that operate in this spectrum,” the FCC’s proposed action does raise the possibility of conflict with electric, gas, and water utilities and other critical infrastructure systems, which have long relied on the 6 GHz band for their communications networks. Some worry that the FCC’s action could unleash a flood of new unlicensed users on the spectrum, which could create radio frequency interference that compromises both reliability and emergency response capabilities.

Comments on the proposed rule are due by February 15, 2019.

BIS Contemplating Export Controls for Certain Emerging Technologies

On November 19, 2018, the Bureau of Industry and Security (BIS)—an agency within the Department of Commerce—published an ANPR seeking public comment on criteria for identifying emerging technologies that are essential to U.S. national security. The BIS ANPR comes at a time of heightened scrutiny over global technology transfers. The past year alone has been dominated by headlines of (i) potential national security concerns related to the import of Chinese telecommunications technologies; (ii) potential supply chain attacks on U.S. technology manufacturers; and (iii) escalating trade tensions between the United States and China precipitated at least in part by U.S. objections over Chinese theft of intellectual property.

It is this third risk that BIS’s ANPR is attempting to redress. With the help of public comments received over the course of the comment period (which closed on January 10, 2019), BIS will evaluate potential national security risks that may arise from the export of emerging technologies. The agency has indicated that it will likely promulgate a proposed rule to amend the Commerce Control List (CCL) to include new Export Control Classification Numbers (ECCNs) for certain emerging technologies.

While there is certainly a need to address the economic, national security, and political implications of technology transfers—and the deleterious impacts of industrial espionage—some of the most prominent technology companies and technology industry advocacy groups argue that BIS’s action will do little to mitigate potential national security risks and may actually do more to harm U.S. emerging technology companies, because any prohibition on technology exports will apply only to companies operating within the United States. Sophisticated external actors will still be able to engage in industrial espionage, extracting potentially sensitive technologies outside of officially sanctioned processes and allowing certain emerging technologies to end up in jurisdictions beyond the United States or its allies, without U.S. companies being able to control the dissemination of those technologies.

Given the potential negative impacts of BIS’s contemplated regulatory action—as well as the fact that BIS issued the ANPR immediately before the year-end holiday season—many companies petitioned the agency for an extension of the original 30-day comment period. While BIS did extend the comment period by an additional three weeks, the compressed timeline undoubtedly prevented some companies and individuals from offering more detailed insights. Given the potential economic and security impacts of the ANPR, companies may wish to engage with the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (OMB) as an alternative or parallel strategy to ensure that the Administration is aware of and understands the potential implications for U.S. companies.

Senators Warner and Rubio Introduce Bill to Establish the Office of Critical Technologies and Security

On January 4, 2019, Senators Mark Warner (D-VA) and Marco Rubio (R-FL) introduced S.29, which would establish an “Office of Critical Technologies and Security” within the White House. Recognizing the threats of industrial espionage, forced technology transfers, and supply chain vulnerabilities, the bipartisan bill is intended to ensure that technology transfer decisions occur within a broader policy context—a “whole of government technology strategy”—that weighs relevant economic, geopolitical and national security concerns in a way different from the existing BIS regulatory process.

As of January 22, the Senate has taken no further action on the bill.

 

© 2019 Van Ness Feldman LLP
This post was written by R. Scott Nuzum and Eric C. Wagner of Van Ness Feldman LLP.