The Promises and Perils of 6G Technology
6G technology will usher in a wave of innovation, unleashing artificial intelligence, revolutionizing the health care and data-transmission sectors, and creating novel privacy issues. 6G offers transmission speeds potentially 100 times faster than 5G, near-zero latency, and connection density up to 10 million devices per square kilometer. These advances will create a network where almost every device can be simultaneously connected, enabling technologies not possible today.
6G is only in its infancy. Governments and private entities are just beginning to invest in the technology, and projections suggest commercial availability around 2030. But given 6G’s anticipated ubiquity and potential to change the landscape, we would be wise to begin learning about it now.
Artificial intelligence (“AI”) represents a new frontier in the global economy: Some estimates say it could contribute up to $15.7 trillion worldwide by 2030. 6G promises at least two major developments that will affect our current law: The creation of vast interconnected AI networks, and the growing importance of AI inventors. Increases in computing power and innovations in computer science have fueled AI innovation. From 2002 to 2018, AI patent applications rose by over 100%. This pace shows no sign of slowing: Countries are pouring money into AI research, and major telecommunications firms like Nokia and Huawei have begun investing in 6G-enabled AI technologies.
These AI innovations will affect our lives in ways that current regulatory frameworks are not prepared to address. For example, 6G-enabled AI technology will permit the creation of a “smart application layer” of interconnected devices, from autonomous vehicles to medical implants to geolocation sensors, all of which will communicate with one another in real time. This network will be undergirded by an “intelligent sensing layer,” a web of technologies that will collect and analyze data from these devices. Every facet of life could be connected. Every accompanying bit of data could be collected.
Public reaction will vary. Some will welcome the conveniences and synergies. Others will fear the establishment of a technological Panopticon. Consider the debate over universal facial recognition. San Francisco has banned the use of facial recognition technology by the police and other agencies, fearing intrusion into citizens’ private lives. Elsewhere, from London to Beijing, facial recognition is common and authorities hail its ability to fight crime. These debates will only grow.
United States regulators have not kept pace with AI’s advances. AI regulation is “in its infancy.” Little has happened at the federal level: In 2019, the White House issued an Executive Order creating the “American AI Initiative,” and the National Institute of Standards and Technology identified nine “areas of focus” for AI standards, but no binding policies have issued. The United States Congress has introduced many bills to regulate artificial intelligence, but none have passed.
This void has not been filled at the state level, at least not in a consistent manner. Just look at the difficulties in regulating autonomous vehicles: States like Arizona have billed themselves as red-tape-free laboratories for autonomous vehicle innovation. Yet a fatal accident in Arizona revealed safety issues with prototypes then on the road, as a post-accident report found that Uber, which operated the vehicle’s self-driving system, had disabled the car’s autonomous emergency braking and standard collision-avoidance system before the accident. Autonomous vehicle manufacturers have openly sought consistent regulatory guidance. This lack of state regulatory cohesion is repeated on an international scale, as countries have not coalesced under any sort of international standard. Firms involved with AI technology should monitor the shifting regulatory sands, even potentially delaying large-scale investments until it becomes clear that they will receive the state’s blessing.
And, while AI could potentially decrease overall energy consumption by increasing network efficiencies, AI technologies require large amounts of energy for computation and communication, which could frustrate plans for energy-efficient implementation. AI innovators must understand both the physical and governmental barriers that could slow the 6G revolution.
AI & IP
Even as AI patents have skyrocketed, a recent decision from the U.S. Patent and Trademark Office threatens to curb a new source of AI intellectual property: Patents on products invented by AI. In April 2020, the PTO considered a case involving an AI called DABUS, a system of neural networks trained to independently recognize the novelty and salience of inventions. https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf. Without human intervention, DABUS had “invented” an improved beverage container, and a “neural flame” device for search-and-rescue missions. The PTO ruled that it could not list DABUS as the inventor, citing terms in the statute like “individual,” “herself,” and “person.” Id. at 8.
AI inventorship will only grow. The PTO says that the term “inventor” must refer to “the individual who invented or discovered the subject matter of the invention.” Id. at 6. Under this interpretation, inventions generated autonomously by AI will be unpatentable, a ruling with enormous ramifications. The owners of AI systems will still want to protect their intellectual property, and may end up relying on other forms of protection, such as trade secrets. That would be a massive change to the status quo, where patents dominate the enforcement landscape. Trade secret enforcement comes with its own challenges, such as proving misappropriation and, in some cases, showing that damages are directly attributable to the misappropriation. Increased reliance on trade secrets will also force companies to shift their IP policies and practices. A trade secret exists only as long as it is kept secret; even an accidental disclosure destroys trade secret protection. A shift from patent to trade secret protection may thus force AI-based companies to update their employment agreements, as they would need to ensure that employees never disclose the secrets underlying their firms’ innovations, even after they leave the firm.
6G technologies will require a massive expansion of the regulatory structure around spectrum, the data-bearing frequencies that enable wireless communications. 6G requires frequencies in the 100 GHz to 1 THz range, which will allow for the extreme densification of systems, enabling hundreds and even thousands of simultaneous wireless connections with significantly higher capacity than 5G systems. This can support such innovations as zero-latency local networks, wireless “fiber-like” data rates between local devices, wireless data center networks (reducing infrastructure cost), on-chip wireless networks, nano-networks (which connect nano-devices), and intersatellite communications.
This spectrum revolution will require competent regulation. In the U.S., two federal agencies now regulate spectrum: the FCC, which governs private use, and the National Telecommunications and Information Administration (NTIA), which manages the federal government’s use. In March 2019, the FCC voted to open frequencies up to 3 THz, saying it had “launched the race to 6G,” and created “experimental licenses” allowing researchers to work in this range. Because the FCC and NTIA share jurisdiction over these spectra, meaningful coordination between the two agencies will be essential.
The presence of two regulatory agencies can have downsides for industry, as agencies can fight over territory, credit, and decision-making authority. For instance, the DOJ’s Antitrust Division and the Federal Trade Commission (FTC) have overlapping authority to enforce antitrust laws. This recently became an issue in a case where the FTC sought to impose antitrust liability on telecommunications manufacturer Qualcomm, while DOJ Antitrust argued that the same suit threatened national security because, by disadvantaging Qualcomm, it risked undermining US leadership in 5G technology and boosting Chinese manufacturer and alleged saboteur Huawei.
6G may revolutionize health care through fully-automated surgery, rapid transfer of medical data, and fully implantable devices. But where there is medicine, there are malpractice suits. 6G-enabled technologies could create an unanticipated wave of liability. 6G’s infrastructure requirements will also pose a problem for medical providers who hope to rely on its interconnected technology.
Medical malpractice liability from defective devices is not a new phenomenon. To take one example, Stryker has paid over $2 billion to settle lawsuits stemming from its hip-replacement implants. Consider the implementation of a 6G-powered brain-computer interface (BCI). The BCI allows individuals to control machines with their brains, offering a host of innovations: Sophisticated prosthetic limbs, improved memory for patients suffering from Alzheimer’s, and similar technologies. But the promise and potential profit of these innovations come with enormous potential liability. Individual brain-damage medical malpractice suits can result in judgments of millions of dollars. A brain-damage class action would threaten manufacturers with crushing liability. And in medical device litigation, liability would not fall solely on manufacturers: Anyone in the distribution chain, from manufacturers, to hospitals, to doctors, must beware of this potential wave of liability.
A global 6G network requires a densely packed web of transmitters and base stations. Because 6G base stations are anticipated to have a transmission distance under 200 meters, some estimate that full 6G implementation will require 100 billion base stations globally. The placement of 5G towers, which average one per block in many U.S. cities, has already sparked fights over local autonomy and property rights, as many states have barred cities from setting the rates they charge companies to erect antennae. The far greater number of 6G base stations poses a proportionately greater problem.
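The arithmetic behind those estimates is easy to sketch. In this back-of-the-envelope calculation, the 200-meter 6G range comes from the estimate above, while the 100 km² city size and the 500-meter 5G comparison figure are illustrative assumptions:

```python
import math

# Back-of-the-envelope: how many base stations does it take to blanket an
# area if each station covers a circle of a given radius (ignoring overlap)?
def stations_needed(area_km2, radius_m):
    coverage_km2 = math.pi * (radius_m / 1000) ** 2  # km^2 covered per station
    return math.ceil(area_km2 / coverage_km2)

# Hypothetical 100 km^2 city: a 6G station with a 200 m range (per the
# estimate above) vs. an assumed 500 m range for a 5G small cell.
print(stations_needed(100, 200))  # 796 stations
print(stations_needed(100, 500))  # 128 stations
```

Even under these rough assumptions, halving the transmission radius roughly quadruples the number of stations, which is why sub-200-meter ranges drive such staggering deployment figures.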
These infrastructure battles will not be confined to the planet’s surface. Nations have already begun building 6G networks from above, launching satellites that could offer worldwide 6G connectivity. This proliferation creates a host of issues, from the creation of clouds of “space junk” that increase the risks of satellite collisions, to the stoking of tensions between world powers over orbital dominance. These battles could be fought at a lower altitude, as researchers have proposed using drones to establish 6G networks. Even there, the regulatory environment is still developing, with the FAA announcing a set of rules in December 2020.
Privacy may be the most important legal issue raised by 6G innovation: When technology promises global connectivity, an individual’s entire life risks becoming one data breach away from compromise. 6G’s ultimate role in society may turn on whether its promised security can allay well-grounded fears of a breach of the interconnected network that will hold all of society’s data.
Current FTC regulations require firms to take “reasonable security” measures. A web of regulators patrols privacy issues: The SEC has guidelines for cybersecurity in finance, DHS investigates cybercriminals, and Commerce “is tasked with enhancing cybersecurity awareness and protections.”
States have implemented their own regulations. For example, New York recently enacted the SHIELD Act, which, among other things, requires companies to carry out “reasonable” security measures, including implementing procedures to train employees and “adjust[ing] the security program in light of business changes or new circumstances.” The Act also permits the attorney general to levy a fine of up to $5,000 for each failure to adhere to reasonable security standards under Section 350(d) of the New York General Business Law. S.5575B Reg. Sess. 2019-2020 (N.Y. May 7, 2019).
Both federal and state laws use reasonableness as a starting point, but it is impossible to predict what security protocols will be “reasonable” on 6G’s new frontier. The difficulty of predicting and adapting to 6G’s innovations will have real consequences because of the harsh penalties imposed by state-level privacy laws. The California Consumer Privacy Act, enacted in 2018, allows statutory damages of up to $750 per consumer per incident, or actual damages, whichever is greater. And while plaintiffs in private litigation have so far struggled to show harm from data breaches, proving harm may become easier as 6G networks deepen our reliance on data. A single breach could lead to catastrophic damages for the victimized firm.
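The scale of that exposure is easy to illustrate. In this sketch, the $750 figure is the CCPA’s statutory cap per consumer per incident; the breach size is a hypothetical assumption:

```python
# Hypothetical illustration of CCPA statutory-damage exposure.
# $750 is the statutory cap per consumer per incident under the CCPA;
# the number of affected consumers is an assumed figure for illustration.
statutory_cap_per_consumer = 750       # USD, from the CCPA
affected_consumers = 1_000_000         # hypothetical breach size

max_statutory_exposure = statutory_cap_per_consumer * affected_consumers
print(f"${max_statutory_exposure:,}")  # $750,000,000
```

A breach touching a million California consumers thus carries a theoretical statutory ceiling of three-quarters of a billion dollars before any actual damages are even considered.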
Despite the gloom, 6G actually could offer a privacy renaissance. Today, encryption is largely accomplished via asymmetric cryptography, a process by which data are encoded with a public key, accessible to all, but can be decoded only with a private key. Modern keys are easy to generate but difficult to reverse-engineer with current technology. That will change in the quantum era. Quantum key distribution lets cryptographers exchange secure keys by leveraging the uncertainty principle, the idea central to quantum encryption that an attempt to measure a piece of information disturbs the system in a way that can be detected. The sending computer transmits the key needed to decode the encrypted data, but both parties can sense a disturbance in the transmission; if one is detected, the sender transmits a different key, repeating the process until a key arrives undisturbed.
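The detection logic described above can be sketched in a toy simulation, loosely modeled on BB84-style quantum key distribution. Everything here is illustrative: the qubits are simulated classically, and the function name is invented for this example. The key property is that an eavesdropper who measures qubits in transit randomizes roughly a quarter of the sifted key bits, revealing the intrusion:

```python
import random

def bb84_error_rate(n_bits=4000, eavesdrop=False, rng=random):
    """Toy BB84-style simulation; returns the error rate on the sifted key."""
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]

    # An eavesdropper must measure each qubit in a guessed basis, collapsing
    # it and re-sending what she observed; a wrong guess randomizes the bit.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            bit = bit if eve_basis == basis else rng.randint(0, 1)
            basis = eve_basis
        channel.append((bit, basis))

    # Bob measures in his own random bases; a basis mismatch yields a random bit.
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if bob_basis == sent_basis else rng.randint(0, 1)
                for (bit, sent_basis), bob_basis in zip(channel, bob_bases)]

    # Sift: keep positions where Alice's and Bob's bases happened to match,
    # then publicly compare them to estimate the error (disturbance) rate.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)
```

With no eavesdropper the sifted keys agree perfectly; with one, the error rate hovers near 25%, a statistical fingerprint that tells the parties to discard the key and start over.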
This increase in encryption technology will magnify a separate issue: The (in)ability of law enforcement to access devices of the accused. In 2016, Apple refused to unlock the phone belonging to the perpetrator of a mass shooting. Once encryption becomes functionally unbreakable, the divergent interests of the government and the service and device providers will come to a head.