Artificial intelligence, or AI, is the broad conceptual term for the technologies and systems that make it possible for computers to perform tasks involving human-like decision-making, intelligence, learned skills, and expertise. Once considered a remote possibility reserved for a futuristic tomorrow, AI has seen its development and integration across multiple private and public sectors accelerate with the technological advances of the past 20 years. In 2015, over $2.4 billion in venture capital was invested in the development of AI-based technologies. Governmental agencies, ranging from the Department of Defense to the Treasury, are actively exploring and implementing AI for use in the public sector. Within the private sector, well-known companies such as Google, Facebook, Apple and Uber, as well as start-ups across the country, are active in the research and development of innovative AI technology-based products. Examples include self-driving cars, robotic surgical equipment, complex automated accounting and security systems, and even software performing legal tasks such as document review or research. AI has innumerable practical applications, including medical diagnosis expert systems that emulate the decision-making of physicians, automated securities trading systems, automated drones, and many other variants. Developing alongside AI is natural language processing, which in the broadest sense concerns the interactions between computer programs and human languages, such that computers are learning to emulate human communication.
Emerging with these technologies is ever-increasing public concern over the many risks presented when decisions are made by computers and not by humans. Much has been written of the ethics, safety, and regulatory concerns presented by the rapid growth of AI technologies. Policymakers are forced to chart new territory when tasked with drafting legislation that does not stifle AI innovation but still protects the public from the possible dangers presented when computer judgment replaces that of humans. The rapid development of AI technology is in tension with the relative snail's pace, and lack of expertise, of state and national legislatures. Legislative protections for the public from AI technologies will need to be, and should be, enacted, but our courts may be the first to address these novel legal issues.
Unlike legislation, however, the protection provided by the courts is remedial, not preventative. Courts assess liability and damages for activity that has already transpired, based on prior legal precedent. Cases in which the harm is alleged to have been caused by AI-based computers or systems ask the court to unravel novel technology and apply ill-fitting case law to make determinations of liability. For example, common law tort and malpractice claims often center on the very human concepts of fault, negligence, knowledge, intent, and reasonableness. So what happens when human judgment, or human scienter, is replaced by a computer? What happens when the perpetrator, the victim, or both are not human? What happens when there is a real cause of action, but an artificial defendant? Who is liable, and what harm was caused?
This article provides an overview of how courts have responded to lawsuits involving AI and related technologies, the types of additional legal claims to be expected as AI becomes more common, and how the law might evolve to address future claims involving AI.
Courts and Common Law Claims Involving Artificial Intelligence
Although claims involving AI technology are novel and only a handful of courts have tackled AI-related technologies or products, common law claims involving analogous automated technology can be analyzed to provide a framework for the developing jurisprudence regarding AI technology.
For example, a decision in a consolidated class action in the District Court for the Eastern District of Missouri found that the use of a computer program to simulate human interaction could give rise to liability for fraud. In re Ashley Madison Customer Data Sec. Breach Litig., 148 F. Supp. 3d 1378, 1380 (JPML 2015). Among the claims related to the 2015 data breach of the infamous Ashley Madison online dating website, which resulted in the mass dissemination of user information, were allegations that defendants engaged in deceptive and fraudulent conduct by creating fake computer “hosts” or “bots” programmed to generate and send messages to male members under the guise that they were real women, thereby inducing users to make purchases on the website. It is estimated that as many as 80% of initial purchases on the website—millions of individual transactions—were conducted by a user communicating with a bot operating as part of Ashley Madison’s automated sales force for the website.
Another court, in a case involving an internet advertising breach of contract claim, was asked to resolve a dispute over the meaning of “impressions,” a key term in Internet advertising. Go2Net, Inc. v. C I Host, Inc., 115 Wash. App. 73 (2003). The Go2Net Court determined that the parties’ contract permitted visits by search engines and other “artificial intelligence” agents, as well as by human viewers, to be included in the advertiser’s count of “impressions.” Id. at 86.
CNBC reported an incident involving online “bots,” where an “automated online shopping bot” was set up by a Swiss art group, given a weekly allowance of $100 worth of Bitcoin—an online cryptocurrency—and programmed to purchase random items from the “dark web,” where shoppers can buy illegal or stolen items. In January 2015, the Swiss police confiscated the robot and its illegal purchases to date, but did not charge the bot or the artists who designed it with any crime. We can soon expect to see cases of similar ilk emerge in both criminal and civil courtrooms. See http://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html.
Cases involving personal injury resulting from automated machines have also been litigated. For example, cases have involved workers’ compensation claims or claims against manufacturers by workers injured by robots on the job. See, e.g., Payne v. ABB Flexible Automation, Inc., 116 F.3d 480, No. 96-2248, 1997 WL 311586, *1-*2 (8th Cir. 1997) (per curiam) (unpublished table decision); Hills v. Fanuc Robotics Am., Inc., No. 04-2659, 2010 WL 890223, *1, *4 (E.D. La. 2010); Bynum v. ESAB Grp., Inc., 651 N.W.2d 383, 384-85 (Mich. 2002) (per curiam); Owens v. Water Gremlin Co., 605 N.W.2d 733 (Minn. 2000). There has also been extensive litigation over the safety of surgical robots, especially the “da Vinci” robot manufactured by Intuitive Surgical, Inc. See, e.g., O'Brien v. Intuitive Surgical, Inc., No. 10 C 3005, 2011 WL 304079, at *1 (N.D. Ill. Jul. 25, 2011); Mracek v. Bryn Mawr Hosp., 610 F. Supp. 2d 401, 402 (E.D. Pa. 2009), aff'd, 363 F. App'x 925 (3d Cir. 2010); Greenway v. St. Joseph's Hosp., No. 03-CA-011667 (Fla. Cir. Ct. 2003). Although the court in United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3d Cir. 1984) stated that “robots cannot be sued” and discussed instead how the manufacturer of a defective robotic pitching machine is liable for civil penalties for the machine's defects, it is important to note that this decision was rendered in 1984. Robots and AI technology have become far more sophisticated since then, and courts will continue to grapple with the question of assessing liability as these AI technologies and autonomous machines gain mainstream acceptance.
Anticipated future litigation surrounding liability for “driverless” cars might run into roadblocks, judging from the limited body of case law involving other forms of what are referred to as “autonomous moving vehicles.” Liability has often been difficult to establish in other autonomous moving vehicle cases where alternative theories of liability are present. For example, in Ferguson v. Bombardier Service Corp., 244 F. App'x 944 (11th Cir. 2007), the court rejected a manufacturing defect claim against the manufacturer of an autopilot system in a military cargo plane, finding equal credibility in the defense theory that the plane was improperly loaded, such that a strong gust of wind caused it to crash. Even cases decided almost fifty years ago reflect the current legal analysis concerning the question of liability for automated technologies. For example, in Nelson v. American Airlines, Inc., 70 Cal. Rptr. 33 (Cal. Ct. App. 1968), the court applied the doctrine of res ipsa loquitur to find an inference of negligence by American Airlines relating to injuries suffered while one of its planes was on autopilot, but ruled that the inference could be rebutted if American Airlines could show that the autopilot did not cause the accident or that an unpreventable cause triggered the accident.
More recently, auto manufacturer Toyota was embroiled in a multi-district litigation matter involving allegations that certain of its vehicles had a software defect that caused the vehicles to accelerate notwithstanding the measures drivers took to stop them. The court denied Toyota’s motion for summary judgment, which was premised on the ground that there could be no liability because the plaintiff and plaintiff’s experts were unable to identify a precise software design or manufacturing defect, finding instead that the evidence supported inferences from which a reasonable jury could conclude that the vehicle continued to accelerate and failed to slow or stop despite the plaintiff’s application of the brakes. In re Toyota Motor Corp. Unintended Acceleration Mktg., Sales Practices, & Prod. Liab. Litig., 978 F. Supp. 2d 1053, 1100-01 (C.D. Cal. 2013).
It remains to be seen whether the principles of res ipsa loquitur will be used by modern courts to conclude that the car (or other automated device), not the driver/operator, is at fault. Defendants will argue that the doctrine should not apply when it is unreasonable to infer that the accident was caused by a design or manufacturing defect, or when the accident in question is not one ordinarily seen with design defects. See Restatement (Third) of Torts: Prod. Liab. § 3 (1998). What is clear is that difficult questions will continue to arise when autonomous machines are involved in accidents and/or cause injury.
Common Law Claims on the Horizon
As AI programs become more adaptive and capable of learning on their own, courts will have to determine whether such programs can be subject to a unique variant of agency law. Current laws of agency may not apply, because once an autonomous machine decides for itself what course of action to take, the agency relationship becomes frayed or breaks altogether. See Restatement (Third) of Agency §7.07 (2006) (“An employee acts within the scope of employment when performing work assigned by the employer or engaging in a course of conduct subject to the employer’s control. An employee’s act is not within the scope of employment when it occurs within an independent course of conduct not intended by the employee to serve any purpose of the employer.”); id. §7.03 (providing that a principal is subject to vicarious liability for an agent's actions only when the agent is acting within the scope of employment). As a result, it is possible that courts or legislatures will be asked to impose strict liability on the creators of such programs for the programs' acts.
Product liability claims and conventional views of culpability and ethics are certain to be tested by these autonomous machines—like self-driving vehicles—where the current roadmap is for a mixed world of human and AI drivers. Product liability law provides some framework for resolving such claims; with a “product” like an autonomous car, the law groups possible failures into familiar categories: design defects, manufacturing defects, information defects, and failures to instruct on appropriate uses. Complications may arise when product liability claims are directed at failures in software, as computer code has not generally been considered a “product” but instead is thought of as a “service,” with cases seeking compensation for harm caused by allegedly defective software more often proceeding as breach of warranty cases rather than product liability cases. See, e.g., Motorola Mobility, Inc. v. Myriad France SAS, 850 F. Supp. 2d 878 (N.D. Ill. 2012) (case alleging defective software pleaded as a breach of warranty); In re All Am. Semiconductor, Inc., 490 B.R. 418 (Bankr. S.D. Fla. 2013) (same).
Under these frameworks, courts will have to assess what liability to impose for accidents involving the various types of automated vehicles available today, as well as those soon to be released. One option is to impose strict liability on manufacturers of the automated systems. If there is no strict liability, a court might find itself in uncharted waters if forced to determine how best to weigh the comparative liability of AI programs and drivers. The solution suggested by the existing law, while dated, would hold the vehicle’s manufacturer liable and let the manufacturer seek indemnity or contribution from other parties, if any, that might be responsible. However, consideration also may be given to apportioning responsibility among all of the parties that participated in building and maintaining the vehicle’s autonomous systems, through the application of a variation of “common enterprise” liability. In the field of consumer protection, for instance, the Federal Trade Commission often invokes the “common enterprise” doctrine to seek joint and several liability among related companies engaged in fraudulent practices. See, e.g., FTC v. Network Servs. Depot, Inc., 617 F.3d 1127 (9th Cir. 2010); SEC v. R.G. Reynolds Enters., Inc., 952 F.2d 1125 (9th Cir. 1991); FTC v. Tax Club, Inc., 994 F. Supp. 2d 461 (S.D.N.Y. 2014). A “common enterprise” theory might allow the law to impose joint liability, for limited types of claims, without having to assign every aspect of wrongdoing to one party or another.
Legislatures and regulatory agencies have already been making great strides toward determining how best to attribute fault in such situations. For example, the states of Nevada, Florida, California, Michigan and Tennessee and the District of Columbia have all passed legislation related to autonomous automobiles, and nineteen additional states have similar bills under consideration. See Jessica S. Brodsky, Autonomous Vehicle Regulation: How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars, 31 Berkeley Tech. L.J. 851 (2016). Sophisticated parties are destined to confront a variety of complicated legal issues presented by the advent of AI technologies and products. In particular, the competing interests between manufacturers of various AI components and of the end products that incorporate those components will need to be addressed through contracts and robust indemnification agreements. Legislators and courts will soon have to answer questions such as whether a machine can enter into a binding contract on behalf of itself or a person it represents, and whether a machine-negotiated contract redefines what it means to look to the understanding of one party or between the parties. We are at the precipice of requiring new definitions for scienter, “meeting of the minds,” and a host of other black letter law constructs that have served as the underpinning of commercial litigation for generations.
Patent Litigation and Specific Legal Issues Facing AI Innovations
AI technologies have also been at issue in patent cases, and such cases are certain to increase. To date, the main question courts have addressed is whether the AI subject matter at issue is patent-eligible under 35 U.S.C. § 101. Courts addressing this question must first ask whether a patent’s claims are directed to a patent-ineligible concept, such as laws of nature or abstract ideas. If not directed to such a concept, the claims are patent-eligible under this test. However, if a patent’s claims are directed to a patent-ineligible concept, the analysis moves to a second step: whether the claims, despite being directed to a patent-ineligible concept, are nevertheless patent-eligible because they include a sufficiently “inventive concept”—an element or combination of elements that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the ineligible concept itself. See Vehicle Intelligence & Safety LLC v. Mercedes-Benz USA, LLC, 635 F. App'x 917 (Fed. Cir. 2015), cert. denied, 136 S. Ct. 2390 (2016) (holding claims directed to the use of “expert system(s)” to screen equipment operators for impairments such as intoxication to be patent-ineligible). The Vehicle Intelligence Court first determined that the claims at issue were directed to a patent-ineligible concept—“the abstract idea of testing operators of any kind of moving equipment for any kind of physical or mental impairment.” The “expert system” concept was considered abstract because, based on the definition assigned to it by the Court during claim construction, it was something performed by humans absent automation, and also because “neither the claims at issue nor the specification provide any details as to how this ‘expert system’ works or how it produces faster, more accurate and reliable results.” This lack of clarity contributed to a holding at the second step that the claims lacked an inventive concept, rendering the patent claims at issue invalid as patent-ineligible. The Federal Circuit likened the patent to “a police officer field-testing a driver for sobriety.”
In Blue Spike, LLC v. Google Inc., No. 14-CV-01650-YGR, 2015 WL 5260506, at *5 (N.D. Cal. Sept. 8, 2015), aff'd, 2016 WL 5956746 (Fed. Cir. Oct. 14, 2016), the Court found that because the patents at issue sought to model on a computer “the highly effective ability of humans to identify and recognize a signal,” the patents simply covered a general purpose computer implementation of “an abstract idea long undertaken within the human mind.” The Blue Spike Court also found that the “inventive concept” required by the second step of the eligibility inquiry was not present, as the claims “cover a wide range of comparisons that humans can, and indeed, have undertaken since time immemorial.”
At least one District Court opinion has considered the patentability of driverless cars and automated support programs. In Hewlett Packard Co. v. ServiceNow, Inc., No. 14-CV-00570-BLF, 2015 WL 1133244 (N.D. Cal. Mar. 10, 2015), Judge Freeman of the Northern District of California found that HP patents were directed to the abstract idea of “automated resolution of IT incidents” and were not patent-eligible. While rejecting evidence of commercial success as evidence of an “inventive concept,” Judge Freeman considered the hypothetical of patents on self-driving cars in the context of patent eligibility. She remarked that while a self-driving car may be very commercially successful, novel, and non-obvious, the concept of a self-driving car is still abstract. So while an inventor “may be able to patent his specific implementation,” Judge Freeman disagreed that the concept of self-driving cars could be patented in the abstract. While Judge Freeman’s hypothetical is likely dicta, it nevertheless serves as a guidepost regarding the patent eligibility of self-driving vehicles.
For patent litigation involving AI technologies, another area ripe for legal intervention is the determination of inventorship. It is well-settled that an inventor can use “the services, ideas, and aid of others in the process of perfecting his invention without losing his right to a patent.” Hess v. Advanced Cardiovascular Sys., 106 F.3d 976, 981 (Fed. Cir. 1997). Furthermore, 35 U.S.C. Section 103 states: “Patentability shall not be negated by the manner in which the invention was made.” However, the patent statutes define “inventor” to mean “the individual . . . who invented or discovered the subject matter of the invention,” and the statutes also describe joint inventors as the “two or more persons” who conceived of the invention. See 35 U.S.C. §§ 100, 116(a). The Federal Circuit has explicitly barred legal entities from obtaining inventorship status because “people conceive, not companies.” New Idea Farm Equip. Corp. v. Sperry Corp., 916 F.2d 1561, 1566 n.4 (Fed. Cir. 1990).
The Copyright Office has already announced that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” U.S. Copyright Office, The Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2014); see also U.S. Copyright Office, The Compendium of U.S. Copyright Office Practices § 202.02(b) (2d ed. 1984), available at http://copyright.gov/history/comp/compendium-two.pdf (“The term ‘authorship’ implies that, for a work to be copyrightable, it must owe its origin to a human being.”). The 2014 iteration of the Human Authorship Requirement was partially the result of a prominent public discourse about non-human authorship stemming from the “Monkey Selfies.” See Naruto v. Slater, No. 3:2015-cv-04324, 2016 WL 362231, *1 (N.D. Cal. Jan. 23, 2016). While there have not yet been cases tackling this unique issue of inventorship, scholars have begun to take notice and weigh in. See, e.g., Ben Hattenbach & Joshua Glucoft, Patents in an Era of Infinite Monkeys and Artificial Intelligence, 19 Stan. Tech. L. Rev. 32 (2015); Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079 (2016).
Replacing Professional Judgment with Computers: Malpractice Claims Anticipated
There is no dispute that the legal and medical professions are among those requiring the greatest degree of decision-making and exercise of judgment. It is because of this that claims of malpractice are available to those who rely on the decision-making and judgment of the skilled, trained professionals who practice in these fields. These are also two fields that are introducing an increasing number of AI-based technologies. In the legal industry, a growing interest in “big data” and natural language processing has resulted in start-ups seeking to tackle the difficult task of aggregating, synthesizing and modeling the collective corpus of case law. One example, RavelLaw, uses natural language processing to identify, extract and classify information from legal documents, automating basic case law analysis to make research more efficient and targeted. The company hopes to offer litigators and their clients automated analysis of briefs, wording recommendations tailored to particular judges, and probability-based outcome predictions. Another, ROSS Intelligence, calls itself “Your Brand New Artificially Intelligent Lawyer” and is built in partnership with IBM using the Watson artificial intelligence supercomputer. The company highlights its ability to process natural language to assist in case law review. Another area that has seen significant penetration within law firms and with clients is the use of AI to review documents. The advent of e-discovery means it is no longer efficient, or economical, to have attorneys conduct first reviews of the massive volumes of documents collected in large litigations. Attorney oversight remains necessary, in particular to ensure adequate controls are in place to protect privileged and confidential information from inadvertent disclosure.
In the medical industry, robotic surgical instruments and cancer treatment devices, as well as the continued development and adoption of IBM’s Watson for medical treatment, have led to increased analysis of potential liability for the use of such instruments and devices. As mentioned above, there is precedent for litigation over the safety of surgical robots, with the claims all proceeding on some form of agency theory rather than claiming that the robot itself bears liability. By combining elements from medical malpractice, vicarious liability, products liability, and enterprise liability, the law can create a uniform approach for AI systems, thereby eliminating inequities that may arise from courts applying different theories of liability and encouraging the continued beneficial use of such systems.
Medical malpractice applies to healthcare providers, while vicarious liability tends to focus on the institutions that employ them. It is possible to envision a medical malpractice action based on lack of informed consent arising when a physician fails to inform the patient of all relevant information about a course of treatment, including any risks associated with the use of autonomous machines in that treatment. The hospital's own duty to supervise the quality of medical care administered in the facility would be implicated in actions asserting vicarious liability, so long as the court determines that the autonomous machine can be analogized to an employee. If a court instead analogizes the AI system to a machine like a Magnetic Resonance Imaging device, then products liability claims may attach to defective equipment and medical devices that healthcare providers use. While manufacturers of medical equipment and devices can be liable through products liability actions, the learned intermediary doctrine means the manufacturer's duty to warn runs to the physician rather than the patient, which can prevent plaintiffs from suing medical device manufacturers directly. See, e.g., Banker v. Hoehn, 278 A.D.2d 720, 721, 718 N.Y.S.2d 438, 440 (2000). This liability structure makes it challenging for patients to win products liability suits in medical device cases.
While AI innovations are certain to save time and money, there are concerns that AI technology, when used to replace human professional judgment, could lead to increased claims raising complex issues of causation, legal duties, and liability. A regime based on some form of enterprise liability, similar to what has been discussed above in relation to autonomous vehicles, combining elements of malpractice, products liability, and vicarious liability, could address these legal challenges while still encouraging professionals to purchase and use these AI systems.
Conclusions
As AI technologies, products, systems, and autonomous machines continue to develop and gain acceptance, the legal claims related to these technologies will also rise. While courts, legislatures, and regulatory agencies have begun to address the novel legal issues presented, the current legal framework leaves several areas open for significant development. Parties filing and defending actions related to AI technology will need to advance creative concepts for addressing issues such as causation and liability, which will surely be at the forefront of any AI-related litigation. And when novel AI-related issues arise with no apparent legal precedent or laws to rely upon, let’s still wait a bit longer before asking a robot for help.