AI Bulletin – May 2021


On April 21, 2021, the European Commission proposed Europe’s first comprehensive legal framework to regulate Artificial Intelligence and promote trustworthy AI. The framework issued by the executive arm of the European Union aims to regulate AI technologies including autonomous vehicles, facial recognition, biometric recognition, and others, according to a four-tier scale based on the potential risk of harm to user safety and fundamental rights. The proposed additional requirements in the regulations range from increased transparency requirements for limited-risk applications, to new requirements regarding data quality, documentation and oversight for high-risk systems, to an outright ban on those considered unacceptably risky.

I. The New Legislative Proposal

The regulation proposed by the European Commission classifies Artificial Intelligence systems under four levels of risk: unacceptable, high, limited and minimal. The framework applies to public and private actors inside and outside the European Union when the AI system is placed on the EU market or its use affects people located in the EU. The regulation would not apply to private and non-professional uses of AI technologies.

  • 1) Unacceptable risk: Systems that represent “a clear threat to the safety, livelihoods and rights of people” are considered an unacceptable risk. This category includes “AI systems or applications that manipulate human behavior to circumvent users’ free will (for example, toys that use voice assistance to encourage unsafe behavior by minors) and systems that allow governments to give a social score.” These systems are banned in their entirety.
  • 2) High risk: AI systems identified as high risk include those used in:
    1. Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk;
    2. Educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams);
    3. Safety components of products (e.g. AI applications in robot-assisted surgery);
    4. Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
    5. Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
    6. Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
    7. Migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
    8. Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts);
    9. Remote biometric identification (e.g. identifying people in a crowd).

These categories are expected to be adjusted as AI technologies develop. For services classified as high risk, the proposed regulation provides mandatory requirements for approval and market access. Those obligations include:

  • High-quality datasets;
  • Adequate risk assessment and mitigation systems;
  • Traceability, including logging of activity;
  • Human oversight; and
  • Transparency.

Providers of these high-risk systems will be expected to demonstrate conformity with these obligations before their products are approved for the market. Any substantial modification of an AI system will require a new assessment.

  • 3) Limited risk: AI systems posing limited risk, on the other hand, are subject only to a transparency obligation: users must be explicitly informed that they are interacting with an Artificial Intelligence system. This category includes chatbots, applications that use Artificial Intelligence to recognize users’ emotions or to categorize them based on biometric data, and systems that offer content created with technologies such as deep fakes, which present artificially generated images or sounds that appear real.
  • 4) Minimal risk: Finally, minimal risk systems include “applications such as video games or AI-based spam filters.” For this category, the proposed framework imposes no additional regulations, as these uses are not expected to compromise the rights or safety of citizens. According to the Commission, the vast majority of AI systems fall into this category.

II. Potential Implications of the Legislative Proposal

The Commission’s approach sets out strict requirements, especially with regard to biometric identification through AI. Rather than banning these higher-risk uses of AI outright, the European Commission has classified them as high risk and prohibited their indiscriminate use, at least in public areas. However, there are “strictly defined and regulated” exceptions that fall within the sphere of public security and for which authorization by a judicial body is required, with “time limits, geographical scope and searched databases.” Examples include the search for missing minors, the prevention of imminent terrorist attacks, and the location of perpetrators of serious crimes.

Offenders, according to the Commission’s text, could incur administrative fines of up to 30 million euros or, in the case of companies, fines of up to 6% of their total worldwide annual turnover, whichever is higher.

III. Regulatory Adoption

The European Parliament and the Member States will have to adopt the Commission’s proposals under ordinary legislative procedures. Once adopted, the regulations will be directly applicable across the EU. The regulation expressly introduces the possibility of creating “regulatory sandboxes” to develop AI services under the guidance of the competent authorities, in order to more easily implement solutions that are “legal by design.”

Member States will be asked to designate one or more national authorities to supervise the application and implementation of this regulatory framework, as well as carry out market surveillance activities.

IV. Conclusions

Europe is planning far-reaching and forward-thinking regulations to guide the development of Artificial Intelligence and protect people from the unintended consequences of AI. By the first quarter of 2022, the EU is also planning to release rules to address liability issues related to new technologies, including AI systems. The EU is creating an ambitious reference framework so that these technologies can contribute to achieving its sustainability goals while respecting the fundamental rights of individuals.



  • On April 6, Judge Brinkema in the Eastern District of Virginia held oral arguments on the parties’ dueling motions for summary judgment in the closely watched case Thaler v. Hirshfeld et al., Case No. 1:20-cv-00903. Plaintiff Stephen Thaler’s complaint challenges the PTO’s rejection of his patent applications that listed an artificial intelligence machine (DABUS) as the named inventor. Both sets of summary judgment motions are directed to the single legal issue of whether AI can be an “inventor” under the patent laws. Following the hearing, Judge Brinkema took both motions under submission, and written rulings are expected soon.


  • On January 20, the last day of his presidential term, now-former President Trump granted a full pardon to former Uber executive Anthony Levandowski. Mr. Levandowski was sentenced last year by Judge Alsup of the Northern District of California to 18 months in prison after pleading guilty to stealing trade secrets from Google’s self-driving car program, but had not yet started serving his sentence due to the ongoing pandemic. Mr. Levandowski was originally referred for criminal investigation by Judge Alsup in 2017 in connection with the Waymo v. Uber litigation over which he was presiding (and in which Quinn Emanuel represented Waymo).

UPDATE ON THE PLANNER5D LITIGATION - Motion to Dismiss Re-filed Copyright Claims Denied

  • In the April and October 2020 editions of the AI Bulletin, we reported on Planner5D’s copyright infringement and trade secret misappropriation action against Facebook and Princeton University, relating to 2D and 3D models of virtual objects for use as machine learning training data. In the last update, Judge Orrick had granted the motion to dismiss Planner5D’s amended copyright claims, but granted leave for Planner5D to bring them again after filing new registration applications for work up to 2016 (the date that Princeton allegedly scraped the objects from Planner5D’s website). When Planner5D filed its new registration applications, however, the Copyright Office rejected them on the grounds that “the deposits submitted with these applications do not meet the requirements for registering a work as a computer program.”
  • Planner5D then re-filed its copyright infringement claim under 17 U.S.C. § 411(a), which permits copyright infringement cases where registration “has been refused.” Planner5D also petitioned the Copyright Office for reconsideration of its copyright claim registration. Facebook and Princeton moved the district court to dismiss the copyright case as premature until a final decision was issued by the Copyright Office.
  • In an order from April 14, 2021, Judge Orrick denied Defendants’ motion, holding that the case would continue despite the ongoing reconsideration proceeding at the Copyright Office, and reasoning that the case management schedule could be adjusted to ensure the Copyright Office determination on reconsideration would occur before dispositive motions were due.

UPDATE ON THE EAGLEVIEW LITIGATION - $375M Judgment After Award of Treble Damages

  • In the July 2020 edition of the AI Bulletin, we reported on Eagleview’s $125M jury verdict against competitor Verisk Analytics for infringing its aerial recognition patents following a failed merger between the companies. We noted there that the jury had returned a finding of willful infringement, and that, at the same time Verisk was asking for judgment as a matter of law and a new trial, Eagleview was seeking an award of enhanced damages and attorneys’ fees.
  • On February 16, 2021, after denying Verisk’s motions, the district court granted Eagleview’s motion for enhanced damages, awarding the maximum trebled damages permitted by law, for a total of $375M. The court found against Verisk on all nine factors set out by the Federal Circuit in Read Corp. v. Portec, Inc., 970 F.2d 816 (Fed. Cir. 1992), finding, for example, that Verisk had deliberately copied the patented ideas from Eagleview, that Verisk did not investigate the scope of Eagleview’s patent, that Verisk made litigation needlessly complex to increase Eagleview’s costs, that Verisk’s misconduct had lasted a decade, that Verisk never took remedial actions, and that Verisk attempted to conceal its misconduct.
  • Verisk has filed a notice of appeal to the Federal Circuit.


  • After a six-day trial before Judge Gilstrap in the Eastern District of Texas, Quinn Emanuel won a complete defense victory for client KeyMe in a six-patent case. KeyMe provides more than 4,000 automated key-duplicating machines throughout the United States using innovative AI and cloud-based technologies. KeyMe’s primary competitor, the Hillman Group, alleged that KeyMe infringed six of its utility patents and sought a large running royalty and a permanent injunction. The jury deliberated for less than three hours before returning a verdict that KeyMe did not infringe any of the 18 asserted claims, and further invalidated a majority of those claims. The QE team was led by Sean Pak and Dave Nelson (both Co-Chairs of the firm’s National Intellectual Property Litigation Practice), as well as Eric Huang and Jeff Nardinelli.

Quinn Emanuel Urquhart & Sullivan, LLP is an 800-lawyer business litigation firm – the largest in the world devoted solely to business litigation and arbitration. Our lawyers have tried over 2,300 cases and won 88% of them. When we represent defendants, our trial experience gets us better settlements or defense verdicts. When representing plaintiffs, our lawyers have won over $70 billion in judgments and settlements. We have also obtained five 9-figure jury verdicts, four 10-figure jury verdicts, forty-three 9-figure settlements, and nineteen 10-figure settlements.