Adapting the Rules of Evidence for the Age of AI

November 06, 2025
Firm Memoranda

The rapid spread of artificial intelligence has created challenges for courts tasked with assessing the authenticity and reliability of digital evidence. AI-generated deepfakes that imitate authentic images, videos, and recordings pose a threat to the integrity of legal fact-finding. The proliferation of deepfakes increases the likelihood that falsified evidence will enter the record and undermines confidence in the reliability of digital evidence.

Recognizing this, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules has taken steps to adapt the Federal Rules of Evidence. Two key proposals emerged. One would have amended Rule 901 to create a specialized authentication process for potential deepfakes. The other, which the Advisory Committee preferred, would create a new rule of evidence, Rule 707, designed to govern machine-generated evidence by applying expert-witness standards of reliability. The development of Rule 707, along with parallel state initiatives governing the use of AI evidence in the courtroom, marks a significant step toward adapting the rules of evidence to the reality of AI.

I. Current Legal Landscape

Courts have only recently begun to confront the challenges posed by the introduction of AI-generated evidence, and the few reported decisions reveal uncertainty about how traditional evidentiary rules apply. The result is a perilous situation: judges lack clear guidance on how to evaluate allegations of falsification, yet leaving that determination to the jury risks exposing jurors to manipulated and misleading evidence.

State v. Rittenhouse (Wis. Cir. Ct. 2021) highlights the need for a coherent framework to enable courts to assess whether evidence is AI-generated. During its cross-examination of Kyle Rittenhouse, the prosecution sought to introduce a zoomed-in drone video that it claimed showed the defendant raising his rifle.[1] The defense objected on the ground that Apple’s “pinch-to-zoom” function relied on an algorithm that might generate new pixels and thereby alter, rather than merely enlarge, the original image.[2] The judge expressed uncertainty about how the algorithm worked, stating that he knew “less than anyone in the room about all of this stuff.”[3] The prosecution protested that zooming in on a video does not change the underlying pixels.[4] Nevertheless, the skeptical judge withheld admission of the enhanced video and directed the prosecution to provide expert testimony “within minutes,” which the prosecution was unable to do.[5]
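The technical question underlying the Rittenhouse dispute is interpolation: when software enlarges an image beyond its native resolution, it must compute values for pixels that did not exist in the original. The short Python sketch below is purely illustrative (it uses generic bilinear interpolation from the Pillow library, not Apple’s proprietary pinch-to-zoom implementation) and shows that a standard upscale produces pixel values found nowhere in the source image:

```python
import numpy as np
from PIL import Image

# A tiny 2x2 grayscale "image" with four distinct pixel values.
original = np.array([[0, 100],
                     [200, 255]], dtype=np.uint8)

# Enlarge to 4x4 with bilinear interpolation, a standard technique for
# smooth digital zoom. (Apple's actual algorithm is proprietary; this
# is only a generic illustration of the principle in dispute.)
zoomed = np.array(Image.fromarray(original).resize((4, 4), Image.BILINEAR))

print(zoomed)
# The 4x4 output contains intermediate values that appear nowhere in
# the 2x2 original: the software has computed new pixels rather than
# merely enlarging the existing ones.
```

Whether such interpolated pixels “alter” an image in any legally meaningful sense was precisely the question the Rittenhouse court felt unequipped to answer without expert testimony.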

At the opposite end, State v. Puloka (Wash. Super. Ct. 2024) shows how courts may impose an unduly onerous burden of proof on AI-generated evidence by applying multiple evidentiary standards at once. The Superior Court for King County considered whether to admit an AI-enhanced version of a bystander’s cellphone video that, the defense disclosed, had been processed using Topaz Video Enhance AI to clarify footage of a nightclub shooting.[6] The court analyzed the evidence under Frye, FRE 702, FRE 401, and FRE 403, requiring proof that the enhancement method was generally accepted within the forensic-video community, that the evidence was relevant and reliable, and that its probative value was not substantially outweighed by the danger of unfair prejudice.[7] The defense was unable to show that the enhancement method was accepted in the forensic-video community or that the enhancement tool was reliable.[8] The court found that the AI-enhanced video was not relevant because it used “opaque methods to represent what the AI model ‘thinks’ should be shown,” rather than showing what actually happened.[9] It also found that the risk of prejudice outweighed the probative value because the enhanced video’s depiction of the events in question would capture the jury’s attention.[10] Accordingly, the court excluded the video.[11]

As Rittenhouse and Puloka illustrate, without a uniform standard governing the use of AI-generated evidence, and given the variety of evidentiary standards that may apply, courts are left to improvise on an ad hoc basis, creating a risk of inconsistent outcomes.

II. Proposed Amendment to Rule 901

The Advisory Committee on Evidence Rules has sought to address this uncertainty by considering proposals to amend the Federal Rules of Evidence. At its November 8, 2024 meeting, the Advisory Committee considered a proposal by former U.S. District Judge Paul Grimm and Dr. Maura R. Grossman of the University of Waterloo to amend Rule 901. Rule 901(a) provides: “To satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.” Rule 901(b) provides examples of how that finding can be made for ten types of evidence. Rule 901 was viewed as ripe for amendment given the growing risk that it is inadequate to enable judges to differentiate between authentic and fabricated evidence.[12] Artificial intelligence’s ability to generate images, videos, and recordings that mimic genuine materials undermines Rule 901’s basic assumption that a document, photograph, or recording carries observable indicators of authenticity.

The proposed Rule 901(c) would establish a burden-shifting procedure to address claims that audio or visual evidence has been generated or altered through the use of artificial intelligence, empowering courts to screen out deepfakes before they reach a jury. It reads as follows: “If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, by artificial intelligence [by an automated system], the evidence is admissible only if the proponent demonstrates to the court that it is more likely than not authentic.”[13]

Under the proposal, a party challenging authenticity bears the initial burden of presenting sufficient facts for the judge to find that a reasonable jury could conclude the material has been altered or is fake.[14] The judge need not determine that the evidence actually has been altered or is fake, only that a reasonable jury could so find.[15] If an objecting party makes this threshold showing, the burden shifts to the proponent of the evidence to demonstrate that it is more likely than not authentic.[16] The proponent may satisfy this requirement by offering corroborating information or by supplementing authentication through any of the methods recognized under Rules 901(b) or 902—such as metadata, chain-of-custody documentation, expert testimony—or any other way of showing the evidence is what it purports to be.[17] If the proponent cannot make this showing, the court must exclude the evidence under Rule 104(a).[18] 
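To make the sequence concrete, the burden-shifting structure of the proposal can be modeled as a simple two-step decision function. The sketch below is schematic only; the function and parameter names are hypothetical and do not describe any court’s actual process:

```python
def admissible_under_proposed_901c(
    reasonable_jury_could_find_ai_fabrication: bool,
    proponent_shows_more_likely_than_not_authentic: bool,
) -> bool:
    """Schematic model of the proposed Rule 901(c) burden-shifting
    procedure (hypothetical names; illustrative only)."""
    # Step 1: the challenger must make the threshold showing. If it
    # fails, the special procedure is never triggered, and ordinary
    # Rule 901(a) authentication governs (treated here as passing
    # this particular screen).
    if not reasonable_jury_could_find_ai_fabrication:
        return True
    # Step 2: the burden shifts to the proponent. If the court is not
    # persuaded by a preponderance that the item is authentic, it must
    # exclude the evidence under Rule 104(a).
    return proponent_shows_more_likely_than_not_authentic
```

Note the asymmetry the proposal builds in: the challenger need only satisfy the jury-could-find standard, while the proponent must persuade the court itself by a preponderance.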

Grimm and Grossman envision that these determinations would be handled before trial.[19] Potential authenticity disputes would be flagged early through Rule 26(f) party conferences and Rule 16 scheduling conferences, allowing discovery, expert exchanges, and motion practice directed at developing a factual record on authenticity.[20] Once the threshold challenge is raised, the judge would hold an evidentiary hearing at which each side would present corroborating materials or expert testimony regarding the audio or visual evidence, as well as Daubert motions directed against each other’s experts.[21] After deciding any Daubert motions and weighing the corroborating evidence, the judge would decide whether the evidence is more likely than not authentic and, on that basis, send the evidence to the jury or exclude it.[22]

The Advisory Committee also considered Grimm and Grossman’s proposal to amend Rule 901(b), which concerns “process or system” evidence, to specify procedures for machine-generated material. The proposed amendment would have replaced the current requirement that evidence concerning a process or system be shown to be “accurate” with a new requirement that the proponent demonstrate the process or system produces a “valid and reliable result.”[23] If the proponent acknowledged that the item was generated by AI, the proponent would be required to describe the “training data and software or program that was used” and show that they “produced valid and reliable results in this instance.”[24]

The Advisory Committee ultimately declined to amend Rule 901. Several members observed that importing reliability standards into authentication under Rule 901 would be “mixing apples and oranges” and that reliability concerns should be addressed through a separate rule.[25]  Others noted the limited number of real-world cases involving AI-generated evidence so far and observed that courts have historically adapted to technological change without new rules.[26] The Advisory Committee opted for a “wait-and-see” approach, preserving the flexibility of Rule 901’s authentication framework while tabling further consideration of a Rule 901 amendment until, in essence, the rise of deepfakes warrants further rulemaking.[27]

III. The Development of Rule 707

Instead of amending Rule 901, the Advisory Committee developed a new rule of evidence, Rule 707 (“Machine-Generated Evidence”). Rule 707 is designed to address concerns about the reliability of computer technologies that generate predictions or inferences from data.[28] Under Rule 707, AI and other machine-generated evidence offered at trial without an expert witness would be subjected to the same reliability standards as expert witnesses. Such evidence could be admitted only if it: (1) assists the trier of fact, (2) is based on sufficient facts or data, (3) is the product of reliable principles and methods, and (4) reflects a reliable application of the principles and methods to the facts.[29] Rule 707 expressly “does not apply to the output of basic scientific instruments,” such as a digital thermometer.[30] 
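Because the four conditions are conjunctive, machine-generated output would fail the screen if any one of them is unmet. The following minimal sketch illustrates that structure; the field names are hypothetical, and the code is not a statement of how any court would apply the rule:

```python
from dataclasses import dataclass

@dataclass
class MachineOutput:
    """Hypothetical reliability findings for an item of machine-generated evidence."""
    assists_trier_of_fact: bool        # (1) helpful to the trier of fact
    sufficient_facts_or_data: bool     # (2) based on sufficient facts or data
    reliable_principles_methods: bool  # (3) product of reliable principles and methods
    reliably_applied_to_facts: bool    # (4) principles and methods reliably applied
    basic_scientific_instrument: bool  # e.g., a digital thermometer reading

def passes_proposed_rule_707(output: MachineOutput) -> bool:
    """Conjunctive screen modeled on proposed Rule 707 (illustrative
    only; other rules of evidence would still apply independently)."""
    if output.basic_scientific_instrument:
        # Rule 707 expressly does not apply to the output of basic
        # scientific instruments, so this screen is inapplicable.
        return True
    return (output.assists_trier_of_fact
            and output.sufficient_facts_or_data
            and output.reliable_principles_methods
            and output.reliably_applied_to_facts)
```

The all-or-nothing structure mirrors Rule 702: a single unreliable link, whether in method or in application, defeats admissibility.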

The application of the Rule 702 standard to AI-generated evidence reflects the Advisory Committee’s concern that parties not be able to “evade the reliability requirements of Rule 702 by offering machine output directly” into evidence.[31] Importantly, Rule 707 would provide a mechanism for an opposing party to scrutinize the reliability of AI-generated evidence by assessing how the system that produced it operated and applied its methods to the facts.[32] Judges would examine whether the data and other inputs used by an AI system are adequate to ensure that its results are valid—specifically, whether the training data are representative enough to produce accurate outcomes for the population relevant to the case.[33] Rule 707 also directs courts to assess the extent to which both the opposing party and independent researchers have been allowed meaningful access to the system, so that its performance can be tested through genuine adversarial scrutiny and independent peer review rather than relying solely on validation studies provided by the developer.[34]
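To give a flavor of what “representative enough” might mean in practice, the sketch below shows one kind of validation check an expert could run: comparing a model’s accuracy on its overall test data against its accuracy on the subpopulation relevant to the case. All names and data here are hypothetical; this is one possible diagnostic, not a method prescribed by the rule:

```python
import numpy as np

def accuracy(predictions: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of predictions matching the ground-truth labels."""
    return float(np.mean(predictions == labels))

def subgroup_gap(predictions, labels, subgroup_mask) -> float:
    """Overall accuracy minus accuracy on the case-relevant subgroup
    (e.g., particular lighting conditions or demographics)."""
    return accuracy(predictions, labels) - accuracy(
        predictions[subgroup_mask], labels[subgroup_mask])

# Synthetic demonstration: a system that performs well overall but
# poorly on an underrepresented subgroup.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
predictions = labels.copy()
subgroup = rng.random(1000) < 0.1            # ~10% of cases
predictions[subgroup] = rng.integers(0, 2, size=subgroup.sum())

print(f"overall accuracy: {accuracy(predictions, labels):.1%}")
print(f"subgroup gap:     {subgroup_gap(predictions, labels, subgroup):.1%}")
# A large gap suggests the training data were not representative of
# the relevant population, undercutting the "sufficient facts or data"
# and "reliable application" prongs.
```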

In May 2025, the Advisory Committee voted 8–1 in favor of seeking public comment.[35] In August 2025, the Committee on Rules of Practice and Procedure of the Judicial Conference of the United States released Rule 707 for public comment until February 16, 2026.[36] Critics caution that Rule 707 applies only to evidence that the proponent acknowledges was created by AI, not to evidence whose authenticity is in dispute; it therefore does little to help courts detect deepfakes or other falsified evidence.[37] Nevertheless, Rule 707 marks an important first step at the federal level toward adapting the rules of evidence to the increasing use of AI-generated materials in court.

If Rule 707 is adopted, and parties and their lawyers in high-stakes cases behave as they have to date, we can expect substantial additional pretrial proceedings and expense:

  • Discovery will be permitted to test the rule’s four elements of admissibility listed above.
  • Disputes will inevitably arise over the extent to which the proponent has disclosed sufficient information about the AI tool to enable the opponent to assess those four elements.
  • Confidentiality and trade secret issues about the AI tool will have to be resolved.
  • Opposing parties may feel compelled to engage experts to dispute those requirements. In limine motions will be filed to exclude the evidence. 
  • Some judges may require the proponent to make a proffer to support the admission of the evidence well before the trial date so that the jury will not have to wait outside the courtroom while the matters are resolved. The term “Rule 707 hearing” may be coined, as was the term “Markman hearing.”   
  • One cannot assume that everyone will play by the rules or have good motives. As a threshold issue, the scope of Rule 707’s exclusion for the outputs of “basic scientific instruments” will be tested.  Some parties might challenge computer-assisted but non-AI-generated evidence on the grounds that it is subject to Rule 707.    

None of this is to say that proposed Rule 707 is a bad idea or ill-suited to its purpose. But we should expect these consequences and be prepared to refine the rule now or through the caselaw.  Over time, judicial experience and possible refinements to Rule 707’s language or advisory notes will provide greater certainty regarding when the rule applies and what proof is required to demonstrate reliability, much as courts gradually developed structured Daubert-style reliability frameworks. But in the near term, the rule will likely increase the cost and complexity of introducing AI-generated evidence.

IV. State Initiatives

While the federal rulemaking process continues for Rule 707, several states have taken the lead in developing procedures to address the evidentiary challenges posed by artificial intelligence.  Louisiana, New York, and California have advanced the most concrete proposals to date.

On August 1, 2025, Louisiana became the first state to establish a framework for addressing AI-generated evidence.[38] Louisiana’s Act No. 250 amends Louisiana Code of Civil Procedure Article 371 to require attorneys to “exercise reasonable diligence to verify the authenticity of evidence before offering it to the court.”[39] The Act provides that if an attorney “knew or should have known through the exercise of reasonable diligence that evidence was false or artificially manipulated, the offering of that evidence without disclosure of that fact” shall subject the attorney to punishment for contempt of court and potentially other disciplinary action.[40] The Act further requires parties to disclose whether any of their evidence has been “generated by artificial intelligence or altered by any means.”[41] Finally, the Act requires parties to raise at a pretrial conference or hearing any suspicion that an exhibit was generated by artificial intelligence.[42]

In New York, Assembly Bill A1338 proposes to regulate the admissibility of evidence created or processed by artificial intelligence by amending both the Criminal Procedure Law (CPL § 60.80) and the Civil Practice Law & Rules (CPLR § 4552). The bill conditions the admission of AI-generated or AI-processed evidence in a criminal or civil proceeding on two requirements: (1) that the evidence be substantially supported by independent and admissible evidence, and (2) that the proponent establish the reliability and accuracy of the specific use of AI in creating or processing the evidence.[43] Under the bill, reliability and accuracy are established when a qualified expert testifies that the system has been rigorously tested, has been shown to produce consistent and reliable results in comparable settings, and has not been exposed to variables likely to cause material inaccuracy; in making that assessment, the court must also consider the weight of the AI-generated evidence relative to the other evidence in the case.[44]

If passed, California’s Senate Bill 11 would direct the California Judicial Council to review, by January 1, 2027, how artificial intelligence affects the admissibility of evidence and to develop any rules of court needed to assist courts in handling claims that evidence was generated or manipulated by AI.[45]

As artificial intelligence transforms the creation and manipulation of digital content, the rules of evidence stand at an inflection point.  Rule 707 and emerging state initiatives mark only the first steps in addressing these challenges.  In the years ahead, policymakers will need to determine if—and if so, how—the existing evidentiary rules should be modified to ensure the authenticity and reliability of AI-generated materials in the courtroom.

***

If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to reach out to:

John B. Quinn
Email: johnquinn@quinnemanuel.com
Phone: +1 213-443-3200

To view more memoranda, please visit www.quinnemanuel.com/the-firm/publications/

To update information or unsubscribe, please email updates@quinnemanuel.com

 

END NOTES

[1]   Jon Brodkin, Rittenhouse Trial Judge Disallows iPad Pinch-to-Zoom: Read the Bizarre Transcript, Ars Technica (Nov. 10, 2021), https://arstechnica.com/tech-policy/2021/11/rittenhouse-trial-judge-disallows-ipad-pinch-to-zoom-read-the-bizarre-transcript/.

[2]   Id.

[3]   Id.

[4]   Id.

[5]   Id.

[6]   State v. Puloka, No. 21-1-04851-2-KNT (Wash. Super. Ct. King Cnty. Mar. 29, 2024) at 2.

[7]   Id.

[8]   Id. at 4-5.

[9]   Id. at 5-6.

[10]   Id. at 6.

[11]   Id.

[12]   Advisory Comm. on Evidence Rules, Agenda Book 23 (Nov. 8, 2024), https://www.uscourts.gov/sites/default/files/2024-11_evidence_rules_committee_meeting_agenda_book_final_10-24.pdf (“Agenda Book”).

[13]   Agenda Book at 31.

[14]   Id. at 244. 

[15]   Id.

[16]   Id. at 250.

[17]   Id.

[18]   Paul W. Grimm (Ret.), Maura R. Grossman, Daniel W. Linna Jr., Abhishek Dalal, Chongyang Gao, Chiara Pulice, V.S. Subrahmanian & Hon. John Tunheim, Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases (Aug. 8, 2024) at 49.

[19]   Id. at 51.

[20]   Id. at 34.

[21]   Id. at 38-39.

[22]   Id. at 49.

[23]   Agenda Book at 240.

[24]   Id.

[25]   Id. at 69.

[26]   Id. at 70.

[27]   Advisory Comm. on Evidence Rules, Agenda Book 77 (May 2, 2025), https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf.

[28]   U.S. Judicial Panel Advances Proposal to Regulate AI-Generated Evidence, Reuters (Oct. 24, 2024), https://www.reuters.com/legal/government/us-judicial-panel-advances-proposal-to-regulate-ai-generated-evidence-2024-10-24/.

[29]   Proposed Fed. R. Evid. 707 on Artificial Intelligence–Generated Evidence, Nat’l L. Rev. 3 (August 21, 2025), https://www.natlawreview.com/article/proposed-federal-rule-evidence-707-artificial-intelligence-generated-evidence.

[30]   Advisory Comm. on Evidence Rules, Agenda Book 199 (May 2, 2025), https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf.

[31]   Agenda Book at 35.

[32]   Proposed Fed. R. Evid. 707 on Artificial Intelligence–Generated Evidence, Nat’l L. Rev. 3 (August 21, 2025), https://www.natlawreview.com/article/proposed-federal-rule-evidence-707-artificial-intelligence-generated-evidence.

[33]   Agenda Book at 35.

[34]   Id.

[35]   Proposed Fed. R. Evid. 707 on Artificial Intelligence–Generated Evidence, Nat’l L. Rev. 3 (August 21, 2025), https://www.natlawreview.com/article/proposed-federal-rule-evidence-707-artificial-intelligence-generated-evidence.

[36]   Id.

[37]   New AI Evidence Rule Is a Good Start, But More Is Needed, Law360 (August 27, 2025), https://www.law360.com/articles/new-ai-evidence-rule-is-a-good-start-but-more-is-needed.

[38]   New Louisiana Law Sets Rules On AI-Generated Evidence, Law360 (June 26, 2025), https://www.law360.com/pulse/articles/2356896.

[39]   La. Act No. 250, Reg. Sess. (2025), Legis. Bill Doc. No. 1425558, available at https://www.legis.la.gov/Legis/ViewDocument.aspx?d=1425558.

[40]   Id.

[41]   Id.

[42]   Id.

[43]   A.B. 1338, 2025–2026 Leg., Reg. Sess. (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A1338.

[44]   Id.

[45]   Legislative Counsel’s Digest, S.B. 11, 2025–2026 Leg., Reg. Sess. (Cal. 2025), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB11.