Independent editorial resource. Not affiliated with any vendor. Not legal advice. Pricing verified April 2026 from public sources; confirm with vendor.
agenticediscovery.com

AI eDiscovery FAQ (April 2026)

Updated 21 April 2026 | 30+ questions answered | Not legal advice

Every common question about AI and agentic eDiscovery in 2026, answered clearly, with links to the deeper reference on each topic. This page carries FAQ schema markup so search engines can surface it in AI Overviews and rich results.

Definition and taxonomy

What is AI eDiscovery?

AI eDiscovery refers to the use of artificial intelligence to identify, process, and review electronically stored information (ESI) in civil litigation. In 2026, most AI eDiscovery is Technology Assisted Review (TAR 2.0 / Continuous Active Learning), often with an LLM relevance scorer layered on top. Genuinely agentic eDiscovery -- where LLM agents reason across issues, custodians, and timelines -- is real but remains narrow. Read more →

What is agentic eDiscovery?

Agentic eDiscovery describes LLM agent workflows that perform multi-step reasoning across a document corpus: retrieving related documents, scoring relevance and privilege in chained steps, identifying cross-custodian patterns, and producing per-document reasoning traces. It is the frontier tier in 2026, found in select platform features at Lighthouse, Nuix Neo, and parts of Relativity aiR for Case Strategy. Read more →

Is predictive coding the same as TAR?

TAR (Technology Assisted Review) is the umbrella term. Predictive coding originally referred to TAR 1.0 (fixed seed set, one training run). Today, vendors and courts often use 'predictive coding' to mean TAR 2.0 (Continuous Active Learning), which continuously retrains as reviewers code documents. They are distinct methods under the same umbrella. GenAI review is a further evolution. Read more →

Is predictive coding the same as CAL?

No. CAL (Continuous Active Learning) is TAR 2.0. Classic predictive coding is TAR 1.0, which uses a fixed seed set and trains once. CAL is more accurate on large and rolling productions because it continuously retrains. When vendors say 'predictive coding' today, they almost always mean CAL. Read more →

What is the difference between TAR 1.0 and TAR 2.0?

TAR 1.0 trains a classifier on a fixed seed set coded by attorneys, then applies that classifier to the full corpus. It trains once. TAR 2.0 (CAL) continuously retrains the classifier as reviewers code documents during active review, achieving higher recall and handling rolling productions more effectively. TAR 2.0 is the current industry default. Read more →

What does 'AI' actually do in document review?

In most 2026 platforms, AI does three things: relevance scoring (assigning each document a probability of being responsive to the issue), privilege detection (flagging documents that may be attorney-client privileged or work product), and clustering (grouping documents by conceptual similarity). These functions assist reviewers; they do not replace attorney judgment on final coding and production decisions. Read more →
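As a concrete picture of those three functions, the AI outputs can be thought of as per-document signals attached alongside the reviewer's own coding. This is an illustrative sketch only; the field names and values are invented, not any platform's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewSignals:
    """Per-document AI outputs surfaced to a human reviewer (illustrative only)."""
    doc_id: str
    relevance_score: float     # probability the document is responsive, 0.0-1.0
    privilege_flag: bool       # flagged as possibly privileged / work product
    cluster_id: Optional[int]  # conceptual-similarity group, if clustered

# The AI supplies the signals; the attorney makes the final coding call.
doc = ReviewSignals("DOC-000142", relevance_score=0.87,
                    privilege_flag=True, cluster_id=12)
```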

Platforms and pricing

How much does Relativity cost in 2026?

RelativityOne is approximately $11-13 per GB per month as of April 2026, with aiR for Review and aiR for Privilege included at no added charge following the October 2025 pricing reset. Hosting, processing, and per-user licences are separate. Expect a 5-15% base subscription increase to absorb the GenAI inclusion. Read more →

Is Everlaw cheaper than Relativity?

It depends on the matter. Everlaw's per-user pricing (approximately $3,000-5,000 per user per year plus data hosting) can be cheaper for small to medium firms with active matters and limited data volume. Relativity's per-GB model is often cheaper for large data volumes at low user counts. The crossover is typically around 20-30 active reviewers on a 10+ TB matter. Read more →
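Because the crossover depends on both seat count and data volume, it is worth modelling for your own matter. A minimal sketch follows; every rate in it is an assumption chosen for illustration, not a vendor quote, and the function names are hypothetical.

```python
# All rates below are illustrative assumptions, not vendor quotes.

def per_user_model(users: int, gb: float,
                   user_rate_year: float = 4000.0,
                   hosting_gb_month: float = 10.0) -> float:
    """Annual cost where seats are the main driver and hosting is billed per GB."""
    return users * user_rate_year + gb * hosting_gb_month * 12

def per_gb_model(users: int, gb: float,
                 gb_rate_month: float = 12.0,
                 licence_month: float = 100.0) -> float:
    """Annual cost where data volume is the main driver and seats are licensed separately."""
    return gb * gb_rate_month * 12 + users * licence_month * 12

# Vary users and gb to find the crossover for a given matter, e.g.:
# per_user_model(25, 10_240) vs per_gb_model(25, 10_240)
```

Under different assumed rates the crossover point moves substantially, which is why the 20-30 reviewer / 10+ TB figure is only a rule of thumb.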

What is the Relativity aiR pricing change?

At Relativity Fest (October 2025), Relativity announced that aiR for Review and aiR for Privilege would be included in RelativityOne subscriptions at no additional charge from early 2026. Previously these features cost approximately $2-3/GB/month (aiR for Review) and $1.50-2/GB/month (aiR for Privilege) as add-ons. Read more →

What did Everlaw change in 2026?

In early 2026, Everlaw matched Relativity's pricing move by making core EverlawAI Assistant features (Single Document Review, Writing Assistant) free for existing subscribers. Previously these were usage-based add-ons. The per-user base subscription pricing remained unchanged. Read more →

What is DISCO Cecilia?

DISCO Cecilia is DISCO's AI layer, focused on narrative intelligence and timeline reconstruction across the document corpus. It combines AI-assisted relevance scoring with chronological analysis of custodian communications. Cecilia launched in 2023 and was significantly updated in 2024. Read more →

Which eDiscovery platform is best for small firms?

For single matters under 500 GB, Logikcull's flat-fee model ($1,500-3,500 per matter) is typically the most cost-effective. For firms with recurring litigation work, Everlaw's per-user model at the entry tier ($250/month base) is competitive. Relativity is generally not cost-effective for solo or small firm use. Read more →

What is per-matter flat-fee eDiscovery pricing?

Per-matter flat-fee pricing charges a fixed amount per litigation matter regardless of data volume or review duration (up to platform limits). Logikcull pioneered this model. It provides budget certainty for small and mid-size matters; it becomes expensive per GB for very large productions. Read more →
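The budget-certainty trade-off is easy to see as arithmetic: the effective per-GB cost of a flat fee falls as the matter fills out its data allowance. A tiny sketch, using an assumed fee for illustration:

```python
def effective_per_gb(flat_fee: float, gb: float) -> float:
    """Effective per-GB cost of a flat-fee matter (inputs are assumptions)."""
    return flat_fee / gb

# An assumed $2,500 flat fee works out to $50/GB on a 50 GB matter but
# only $6.25/GB on a 400 GB matter, so the model rewards fuller use of
# the platform's data cap.
low_volume  = effective_per_gb(2500, 50)    # 50.0
high_volume = effective_per_gb(2500, 400)   # 6.25
```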

Workflow and accuracy

Is AI document review defensible in court?

Yes, subject to validation and transparency. Judge Andrew Peck approved TAR in Da Silva Moore (2012) and reinforced the framework in Rio Tinto (2015). EEOC v. Tesla (2024-2025) accepted GenAI review under the same framework. The key requirements are: documented process, statistical sampling validation, stipulated protocol, and Rule 502(d) protection for privilege review. Read more →

Can AI replace human document review?

No. AI prioritises and accelerates review. All major platforms require human-in-the-loop validation per ABA Formal Opinion 512, FRCP 26(g) signature requirements, and Sedona Principle 6. The productivity gain is 40-80% on relevance coding; attorney sign-off on final determinations is still required. Read more →

How accurate is AI privilege review?

Published benchmarks range from 85-97% on first-pass privilege detection, with significant variation by document type. AI is most accurate on direct attorney-client emails (90-97%) and least accurate on forwarded email chains with partial quotes (75-85%) and mixed business/legal advice documents. Validation sampling is essential before any privilege review production. Read more →

How do I validate TAR or AI review results?

The standard validation approach is elusion testing: a random sample of documents predicted non-responsive is reviewed by attorneys to measure the elusion rate (the proportion of actually responsive documents in the non-responsive bin). The Grossman-Cormack standard targets 95% confidence with a ±5% margin of error. Specific targets should be stipulated in the review protocol. Read more →
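The sample-size arithmetic behind the 95% confidence and 5% margin target can be sketched with the standard normal approximation for estimating a proportion. Function names here are illustrative, not any platform's API:

```python
import math

def sample_size(z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Sample size for estimating a proportion at a given confidence and margin.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the conservative
    (worst-case) variance assumption.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

def elusion_rate(responsive_found: int, sample: int) -> float:
    """Point estimate of the elusion rate from an attorney-reviewed sample."""
    return responsive_found / sample

# 95% confidence with a 5% margin -> a 385-document random sample from
# the predicted-non-responsive bin.
n = sample_size()          # 385
rate = elusion_rate(7, n)  # 7 responsive docs found -> roughly 1.8% elusion
```

Whether a given elusion rate is acceptable is a question for the stipulated protocol, not the arithmetic.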

How do I know if my eDiscovery platform uses real AI or just marketing?

Ask four questions: (1) Is the relevance scorer a classical classifier or an LLM? (2) Is the LLM hosted in a tenant-isolated environment? (3) Is it a general-purpose or legal-tuned model? (4) Can you export per-document reasoning traces? See the full procurement checklist at /platforms-compared. Read more →

What is early case assessment (ECA)?

Early case assessment is the use of analytics and AI to evaluate the scope, cost, and risk of a matter before committing to full review. ECA typically involves data volume analysis, custodian identification, conceptual clustering, and initial relevance sampling. Modern platforms (Everlaw, DISCO, Relativity) include ECA analytics as standard features. Read more →

Case law and regulation

What did Da Silva Moore hold?

Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012) was the first U.S. judicial approval of TAR. Judge Peck held that predictive coding is acceptable and may be more accurate than exhaustive manual review. Process transparency was established as the defensibility standard. Read more →

What did Rio Tinto v. Vale hold?

Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. 2015) endorsed TAR/CAL without requiring seed-set disclosure, provided process documentation was adequate. It reinforced that courts follow Sedona Principle 6 in evaluating the producing party's methodology choice. Read more →

What is the Sedona Conference?

The Sedona Conference is a nonprofit legal policy research and education organization. Its publications on eDiscovery, particularly the Cooperation Proclamation and Principles for Electronic Document Production, are frequently cited by courts as persuasive authority in complex commercial litigation discovery disputes. Read more →

What is Sedona Conference Principle 6?

Sedona Conference Principle 6 states: 'Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.' Courts cite this principle to uphold the producing party's methodology choice -- including TAR and GenAI review -- when the process is documented and validated. Read more →

What is FRCP 26(g)?

Federal Rule of Civil Procedure 26(g) requires the attorney signing a discovery response to certify that after a reasonable inquiry, it is complete and correct. When AI assists in the review, this certification requires the attorney to understand the AI methodology, review validation results, and confirm that the AI-assisted result meets the reasonable inquiry standard. Read more →

What is Rule 502(d)?

Federal Rule of Evidence 502(d) allows a court to order that inadvertent production of privileged material does not constitute waiver in the pending case or in any other federal or state proceeding. It is the essential backstop for AI-assisted privilege review where first-pass accuracy is 85-97%. Read more →

What is FRCP 37(e)?

FRCP 37(e) governs sanctions for failure to preserve ESI. A party that fails to take reasonable steps to preserve ESI and cannot restore or replace it faces curative measures (upon a finding of prejudice) or adverse-inference instructions (upon a finding of intent to deprive). AI legal hold systems must be documented under 37(e) preservation standards. Read more →

What happened with EEOC v. Tesla and AI document review?

EEOC v. Tesla (N.D. Cal. 2024-2025) is the first public-record U.S. case involving GenAI-assisted document review. The court accepted the methodology subject to validation requirements -- no new legal category was created. Courts apply the existing TAR defensibility framework to GenAI review. Read more →

Ethics and confidentiality

What is ABA Formal Opinion 512?

ABA Formal Opinion 512, issued 29 July 2024, addresses lawyers' ethical duties when using generative AI. Key holdings: Rule 1.1 competence requires understanding AI capabilities; Rule 1.6 confidentiality requires verifying vendor zero-retention and tenant isolation; Rule 3.3 candor applies to AI-generated work product; Rule 5.3 supervision extends to AI tools. Read more →

Do I need to disclose AI use to my client?

Under ABA 512 and most state bar guidance, disclosure is not automatically required but is often prudent. Florida Bar Opinion 24-1 requires disclosure and informed consent for GenAI involving confidential information. California recommends disclosure. Most states recommend rather than mandate. Check your state bar's current guidance. Read more →

Is it ethical to send client documents to a GenAI vendor?

Generally yes, with appropriate due diligence: zero-retention terms, tenant isolation, SOC 2 Type II certification, no model-training on client data, and GDPR Article 28 processor agreement for EU data. California, Florida, DC, and New York bar guidance aligns on these requirements. Read more →

What is zero-retention in a vendor AI contract?

Zero-retention is a contractual commitment by the AI vendor not to retain prompts, document content, or AI outputs beyond the immediate API transaction. It is essential under Rule 1.6 and most state bar guidance. Ask for the specific contractual clause, not just the privacy policy description. Read more →

What does 'tenant isolation' mean for eDiscovery AI?

Tenant isolation means that each client's data and LLM processing occurs in a logically or physically separate environment, preventing cross-contamination of documents, prompts, or outputs between clients. Required under ABA 512's confidentiality duty and GDPR Article 28 for EU data. Read more →

What happened at Relativity Fest 2025?

At Relativity Fest in October 2025, Relativity announced that aiR for Review and aiR for Privilege would be included in RelativityOne subscriptions at no additional charge from early 2026. This pricing reset changed the competitive landscape, prompting Everlaw to match by making EverlawAI Assistant features free for subscribers. Read more →

Disclaimer

This site is an independent editorial resource. Nothing here is legal advice. Pricing, capability, and case-law information is summarised from public sources. Verify directly with counsel and each vendor before making procurement or litigation decisions. Case-law citations accurate as of April 2026.

Updated 2026-04-27