eDiscovery for litigation support teams in 2026
Updated 21 April 2026 | Independent reference | Not legal advice
Litigation support in 2026 is a more technically demanding role than it was five years ago. The processing-to-review handoff is no longer primarily a data-wrangling problem; it is increasingly an AI workflow validation problem. Platform coordination, prompt engineering, and statistical sampling are now core skills. This page covers the full toolkit: role scope, service models, the AI-era skillset, and vendor options.
The litigation support role in 2026
The core litigation support scope is still: collection coordination, processing, review project management, quality control, and production. What has changed is the AI layer between processing and review. The lit-support project manager is now responsible for configuring the AI review tool (writing the issue prompt or designing the seed set), monitoring AI output quality during the review cycle, designing and running the statistical validation sampling, and reporting validation results to the supervising attorney.
Typical lit-support teams at AmLaw firms and in enterprise legal departments run 2-10 specialists: a senior project manager (often holding a Relativity Certified Administrator or CEDS certification), processing specialists, and review coordinators. The team structure is largely unchanged; the technical requirements within each role are evolving.
Platform-of-record vs processing specialist vs managed service
| Model | Who uses it | Lit-support role | Key tool |
|---|---|---|---|
| In-house platform-of-record | Large firms, F1000 in-house | Administrator, project manager, QC | Relativity, Everlaw |
| Processing specialist | Boutique lit-support vendors | Processing, ingestion, conversion | Nuix, Relativity Processing, LAW |
| Managed service | Mid-market firms, occasional litigators | Client liaison, data transfer, sign-off | Epiq, Consilio, HaystackID, Lighthouse |
| Hybrid | Large firms with specific matters | Platform management + managed service QC | Relativity + Lighthouse or Consilio |
Last verified April 2026
The AI-era lit-support skillset
- Prompt engineering for document review. Writing effective issue descriptions for LLM relevance scoring is a skill. A vague prompt produces a vague relevance score. Effective prompts specify the issue precisely, reference the relevant legal standard, name known custodians and time periods, and include both inclusive and exclusive criteria. The lit-support specialist is now responsible for drafting prompt candidates and running prompt validation tests against a sample before deploying at scale.
- Output validation and statistical sampling. Elusion testing, precision measurement, and F1 reporting are now expected lit-support deliverables. The supervising attorney signs off on validation results before production; the lit-support team designs and runs the validation. Grossman-Cormack methodology (95% confidence, plus or minus 5% margin, random sample of low-scored documents) is the standard reference. See /predictive-coding-2-0 for the full validation framework.
- Reasoning trace review. Where the platform provides per-document AI reasoning traces (Lighthouse, Nuix Neo, limited Relativity partner API), the lit-support team reviews a sample of traces to verify that the AI's reasoning is consistent with the issue definition and the attorney's coding decisions. This is a new QC step with no direct predecessor in the pre-GenAI workflow.
- Privilege module configuration. Configuring the privilege detection module requires inputting the attorney list, legal hold period, joint-defence parties, and known privilege exceptions (e.g., crime-fraud matters). The lit-support specialist is responsible for keeping the attorney list current and running privilege validation sampling before production.
- Agile TAR project management. The EDRM 2.0 framework includes agile review cycles that overlap with AI training iterations. Lit-support project managers track review velocity, model convergence, and remaining population estimates in real time rather than running a linear review-to-completion workflow. Tools like Relativity's Active Learning Project (ALP) dashboard and Everlaw's review analytics support this.
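The prompt-engineering guidance above (precise issue definition, legal standard, custodians, time period, inclusive and exclusive criteria) can be sketched as a simple prompt assembler. This is a hypothetical illustration of the structure, not any platform's actual prompt API; all field names and example values are placeholders.

```python
def build_issue_prompt(issue, standard, custodians, period, include, exclude):
    """Assemble a structured issue description for LLM relevance scoring.

    Each argument maps to one element of an effective prompt: the issue,
    the governing legal standard, known custodians, the relevant time
    period, and explicit inclusive/exclusive criteria.
    """
    return "\n".join([
        f"Issue: {issue}",
        f"Relevant legal standard: {standard}",
        f"Known custodians: {', '.join(custodians)}",
        f"Time period: {period}",
        "Treat as relevant documents that: " + "; ".join(include),
        "Treat as NOT relevant documents that: " + "; ".join(exclude),
        "Return a relevance score from 0 to 100 and quote the passage relied on.",
    ])

# Placeholder matter details for illustration only.
prompt = build_issue_prompt(
    issue="alleged breach of the supply agreement",
    standard="material breach under the governing MSA",
    custodians=["A. Example (procurement)", "B. Example (logistics)"],
    period="January 2023 through June 2024",
    include=["discuss delivery delays or missed milestones"],
    exclude=["concern routine scheduling with no dispute content"],
)
```

Prompt candidates built this way can be versioned and tested against a validation sample before deployment at scale, as the bullet above recommends.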
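The validation arithmetic behind the 95% confidence, plus-or-minus 5% margin standard cited above can be sketched in a few lines. This is a minimal illustration of the underlying statistics; the Wilson score interval is one common choice for the elusion confidence interval, not something mandated by the methodology, and real matters typically use tighter margins and platform-native validation reports.

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Worst-case (p = 0.5) sample size for estimating a proportion
    at the given confidence level and margin of error."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def elusion_rate(responsive_found, sample_n):
    """Point estimate: responsive documents found in a random sample
    of the low-scored (null) set, divided by the sample size."""
    return responsive_found / sample_n

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion; behaves better than the
    normal approximation when counts are small, as in elusion testing."""
    phat = successes / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

def f1(precision, recall):
    """Harmonic mean of precision and recall, the F1 figure reported
    to the supervising attorney."""
    return 2 * precision * recall / (precision + recall)
```

At 95% confidence and a 5% margin, `sample_size()` gives 385 documents to pull at random from the null set; if 4 of those turn out responsive, the elusion point estimate is about 1%, and `wilson_interval(4, 385)` gives the range to report alongside it.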
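The privilege module inputs listed above (attorney list, legal hold period, joint-defence parties, known exceptions) amount to a configuration payload plus pre-deployment checks. The sketch below is purely illustrative; the field names and validation rules are assumptions, not any vendor's actual configuration schema.

```python
# Hypothetical privilege-module configuration; all names and values are
# placeholders, not a real platform's API.
privilege_config = {
    "attorney_list": ["a.example@firm.example", "b.example@firm.example"],
    "legal_hold_period": {"start": "2023-01-01", "end": "2025-06-30"},
    "joint_defence_parties": ["Co-Defendant Corp"],
    "privilege_exceptions": ["crime-fraud carve-out: Matter 2024-XX"],
}

def validate_privilege_config(cfg):
    """Basic checks a lit-support specialist might run before deploying
    the module: all required fields present, attorney list non-empty."""
    required = {"attorney_list", "legal_hold_period",
                "joint_defence_parties", "privilege_exceptions"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not cfg["attorney_list"]:
        raise ValueError("attorney list is empty; privilege detection cannot run")
    return True
```

Keeping the attorney list current is the recurring duty the bullet above flags: a stale list is the most common cause of privilege validation sampling failures before production.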
Vendors with strong services and platform integration
| Vendor | Strength | Platform | Best for |
|---|---|---|---|
| Lighthouse | AI Hub, reasoning traces, data analytics | Relativity + proprietary AI layer | Complex matters requiring AI auditability |
| HaystackID | Deep Relativity expertise, AI validation | Relativity | AmLaw 200 matters, regulatory investigations |
| Consilio | Global reach, multi-language | Relativity + proprietary tools | Cross-border matters, large MDL |
| Epiq | Large case administration, class actions | Relativity + proprietary | Regulatory, government investigations |
Last verified April 2026
Frequently asked questions
What skills do litigation support teams need for AI eDiscovery?
Four new skills are now essential: prompt engineering (writing effective issue descriptions for LLM scoring), output validation (elusion testing and F1 reporting), reasoning trace review (interpreting per-document AI explanations), and statistical sampling design (Grossman-Cormack methodology). Traditional processing, project management, and QC skills remain foundational.
What certification should a litigation support specialist get in 2026?
The ACEDS (Association of Certified eDiscovery Specialists) CEDS certification covers the full EDRM, including technology-assisted review. Relativity Certified Administrator (RCA) and Relativity Certified Professional (RCP) are the leading platform-specific certifications. For AI-specific skills, ACEDS has published GenAI eDiscovery continuing education content.
How does AI change the processing-to-review handoff?
Historically, the handoff was: process data, load to review platform, attorney begins coding. With AI review, the handoff now includes: configure the AI review module (issue prompt or seed set), run a validation pass, review AI output quality, report validation metrics to the supervising attorney, then begin attorney review of AI-prioritised documents. The lit-support team is now responsible for the AI configuration and initial validation.