Guest Blog by Joe Garber, Vice President of Marketing at RenewData
There’s no question that Technology-Assisted Review is a hot topic in eDiscovery circles right now. A quick Google search certainly confirms that premise, and reinforces that organizations are increasingly looking for defensible, cost-saving measures to apply to the most expensive aspect of eDiscovery. However, what’s equally evident is that there isn’t a commonly accepted understanding of what this term actually means and, as a result, the need for market education is clear and widespread. Over the summer, my team and I have traveled from city to city across the U.S. to discuss these important issues with industry peers. During these highly interactive sessions, we routinely find ourselves addressing a handful of questions. I have identified four of the most frequent questions we are asked, as well as the “consensus conclusion” achieved among these groups.
What is Technology-Assisted Review and Why Should I Care?
Consensus conclusion: Technology-Assisted Review was born out of organizations’ desire to control cost in the portion of eDiscovery (review) that generally accounts for roughly 75% of their total spend. Unlike traditional linear review, which is highly manual, it involves the interplay of humans and computers – often overlaying a variety of technological approaches such as keyword search, clustering, relevance ranking, and sampling – to vastly expedite the review process. Technology-Assisted Review has been shown to save up to 80% of total cost versus linear review, which can add up to millions of dollars on even a single matter.
Are all Technology-Assisted Review Solutions the Same?
Consensus conclusion: No. Today, there are two broad categories of Technology-Assisted Review – one that leverages artificial intelligence and another that relies on a human’s understanding of language to identify potentially relevant data in a document collection. The artificial intelligence-based approach provides quick insight into the matter and may require less oversight from senior attorneys, but there can be a “blind spot” in this process. A few years ago, the common practice was to review as few as 500 documents as a “seed set” in order to train the system on what to look for within the collection. But with data volumes increasing and better education on semantic patterns, a best practice is now to build a seed set of approximately 10,000 documents. Alternatively, the language-based approach makes document coding decisions based on the specific language contained within each document. This process is easier to understand and explain to all parties than its artificial intelligence cousin (can you explain the inner workings of Latent Semantic Indexing?); it also provides more transparency into coding decisions, makes it easier to audit reviewers in real time, and creates a reusable work product that can deliver even greater efficiencies in the future.
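To make the seed-set idea concrete for technically minded readers, here is a deliberately simplified sketch of how an artificial intelligence-based tool can learn from attorney-coded seed documents and then rank the rest of the collection. This is a toy illustration only – not any vendor’s actual algorithm – and the documents and labels below are invented for demonstration.

```python
from collections import Counter
import math

def train_seed_weights(seed_docs):
    """Learn per-term relevance weights from a reviewed seed set.

    seed_docs: list of (text, is_relevant) pairs coded by attorneys.
    Returns a dict mapping each term to a log-odds weight.
    """
    rel, irr = Counter(), Counter()
    for text, is_relevant in seed_docs:
        (rel if is_relevant else irr).update(text.lower().split())
    vocab = set(rel) | set(irr)
    n_rel, n_irr = sum(rel.values()), sum(irr.values())
    weights = {}
    for term in vocab:
        # Add-one smoothing so terms seen on only one side
        # don't produce infinite weights.
        p_rel = (rel[term] + 1) / (n_rel + len(vocab))
        p_irr = (irr[term] + 1) / (n_irr + len(vocab))
        weights[term] = math.log(p_rel / p_irr)
    return weights

def score(weights, text):
    """Higher score = more likely relevant; used to rank the collection."""
    return sum(weights.get(t, 0.0) for t in text.lower().split())

# Hypothetical seed set coded by senior attorneys (invented examples).
seed = [
    ("merger agreement draft attached for review", True),
    ("please sign the merger term sheet", True),
    ("lunch menu for the holiday party", False),
    ("parking garage closed this weekend", False),
]
w = train_seed_weights(seed)

# Unreviewed documents, ranked most-likely-relevant first.
ranked = sorted(
    ["office party rsvp", "merger closing checklist"],
    key=lambda d: score(w, d),
    reverse=True,
)
```

With only four seed documents the rankings are fragile, which is exactly the “blind spot” described above – and why the trend has moved toward seed sets of roughly 10,000 documents.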
Does Case Law Support the Use of Technology-Assisted Review?
Consensus conclusion: Case law is quickly emerging to support the use of both categories of Technology-Assisted Review. Two specific cases, from highly respected districts, are particularly notable: Judge Peck’s February 24th order in Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y. Feb. 24, 2012), and Kleen Products v. Packaging Corporation of America, Case No. 10 C 5711 (N.D. Ill. April 8, 2011). In Da Silva, Judge Peck specifically held that “[technology]-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” While Judge Peck comments on a matter that involves the artificial intelligence approach, the general principles he highlights – leveraging technology to expedite review, focusing on quality, and sampling to ensure reasonable results – support both approaches. In Kleen, Judge Nolan ruled in favor of the producing party’s use of a language-based approach for a number of reasons, but specifically because that approach had been embraced by the court system for years.
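The “sampling to ensure reasonable results” principle Judge Peck highlights is straightforward to sketch in code. Below is a minimal, hypothetical example of one common quality-control step: drawing a random sample from the documents coded non-relevant and estimating how many relevant documents slipped through, with a simple normal-approximation confidence interval. Real validation protocols are negotiated between the parties and typically use more rigorous statistics; the numbers here are invented.

```python
import math
import random

def sample_for_qc(discard_pile, sample_size, seed=42):
    """Draw a simple random sample from documents coded non-relevant."""
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    return rng.sample(discard_pile, min(sample_size, len(discard_pile)))

def elusion_estimate(n_sampled, n_found_relevant, z=1.96):
    """Estimate the rate of relevant documents missed by the review,
    with an approximate 95% confidence interval (normal approximation)."""
    p = n_found_relevant / n_sampled
    margin = z * math.sqrt(p * (1 - p) / n_sampled)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical check: attorneys re-review 400 sampled "non-relevant"
# documents and find 6 that were actually relevant.
rate, lo, hi = elusion_estimate(400, 6)
```

A result like this – a miss rate of about 1.5%, bounded well under 3% – is the kind of quantitative showing that helps demonstrate the reasonableness of a review to the court.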
How Do I Choose the Right Alternative?
Consensus conclusion: It depends. There is a whole spectrum of review acceleration solutions available in the market, and choosing the right one (or often a combination of them) depends on your company’s litigation profile, your data set, and the time, cost and risk sensitivities of each unique matter. The artificial intelligence-based approach to Technology-Assisted Review often makes sense when two elements are in play: there is a need to arrive at quick decisions early in the litigation, and enough time is available to review up to 10,000 documents for a seed set. The language-based approach is most appealing when transparency and reviewer auditability are of paramount concern, and when an organization wants to incorporate this approach as a regular business practice. Be cautious of tying your success to a single technology platform, because each matter is unique and may require a slightly different methodology to achieve optimal results. If you need help compiling the right solution, take the time to find an expert, because the cost and risk of making a mistake in eDiscovery can be severe.
To learn more about review acceleration, and the two key alternatives to Technology-Assisted Review, an excellent white paper written by Enterprise Strategy Group is available here.
About the Author
Joe Garber is Vice President of Marketing for RenewData. During his 18-year career, he has served as Director of Market Strategy for Autonomy (an HP company), worked as a management consultant for IBM, led marketing and product management for a variety of successful technology startups, and served as press secretary for a U.S. Senator. He holds a Bachelor of Arts degree from Pepperdine University and a Master of Business Administration (MBA) from Cornell University, where he received the prestigious Park Leadership Fellow award for “demonstrated leadership and academic excellence.”