On Friday, Judge Andrew J. Peck issued an important ruling in Monique Da Silva Moore v. Publicis Groupe & MSL Group, a widely followed dispute about computer-assisted document review in e-discovery.

Judge Peck held that “computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” Matthew Nelson offered a good summary yesterday in an e-Discovery 2.0 blog post, Computer-Assisted Review “Acceptable in Appropriate Cases,” says Judge Peck in new Da Silva Moore eDiscovery Ruling.

The question of humans versus computers in document review has been long in the making. For example, my 2007 post, The Gold Standard for E-Discovery Document Review, concluded that “we seem doomed to years of costly litigation and a trickle of published decisions to establish a new standard.” Judge Peck’s decision ushers in the dawn of a new day.

To get us to the full light of this new day, consider how cognitive science should inform document review. In his recent book, Thinking, Fast and Slow, Nobel Laureate Daniel Kahneman shows that humans are much less rational, accurate, and consistent than we think. Two chapters in particular should give skeptics of computer-assisted review pause.

In chapter 21, Kahneman explains that most experts overrate their ability to assess and predict results. In 60% of some 200 studies comparing predictions by experts and by simple algorithms, the latter outperformed the former. Study topics include medical outcomes, individual economic success, credit risk, and wine prices. In many, “the accuracy of experts was matched or exceeded by a simple algorithm.”
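
The “simple algorithm” in these studies is often nothing fancier than an equal-weight sum of a few standardized predictors (the wine-price study Kahneman cites scored vintages from weather data). A minimal sketch in Python, with invented numbers and predictor names, shows how little machinery is involved:

```python
# A "simple algorithm" in Kahneman's sense: an equal-weight sum of
# standardized predictors. Data and predictor names are hypothetical,
# loosely modeled on the wine-price example.
from statistics import mean, stdev

def standardize(values):
    """Subtract the mean and divide by the standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical vintages: winter rainfall (mm) and growing-season temp (C).
rainfall = [600, 820, 540, 700, 760]
temperature = [16.2, 17.1, 15.8, 16.9, 17.4]

# Equal weights: just add the standardized predictors. No expert tuning.
scores = [r + t for r, t in zip(standardize(rainfall), standardize(temperature))]

for vintage, score in enumerate(scores, start=1):
    print(f"vintage {vintage}: quality score {score:+.2f}")
```

The point is not the particular predictors or weights; it is that a dumb, consistent formula applied the same way every time is hard for an expert to beat.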

One reason computers do better is that humans try too hard to “think outside the box”. Another is that humans are inconsistent: “When asked to evaluate the same information twice, they frequently give different answers.” Some 40 studies of experts found that they contradict themselves about 20% of the time when presented with the same information.
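
That 20% figure is easy to make concrete. The sketch below, a simulation with assumed error rates rather than real reviewer data, has a reviewer judge the same documents twice; because each pass errs independently, the two passes disagree at roughly the rate Kahneman reports, and every disagreement is wrong at least once. An algorithm, by contrast, returns the same answer both times.

```python
# Simulating reviewer inconsistency. Rates here are assumptions for
# illustration, not measurements from any study.
import random

random.seed(1)
N = 10_000
truth = [random.random() < 0.3 for _ in range(N)]  # assume 30% relevant

def review_pass(labels, accuracy=0.89):
    """One pass by a reviewer who independently misjudges each
    document with probability 0.11 (an assumed per-pass error rate)."""
    return [t if random.random() < accuracy else not t for t in labels]

first, second = review_pass(truth), review_pass(truth)
disagree = sum(a != b for a, b in zip(first, second)) / N
print(f"self-disagreement across two passes: {disagree:.1%}")
# Expected disagreement: 2 * 0.89 * 0.11, about 19.6% -- close to the
# ~20% self-contradiction rate in the studies Kahneman summarizes.
# A deterministic algorithm's rate on the same inputs is 0%.
```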

In chapter 22, Kahneman goes on to ask when we can trust expert intuition. He concludes that “the confidence that people have in their intuitions is not a reliable guide to their validity.” Expert judgment tends to be valid only where (1) the environment is “sufficiently regular to be predictable” and (2) there is an “opportunity to learn these regularities through prolonged practice.” The latter works best when the expert receives immediate feedback on the validity of his judgment.

It’s easy enough to apply these findings to document review. Lawyers have tremendous overconfidence in their intuition and in their expert ability to designate documents consistently and accurately. Several empirical studies of document review in litigation confirm that algorithms – “computer-assisted review,” “technology-assisted review,” or “predictive coding” – perform better than lawyers. To think otherwise is to give in to all the cognitive biases that Kahneman reveals. As for trusting intuition, fact patterns vary hugely across cases, so document review likely offers neither an environment regular enough to be predictable nor the “prolonged practice” needed to develop valid intuitions.
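
For readers unfamiliar with what sits behind those labels, here is a minimal sketch of the core idea of predictive coding, assuming the scikit-learn library: lawyers code a small seed set, a text classifier learns from it, and the classifier then scores the unreviewed collection. The documents and labels below are invented, and real tools wrap this core in sampling, validation, and iterative training.

```python
# Minimal sketch of predictive coding: train a text classifier on
# lawyer-coded seed documents, then score the unreviewed collection.
# Assumes scikit-learn; documents and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_docs = [
    "Q3 revenue forecast attached per our discussion",
    "lunch on friday? the usual place",
    "board deck on the merger timeline, privileged",
    "fantasy football trade deadline reminder",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, as coded by senior lawyers

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the rest of the collection; high scores go to the top of the queue.
unreviewed = ["merger due diligence checklist", "happy hour next week"]
for doc, p in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"relevance {p:.2f}  {doc}")
```

Crucially, the trained model applies the same decision rule to document one and to document one million – exactly the consistency Kahneman finds humans lack.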

Many in our profession agree that we should rely on a well-designed process supported by algorithms. We reasonably fear, however, having to defend that process before a judge who does not understand the issue. Judge Peck’s decision opens the door. Daniel Kahneman’s book provides the cognitive science foundation to drive home the point.