With this post, I introduce a new blog category, “Vendors Speak.” Some explanation and then interesting commentary on dealing with audio in e-discovery.

Vendors sometimes send me thought-provoking messages. I have been too cautious in sharing these. Vendors often have deep insight into the issues lawyers face. I will exercise editorial discretion and disclose any self-interest when I post vendor comments. My posting is not a product/company endorsement nor an independent vetting of the vendor’s facts or analysis.

I was prompted to start this category by a message I received from David Fishel, Esq., Senior Director – Technology Counsel at Nexidia. Click “more” below to see his interesting comments about audio recordings in e-discovery and my response to him. For those interested in this topic, David is doing a webinar next week with Mary Mack of FIOS (details here). He is also writing a white paper on the topic; to get a copy when it’s ready, contact him (ediscovery at nexidia dot com).


David Fishel wrote the indented text (my response to him follows):

I was glad to see your post about “the gold standard” of document review that new technologies must compete against because it relates to issues my company faces with its phonetic audio search.

I see two big issues facing audio discovery. First, what people refer to as the “defensibility” of the search technology, and second, the recent urge parties have to declare audio recordings not “reasonably accessible.” I’m less concerned with the latter because the courts are applying the customary standards (i.e., requiring evidence of cost and burden, not just conclusory statements of counsel) and proportionality. Those are readily dealt with, even if the fact that audio can be searched with a high degree of accuracy comes as a surprise to many.

The defensibility question is trickier, primarily because of the lack of standards to apply. I have not found written opinions on “how accurate does a review have to be to satisfy the requirements of the FRCP?” In fact, I’ve never seen a court ask how “accurate” a review was — complete yes, but not “accurate.”

I’m sure that there have been plenty of arguments over concept search, email link/cluster analysis, and other new technologies as the basis for winnowing collections for review. However, the courts don’t seem to be deciding the arguments in writing. As you point out, setting human review as the “gold standard” assumes way too much about how good humans really are at these tasks.

I would be very interested to hear if you have had any experience with challenges to new technologies. Courts and parties have certainly accepted the results of new technologies, but it’s hard to say what criteria they used to judge them. And if you apply some empirical standard, do you have to test it on every data set in every case?

Audio is a bit more complicated than text in this regard because there really are qualitative differences in recordings that affect searchability (as well as comprehension by human reviewers). On the other hand, text (now that we are pretty much in the post-OCR era) is more or less text.

My company, Nexidia, and many of our clients have a lot of experience with human listening review, and anecdotal information suggests that after about one hour, the listener’s attention flags and results suffer. We recently saw an e-discovery project with 100 audio search terms: no sweat for our search tool, but my experience leads me to believe that such a set would be almost impossible for human reviewers to manage.

A major factor that bears consideration is that the “gold standard” does not scale. Think about the labor costs alone for physically listening to, say, 5,000 hours of audio recordings while looking for 100 search terms. Nexidia has a sophisticated testing process that we use to tune our “language packs” and that is readily adaptable to showing “accuracy” on case-specific data. However, it is not clear to me what level of “accuracy” is acceptable or required.
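To put rough numbers on that scaling point, here is a back-of-envelope sketch in Python; the listening speed, overhead factor, hourly rate, and weekly capacity are illustrative assumptions, not figures from Nexidia or from any actual project:

```python
# Back-of-envelope cost of human listening review at the scale described above.
# All figures below are assumptions for illustration, not actual vendor or market rates.

audio_hours = 5000            # hours of recordings in the collection (from the scenario above)
listening_speed = 1.0         # assumed reviewer hours per hour of audio (real-time listening)
overhead_factor = 1.25        # assumed extra time for note-taking, coding, and breaks
reviewer_rate = 50.0          # assumed loaded cost per reviewer-hour, in dollars
hours_per_reviewer_week = 35  # assumed productive review hours per reviewer per week

review_hours = audio_hours * listening_speed * overhead_factor
labor_cost = review_hours * reviewer_rate
reviewer_weeks = review_hours / hours_per_reviewer_week

print(f"Reviewer hours needed: {review_hours:,.0f}")
print(f"Estimated labor cost:  ${labor_cost:,.0f}")
print(f"Reviewer-weeks:        {reviewer_weeks:,.0f}")
# With these assumptions: 6,250 reviewer hours, about $312,500, and roughly 179 reviewer-weeks,
# before accounting for the difficulty of holding 100 search terms in mind while listening.
```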

An even harder issue is what standard for judging “accuracy” a regulatory agency might demand in an investigative data request, and what level of confidence it might require. The agencies are not hampered by the “undue burden or cost” argument. There is much discussion about the FRCP, but a huge amount of cutting-edge discovery is happening in the regulatory compliance world. If agencies assume human listeners are the gold standard, what should it take to convince them?


Ron’s reply:

You raise interesting questions to which I don’t have answers. I have not seen published decisions on keyword vs. Boolean vs. concept search; I think you are right that these issues come up but don’t generate written opinions. That may change, but who knows when.

One possible strategy is to rely on judicial notice of reliable third-party standards or findings. I don’t know if that would work, and of course, someone has to generate those findings. A big challenge in EDD is that few are willing to invest in testing different approaches. I think some empirical testing does occur, but those doing it consider the results proprietary. Aside from potential antitrust issues, it’s hard to imagine vendors collaborating to pay for a third-party test. And even if they did, courts would be suspicious of findings generated with vendor dollars.

I’m not a practicing lawyer, so my views on what might happen may not be accurate, but I have two thoughts:

1. It strikes me that the real problem is the initial presumption. I wonder if some reasonably conducted empirical study, even if not dispositive, would suffice to shift the burden of proof and presumptions. If someone challenged phonetic search, for example, it would be nice if you could point to a study showing it outperforms human review and have a court say to the challenger, “OK, now the burden has shifted to you to prove it does not work.”

2. Some courts have accepted sampling for “inaccessible” data. Perhaps if a dispute over accuracy arose, a court might order both sides to share the cost of a special master or neutral third party testing the alternative approaches. Of course, aside from all the other issues, how much testing, and on what data, are key questions.
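As a rough illustration of what that kind of neutral testing might report, here is a minimal sketch, using made-up sample numbers, of estimating a search tool’s recall from a human-reviewed random sample, along with a confidence interval that shows how much the sample size limits the conclusion:

```python
# A minimal sketch, under assumed numbers, of how a neutral expert might report the
# accuracy of a search against human review of a random sample, with a confidence
# interval so the parties can see how much the sample size limits the conclusion.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion estimated from n sampled items."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical sample: humans listened to 400 randomly selected recordings and
# marked 50 as relevant; the phonetic search had flagged 44 of those 50.
relevant_in_sample = 50
also_found_by_search = 44

recall = also_found_by_search / relevant_in_sample
low, high = wilson_interval(also_found_by_search, relevant_in_sample)
print(f"Point estimate of recall: {recall:.0%}")
print(f"95% confidence interval:  {low:.0%} to {high:.0%}")
# With these numbers the point estimate is 88%, but the interval runs from roughly
# 76% to 94%, which is exactly the "how much testing and on what data" question.
```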

On your question about agencies… my sense from living in DC almost 20 years now is that agencies don’t speak with one voice. Some agency staff and lawyers get it; others don’t. I’m not sure what it would take to establish an official policy at DOJ, FDA, SEC, etc.