This is a live blog post from Legal Tech NY. The session is Judicial Perspectives on Technology-Assisted Review, hosted by Xerox Litigation Services. The panelists are Judges Andrew M. Peck, Frank Maas, David Waxse, and Sr. Master Steven Whitaker. Gabriela P. Baron of XLS moderates. [Apologies in advance for typos; this is my first time live blogging from my iPad. Any editorial comments are enclosed in square brackets.]

What is driving the push to automated review now?
Maas: time and cost are the keys. Judges not providing sufficient time, plus the increasing volume of data, mean costs rise and time frames shrink.
Whitaker: volume and the manifest failure of key word searching to be effective are the keys. Cites example of using “road” as a key word, which appears in many addresses. [RF: seems to me that is a failure of humans, not of technology]
Peck: agrees with the above. Also cites “out of control costs” for e-discovery, which cannot be contained given the volume, and notes that key words miss key docs, especially since most lawyers don’t do key word searches very well. Stresses the importance of technology and process; cites studies showing computer assisted review is better than human review.
Waxse: clients don’t want to pay the huge bills for human review. The knowledge base in the bar is improving, which pushes the move to automated review. There is no choice but to use tech in every case – it’s just a matter of degree and type. Also cites studies showing that human review is not reliable. Does not understand why some lawyers ignore science and still insist that human review is better. [RF: amen]

What experience have you had with tech-assisted reviews in matters that have come before you? What types of cases are ripe or better suited for tech-assisted review? What parameters drive the decision?
Waxse: Parties should reach agreement on what they should use. Most efficient is for the parties to agree on a single approach to review. Cooperation is the biggest source of a solution – if we could just get lawyers to agree on which technology to use for review. Says that judges will sign off on a joint decision by the parties.
Peck: Not sure he agrees entirely that judges will just sign off. Is now using the NY questionnaire for all cases. In 2 of 3 cases, the parties agreed on paper production with this new approach.
Maas: sees computer assisted review coming to the fore most often in government investigations. While opposing party agreement is good, disagrees with J. Waxse that it alone suffices. There may be instances where opposing parties have good reason to use different tech approaches. But they should still agree on the overall approach.
Whitaker: Example of one case where discovery can go wrong. One side had most of the docs; both sides agreed to use predictive coding. Side A, with all the docs, chose all the seed sets and trained the system, but did not consult Side B re the seed sets. Responsive docs came in under 2%. Side B was not pleased, but the Judge says having so few responsive documents is a good result. The issue is that both sides should agree in advance on the search protocols. Opponents should sign off on the selection of seed sets. Need to eliminate challenges deep into the case. Goes on to say that judges do not need to look into the “black box” of how the tech works.
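[RF: For readers who have not seen predictive coding under the hood, here is a rough sketch of the workflow Whitaker describes – an agreed seed set trains a text classifier, which then scores the rest of the collection, and the responsiveness rate is simply the share of docs scoring above the cutoff. No panelist named a specific tool; this hypothetical example uses scikit-learn, and the documents, labels, and 0.5 cutoff are invented purely for illustration.]

```python
# Hypothetical predictive-coding sketch (not any panelist's actual tool):
# a jointly agreed seed set trains a classifier, which scores the rest of
# the collection; docs above the cutoff are presumptively responsive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents both sides have reviewed and labeled (1 = responsive).
seed_docs = ["contract amendment for the road project", "holiday party invitation"]
seed_labels = [1, 0]

# The unreviewed remainder of the collection (illustrative only).
collection = ["draft contract terms", "cafeteria lunch menu", "road project budget"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the collection and compute the responsiveness rate at the chosen cutoff.
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
cutoff = 0.5
responsive = [doc for doc, score in zip(collection, scores) if score >= cutoff]
print(f"Responsiveness rate: {len(responsive) / len(collection):.0%}")
```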
Peck: Agrees parties must agree in advance (“Cards on the Table”). This is key to avoiding disputes. Either the parties will agree upfront or they should put the dispute before the judge before much time has elapsed and much money has been spent.
Maas: Notes that privileged docs may often be the best seed set. This point should be a subject of discussion between the parties.

Will parties have to share seed sets? Does this touch on work product and privilege issues?
Waxse: Yes, need a 502(d) agreement to avoid privilege issues. But the key is to use the best docs to get the best results. Don’t play games about the rules – go for the best approach to doc review. Clients will have to drive lawyers to a more cooperative stance.
Peck: The other side should not seed privileged docs but can still have input into the seed sets. In most cases, the opposing party already has a lot of docs and/or info about the case. Typically, opposing counsel will be able to offer the docs it already has as part of the seed set.
Waxse: Privilege designation has to be with respect to issues at stake in the case, not how the document is labeled.
Maas: Does not think privileged docs must be part of the seed set. Says 502(d) protects in federal and state court but not necessarily outside the US. Thinks parties should always enter into a 502(d) agreement.

How do you think we can change the culture of reliance on human review?
Waxse: Must rely on good science. Tells Kansas story of controversy over teaching creationism where courts said rely on science. There is no science supporting reliability of human review.
Peck: Agrees that the science shows human review is not reliable. Points out that even on the panel, the judges would likely not agree on all the doc review designations were they to conduct a human review. Also, we simply cannot afford human review of huge volumes. This does not mean using tech without human guidance. Studies show computer assisted review is 10x to 50x less expensive than human review.
Maas: humans have to inform the software what is relevant and what is not, and then judge whether the computer is making the right automated designations.
Waxse: human involvement is essential; human review is not. Emphasizes that lawyers must analyze the case. Discovery must focus on the disputed issues in the case. That requires opposing parties to agree on what’s at issue. Proper cooperation can narrow the issues, which makes discovery less burdensome.
Maas: most experienced trial lawyers agree that in any civil matter, there will end up being 6 or 7 key documents on which the case turns.

How do you each prioritize process and methodology versus the actual technology? How do you balance these? Assuming that all tech review methods are created equal, what metrics and outputs re the process would you expect to see to establish defensibility?
Waxse: wants an agreement between opposing parties. Litigation is not about setting some absolute standard. The simplest way is narrowing the issues and reaching agreement on what each side will do. The federal system is down to trying 1% of cases; two-thirds of the rest settle. If most cases settle, when does that happen? Too often, it takes too long because lawyers have fun in battle. Lawyers need to focus more on resolving the case. The focus should be on the issues, not on how the doc review should take place. The goal is resolving the dispute, not playing games about documents.
Peck: I don’t want a scientist coming into court to explain formulas and algorithms inside the black box (the search / review tool). Wants to know what the process was: how did you train the computer; how many iterations did you do; for those docs the computer said were not relevant, did you conduct QC on that set and refine the search accordingly? Many systems score docs rather than make a binary determination re relevance. Need to make a case-specific determination of what the index value should be. Proportionality is a key factor: absolute perfection is NOT a requirement.
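[RF: Peck’s checklist – how the computer was trained, how many iterations, whether the docs the computer called not relevant were QC’d – maps onto a fairly simple sampling exercise. Below is a minimal sketch of that QC step, assuming “index value” means a score cutoff and that the review tool can hand back the IDs of docs scored below it; the function name, sample size, and numbers are all hypothetical.]

```python
# Hypothetical QC of the "not relevant" pile: sample the docs scored below
# the cutoff, have a human code the sample, and estimate the elusion rate
# (the share of responsive docs the tool would have left behind).
import random

def qc_null_set(null_set_ids, human_review, sample_size=100, seed=1):
    """Sample the below-cutoff set and return the estimated elusion rate."""
    rng = random.Random(seed)
    sample = rng.sample(null_set_ids, min(sample_size, len(null_set_ids)))
    missed = sum(1 for doc_id in sample if human_review(doc_id))
    return missed / len(sample)

# Illustration: 10,000 docs scored "not relevant"; pretend human reviewers
# find roughly 2% of the sample to be responsive after all.
null_set = list(range(10_000))
elusion = qc_null_set(null_set, human_review=lambda doc_id: doc_id % 50 == 0)
print(f"Estimated elusion rate: {elusion:.1%}")
```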
Maas: consider the era of paper discovery… lawyers interviewed key custodians, looked through paper files, and looked at boxes in warehouses. Lawyers never started at box 1 of 15,000 and went through linearly. In paper cases, lawyers and judges would make a determination that enough review and production had occurred. The parties exerted a reasonable amount of effort. The same principle applies to tech assisted review. There is no clear-cut, numerical rule.
Whitaker: review is about process. Agree on the process and stick to the results it produces.

It seems that lawyers are waiting for a key opinion. Do you think a decision will be forthcoming? Do we need to change the rules? Who should be pushing these issues?
Peck: odds are fairly good that the first case talking about computer assisted review will get it wrong. The alternative is key words, and most decisions on key words said the lawyers got the key words wrong. The leading judges on EDD advocate computer assisted review. I can’t write an opinion where the parties agree on computer assisted review; an opinion will only issue when the parties do not agree on the approach. Hopes the judge writing that opinion understands the topic. Someone has to go first.
Waxse: litigants must take risks. They have to make rational cost-benefit decisions. It’s a big mistake to wait for the right court opinion. Lawyers are not appropriately analyzing the cases that imposed sanctions. All the sanctions cases of note involved a party that lied, so sanctions cases provide little useful guidance.
Maas: re amending the rules… it would be a mistake to amend the rules to deal with a specific technology. Does not see a need to amend the rules to accommodate computer assisted review.

[Q&A session not captured]