Tom O’Connor and I recently wrote a joint blog post about concept search software for e-discovery. Subsequently, we received comments from Herb Roitblat of Orcatec, an expert in information management, data mining, statistics, and eDiscovery processes. I share his comments here.
At his docNative Paradigm Blog, Tom posted Herb’s comments on Xerox CategoriX and on Musings on the Best Approach to EDD Search (29 Oct 2009) by Tom and Ron:
- Herb Roitblat on ED Searches (12 Nov 2009)
- Herb Roitblat On Search: Part 2 (23 Nov 2009)
I publish here, with permission, additional comments from Herb, written in response to a message I sent him with my takeaways from his first comments.
Summary
My summary and interpretation of Herb’s comments below and in the posts at Tom’s blog is that while concept search is a useful tool for e-discovery, the selection of the specific “flavor” of concept search tool matters less than smart application of it. Tool selection needs to be case specific because a “bake-off” among concept search tools only tells you how well each tool does against a specific set of documents. Since it is not economically feasible to use multiple tools per case, you need to make a reasonable tool selection at the outset of the case. Just as important, you need a reasonable and defensible process, which means documenting both tool selection and the process itself. The reasonableness standard depends on the stakes of the case.
Herb and Ron Exchange by E-Mail
Ron: So it sounds like what you are saying is that the difference in e-discovery concept search tools is probably overwhelmed by differences in document sets and in process / control.
Herb: I agree with this, but it has to be said carefully. Clothing does not make the man and high-powered tools do not make the builder, but they do help a good builder do better work. No matter how good your tools are, if they are not used well, you get a questionable result.
Ron: Concept search is not a magic bullet but helps expand the universe of documents to consider because it finds docs with words you would not otherwise think of as search terms.
Herb: It helps you think, but it is not a substitute for thinking. It is, as you say, not a magic bullet, just an amplifier.
Ron: Concept search can also help speed review by clustering similar documents.
Herb: Concept search expands queries and returns the results that best match the expanded query. Thus, the top results are those that best match the query term in its context. (See the green search on Truevert.com for an example: search for meat and you get organic meat, not Omaha Steaks. There, the context is supplied by green documents.)
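To make the mechanics concrete, here is a minimal sketch of query expansion in Python. Everything in it is hypothetical: the expansion map, the documents, and the scoring are invented stand-ins, since real concept search engines derive expansions statistically from a corpus (in Truevert’s case, from green documents).

```python
# Toy sketch of concept-style query expansion. The expansion map,
# documents, and scoring are hypothetical; real engines derive
# expansions statistically from the corpus.

EXPANSIONS = {
    "meat": ["organic", "grass-fed", "free-range"],  # assumed context terms
}

DOCUMENTS = [
    "organic meat from pasture-raised cattle",
    "Omaha Steaks holiday gift packages",
    "free-range poultry and grass-fed beef",
]

def expand(query: str) -> list[str]:
    """Return the query term plus its context-derived related terms."""
    return [query] + EXPANSIONS.get(query, [])

def score(doc: str, terms: list[str]) -> int:
    """Count how many expanded terms appear in the document."""
    text = doc.lower()
    return sum(1 for term in terms if term in text)

def search(query: str) -> list[str]:
    """Rank documents against the expanded query, not the bare term."""
    terms = expand(query)
    return sorted(DOCUMENTS, key=lambda d: score(d, terms), reverse=True)

for doc in search("meat"):
    print(doc)
# Context-rich documents rank above the bare keyword miss:
# "organic meat ..." first, "Omaha Steaks ..." last.
```

Even this toy version shows Herb’s point: documents that match the query’s context outrank a document that merely shares surface vocabulary with it.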
Ron: I back away from my initial assertion of the need to use multiple tools. I argued that to spur thinking among EDD professionals. Upon further reflection, what I really meant to say is that lawyers should focus more on industrial processes and controls, statistics, and metrics than on software features.
Herb: That’s what I think.
Ron: So that means we have no magic bullets. The legal profession has hard work ahead to industrialize its processes.
Herb: It’s actually not that hard. You just have to be thoughtful about what you are doing. It is not even terribly burdensome if you are realistic about the levels of accuracy that you can really achieve (see below).
Ron: We still don’t seem to have an objective standard by which to judge if a process is ‘good enough’.
Herb: There are lots of ways of deciding whether a process is good enough, and lawyers are used to making reasonableness judgments and arguing about them. What are the consequences of different types of errors (e.g., retrieving too many documents, retrieving too few)?
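In information-retrieval terms, those two error types map to precision and recall. A small illustration, with hypothetical counts:

```python
# The two error types, in information-retrieval terms. Counts are
# hypothetical.
relevant_retrieved = 800   # responsive documents the search returned
total_retrieved = 2000     # everything the search returned
total_relevant = 1000      # responsive documents actually in the collection

precision = relevant_retrieved / total_retrieved  # 0.40: many false hits
recall = relevant_retrieved / total_relevant      # 0.80: some misses

# Retrieving too many lowers precision (reviewers wade through junk);
# retrieving too few lowers recall (responsive documents are missed).
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```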
Scientists, by tradition, usually use a standard of .95 confidence. For example, if two treatments are different with 95% confidence, then we accept them as different. That does not tell us how different they are or whether the difference is practically important or useful, only that it is statistically significant. Scientists often report higher confidence levels than that, but the minimum is usually .95. That tradition has worked well in science, where subsequent research can correct the relatively few times when a difference does not really exist but merely reflects sampling (luck of the draw).
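As an illustration of the .95 convention Herb describes, here is a standard two-proportion z-test with hypothetical sample counts. It shows that a difference can be statistically significant while saying nothing about whether the difference matters in practice.

```python
import math

def two_proportion_z(hits1: int, n1: int, hits2: int, n2: int) -> float:
    """Standard two-proportion z statistic with pooled variance."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: treatment A succeeds 430 times in 500 trials,
# treatment B succeeds 400 times in 500 trials.
z = two_proportion_z(430, 500, 400, 500)
significant = abs(z) > 1.96  # 1.96 = two-sided critical value at .95

print(f"z = {z:.2f}, significant at 95%: {significant}")  # z = 2.53, True
# Significance says the difference is unlikely to be luck of the draw;
# it says nothing about whether a six-point gap matters in practice.
```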
As an analogy, slot machines return only about 95% to 98% of the money that gets pumped into them, but that does not mean some people don’t win large amounts; it happens sometimes. The luck of the draw usually returns less than you put in, but sometimes it returns more.
Back to good enough. Engineers typically use confidence levels to tell them how well to build a bridge. They consider the consequences of different kinds of failure (think of the Tacoma Narrows Bridge). NASA uses confidence levels to determine the quality of their systems. Where the consequences are severe, they require higher confidence.
In eDiscovery, we are familiar with proportionality arguments and the like for determining things like cost shifting. The same thinking applies here. Bet-the-company litigation may merit a higher level of confidence than run-of-the-mill litigation. Different types of errors may be weighted differently depending on the consequences of each kind of error.
None of this is hard, nor does it require much mathematical background. I published some tables a while back showing how many documents you should sample if you want to achieve a certain level of confidence and are willing to accept the possibility of missing a certain proportion of responsive documents.
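I don’t have Herb’s tables to reproduce, but the textbook sample-size formula for a simple random sample from a large population gives the flavor of the calculation; the confidence levels and margins below are illustrative.

```python
import math

# Textbook sample-size formula for estimating a proportion from a
# simple random sample of a large population. These are not Herb's
# actual tables; the confidence levels and margins are illustrative.

Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(confidence: float, margin: float, p: float = 0.5) -> int:
    """Documents to sample to estimate a proportion within +/- margin.

    p = 0.5 is the worst case (largest sample); use a smaller p if you
    have a defensible prior estimate of the responsive rate.
    """
    z = Z_SCORES[confidence]
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.95, 0.05))  # 385 documents
print(sample_size(0.95, 0.02))  # 2401 documents
print(sample_size(0.99, 0.02))  # 4148 documents
```

Note how fast the required sample grows as the acceptable margin of error shrinks; that is exactly why the stakes of the case should drive how much confidence you buy.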
As I think I’ve said, another part of reasonableness is transparency. Be able to describe what you did. A scientific publication is intended to describe enough of the methodology that another scientist can replicate the observations. I don’t think you necessarily have to publish to the other side what you did, but you should be able to provide that information if required (think Victor Stanley).