Choice of Concept Search Tool in e-Discovery May Matter Less Than You Think
Tom O’Connor and I recently wrote a joint blog post about concept search software for e-discovery. Subsequently, we received comments from Herb Roitblat of Orcatec, an expert in information management, data mining, statistics, and eDiscovery processes. I share his comments here.
At his docNative Paradigm Blog, Tom posted Herb’s comments on Xerox CategoriX and on Musings on the Best Approach to EDD Search (29 Oct 2009) by Tom and Ron:
I publish here, with permission, additional comments from Herb, who wrote these in response to a message I sent him with my “take aways” from his first comments.
My summary and interpretation of Herb’s comments below and in the posts at Tom’s blog is that while concept search is a useful tool for e-discovery, the selection of the specific “flavor” of concept search tool matters less than smart application of it. Tool selection needs to be case specific because a “bake-off” among concept search tools only tells you how well a tool does against a specific set of documents. Since it’s not economically feasible to use multiple tools per case, you need to make a reasonable tool selection at the outset of the case. As important, you need a reasonable and defensible process (which means documenting tool selection and process). The reasonableness standard depends on the stakes of the case.
Herb and Ron Exchange by E-Mail
Ron: So it sounds like what you are saying is that the difference in e-discovery concept search tools is probably overwhelmed by differences in document sets and in process / control.
Herb: I agree with this, but it has to be said carefully. Clothing does not make the man and high-powered tools do not make the builder, but they do help a good builder do better work. No matter how good your tools are, if they are not used well, you get a questionable result.
Ron: Concept search is not a magic bullet but helps expand the universe of documents to consider because it finds docs with words you would not otherwise think of as search terms.
Herb: It helps you think, but it is not a substitute for thinking. It is, as you say, not a magic bullet, just an amplifier.
Ron: Concept search can also help speed review by clustering similar documents.
Herb: Concept search expands queries to return results that are the best match to the expanded query. Thus, the top results are those that best match the query term and its context. (See the green search on Truevert.com for an example: search for meat and you get organic meat, not Omaha Steaks. There the context is supplied by green documents.)
Ron: I back away from my initial assertion of the need to use multiple tools. I argued that to spur thinking among EDD professionals. Upon further reflection, what I really meant to say is that lawyers should focus more on industrial processes and controls, statistics, and metrics than on software features.
Herb: That’s what I think.
Ron: So that means we have no magic bullets. The legal profession has hard work ahead to industrialize its processes.
Herb: It’s actually not that hard. You just have to be thoughtful about what you are doing. It is not even terribly burdensome if you are realistic about the levels of accuracy that you can really achieve (see below).
Ron: We still don’t seem to have an objective standard by which to judge if a process is ‘good enough’.
Herb: There are lots of ways of deciding whether a process is good enough and lawyers are used to making reasonableness judgments and arguing about them. What are the consequences of different types of errors (e.g., retrieving too many documents, retrieving too few)?
Scientists, by tradition, usually use a standard of .95 confidence. For example, if two treatments are different with 95% confidence, then we accept them as different. That does not tell us how different they are or that the difference is practically important or useful, only that the difference is statistically significant. Scientists often report higher confidence levels than that, but the minimum is usually .95. That tradition has worked well in science where subsequent research can correct the relatively few times when the difference does not really exist, but resulted from sampling (luck of the draw).
As an analogy, if you play slot machines, the things return only about 95–98% of the money that gets pumped into them, but that does not mean that some people don’t actually win large amounts. It happens sometimes. The luck of the draw usually returns less than you put in, but sometimes it returns more.
Back to good enough. Engineers typically use confidence levels to tell them how well to build a bridge. They consider the consequences of different kinds of failure (think of the Tacoma Narrows Bridge). NASA uses confidence levels to determine the quality of their systems. Where the consequences are severe, they require higher confidence.
In eDiscovery, we are familiar with proportionality arguments and the like for determining things like cost shifting. The same thing applies here. A bet-the-company litigation may merit a higher level of confidence than a run-of-the-mill litigation. Different types of errors may be weighted differently depending on the consequences of that kind of error.
None of this is hard, nor does it require much mathematical background. I published some tables a while back showing how many documents you should sample if you want to achieve a certain level of confidence and you are willing to accept the possibility of missing a certain proportion of responsive documents.
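Herb’s tables are not reproduced here, but the arithmetic behind this kind of sampling plan is straightforward. The sketch below assumes the simplest version (a zero-acceptance plan): you sample n documents from the set you intend to treat as non-responsive, find none responsive, and conclude with the stated confidence that at most a given proportion of that set is responsive. The function name and parameters are illustrative, not Herb’s.

```python
import math

def sample_size(confidence: float = 0.95, miss_rate: float = 0.01) -> int:
    """Smallest n such that, if a fraction `miss_rate` of the collection
    were responsive, the chance of a sample of n containing none of them
    would be at most (1 - confidence).  Solves (1 - miss_rate)^n <= 1 - confidence.
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - miss_rate))

# With 95% confidence and a tolerated 1% miss rate, the plan calls for
# a sample of 299 documents; loosening the miss rate to 5% drops it to 59.
print(sample_size(0.95, 0.01))  # 299
print(sample_size(0.95, 0.05))  # 59
```

Note that the required sample size depends on the confidence level and tolerated miss rate, not on the size of the collection (for large collections), which is why sampling stays practical even in very large matters.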
As I think I’ve said, I think that another part of reasonableness is transparency. Be able to describe what you did. A scientific publication is intended to describe enough of the methodology so that another scientist can replicate the observations. I don’t think that you necessarily have to publish to the other side what you did, but you should be able to provide that information if required (think Victor Stanley).