False positives are a major concern and can constitute a considerable nuisance (or worse), as any search engine worth its salt will show.
Clearly, customers who fall victim to a major FP problem in their current product tend to be extremely dissatisfied. The vendors concerned are also deeply unhappy in such cases: apart from the fact that people in the legitimate AV industry usually want to offer their customers help, not hindrance, the PR damage can drastically affect a vendor’s bottom line, at least in the short term. Nor is it a simple problem. Every time a high-profile FP hits the forums, we hear the cry “why don’t you run quality assessment tests before you release updates?” Well, of course, the industry does go to considerable lengths to minimize the possibility of a major FP, but striking a balance between catching as much malware as possible and minimizing the FP risk is neither simple nor a “do it once and be done” problem. Every update is a potential trouble ticket, so vendors (and customers) who update very frequently tend to be even more at risk.
That’s an issue I’ve addressed a number of times elsewhere, including here http://blog.eset.com/2010/07/29/false-positives-and-apportioning-blame and here http://blog.eset.com/2009/02/24/false-positive-fracas, where I’ve been at pains to explain that most vendors will not try to capitalize on the FP problems of other companies. We’re all too aware that claims of “it couldn’t happen to our product” don’t wash, and both our customers and our competitors will remember if we make such unsubstantiated claims.
The incident reported by Heise and associated with the freeware ClamWin is particularly dramatic, with reports of up to 25,000 files (including system files) quarantined and uncertain prospects of recovery. Even so, I’m certainly not going to tell you that a disaster on such a scale could never happen to a mainstream commercial product, and it’s inevitable that people and companies will want to know how best to assess the risk that their security software of choice will put them in a similar position.
Some of the significant work done at the recent AMTSO workshop in Munich went into a paper that addresses the problem of false positives in the testing context. False-positive testing is no easy option for a tester, but this guidelines document does clarify the issues and makes it easier to work towards realistic methodologies for assessing a product’s susceptibility, by classifying the impact of an FP more accurately (in terms of prevalence, criticality and so on).
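To make that classification idea concrete, here is a minimal sketch of how a tester might weight FP reports by prevalence and criticality so that the most damaging ones surface first. The field names, the 1–5 criticality scale, and the simple multiplicative score are my own illustrative assumptions, not anything prescribed by the AMTSO guidelines document.

```python
# Illustrative sketch only: weighting an FP by its likely real-world impact
# rather than treating every FP as equal. Scale and formula are assumptions.
from dataclasses import dataclass


@dataclass
class FalsePositive:
    flagged_file: str
    prevalence: float   # estimated fraction of the user base with this file (0..1)
    criticality: int    # 1 = cosmetic nuisance ... 5 = critical system file quarantined


def impact_score(fp: FalsePositive) -> float:
    """Combine prevalence and criticality into one comparable number."""
    return fp.prevalence * fp.criticality


def rank_false_positives(fps):
    """Order FP reports so the most damaging ones are triaged first."""
    return sorted(fps, key=impact_score, reverse=True)


reports = [
    FalsePositive("obscure_game.exe", prevalence=0.001, criticality=2),
    FalsePositive("kernel32.dll", prevalence=0.99, criticality=5),
    FalsePositive("popular_media_player.exe", prevalence=0.40, criticality=3),
]

for fp in rank_false_positives(reports):
    print(f"{fp.flagged_file}: {impact_score(fp):.2f}")
```

On this toy scoring, a rarely seen game executable scores far below a widely deployed system DLL, which is exactly the distinction that makes blanket FP counts misleading in tests.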
David Harley CITP FBCS CISSP
ESET Senior Research Fellow