Kurt Wismer’s ‘anti-virus rants’ – which are far more rational than the name of his blog might suggest – aren’t always a comfortable read for the anti-virus industry (and why should they be?), but they’re always worth reading. In debating AV effectiveness with a security expert, he summarizes a recent Twitter thread that epitomizes some of the confused thinking in the wider security industry about the value (if any) of anti-malware.
Kurt’s analysis of some logical flaws in that debate, and of the misperception – not only in the (wider) security industry but in the world at large – of AV’s role in the corporate security management field, is worth at least one quote in its own right.
…conceptually AV is an abstraction that encompasses a variety of disparate preventative, detective, and recovery techniques. most people, however, just see AV as a magic box that you turn on and it just protects things. the only component that behaves anything like that is the real-time scanner, but it is not the only component in a security suite (especially an enterprise security suite) by any stretch of the imagination.
And I’ll probably come back to all that sooner rather than later, here and elsewhere. Not to mention the persistent assertions that only statistics pertaining to targeted attacks count.
However, for the purposes of this blog, I want to focus on some points he made that appertain directly to testing. For the moment, I’m really just listing a few of them, but I expect to come back to look at them in detail in another article. Just not this week. :)
- Targeted attacks (as in the penetration testing carried out by Robert Graham) are a special case. (Essentially, it appears that Graham tweaks malware until it’s not detected by the targeted AV, then drops it onto a target machine: feasible as a pen-test strategy, but meaningless as an assessment of the overall effectiveness of AV technology, IMHO.)
- Trying to use the DBIR (Data Breach Investigations Report) “to evaluate the effectiveness of AV represents a self-selected sample bias because the failure itself causes the event to be included in the study.”
- Dan Kaminsky appears to be assuming that independent testing is based on the WildList. In fact, pure WildList testing has been on the wane for years.
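The self-selection point is worth pausing on. Here’s a hypothetical toy simulation (mine, not Kurt’s) of why breach-report data can’t measure detection rates: the assumed true per-attack detection rate and attack count are made-up numbers purely for illustration.

```python
import random

random.seed(42)

TRUE_DETECTION_RATE = 0.9  # assumed, for illustration only
N_ATTACKS = 10_000

# True for each attack the AV detected, False for each it missed.
attacks = [random.random() < TRUE_DETECTION_RATE for _ in range(N_ATTACKS)]

# Population view: fraction of all attacks the AV stopped.
overall_rate = sum(attacks) / N_ATTACKS

# Breach-report view: an attack only becomes a recorded "incident" if it
# succeeded, i.e. if the AV failed. So every sampled case is, by
# construction, an AV failure.
breach_sample = [a for a in attacks if not a]
detections_in_sample = sum(breach_sample)  # always 0, whatever the true rate

print(f"detection rate across all attacks: {overall_rate:.2f}")
print(f"detections among breach-report cases: {detections_in_sample}")
```

However high the true detection rate, the breach sample shows a 0% detection rate, because the failure itself is the inclusion criterion – which is exactly the bias Kurt is pointing at.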
How does Kurt think testing should be carried out? Well, I’d rather you read his closing paragraphs in the context of the whole article than have me reproduce them here, but they’re a pretty neat summary of a lot of targets that (some) testers at least aspire to, even if they aren’t too near to achieving them right now.
David Harley CITP FBCS CISSP
Anti-Malware Testing/Mac Virus/Small Blue-Green World
ESET Senior Research Fellow