A hat-tip to Kurt Wismer for drawing my attention via Security Memetics to a test (of sorts) described on a SANS web site.
@robtlee specifically claims that “This isn’t an anti-AV or HIDS write-up…” Well, SANS always seems ready to take a potshot at the anti-virus industry, but at least he does quote some of McAfee’s response – unfortunately, I’ve been unable to find a public posting of the whole response, if there is one.
Wismer’s commentary is actually pretty complete in itself: I can only add that a simulated non-algorithmic threat might have some validity as a means of reminding us that anti-virus – indeed, security products in general – are not always effective (shock! horror!). It might even qualify as “an incredibly rich and realistic attack scenario” (humility has never been a common trait among SANS instructors): at least the test, if I can describe it as such, has put its finger on why APTs are likely to bypass algorithmic defences like AV.
However, while it avoids the worst excesses of the AVID (AV Is Dead) lobby, it doesn’t do justice to the complexity of the threats that modern AV does detect by describing them as low-hanging fruit. And while the exercise did use real (albeit repackaged) malware such as Poison Ivy, it also used “custom crafted malware”. Oh dear. Never mind the ethical and practical problems the AV industry has with malware creation for testing purposes: I’m getting a little tired of tests and AVID exercises that follow a chain of logic along the lines of:
- We just wrote and/or modified some code that we think might be malicious
- We think that anti-virus ought to detect it
- Gotcha! It doesn’t!
- AV is useless
Simulations aren’t always completely useless, but their usefulness is at best limited – a topic that is addressed to some extent in the recently published AMTSO guidelines document, AMTSO Use and Misuse of Test Files.
David Harley CITP FBCS CISSP
ESET Senior Research Fellow
(but not speaking for ESET, AMTSO, or anyone but himself and anyone who happens to agree with him)