Posted by: David Harley | December 21, 2012

Imperva-ious to Criticism

If you’re familiar with my previous writing on testing, you’ll know that I have a bee in my bonnet about the misuse of VirusTotal as a quasi-test resource. Imperva recently set it a-buzz by claiming to prove that anti-virus isn’t worth paying for – its basic message being “why pay for AV when you could be paying us instead?” – on the strength of submitting 82 samples to VT and finding that:

  • The initial detection rate of a newly created virus is less than 5%.
  • …it may take up to four weeks to detect a new virus from the time of the initial scan.
  • The vendors with the best detection capabilities include those with free antivirus packages…

Unfortunately – or fortunately, for those of us who actually work in the AV industry – these initial conclusions are based on the fallacy that VirusTotal provides an accurate measure of the detection capabilities of the products that support it, and the report’s further conclusions are compromised accordingly.

Rather than go into detail again about why those conclusions are fallacious, let me refer you to my article on the (ISC)2 blog – There’s Testing, Then There’s VirusTotal – though here’s a relevant quote to give you the general idea:

VirusTotal doesn’t tell you whether a product is capable of detecting a malicious file. At best, it tells you whether it’s capable of detecting it using that particular program module and configuration.

In a paper I co-authored with VirusTotal’s Julio Canto, we pointed out that “VirusTotal uses a group of very heterogeneous engines. AV products may implement roughly equivalent functionality in enormously different ways, and VT doesn’t exercise all the layers of functionality that may be present in a modern security product.”
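To make concrete what a VirusTotal lookup actually tells you, here’s a minimal sketch (my own illustration, nothing to do with the Imperva report) that queries VT’s public API v2 for an existing file report and prints the per-engine verdicts. The API key is a placeholder, and the endpoint and JSON field names (scans, detected, result, version, update) are as I recall the v2 interface, so treat the details as assumptions rather than documentation. The point is that each row is the verdict of one engine build and signature set as configured on VT’s servers, not of the full product as deployed on an endpoint.

```python
# Illustrative sketch only: fetch an existing report via VirusTotal's public API v2.
# The API key is a placeholder; field names reflect the v2 JSON as I recall it.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_VT_API_KEY"                      # placeholder, not a real key
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"   # MD5 of the EICAR test file

params = urllib.parse.urlencode({"apikey": API_KEY, "resource": FILE_HASH})
url = "https://www.virustotal.com/vtapi/v2/file/report?" + params

with urllib.request.urlopen(url) as response:
    report = json.loads(response.read().decode("utf-8"))

# Each entry under "scans" is one engine's verdict as run on VT's servers:
# a particular scanning module with a particular signature update, without the
# cloud lookups, behavioural monitoring, URL filtering and other layers the
# same vendor's full product would bring to bear on a real machine.
for engine, verdict in sorted(report.get("scans", {}).items()):
    label = verdict.get("result") if verdict.get("detected") else "not detected"
    print(f"{engine:<25} {label}  (engine {verdict.get('version')}, "
          f"signatures {verdict.get('update')})")
```

Whether you run something like this or simply read the results page, what comes back is a table of static, per-engine scan results. Inferring “product X can’t detect this in the wild” from a “not detected” row in that table is precisely the leap that turns a useful lookup service into a bogus test.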

Now Imperva has published a blog, “Refuting Criticism of Our Antivirus Report”. It doesn’t directly address the points in the (ISC)2 blog or in the paper, or in Infosecurity Magazine’s article AV ‘provides insufficient protection’ claims new report, but it does mention an IT Pro article that quotes Rik Ferguson: Imperva anti-virus study “flawed”, claims IT security expert.

“Simply scanning a collection of files, no matter how large or how well sourced misses the point of security software entirely … They were not exposing the products to threats in the way they would be in the wild … To decide whether or not a threat would be blocked, it must be processed in a test in the same way it would be delivered to the victim…”

That doesn’t cover everything that’s wrong with Imperva’s methodology, but it does provide a succinct statement of what is wrong with quasi-testing using multi-scanners and with static, on-demand testing, as well as stating one of the quintessential problems that conscientious testers need to overcome, or at least acknowledge. Imperva’s Tal Be’ery responded rather weakly that “the evolving nature of security threats mean Ferguson’s recommendations may not work for every testing scenario.” While there’s probably some truth in the contention that highly targeted malware may be more successful (or successful for longer) at evading AV than promiscuous worms and viruses, that in no sense relieves the self-styled tester of responsibility for ensuring that the testing scenario is realistic. (I’ll come back to that point in another blog, in the context of a very different test.)

But let’s look at that rebuttal in Imperva’s own blog, which states that “…our report acknowledged the limitations of our methodology…” Well, it does now, in that the version now available here includes a section called Limitations, Objections and Methodology that summarizes some of VirusTotal’s objections to the misuse of its service for comparative purposes. At least some of that content, however, clearly indicates a revision of the report, since it’s a response to subsequent criticisms. Imperva claims in the report that objections to its methodology are invalid because:

  • It isn’t making comparisons between AV products. Well, of course it isn’t: it’s making a point about AV products in general, in comparison to its own services. But that doesn’t in any way address the objection that its quasi-testing methodology cannot prove that any given product is incapable of detecting any given sample in a real-life situation.
  • Its random sampling process is not flawed because it isn’t biased. Well, that’s ok then… They assure us that it “closely mimics what most enterprises or consumers encounter especially in an APT scenario.” Even if this is true – and I can’t really tell on the basis of the report’s description – it doesn’t in the least address the flaw that invalidates the whole quasi-test.

The blog goes on to claim that “…antivirus solutions are very effective in fighting widespread malware, and slightly less effective for older malware…. But for a new malware, there is a good chance it will evade the antivirus.” This is a perfectly reasonable hypothesis, but the Imperva quasi-test doesn’t prove it, and it hardly corresponds to the “damning indictment of anti-virus capabilities” noted by journalists who contacted me at the time of the original mailout of the report. (No, none of them shared that original version with me: I got the impression they were required not to share it.) It then cites an AV-Test report in support. Unfortunately it doesn’t state which report, but it does include a screenshot, which appears to show the average industry detection results in three scenarios:

  • 0-day malware attacks: avg = 87% (n=102)
  • malware discovered over last 2-3 months: avg = 98% (n=272,799)
  • widespread and prevalent malware: avg = 100% (n=5,000)

The Imperva blog describes this as a worrisome gap. Actually, it’s better than I’d expect. Maybe my expectations of AV detection performance are lower than Imperva’s.

Strangely enough, I pretty much agree with its final paragraph. At any rate, here’s a quote from F-Secure’s Mikko Hypponen that I don’t think many AV researchers will disagree with:

Antivirus systems need to strike a balance between detecting all possible attacks without causing any false alarms. And while we try to improve on this all the time, there will never be a solution that is 100 percent perfect. The best available protection against serious targeted attacks requires a layered defense.

As I said myself at some point in the past (in case you’d forgotten that I work as a consultant to the anti-virus industry):

Personally (and in principle) I’d rather advocate a sound combination of defensive layers than advocate the substitution of one non-panacea for another, as vendors in other security spaces sometimes seem to. Actually, a modern anti-virus solution is already a compromise between malware-specific and generic detection, but I still wouldn’t advocate anti-virus as a sole solution, any more than I would IPS, or whitelisting, or a firewall.

David Harley CITP FBCS CISSP
