In "That Anti-Virus Test You Read Might Not Be Accurate, and Here’s Why", Intego’s Lysa Myers draws on her experience on both sides of the tester/vendor divide (having previously worked with another vendor and then with an independent testing lab) to make some telling points about testing in general, as well as about my favourite pseudo-test by Imperva. I should apologize right now for having written so much about that test recently while somehow overlooking Lysa’s own article on the topic – Protect Your Data, Not Just Your Computer – which really ought to have been included in my timeline (an omission since addressed).
However, rather than return to the Imperva issue (at least until Imperva fulfils its promise to ‘retaliate’ – interesting choice of word, guys…), I’d like to pick up this time on a point made in Lysa’s consideration of another test, a Tech Corner review of Mac products by Thomas Reed. That review actually deserves some closer analysis. (In fact it’s been on my to-do list for months, but I haven’t been able to find the time for it so far, and in any case the Intego blog makes a lot of the points I was planning to make.)
…[Reed] states that he is testing without the on-access scanner, which is how detection would happen in most real-world situations. This is a common scenario amongst even the most highly regarded testing labs, as running on-access tests is unbelievably time-consuming. On Macs in particular, this is extra difficult as OS X has its own countermeasures against running malware, which could interfere with results.
In general, static testing, in which samples are scanned purely passively, may miss malware that would be detected if it were allowed to execute under an on-access scanner. (You may notice that this objection has a definite bearing on recent complaints about the misuse of VirusTotal reports for quasi-testing…) In fact, many modern scanners use emulation in on-demand scanning, so that a program being scanned is effectively allowed to execute in a virtualized (emulated or sandboxed) environment. Nonetheless, a test that assumes every product has that ability in every context is not a level playing field. And in the Windows environment, it’s getting hard to justify anything that falls so far short of whole-product testing. (It’s not necessarily wrong to use static testing, as long as you make very, very sure that your audience is aware of the limitations of that approach, as so many testers fail to do.)
Two salient points were subsequently made to me privately by a tester:
- There’s less differentiation between on-demand and on-access scanning results because Mac products don’t use behaviour analysis. [There’s some truth in this, though it’s an oversimplification: Mac scanners do tend to make less use of advanced proactive detection techniques than their Windows equivalents (at any rate, in the context of detection of Mac-specific threats). However, it’s incorrect to imply that they make no use of behaviour analysis.]
- A test on a fully-patched system running the latest version of OS X can’t be on-access, because the OS won’t allow malware to execute.
Yes, point 2 is pretty much the point Lysa made above. And it’s a very hard one to get around.
Sure, there are ways to ‘de-patch’ the relevant components of the OS, but that’s not real-world testing. If the OS is able to intervene because the malware is a variant that it recognizes, that’s real-world, but it’s not whole-product testing. At a time when mainstream testers are anxious to implement whole-product testing in accordance with AMTSO guidelines, it seems that it simply isn’t possible to do so on OS X. (There are analogous issues on other platforms, especially mobile devices.) And that’s OK as long as it’s clear to readers of test reviews that what they’re looking at is a compromise, not a perfect reflection of a product’s capabilities in the real world.
What I’m not seeing at the moment is a chorus of testers admitting all the limitations of their methodology. Of course, I’m not seeing a stampede of vendors admitting all the limitations of their products, either. But the shortcomings of one group do not cancel out the shortcomings of the other; they just make it harder for the consumer (individual or corporate) to come to a fully-informed buying decision.
I don’t quite know how best to resolve this tension, but it does have to be resolved sooner rather than later. Otherwise, we’ll be looking at a breakdown – in the Mac context at least, and probably in a much wider context – of the symbiotic relationship between vendors and testers, and that will hurt both parties. At the very least, the willingness of vendors to expose themselves to testing on OS X will be compromised.
David Harley CITP FBCS CISSP