Posted by: David Harley | November 18, 2015

VirusTotal Sandboxing

Complaints have been made regularly over the years about ‘testers’ who try to assess product performance by throwing samples at VirusTotal’s site to see which products flag them as malicious. In fact, I’ve been one of the most persistent critics of this quasi-testing methodology, and a few years ago wrote a paper with Julio Canto, one of the masterminds behind the VT service, about the reasons why it’s such a bad approach.

VirusTotal has moved on since then, in quite a few ways, not least in the technologies it has adopted and the way in which it uses them. While I still don’t in the least regard submission to VT as a substitute for competent product testing, the service has, for instance, adopted a form of sandboxing analogous to the way in which some anti-malware scanners and other sandbox products and services implement behavioural detection. VT has already addressed ‘Windows PE files in 2012, and Android in 2013’, and has now added ‘equal treatment for Mac OS X apps’.
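
To be clear about what a VT report actually contains, here is a minimal sketch of retrieving a file report through the public v2 API rather than the web interface. The endpoint and JSON field names are as documented for the v2 API at the time of writing, but the VT_API_KEY environment variable, the fetch_report helper and the placeholder hash are my own illustrative assumptions, not a recommended workflow, and certainly not a substitute for testing.

    # Minimal sketch: retrieve a multi-engine report for a file hash from the
    # VirusTotal public API (v2). Assumes a valid key in the VT_API_KEY
    # environment variable and the 'requests' library; the hash argument is a
    # placeholder to be replaced with the SHA-256 of the file of interest.
    import os
    import requests

    API_URL = "https://www.virustotal.com/vtapi/v2/file/report"

    def fetch_report(file_hash):
        params = {"apikey": os.environ["VT_API_KEY"], "resource": file_hash}
        resp = requests.get(API_URL, params=params)
        resp.raise_for_status()
        return resp.json()

    report = fetch_report("<sha256-of-the-sample>")
    if report.get("response_code") == 1:
        print("{}/{} engines flagged the sample".format(report["positives"], report["total"]))
        for engine, verdict in sorted(report["scans"].items()):
            if verdict.get("detected"):
                print("  {}: {}".format(engine, verdict.get("result")))

Even at this level of detail, each per-engine verdict reflects only what that engine reported to VT at scan time, usually via a command-line scanner stripped of the full product’s other protective layers, which is exactly why those verdicts don’t add up to a comparative test.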

This perhaps blurs the distinction slightly between VirusTotal’s service and other security services in a way that might cause further confusion among pseudo-testers. But that’s not VT’s fault, and I think the value added to its services more than compensates.

David Harley

Posted by: David Harley | November 16, 2015

The Game of the Name (revisited)

OPSWAT, a California-based company that includes product certification among its range of products and services, asks ‘What Can We Learn from Anti-malware Naming Conventions?’

The short answer is, not very much.

Does all this matter to the end user? Only if the user thinks it does.

David Harley

Posted by: David Harley | October 20, 2015

Schneier on Testing…

…Not, I hasten to add, on anti-malware testing, on this occasion. And since I’m not a subscriber to the Cult of Schneier – certainly when he pontificates on the shortcomings of the anti-malware industry – I would have examined any thoughts he had expressed on that specific topic with enough salt to hand for several large pinches.

Nonetheless, his essay for CNN on how the VW scandal could just be the beginning makes some very good points. The title is misleading, by the way, though of course it probably wasn’t Schneier’s own choice (the title he uses in his newsletter is ‘Volkswagen and Cheating Software’): early in the article he makes the point that ‘Cheating on regulatory testing has a long history in corporate America.’ (And that isn’t, of course, only true of the US.) And I haven’t spent all these years monitoring security product testing without becoming aware of products that have crossed the line between legitimate optimization and outright cheating.

Emissions testing and security product testing have more in common than you might think. While there are plenty of security product tests by people who reviewed cameras last week and refrigerators the week before, there is a moderately healthy security product testing industry heavily populated by people who know quite a lot about what they’re testing: the same is (or at any rate should be) true of government agencies that test (or at least have input into testing) products that have to conform to safety and other standards (and in some contexts that includes security software).

Schneier observes that:

We’re ceding more control of our lives to software and algorithms. Transparency is the only way verify [sic] that they’re not cheating us.

In the age of the Internet of Things, there’s more to safety regulation than ensuring that electrical wiring and the height of stair risers meets standards, though software security is not yet taken into account nearly as much as it ought to be. But we shouldn’t just be thinking about Things: to take just one example, social media services are notoriously cavalier with our behavioural (and other) data, yet less than transparent when it comes to disclosing the algorithms on which their marketing of our data depend.

I note, by the way, that AMTSO’s statement on the recent round of cheating in anti-malware tests on which I commented here and elsewhere has not survived the refurbishment of the AMTSO web site (unless it’s been moved somewhere I failed to find it). Hopefully that’s a matter of housekeeping rather than AMTSO putting its head back behind the parapet at any hint of contention.

David Harley

Posted by: David Harley | October 15, 2015

AV-Comparatives File Detection Test

I don’t follow individual anti-malware tests as closely as I used to, but I notice that AV-Comparatives has released another of its File Detection Tests.

The testers state that while no samples were executed in the course of the test, cases were considered where malware would be recognized on-access but not on-demand. Well, it’s true that executing a file is not the only way to access it, and the difference between on-access and on-demand scanning is less clear-cut in modern top-tier security products. Perhaps we should be revisiting those terms in order to establish a reasonably standard definition.

AV-C does acknowledge that the test only looks at one aspect of product functionality. And I like the fact that results in the detection test are balanced by a false positive test, to lessen the risk that a product will get a high score by simply flagging all unknown files as malicious. So potentially quite a useful test, despite its limitations.
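
As a toy illustration of why the two numbers need to be read together (the figures below are invented for the purpose, not AV-Comparatives’ data), consider how a ‘flag everything unknown’ product compares with a more conservative one:

    # Toy illustration with invented numbers: detection rate alone rewards a
    # product that flags every unknown file; the false-positive rate exposes it.
    def rates(flagged_malware, total_malware, flagged_clean, total_clean):
        return (flagged_malware / total_malware,   # detection rate
                flagged_clean / total_clean)       # false-positive rate

    # 'Flag everything' scanner: perfect detection, catastrophic FP rate.
    print(rates(1000, 1000, 5000, 5000))   # (1.0, 1.0)

    # More plausible product: slightly lower detection, negligible FP rate.
    print(rates(985, 1000, 3, 5000))       # (0.985, 0.0006)

The first product tops the detection table and would be unusable in practice; pairing the detection score with a false-positive score is what keeps that kind of gaming out of the results.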

David Harley

Posted by: David Harley | October 6, 2015

Dennis Technology’s 2015 report

Dennis Technology’s 2015 report on Anti-Virus Protection and Performance is now available. I’ve posted some initial thoughts on the IT Security UK blog.

David Harley
