Posted by: David Harley | April 22, 2016

EICAR Call for Papers

I haven’t had much to do with the EICAR conference in recent years – well, it’s nearly 18 months since I went to any conference, and even longer since I went to Virus Bulletin, rather to my own surprise – but I note that the Call for Papers is now on the web site. I feel obliged to note these things when I’m on the Review Team.😉 The announcement says that:

The 24th Annual EICAR Conference will be held on October 17th and 18th, with a pre-conference program on the EICAR Minimum Standard, at the Nuremberg Fairground in the IT-SA conference facilities.

The conference theme is ‘Trustworthiness in IT security products’. Which no doubt has something to do with the EICAR Trustworthiness Strategy, which is a topic I may well come back to in due course (hence its inclusion on this site).

More information at http://www.eicar.org/17-0-General-Info.html.

David Harley

 

Posted by: David Harley | April 22, 2016

AV-Comparatives test security product support

One aspect of security software that isn’t often tested is the quality of its support. AV-Comparatives, however, has grasped that particular nettle, publishing support evaluation reports on security vendors with support desks in the UK and in Germany. The reports are available here, and I commented at more length for Infosecurity Magazine: Testing Anti-Malware Support.

I like the idea, but would like to see AMTSO consider generating some guidelines.

David Harley

Posted by: David Harley | April 16, 2016

AV-Comparatives Testing with AMTSO List

The Austrian security product testing lab AV-Comparatives has announced what it describes as ‘the first public industry test using the AMTSO Real Time Threat List (RTTL).’

The Anti-Malware Testing Standards Organization describes the RTTL as follows:

The Real-Time Threat List (RTTL) is a repository of malware samples collected by experts from around the world. The repository is managed, maintained and secured by the Anti-Malware Testing Standards Organization (AMTSO).

Ah, I hear you say, isn’t that what the WildList Organization does? Well, yes, but RTTL is supposed to do it better, according to the abstract for a 2013 presentation for Virus Bulletin by Righard Zwienenberg, Richard Ford, and Thomas Wegele: The Real Time Threat List.

The abstract states:

In particular, the change in the nature of online threats has left the WildList trailing the ‘real-time’ threat, making it unsuitable for effective ‘in-the-wild’ testing.

And that is undeniably true, as I and others have said many times. For instance here:

It may be that until we see methodologically sound ‘testing of testers’ we will have to accept that there is still a place for ‘best endeavours’ testing based on sample sets that lack freshness but are known to have been validated. Clearly, though, certification based entirely on detection of WildCore samples can no longer be regarded as a sufficient guarantee of a product’s effectiveness. It’s fortunate, then, that the reputable organizations making use of WildList testing are already using or moving towards methodologies that hold onto what is good about the WildList while adding functionality and value by bolstering it with fresher samples and more dynamic technologies.

The VB presentation on RTTL here goes just a little further than the 2013 abstract:

Something new, fast and “accurate” had to be created, which eventually resulted in the conception of the Real Time Threat List…The idea of the Real Time Threat List is to share new threats with additional meta-data incorporated into the system.

AMTSO’s terse FAQ doesn’t really add much in the way of information. There has been plenty of discussion at AMTSO meetings, of course, but AMTSO seems to be clinging to its reluctance to share information outside those meetings, even with AMTSO members, though I keep hearing rumours of an occasional chink of light breaking through at least one of those walls. We shall see.

AV-Comparatives is, as you’d hope and expect, reasonably forthcoming about its methodology in the test report. It took the ‘Top500’ samples from RTTL as of 9th March 2016, and filtered out the 247 samples that didn’t display malicious behaviour in its sandboxes, in the expectation of removing false positives and ‘possibly unwanted’ samples. It used its own ‘Real World Protection Framework’ to execute each sample simultaneously against security programs configured with default settings and cloud access. All vendors were, in this instance, AMTSO members. I don’t object to the methodology in principle, since the description should tell any reasonably well-informed reader enough to evaluate the validity of the tests. But what does it tell us about the validity of the RTTL? Well, it tells us that the Top500 isn’t necessarily exhaustively validated or categorized.
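The filtering step described above can be sketched roughly as follows. To be clear, this is a hypothetical illustration of the general approach, not AV-Comparatives’ actual tooling; the sample identifiers and verdict format are assumptions of mine.

```python
# Illustrative sketch: reduce a candidate sample set to those samples
# that actually exhibited malicious behaviour when detonated in a
# sandbox, discarding likely false positives and 'possibly unwanted'
# samples. (Hypothetical code, not AV-Comparatives' framework.)

def filter_test_set(samples, sandbox_verdicts):
    """Keep only samples whose sandbox run showed malicious behaviour.

    samples: list of sample identifiers (e.g. hashes).
    sandbox_verdicts: dict mapping sample id -> True if malicious
        behaviour was observed, False otherwise (assumed format).
    """
    kept = [s for s in samples if sandbox_verdicts.get(s, False)]
    discarded = len(samples) - len(kept)
    return kept, discarded

# Example mirroring the proportions in the report: 500 candidates,
# of which 247 show no malicious behaviour and are discarded.
samples = [f"sample{i:03d}" for i in range(500)]
verdicts = {s: (i >= 247) for i, s in enumerate(samples)}
kept, discarded = filter_test_set(samples, verdicts)
print(len(kept), discarded)  # 253 kept, 247 discarded
```

The point of the sketch is simply that the tester, not the list maintainer, does the final validation: anything the sandbox can’t confirm as malicious is dropped before the protection test runs.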

And that isn’t necessarily a problem. An advantage of WildCore is that it is a notionally validated sample set representing, as the WildList Organization FAQ put it, ‘a set of replicated virus samples that represents the real threat to computer users.’ The corresponding disadvantage is that by definition the samples aren’t ‘fresh’, because they’ve been around long enough to go through some sort of validation process. That was fine back in the 90s, when the pace of sample release and discovery was far slower. I should also mention that back then, malware was almost entirely viral, whereas most malware samples nowadays aren’t viruses: that is, they don’t self-replicate. So maybe requiring the testing organization to do its own validation is an acceptable trade-off, as long as RTTL access doesn’t extend to all those amateur (or amateurish) testers who are unfamiliar with the concept of sample validation.

But the methodology AV-Comparatives describes tells us nothing further about the RTTL itself, and AMTSO remains uncommunicative. It seems to me that by not expanding on the public details of RTTL, AMTSO is asking us simply to take its word for it that its collection is a Good Thing. And in fact, it has a lot going for it, potentially at least. But there are plenty of testers who are still prepared to say that their test sets are better than anyone else’s without feeling the need to supply evidence.

AMTSO, of course, is no more a testing organization than it is an anti-malware vendor. But if it’s providing samples to testers without information on the conditions under which they are distributed, it might as well be one, and it would certainly be nice to know more about the underlying collection methodology.

Don’t get me wrong: while my formal involvement with AMTSO is at this point minimal, I remain sympathetic to its agenda and intentions, and I do believe that testing is in much better shape now than it would be if the organization had not existed. But there are plenty of other people who are less sympathetic – not to say actively hostile. Even if the information above were easily available to outsiders, they would remain suspicious of the apparently close – not to say symbiotic – relationship between vendors and testers within the organization. Sadly, that relationship is in my experience often pretty stressed, inside and outside AMTSO. However, also in my experience, members on both sides of the vendor/tester divide are generally in agreement on the need to raise testing standards and accountability for the benefit of the users of security products, and are committed to working towards achieving higher standards.

AMTSO’s caution as regards sharing information is understandable, given the mauling it has received on occasion from the media (and from vendors and testers for whom too much accountability might be uncomfortable). But in my opinion, it was too ready to give up on transparency.

David Harley

Posted by: David Harley | February 26, 2016

VBWeb Comparative Test

Virus Bulletin, whose tests have played an important part in shaping anti-malware testing over the years, has published an important new test addressing products that block malicious HTTP traffic. VB has been working on building the test for several years – an indication of the difficulty of implementing a useful test methodology. A number of products were tested privately, but only Fortigate was tested publicly (and came out looking pretty good). Make no mistake, though: I think this is a significant test, and I expect it to attract more participants for the public test sooner rather than later.

There’s a lengthy report by Martijn Grooten and Adrian Luca here: VBWeb comparative review February 2016

David Harley

Posted by: David Harley | February 8, 2016

The Malware Museum: another take on emulation

I’ve been feeling pretty old recently. Well, I am old: at any rate, past the age where anyone with half a life would be spending their waking hours walking the dog or practicing the ukulele.

Right now, though, I feel particularly old. That’s because I’ve been reminded several times in the past few days of those halcyon days when malware meant (mostly) viruses, discussions about whether worms were viruses, and whether the correct plural is virii. It isn’t, but I rather liked the explanation that it’s one virus, two virii, three viriii, four viriv and so on – hat tip to my friend and sometime co-author Robert Slade for drawing my attention to that one. Though if you’re creating a virus clock, it should be four viriiii but five virv. (I apologise to whoever pointed out to me that Roman-numeral clockfaces use IIII, not IV – I can’t remember who it was!)


I can’t actually remember anyone doing a virus clock, but anti-virus companies did, in the 1990s, offer various awareness-raising goodies such as calendars with the dates on which payloads were triggered, virus simulations, and so on. (Whether the intention was to raise awareness of malware or of anti-virus products is moot.) And while part of my present state of depression is because I’ve been getting rid of virus-related books, magazines and even hard-copy conference proceedings, it’s also because Mikko Hypponen has revisited that era with the announcement of the Malware Museum, ‘a collection of malware programs, usually viruses, that were distributed in the 1980s and 1990s on home computers.’ Though this isn’t an opportunity to top up your collection of malware so that you can test whether security products detect obsolete malware. Destructive code has been removed, and the visual effects of malware such as Cascade, Casino and Ambulance (see screenshot below) are displayed in JavaScript using DOSBox emulation.

[Screenshot: the Ambulance virus payload]

David Harley
