Most computer journalists aren’t experts in security and cybercrime. Well, some undoubtedly do have expertise, and bear listening to even when they’re wrong. ;-) Some have strong opinions that they’re not afraid to express, but leaven them with research – they may not go deep-dive hands-on, but they seek out a range of people who know the field about which they’re writing.
Sometimes, of course, they only use quotes that confirm their own opinions. Perhaps that’s not so different to what often happens in academic papers, but readers are entitled to expect a balanced view from reportage, and tend to assume that even opinion pieces pay some attention to opposing opinions. And sometimes their research consists mostly of regurgitating press releases and unverified statistics.
Imperva’s recent quasi-test ‘proving’ that anti-malware products ‘are rubbish’ has been thoroughly debunked by my colleague David Harley (There’s Testing, Then There’s VirusTotal) and others, such as Trend’s Rik Ferguson and ESET’s Righard Zwienenberg, to the point where Imperva did some backtracking while trying to deflect attention from the suspect methodology of its study. Yet the New York Times has apparently not managed to catch up with the subsequent discussion, telling us that “The antivirus industry has a dirty little secret: its products are often not very good at stopping viruses.” Nicole Perlroth asserts that:
Consumers and businesses spend billions of dollars every year on antivirus software. But these programs rarely, if ever, block freshly minted computer viruses, experts say, because the virus creators move too quickly.
Well, no-one in the AV industry claims that AV provides anything like absolute protection against malware, so that’s hardly a secret. But then I think we’ve already established that Imperva’s statistics are – well, let’s not say rubbish – but 5% of new malware?
That’s hardly borne out by the AV-Test screenshot that Imperva itself claims bears out its conclusions, as Mr. Harley has already pointed out:
This appears to show the average industry detection results in three scenarios:
- 0-day malware attacks: avg = 87% (n=102)
- malware discovered over last 2-3 months: avg = 98% (n=272,799)
- widespread and prevalent malware: avg = 100% (n=5,000)
And, sure enough, Perlroth repeats the oft-heard misconception that:
antivirus makers must capture a computer virus, take it apart and identify its “signature” — unique signs in its code — before they can write a program that removes it.
Well, it’s true that once a malicious program has infected a machine, it can be difficult to remove safely (though generic disinfection is by no means impossible or even uncommon: its effectiveness depends on a number of factors). However, antivirus programs – if we must use that rather misleading term – are less about removal/disinfection than about stopping malware from infecting in the first place, and the idea that a security program can only detect known malware has been incorrect for decades. Perhaps the NYT would care to look up the terms heuristic analysis, behaviour blocking, sandboxing, behaviour analysis, whitelisting, integrity checking, traffic analysis, and emulation, among other approaches that a security program might use to detect possibly malicious activity. Not all AV scanners use all these techniques, but to suggest that AV is totally dependent on static signatures is at best clueless.
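To make the distinction concrete, here is a deliberately toy sketch (not any real product’s engine – the patterns, API names used as strings, and weights are all invented for illustration) contrasting static signature matching, which can only catch malware it has already seen, with a crude heuristic score that can flag a never-before-seen sample on its suspicious traits alone:

```python
# Toy illustration only: contrasts static signature matching with a
# simple trait-weighting heuristic. Patterns and weights are invented.

SIGNATURES = {b"EVIL-MARKER-1337"}  # hypothetical known-malware byte pattern

SUSPICIOUS_TRAITS = {                # hypothetical heuristic indicators
    b"CreateRemoteThread": 2,        # commonly abused for code injection
    b"RegSetValue": 1,               # possible registry persistence
    b"http://": 1,                   # hard-coded remote URL
}

def signature_match(sample: bytes) -> bool:
    """Detects only samples containing an exact, already-known pattern."""
    return any(sig in sample for sig in SIGNATURES)

def heuristic_score(sample: bytes) -> int:
    """Sums weights of suspicious traits; needs no prior knowledge of the sample."""
    return sum(w for trait, w in SUSPICIOUS_TRAITS.items() if trait in sample)

def heuristic_flag(sample: bytes, threshold: int = 3) -> bool:
    """Flags the sample if its combined suspicion score crosses a threshold."""
    return heuristic_score(sample) >= threshold

# A 'freshly minted' sample with no known signature:
new_sample = b"...CreateRemoteThread...RegSetValue...http://bad.example..."
print(signature_match(new_sample))  # False: signature scanning misses it
print(heuristic_flag(new_sample))   # True: heuristics still flag it
```

Real engines are vastly more sophisticated (and prone to false-positive trade-offs this toy ignores), but the point stands: detection need not wait for a captured sample and an extracted signature.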
The NYT article does observe that “Imperva, which sponsored the antivirus study, has a horse in this race”, but that hasn’t stopped Perlroth uncritically accepting Imperva’s highly flawed and far from impartial analysis of a competing technology. The Register’s Richard Chirgwin also observed that:
Imperva suggests enterprise security should devote more attention to detecting aberrant behavior in systems and servers. Which, unsurprisingly, happens to be the company’s own specialty.
But he didn’t question the results of this ‘test’ either. As Kaspersky’s Roel Schouwenberg succinctly put it:
Criticizing the AV industry is fine. But do it using proper research/tests. Repeat 2013 times: VirusTotal is not a testing tool.
Paul Wagenseil did rather better for Tech News Daily in Study Faulting Anti-Virus Effectiveness May Itself Be Flawed, giving Rik Ferguson, Graham Cluley, and an unnamed spokesman from Kaspersky a chance to comment (very tellingly: well said, guys), and also noting Imperva’s curious assertion that 5% and 87% are much the same number. Let’s hope Chirgwin and Perlroth will find time to read his article.
Old Mac Bloggit
(who readily admits to working – like Mr Harley – as a consultant to the security industry, albeit pseudonymously)