On the Internet nobody can hear your embarrassment, something a number of anti-virus companies have had reason to feel thankful for this week.

The venerable VB100 anti-virus tests (published by Virus Bulletin) showed up a few of the industry's bigger names – Sophos, Kaspersky, Fortinet, Trend Micro, CA Home, and PC Tools – failing them for a variety of detection ailments, some minor, a few less so.

Kaspersky Lab was annoyed enough with itself to issue a statement of ample rear-end-covering proportions:
"The sample we missed is a program which self extracts, and at the time of testing, the on access scanner was not configured to detect such samples by default. This default setting was applied as not to incur the risk of performance overheads. At the time of release of version 7.0, we believed that the original default settings provided an optimal balance between performance and security.

Sophos had something similar to say in an email to Techworld:

“Sophos has logged zero customer issues with respect to this virus. SophosLabs first released protection against this particular virus strain in September and its spread in the wild is considered to be limited. Following the VB100 failure, we have updated the identity to provide improved protection. This is the first time since April 2002 that Sophos has failed the VB100 test because of a lack of detection – the last detection failure on Windows was more than seven years ago, in July 2000,” said Sophos' Carole Theriault.

And yet there is an air of unreality about these tests, which are based on samples chosen from the WildList, a large collection of viruses that changes monthly and is generally accepted by the industry as representative of what is actually out there. Except that this list covers only a subset of malware, which has since grown far beyond the WildList's viruses, worms, polymorphic variants, and other assorted creations.

What about the sea of Trojans, some of which do mysterious things such as simply calling out to remote code, or rootkits (inherently hard to assess), or adware and spyware? These are perhaps the biggest risks around these days, but they don't sit easily alongside the more conventional threats on the WildList.

These threats are incredibly hard to categorise, and the fact that a product achieves great success in stopping the Trojans circulating one month doesn't mean it would do as well against the following month's WildList. This is the nature of modern malware – it isn't easy to test for, and certainly not with a signature scanner. The answer is heuristics, some say, but they were saying that in 1990, and that didn't stop the problem growing to its current gargantuan proportions.
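For the non-specialist, the distinction is roughly this: a signature scanner flags a file only when its bytes match a pattern taken from a known sample, while a heuristic flags anything that merely looks suspicious. The toy sketch below (purely illustrative – the byte patterns and "suspicious trait" rules are invented, and real engines are vastly more sophisticated) shows why a sample that changes its bytes from one month to the next defeats the former but not necessarily the latter.

```python
# Toy illustration only - not how any real anti-virus product works.
# A "signature" scanner matches exact byte patterns from known samples;
# a "heuristic" scanner scores loosely suspicious traits instead.

KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef"}   # hypothetical patterns from last month's samples

SUSPICIOUS_TRAITS = [                       # crude, made-up heuristic rules
    lambda data: b"CreateRemoteThread" in data,           # calls out to remote code?
    lambda data: data.count(b"\x00") < len(data) // 20,   # densely packed/encrypted payload?
]

def signature_scan(data: bytes) -> bool:
    """Flags only byte-for-byte matches - a repacked variant slips straight through."""
    return any(sig in data for sig in KNOWN_SIGNATURES)

def heuristic_scan(data: bytes, threshold: int = 2) -> bool:
    """Flags anything exhibiting enough suspicious traits, known sample or not."""
    return sum(trait(data) for trait in SUSPICIOUS_TRAITS) >= threshold
```

The only point of the sketch is that the second approach can, in principle, catch a sample it has never seen before – at the cost of guesswork, which is where the argument has been stuck since 1990.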