The holiday season served me a nice slice of humble pie this year. A friend wanted to send some confidential information and my encryption tools were out of date. But as soon as I updated them, my firewall essentially fell over, complaining that the new application - one normally relegated to local file-based activities - was attempting numerous outbound connections. "0wn3d," as the kids would say. I pulled the plug and shut down to ponder the error of my ways.

"You've been had!"

Though it would not have prevented the incident, my first mistake, I decided, was checking only the hash that showed the file was undamaged, rather than verifying the digital signature of the downloaded binary. I started digging to see what else might have been wrong with the software, or on the system, to trigger the activity connected with it.
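
For what it's worth, the difference between the two checks is easy to see in a quick Python sketch. The file names, digest and signature file below are placeholders, not the actual tool I downloaded:

    import hashlib
    import subprocess

    def sha256_of(path, chunk_size=1 << 20):
        # Compute the SHA-256 digest of a file, reading it in chunks.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical file names and digest; substitute the actual download,
    # the digest published by the project, and the author's detached signature.
    download = "encryption-tool-setup.exe"
    published_sha256 = "0123abcd..."

    # Step 1: the hash only proves the file wasn't damaged in transit.
    if sha256_of(download) != published_sha256.lower():
        raise SystemExit("Digest mismatch: damaged (or tampered) download")

    # Step 2: the detached signature ties the file to the author's key,
    # something a hash alone can't do - whoever swaps the file can swap
    # the posted hash just as easily.
    result = subprocess.run(["gpg", "--verify", download + ".sig", download],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise SystemExit("Signature verification failed:\n" + result.stderr)
    print("Hash and signature both check out.")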

I went hunting for the real culprit among the system's recent changes, and had a short dialogue with the software's author. It seemed that either the application's code repository had been breached or some resident malware already on my system had targeted that specific software.

I rebooted the system to its alternate Linux partition, mounted the Windows C: drive, and had a look at the offending software and related data. (Those without a dual-boot setup could do the equivalent with a bootable security-focused Linux distribution - an excellent one that includes forensic tools is BackTrack, the Slackware-based union of the older WHAX and Auditor projects).
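
If you're curious what that offline poking-around looks like in practice, here's a rough Python sketch of the idea: mount the Windows partition read-only, then pull out recently modified files for a first pass. The device name and mount point are assumptions; adjust them to your own disk layout:

    import os
    import subprocess
    import time

    # Hypothetical device and mount point - adjust for the actual disk layout.
    windows_device = "/dev/sda1"
    mount_point = "/mnt/windows"

    # Mount the NTFS partition read-only so the inspection itself can't
    # disturb timestamps or execute anything on the Windows side.
    os.makedirs(mount_point, exist_ok=True)
    subprocess.run(["mount", "-t", "ntfs-3g", "-o", "ro",
                    windows_device, mount_point], check=True)

    # Rough triage: list files modified in the past week, newest first.
    cutoff = time.time() - 7 * 24 * 3600
    recent = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue
            if mtime >= cutoff:
                recent.append((mtime, path))

    for mtime, path in sorted(recent, reverse=True)[:50]:
        print(time.ctime(mtime), path)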

However, what I found was less than informative, since virtually every avenue led to potentially infected operating system components with little or no documentation - and no way to manually verify binaries without long and potentially licence-violating duplication or downloading of code.

Without more information regarding operating system components, I was out of luck. After poking at the damaged Windows system and looking at various pieces of dormant code and recent system changes, it became clear that the effect of a mixed open-source and closed-source system was to increase my risk.

Meanwhile, work called, and the philosophical aspects of this logic puzzle fell by the wayside. The path of least resistance was to blow off some of the dust in the Ubuntu partition, catch up with updates I'd deferred and get on with my life. Within a week, I decided to make it permanent: I'm done with Windows as a base operating system.

Cracked foundation, broken Windows

That's not to say that I'm jumping on an anti-Windows bandwagon - just recognising what a tool's good for and when to set it down.

Here's a non-digital example. The folks at Stanley Works make a multipurpose demolition tool called the "Fat Max Fubar" (allegedly the "Functional Utility Bar") that does the work of a half-dozen other tools. It feels good in your hand when you're being hostile to inanimate objects, but it's no good for installing a sink or hanging a picture - even if it does make a decent framing hammer.

By trying to be all things to all people, and incorporating all models of software development while eschewing all but a very narrow band of responsibility for functional or security problems, Windows security has become the other definition for "fubar."

Cutting through the security marketing fluff (which often describes antivirus software - incorrectly - as a preventive measure), it's clear that the fundamental malware-handling model in Windows is a disaster. An active process that reviews every other peer process or data file on a system to see if bad patterns of code or behaviour are already present is the very definition of closing the barn doors after the livestock have escaped.
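
To make the point concrete, here's a toy sketch of that detection model in Python - made-up byte patterns standing in for a real signature database, and no resemblance to how any particular product is engineered:

    import os

    # Toy "signature database": made-up byte patterns, not real malware indicators.
    SIGNATURES = {
        b"EVIL_PAYLOAD_MARKER": "Example.Trojan.A",
        b"\x90\x90\x90\x90\xcc\xcc": "Example.Shellcode.B",
    }

    def scan_file(path):
        # Return the names of any known patterns found in a file.
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            return []
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

    def scan_tree(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                for hit in scan_file(path):
                    print(path, "matches", hit)

    # By the time this loop finds anything, the malicious code is already
    # sitting on disk - the scan reports an existing infection rather than
    # preventing one.
    scan_tree("/home")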

There's no good model for quarantining binaries or active content, code-signing is only a stopgap, and the only reasonable way to "clean" a system infected with anything other than the most benign of macro viruses is to break out the installation disks - if the manufacturer was kind enough to provide them. Yet Windows still leans on this rickety security scaffolding.

Rearranging the deck chairs can be useful

There's a difference between planning to fail and planning for failure. Treating a project as if it were inexorably headed for disaster is an effective way to ensure failure; planning for failure, on the other hand, is a sound approach to system and data planning.

On Windows, common wisdom indicates that it's good to have preventive controls such as a firewall, reactive elements such as antivirus or spyware scanners, and modes of recovery ranging from rollback tools in the operating system to heterogeneous multi-boot partitions and offline data backups. Any personal or shared server system that provides functional and security resilience as well as a few modes for recovery is a good thing to be running when something inevitably, eventually, invariably does go wrong.

Rearranging the deck chairs on the Titanic is a fine approach if it gets you to the control room or lifeboats quicker.

How does that translate to minimising risk between your applications and your choice of underlying operating systems? As I noted during my breach incident, the increase in risk seemed to stem from a mix of open- and closed-source tools. It's a simple logic problem, like a two-by-two chi-square table: if both a questionable application and the operating system are closed-source, then one defers all functional and security problems to the vendors. If both the application and the operating system are open-source, problems are handled by a distribution team, community, or author for each component - typically in that order.

A mixed setup with closed-source applications on top of an open-source operating system would seem to be a troublesome combination, as operating system vulnerabilities can be openly identified by anyone, while local threats may lurk in closed code. However, as much as some Microsoft apologists worry about the dangers of not hiding security bugs, this turns out to be a reasonable choice.

The real trouble comes with open-source applications running on a closed-source operating system: the threats are more easily identified because the applications are open, but the operating system's vulnerabilities remain unknown. In simple terms, an open-source apps/closed-source OS setup boils down to a one-to-many relationship, where the one (threat) is controlled but the many (vulnerabilities) are not. That's not good, and even if Windows Vista weren't plagued by functionality and usability problems that make XP look shiny and new again, that imbalance alone would be reason enough to dismiss it as an underlying OS.

Which way?

As you see from the breakdown above, if there's going to be a mix of open- and closed-source code in your system, the underlying operating system ought to be open, allowing for easier identification and handling of vulnerabilities. In that configuration, the existence of unknown threats can be accepted not only as a result of the closed-source nature of some applications, but also as a result of the practical reality that malicious entities don't tend to make public announcements about attack vectors before attempting to exploit them.

Following this logic, Windows applications, virtual machines or any other closed-source sources of risk ought to live on top of an open-source host system that allows deeper inspection in case of a problem, even if flaws in those closed components turn out to be more frequent or more severe. Better to run out of information and conclude that a dubious component has to be removed than to run out of information about the vulnerabilities of a whole system.

Linux fits this risk-reduction model nicely, and friendly distributions are now the norm. While techno-masochists can still install Slackware by following instructional arcana found using Gopher, or compile Gentoo from its component electrons, Ubuntu and a few other distributions provide an admirably friendly and simple user experience, even for odd hardware and less-capable systems. More important, though, Linux is a reasonable host for both open- and closed-source applications, and even for Windows applications in a virtual machine or directly using Wine.

Why not a shiny new MacBook? OS X marginally fits the definition of an open-source base (though it's moved back in the direction of closed-source), and it's surprising how many security researchers and professionals I see using OS X simply because common security issues are taken care of in the base OS, or easily managed. It's conducive to tracing security issues at several levels, and is a reasonably good host to guest operating systems and applications without undue increases in risk. It's not a bad option at all.

Would either common setup prevent or protect me the next time I choose poorly and install a piece of software containing a Trojan, or that triggers some security flaw already on my system? No, but either would - at least - make it easier to isolate the problem, trace its source, eradicate it, and move on. There's no perfect security or recovery solution, but sometimes rearranging the deck chairs does make it easier when bad things happen.