It has been more than a month since the Blaster worm hit, and my company is still having problems. The main one is that we have thousands of desktops, and my security team and I don't have a robust, fully automated way to identify and track the patch status of each one.

This situation is a nuisance, if not a crisis. We know what to do and we've communicated the need to keep updates current to all employees, but things don't always happen the way we'd like.

We've made some configuration changes on our routers to limit the damage as much as possible, but in some instances those changes have blocked legitimate business activity. In some of those cases, we've had to remove the added access-control lists, because keeping revenue-generating functions running matters more than preventing the Blaster worm from propagating.

Another challenge I face is political. At other companies where I've worked, the desktop support group was responsible for virus removal and prevention, while the IT security group tracked down the source of any malicious activity. I never wanted the security team to be the focal point for virus eradication within the organization, but detection and eradication have morphed into IT security department responsibilities. I'd like to change that, but if I start trying to shift responsibilities at this stage of the game, I'll only generate resentment from other organizations.

So, my staffers - all four of them - are stuck dealing with virus updates and patches for nearly 10,000 desktops. That said, we are making progress. Hopefully, within the next week we'll have cleaned up the environment completely - until the next variant comes along.

Other than the Blaster fallout, this week was fairly quiet. I haven't started looking for a replacement for our recently departed security engineer because management has asked that I hold off hiring until the end of the year. In the meantime, I have authorization to hire a consultant if needed.

I've had good luck with consultants in the past. The only problem is that they eventually leave. At that point, if the consultant hasn't generated the proper amount of documentation and transferred critical knowledge to the staff, we're left with an unmanageable project.

In one case, I hired a consultant to build intrusion-detection sensors. No one was working with the consultant, however, and when he left, he didn't show us the configuration or give us the passwords to the system. Luckily, we were able to call him back in and he provided that information free of charge. We now have very strict agreements that specifically identify documents that must be produced each week so that we don't run into such problems in the future.

Keeping up to date
I decided to take advantage of the lull in activity within my department to review our server baseline images to make sure we're keeping up with patches and other configuration issues that affect both our "jump-start" infrastructure for new installations and the retrofit of our existing systems. I also examined our network infrastructure to ensure that the same controls are in place for our networking gear.

Within the department, we keep track of advisories and patch releases from our major operating system, hardware and software vendors, typically by subscribing to various automated advisory services and visiting certain Web sites. We then forward the advisories to the relevant departments for implementation. For example, Cisco last week re-released an advisory, first announced in July, about the potential for denial-of-service attacks on its Catalyst switches. I forwarded the advisory to our network group, but I never followed up to ensure that the appropriate software updates were installed on the switches. The network engineering department maintains a list of all our routers and switches, so I selected a group of switches at random and asked one of the network engineers to capture the configuration data by issuing the "show running-config" command and to e-mail me the results.

Unfortunately, this clunky method is the quickest way for me to verify that our switches aren't susceptible to this vulnerability. At some point, we will invest in patch management software, but that didn't make this year's budget.
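Even the clunky method can be partially automated, though. Here's a minimal sketch of the kind of triage script I have in mind. It assumes each captured config has been saved to a text file, one per switch, in a local directory; the directory name and the list of vulnerable releases are placeholders, and the real list would come from the Cisco advisory itself.

```python
#!/usr/bin/env python3
"""Quick triage of captured switch configs.

A sketch, not production tooling: assumes each "show running-config"
capture (including the version banner) was saved as one .txt file
per switch in a local directory.
"""
import re
from pathlib import Path

# Hypothetical example values; substitute the releases named
# in the actual Cisco advisory.
VULNERABLE_VERSIONS = {"7.5(1)", "7.6(1)"}

CONFIG_DIR = Path("captured-configs")  # assumed layout: one .txt per switch

version_re = re.compile(r"Version\s+([\w.()]+)")

for config_file in sorted(CONFIG_DIR.glob("*.txt")):
    match = version_re.search(config_file.read_text(errors="replace"))
    version = match.group(1) if match else "unknown"
    status = "NEEDS UPGRADE" if version in VULNERABLE_VERSIONS else "ok"
    print(f"{config_file.stem:20} {version:12} {status}")
```

It's crude, but running something like this against a folder of e-mailed captures beats eyeballing each config by hand.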

After reviewing the list of switches, I found that quite a few hadn't been upgraded, but mainly because Cisco hadn't yet supplied the maintenance release for one particular switch model. In lieu of the upgrade, our network engineers took mitigating steps, configuring access-control lists on the switches so that only legitimate management workstations can connect to the administrative port.
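Those ACLs are worth spot-checking too, and the same pile of captured configs makes that easy. The sketch below just confirms that the expected lines appear in each capture; the ACL number and management-station address are hypothetical, and the real strings would be whatever our network group actually configured.

```python
#!/usr/bin/env python3
"""Spot-check that the mitigating ACL is present in each capture.

Same assumptions as the triage script: saved config captures,
one .txt file per switch, in a local directory. The expected
lines below are placeholders for the group's actual config.
"""
from pathlib import Path

CONFIG_DIR = Path("captured-configs")
EXPECTED_LINES = [
    "access-list 10 permit host 10.1.1.50",  # hypothetical mgmt station
    "access-class 10 in",                    # ACL applied to the vty lines
]

for config_file in sorted(CONFIG_DIR.glob("*.txt")):
    text = config_file.read_text(errors="replace")
    missing = [line for line in EXPECTED_LINES if line not in text]
    if missing:
        print(f"{config_file.stem}: MISSING {missing}")
    else:
        print(f"{config_file.stem}: ACL in place")
```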

I also checked on a recent vulnerability in the Solaris 9 file transfer protocol server software. The FTP server is based on WU-FTPD, a file-transfer program from Washington University in St. Louis. The program has a buffer overflow vulnerability that an attacker can exploit to gain unauthorized root-level access, and it can be remedied with a patch. I reasoned that if I checked our FTP servers for the presence of the patch and found it consistently installed, I could assume that the systems administrators are attending to patch management on a regular basis. Once we get our hands on some patch management software, we hope to automate the auditing of our infrastructure. At this point, I'm looking for suggestions, but there's no sense moving forward until we have a budget in place.
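In the meantime, the FTP-server spot-check itself is easy to script. This is a minimal sketch, not anything we run in production: it assumes SSH access to the servers and uses Solaris's "showrev -p" command to list installed patches. The host names and the patch ID are placeholders; the real patch number would come from Sun's advisory for the WU-FTPD overflow.

```python
#!/usr/bin/env python3
"""Check a list of Solaris FTP servers for a specific patch.

A sketch under stated assumptions: SSH access to each host, and
"showrev -p" available to list installed patches. Hosts and patch
ID below are placeholders.
"""
import subprocess

FTP_SERVERS = ["ftp1.example.com", "ftp2.example.com"]  # hypothetical hosts
PATCH_ID = "123456-01"  # placeholder; use the real Sun patch number

for host in FTP_SERVERS:
    result = subprocess.run(
        ["ssh", host, "showrev", "-p"],
        capture_output=True, text=True, timeout=60,
    )
    installed = PATCH_ID in result.stdout
    print(f"{host}: {'patched' if installed else 'PATCH MISSING'}")
```

A consistent "patched" across the list would tell me the administrators are keeping up; a scattering of misses would tell me patch management is still ad hoc.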