I can't talk about the embargoed Longhorn meeting I had Tuesday, except to say that I got one Microsoft rep to say that Redmond doesn't think there'll be any more service packs after Longhorn sees daylight. But then he burst out laughing, so I'm not sure how reliable that is.
So instead of the wealth of ultra-secret Longhorn-maybe info I just got, let's talk about the not-so-hush-hush peek I got at Microsoft Compute Cluster Server 2003 (MCCS2K3) release candidate. A buddy got this installed at his lab because doing it at mine would have sliced too deeply into my beer-drinking schedule. Fortunately, he's smarter than I am anyway, so we're probably better off all around.
Microsoft is positioning MCCS2K3 to run on what it terms "inexpensive machines," but it's also pushing 64-bit CPU power. Then again, considering what companies such as Dell and HP are selling Core Duo boxes for nowadays, those two things may not be so far apart. But if you're looking to run a cluster on 3-year-old Pentium 4s, you'll need to reconsider.
Installing Compute Cluster means beginning with a head node. Strangely, even though the head node is the beating heart of your computing cluster, MCCS2K3 doesn't support multi-machine fail-over. The head node is a lone wolf, a single machine leading the cluster pack. (I asked to call ours Oliver, but the jealous weenies voted me down.) The upshot is that if your head node does a face plant, it takes your cluster along for the splat, so make sure that this machine at least is running on robust hardware -- something with a RAID system and preferably multiple CPUs.
After that, the rest of the cluster can be as rickety as a 64-bit-capable workstation can be. Further, you can attach as many nodes as you like, according to the docs. No word on how licensing might affect that last statement, but I've got some Microsofties looking into that for me.
The docs also blithely refer to 10 Gigabit Ethernet as being the preferred cluster interconnect medium. But when I went outside to check my 10Gig infrastructure tree, there weren't any ripe 10Gig switches or NICs hanging off the branches, so we just stuck with straight GbE over copper.
Even with that little wrinkle, setting up the cluster seemed surprisingly straightforward -- most of it's done with a wizard. You've got MMC (Microsoft Management Console) snap-ins to manage the cluster nodes and you're even using Active Directory to track them. All pretty basic for a skilled Windows admin.
Where it gets complex is when you actually put the cluster to use. Applications need to be tweaked in order to submit jobs to the cluster, although Microsoft was kind enough to put a command-line capability in here. Might make it a little easier to automate job submission via scripting for the ubergeeky. My guys got the cluster to compute specific problems of the deep, financial, Wall Street persuasion. By the time they got things running, I was pretty deep into Episode 6 of The IT Crowd.
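For the ubergeeky among you, here's a rough idea of what scripted submission could look like. This is a minimal sketch, not gospel: the `/numprocessors` and `/stdout` flags, the UNC paths, and the `pricer.exe` binary are all assumptions for illustration -- check `job submit /?` on your own head node for the syntax the RC actually ships with.

```python
import subprocess


def build_submit_cmd(exe, procs, stdout_path):
    """Build a cluster job-submission command line.

    The /numprocessors and /stdout flags are assumptions -- verify
    the exact switches with 'job submit /?' on your head node.
    """
    return ["job", "submit",
            f"/numprocessors:{procs}",
            f"/stdout:{stdout_path}",
            exe]


# Hypothetical paths, purely for illustration.
cmd = build_submit_cmd(r"\\headnode\apps\pricer.exe", 8,
                       r"\\headnode\out\run1.txt")
print(" ".join(cmd))

# On a real cluster node you'd hand the list to the shell:
# subprocess.run(cmd, check=True)
```

Wrap that in a loop over your input files and you've automated the whole batch, no wizard clicks required.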
My ADD aside, Compute Cluster Server 2003 really surprised us in terms of usability. I'd still experiment with the release candidate before purchasing the commercial version this summer, however. Make sure your applications can truly take advantage of the cluster before going whole hog on this, because meeting the hardware requirements is going to be pricey.