Last night I spent a delightful two and a half hours on the phone. And this despite the fact that: (a) it was Saturday night; and (b) the call started at 10pm, which meant I was sitting at my desk instead of down the pub. The fact that it was October 31 added some humour to the situation: every so often you'd hear a doorbell over the speakerphone of one of the US-based engineers who were working from home - they do Halloween big-time over there, so we had a wealth of trick-or-treaters dropping in.
The mission was simple: we were rolling out a new WAN link in one of my US offices, which basically involved a number of us sitting on the phone typing commands at routers and stuff. We have a pair of connections into this particular office, in an active/passive setup, and the link upgrade wasn't a simple change of a bit of network string but a far more complex alteration that completely changed the routing within the US part of my WAN, including the introduction of a new data centre through which the new link is routed. The nice thing from my point of view was that we have a managed service, which meant that I only had to be there to try stuff out and test the services, with the techies from Sungard, the supplier in question, doing all the hard work.
To set the scene, we'd had some hassles getting this link sorted out because, after delivery, it turned out that something wasn't working correctly. Despite it being a weekend, the provider managed to arrange for its service provider, and its service provider's service provider, to come out and get the fault (initially suspected to be a wrongly installed fibre but eventually traced to a dodgy connector) rectified. I admit that I got a bit stroppy on a conference call (you know the kind of thing: "You've known all along that we need this working by Monday, so stop dicking me about and make it bloody work"), but they stepped up to the plate and made it happen. They also managed to handle a last-minute hardware spec change in our data centre, on a key component in our routing setup.
Anyhow, we sat on the conf call to do the turn-up and worked through getting it up and running. Having figured out that we'd managed to get a patch link wrong, we then got to the stage where everything seemed to work but only one packet in two was getting through. The engineering guys tried a number of things (flushing ARP tables, downing interfaces on secondary services, temporarily disabling BGP - the usual obvious stuff). After a couple of hours Bob had a flash of inspiration: "Hey, let's try shutting down the interfaces and bringing them back up again." A bounce of the interfaces later we were pinging consistently, and my test plan (okay, the biro-scribbled list of stuff that sprang to mind at the time) was executed successfully.
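For what it's worth, the "one packet in two" symptom is exactly the kind of thing I'd rather measure than eyeball while the engineers poke at the routers. Here's a rough sketch, nothing more, of the sort of throwaway Python I'd knock up to put a number on the loss rate - the target address and ping count are made up for illustration, and it just shells out to the system ping and reads its summary line.

#!/usr/bin/env python3
"""Rough-and-ready packet loss check for a turn-up test.
The target address below is illustrative, not one of ours."""
import re
import subprocess

def loss_percent(host: str, count: int = 50) -> float:
    """Run the system ping and pull the loss figure out of its summary."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    )
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    # If ping produced no summary at all, treat it as total loss.
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    # A figure sitting stubbornly around 50% is the "one packet in two"
    # symptom we were staring at, rather than ordinary intermittent loss.
    print(f"loss: {loss_percent('192.0.2.1')}%")

Not exactly rocket science, but a steady number on the screen beats counting ping replies out loud over a conference bridge at midnight.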
This is my idea of a service provider. Their engineers (I won't spare their blushes - John Flanagan and Bob Gain) were knowledgeable and patient, and although it took a couple of hours to get everything going, I never had a doubt that the lads would figure it out. We've got two more circuit turn-ups to do in the next few weeks, for two more US offices with new links going in, and thanks to the lads plus the rest of the team (account manager Sue and PM Tina, to mention but two) I have no doubt that they'll go off flawlessly too.