Something I’ve often had to remind people, especially when they’re starting out in networking, is that it’s very much a two-way street. Traffic has to get from one end of your network to the other, but it also has to be able to get back again, and that’s not always as obvious as you might think.

A common refrain, for instance, when someone is troubleshooting a problem: “I can’t get to network X. My pings aren’t working: I can’t reach the far end.” Okay, for now let’s discount all the more complicated things that could be stopping ping from working, like firewalls and access-lists, and stick to the basics. Are you sure your pings can’t get to the far end, or is it that the responses can’t get back? That’s not the esoteric philosophical question you might initially think it is. It’s perfectly possible that you have a valid route from your source network to your destination one, but the far end has nothing in its routing table to point a way back. Maybe you’re redistributing (or not) between routing protocols. Maybe you’ve overdone the summarisation and are blackholing traffic. There are loads of reasons you could have one-way traffic. You just need to figure out in which direction your problem lies.

This is so fundamental, and I’ve been drilling it into my CCNA students for so long, that it was a bit embarrassing when I was almost caught out by a similar situation not that long ago. Nothing to do with routing as such this time, though: it was QoS that almost tripped me up.

I have a customer who is upgrading a remote site with multiple new applications, a new IPT deployment, some flash new high-definition videoconferencing, and twice as many people. Unsurprisingly, they wanted a bit of help to make sure the WAN would be able to handle it all. We were upgrading the bandwidth to that site. They have a hub-and-spoke MPLS WAN, with multiple remote sites and a large central Data Centre, so it’s a fairly typical topology.
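As an aside, that one-way-traffic check from earlier is quick to do on Cisco kit: look up the route on each end, and source your test pings from the right address. A sketch along these lines (router names and addresses are made up for illustration):

```
! On the near-end router: do we have a route to the far network?
Router-A# show ip route 10.2.2.0

! On the far-end router: is there a route back to the source network?
Router-B# show ip route 10.1.1.0
% Network not in table
! ...and there's your one-way traffic: replies have nowhere to go.

! Source the ping from the LAN-facing address rather than the WAN link,
! so you test the return path your users' traffic will actually need:
Router-A# ping 10.2.2.1 source 10.1.1.1
```

Pings sourced from the WAN interface can succeed even when the far end has no route back to your LAN, which is exactly the sort of thing that hides a one-way problem.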
But to make sure that the VoIP, high-bandwidth VC, interactive applications and bulk printing could all be catered for at this site, we were also going to upgrade from a three-level to a five-level QoS model over the WAN, and set up some specific bandwidth guarantees (plus, obviously, some low latency queuing). We knew how much bandwidth we were getting into the remote site and roughly what each key application needed, so we could specify how big each QoS queue needed to be. That was all documented, ready to go to the Service Provider to say what we wanted.

Then it suddenly struck me: we’d only specified all this for the remote end! But if we were going to suddenly start guaranteeing specific QoS capabilities at one end of the link, we pretty much had to do something at the central site too. After all, that’s where all the traffic to the site was coming from. The central site had a lot more bandwidth to play with, of course, but the QoS queues weren’t going to be set up right.

It started to get a bit more complicated then. Utilisation of the links into the Data Centre was low enough that we weren’t going to need to upgrade them, but we had to be careful that if we were going to guarantee bandwidth, say for videoconferencing at the remote site, we also had enough guaranteed for it at the central site. However, that bandwidth had to remain available to other applications when it wasn’t being used, since we were in theory oversubscribing the WAN links into the core. After all, the remote sites didn’t all use all their bandwidth all the time. So it got a bit more complex than it had been at the remote end.

We did get a working model figured out, and everyone just assumed we’d worked on the remote site settings first for simplicity and then mapped them to the core site. I didn’t bother mentioning that we’d almost forgotten about it completely!
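For what it’s worth, the shape of a five-class model like that, in Cisco MQC terms, looks something like the sketch below. The class names, DSCP markings and percentages are made up for illustration, not the customer’s actual values:

```
class-map match-any VOICE
 match dscp ef
class-map match-any VIDEO
 match dscp af41
class-map match-any INTERACTIVE
 match dscp af31
class-map match-any BULK
 match dscp af11
!
policy-map WAN-EDGE
 class VOICE
  priority percent 10
 class VIDEO
  priority percent 25
 class INTERACTIVE
  bandwidth remaining percent 40
 class BULK
  bandwidth remaining percent 20
 class class-default
  bandwidth remaining percent 40
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The point that mattered for the hub end is that CBWFQ `bandwidth` guarantees are minimums, not caps: if the videoconferencing class isn’t using its allocation, other classes can borrow it. That’s what made it safe to guarantee each spoke’s share at the Data Centre while still oversubscribing the core links overall.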