When it comes to connecting networks or other systems together, mathematicians have found, it is best to have many connections, but not too many.
Administrators and network engineers have long assumed that the more connections they insert between networks, the more resilient communications between those networks will be. The internet, for example, derives much of its resiliency from multiple, redundant links. But this is true only up to a point: too many connections can actually be dangerous, because failures in one network can easily cascade to the other, noted Charles Brummitt, a mathematics researcher at the University of California, Davis, who led a team that investigated this issue.
Instead, network owners should fine-tune the number of connections for maximum resiliency, Brummitt said.
Brummitt's team published its work in this week's issue of the "Proceedings of the National Academy of Sciences."
The work is a mathematical model of how a collection of systems works together. "We're taking a larger view and studying networks of networks," he said. Interconnected networks can be vulnerable to cascading failures, in which a failure, or overload, in one network disrupts another. In a typical scenario, when one network is overloaded, it offloads some of its traffic to the second network. But if the failure is large enough to overwhelm the first network, it may overwhelm the second as well.
"There are some benefits to opening connections to another network. When your network is under stress, the neighbouring network can help you out. But in some cases, the neighbouring network can be volatile and make your problems worse. There is a trade-off," Brummitt said. "We are trying to measure this trade-off and find what amount of interdependence among different networks would minimise the risk of large, spreading failures."
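The trade-off Brummitt describes can be sketched with a toy sandpile-style simulation. This is an illustrative sketch only, not the model from the paper: the two-ring topology and the parameter names (`p_inter` for the chance a shed unit of load crosses into the other network, `p_loss` for the chance it dissipates) are our assumptions.

```python
import random

def simulate_cascades(n_per_net=50, capacity=3, p_inter=0.1,
                      p_loss=0.05, n_drops=500, seed=1):
    """Toy sandpile-style cascade model on two coupled ring networks.

    Illustrative only -- not the authors' exact model. A node sheds all
    of its load when it exceeds `capacity`; each shed unit moves to a
    random neighbour, crossing into the other network with probability
    `p_inter`, or leaves the system with probability `p_loss`.
    Returns the average cascade size (topplings per dropped unit).
    """
    rng = random.Random(seed)
    load = [[0] * n_per_net for _ in range(2)]  # load[network][node]
    total_topples = 0
    for _ in range(n_drops):
        # Drop one unit of load on a random node in a random network.
        net, node = rng.randrange(2), rng.randrange(n_per_net)
        load[net][node] += 1
        unstable = [(net, node)]
        while unstable:
            a, i = unstable.pop()
            if load[a][i] <= capacity:
                continue  # already relaxed by an earlier toppling
            shed, load[a][i] = load[a][i], 0
            total_topples += 1
            for _ in range(shed):
                if rng.random() < p_loss:
                    continue  # unit dissipates out of the system
                if rng.random() < p_inter:
                    b, j = 1 - a, rng.randrange(n_per_net)  # cross-network hop
                else:
                    b, j = a, (i + rng.choice((-1, 1))) % n_per_net  # ring neighbour
                load[b][j] += 1
                if load[b][j] > capacity:
                    unstable.append((b, j))
    return total_topples / n_drops
```

Sweeping `p_inter` from zero upward and comparing average cascade sizes is one way to probe the trade-off: a little coupling lets one network absorb the other's spillover, while heavy coupling lets a large cascade spread across both. In this toy version, the exact location of any "sweet spot" depends on the parameters chosen.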
The study, also available in draft form on arXiv, focused primarily on interlocked power grids but could apply to computer networks and interconnected computer systems as well, the authors note. The work could influence thinking on issues such as how best to deal with DDoS (distributed denial-of-service) attacks, which can take down individual servers and nearby routers, causing traffic to be rerouted to neighbouring networks. Balancing workloads across multiple cloud computing services could be another area where the work applies.
"As a first theoretical step, it's very nice work," said Cris Moore, a professor in the computer science department at the University of New Mexico. Moore was not involved in the project. "They found a sweet spot in the middle," between too much connectivity and not enough, he said. "If you have some interconnection between clusters but not too much, then the clusters can help each other bear a load, without causing avalanches of work sloshing back and forth."
Of course, one of the largest networks of networks is the internet, where backbone providers peer with one another, connecting their networks so that traffic can move seamlessly from source to destination. Much has been made of the internet's natural resiliency in the face of disaster. But is it as resilient as it could be?
"That's a thorny question," Brummitt admitted. "I don't think we are in a position to make any guesses. Even understanding the network structure of the internet is a problem in itself. But the internet has proved to be rather resilient. So far, it seems like the internet is not too interdependent. But this is speculative."