Secure business networks are at risk because of a vulnerability in a fundamental protocol, according to security researchers at the Massachusetts Institute of Technology (MIT).
Researchers have highlighted the increasing danger of attacks exploiting weaknesses in SSH (Secure Shell), and warned that such attacks are likely to be automated in the near future. The risks are not theoretical - SSH weaknesses were involved in a spate of attacks last year, including the theft of source code from Cisco Systems and a series of compromises affecting major universities, corporations, national laboratories, supercomputing centres and military institutions, the researchers said.
Grid systems, which link research institutions together in order to share data and processing power, are particularly vulnerable because they create close links between a large number of institutions, according to MIT. Among the victims of last year's attacks were several research institutions connected to TeraGrid, a research grid.
However, universities are not the only ones affected. SSH is used in most Unix and Linux networks to secure remote command execution, file transfer and other services. "As SSH has become one of our most trusted services, attacks that highlight its limitations have become widespread," the paper said.
In writing the paper, called "Inoculating SSH Against Address-Harvesting Worms", MIT researchers Stuart Schechter, Jaeyeon Jung, Will Stockwell and Cynthia McLain collected information from a number of organisational networks, and found that most networks are vulnerable to a weakness involving SSH's known_hosts databases.
These databases, stored on SSH clients, list the remote hosts each user has previously contacted via SSH, along with each host's public key, which SSH uses to verify the host's identity when establishing a secure connection. The problem is that when a client is compromised by an attacker or malicious code, its known_hosts database hands over a ready-made list of further targets.
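The harvesting step is almost trivial: in its traditional unhashed form, each known_hosts line begins with the host names or addresses in the clear. A minimal sketch (the host names and truncated keys below are invented for illustration) shows how little work an attacker's code would need to do:

```python
# Illustration of how easily an unhashed known_hosts file reveals targets.
# The host names and key material below are made up for this example;
# a real file maps each line to that host's actual public key.
sample = """\
gateway.example.edu,192.0.2.10 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA
build01.lab.example.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA
"""

def harvest_targets(known_hosts_text):
    """Return every host name or address listed in an unhashed known_hosts file."""
    targets = []
    for line in known_hosts_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("|"):
            continue  # skip blanks, comments, and hashed ("|1|...") entries
        # The first field holds comma-separated host names/addresses.
        targets.extend(line.split()[0].split(","))
    return targets

print(harvest_targets(sample))
```

A worm running under a compromised account could feed a list like this directly into its next round of connection attempts, which is exactly why the researchers flag the plaintext format as dangerous.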
"Such reliable target lists reduce both the time required to find vulnerable hosts and the likelihood that attacks will raise alarms due to failed connections or authentications," said the study. Because worms using the list don't need to rely on techniques such as port scanning, they are likely to evade most current virus detection mechanisms, the researchers said.
An analysis of data from more than 2,000 user accounts containing known_hosts files found that an attacker would need to compromise only a small number of accounts in order to reach most of the hosts in a system. "We found that a surprisingly large fraction of the known_hosts entries were to hosts on distant networks, that the bulk of these entries could be reached by compromising a small fraction of the user accounts in our survey, and that 62.8 percent of identity keys encountered were stored unencrypted," the study said.
So far these weaknesses have only been exploited manually, but most of the components for creating worms to target known_hosts are already available, the paper said.
"While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It's only a matter of time before someone writes a worm like this," commented cryptographer Bruce Schneier in a blog entry.
In the short term, sysadmins can protect themselves by obscuring the host names in known_hosts with a cryptographic hash, similar to the techniques used to protect password files. This requires changes to SSH, however: a known_hosts hashing scheme proposed by MIT has been implemented in OpenSSH 4.0 and in a patch for earlier versions of SSH.
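In OpenSSH's implementation of the scheme, each host name is replaced by an HMAC-SHA1 digest keyed with a random per-entry salt, stored as "|1|base64(salt)|base64(digest)" - the client can still check whether a given host matches an entry, but an attacker reading the file cannot recover the names. A minimal sketch of that hashing step (not OpenSSH's own code) under those assumptions:

```python
import base64
import hashlib
import hmac
import os

def hash_hostname(hostname, salt=None):
    """Hash a host name in the style of OpenSSH's hashed known_hosts entries:
    HMAC-SHA1 of the name, keyed with a random 20-byte salt, rendered as
    "|1|base64(salt)|base64(digest)"."""
    if salt is None:
        salt = os.urandom(20)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def matches(hashed_entry, hostname):
    """Check whether a hashed entry corresponds to a given host name by
    re-hashing the candidate name with the entry's own salt."""
    _, _, salt_b64, _ = hashed_entry.split("|")
    salt = base64.b64decode(salt_b64)
    return hash_hostname(hostname, salt) == hashed_entry

entry = hash_hostname("gateway.example.edu")
print(matches(entry, "gateway.example.edu"))   # the client can still verify a match
print(matches(entry, "build01.example.org"))   # but names cannot be read back out
```

In practice, administrators on OpenSSH 4.0 can convert existing files with `ssh-keygen -H` and enable the behaviour for new entries with `HashKnownHosts yes` in ssh_config; the sketch above only illustrates why the hashed entries defeat harvesting.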
Schechter presented the findings as part of the keynote speech for the First International Workshop on Cluster Security at CCGrid 2005 in Cardiff, Wales, earlier this month. MIT's research and related materials are available on the university's website.