With EMC Corp.'s acquisition of Acxiom Corp.'s grid computing software for US$30 million last month, enterprise customers began to see that grid is not just about raw horsepower and CPU utilization for high-performance computing environments. So what did Acxiom do so well with its grid environment that it caught EMC's attention? To put it simply: data management.

Acxiom has a very popular data-integration application called AbiliTec. To support a growing number of transactions, Acxiom took the "scale out" commodity-hardware route (as Google Inc. and Amazon.com Inc. have done) and then built its own grid software to manage the new environment. In an article on Acxiom's environment last year, Computerworld reported that its grid had grown to 6,000 Linux nodes, processing more than 50 billion AbiliTec transactions per month.

Performance and reliability have been at the heart of Acxiom's data management grid story, but there are other, very specific enterprise data challenges that grid has already addressed in research and science. Today, enterprises are increasingly evaluating grid infrastructure to resolve data management issues that go above and beyond data-processing horsepower.

Transporting massive amounts of data
Your typical enterprise is probably not going to be dealing with data at the petabyte (1 quadrillion bytes) level any time soon, as particle physicists in the e-science realm do today.

However, many commercial entities do transport enormous files on a daily basis. Consider cases like the British Broadcasting Corp., where one hour of preprocessed high-definition broadcast averages about 280 gigabits. These organizations are working with grid technologies today to make their data assets accessible to field reporters and users across a distributed network.

Moving large data sets at high speeds between distributed sites is a common challenge in many industries. Oil and gas companies are perhaps the poster children for moving large data sets, which they accumulate through seismic and reservoir analysis. Getting the "whole picture" to make sound business decisions requires pulling large quantities of data from many different locations.
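To put those volumes in perspective, here is a rough back-of-the-envelope sketch in Python. The data sizes echo the examples above; the link speeds and the 50 percent effective-throughput figure are illustrative assumptions, not measurements.

    # Rough transfer-time arithmetic for large data sets over a WAN.
    # Sizes echo the examples in the text; link speeds and the 50%
    # effective-throughput factor are illustrative assumptions.

    def transfer_hours(size_gigabits, link_mbps, efficiency=0.5):
        """Hours needed to push size_gigabits over a link_mbps link."""
        effective_mbps = link_mbps * efficiency
        seconds = (size_gigabits * 1000) / effective_mbps  # gigabits -> megabits
        return seconds / 3600

    examples = {
        "1 hour of HD broadcast (~280 Gb)": 280,
        "1 TB of seismic data (~8,000 Gb)": 8000,
    }

    for label, gigabits in examples.items():
        for link_mbps in (45, 155, 1000):  # T3, OC-3 and gigabit-class links
            print(f"{label} over {link_mbps} Mbps: "
                  f"{transfer_hours(gigabits, link_mbps):.1f} hours")

Even on a gigabit-class link, a single terabyte ties up hours of wire time, which is why scheduling, parallel streams and careful placement of data matter as much as raw bandwidth.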

Other markets with massive data-transport requirements include the automotive industry (for computer-aided analysis and simulations), semiconductor companies (for mask layout based on instruction sets) and pharmaceutical firms (for molecular matching and chiral synthesis), to name just a few.

Getting data out of complex storage systems
Grid pros have popularized the expression that "access to the data is as important as access to compute resources." In enterprises, the challenge with data access is sometimes not just the size of the data sets but the complexity of the protocols associated with the storage systems that hold them.

A great deal has been accomplished within e-science grids to overcome incompatible data storage protocols, most notably through the GridFTP standard. GridFTP implementations build on the file transfer protocol of the early Internet days, which makes it easier to pull data out of any file (or blob) storage system that uses a flat or hierarchical naming scheme and is connected to a TCP/IP network, a description that fits the majority of enterprise storage systems in heavy use today.
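To make that concrete, here is a minimal sketch that drives the Globus Toolkit's globus-url-copy GridFTP client from Python. It assumes the Globus client tools and a valid grid proxy credential are already in place; the host names, paths and parallel-stream count are placeholders, not recommendations.

    # Minimal sketch: pull a file over GridFTP with the Globus Toolkit's
    # globus-url-copy client. Assumes the client tools are installed and a
    # valid grid proxy exists; hosts and paths are placeholders.
    import subprocess

    source = "gsiftp://data.example.org/archive/survey-2005.dat"  # remote GridFTP server
    dest = "file:///scratch/survey-2005.dat"                      # local destination

    subprocess.run(
        [
            "globus-url-copy",
            "-vb",       # report transfer progress and throughput
            "-p", "8",   # open parallel TCP streams to fill the WAN link
            source,
            dest,
        ],
        check=True,
    )

Because the URLs name standard protocols (gsiftp and file), the same command can move data between two remote GridFTP servers just as easily as it pulls a file to a local disk.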

End-to-end data coordination
Being a large IT organization today means having an environment that includes multiple data centers, each a distinct IT island. While each data center may be well managed as far as compute power goes -- and there may not be a pressing need for better utilization (just buy more commodity boxes) -- you're faced with a "Wild West" when it comes to managing the data that moves between those islands. Enterprises with distributed organizations often have large data-sharing needs, such as replicating information between data centers and clusters, optimizing flow management, improving collaboration among a distributed team and conducting better analysis.

Grid allows these organizations to tie multiple IT islands together without ripping out and replacing existing infrastructures. If a company has a group of users in the U.S. with one large set of data, and another group in Japan with another large set of data, it is often not practical to move the data, but it is possible to run jobs against that data remotely instead.

This is where grid can thrive. Organizations are no longer forced to choose between moving the compute power and moving the data. By knitting distinct IT islands together for computation as well as data, with security that overlays existing, often hairy, security environments, organizations can begin to tame the Wild West and leave the rip-and-replace mentality behind.
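To make the "run the job where the data lives" idea concrete, here is a minimal sketch, assuming the Globus Toolkit's globus-job-run GRAM client and a valid proxy credential; the host name, script and data path are placeholders for whatever a remote site actually provides.

    # Minimal sketch: instead of copying a multi-terabyte data set across the
    # Pacific, run the analysis on the cluster in Japan that already holds the
    # data and bring back only the small result. Assumes the Globus Toolkit's
    # globus-job-run client and a valid proxy; names and paths are placeholders.
    import subprocess

    remote_site = "cluster.tokyo.example.org"        # GRAM contact for the remote site
    analysis = "/opt/analysis/summarize_dataset.sh"  # script installed at that site

    result = subprocess.run(
        ["globus-job-run", remote_site, analysis, "/data/japan/archive"],
        capture_output=True,
        text=True,
        check=True,
    )

    # Only the summary (kilobytes, not terabytes) travels back over the WAN.
    print(result.stdout)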

Steve Tuecke is chief architect of the open-source Globus Toolkit used for building grids, and the CEO of Univa Corp., which sells Globus-based products and services.