The company responsible for those cool satellite images accessed by millions of Google Earth users said this week that it has almost completed an enormous upgrade to its internal IT infrastructure to increase production and support the launch of two more imagery satellites over the next two years.
DigitalGlobe Inc. said that at the center of its IT upgrade is its storage, which so far includes more than 200TB of high-end and midrange storage capacity from Hitachi Data Systems Corp. and new data management software from Advanced Digital Information Corp.
The upgrades have quadrupled productivity, according to Luc Trudel, director of IT operations at DigitalGlobe in Longmont, Colo.
The company also rolled out Gigabit Ethernet ports throughout its LAN and plans to install 10Gbit/sec. Ethernet connectivity later this year in other parts of its infrastructure.
"Once our second satellite is launched, data volume will increase fivefold, and the eventual launch of a third satellite will further increase data volume," Trudel said.
In an e-mail response to questions from Computerworld, Trudel said he did not calculate an ROI on the project. However, he added, the new infrastructure means his group can process satellite images in one-third to one-fourth of the time it previously took, cutting a typical run from 12 hours to three.
One notable improvement has been in the overall manageability of the storage infrastructure resulting from a new file-sharing system from ADIC in Redmond, Wash., Trudel said. Not only does it require less time and effort on the part of the storage administrators, but "we're able to rapidly repurpose and/or expand storage to meet shifting production needs," Trudel said.
DigitalGlobe processes 105 satellite image strips per day. Each strip consists of multiple 10-square-mile images of the Earth. That number will expand to more than 400 strips per day when the company launches a second imagery satellite later this year.
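A back-of-the-envelope calculation shows what that growth means for daily data volume. The strip counts come from the article; the per-strip size below is a hypothetical placeholder, not a DigitalGlobe figure.

```python
# Strip counts as reported; per-strip size is an assumption for illustration.
current_strips_per_day = 105
future_strips_per_day = 400

growth = future_strips_per_day / current_strips_per_day
print(f"Daily throughput grows roughly {growth:.1f}x")  # roughly 3.8x

gb_per_strip = 1  # hypothetical size, NOT from DigitalGlobe
print(f"Ingest rises from ~{current_strips_per_day * gb_per_strip} GB/day "
      f"to ~{future_strips_per_day * gb_per_strip} GB/day")
```

Even at a modest assumed strip size, nearly quadrupling the strip count explains why storage capacity and throughput were the focus of the upgrade.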
The third satellite is scheduled for launch in 2008.
The storage infrastructure was at the core of DigitalGlobe's upgrade; the overhaul became necessary after the company's data volume doubled from 2004 to 2005.
Before moving to a storage-area network (SAN) architecture last year, DigitalGlobe said it used an inefficient direct-attached storage model: data was copied from server to server along the production line, from the raw imagery received from the satellite through to the finished products ready for electronic delivery.
"This process was slow and subject to processing-line failures arising from the lower reliability of that category of storage," Trudel said.
Trudel said the key to increased productivity was a file collaboration tool from ADIC called StorNext, which runs on its satellite image processing servers using the SAN for back-end storage. StorNext allowed the multiple applications and operating systems within DigitalGlobe to share a common pool of digital assets, and it increased performance in handling large files and data sets, Trudel said.
DigitalGlobe has 20 applications reading from and writing to the StorNext file systems in the production line, which is for the most part a "lights out" operation.
DigitalGlobe's storage outlay last year:
- One high-end Sun/HDS TagmaStore SE9990 array
- Two mid-tier Hitachi 9585V Thunder arrays
- Three mid-tier Hitachi 9570 arrays
- One Hitachi WMS100 workgroup array
- One ADIC Scalar i2000 and one Scalar 10000 tape library
"We are contemplating using StorNext as an [information life-cycle management] tool, but there aren't any definitive plans for its deployment at this time; presently we use an application called the Storage Manager that was developed in-house several years ago, before viable commercial products were available," Trudel said.
DigitalGlobe has also added numerous servers over the past year: 40 small to midlevel enterprise servers from Sun Microsystems Inc. (V210 through V1280) and several high-end machines, including a Sun Fire 6800, a Sun Fire 12000, three Sun Fire 15000s and five Origin 3000 supercomputers from Silicon Graphics Inc.
But Trudel said the main challenge in the upgrade was understanding how best to configure the entire storage stack, from the physical layer - the HDS arrays and the Cisco 9506 Fibre Channel switches - through to the logical layer that includes the StorNext file system.
"There were multiple challenges, but by far the greatest was the redesign of the storage infrastructure," Trudel said. "We had to migrate from a piecemeal set of file-handling processes to a single, coherent file-management solution. The infrastructure itself was not overly complex, but getting all the applications working to one approach took a lot of planning and execution effort."