Server and storage management are slowly converging as enterprises look for more automated ways to manage their data center assets. Virtualisation technology is spurring the trend, though a lack of available tools still hampers it.
“If you don’t believe servers and storage have converged, all you need to do is take a look at server and storage virtualisation," says Greg Schulz, senior analyst for Storage IO. “Let’s go way back -- originally servers and storage were managed together, then they were separated, and now they are being put back together again."
Virtualisation technology lets administrators divide physical servers or storage devices into logical virtual machines that can support different operating systems and applications. Businesses are looking to virtualisation so they can consolidate server and storage resources; run multiple workloads on a single machine more efficiently; and dynamically provision resources as application and business needs shift.
Server virtualisation software from companies such as VMware, SWsoft, Virtual Iron and XenSource has been adopted by leagues of users; according to IDC, more than three-quarters of companies with 500 or more employees use virtual servers, and 45 percent of all new servers purchased in 2006 were virtualised.
Storage virtualisation deployments, however, are less mainstream. IDC reports that 49 percent of companies are evaluating storage virtualisation, while 34 percent have implemented virtualisation software or hardware. Enterprises that want to pool resources from multiple heterogeneous arrays are finding the software to do so still immature.
Where the gaps are
When businesses deploy software for creating and managing virtual servers, the virtual machines typically get storage capacity from shared storage networks. Most link to Fibre Channel and IP storage-area networks (SAN) or network-attached storage devices -- not direct server-attached storage. According to host bus adapter vendor Emulex, at least 70 percent of VMware ESX Server users get their storage from SANs.

Managing and provisioning compute power for these virtual, dynamic environments is often a manual process. Software such as VMware’s VMotion and VirtualCenter Distributed Resource Scheduler can move running virtual machines among physical servers, but it falls short of adequately managing storage resources for capacity-hungry virtualised applications.
“VirtualCenter’s Distributed Resource Scheduler takes care of monitoring utilization and reallocating [compute] resources as needed to provide an as-fast-as-possible use scenario," says Eric Kuzmack, IT architect at Gannett in Silver Spring, Md., which has dozens of servers partitioned into hundreds of virtual machines with VMware ESX Server.
To shift compute resources among virtual machines, Kuzmack uses VMotion and Distributed Resource Scheduler. But allocating additional storage capacity requires manual intervention by Kuzmack, who has to use multiple tools: one for monitoring storage capacity, another for monitoring and reporting on the links between application performance and storage resources, and a third for provisioning storage.
“Right now the VMware VirtualCenter tools don’t have insight into the allocation of free space in storage," Kuzmack says. “We monitor that disk space with operating system tools. Provisioning storage is done manually when we create a new virtual server farm. As long as I have enough storage provisioned for the server farm, I don’t have to do anything on the storage system."
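The kind of operating-system-level free-space check Kuzmack describes can be sketched in a few lines. This is an illustrative script, not any vendor's tool; the 85 percent threshold and the use of the root filesystem as a stand-in datastore are assumptions for the example.

```python
import shutil

def used_fraction(total_bytes, free_bytes):
    """Fraction of a datastore's capacity already consumed."""
    return (total_bytes - free_bytes) / total_bytes

def needs_more_storage(total_bytes, free_bytes, threshold=0.85):
    """True when the datastore is fuller than the alert threshold."""
    return used_fraction(total_bytes, free_bytes) > threshold

if __name__ == "__main__":
    # Check the root filesystem as a stand-in for a server farm's datastore.
    usage = shutil.disk_usage("/")
    if needs_more_storage(usage.total, usage.free):
        print("datastore over 85% full -- provision more storage")
```

A cron job running a check like this is the manual-monitoring status quo the article describes; the provisioning step itself still requires an administrator.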
The matter of aligning compute and storage resources is complicated further by the number of IT staff it takes to make changes to the storage and server infrastructure.
For example, adding compute and storage resources to a business-critical database involves coordination of several people: a database administrator who makes the request for more capacity and compute power; operations staff who install the additional servers; a storage administrator who approves the allocation and provisions the storage; and a server administrator who provisions the server with the correct configuration.
“When you want to provision a set of resources for an application, you need to not just provision storage, you need to provision servers, networking [connections] and do all that in conjunction with each other," says Patrick Eitenbickler, director of marketing for HP StorageWorks. “All these [forms of management] need to work in concert with each other. HP is going to build those linkages and bridges between server, storage, virtualization and automation software to avoid that."
Some storage vendors have adopted the concept of thin provisioning to solve customers’ need for dynamically allocated storage.
With thin provisioning, a single pool of storage large enough to handle applications’ growth requirements is set aside. The capacity promised to applications can exceed 100 percent of the physical pool, but because no application consumes its full allocation at any one time, capacity remains available in the pool.
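The over-commitment mechanic can be modeled in a few lines. This is an illustrative sketch of the concept only, not any vendor's implementation or API:

```python
class ThinPool:
    """Minimal model of a thin-provisioned storage pool (illustrative only)."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb   # real capacity in the pool
        self.allocated_gb = 0            # logical capacity promised to volumes
        self.written_gb = 0              # physical capacity actually consumed

    def create_volume(self, logical_gb):
        # Promises may exceed 100% of physical capacity (over-commitment).
        self.allocated_gb += logical_gb

    def write(self, gb):
        # Physical blocks are drawn from the pool only when data is written.
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.written_gb += gb

    def overcommit_ratio(self):
        return self.allocated_gb / self.physical_gb
```

Three 500GB volumes on a 1TB pool yield a 150 percent over-commitment, yet writes keep succeeding until actual consumption reaches the physical limit -- which is why administrators still monitor the pool itself.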
The use of thin provisioning eliminates over-provisioning of storage, in which storage capacity is pre-allocated to applications but never used. Among the companies using thin provisioning in their products are 3Par, Network Appliance, Compellent, LeftHand Networks, DataCore and EqualLogic.

Gary Berger, vice president of technology solutions for Bank of America Securities Prime Brokerage in New York City, uses 3Par’s thin provisioning to allocate storage for his IBM BladeCenter server environment.
“In early 2005, we had a very fragmented environment [with lots of silos] everywhere," Berger says. “We spent most of our time catching up and sending disks around the country to re-create disk allocations and whole allocations, which represented the difference between what was usable and what was allocated. Virtualization helps us distribute workloads across many different physical resources."
Berger has two mirrored data centers with a consolidated SAN infrastructure. He is using 3Par’s “chunklet" technology, which breaks disk allocation into 256MB groups to distribute workloads across many different disks in his system.
“When we need to do capacity upgrades, we can simply add new disk magazines to the system and rebalance our allocation across those disks again to get more efficiencies," Berger says.
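The rebalancing Berger describes follows from the chunklet idea: if every volume is a set of fixed-size chunklets spread round-robin across disks, a capacity upgrade is just a re-spread over the enlarged disk list. The sketch below illustrates that placement logic; the array's real algorithm is proprietary and certainly more sophisticated.

```python
import math
from itertools import cycle

CHUNKLET_GB = 0.25  # 256MB chunklets, as in 3Par's scheme

def place_chunklets(volume_gb, disks):
    """Spread a volume's chunklets round-robin across physical disks.

    Illustrative only -- shows why adding disks and re-running placement
    rebalances the load.
    """
    n_chunklets = math.ceil(volume_gb / CHUNKLET_GB)
    counts = {disk: 0 for disk in disks}
    for _, disk in zip(range(n_chunklets), cycle(disks)):
        counts[disk] += 1
    return counts
```

A 10GB volume becomes 40 chunklets: 10 per disk across four disks, or 5 per disk after an upgrade to eight, so each spindle carries less of the workload.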
Berger also uses the 3Par software and hardware to boot his servers from the SAN.
“Being able to export a [group of disk volumes] to a blade server gives us a tremendous amount of capability because we can easily recover from simple hardware failures," he says.
In a perfect world, the process of identifying application-related compute or storage deficiencies and allocating more resources would be automated: a drop in an application’s performance would automatically trigger the allocation of more compute or storage capacity.
A few vendors -- including start-ups Akorri and Onaro -- have introduced software that correlates dependencies among servers, storage and applications, and monitors and analyzes those dependencies. But these tools are predictive in nature and enable the manual change of server and storage resources, not the automated handling many users desire.
Meanwhile, software such as HP’s Storage Essentials and IBM’s Virtualization Manager and Systems Director is starting to provide visibility into the links between servers and storage.
Another option for automating storage provisioning is HP’s Virtual Connect, which pools and abstracts LAN and SAN connections to servers and virtual machines in HP BladeSystems. It lets administrators define a server’s I/O connections and then migrate data to another server or virtual machine without disturbing the LAN or SAN settings.
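The idea behind Virtual Connect can be pictured as an I/O identity that travels with the workload rather than the hardware. The sketch below is a conceptual model only; the field names and addresses are invented for illustration, not HP's actual data structures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IOProfile:
    """Virtual-Connect-style I/O identity (conceptual sketch)."""
    virtual_mac: str          # LAN-facing address, stable across moves
    virtual_wwn: str          # SAN-facing worldwide name, stable across moves
    blade: Optional[str] = None  # physical server currently backing it

def migrate(profile: IOProfile, new_blade: str) -> IOProfile:
    # Only the backing hardware changes; the LAN and SAN still see the
    # same virtual addresses, so no rezoning or switch changes are needed.
    profile.blade = new_blade
    return profile
```

Because the SAN zones to the stable virtual WWN rather than a physical adapter, moving the profile to another blade or virtual machine leaves the storage configuration untouched.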
For its part, Veritas offers Server Foundation and its components, Provisioning Manager and Application Director, which automate the provisioning and management of virtual servers and storage.
As tools such as these mature, the convergence of server and storage virtualisation will move closer to reality.