The pending release of Windows Server 2012 R2 focuses on a number of advanced storage and networking capabilities that used to require the purchase of additional software, or even a full-fledged storage system.
"We see networking and storage as the next places where we can help customers increase their agility and reduce their cost," said Michael Schutz, Microsoft general manager Windows Server product marketing. "We are taking the lessons we have in building and operating our cloud services in networking, storage and compute, and bringing them to our customers on premise."
Microsoft announced the update at the company's TechEd North America conference this week in New Orleans. The company plans to release a preview of Windows Server 2012 R2 by the end of the month and issue the full edition by the end of the year.
On the storage front, Microsoft has introduced a technology called Automated Tiering, which "allows the system to automatically decide which [files] are accessed most frequently," Schutz said. The OS will then store the most frequently consulted files on the fastest storage medium available, such as SSDs (solid-state drives), and file the rest on less expensive traditional hard drives. The idea is to speed system performance while keeping storage costs down, Schutz said.
Automated Tiering builds on the Storage Spaces capability introduced in Windows Server 2012, which allows Windows Server to work as a front-end file server for a large JBOD (just a bunch of disks) array.
Schutz stopped short of saying this approach could replace a full-fledged storage area network (SAN), but said it would be a good setup for smaller organizations that can't afford one. He noted that many Web and cloud service providers don't use SANs, but rather go with JBOD arrays. He also said the full power of this technology would come from using multiple Windows Servers to run a very large storage array.
For instance, a 64-node cluster made up of 16 storage instances, each comprising four Windows Servers, could offer more than 15 petabytes of raw storage. Each server would use SAS (Serial Attached SCSI) connections to a JBOD array of 60 4TB disks, for 240TB per server and 960TB per storage instance. Microsoft's tested guideline is 240 drives per storage instance within a cluster, though there is no hard limit on how large the cluster can grow.
If the administrator then wanted to boost throughput with automated tiering, 10 percent of the 4TB drives could be replaced by speedier 500GB SSDs, still leaving over 13 petabytes of cold storage.
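Those figures hang together if each storage instance comprises four servers; a quick sanity check (an illustrative reading of the numbers above, not an official Microsoft sizing formula):

```python
# Back-of-the-envelope check of the cluster sizing described in the article.
# The layout (4 servers per storage instance, 60 disks per server) is one
# consistent reading of the quoted figures, not a published spec.

drives_per_server = 60        # one JBOD of 60 disks attached to each server
drive_size_tb = 4             # 4TB drives
servers_per_instance = 4      # four Windows Servers per storage instance
instances = 16                # 16 instances -> 64-node cluster

servers = servers_per_instance * instances                  # 64 nodes
raw_tb = servers * drives_per_server * drive_size_tb        # total raw capacity
print(servers, raw_tb)        # 64 nodes, 15360 TB -> roughly 15 petabytes

# 240 drives per instance (Microsoft's tested guideline) gives the same total:
print(instances * 240 * drive_size_tb)                      # 15360 TB

# Swapping 10 percent of the 4TB drives for 500GB SSDs leaves the cold tier at:
hdd_tb = int(servers * drives_per_server * 0.9) * drive_size_tb
print(hdd_tb)                 # 13824 TB -> over 13 petabytes of cold storage
```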
Much as an OS automatically utilizes all the working memory it has, Automated Tiering can automatically fill all the SSD space allotted to it. Administrators can configure how much SSD space the feature may use, though the technology otherwise runs on its own. No special tweaks are needed to the underlying NTFS (NT File System) file system.
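The behavior Schutz describes, with the hottest files promoted into a fixed SSD budget and everything else left on spinning disks, can be sketched in a few lines (a toy model: the greedy policy and all names here are illustrative, not how Storage Spaces is actually implemented):

```python
# Toy model of frequency-based tiering: greedily fill a fixed SSD budget
# with the most frequently accessed files; the rest stay on the HDD tier.
# Purely illustrative -- not the Windows Server algorithm.

def place_files(files, ssd_capacity):
    """files: dict of name -> (size, access_count). Returns (ssd, hdd) sets."""
    ssd, hdd, used = set(), set(), 0
    # Consider the hottest files first
    for name, (size, hits) in sorted(files.items(),
                                     key=lambda kv: kv[1][1], reverse=True):
        if used + size <= ssd_capacity:
            ssd.add(name)
            used += size
        else:
            hdd.add(name)
    return ssd, hdd

files = {"db.mdf": (40, 900), "logs.txt": (10, 5), "vm1.vhdx": (60, 300)}
ssd, hdd = place_files(files, ssd_capacity=100)
print(ssd, hdd)  # the two hot files fill the SSD tier; logs.txt stays on HDD
```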
Also new on the storage front in Windows Server 2012 R2 is a deduplication technology that could save considerable storage space for organizations that use virtual hard disks (VHDs), for example to supply employees with virtual working environments. In such cases the VHDs, all of which may contain an identical OS and applications, tend to be largely identical, so Windows Server can reduce all the duplicated bits to a single copy.
Counterintuitively, deduplication can also speed the boot times of VHDs, Schutz explained. Because the VHDs are booted on the server and streamed to an end device, the server can serve the bits shared with the first VHD booted directly from its working memory.
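The space savings come from storing each unique chunk of data once and pointing every disk that contains it at that single copy. A minimal sketch of the idea, using fixed-size blocks and content hashes (illustrative only; Windows Server's deduplication uses variable-size chunking, not this scheme):

```python
import hashlib

# Toy block-level deduplication: identical blocks across several virtual
# disks are stored once and referenced by their content hash.
# Illustrative only -- not the Windows Server implementation.

store = {}  # hash -> block contents, each unique block stored exactly once

def dedup(disk_blocks):
    """Replace each block with a reference (its hash); store unique blocks."""
    refs = []
    for block in disk_blocks:
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # keep only the first copy seen
        refs.append(h)
    return refs

# Three VHDs sharing an identical OS image block plus one unique block each:
os_block = b"identical OS + applications image"
vhds = [[os_block, b"user data A"],
        [os_block, b"user data B"],
        [os_block, b"user data C"]]
refs = [dedup(v) for v in vhds]

print(len(store))  # 4 unique blocks stored instead of 6
```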
On the networking front, the update to Windows Server also speeds the migration of running virtual machines. Windows Server 2012 could already move a live VM from one server to another; now the OS can cut the time of this migration considerably. One technique is to compress the VM at the origin server and decompress it at the target server, so fewer bits are sent over the wire.
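The trade is CPU time at each end for less data in transit, which pays off because VM memory tends to be highly compressible (zeroed or repetitive pages). A small demonstration of the principle using Python's zlib, with synthetic data rather than anything measured from Hyper-V:

```python
import zlib

# Why compress-then-migrate helps: a VM memory image full of zeroed and
# repetitive pages shrinks dramatically, so far fewer bytes cross the wire.
# The image below is synthetic; the ratio is not a Hyper-V measurement.

vm_image = b"\x00" * 500_000 + b"page data " * 20_000   # mostly empty pages
wire_payload = zlib.compress(vm_image)                  # sent over the network

# The target server reconstructs the VM exactly from the compressed stream:
assert zlib.decompress(wire_payload) == vm_image

print(len(vm_image), len(wire_payload))
print(f"sent {100 * len(wire_payload) / len(vm_image):.1f}% of the raw bytes")
```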
The update is also the first to use RDMA (remote direct memory access) for these migrations: the VM's memory is copied directly from the origin server's memory to the destination server's memory, without going through the processor of either server. This can cut transmission time by more than half.
Of course, this being the year of Microsoft's "Cloud First" strategy, the company is also providing a number of potentially useful hooks into its Azure cloud service. One is the Hyper-V Recovery Service. The service can manage a number of backup VMs so that if the primary site goes down, the service will automatically switch operations to the VMs at the backup location. "It orchestrates the recovery in the proper order, so the back end comes up first, then the middle tier, then the front end. You set up the process for how you want the backup site to come up," Schutz said.
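The ordering Schutz describes, with the back end up before the middle tier and the middle tier before the front end, amounts to starting VMs in dependency order. A toy sketch of that orchestration (the tier names and this runner are illustrative, not the Hyper-V Recovery Service API):

```python
# Toy sketch of orchestrated failover: VMs at the backup site are started
# tier by tier, back end first. Illustrative only -- not the actual
# Hyper-V Recovery Service interface.

RECOVERY_ORDER = ["back end", "middle tier", "front end"]

def fail_over(site_vms):
    """site_vms: dict of tier -> list of backup VM names. Returns boot order."""
    booted = []
    for tier in RECOVERY_ORDER:
        for vm in site_vms.get(tier, []):
            booted.append(vm)   # in practice: start the VM, wait until healthy
    return booted

plan = {"front end": ["web1", "web2"],
        "back end": ["sql1"],
        "middle tier": ["app1"]}
print(fail_over(plan))  # ['sql1', 'app1', 'web1', 'web2']
```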
Though both the primary and secondary sites can be on premises (rather than on Microsoft's Azure), the service itself operates from Azure. The reasoning is that the recovery service should live somewhere separate from the primary operations, since whatever disaster befell the primary site would probably take out a locally hosted recovery service as well. "You want an independent place, and you don't have to stand up a server or install software," Schutz said.
Microsoft is also bringing some cloud capabilities to administrators with the Windows Azure Pack. The free add-on to Windows Server 2012 R2 comes with a user portal that replicates the Azure management environment. Corporate IT could build out a cloud fabric using Windows Server and System Center and then offer "cloud" services to business units, IT project managers and others through the portal, Schutz said.
"You get the look and feel of Windows Azure in your own data center," Schutz said. "Instead of the IT department taking requests for a new machine or a new application, the business unit can do that itself."