Why did Sun re-integrate systems and storage? Nobody mentioned countering revenue declines or improving engineering, so why do it? Has the coming virtual world driven the physical one?
At the event announcing the re-integration of Sun's storage unit with the systems unit there was no mention of accelerating revenue growth for storage, which needs doing. There was no mention of reinvigorating dull engineering in the storage unit; indeed, a question about that drew a brief and emphatic denial.
So why were the two units integrated?
Sun wants to provide a stronger focus on storage.
Yes, sure, but for what?
Yesterday Sun announced its hypervisor strategy: xVM, based on XenSource, along with a management facility called xVM Ops Center. This is meant to be a broad and pervasive data centre virtualisation strategy, encompassing servers, storage and networking.
Ah, a penny might be dropping.
More information will emerge over the next few months. Hints include ZFS, now described as storage virtualisation IP, and the 'external virtualisation' of the server/storage X4500 hybrid known to us all as Thumper.
Are we looking at some kind of storage virtualisation in which a virtual machine, be it one running Windows, Solaris or Linux, is instantiated and given storage, with xVM Ops Center saying, 'Give me 5TB of SAN storage' or '3TB of NAS' or 'Access to the globally-namespaced file system' or '2TB of Thumper NAS'?
Behind this facility would be a virtualised storage environment capable of responding to these requests. It is a storage abstraction layer and includes modules within it for data protection (continuous data protection, backup, archive), disaster recovery (snapshots, replication), storage presentation as SAN or NAS, and storage capacity efficiency (de-duplication, auto-migration of inactive data to tape, etc.).
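To make the idea concrete, here is a minimal sketch of what such a storage abstraction layer might look like. This is purely illustrative: every name and interface below is an assumption for the sake of the example, not anything Sun has announced.

```python
# Hypothetical sketch of a storage abstraction layer of the kind described
# above. All class, method and service names here are illustrative
# assumptions, not Sun APIs.
from dataclasses import dataclass, field

@dataclass
class Volume:
    kind: str                 # e.g. "SAN", "NAS", "global-fs", "thumper-nas"
    size_tb: int
    services: list = field(default_factory=list)  # attached protection/DR features

class StorageAbstractionLayer:
    """Satisfies requests such as 'give me 5TB of SAN storage' and wires in
    the optional data-protection and disaster-recovery modules."""

    SUPPORTED = {"SAN", "NAS", "global-fs", "thumper-nas"}

    def provision(self, kind: str, size_tb: int, *,
                  cdp: bool = False, snapshots: bool = False) -> Volume:
        if kind not in self.SUPPORTED:
            raise ValueError(f"unknown storage kind: {kind}")
        vol = Volume(kind=kind, size_tb=size_tb)
        # Optional modules from the article: continuous data protection for
        # the data-protection layer, snapshots for disaster recovery.
        if cdp:
            vol.services.append("continuous-data-protection")
        if snapshots:
            vol.services.append("snapshots")
        return vol

# A request in the spirit of 'Give me 5TB of SAN storage':
sal = StorageAbstractionLayer()
vm_disk = sal.provision("SAN", 5, snapshots=True)
print(vm_disk.kind, vm_disk.size_tb, vm_disk.services)
```

The point of the abstraction is that the requester (a virtual machine, or the management layer acting on its behalf) names only the class and size of storage it wants; which physical arrays, file systems or Thumpers back the request is the layer's business.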
Such a scheme would require close integration of system software and storage software development engineering resources and roadmaps. Now that could be a reason to have Jon Benson's group report into John Fowler.