VMware co-founder Mendel Rosenblum has aired his views on last month's Citrix-XenSource acquisition, on what features the CPU vendors should put in their products in support of virtualisation, and on the brouhaha following his remarks at LinuxWorld, which were widely reported as predicting the death of the operating system.
In an exclusive interview with Techworld, we discussed these and other issues with Rosenblum at VMworld in San Francisco, the virtualisation industry's conference. This is an edited version of that conversation.
Q: Last year, you said that virtualisation "doesn't imply the death of the OS but it's a real opportunity for someone to do OSes for appliances." Yet at LinuxWorld last month, you were reported as prophesying the death of the OS. What changed your mind - and why? A: Journalists like to put things in headlines that I actually didn't say. I was consistent but was surprised at the press response.
The point was that once you have the virtualisation layer driving the hardware, that takes away some of the functionality of the OS, which gives the OS a chance to focus on its other major goal - supporting applications - which is still very much needed. Where modern OSes have gone is to try to do both, controlling the hardware and the applications, and they're running into a dead end - that's where the controversy came up. As for your quote, I still agree with it.
Q: There doesn't appear to have been a lot of emphasis on virtual appliances at this event. Are they still important? A: This VMworld they weren't pitched as much, but I find it encouraging that some of them are very compelling from an end-user perspective and for enterprise infrastructure - there are firewalls and that kind of thing.
Some of these companies are excited because they can abandon the 1U box and just sell the software to customers - it's a much better way of doing business.
Q: What do you think of the Citrix/XenSource merger from a technological perspective? Does it make sense? A: If I were in Citrix's shoes, would I have gone in that direction? There is a layer to control, and I can see it from that point of view, as they've long suffered from being on top of Microsoft... I can see why they might do it, but there are a lot of challenges.
Q: Given that there are now so many different approaches to virtualisation, some of which are gaining traction, why wouldn't VMware change its hardware virtualisation-only approach? A: The only thing we've ever said is that we're not going to build hardware. I can see that application-level virtualisation has advantages, but it doesn't solve all the problems. It won't replace what we do, but it could augment it. Our customers have discovered that they can run our VMs and use Softricity to inject the applications, and that looks like a really nice combination.
Q: What about OS-level virtualisation? A: That's an interesting one. I'm less optimistic: it does some things really well - you can get a lightweight VM - but you inherit a lot of the problems OSes have. You're dealing with a complex thing, and then there's compatibility. Fundamentally, I believe you want something at the lowest level just to manage the hardware, and if you do that, I'm not sure you want something at a higher level. But I wouldn't claim that VMware would never do that.
Q: Isn't VMware duplicating what the OS is doing when it comes to I/O, and doesn't that mean a double hit on I/O? A: Most well-designed hypervisors don't cache things, so when an application writes, you write it out. But there are problems under extreme resource pressure, where an OS starts paging and then the hypervisor might start paging too - but we have ways of dealing with that to eliminate the problem.
Q: Can hardware support eliminate the I/O performance problem? A: Hardware assist gives us native-speed performance, but VMs won't run any faster than native speed. Doing it in the OS, you can make it lighter weight, so you might have resource advantages.
Q: What about security - how can an embedded hypervisor be protected against hidden VMs? A: With this technology doing virtualisation, if the layer you're using to do the virtualisation can't maintain its integrity and someone breaks into it, they could then do things that are hard to find. We would hope that our hypervisor and interfaces are simple enough that, unlike what we've seen with modern OSes - where every Wednesday there's a big patch - we can reduce this. Hardware support means the hypervisor is responsible for isolation, and so it can be small enough that you can guarantee it won't have any bugs - it's much smaller than a million-line OS.
Q: Is Java a good model for what could happen to VMware's hypervisor technology? If so, does this mean an open source hypervisor from VMware that's as ubiquitous as Java? A: It's possible to be ubiquitous without being open source - every PC has a BIOS, but it's not open source. Whether the hypervisor is going to be open source, I don't know - but I know people like to peer under the hood.
That said, Java is not directly comparable, as it's a middleware platform while we're at the bottom layer.
Q: Following the embedding of the hypervisor in the server, will the hypervisor go further and become part of the microcode? A: I doubt it, because you wouldn't get any benefits like performance or simplicity - it'd just make it much harder to change. From a technical point of view, anything complex lessens performance.
Q: How far are we from seeing virtualised 3D graphics? A: It's trailing the rest of the issues. I keep talking to different companies about how we could do this. We've been talking to AMD/ATI and nVidia, as well as Intel about it.
It's an interesting problem, as whatever you put in the hypervisor has to be simple. The good news is that the graphics engines SGI developed for their multi-user computers had some pretty neat technology for isolating things from one another. I'd hope that some of those ideas, or their descendants, could find their way onto the desktop.
The good news is also that the graphics card developers tell me they're on a much faster development cycle than the CPU vendors, so the hope would be that if they figure something out, it would appear quickly in the marketplace.
Q: How is your relationship with AMD and Intel? A: Both of them are responding well and listening to what we say. Nine years ago when we started they didn't, but now they do. We're working together really well, but you'll have to ask them if you want more.
They've been pretty good about things, but there have been features I really wanted that got whacked - though I think they'll eventually appear. I'll be in big trouble if I tell you what they were.
Q: So if they had unlimited resources, what would you ask the chip vendors to implement? A: We love these multi-core processors, since we can pack them full. But it's a bit scary that there's not a lot of control: if you have some big hog of a VM running on one core, it could very adversely affect what's going on in the other cores. Right now we have neither the visibility nor the control mechanisms to manage that. We've tried to figure out whether a VM is a hog - say, if I have two VMs fighting over the L2 cache. They have all this information down there that they could give us. It's a question of figuring out how to export it - I think [Intel VP] Pat [Gelsinger] used the term quality of service.
It's something I've been asking them about, along with I/O virtualisation. We've made great strides, but if you have a 10-gigabit NIC with tiny packets coming in, it can consume a whole CPU just looking at each of them, so it's really difficult to match line rate with virtualisation. The good news is that, working with Intel, AMD and others, they understand the constraints - there's VMotion, and you can't just grab hold of the VM and not let go.