On the eve of HP's launch of a new range of Integrity servers containing Intel's Montecito chip, we spoke to Ken Surplice, European product manager for HP's Itanium-based products.

Q: HP is far and away the biggest supplier of Itanium systems, with few other vendors in the market. Is that a good sign? A: We would love to have more people in the ecosystem -- witness the investment we made in the $10 billion Itanium Solutions Alliance. But the situation keeps us happy enough. Yes, we would like to recoup the investment [that HP made along with Intel in developing Itanium]. But as we move to outsourcing chip supply, it's affordable because we don't have to spend on chip development.

Q: What's the justification for anyone buying into Itanium now? A: The latest generation -- Montecito -- is a big step forward in terms of driving down costs. The entry-level market -- that's four-way and below -- is hotting up nicely.

Yes, there's a huge overlap between high-end Xeon systems and Itanium, but each OS segment is big business -- Windows and Linux will carry on, and that business takes care of itself. If customers want a Xeon-based server, they go and buy one.

This refresh, plus the two new servers, takes away the performance issues. We are delivering up to four times better price-performance -- but it's really about virtualisation, ease of use, and performance per watt. We have a 2x performance-per-watt advantage and have taken the lead over [IBM's] Power.

It's not just us saying that. Customers who tested Montecito against the previous-generation Madison chips found a 40 per cent performance boost. We're now getting maximum performance out of the chip -- more than double the performance with its dual cores.

And the infrastructure refresh means that all servers are ready for next-generation Itanium chips for the next two or three years -- all it will need is a daughterboard swap. We've also backed two horses at the entry level: the new Montecito in the previous servers and, in parallel, new entry-level two- and four-way servers.

After the launch, everything lines up: all servers are ProLiant-compatible, the management software is there, and they have common storage technology and iLO 2. We can attract new business now.

Q: Isn't software more important than hardware now that we have multiple cores but precious little parallel software to run on them? A: We've found that powerful servers aren't being fully utilised, yet we do have tools to address medium- and large-scale computing -- but not many customers are using them. We have very capable hardware -- but will customers take virtualisation and other techniques and turn them on?

The days when customers just bought another server whenever they needed to run an application are over. They need to make the Lego building blocks all the same colour, and virtualisation is the key piece. Though it's not parallel programming, customers recognise that they need to utilise their high-cost CPUs.

Software we've been pioneering on HP-UX -- the virtual machine client, which now runs on other OSes -- allows virtualisation across hardware boundaries. With virtual partitions using HP VM, you can use management tools, load balancing, and so forth.

Essentially, we have massive computing power and can manage it using tools such as Systems Insight Manager and iLO 2. It means you can manage global workloads in business terms. You say that you need an application delivering x transactions per second over a particular period on this schedule, and it will migrate resources to maintain the SLA according to business priorities. It can manage VMs and partitions, can fail over, use clusters, even give away one quarter of a CPU on a two-way system.
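For readers unfamiliar with how such SLA-driven policies are expressed, the sketch below shows roughly what an HP-UX Workload Manager (WLM) configuration of that shape looks like. The group name, metric name, and thresholds are invented for illustration, and exact keywords may vary by WLM version -- treat it as a rough sketch rather than a definitive example.

    # Illustrative WLM policy: keep a hypothetical "orders" application
    # inside its response-time goal by shifting CPU entitlement.
    prm {
        groups = OTHERS : 1, orders_grp : 2;   # PRM resource groups
    }

    # Service-level objective: top priority; CPU entitlement floats
    # between 20% and 80% to hold response time under two seconds.
    slo orders_slo {
        pri = 1;
        mincpu = 20;
        maxcpu = 80;
        entity = PRM group orders_grp;
        goal = metric orders_resp_time < 2.0;
    }

    # Feed the response-time metric to WLM via its data collector.
    tune orders_resp_time {
        coll_argv = wlmrcvdc;
    }

WLM's arbiter then grows or shrinks the group's entitlement within those bounds as the reported metric drifts -- the "migrate resources to maintain the SLA" behaviour Surplice describes.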

The challenge is to get this across to our customers. They want simplification -- something that makes it easy to manage systems. Whether their technology is good or bad, other vendors are putting applications into restricted stovepipes -- they can't balance workloads across mixed environments with Unix, Linux and Windows.

Q: Whatever happened to HP's much-trumpeted support for utility computing? A: You mean the Utility Data Centre? It died a death -- actually, it's still running in Bristol, but it isn't really addressing the problems of most customers; we need to move it to volume scale. All the techniques we learned, however, have gone into Workload Manager. Virtualised server pools have become a reality, and the underlying hardware doesn't matter.

One large European telco told me recently that none of this is new -- it's like the mainframes of old. Except that the issue they raised relates to utility computing and Sarbanes-Oxley. The CEO needs to sign an annual declaration to say that he knows where the company's data is at all times. With utility computing, can he say that?

That doesn't mean we can't deliver pay-per-use software metering and capacity on demand. The biggest problem is contract maintenance -- it's much simpler with utility computing -- but these are early days, even though customers are ready.

Q: Has OpenVMS gone away yet? Can it continue to exist in an Itanium world? A: People find that 16- and 32-way systems running Unix with virtualised Windows and Linux partitions run much quicker. But we recently held a conference that was full, with people queuing outside, and two days of very interesting discussion. The problem is that many don't know about the alternatives that are now available -- for example, Oracle 10g is here now.

What's more, Windows and Linux have had tools such as capacity planning for some time. OpenVMS people haven't had such goodies until now.

Q: Does HP regret going into Itanium? A: I can't speak for HP, but I was an Alpha man. Can we give Alpha customers running OpenVMS a better place to go? Yes we can. They'll get four times better price-performance than OpenVMS on Alpha by moving to Integrity. And customers have proved that they see things the same way: in the second quarter, Integrity servers outsold Superdomes.