Virtualisation is gaining popularity as a solution to server sprawl in the data centre because it offers immense opportunities for cost savings -- provided there is proper up-front analysis and planning. However, few organisations have considered the potential pitfalls of large-scale adoption.

Virtualisation, in some cases, can create more problems than it solves. It helps IT create a fluid environment where virtual servers can be turned on or off with a few mouse clicks. This is powerful, but dangerous if you don’t know the impact of each change beforehand.

From a planning and execution perspective, virtualising a small set of servers can be a relatively easy process. Those implementing a small-scale solution are usually familiar with all of the physical devices, the workload and configuration details, and the other key considerations involved. However, bringing virtualisation to an enterprise environment is a different story entirely. Large-scale virtualisation projects necessitate a data-driven approach that carefully evaluates elements such as asset identification, business considerations, technical constraints and workload patterns.

Most virtualisation initiatives are approached as tactical exercises in making sure the technology works at a base level. The focus becomes “do these assets fit together on this server?” without consideration of “should they reside on the same server?” When virtualisation is rolled out on a large scale, it needs to be part of an overall data centre consolidation strategy supported by sufficient analysis and planning. Testing in the lab is not enough: it only evaluates technical issues, without taking into account the important business factors the company will face when it moves into the production environment.

Planning for virtualisation is more than a sizing exercise. Virtualisation analysis and planning should include:

  • Managing inventory: the “one box per application” mentality -- along with continual hardware upgrades delivering greater computing power -- has driven a proliferation of diverse servers that are increasingly underutilised. Most organisations don’t have strong enough discipline around purchasing or asset management, making it difficult to inventory servers. Once organisations move into the virtual world -- where a logical machine can be created without any paper trail -- this problem grows exponentially. Organisations will be caught in the same trap they are currently in with physical servers: not knowing which servers exist, who created them, which applications they support and whether or not they are needed. This has a direct impact on licensing costs and ongoing management, which threatens the cost savings of virtualisation. Because of this, organisations need to put technologies and processes in place for tracking virtual servers and managing the rules around their implementation.

  • Digging deeper into workload patterns and constraints: although virtualisation planning is more than a sizing exercise, workload patterns must be carefully scrutinised to optimise stacking ratios and minimise operational risk. Some of the most important aspects of workload analysis, such as complementary pattern detection and time-shift what-if analysis, are often overlooked when determining whether workloads can be combined. This can lead to problems such as unnecessarily limiting the possible gain in efficiency, or failing to leave enough headroom to cushion peak demands on the infrastructure.

    Measuring aggregate utilisation against available capacity is another important factor. This provides critical insight into pre-virtualisation utilisation levels and patterns, and shows both the maximum capacity reduction that is possible and the estimated utilisation target for the virtualisation initiative (a simple sketch of this kind of stacking and utilisation analysis follows this list).

  • Technical constraints play a large role: the type of virtualisation solution being used will dramatically affect the technical limitations of the initiative. For example, when analysing for VMware ESX, relatively few constraints are placed on operating system configurations, since each VM will have its own operating system image in the final configuration. In these cases, hardware, network and storage requirements and constraints play a major role in determining the suitability of a given solution. Alternatively, applications placed into Solaris 10 Zones will “punch through” to a common operating system image, so analysis in this situation should also factor in operating system compatibility.

    Technical constraints are revealed through variance analysis of the source hardware, which will often uncover configuration outliers such as token ring cards, IVRs, proprietary boards, direct-connect printers, or other items that are not part of the standard build. Failing to take these into account could derail the initiative. Rules-based configuration analysis is also important for revealing the best regions of compatibility across the IT environment; these regions represent areas of affinity that are strong candidates for VM pools and clusters (see the compatibility sketch after this list).

  • Including business constraints in the analysis: organisations also need to consider availability targets, maintenance windows, application owners, compliance restrictions and other business issues. Most virtualisation planning tools provided by VM vendors don’t go beyond high-level configuration and workload analyses, yet businesses cannot afford to stop there. It’s not unheard of for a group of candidates assembled solely on technical constraints to share no single time in the calendar when the physical server can actually be shut down for maintenance (see the maintenance-window sketch after this list). Political issues also arise when departments don’t want to share hardware resources, and chargeback models may break down if resource sharing crosses certain boundaries. Sometimes these obstacles can be overcome, but when they can’t, key business constraints play a critical role in the analysis.

  • Security and compliance issues: organisations creating virtual servers also need to carefully consider their storage strategy. This means making sure rules governing access to the data are in place, and enacting a proper SAN architecture so that data is not stored on the virtual machines themselves. In the virtual world, a whole machine is essentially one file, and there is often a single administrator who has access to many of these files. This can inadvertently create a de facto “super superuser” role in the organisation, which has many security ramifications. Separating data from the application and ensuring tight security permissions is therefore essential to assure the integrity and privacy of critical data.

  • Compliance is another key issue that organisations need to consider. As with the security precautions above, it is often necessary to determine whether certain applications or data files should sit on the same server. Regulation prevents organisations in some industries from sharing customer data between divisions or departments. Good virtualisation analysis looks for these vulnerabilities, providing a risk matrix that helps the organisation ensure it is not violating any compliance or security rules (see the risk-matrix sketch after this list).
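
To make the workload analysis above more concrete, the following is a minimal sketch, in Python, of complementary pattern detection and aggregate utilisation measurement. The server names, hourly utilisation figures and the 80 per cent headroom threshold are illustrative assumptions, not output from any particular planning tool.

    # Minimal sketch: combining hourly CPU utilisation profiles to test
    # whether two workloads are complementary enough to stack on one host.
    # All names, figures and the headroom threshold are illustrative.

    from typing import List

    HEADROOM_LIMIT = 80.0  # leave roughly 20% headroom to cushion peaks

    # 24 hourly CPU utilisation samples (%) per server -- hypothetical data.
    profiles = {
        "web01":   [10, 8, 7, 6, 6, 7, 15, 35, 55, 60, 62, 58,
                    55, 57, 60, 62, 65, 70, 60, 45, 30, 22, 15, 12],
        "batch01": [60, 65, 68, 66, 60, 50, 25, 12, 8, 6, 6, 7,
                    8, 7, 6, 6, 7, 8, 10, 15, 28, 40, 50, 58],
    }

    def combined_peak(a: List[float], b: List[float]) -> float:
        """Peak of the hour-by-hour sum -- lower than the sum of the
        individual peaks when the two patterns are complementary."""
        return max(x + y for x, y in zip(a, b))

    def aggregate_utilisation(profile: List[float]) -> float:
        """Average utilisation, i.e. how much of the box is actually used."""
        return sum(profile) / len(profile)

    web, batch = profiles["web01"], profiles["batch01"]

    naive_peak = max(web) + max(batch)       # worst case: peaks coincide
    actual_peak = combined_peak(web, batch)  # real hour-by-hour stacking

    print(f"Sum of individual peaks: {naive_peak:.0f}%")
    print(f"Combined hourly peak:    {actual_peak:.0f}%")
    print(f"Average utilisation web01:   {aggregate_utilisation(web):.0f}%")
    print(f"Average utilisation batch01: {aggregate_utilisation(batch):.0f}%")

    if actual_peak <= HEADROOM_LIMIT:
        print("Workloads look complementary: candidate for stacking.")
    else:
        print("Combined peak exceeds the headroom limit: do not stack as-is.")

The point of the comparison is that the combined hourly peak of two complementary workloads can be far lower than the sum of their individual peaks, which is precisely what makes stacking them attractive.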
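
The variance analysis and rules-based compatibility checks described under technical constraints can be sketched in the same spirit. The attributes, rules and server records below are hypothetical; a real analysis would evaluate far more configuration detail, and a Solaris Zones scenario would add rules on operating system compatibility.

    # Minimal sketch: flag configuration outliers and group servers into
    # regions of compatibility using simple rules. The attributes, rules
    # and records are illustrative assumptions, not a real rule set.

    from collections import Counter
    from itertools import combinations

    servers = [
        {"name": "app01", "os": "Windows 2003", "nic": "ethernet", "storage": "SAN"},
        {"name": "app02", "os": "Windows 2003", "nic": "ethernet", "storage": "SAN"},
        {"name": "app03", "os": "Solaris 10", "nic": "ethernet", "storage": "SAN"},
        {"name": "legacy01", "os": "Windows NT", "nic": "token ring", "storage": "local"},
    ]

    def find_outliers(servers, attribute):
        """Report servers whose value for an attribute differs from the majority."""
        counts = Counter(s[attribute] for s in servers)
        majority, _ = counts.most_common(1)[0]
        return [s["name"] for s in servers if s[attribute] != majority]

    # Simple compatibility rules: each returns True if a pair may share a host pool.
    rules = [
        lambda a, b: a["nic"] == "ethernet" and b["nic"] == "ethernet",
        lambda a, b: a["storage"] == "SAN" and b["storage"] == "SAN",
    ]

    def compatible(a, b):
        return all(rule(a, b) for rule in rules)

    print("NIC outliers:", find_outliers(servers, "nic"))
    print("Storage outliers:", find_outliers(servers, "storage"))

    for a, b in combinations(servers, 2):
        if compatible(a, b):
            print(f"{a['name']} and {b['name']} fall in the same compatibility region")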
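
The maintenance-window problem raised under business constraints reduces to interval intersection: if the proposed co-residents share no common window, the host can never be taken down cleanly. The window times below are, again, illustrative assumptions.

    # Minimal sketch: check whether proposed co-residents share any common
    # maintenance window. Windows are (start_hour, end_hour) pairs on a
    # 24-hour clock for a given day; all values are illustrative.

    maintenance_windows = {
        "crm01":     (1, 5),    # 01:00-05:00
        "billing01": (2, 6),    # 02:00-06:00
        "hr01":      (22, 24),  # 22:00-24:00
    }

    def common_window(windows):
        """Return the interval shared by all windows, or None if there is none."""
        start = max(w[0] for w in windows)
        end = min(w[1] for w in windows)
        return (start, end) if start < end else None

    candidates = ["crm01", "billing01", "hr01"]
    overlap = common_window([maintenance_windows[name] for name in candidates])

    if overlap:
        print(f"Shared maintenance window: {overlap[0]:02d}:00-{overlap[1]:02d}:00")
    else:
        print("No common maintenance window -- this grouping cannot be taken "
              "down for host maintenance without renegotiating windows.")

With these three hypothetical guests the check fails, which is exactly the situation described above: a technically sound grouping with no time in the calendar when the physical server can be shut down.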
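
Finally, the compliance risk matrix can be thought of as a cross-tabulation of proposed co-residents against separation rules. The guests, departments and the single data-separation rule below are assumptions for the purpose of illustration.

    # Minimal sketch: build a co-location risk matrix that flags guest pairs
    # violating a data-separation rule. Guests, departments and the rule are
    # illustrative assumptions only.

    guests = {
        "retail-db": {"department": "retail banking", "holds_customer_data": True},
        "wealth-db": {"department": "wealth management", "holds_customer_data": True},
        "intranet":  {"department": "corporate", "holds_customer_data": False},
    }

    def violates_separation(a, b):
        """Example rule: customer data from different divisions must not share a host."""
        return (a["holds_customer_data"] and b["holds_customer_data"]
                and a["department"] != b["department"])

    names = list(guests)
    print("Risk matrix (X = separation rule violated):")
    print(" " * 12 + "".join(f"{n:>12}" for n in names))
    for a in names:
        row = f"{a:<12}"
        for b in names:
            flag = "X" if a != b and violates_separation(guests[a], guests[b]) else "."
            row += f"{flag:>12}"
        print(row)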

Conclusion

Conducting “what if” scenarios on live production servers by testing potential solutions and backing them out again is asking for trouble. Organisations planning large-scale virtualisation initiatives need to invest in proper planning and analysis up front to ensure that critical factors that can’t be found in any lab are accounted for. By recognising that virtualisation is not simply a tactical exercise in assessing workload activity, but a comprehensive resource optimisation strategy that requires input from the business, organisations will be much more likely to realise the promised cost savings. The best approach is to perform the analysis beforehand in a safe environment to avoid pitfalls -- ensuring a coherent, stable infrastructure from the moment of rollout.

Andrew Hillier is co-founder and CTO of CiRBA. You can reach him through www.cirba.com.