Many companies hesitate to explore cloud computing because of concerns about the security, reliability, and cost of an always-on cloud-based application versus one hosted internally. For companies in this position, dev/test is an ideal initial cloud application. Moreover, because of its unique characteristics, a cloud environment can actually meet dev/test requirements better than the internal option.
Therefore, if you've been waiting to explore cloud computing, give some thought to using dev/test as your initial toe in the water.
Many dev/test efforts are poorly served by existing infrastructure operations. If you think about it for a minute, that makes perfect sense for the following reasons:
Dev/test is underfunded with respect to hardware: Operations gets budget priority. Companies naturally devote the largest share of their IT budget to keeping vital applications up and running. Unfortunately, that means dev/test is usually underfunded and cannot get enough equipment to do its job.
Infrastructure objectives differ: Dev/test wants to be agile, while operations wants to be deliberate. When a developer wants to get going, he or she wants to get going now. Operations, however (if it's well-managed), has very deliberate, documented, and tracked processes in place to ensure nothing changes too fast and anything that does change can be audited.
Infrastructure use patterns differ: Dev/test use is spiky, while operations seeks smooth utilization to increase hardware use efficiency. A developer will write code, test it out, and then tear it down while doing design reviews, whiteboard discussions, and so on. By its very nature, development is a spiky use of resources. Operations, of course, is charged with efficiency, aiming for the lowest total cost of operations.
Operations doesn't want dev/test to affect production systems: Putting development and test into the production infrastructure, even if quarantined via VLANs, holds the potential of affecting production app throughput, anathema to operations groups. Consequently, dev/test groups are often hindered in their attempts to access a production-like environment.
Dev/test scalability and load testing affect production systems: If putting dev/test in a production environment holds the potential of affecting production apps, what about when dev/test wants to test how well the app under development responds to load testing or to variable demand? One of the most necessary development tasks, assessing how well an application holds up under pressure, is difficult or impossible in many environments. Many of the most important bugs surface only under high system load; if they aren't found during development, they will surface in production. Moreover, in constrained environments it's difficult to reproduce a production topology, which means it's hard to assess, before going into production, the impact of network latency, storage throughput, and so on.
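To make the load-testing point concrete, here is a minimal sketch of the kind of measurement dev/test needs to run but can rarely run safely against shared production infrastructure. It is written in Python, and the `handle_request` function is a hypothetical stand-in for the application under test; in practice you would point a harness like this (or a dedicated load generator) at a production-like deployment spun up in the cloud.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Hypothetical application handler standing in for the app under test."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated processing time
    return {"status": 200, "payload": payload}

def run_load_test(concurrency, total_requests):
    """Fire total_requests at the handler from `concurrency` worker threads
    and collect per-request latencies and error counts -- exactly the kind
    of high-load behavior that surfaces bugs before production does."""
    latencies = []

    def one_request(i):
        start = time.perf_counter()
        response = handle_request(i)
        latencies.append(time.perf_counter() - start)
        return response["status"]

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(one_request, range(total_requests)))

    errors = sum(1 for s in statuses if s != 200)
    latencies.sort()
    return {
        "requests": total_requests,
        "errors": errors,
        "p50_ms": latencies[len(latencies) // 2] * 1000,
        "p99_ms": latencies[int(len(latencies) * 0.99)] * 1000,
    }

report = run_load_test(concurrency=20, total_requests=200)
```

Because cloud capacity can be rented by the hour, a harness like this can be scaled up to production-sized load for an afternoon and then torn down, something the internal option rarely permits.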
All of these reasons are examples of how current infrastructure management practices optimize toward supporting existing production apps. Left unsaid, most of the time, is how this very reasonable approach can eventually cause more problems for production systems: bugs are not caught early in the development cycle and surface only in production, where they are most expensive to fix.
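The economics of the spiky-usage argument are easy to check with back-of-the-envelope arithmetic. The figures below (server count, hourly cost, utilization) are illustrative assumptions, not vendor pricing; the point is only the shape of the comparison between always-on internal capacity and pay-per-use cloud capacity.

```python
# Back-of-the-envelope comparison of always-on, internally hosted capacity
# versus pay-per-use cloud capacity for a spiky dev/test workload.
# All numbers are illustrative assumptions, not vendor pricing.

HOURS_PER_MONTH = 730

def monthly_cost_always_on(servers, cost_per_server_hour):
    """Internal option: capacity is sized for peak and paid for 24x7,
    whether or not dev/test is actually using it."""
    return servers * cost_per_server_hour * HOURS_PER_MONTH

def monthly_cost_on_demand(servers, cost_per_server_hour, utilization):
    """Cloud option: the same peak capacity, but paid for only during
    the fraction of the month dev/test actually runs workloads."""
    return servers * cost_per_server_hour * HOURS_PER_MONTH * utilization

# A ten-server test bed, busy roughly 15% of the time (bursts of test runs).
internal = monthly_cost_always_on(10, 0.50)          # -> 3650.0
cloud = monthly_cost_on_demand(10, 0.50, 0.15)       # -> 547.5
```

At 15 percent utilization the on-demand bill is a small fraction of the always-on one, which is why spiky dev/test workloads are such a natural fit for cloud pricing.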