Unlike in years past, performance testing is today an unavoidable part of quality assurance programmes. Organisations understand the adverse impact that poorly performing IT systems can have on their revenues and reputation, and want to guard against such a scenario. However, Quality Assurance (QA) teams do face challenges when it comes to performance testing.

Typically, QA teams are confronted with the procurement of expensive licenses for performance testing tools, or with the unavailability of a newer, compatible version of the tool. The usual workaround of bringing in consultants from the company that developed the performance testing tool is not really an option due to security and financial constraints. So is there a more practical solution that QA teams can adopt to overcome these challenges? There actually is!

QA teams should consider utilising their functional testing tools in such scenarios. Since performance testing primarily consists of load generation and monitoring of system/application behaviour, functional test tools can be used for the former, while the latter can be tracked using easily available utilities (PerfMon, GMon, inbuilt UNIX commands etc.). However, teams need to be clear about their priorities before adopting this approach.

This approach works well for applications where the primary purpose of performance testing is to verify database optimisation, response to data inputs, and CPU and memory utilisation of servers. Further, certain characteristics make some applications more conducive to this alternative than others:

  • The number of users accessing the application simultaneously is not too high (around 300 or fewer)
  • The overlap between functional and load test scenarios is high
  • The application is built on object-oriented technologies, which allows functional test tools to recognise its screen controls

The approach itself can be segregated into three stages:

Proof of concept to test feasibility: Utilising functional testing tools for performance testing involves making use of terminal servers to generate or simulate load conditions. First, the performance of the application under test is benchmarked by executing a sample test scenario from five desktops individually. Next, the terminal server is introduced to check whether there is any degradation in the performance of the system. This requires loading the functional test tool on to the terminal server and accessing it through individual desktops (or by opening five concurrent sessions through the same desktop) to execute the test scenario. Further, a simple script can be written to record the time required to complete each activity/transaction and collate the data in an Excel sheet for analysis at a later point in time.
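To illustrate that timing script, here is a minimal Python sketch. Most functional test tools have their own scripting language, so treat this as an illustration of the logic rather than a drop-in script; the transaction name and the sample_login placeholder are assumptions for the example, not part of the original approach.

```python
import csv
import time
from datetime import datetime

def run_transaction(name, action):
    """Execute a single scripted transaction and return its elapsed time.

    `action` stands in for whatever flow the functional test tool drives
    (for example a login or order-entry script); it is a placeholder here.
    """
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "transaction": name,
        "response_time_sec": round(elapsed, 3),
    }

def append_result(row, path="response_times.csv"):
    """Collate results into a CSV file that can be opened in Excel later."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:  # write the header only on the first run
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    # Placeholder transaction; in practice the functional tool's recorded
    # script would perform the real UI actions.
    def sample_login():
        time.sleep(0.5)

    append_result(run_transaction("login", sample_login))
```

Each of the five desktops (or concurrent sessions) appends to its own file, and the files are merged afterwards to compare the benchmarked and terminal-server runs.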

Execution of simple test scenarios: QA teams can segregate test scenarios into simple test scenarios, which require concurrent completion of a single type of transaction, and complex test scenarios, which require concurrent completion of different transaction types. Different monitoring tools can be used to track the performance of the system. Most servers likely to be used as terminal and application servers have inbuilt utilities that can track CPU usage, memory usage and so on. For instance, Windows-based servers come with the PerfMon utility, which helps track these basic parameters. Additionally, Java utilities loaded on the application server can help track parameters such as the duration and frequency of garbage collection.
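Where a built-in utility's output is awkward to collate with the response-time data, a small sampler can run alongside the load. The sketch below is an illustration only: it assumes the third-party psutil package is installed and simply writes periodic CPU and memory readings to a CSV; it is not a substitute for PerfMon counters or JVM garbage-collection logs.

```python
import csv
import time

import psutil  # third-party package: pip install psutil

def sample_resources(duration_sec=300, interval_sec=5, path="resource_usage.csv"):
    """Sample CPU and memory utilisation at a fixed interval during a load run."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_sec", "cpu_percent", "memory_percent"])
        start = time.time()
        while time.time() - start < duration_sec:
            cpu = psutil.cpu_percent(interval=interval_sec)  # averaged over the interval
            mem = psutil.virtual_memory().percent
            writer.writerow([round(time.time() - start), cpu, mem])

if __name__ == "__main__":
    # Sample for one minute at five-second intervals while the scenario runs.
    sample_resources(duration_sec=60, interval_sec=5)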

Execution of complex test scenarios: Complex test scenarios are dealt with separately since multiple terminal servers may be required to test different types of transactions concurrently. Even if the business teams haven't explicitly shared performance requirements for complex test scenarios, QA teams can proactively test the systems and help the business teams set performance benchmarks. Resource consumption is tracked in the same manner as during the execution of simple test scenarios.
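To show how different transaction types can be interleaved in a single run, here is a minimal Python sketch. The create_order and search_customer functions and the mix ratios are hypothetical placeholders for the functional tool's recorded flows, and the worker count would be tuned to the terminal server's capacity.

```python
import csv
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical transaction scripts; each would normally invoke the
# functional test tool's recorded flow for that transaction type.
def create_order():
    time.sleep(random.uniform(0.5, 1.5))

def search_customer():
    time.sleep(random.uniform(0.2, 0.8))

# (name, callable, number of concurrent executions) for each transaction type.
TRANSACTION_MIX = [("create_order", create_order, 10),
                   ("search_customer", search_customer, 20)]

def timed(name, action):
    start = time.perf_counter()
    action()
    return name, round(time.perf_counter() - start, 3)

def run_mix(path="mixed_scenario_times.csv"):
    """Run different transaction types concurrently and record response times."""
    jobs = [(name, fn) for name, fn, count in TRANSACTION_MIX for _ in range(count)]
    random.shuffle(jobs)  # interleave transaction types as real users would
    with ThreadPoolExecutor(max_workers=30) as pool, open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["transaction", "response_time_sec"])
        for name, elapsed in pool.map(lambda job: timed(*job), jobs):
            writer.writerow([name, elapsed])

if __name__ == "__main__":
    run_mix()
```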
 
The approach outlined above is based on actual experiences, and so are the benefits. By our estimates, at least 20 percent of the cost of testing can be avoided by eliminating the need to purchase licenses for performance testing tools. Similarly, hardware costs can be reduced by 30 percent by avoiding the purchase of dedicated servers for simulating load conditions.

The approach is also beneficial in terms of the effort required for testing, since the functional automation scripts can be reused. Using functional tools for performance testing eliminates the need to script in a different language, so the scripts need only slight tweaking before they can be used for performance testing.

We strongly recommend that all QA teams explore this approach before starting on their performance testing initiatives. After all, the benefits of faster time-to-market and reduced cost of testing provided by this innovative approach are more compelling than what the traditional alternatives have to offer.

Surya Prakash is Group Project Manager at Infosys. He has more than 13 years of experience in the IT industry. His primary responsibilities include managing a Test Centre of Excellence for a leading European Bank.