You are about to start deploying a new LAN infrastructure, cut your remote sites onto an MPLS WAN, deploy a new ERP application, or upgrade all your users to Windows XP. You know you will have to do testing before everything goes live, but how do you begin to write a test plan? What should you include? How can you specify what merits a successful test?

Plan the plan

Don’t just start thinking about the test plan a couple of weeks before the go-live date. Bear the testing in mind from the project start—it will help you to put together a project requirements document, as it should focus you on what the new system, whatever it is, is supposed to bring to your organisation.

Why are you making this change? If it is to gain performance improvements, increase security, or add scalability, then these can be included as test requirements. As the project scope changes, make sure that this is reflected in the test document. You don’t need to start writing the tests yet, though, as they might become obsolete if started too early in the project life cycle. Make sure that testing is included and resourced as part of the overall project.

Document structure

Regardless of what you are testing, there is a structure that you can use as a template, and expand or minimise as appropriate. There are actually industry standards for test plans, such as the IEEE’s Standard for Software Test Documentation (IEEE Std 829), but unless you are a software house, it’s unlikely you need to adhere to it too rigidly.

A lot of people will only read the first couple of pages of your test report, so it’s important that you start with an Introduction, stating what you are testing, and why.

This should be followed by an Executive Summary, which will, without too much detail, give the results of the tests, the conclusions and recommendations. This summary cannot be filled in until the testing is finished. The next section of the test plan is where you get to the real detail.

The Plan Scope must explicitly state not only the devices and systems under test (and those that are not), but also exactly what functions and features will and will not be tested; when they will be tested (can some tests be done on a standalone system, for instance, or does it all have to wait until everything is complete?); how the tests will be carried out; the expected results (although these are covered in more detail in the following sections); and the pass/fail criteria.

This sounds much more straightforward than it really is, as often it’s difficult to specify what merits a pass, but there must be agreement before you test as to what is acceptable, and on who has the final decision if there is no empirical result from the testing. It is also vital to state explicitly what you’re NOT testing, to make sure no assumptions are made that might lead to a major piece of testing not being done by anyone.
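To make that concrete, a pass/fail entry for a hypothetical WAN migration might be worded along these lines (the figures and names are invented for illustration):

    Test 3.2:  Branch-to-HQ file transfer performance
    Pass:      A 100MB file copy from the branch file server to HQ
               completes in under four minutes on three consecutive runs.
    Fail:      Any run exceeds four minutes, or does not complete.
    Arbiter:   Where results are borderline, the network design
               authority has the final say.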

Next comes the Test Environment, which provides the complete detail on the configuration and setup of the devices being tested, with physical and logical topology diagrams if appropriate, information on the systems under test, addressing schemes, routing protocols, software levels and so on. Actual configurations can be included in appendices to make this section easier to read through.
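As a sketch, the environment summary for the OSPF example later in this article might open with something like the following (the software level, neighbour router names and appendix references are illustrative, not taken from a real plan):

    Systems under test: Router_A and its three mesh neighbours, Cisco IOS
    Topology:           Full mesh over ATM PVCs (diagram, Appendix A)
    Addressing:         Router IDs 172.31.0.x; ATM links 172.16.0.x/30
    Routing protocol:   OSPF process 172, all interfaces in area 0
    Configurations:     Full device configs, Appendix B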

Test Results and deliverables

The Test Results part of the plan is the bit that you will then fill in as you work through the test. In the planning phase, it should list every test that you will carry out, how you will test it, and the expected result. During the testing, you must follow these instructions exactly, and record the results, which may be outputs from a network analyser, timings, device command line outputs, or whatever is most appropriate for that specific test.

It is far easier to split the overall testing into a large number of very small tests, grouped into logical sections, which can be run sequentially. You should carry out a ‘dry run’ of the tests to determine that results can be achieved the way you expect, and to see if any other relevant information can be recorded.
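A single entry in one of those groups might look something like this hypothetical sketch (the timings are invented, and the Actual and Pass/Fail fields stay empty until the test is run):

    Test 2.4:  Adjacency re-forms after link failure
    Procedure: Shut down ATM0/0.1 on Router_A, wait 60 seconds, then
               bring it back up. Capture ‘show ip ospf neighbor’
               before, during and after.
    Expected:  Neighbour 172.31.0.2 returns to the FULL state within
               60 seconds of the interface coming back up.
    Actual:    (recorded during testing)
    Pass/Fail: (completed during testing)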

The idea of this very precise testing and recording is to ensure that you can repeat these tests at any time: if the system is changed, for instance, you may need to rerun part of the tests to see what impact the change has had. These results will form the basis of why you passed or failed the system, so you must know exactly what you did and what happened.
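Where the outputs lend themselves to it, a short script can make those reruns quicker and more consistent. The sketch below, in Python, checks a saved ‘show ip ospf neighbor’ capture to confirm that every expected neighbour is in the FULL state; the file name and neighbour list are assumptions for the example, not part of any particular toolset:

    # check_ospf_neighbours.py -- illustrative sketch only.
    # Confirms every expected OSPF neighbour appears in a saved
    # 'show ip ospf neighbor' capture in the FULL state.

    EXPECTED = {"172.31.0.2", "172.31.0.4", "172.31.0.5"}

    def neighbours_not_full(capture_file):
        """Return the expected neighbours not seen in the FULL state."""
        full = set()
        with open(capture_file) as capture:
            for line in capture:
                fields = line.split()
                # Data lines look like:
                # 172.31.0.2  1  FULL/ -  00:00:37  172.16.0.2  ATM0/0.1
                if (len(fields) >= 3 and fields[0] in EXPECTED
                        and fields[2].startswith("FULL")):
                    full.add(fields[0])
        return EXPECTED - full

    if __name__ == "__main__":
        missing = neighbours_not_full("router_a_neighbours.txt")
        print("PASS" if not missing
              else "FAIL: " + ", ".join(sorted(missing)))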

As an example, the extract below is one of several dozen individual tests/results taken from a functionality test to prove the correct operation of routers in a mesh topology:

Objective

The objective of this test is to verify that OSPF adjacencies are correctly formed between the core routers. This should culminate in a link-state database exchange.

Test environment:

As shown below. Router outputs will be taken from Router_A. Debugs of the adjacency being formed will be taken and any anomalies recorded. Verification will be made that the routers record the correct status of the adjacency.

Test result:

Router_A#debug ip os adj
...snip...
1d06h: OSPF: 172.31.0.2 address 172.16.0.2 on ATM0/0.1 is dead
1d06h: OSPF: 172.31.0.2 address 172.16.0.2 on ATM0/0.1 is dead, state DOWN
1d06h: %OSPF-5-ADJCHG: Process 172, Nbr 172.31.0.2 on ATM0/0.1 from FULL to DOWN, Neighbor Down: Dead timer expired
1d06h: OSPF: Build router LSA for area 0, router ID 172.31.0.1, seq 0x80000016
1d06h: OSPF: 2 Way Communication to 172.31.0.2 on ATM0/0.1, state 2WAY
1d06h: OSPF: Send DBD to 172.31.0.2 on ATM0/0.1 seq 0x927 opt 0x42 flag 0x7 len 32
...snip...
Router_A#sh ip os nei
Neighbor ID     Pri   State      Dead Time   Address       Interface
172.31.0.4        1   FULL/  -   00:00:32    172.16.0.10   ATM0/0.3
172.31.0.5        1   FULL/  -   00:00:38    172.16.0.6    ATM0/0.2
172.31.0.2        1   FULL/  -   00:00:37    172.16.0.2    ATM0/0.1

At the end of each section, you can show a summary table of the sub-tests, with a column for pass/fail, to show clearly whether the results actually recorded met expectations. This will make up the largest section of the test plan, and will take the longest to compile. Several iterations will be required to ensure you have detailed every relevant test needed to show whether the system performs as designed.
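For instance, a section summary for the adjacency tests might end up looking like this (references, timings and results invented for illustration):

    Ref   Test                                    Expected       Result   Pass/Fail
    2.1   Adjacency forms on power-up             FULL in <60s   44s      Pass
    2.2   Dead timer expiry detected              DOWN in <40s   40s      Pass
    2.4   Adjacency re-forms after link failure   FULL in <60s   52s      Pass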

You should also include a section on Risks and Contingencies. Particularly if the testing is to be run over a longish period of time, what outside factors could affect it? If only one person knows how to drive the test set, what happens if they go sick? If the whole project slips, will a delay in being able to start testing run into holidays? Is the software supplier just about to release a new service pack or code upgrade that will need to be retested? Is a company change freeze in the offing?

You may not be able to do anything about any of these, but try to be aware of them, and if you can’t provide solutions, make sure everyone involved knows any possible pitfalls. You should also look at any training requirements to carry out this testing, or extra resources to cover the time when your staff are running tests, and therefore not looking after the existing network. Detail who is responsible for each part of the testing—including all this documentation—so it can be scheduled into existing workloads.

And finally, for completeness, you should include appendices with any additional but non-core information: device configs, a glossary, references, document change history, approvals, data sheets and so on.

If you follow this type of structure, it will help you to plan your testing more fully. Filling in each section may still be complex and time-consuming, but this layout should force you to at least consider all aspects.