Sharam Hekmat, PragSoft Corporation
• Condition coverage requires that the true/false outcome of every condition in the code be exercised at least once. This is not the same as branch coverage, since the logical expression controlling a branch may, for example, be a conjunction of multiple conditions.
• Boundary value testing. This is a black box testing technique, where test cases are written such that they involve input or output values that are around the boundary of their permissible range. For example, if a function takes a ‘day of month’ argument then values such as -1, 0, 1, 30, 31, 32 are around the boundary of permissible values and would be good input test data.
• Cause-effect graphing. This is a black box technique that involves mapping out the logical relationships between causes (input values representing a meaningful condition) and effects (output values representing a corresponding meaningful outcome). An example of a cause might be a ‘positive amount in a transaction’ and its corresponding effect may be ‘an account being credited’. Once all the possible causes and effects have been listed, test cases are designed such that each cause and all its potential effects (and each effect and its potential causes) are exercised at least once.
UML Process 49 Copyright © 2005 PragSoft
• Domain analysis. This is a black box technique that involves analysing the domain of each input value and subdividing it into sub-domains, where each sub-domain contains ‘similar values’ in the sense that any one of these values used in a test case will be as good as any other value in that sub-domain. For example, the domain of an ‘amount’ input value might be subdivided into the ranges 0–1,000, 1,001–100,000, and over 100,000; these three sub-domains being different from an authorisation point of view. Based on the outcome of domain analysis, a minimum number of test cases can be designed that will have a high yield, by avoiding the use of ‘similar’ values in separate test cases.
• Error guessing. Error guessing involves using your imagination to come up with test cases that are likely to break the artefact being tested. There are no particular rules in this technique, except that the more unusual and nonsensical the test cases, the more effective they are likely to be.
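The boundary value technique described above can be sketched in code. The `validate_day_of_month` function below is a hypothetical stand-in for the artefact under test, not part of this handbook:

```python
# Hedged sketch: boundary-value test data for a hypothetical
# 'day of month' validator. Test values cluster around the
# boundaries of the permissible range, mixing valid values
# (1, 30, 31) with invalid ones (-1, 0, 32).
def validate_day_of_month(day):
    """Return True if 'day' is a permissible day-of-month value."""
    return 1 <= day <= 31

boundary_cases = [(-1, False), (0, False), (1, True),
                  (30, True), (31, True), (32, False)]

for day, expected in boundary_cases:
    assert validate_day_of_month(day) == expected
```

Note that half of the cases deliberately use invalid values, in line with the advice on erroneous input data below.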
One point that is often overlooked by test case designers is the use of erroneous input data. It is important that testing should involve at least as many invalid or unexpected input values as valid or expected ones. Once an application goes live, it will be used in ways that go beyond the original expectations of its designers. It is important that the application behaves gracefully in face of invalid or unexpected input data.
6.1.4 Testing Stages During its development lifecycle, software is subjected to testing at a number of stages, as summarised by the following table.
As indicated by this table, integration testing comes in various forms and happens at a number of stages. Unfortunately, most publications on testing refer to integration testing as if it only happens once in the development lifecycle, and this has led to much confusion. The confusion can be avoided by bearing in mind that integration testing should happen whenever a number of artefacts of similar characteristics are combined, be they classes, components, modules, application tiers, or entire applications.
System testing is by far the most labour-intensive testing stage, because there are so many different types of tests that need to be performed. Because of the specialised nature of system testing, it must be performed by a dedicated test team that specialises in this area. In particular, it should never be done by the developers themselves, since they have neither the required expertise nor the appropriate psychological profile to do it effectively.
In most projects, the customer relies on the developers to help them with creating an acceptance test plan. Because of the extensive nature of system testing, virtually everything that needs to be verified in acceptance testing is likely to have been tested for in system testing. As a result, acceptance testing often involves a subset of the test cases used for system testing.
6.1.5 Regression Testing With any type or stage of testing, one has to deal with the problem of tested artefacts being modified. The dilemma is that, on the one hand, we are aware that the modifications may well have introduced new defects and, on the other hand, we do not want to incur the overhead of completely retesting the artefact every time we make changes to it.
The aim of regression testing is to check whether modifications have caused the artefact to regress (i.e., have introduced new defects into it). It should be obvious that unless regression testing can be done quickly, the whole development cycle grinds to a halt. There are two ways of performing regression testing:
• By selecting a high yield subset of the original tests and only running these.
• By using appropriate testing tools to automate the testing process, so that the tests can be performed with minimal human intervention.
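The first option, selecting a high-yield subset, can be sketched as follows. The test functions and the tagging scheme are hypothetical, intended only to show the shape of the idea:

```python
# Hedged sketch: re-running only a high-yield subset of a regression
# suite after a modification. Test names and tags are hypothetical.
def test_credit_account():
    return True   # stands in for a real, high-yield test body

def test_report_layout():
    return True   # stands in for a low-yield cosmetic test

# Each entry pairs a test with a yield tag assigned by the test designer.
suite = [(test_credit_account, "high_yield"),
         (test_report_layout, "low_yield")]

def run_regression(suite):
    """Run only the high-yield subset; record pass/fail per test."""
    results = {}
    for test, tag in suite:
        if tag == "high_yield":
            results[test.__name__] = test()
    return results

results = run_regression(suite)
```

In practice, a testing framework's own test-selection mechanism (tags, markers, or name filters) would play the role of the `tag` field here.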
6.2 Test Planning Successful testing requires appropriate planning. Given that test planning requires a substantial amount of effort and a considerable length of time, the actual planning must begin well before the artefacts to be tested are ready for testing.
Test planning covers four key activities:
• The creation of a test strategy that will guide all testing activities.
• The creation of test plans for the different stages of testing.
• The setting up of the test environment so that the test plan can be carried out.
• The creation of test scripts for automated testing.
These are separately discussed below.
6.2.1 Test Strategy The test strategy provides an overall framework for all testing activities in a project (or group of related projects). This may sound obvious and unnecessary but, in practice, it can have a significant impact on the way testing is carried out.
The primary objectives of a test strategy are to:
• Ensure a consistent approach to testing at all stages.
• Spell out the things that are crucial to the project as far as testing is concerned (e.g., iteration speed, robustness, completeness).
• Provide guidelines on the relevant testing techniques and tools to be used.
• Provide guidelines on test completion criteria (i.e., define what is ‘good enough testing’), and the expected amount of effort that should go into testing.
• Provide a basis for reusing test cases and identifying areas that can benefit from automation.
• Establish standards, templates, and the deliverables that need to be used/produced during testing.
6.2.2 Test Plan A test plan documents the test cases to be performed and the associated instructions for performing them. Two levels of test plan are often used:
• A master test plan is used to identify the high-level objectives and test focus areas.
• A detailed test plan is used to document the test cases produced as a result of analysing the test focus areas identified in the master test plan.
6.2.3 Test Environment The computing environment to be used for conducting the tests needs to be planned, so that it is ready for use when testing commences. Issues to be considered include:
• Construction of test harnesses. A test harness is a software tool that can be used to invoke the software being tested and feed test data to it. A test harness is necessary when the software being tested cannot be executed on its own (e.g., a component). Test harnesses are particularly valuable during the earlier stages of testing (e.g., unit testing).
• Setting up of test boxes. Later stages of testing (e.g., system testing) require the use of ‘clean’ test machines that are set up specifically for the purpose of testing. Unless the test machine is ‘clean’, when an error occurs, it is difficult to determine whether it is due to the effect of existing software and historical state of the machine or it is genuinely caused by the application being tested. This level of isolation is essential in order to have any faith in the test outcome.
• Creation of test databases. Most applications use a database of some form. The database schemas need to have been set up and the database populated with appropriate test data so that the tests can be carried out.
• Setting up of security access and accounts. Access to most applications is subject to security rights and having appropriate user accounts. Additional accounts may also be needed to access backend systems, databases, proxy servers, etc. These accounts and access rights need to be properly set up to enable the tests to be carried out without unnecessary obstacles.
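A test harness of the kind described in the first bullet above can be sketched as follows. The `InterestCalculator` component and its interface are hypothetical stand-ins for a component that cannot be executed on its own:

```python
# Hedged sketch of a test harness: a small driver that invokes a
# component, feeds it test data, and records any failures. The
# component below is a hypothetical stand-in.
class InterestCalculator:
    """Stand-in for the component under test."""
    def monthly_interest(self, balance, annual_rate):
        return balance * annual_rate / 12

class TestHarness:
    def __init__(self, component):
        self.component = component
        self.failures = []

    def run_case(self, balance, rate, expected):
        """Invoke the component and compare against the expected result."""
        actual = self.component.monthly_interest(balance, rate)
        if abs(actual - expected) > 1e-9:
            self.failures.append((balance, rate, expected, actual))

harness = TestHarness(InterestCalculator())
harness.run_case(1200.0, 0.06, 6.0)   # 1200 * 0.06 / 12 = 6.0
harness.run_case(0.0, 0.06, 0.0)      # zero balance accrues nothing
```

The harness, not the component, owns the test data and the pass/fail bookkeeping, which is what makes it reusable across unit tests.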
6.2.4 Automated Testing Given the extensive effort that usually goes into testing (especially regression testing), it often makes economic sense to use automated testing tools to cut down the effort. Most such tools have a built-in scripting language, which test designers can use to create test scripts. The test tool uses a test script to invoke the application being tested, supply it with specific test data, and compare the outcome of the test against expected results. Once set up with appropriate scripts, the test tool can rapidly perform the test cases (often with no human involvement) and record their success/failure outcomes.
With automated testing, the bulk of the effort goes into the creation and debugging of the test scripts.
This can require substantial development effort and must be planned in advance.
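The data-driven style that such tools implement can be sketched in ordinary code. The `apply_discount` function and the test data below are hypothetical; a real tool would drive the application externally rather than call a function:

```python
# Hedged sketch of a data-driven test script: each entry supplies
# an input and an expected output; the 'tool' runs every case and
# records a PASS/FAIL log with no human intervention.
def apply_discount(amount):
    """Stand-in for the application behaviour under test."""
    return amount * 0.9 if amount >= 100 else amount

# Each script entry: (test id, input, expected output).
script = [
    ("small-order", 50, 50),
    ("threshold", 100, 90.0),
    ("large-order", 200, 180.0),
]

log = []
for test_id, given, expected in script:
    outcome = "PASS" if apply_discount(given) == expected else "FAIL"
    log.append((test_id, outcome))
```

Because the test data lives outside the test logic, extending the suite means adding rows to the script, not writing new code, which is where the economic benefit of automation comes from.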
6.3 System Testing As stated earlier, system testing is the most labour-intensive testing stage and of direct relevance to acceptance testing. It largely involves black box testing, and is always performed with respect to the requirements baseline (i.e., it tests the application’s implementation of the requirements). There are many different types of tests that need to be planned and performed to test every aspect of the application, as summarised by the following table.
Each of these system tests is described separately below.
6.3.1 Function Testing The purpose of function testing is to identify defects in the ‘business functions’ that the application provides, as specified by the business/application model. If this model is specified as business processes (as recommended earlier in this handbook), then the test cases are built around these business processes. If it is specified as use-cases, then the test cases are built around the use-cases.
6.3.2 Exception Testing An exception refers to a situation outside the normal operation of an application (as represented, for example, by invalid input data or an incorrect sequence of operations). For example, a mortgage application may require that before a mortgage account can be created, the mortgagee record must have been created. Therefore, an attempt to create a mortgage account when the mortgagee is unknown to the system is an example of an exception.
Exception testing is, in a way, the opposite of function testing – it tests for dysfunctional behaviour. Even if an application successfully passes function testing, it is not necessarily fit for business: it may still allow dysfunctional operations to be performed which, from a business point of view, can lead to liability or financial loss. The purpose of exception testing is to identify exception situations that are not satisfactorily handled by the application.
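The mortgage example above can be sketched as an exception test. The `AccountSystem` class and its interface are hypothetical, chosen only to make the test self-contained:

```python
# Hedged sketch of an exception test: verify that a dysfunctional
# operation is rejected rather than silently performed. The
# AccountSystem class below is a hypothetical stand-in.
class UnknownMortgageeError(Exception):
    pass

class AccountSystem:
    def __init__(self):
        self.mortgagees = set()

    def add_mortgagee(self, name):
        self.mortgagees.add(name)

    def create_mortgage_account(self, mortgagee):
        # Business rule: the mortgagee record must already exist.
        if mortgagee not in self.mortgagees:
            raise UnknownMortgageeError(mortgagee)
        return f"account for {mortgagee}"

system = AccountSystem()
system.add_mortgagee("J. Smith")

# Function test: the normal path succeeds.
assert system.create_mortgage_account("J. Smith")

# Exception test: an account for an unknown mortgagee must be
# rejected with an error, not silently created.
try:
    system.create_mortgage_account("Unknown Person")
    rejected = False
except UnknownMortgageeError:
    rejected = True
assert rejected
```

The function test and the exception test together show the contrast drawn above: the first exercises normal operation, the second checks that dysfunctional operation is refused.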