Sharam Hekmat, PragSoft Corporation
Copyright © 2005 PragSoft
6.3.3 Stress Testing
Stress testing involves observing the behaviour of the application under ‘stress conditions’. Exactly what these conditions are depends on the nature of the application. For example, in a web-based financial application, a very large number of transactions and a very large number of end-users would represent stress conditions. ‘Large’ in this context should be interpreted as ‘equal to, greater than, and much greater than’ what has been specified in the non-functional requirements. If the requirements call for the support of up to 500 concurrent users, then we should, for example, test for 500, 600, 1000, etc.
The purpose of stress testing is to identify the stress conditions that cause the application to break. If these conditions are within the expected operational range of the application, then we conclude that the application has failed the stress tests.
Stress conditions can and do arise when the application goes live. It is therefore important to know what the stress limits of the application are, so that contingency plans can be made.
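The approach described above can be sketched as a simple load-ramping harness. This is a minimal illustration, not a real load driver: the `application_handles` function and the simulated break point are hypothetical stand-ins, and the 500-user figure is the example requirement from the text.

```python
# Hypothetical stress-test driver: apply increasing load levels and find
# the first level at which the application breaks.
SPECIFIED_MAX_USERS = 500  # from the example non-functional requirement

def application_handles(concurrent_users):
    """Stand-in for running the application under a given load and
    reporting whether it behaved acceptably. A real harness would drive
    actual clients; here we simulate a break point at 1200 users."""
    SIMULATED_BREAK_POINT = 1200
    return concurrent_users < SIMULATED_BREAK_POINT

def find_break_point(levels):
    """Return the first load level that fails, or None if all pass."""
    for level in levels:
        if not application_handles(level):
            return level
    return None

# Test at, above, and well above the specified limit, as the text suggests.
break_point = find_break_point([500, 600, 1000, 1500, 2000])

# The application fails the stress test only if it breaks within its
# expected operational range.
passed = break_point is None or break_point > SPECIFIED_MAX_USERS
```

Under this simulation the application breaks at 1500 users, which is outside the specified range of 500, so the stress test is deemed passed.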
6.3.4 Volume Testing
Volume testing involves observing the behaviour of the application when it is subjected to very large volumes of data. For example, if the application is a banking system with an underlying database for storing account records, volume testing will attempt to populate this database to its maximum specified capacity and beyond. If the requirement is for the system to store up to a million accounts, then we may try to populate the database with 1 million, 2 million, and 5 million records.
The purpose of volume testing is to identify the volume levels beyond which the application will be unable to operate properly (e.g., physical storage limit or acceptable performance level).
As with stress testing, unanticipated volume levels can occur when the application goes live, and it is therefore important to know the volume limits for contingency reasons.
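A volume test can be sketched as follows. This is an illustrative sketch only: it uses an in-memory SQLite database as a stand-in for the application's real data store, and the record counts are scaled down from the million-account example in the text.

```python
import sqlite3

# Hypothetical volume test: populate the store to its specified capacity
# and beyond, probing at each level that basic operations still work.
SPECIFIED_CAPACITY = 1000  # stands in for the 1,000,000 of the example

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

def load_accounts(n):
    conn.executemany("INSERT INTO accounts (balance) VALUES (?)",
                     [(0.0,) for _ in range(n)])
    conn.commit()

results = {}
for target in (SPECIFIED_CAPACITY, 2 * SPECIFIED_CAPACITY, 5 * SPECIFIED_CAPACITY):
    current = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
    load_accounts(target - current)
    # Probe: can a record still be retrieved at this volume?
    row = conn.execute("SELECT id FROM accounts WHERE id = ?", (target,)).fetchone()
    results[target] = row is not None
```

A real volume test would also record retrieval times at each level, so that the point where performance becomes unacceptable can be identified.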
6.3.5 Scalability Testing
Most modern business applications are multi-user, distributed, client-server systems that serve a large user base. The ‘growth’ (i.e., increased use) of an application in a business is often difficult to predict. Some applications originally designed for a handful of users end up being used by hundreds or thousands of users. The ability of an application to serve a growing user base (and growing transaction volume) should therefore be an important consideration.
The degree to which the use of an application can grow without making any design changes is called scalability. A scalable architecture can ensure that the application can grow by simply adding more hardware resources to distribute the application across more and more boxes.
The purpose of scalability testing is to identify the boundaries beyond which the application will not be able to grow. Scalability testing is environmentally complex because it involves making changes to the distribution model of the application for its test cases.
There is an obvious interplay between scalability and stress/volume testing, and this needs to be taken into account when test planning.
6.3.6 Availability Testing
Each application has certain availability requirements, determined by the business environment within which it runs. Availability is often expressed as a percentage over a given duration (e.g., 98% availability for the month of July).
The purpose of availability testing is to determine if the application availability falls below the minimum acceptable level. This is measured by running the application over a long duration (e.g., a week, a month, or until it falls over) while it is subjected to a realistic load.
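The measurement itself is a simple calculation. The sketch below uses the 98%-for-July figure from the text; the 12 hours of observed downtime is a hypothetical measurement.

```python
# Computing measured availability from downtime records and comparing it
# against the minimum acceptable level.
MINIMUM_AVAILABILITY = 0.98   # the example requirement from the text
HOURS_IN_MONTH = 31 * 24      # July: 744 hours

def availability(total_hours, downtime_hours):
    """Fraction of the observation period during which the application
    was up."""
    return (total_hours - downtime_hours) / total_hours

# Suppose the application was down for a total of 12 hours in the month.
measured = availability(HOURS_IN_MONTH, 12)
passed = measured >= MINIMUM_AVAILABILITY
```

With 12 hours of downtime in 744, measured availability is about 98.4%, which clears the 98% threshold.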
6.3.7 Usability Testing
The purpose of usability testing is to identify any features or obstacles in the design of the application (mainly its user interface) that make the application difficult to use. Although usability is a subjective notion, there are meaningful measures that can be employed to establish the relative usability of an application. For example, given a certain level of initial training, users can be observed with the purpose of recording measures such as:
• The average length of time needed to complete a business process/activity/action.
• The number of errors made during a business process/activity/action.
• The amount of time spent on rework due to errors.
• Given a certain task, the length of time it takes a user to find out how to use the application to do it.
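The measures listed above can be aggregated from raw observation records along these lines. The record format and figures are hypothetical, purely for illustration.

```python
from statistics import mean

# Hypothetical observation records from a usability session:
# (task, completion_minutes, errors, rework_minutes)
observations = [
    ("open account", 4.0, 1, 0.5),
    ("open account", 6.0, 2, 1.5),
    ("close account", 3.0, 0, 0.0),
]

# Aggregate the measures described in the text.
avg_completion = mean(t for _, t, _, _ in observations)   # average task time
total_errors = sum(e for _, _, e, _ in observations)      # errors made
total_rework = sum(r for _, _, _, r in observations)      # time lost to rework
```

Comparing these aggregates across design iterations (or against a competing product) is what makes the otherwise subjective notion of usability measurable.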
6.3.8 Documentation Testing
The purpose of documentation testing is to establish the relevance, usefulness, readability, and accuracy of end-user documentation for the application. Testing is performed using only the supplied documentation to operate the application: for each test case, the tester refers to the documentation to find the instructions for performing it. In other words, the testing process simulates the actions of an untrained user who has to use the application on the basis of the information provided.
Documentation testing will detect defects in the user documentation, such as: gaps (no explanation of how to do a certain task), factual errors, ambiguity, technical jargon, and out of date information.
6.3.9 Installation Testing
The purpose of installation testing is to detect defects in the installation process for the application.
Modern applications are supplied with installation tools/scripts that automate the installation process.
Installation test cases involve attempting to install the application (in a clean environment) using the installation package provided (i.e., installation scripts, documentation, and release notes).
6.3.10 Migration Testing
Most modern applications are replacements for legacy systems. For business continuity, the legacy data often needs to be preserved and migrated to the new application. This is usually a complex and error-prone process.
The purpose of migration testing is to identify defects in the data migration process. Migration test cases involve attempts to migrate the legacy data to a clean installation of the application.
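A typical migration test case transforms legacy records into the new format and then verifies that nothing was lost or corrupted. In the sketch below, the legacy and target field names are hypothetical.

```python
# Hypothetical legacy records (fixed-format strings, as often found in
# legacy systems).
legacy_records = [
    {"ACCT_NO": "001", "NAME": "SMITH, J", "BAL": "1050"},
    {"ACCT_NO": "002", "NAME": "JONES, A", "BAL": "-20"},
]

def migrate(record):
    """Transform one legacy record into the new application's format."""
    return {
        "account_id": int(record["ACCT_NO"]),
        "holder": record["NAME"].title(),
        "balance_cents": int(record["BAL"]) * 100,
    }

migrated = [migrate(r) for r in legacy_records]

# Typical migration test checks: record counts match, and key fields
# survive the transformation intact.
assert len(migrated) == len(legacy_records)
assert migrated[0]["account_id"] == 1
assert migrated[1]["balance_cents"] == -2000
```

In practice such checks would be run against the full legacy data set, with reconciliation reports for any records that fail to migrate.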
6.3.11 Coexistence Testing
In a live environment, most applications run in conjunction with other applications. The system test environment, however, is usually isolated and not as complex as the live environment. Potential interplay between applications (that compete for resources and interact with each other) cannot be ruled out in a live environment. The purpose of coexistence testing is to establish whether the application can successfully coexist with other applications.
Coexistence testing is usually carried out first in a pseudo live environment, and then in the live environment itself, but with restricted access.
6.4 Test Case Design
The design of test cases is, by far, the most important part of testing, since it is the quality of the test cases that determines the overall effectiveness of testing.
The layered architecture reference model (see Section 2.3.2) provides a sound basis for organising the design of test cases. Using this model, each layer is considered separately in order to design test cases that cover that layer. This divides the test case design effort into four categories: presentation oriented, workflow oriented, business object oriented, and data oriented (Sections 6.4.1 to 6.4.4).
This logical separation ensures that every important architectural consideration is fully tested. It does not, however, provide complete coverage for all system test types. Additional test cases need to be created for these, especially those that involve non-functional requirements.
All four categories use the same source material for designing the test cases, which consists of the following:
• Business model. This is probably the most important source, as it describes the business activities that the application supports.
• Application model. This is also important in that it provides a picture of what the application is supposed to do, and can compensate for gaps in the user documentation.
• System model. This is important as a formal and detailed technical source, and is especially useful in relation to non-functional requirements.
• Non-functional requirements. This covers the important constraints that the application should satisfy (e.g., performance requirements).
• User documentation concept. It is unlikely that the user documentation will be ready by the time system testing commences. However, it is reasonable to expect that concept documents will have been produced by then, providing a terse version of the intended documentation.
We will look at each category in turn, showing how the source material relates to each major test focus area, and to which system test type the resulting test cases belong.
6.4.1 Presentation Oriented Test Case Design
These tests are concerned with all those aspects of the application that are manifested through the user interface. Major test focus areas are:
• Presentation layout, which uses the actual design of the user interface to design test cases that assess the ease of comprehension of the presentation.
• Input data validation, which uses the specification of the validation rules for input data to design test cases that assess the correctness of the implementation of these rules.
• Interaction dynamics, which uses the rules for dynamic feedback to the user (e.g., enabling/disabling of GUI elements) to design test cases that assess the correctness of the implementation of these rules.
• Navigation, which uses the specification of navigation paths from one window to another to design test cases that assess the correctness of the implementation of these paths.
• Productivity, which considers things that can affect user productivity (e.g., response time, ease of use, rework frequency) to design test cases that can identify barriers to user productivity.
• Documentation, which uses the user documentation concept to design test cases that can identify potential problems in the user interface documentation.
The list above summarises the major test focus areas for presentation oriented test case design. For each focus area, the source materials to be used for test case design, and the system test types under which the test cases are to be documented, should be identified.
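The input data validation focus area in particular lends itself to systematic test case derivation from the validation rules. The sketch below applies boundary-value analysis to a hypothetical rule (an amount field accepting 0.01 to 10000.00); both the rule and its limits are assumptions for illustration.

```python
# Hypothetical validation rule for an amount input field.
def valid_amount(value):
    return 0.01 <= value <= 10000.00

# Boundary-value test cases: just below, on, and just above each boundary,
# paired with the outcome the specification requires.
cases = {
    0.00: False,     # below lower bound -> must be rejected
    0.01: True,      # on lower bound -> must be accepted
    10000.00: True,  # on upper bound -> must be accepted
    10000.01: False, # above upper bound -> must be rejected
}

# Each test case passes if the implementation agrees with the rule.
results = {v: valid_amount(v) == expected for v, expected in cases.items()}
```

Boundary values are where validation implementations most often go wrong (off-by-one comparisons, wrong inclusive/exclusive bounds), which is why each boundary gets cases on both sides of it.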
6.4.2 Workflow Oriented Test Case Design
These tests are concerned with the instantiation and execution of business processes. Major test focus areas are:
• Workflow Logic, which uses process/activity maps to design test cases that exercise the various paths through the process.
• Workflow Data, which involves test cases that will handle the specific data items (e.g., documents) created/manipulated by the process.
• Security, which involves test cases that ensure that those aspects of a business process that have restricted access are only available to users with the relevant security rights.
• Workflow Validation, which involves test cases that attempt to invoke exception situations for the process to see how they get handled.
• Documentation, which involves test cases that attempt to perform a process, based on its documentation.
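The workflow logic and workflow validation focus areas can be sketched by modelling the process map as a simple state machine and writing one test case per path. The "loan approval" style process, its states, and its actions below are all hypothetical.

```python
# Hypothetical process map as (state, action) -> next-state transitions.
transitions = {
    ("submitted", "review"): "under_review",
    ("under_review", "approve"): "approved",
    ("under_review", "reject"): "rejected",
}

def run_path(actions, state="submitted"):
    """Execute a sequence of actions through the process map."""
    for action in actions:
        key = (state, action)
        if key not in transitions:
            raise ValueError(f"illegal action {action!r} in state {state!r}")
        state = transitions[key]
    return state

# Workflow logic: one test case per path through the process.
assert run_path(["review", "approve"]) == "approved"
assert run_path(["review", "reject"]) == "rejected"

# Workflow validation: an out-of-sequence action must be rejected.
try:
    run_path(["approve"])  # approving before review is not a legal path
    illegal_handled = False
except ValueError:
    illegal_handled = True
```

Enumerating paths from the process map in this way gives a measurable notion of workflow coverage: every arc in the map should appear in at least one test case.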
6.4.3 Business Object Oriented Test Case Design
These tests are concerned with the instantiation and manipulation of business objects. Major test focus areas are:
• Object Behaviour, which uses the business object models to design test cases that exercise the methods of each business object.
• Object Validation, which involves test cases that attempt to create invalid business objects.
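Both focus areas can be illustrated with a small business object. The `Account` class below, its invariants, and its figures are hypothetical; the point is the shape of the test cases, not the class itself.

```python
# Hypothetical business object with invariants enforced on construction.
class Account:
    def __init__(self, number, balance):
        if not number:
            raise ValueError("account number is required")
        if balance < 0:
            raise ValueError("opening balance cannot be negative")
        self.number, self.balance = number, balance

    def withdraw(self, amount):
        """Object behaviour under test."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Object behaviour: exercise the methods of the object.
acct = Account("001", 100.0)
acct.withdraw(40.0)

# Object validation: attempts to create invalid objects must be rejected.
rejected = 0
for args in (("", 10.0), ("002", -5.0)):
    try:
        Account(*args)
    except ValueError:
        rejected += 1
```

Behaviour tests come from the methods in the business object model; validation tests come from the model's invariants and constraints.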
6.4.4 Data Oriented Test Case Design
These tests are concerned with the storage and retrieval of persistent objects. Major test focus areas are:
• Object Persistence, which involves test cases that verify the correct persistence of entity objects.
• Persistence Efficiency, which involves test cases that measure the time required to store/retrieve persistent objects.
• Storage Capacity, which involves test cases that attempt to exercise the storage limits of the application.
• Backup & Recovery, which involves test cases for backing up the application database and then restoring it from the backup image.
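A backup-and-recovery test case can be sketched as a round trip: back up the database, simulate data loss, restore from the backup image, and verify the data. The sketch uses in-memory SQLite databases and Python's `sqlite3` backup API purely for illustration; a real test would target the application's actual database and backup tooling.

```python
import sqlite3

# Set up a "live" database with some data.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
source.execute("INSERT INTO accounts VALUES (1, 250.0)")
source.commit()

# Take the backup image.
backup = sqlite3.connect(":memory:")
source.backup(backup)

# Simulate data loss in the live database.
source.execute("DELETE FROM accounts")
source.commit()

# Restore from the backup image into a fresh database.
restored = sqlite3.connect(":memory:")
backup.backup(restored)

# Verify the restored data matches what was backed up.
row = restored.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
recovered_ok = row is not None and row[0] == 250.0
```

The essential property being tested is the round trip: whatever was in the database at backup time must be retrievable, unchanged, after a restore.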