Sharam Hekmat, PragSoft Corporation
5.3.3 Boundary Objects

The interface between the middle tier and the other tiers may involve boundary objects. For example, where the middle tier is implemented as a CORBA server, the CORBA interface exposed by the middle tier consists of boundary objects. Such boundary objects, however, are essentially wrappers and provide no further functionality. They simply adapt a tier's interface to a format agreed with another tier.
5.3.4 Long Transactions

The implementation of transactions as control objects was discussed earlier. These transactions are known as short transactions, i.e., they correspond to a task performed by a client at a specific time (e.g., transferring funds from one account to another).
There is another class of transactions, known as long transactions, which span beyond one task. A long transaction consists of a number of tasks, performed by potentially different users, and at different points in time. For example, in a process-centric system, an end-to-end process (e.g., home loan application) may be regarded as a long transaction.
Long transactions are generally very complex and pose a number of challenges:
• Two or more long transactions can overlap by operating on the same entity objects, with no guarantee that all will commit.
• Because a long transaction is performed in a piecemeal fashion, it has to cope with possibly modified entity objects between its successive steps. Also, when the transaction is about to commit, it needs to verify that interim modifications to the entity objects have not invalidated earlier steps.
• Rolling back a long transaction may be a non-trivial task, because other transactions may now be relying on the modifications made by the transaction to entity objects.
• Unlike a short transaction, a long transaction needs to be implemented as a persistent object.
The transaction may take hours, days, or even months to complete, during which time the users participating in the transaction may log in and out a number of times, and the system may be restarted.
These complexities go beyond the transaction management capabilities of virtually all middleware products. Also, effective management of a long transaction often requires access to relevant business rules, which reside well beyond the middleware domain. As a result, when long transactions are used, the middle tier needs to implement its own long transaction management facility. This facility needs to implement the following:
• Transaction persistence (i.e., realisation as an entity object).
• Versioning of entity objects. The entity objects modified by a long transaction need to be versioned to avoid the problem of overlapping transactions modifying the same entity object in inconsistent ways. Versioning ensures that the changes made by overlapping transactions are kept isolated from one another.
• Version reconciliation. If a transaction is performed on the basis of an old version of an entity object, when committing, the transaction needs to reconcile itself against the latest version of the entity object.
An unusual aspect of long transactions is that occasionally there may not be enough information to reconcile entity object versions at commit time. This will necessitate asking the user to make the decision.
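The version-checked commit described above can be sketched as follows. This is a minimal illustration, not a reference to any particular middleware product; all class and attribute names (Entity, LongTransaction, and so on) are hypothetical. Each entity carries a version number, each long transaction records the version it read, and at commit time a version mismatch is escalated rather than silently overwritten.

```python
# Sketch of version-checked commit for a long transaction.
# All names here are illustrative, not from any real product.

class StaleVersionError(Exception):
    """Raised when reconciliation requires a human decision."""

class Entity:
    def __init__(self, key, data):
        self.key = key
        self.data = dict(data)
        self.version = 1          # bumped on every committed change

class LongTransaction:
    def __init__(self, store):
        self.store = store        # key -> Entity (the persistent store)
        self.read_versions = {}   # version seen when each entity was read
        self.pending = {}         # key -> modified data, applied at commit

    def read(self, key):
        entity = self.store[key]
        self.read_versions[key] = entity.version
        return dict(entity.data)

    def write(self, key, data):
        self.pending[key] = data

    def commit(self):
        # Verify that interim modifications by other transactions have
        # not invalidated the earlier steps of this one.
        for key in self.pending:
            if self.store[key].version != self.read_versions[key]:
                raise StaleVersionError(key)  # escalate: ask the user
        for key, data in self.pending.items():
            entity = self.store[key]
            entity.data = data
            entity.version += 1
```

If two overlapping transactions read the same entity, the first to commit bumps the version, and the second fails its version check at commit time, at which point reconciliation (possibly by the user) takes over.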
5.4 Back-End Models

The back-end tier of a three-tier client-server system is conceptually quite simple: it provides the persistent storage for the entity objects of the middle tier. This tier provides two things:
• A data model for the storage of the objects.
• Adapter objects for accessing and updating the data.
5.4.1 Data Models

For a bespoke system, a new data model needs to be synthesised. The input to this process is the object model created during application modelling and later enriched by system modelling. In most cases, this is achieved by:
• Mapping each entity object to a table.
• Identifying appropriate keys for each table, based on the access paths required by object methods.
• Modelling the relationships between objects either using keys or additional tables. There are three possible cases:
• A one-to-one relationship can be modelled using a key in either or both tables. For example, there is a one-to-one relationship between TaxPayer and TaxAccount, and this can be represented by having a TaxPayer key in the TaxAccount table, and/or vice versa.
• A one-to-many relationship can be modelled using a key in the ‘many’ table. For example, a one-to-many relationship between TaxPayer and TaxReturn can be represented by having a TaxPayer key in the TaxReturn table.
• A many-to-many relationship can be modelled using an additional table, which combines the keys for both tables. For example, a many-to-many relationship between Customer and Account can be represented by a CustAccRel table that has attributes for recording the keys of both tables.
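The three relationship mappings above can be sketched as table definitions. The sketch below uses SQLite purely for illustration; the table names are taken from the examples in the text, while the column names are invented.

```python
import sqlite3

# Sketch of the three relationship mappings, using SQLite DDL.
# Table names follow the text's examples; column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One-to-one: TaxAccount carries a TaxPayer key (made UNIQUE).
    CREATE TABLE TaxPayer  (payer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE TaxAccount(account_id INTEGER PRIMARY KEY,
                            payer_id   INTEGER UNIQUE
                                       REFERENCES TaxPayer(payer_id));

    -- One-to-many: the 'many' table (TaxReturn) carries the TaxPayer key.
    CREATE TABLE TaxReturn (return_id INTEGER PRIMARY KEY,
                            year      INTEGER,
                            payer_id  INTEGER REFERENCES TaxPayer(payer_id));

    -- Many-to-many: CustAccRel combines the keys of both tables.
    CREATE TABLE Customer  (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Account   (acc_id  INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE CustAccRel(cust_id INTEGER REFERENCES Customer(cust_id),
                            acc_id  INTEGER REFERENCES Account(acc_id),
                            PRIMARY KEY (cust_id, acc_id));
""")
```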
Any non-trivial system, however, poses further data modelling challenges that need to be addressed. One of these involves the issue of inheritance and how to model it at a data level. There are no strict rules for handling inheritance, but the following two guidelines cover almost all cases:
• Where an inheriting object adds very few attributes to the inherited object, use the same table to represent both. Obviously the table needs to include the attributes of both objects, and where an object does not use a certain attribute, that attribute is simply ignored. For example, specialisations of an Account object (e.g., LoanAccount, SavingAccount, ChequeAccount) are likely to add very few additional attributes. In this case, it makes sense to have one Account table to represent all account types. An additional attribute in the table can be used to denote the account type.
• Where the inheriting object adds many more attributes to the inherited object, use a separate table for each, and include a key in the 'inheriting' table to refer to the 'inherited' object's table.
For example, a ContactPoint object may have specialisations such as PhysicalAddress, TelecomAddress, and WebAddress. These are fairly disjoint, so it makes sense to have a table for each.
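The two inheritance guidelines can be sketched as follows, again using SQLite for illustration. The table names follow the text's Account and ContactPoint examples; the column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Guideline 1: one table for all Account specialisations, with a
    -- discriminator column; attributes unused by a given type stay NULL.
    CREATE TABLE Account (
        acc_id        INTEGER PRIMARY KEY,
        account_type  TEXT,     -- e.g. 'LOAN', 'SAVING', or 'CHEQUE'
        balance       REAL,
        loan_term     INTEGER,  -- used by LoanAccount only
        interest_rate REAL      -- used by SavingAccount only
    );

    -- Guideline 2: ContactPoint specialisations are fairly disjoint,
    -- so each gets its own table, keyed back to the inherited table.
    CREATE TABLE ContactPoint   (cp_id INTEGER PRIMARY KEY, owner TEXT);
    CREATE TABLE PhysicalAddress(cp_id INTEGER PRIMARY KEY
                                       REFERENCES ContactPoint(cp_id),
                                 street TEXT, city TEXT);
    CREATE TABLE TelecomAddress (cp_id INTEGER PRIMARY KEY
                                       REFERENCES ContactPoint(cp_id),
                                 number TEXT);
    CREATE TABLE WebAddress     (cp_id INTEGER PRIMARY KEY
                                       REFERENCES ContactPoint(cp_id),
                                 url TEXT);
""")
```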
The degree to which the data model needs to be normalised is a database design issue and should be determined by the DBA. One of the advantages of OO modelling is that it tends to result in data models that are highly normalised.
The ER data model synthesised from the object model needs to be kept in sync with it. Processes need to be put in place to ensure that developers work off a consistent set of object and data models. Experience has shown that this is an area where most projects encounter avoidable problems.
Where legacy systems are involved (as is the case in most client-server projects), additional constraints are imposed. If all the data for the system is to be sourced from legacy systems, then this rules out the possibility of developing a brand new and clean data model. Instead one has to adapt the data provided to serve the needs of the middle-tier (and vice versa).
These constraints should be absorbed underneath the business object layer and should never be exposed beyond it. Any higher-level component that uses the business objects should not have to (or be allowed to) assume any knowledge about the underlying data model. This decoupling minimises the impact of a data model change on the rest of the system.
5.4.2 Data Access Objects

Depending on the data model, there may or may not be a need for an additional layer to manage access to data. For example, in a bespoke system with a clean, new relational data model, the business objects may access this data through an open interface such as ODBC or JDBC. No additional processing is required.
However, where legacy systems are involved, there may be a need to perform additional processing to adapt the legacy data to the format required by the business objects. For example, a business object layer that talks XML is incompatible with a data layer consisting of CICS/COBOL legacy systems. This can be overcome by developing adapter objects that map the data between the format used by the business objects and the format required by the legacy systems. These objects may also perform additional housekeeping as relevant to the legacy systems involved.
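Such an adapter object can be sketched as follows. The fixed-width record layout is entirely hypothetical (legacy COBOL copybooks vary widely); the point is only that the mapping in both directions is confined to one adapter class, so nothing above the business object layer sees the legacy format.

```python
# Hypothetical adapter object: maps a fixed-width record, as a
# CICS/COBOL legacy system might supply, to the dictionary format a
# business object expects, and back. Field positions are invented.

LAYOUT = [               # (field name, start offset, length)
    ("cust_id", 0, 6),
    ("name",    6, 20),
    ("balance", 26, 9),  # stored as zero-padded cents
]

class LegacyAccountAdapter:
    def from_legacy(self, record: str) -> dict:
        # Slice each field out of the fixed-width record.
        fields = {name: record[start:start + length].strip()
                  for name, start, length in LAYOUT}
        fields["balance"] = int(fields["balance"]) / 100.0
        return fields

    def to_legacy(self, obj: dict) -> str:
        # Re-pack the business object into the legacy record format.
        return ("%-6s" % obj["cust_id"]
                + "%-20s" % obj["name"]
                + "%09d" % round(obj["balance"] * 100))
```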
The problem with the first view is that it promotes the wrong psychology – it encourages the testers to come up with test cases that are likely to run successfully, rather than ones that break the software.
The second view is based on the premise that any non-trivial application will always contain defects. The true value of testing is to detect as many defects as is economically feasible (and then to fix them) in order to increase confidence in the reliability of the application.
6.1 Introduction

Testing should take place throughout the development lifecycle, so that defects are detected and fixed at the earliest opportunity. Most artefacts produced during the development lifecycle can be tested if they are expressed in appropriate notations. For example, we can test a business process by passing hypothetical cases through it to check whether there are any gaps in its flow or logic.
Extensive testing, however, cannot be undertaken until executable code has been produced. Because code is the ultimate artefact of software development, it should be subject to more testing than other artefacts.
6.1.1 Testing Process

The underlying process for all forms of software testing is the same, and should adhere to the following principles.
Before testing can begin, a test plan must be produced. The test plan defines a set of test cases, the completion criteria for the tests, and the environment required for performing the tests.
Each test case in a test plan consists of two things: test data and expected result. When a test case is performed, the software is exercised using the test data and the actual result is compared against the expected result. A match or discrepancy is then recorded in the test log. The test log is a record of the test cases performed and their outcome.
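The process just described can be sketched as follows: each test case pairs test data with an expected result, the software is exercised, and the match or discrepancy is recorded in the test log. The artefact under test here is a trivial stand-in function, invented for the illustration.

```python
# Sketch of the testing process: test plan -> exercise -> test log.

def classify_balance(balance):
    """Artefact under test (a hypothetical stand-in)."""
    return "overdrawn" if balance < 0 else "in credit"

# Each test case consists of two things: test data and expected result.
test_plan = [
    (-50, "overdrawn"),
    (0,   "in credit"),
    (100, "in credit"),
]

# The test log records the test cases performed and their outcome.
test_log = []
for data, expected in test_plan:
    actual = classify_balance(data)
    test_log.append({
        "data": data,
        "expected": expected,
        "actual": actual,
        "outcome": "pass" if actual == expected else "fail",
    })
```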
A test plan must conform to a defined test strategy, which provides an overall framework for all the different forms of testing for an application.
6.1.2 Testing Approaches

There are two general approaches to testing: white box testing and black box testing.
In black box testing, the artefact being tested is treated as a black box that, given a set of inputs, produces some output. This means that no knowledge of the internal design of the artefact is assumed. Test cases are created based on the range of the input values that the artefact should accept (or reject) and their relationships to the expected output values.
In white box testing, by contrast, test cases are derived from knowledge of the internal design of the artefact, such as its code structure and decision paths. White box testing is therefore more applicable to lower-level artefacts, such as functions, classes, and components. Black box testing is better suited to higher-level artefacts, such as application modules, applications, and integrated systems. Given that white box and black box testing tend to expose different types of defects, it is generally recommended that a combination of the two be used in the development lifecycle in order to maximise the effectiveness of testing.
6.1.3 Testing Techniques

A number of different testing techniques have been devised for use at various stages of development. On its own, no one technique is sufficient to produce adequate test cases. Rather, the techniques serve as a toolbox that test designers can utilise to design effective test cases. The most widely recognised techniques are:
• Coverage testing. This is a white box technique that attempts to achieve a certain level of code coverage. Typical types of coverage considered are:
• Statement coverage requires that enough test cases be created to exercise every statement in code at least once.
• Branch coverage requires that all the alternatives of every decision branch in the code be exercised at least once.
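The difference between these two levels of coverage can be illustrated with a small example. The function below is a hypothetical stand-in; the point is that a single test case can exercise every statement while still leaving one branch of a decision untested.

```python
# Illustration: statement coverage versus branch coverage.

def apply_discount(price, is_member):
    """Hypothetical function under test."""
    if is_member:
        price = price * 0.9   # member discount
    return price

# One case with is_member=True executes every statement (100%
# statement coverage), but the False branch of the decision is
# never taken. Branch coverage additionally requires a case in
# which the condition is false.
statement_coverage_cases = [(100, True)]
branch_coverage_cases    = [(100, True), (100, False)]
```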