Test Oracles

To test any program, we need a description of its expected behavior and a method of determining whether the observed behavior conforms to it. For this we need a test oracle.

A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. Conceptually, we can consider testing as a process in which the test cases are given to the test oracle and to the program under test. The two outputs are then compared to determine whether the program behaved correctly for the test cases, as shown in Figure 1.

Test oracles are generally human beings, and humans make mistakes. So when there is a discrepancy between the oracle's result and the program's result, we must first verify the result produced by the oracle before declaring that there is a fault in the program. This is one reason testing is so cumbersome and expensive.

Human oracles generally use the specifications of the program to decide what the “correct” behavior of the program should be. To help the oracle determine the correct behavior, it is important that the behavior of the system be unambiguously specified and that the specification itself be error-free.
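To make the comparison step concrete, here is a minimal sketch in Python; it is our own illustration, not part of the original text. A hypothetical square-root routine (the program under test) is checked against an independent oracle, the standard library's math.sqrt, for a set of test cases, mirroring the process in Figure 1.

    import math

    def program_under_test(x):
        # Hypothetical routine being tested: square root via Newton's method.
        guess = x if x >= 1.0 else 1.0
        for _ in range(50):
            guess = (guess + x / guess) / 2.0
        return guess

    def oracle(x):
        # The oracle computes the expected result independently of the
        # program itself -- here by way of the standard library.
        return math.sqrt(x)

    def run_tests(test_cases, rel_tol=1e-9):
        # Feed each test case to both the program and the oracle,
        # then compare the two outputs.
        for x in test_cases:
            actual, expected = program_under_test(x), oracle(x)
            verdict = "PASS" if math.isclose(actual, expected, rel_tol=rel_tol) else "FAIL"
            print(f"{verdict}: input {x}: program {actual}, oracle {expected}")

    run_tests([1.0, 2.0, 9.0, 0.25, 1e6])

In practice the oracle is rarely this cheap; the point of the sketch is only the structure: an independent source of expected results, plus a comparison step.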


Software Testing Objectives

The objective of testing is to exercise the code in such a way that there is a high probability of discovering the errors it contains.

Testing also demonstrates that the software functions work according to the software requirements specification (SRS) with regard to functionality, features, facilities, and performance. It should be noted, however, that testing detects errors in the code as written; it will not flag a requirement that is stipulated in the SRS but was never coded into the program.

Testing objectives are: 
  • Testing is a process of executing a program with the intent of finding an error. 
  • A good test case is one that has a high probability of finding an as-yet-undiscovered error (see the example following this list). 
  • A successful test is one that uncovers an as-yet-undiscovered error.
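To illustrate the second objective, consider a deliberately buggy validation function; the function and its off-by-one bug are our own invention. A “typical” test case passes and teaches us nothing new, while a boundary-value test has a high probability of exposing the as-yet-undiscovered error.

    def is_valid_percentage(value):
        # Hypothetical buggy implementation: rejects the legal boundary value 100.
        return 0 <= value < 100

    # A "typical" test case passes and uncovers nothing:
    assert is_valid_percentage(50)

    # A boundary-value test is a good test case in the sense above --
    # and indeed it fails here, uncovering the bug:
    assert is_valid_percentage(100), "boundary value 100 should be accepted"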


Software Testing Principles

Many principles guide software testing. Before applying methods to design effective test cases, a software engineer must understand these basic principles. The main principles are the following:

1. All tests should be traceable to customer requirements, in order to uncover any defects that might cause the program or system to fail to meet the client’s requirements.

2. Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete, and detailed test cases can be defined as soon as the design model has been solidified.

3. The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them (see the sketch following this list).

4. Testing should begin “in the small” and progress toward testing “in the large.” The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.

5. Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.

6. To be most effective, testing should be conducted by an independent third party. The software engineer who has created the system is not the best person to conduct all tests for the software.
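As a rough illustration of principle 3, the suspect components can be identified directly from a defect log. The sketch below uses invented figures purely for illustration; with this hypothetical data, the top 20 percent of components account for 80 percent of the recorded defects.

    from collections import Counter

    # Hypothetical defect log: one entry per defect, naming the component
    # in which it was found.
    defect_log = (["parser"] * 45 + ["auth"] * 35 + ["ui"] * 5 + ["db"] * 4 +
                  ["cache"] * 3 + ["logging"] * 3 + ["config"] * 2 +
                  ["cli"] * 1 + ["export"] * 1 + ["docs"] * 1)

    counts = Counter(defect_log).most_common()   # most defective components first
    top_n = max(1, len(counts) // 5)             # the top 20% of components
    top_share = sum(n for _, n in counts[:top_n]) / len(defect_log)

    print(f"Top {top_n} of {len(counts)} components account "
          f"for {top_share:.0%} of defects")
    # With the data above: Top 2 of 10 components account for 80% of defects

The components surfacing at the head of this ranking are the natural candidates for the thorough testing the principle calls for.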


Factors Influencing Acceptance Testing

The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned so as to provide realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification, to which the system should conform.

As in any system, though, problems will arise, and it is important to have determined in advance the expected and required responses from the various parties concerned, including Users, the Project Team, Vendors, and possibly Consultants / Contractors.

In order to agree what such responses should be, the End Users and the Project Team need to develop and agree upon a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business / commercial impact, of a problem with the system found during testing. Here is an example that has been used successfully; '1' is the most severe and '6' has the least impact:

1. 'Show Stopper', i.e. it is impossible to continue with the testing because of the severity of this error / bug.

2. Critical Problem; testing can continue, but we cannot go into production (live) with this problem.

3. Major Problem; testing can continue, but going live with this feature will cause severe disruption to business processes in live operation.

4. Medium Problem; testing can continue, and the system is likely to go live with only minimal departure from agreed business processes.

5. Minor Problem; both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.

6. 'Cosmetic' Problem, e.g. colours, fonts, pitch size. However, if such features are key to the business requirements, they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problem at severity level 1 receive a priority response and that all testing cease until such level 1 problems are resolved.
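Such agreed severity levels and responses can also be encoded, so that test-management tooling applies them mechanically. The sketch below assumes the six-level scheme above; the names, and the choice that only levels 1 and 2 block go-live, are illustrative assumptions that would themselves form part of the agreed responses.

    from enum import IntEnum

    class Severity(IntEnum):
        # The six-level scheme described above; 1 is the most severe.
        SHOW_STOPPER = 1   # impossible to continue testing
        CRITICAL = 2       # testing continues, but go-live is blocked
        MAJOR = 3          # live operation would severely disrupt business processes
        MEDIUM = 4         # likely to go live with minimal departure from processes
        MINOR = 5          # should be corrected; little or no process impact
        COSMETIC = 6       # colours, fonts, pitch size (unless business-critical)

    def testing_may_continue(open_problems):
        # Agreed response in this sketch: any Show Stopper halts all testing.
        return Severity.SHOW_STOPPER not in open_problems

    def go_live_blocked(open_problems):
        # Assumption for illustration: levels 1 and 2 block production.
        return any(p <= Severity.CRITICAL for p in open_problems)

    open_problems = [Severity.MAJOR, Severity.COSMETIC]
    print(testing_may_continue(open_problems))   # True
    print(go_live_blocked(open_problems))        # False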

Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem to its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples be agreed in advance, to ensure that there are no fundamental areas of disagreement or, if there are, that these are known in advance and your organisation is forewarned.

Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, the End User and vendor must agree upon the maximum acceptable number of outstanding problems in each particular category. Again, prior consideration of this is advisable.
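The agreed Criteria for Acceptance can likewise be written down as a simple threshold table: the maximum number of outstanding problems permitted at each severity level at sign-off. The ceilings below are placeholders for whatever the End User and vendor actually agree.

    # Hypothetical agreed ceilings: max outstanding problems per severity
    # level (1-6) that are acceptable at sign-off.
    MAX_OUTSTANDING = {1: 0, 2: 0, 3: 2, 4: 5, 5: 10, 6: 20}

    def meets_acceptance_criteria(outstanding_counts):
        # outstanding_counts maps severity level -> number of unresolved problems.
        return all(outstanding_counts.get(level, 0) <= limit
                   for level, limit in MAX_OUTSTANDING.items())

    print(meets_acceptance_criteria({1: 0, 2: 0, 3: 1, 4: 3, 5: 7, 6: 12}))  # True
    print(meets_acceptance_criteria({2: 1}))                                  # False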

N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality, which could be classified as scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing.
