Program Conditions

Algorithms or code can be documented with comments that state conditions that should be true when program execution reaches the point where the comment is located. These comments are called preconditions, invariants, or postconditions, depending on their placement relative to a code or algorithm segment.

In object-oriented programming, invariants can refer to the state of an object. They should be true at all times except during the execution of methods for the object. The object's constructor should have the invariant as a postcondition. All methods that modify the object's state should have the invariant as part of both their precondition and their postcondition.
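As an illustration, here is a minimal Python sketch; the Counter class, its capacity field, and the assertions are hypothetical examples, not taken from the text. The constructor establishes the invariant as a postcondition, and a state-modifying method checks it on both entry and exit.

```python
class Counter:
    """A counter whose invariant is: 0 <= self.count <= self.capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0
        # The constructor has the invariant as a postcondition.
        assert self._invariant()

    def _invariant(self):
        return 0 <= self.count <= self.capacity

    def increment(self):
        # The invariant is part of the precondition...
        assert self._invariant()
        if self.count < self.capacity:
            self.count += 1
        # ...and part of the postcondition.
        assert self._invariant()
```

Between the entry and exit checks of a method the invariant may be temporarily false; the point is that no caller can ever observe such a state.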

Preconditions, invariants, and postconditions are all statements about the state of the computation at particular points in the code surrounding a segment of code.

The correctness of a loop can be ascertained by making the following sequence of checks:

1. Check that the precondition implies that the invariant is true when execution first reaches the loop.
2. Check that if the invariant is true at the start of the loop body, then it is also true at the end of the loop body.
3. Check that if the invariant is true and the loop exit condition holds, then the postcondition is true.

If the loop satisfies the above checks then the postcondition must be true whenever the precondition is true, provided that the loop terminates. Since it is possible for a loop to run forever, one more check is needed:

4. Check that the loop must terminate.
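These checks can be annotated directly in code. The following Python sketch is a hypothetical example (the array_sum function is not from the text); the comments mark the precondition, the invariant, the postcondition, and the termination argument of a simple summing loop.

```python
def array_sum(a):
    # Precondition: a is a list of numbers.
    total = 0
    i = 0
    # Invariant: total == sum(a[:i]) and 0 <= i <= len(a).
    while i < len(a):
        # The first statement falsifies the invariant (total lags
        # behind i); the second reestablishes it.
        total += a[i]
        i += 1
    # Postcondition: i == len(a), so total == sum(a).
    # Termination: len(a) - i decreases by 1 on every iteration.
    return total
```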

The following iterative algorithm searches the vertices and edges of a graph that are reachable from the starting vertex v. This algorithm only provides the navigation for a graph search. Practical applications of this algorithm also record information about visited edges and vertices. We are only interested here in the correctness of the navigation: that the algorithm visits every vertex that is reachable from the start vertex through directed graph edges.

    visit v
    put all of the edges that leave v into the dispenser
    // Precondition:
    // v is visited.  Every other vertex in the graph is unvisited.
    // Every edge in the graph that goes from v to another vertex
    // is in the dispenser.
    while the dispenser is not empty
      // Invariant:
      // Every edge in the graph that goes from a visited
      // vertex to an unvisited vertex is in the dispenser.
      retrieve and remove edge e from the dispenser
      let w be the head of e
      if w has not been visited
        visit e
        visit w
        put all of the edges that leave w into the dispenser
    // Postcondition:
    // If there is a directed path from v to a
    // vertex x then x has been visited.

We have four things to do:

1. Check that the precondition implies the invariant.
2. Check that the loop body preserves the invariant.
3. Check that the invariant, together with an empty dispenser, implies the postcondition.
4. Check that the loop terminates.

Here, the precondition is just a rephrasing of the invariant in the starting case where v is the only visited vertex.

This means that you assume that the invariant is true at the beginning of the loop body, and you check that it must therefore be true at the end. Usually, the invariant is falsified early in the loop code in order to make progress towards termination, and the remaining code is designed to reestablish the invariant.

For the graph search loop, the truth of the invariant can be changed in two ways: an edge can be added to or removed from the dispenser, or the visited status of an edge's end vertices can be changed. There are two cases to consider, depending on the truth of the if statement condition.

w Not Visited

If the head of e has not been visited, then the removal of e from the dispenser can falsify the invariant. But later code in the loop body changes the status of the head vertex, so that by the end of the loop body the removed edge no longer goes from a visited vertex to an unvisited vertex.

Changing the status of the head vertex to visited can also create new edges that go from a visited vertex to an unvisited vertex, but all edges that leave the head of e are added to the dispenser, so the invariant is reestablished by the end of the loop body.

w Visited

If the head of e has already been visited, then the removal of e does not falsify the invariant, because e does not go from a visited vertex to an unvisited vertex. In this case, the visited status of vertices is unchanged, so the invariant remains true.


If the invariant is true each time execution reaches the top of the loop body, then when the loop terminates, the invariant still holds and the dispenser is empty. Therefore no edge in the graph goes from a visited vertex to an unvisited vertex.

From this it is easy to see that every directed path that starts at a visited vertex must end at a visited vertex. This, together with the precondition, implies the postcondition.

A vertex can only be visited once, due to the if statement inside the loop. Since edges are only put into the dispenser when their tail vertex is visited, each edge enters the dispenser at most once. Each iteration of the loop removes one edge, so the number of iterations is bounded by the number of directed edges in the graph.
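The graph search pseudocode above can be sketched in Python. The algorithm leaves the dispenser's retrieval order unspecified; this sketch happens to use a deque as a queue (giving breadth-first order), and the adjacency-map representation and function name are choices made for this example.

```python
from collections import deque

def graph_search(adj, v):
    """Return the set of vertices reachable from v.

    adj maps each vertex to the list of heads of its outgoing edges.
    The dispenser holds edges as (tail, head) pairs; any retrieval
    order satisfies the invariant-based correctness argument.
    """
    visited = {v}
    dispenser = deque((v, w) for w in adj.get(v, []))
    while dispenser:
        # Invariant: every edge from a visited vertex to an
        # unvisited vertex is in the dispenser.
        tail, w = dispenser.popleft()
        if w not in visited:
            visited.add(w)
            dispenser.extend((w, x) for x in adj.get(w, []))
    return visited
```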


Testing means verifying correct behavior. Testing can be done at all stages of module development: requirements analysis, interface design, algorithm design, implementation, and integration with other modules. In the following, attention will be directed at implementation testing. Implementation testing is not restricted to execution testing. An implementation can also be tested using correctness proofs, code tracing, and peer reviews, as described below.

Debugging is a cyclic activity involving execution testing and code correction. The testing that is done during debugging has a different aim than final module testing. Final module testing aims to demonstrate correctness, whereas testing during debugging is primarily aimed at locating errors. This difference has a significant effect on the choice of testing strategies.

In order to avoid excessive time spent on debugging, the programmer should be mentally prepared for the effort. The following steps are useful to prepare for debugging.

To effectively debug code you need two capabilities. First, you need to be able to efficiently call on the services provided by the module. Second, you need to be able to get information back about the results of the calls, changes in the internal state of the module, error conditions, and what the module was doing when an error occurred.

Driving the module

To effectively debug a module, it is necessary to have some method for calling upon the services provided by the module. There are two common methods for doing this.

Obtaining information about the module

Being able to control the sequence of calls to module services has little value unless you can also obtain information about the effects of those calls. If the services generate output then some information is available without any further effort. However, for many modules, including data structure modules, the primary effect of calls to services is a change in the internal state of the module. This leads to a need for three kinds of debugging information.

Aids built into the programming language
Incremental testing

In a good design for a complex module, the code is broken up into numerous subroutines, most of which are no more than 10 to 15 lines long. For a module designed in this way, incremental testing offers significant advantages.

For incremental testing, the subroutines are classified in levels, with the lowest level subroutines being those that do not call other subroutines. If subroutine A calls subroutine B then A is a higher level subroutine than B. The incremental testing strategy is to test the subroutines individually, working from the lowest level to higher levels. To do testing at the lower levels, the test driver must either be capable of calling the low level subroutines directly, or else the programmer must be able to provide several test input cases, each of which only involves a small number of low level subroutines. Devising these test cases requires a thorough understanding of the module algorithms, along with a good imagination.

The strength of incremental testing is that at any time in the process, there are only a small number of places where errors can arise. This automatically makes debugging information more meaningful and leads to quicker determination of the cause of an error. A second reason for incremental testing is that it greatly reduces the chances of having to deal with two or more errors at the same time. Multiple errors often will generate confusing error indications.

Sanity checks

Low level code in a complex data structure module is often written with the assumption that the higher level code correctly implements the desired algorithm. For example, the low level code may be written with the assumption that a certain variable or parameter cannot be NULL. Even if that assumption is justified by the algorithm, it may still be a good idea to put in a test to see if the condition is satisfied, because the higher level code may be implemented incorrectly. This kind of check is called a sanity check. If an assert procedure is available then it can be used for the checks. The advantage of sanity checks is that they give early detection of errors.
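A sketch of a sanity check in Python, using assert on a low level linked list helper; the Node class and list_length function are illustrative only, not from the text.

```python
class Node:
    """Minimal singly linked list node for the example."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def list_length(node):
    # The list algorithm may guarantee that node is never None here,
    # but a bug in higher level code could violate that assumption,
    # so the sanity check tests it explicitly for early detection.
    assert node is not None, "list_length called with None"
    n = 0
    while node is not None:
        n += 1
        node = node.next
    return n
```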

Boolean constants for turning debugging code on or off

If debugging code is added to a module then it is often profitable to enclose it in an if statement that is controlled by a Boolean constant added to the module. By doing this, the debugging code can easily be turned off, yet be readily available if needed later. Different constants should be used for different stages of testing so that useless information is minimized.
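One possible shape for such constants, sketched in Python with a hypothetical Stack module; the flag names and stages are invented for this example. Each testing stage gets its own constant so that output from earlier, already-verified stages can be silenced independently.

```python
# Module-level Boolean constants, one per testing stage.
DEBUG_PUSH = False   # flip to True while testing push
DEBUG_POP = False    # flip to True while testing pop

class Stack:
    def __init__(self):
        self.items = []

    def push(self, x):
        if DEBUG_PUSH:
            print("push:", x)
        self.items.append(x)

    def pop(self):
        if DEBUG_POP:
            print("pop from:", self.items)
        return self.items.pop()
```

With both constants False the module runs silently, yet the debugging code stays in place for later use.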

Error variables for controlling program behavior after errors

When debugging print statements are added to code, there is the possibility of a tremendous explosion of useless information. The problem is that a print statement by itself will be executed whether or not there is an error. Thus, if the error does not appear until a large number of subroutine calls have been made, then most of the messages are just telling you everything is okay so far. This problem is greatly magnified if the added code is displaying the internal structure of a data structure.

Assuming that the module has sanity checks for error detection, an error Boolean variable can be added to the module. It should be initialized to false, indicating that there is no error. For most data structures, there is a Create operation for initialization; the error variable can be initialized at the same time. Instead of exiting, the sanity checks are modified so that they set the error variable to true. Then debug code can be enclosed in if statements so that information is only printed when errors have been detected. One possible application of this method is obtaining traceback information when it is not otherwise available.
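A minimal sketch of this technique in Python; the Table class, its _sanity helper, and the message texts are all hypothetical. The sanity check records the error instead of exiting, and the debug output is gated on the error variable so nothing is printed while the module is behaving correctly.

```python
class Table:
    """Hypothetical data structure with an error flag for debugging."""

    def __init__(self):          # plays the role of the Create operation
        self.error = False       # no error detected yet
        self.items = []

    def _sanity(self, condition, message):
        # Instead of exiting, record the error so that debug code
        # elsewhere can report what the module was doing.
        if not condition:
            self.error = True
            print("sanity check failed:", message)

    def insert(self, x):
        self._sanity(x is not None, "insert called with None")
        if self.error:
            # Debug output only appears after an error is detected.
            print("insert aborted; current state:", self.items)
            return
        self.items.append(x)
```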

Traceback techniques

To obtain a traceback, use an error Boolean variable set by sanity checks. At various places in the module, add debug code, controlled by the error variable, that prints the current position. Usually it is more economical to first run the code with a terminating sanity check; then you only need to add the controlled debug code at the places where the subroutine that contains the sanity check is called.

For the correction of errors detected by testing, there is one very important principle to keep in mind: fix the cause, not the symptom.

Suppose that you run some code and get a segmentation fault. After some checking you determine that a NULL pointer was passed into a procedure that did not check for NULL, but tried to reference through the pointer anyway. Should you add a NULL pointer check to the procedure, enclosing the entire body of the procedure in an if statement? This question cannot be answered without an understanding of the design and algorithm. It may be that if the algorithm is correctly implemented then the pointer cannot be NULL, so the procedure does not make the check. If that is the case then adding the if statement does not fix the cause of the problem. Instead, it makes matters worse by covering up the symptoms. The problem will surely appear somewhere else, but now the symptoms will be further removed from the cause. Code such as the pointer NULL check should be added only if you are sure that it should be part of the algorithm. If you add a NULL pointer check that is not required by the algorithm then it should report an error condition. In other words, it should be a sanity check.
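The distinction can be sketched in Python; the Slot class and free_slot procedure are hypothetical, and the scenario assumes the algorithm guarantees a non-NULL argument.

```python
class Slot:
    """Minimal record type for the example."""
    def __init__(self):
        self.in_use = True

def free_slot(slot):
    # Wrong fix: silently guard against None, which covers up the
    # caller's bug and moves the symptom further from the cause:
    #     if slot is None:
    #         return
    # Better: since the algorithm guarantees slot is not None, a
    # None argument is an error in the caller. Report it loudly,
    # as a sanity check, instead of hiding it.
    assert slot is not None, "free_slot: slot is None; caller is buggy"
    slot.in_use = False
```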