Recursive Optimization

Many problems are optimization problems: you are attempting to find a way of doing something that maximizes or minimizes some quantity. A recursive divide and conquer approach is often successful for dealing with such problems. The general idea is to solve the optimization problem by seeking ways of dividing it up into smaller problems of similar type. These subproblems can then be solved recursively. You usually have to try a number of ways of subdividing the original problem to ensure that you have considered all possibilities. Then you compare the different ways to find the optimal solution.
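As a minimal sketch of this pattern, consider the classic rod-cutting problem (my own illustration; the text does not prescribe a specific example): given a price for each piece length, cut a rod of length n to maximize the total revenue. Each choice of the first piece divides the problem into that piece, sold directly, plus a smaller rod of the same type, solved recursively; comparing all choices of the first piece gives the optimum.

```python
def cut_rod(prices, n):
    """Maximum revenue from cutting a rod of length n.

    prices[i] is the price of a single piece of length i
    (prices[0] is unused).
    """
    if n == 0:                  # base case: nothing left to cut
        return 0
    best = 0
    # Try every possible length i for the first piece; the remainder
    # of the rod (length n - i) is a smaller problem of the same type.
    for i in range(1, n + 1):
        best = max(best, prices[i] + cut_rod(prices, n - i))
    return best

# Illustrative price table (an assumption, not data from the text):
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

With this table, a rod of length 4 is best cut into two pieces of length 2, for a revenue of 5 + 5 = 10.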

Visualizing the Algorithm

It is very important to spend some time visualizing the decompositions involved in a recursive optimization without concern for numerical descriptions. Draw diagrams. Look for features in the diagrams that distinguish the different possible subdivisions. Draw some simple examples that can be solved easily without resorting to recursion. These will suggest base cases for the recursion.

It is also useful to look at the quantity that you are trying to optimize to see how its optimal value is related to optimal values for subproblems. Doing this may lead you to ways of reducing the amount of information that you need to deal with.

Refining the Algorithm

The following steps are intended as a heuristic for refining the algorithm. There will be some variation in their importance and how they are applied as you go from one kind of problem to another.

  1. Characterize the original problem
    What is the input? What is the output? What are the parameters?

    The term parameters is somewhat vague. The intent is to identify information about the problem that will be used in controlling iterations or determining base cases.

  2. Characterize the generalized problem
    What is the input? What is the output? What are the parameters?

    For many problems, the input is just a portion of the input for the original problem, but the parameterization needs to be generalized to support a recursion. In some problems, determining the output for the generalized problem is not crucial at this point. You often save only a small amount of information about the solution, relying on processing after the recursion to reconstruct a solution. This becomes clearer in later steps, so you may want to come back to this question later.

    For run time analysis, it is useful to know the constraints on these parameters in terms of the original problem parameters.

  3. Characterize the base cases
    What constraints determine the parameters for cases that can be solved directly? You will usually have some flexibility in characterizing the base cases. Coding will be easiest if you make the base cases as simple as possible. Sometimes you can make the code more efficient by using more complex base cases, with optimized coding for these cases.

    It is best to begin with base cases as simple as possible. When you code up and test a solution, you may find that the run time is adequate. If you do need to improve the run time, you will find that very little of the initial algorithm development work is wasted. You will also be in a better position to do run time enhancement since you will have a better understanding of the algorithm.

  4. Characterize the possible subdivisions of a generalized problem
    Sometimes there are only a fixed number of possible subdivisions that you need to consider. Then you can just enumerate them. For more complex generalized problems, you may need to work out a parameterization of the possible subdivisions. Then you should also determine constraints on the subdivision parameters in terms of the generalized problem parameters.
  5. Identify the subproblems that arise for the subdivisions
    The most important thing that you are doing here is identifying the parameters for the subproblems in terms of the parameters of the subdivision.
  6. Identify how subdivision solutions can be combined to give a possible solution to a generalized problem
    Here, you want to determine how to compute a possible value of the quantity that you are trying to optimize for a problem from the optimal values for subproblems.
  7. Identify how the best possible subdivision is selected
    This is usually straightforward. From all of the possible values determined in the previous step, you select the largest or smallest.
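The steps above can be sketched on a small concrete problem. Here, as an assumed illustration (not an example from the text), the original problem is rod cutting: maximize the revenue from a rod of length n given a price for each piece length. The comments mark which step each part of the code corresponds to.

```python
def max_revenue(prices, n):
    # Steps 1-2: the original problem is parameterized by n; the
    # generalized problem is parameterized by the remaining length k,
    # with 0 <= k <= n.
    def solve(k):
        # Step 3: base case -- a rod of length 0 earns nothing.
        if k == 0:
            return 0
        # Step 4: subdivisions are parameterized by the length i of
        # the first piece cut off, with 1 <= i <= k.
        candidates = []
        for i in range(1, k + 1):
            # Step 5: the one subproblem is the remaining rod of
            # length k - i.
            # Step 6: combine by adding the first piece's price to the
            # subproblem's optimal revenue.
            candidates.append(prices[i] + solve(k - i))
        # Step 7: select the best possible subdivision.
        return max(candidates)
    return solve(n)

# Illustrative price table (an assumption, not data from the text):
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

Here the "small amount of information" saved about a solution (step 2) is just its optimal value; recovering the actual sequence of cuts would be post-processing of the kind mentioned there.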


A recursive divide and conquer algorithm is often the easiest solution to find for optimization problems, but it may not have a good run time. The most important cause of poor run time is the tendency for recursive solutions to do the same thing over and over. This arises when small subproblems occur as subproblems of a large number of larger subproblems. You can usually see when this is happening without doing an exact run time analysis. Just try to imagine how many of the ways of subdividing the original problem lead to a particular small subproblem.

If redundant solution of small subproblems appears to be happening, there are two methods for dealing with it: dynamic programming and memoization. Dynamic programming requires converting the top-down recursion to a bottom-up iteration. You begin by solving the simplest subproblems, saving their solutions in some form of a table. Once you have solved enough of the simpler subproblems, you can solve the more complex subproblems that require only solutions of the ones that you have already solved. In dynamic programming, you do not re-solve the simple subproblems - you just look up their solutions in the table.
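A bottom-up version of the rod-cutting example (again an assumed illustration) shows the ordering requirement: the table is filled in order of increasing length, so every lookup hits an entry that has already been computed.

```python
def cut_rod_dp(prices, n):
    """Bottom-up dynamic programming for rod cutting.

    revenue[k] holds the optimal value for a rod of length k; by the
    time revenue[k] is computed, every revenue[k - i] it needs is
    already in the table.
    """
    revenue = [0] * (n + 1)        # revenue[0] = 0 is the base case
    for k in range(1, n + 1):
        revenue[k] = max(prices[i] + revenue[k - i]
                         for i in range(1, k + 1))
    return revenue[n]

# Illustrative price table (an assumption, not data from the text):
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

Each of the n table entries is computed once, so the run time drops from exponential to O(n^2) for this problem.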

Memoization is similar except that you retain the recursive algorithm structure with a small modification: before solving a subproblem recursively, you check the table to see if it has already been solved. If it has been, you get its solution from the table instead of recursing. And, of course, whenever you solve a subproblem recursively, you record its solution in the table.
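A memoized version of the same assumed rod-cutting example keeps the top-down recursive structure and adds exactly the table check described above.

```python
def cut_rod_memo(prices, n):
    """Top-down rod cutting with a memo table."""
    table = {0: 0}                 # base case recorded up front
    def solve(k):
        if k in table:             # already solved? use the table
            return table[k]
        best = max(prices[i] + solve(k - i) for i in range(1, k + 1))
        table[k] = best            # record the solution before returning
        return best
    return solve(n)

# Illustrative price table (an assumption, not data from the text):
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

The recursion now solves each distinct subproblem at most once, matching the asymptotic run time of the bottom-up version while keeping the original algorithm's shape.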

Dynamic programming algorithms give faster run times than memoized algorithms, but only by a small percentage. The improvement in a dynamic programming algorithm over a memoized algorithm is due to elimination of the function call overhead in a recursion. This is usually small compared to the time spent doing computations.

On the other hand, there is usually more work involved in designing a dynamic programming algorithm - you have to set up loops for iterating through the solutions in a way that ensures that when you are solving one subproblem, you have already solved its subproblems. Nested loops are often required, and off-by-one errors are a common difficulty.