Chapter 17 Amortized Analysis 17.0.1
In amortized analysis, the time required for
a sequence of operations is averaged. This
average may be low even if a single operation
in the sequence is expensive. Amortized
analysis differs from average-case analysis in
that probability is not involved; amortized
analysis guarantees the average performance of
each operation in the worst case.
Sections 17.1, 17.2, and 17.3 cover the
three most common methods used in amortized
analysis.
Section 17.1 treats aggregate analysis, which
determines an upper bound on the total cost
T(n) of n operations. Each operation is given
the average T(n)/n as its amortized cost.
Section 17.2 treats the accounting method,
in which we determine an amortized cost for
each operation. This method overcharges some
operations early in the sequence, storing the
overcharge as a "prepaid credit" to be used to
pay for later operations that are charged less
than they actually cost.
Section 17.3 treats the potential method, in
which we also determine an amortized cost for
each operation, and may overcharge operations
early to make up for later undercharges. The
credit is maintained as the "potential energy"
of the data structure as a whole instead of
being assigned to individual objects in it.
17.1 Aggregate analysis 17.1.1
In aggregate analysis, we show that for all n
a sequence of n operations takes worst-case
time T(n) in total, so that the average or
amortized cost for each operation is T(n)/n.
Stack operations
As a first example of aggregate analysis, we
analyze stacks augmented with a MULTIPOP
operation, in addition to PUSH(S,x) and
POP(S). Each of PUSH and POP takes O(1) time,
and we assume each has cost 1, so that the
total cost of n PUSH and POP operations is n
and their running time is Theta(n).
MULTIPOP(S,k) pops the k top objects from S
or pops the whole stack if it has fewer than k
objects. It uses the STACK-EMPTY function.
Figure 17.1(a) shows a stack; Figure 17.1(b)
shows the stack after MULTIPOP(S,4); and
Figure 17.1(c) shows the empty stack after
MULTIPOP(S,7):
top -> 23 Figure 17.1
17
6
39
10 top --> 10
47 47
--- --- --- (empty)
(a) (b) (c)
MULTIPOP(S,k) 17.1.2
1 while not STACK-EMPTY(S) and k != 0
2 POP(S)
3 k = k - 1
The running time of MULTIPOP(S,k) on a stack
of s objects is O(min(s,k)), since the while
loop iterates min(s,k) times and each
iteration performs one Theta(1)-time POP.
Let us analyze a sequence of n PUSH, POP, and
MULTIPOP operations on an initially empty
stack. The worst-case cost of a MULTIPOP in
the sequence is O(n) since the stack size is
at most n. Thus the worst-case time of any
stack operation is O(n), and hence a sequence
of n operations could cost O(n^2). But this
bound is not tight.
We use aggregate analysis to obtain a better
bound: any sequence of n operations on an
initially empty stack costs O(n) in total.
Why? Because each object can be popped at
most once for each time it is pushed, so the
number of pops, including pops made inside
MULTIPOP, is at most the number of pushes,
which is at most n. The average cost of an
operation is thus O(n)/n = O(1); this is the
amortized cost of each operation.
Note: no probability was involved. We found
O(n) to be the worst-case bound on the cost of
a sequence of n operations; dividing by n is
the average or amortized cost per operation.
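The aggregate bound can also be checked empirically. The sketch below is a Python rendering of the stack (the class name and the particular operation sequence are illustrative assumptions, not from the text); it counts the actual cost of every operation and confirms that n operations cost at most 2n in total.

```python
class CountingStack:
    def __init__(self):
        self.items = []
        self.cost = 0          # total actual cost so far

    def push(self, x):
        self.items.append(x)
        self.cost += 1         # PUSH has actual cost 1

    def pop(self):
        self.cost += 1         # POP has actual cost 1
        return self.items.pop()

    def multipop(self, k):
        # pops min(k, s) objects; each pop costs 1
        while self.items and k > 0:
            self.pop()
            k -= 1

# An illustrative sequence of n operations: 100 pushes
# interleaved with 10 MULTIPOPs.
s = CountingStack()
n = 0
for i in range(100):
    s.push(i); n += 1
    if i % 10 == 9:
        s.multipop(7); n += 1
# Each pop is paid for by a prior push, so the total cost
# never exceeds twice the number of operations.
assert s.cost <= 2 * n
```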
Incrementing a binary counter 17.1.3
As a second example of aggregate analysis, we
consider a k-bit binary counter that counts up
from 0. The counter is an array A[0..k-1] of
bits, where A.length = k. A binary number x
has lowest-order bit in A[0] and highest-order
bit in A[k-1], so that:
      k-1
  x = Sum ( A[i] * 2^i )
      i = 0
Initially, x = 0, so A[i] = 0 for all i. To
add 1 (modulo 2^k) to the counter, use:
INCREMENT(A)
1 i = 0
2 while i < A.length and A[i] = 1
3 A[i] = 0
4 i = i + 1
5 if i < A.length
6 A[i] = 1
At the start of each iteration of the while
loop, we wish to add 1 into position i. If
A[i] = 1, we flip the bit to 0 in A[i] which
yields a carry of 1 to be added into position
i+1. Otherwise the loop ends, and if i < k,
A[i] = 0, so adding 1 to that position flips
A[i] to 1. The cost of each call to INCREMENT
is linear in the number of bits flipped.
Figure 17.2 (page 455) below 17.1.4
shows what happens when a binary counter is
incremented 16 times starting at 0.
Counter Total
value A[6] A[5] A[4] A[3] A[2] A[1] A[0] cost
0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 (1) 1
2 0 0 0 0 0 (1 0) 3
3 0 0 0 0 0 1 (1) 4
4 0 0 0 0 (1 0 0) 7
5 0 0 0 0 1 0 (1) 8
6 0 0 0 0 1 (1 0) 10
7 0 0 0 0 1 1 (1) 11
8 0 0 0 (1 0 0 0) 15
9 0 0 0 1 0 0 (1) 16
10 0 0 0 1 0 (1 0) 18
11 0 0 0 1 0 1 (1) 19
12 0 0 0 1 (1 0 0) 22
13 0 0 0 1 1 0 (1) 23
14 0 0 0 1 1 (1 0) 25
15 0 0 0 1 1 1 (1) 26
16 0 0 (1 0 0 0 0) 31
Note that this differs slightly from Figure
17.2 in that the bits flipped to reach _this_
counter value are enclosed in parentheses.
Note also that the total cost is always less
than twice the number of INCREMENT
operations.
17.1.5
Note that INCREMENT takes Theta(k) time in
the worst case when A contains all 1's. Thus
a sequence of n INCREMENT operations on a
counter initially at 0 could take O(nk) time.
As in the stack example, we can tighten this
bound to O(n) by observing that not all bits
flip each time. As Figure 17.2 shows, A[0]
does flip each time and A[1] flips every other
time or floor(n/2) times for n INCREMENTs.
Similarly, A[2] flips every fourth time or
floor(n/4) times for n INCREMENTs. In general
A[i] flips floor(n/2^i) times in n INCREMENTs
for i = 0, 1, ... , k-1, and A[i] does not
flip for i >= k. So the total number of flips
is:
k-1 infinity
Sum ( floor(n/2^i) ) < n * Sum ( 1/2^i )
i = 0 i = 0
= 2n
by equation A.6 (page 1147). The worst-case
time for n INCREMENT operations on an
initially zero counter is thus O(n). The
average cost, and therefore the amortized cost
per operation is O(n)/n = O(1).
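The flip-counting argument can be checked directly. Here is a short Python rendering of INCREMENT (an assumption of this write-up, mirroring the pseudocode) that counts bits flipped and reproduces the total cost in the table above.

```python
def increment(A):
    """CLRS-style INCREMENT; returns the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0              # flip a 1 to 0 (carry)
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1              # flip a 0 to 1
        flips += 1
    return flips

A = [0] * 8                   # k = 8 bits, counter at 0
total = sum(increment(A) for _ in range(16))
assert total == 31            # matches the table's total cost
assert total < 2 * 16         # aggregate bound: fewer than 2n flips
```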
17.2 The accounting method 17.2.1
In the accounting method, we assign differing
charges to different operations; the amount we
charge an operation is its amortized cost. If
an operation's amortized cost exceeds its
actual cost, the difference is assigned to
specific objects in the data structure as
credit. Credit can be used later to pay for
operations whose amortized cost is less than
their actual cost. Note that this differs
from aggregate analysis, in which operations
all have the same amortized cost.
We want to choose amortized costs small
enough that we can prove the average cost per
operation is small in the worst case. We must
also choose them so that, for any sequence of
operations, the total amortized cost is an
upper bound on the total actual cost. If we
denote the actual cost of the i-th operation
by c_i and its amortized cost by c-hat_i, then
we require for all sequences of n operations:
n n
Sum ( c-hat_i ) >= Sum ( c_i ) (17.1)
i = 1 i = 1
The total credit stored in the data structure
is the left side of (17.1) minus the right
side, which should always be non-negative.
Stack operations 17.2.2
Recall that the actual costs of the stack
operations were:
PUSH 1,
POP 1,
MULTIPOP min(k,s),
where k is MULTIPOP's argument and s is the
stack size when it is called. Let us assign
the following O(1) amortized costs:
PUSH 2,
POP 0,
MULTIPOP 0.
We now show that we can pay for any sequence
of stack operations by charging the amortized
costs in dollars. We start with an empty
stack. When we push an item, the actual cost
is $1, and we push the other dollar of the
amortized cost "onto the stack, along with the
item". Thus every item in the stack has a
dollar attached to it.
When we pop an item, we charge the operation
nothing, but pay the actual cost for the pop
with the dollar attached to the popped item.
Moreover, we don't charge MULTIPOP anything
either, since we pay for each pop with the
dollar attached to that item. Since each item
in the stack has a dollar attached to it, the
credit is the number of items/dollars on the
stack, and so is always non-negative. Thus,
for any sequence of n stack operations, the
total amortized cost, O(n), is also a bound
on the total actual cost.
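The credit invariant in this argument can be simulated. In the sketch below (Python; the random operation mix is an illustrative assumption), each PUSH is charged $2 and every pop is paid from stored credit, and the credit always equals the number of items on the stack, hence is never negative.

```python
import random

random.seed(0)                 # reproducible operation mix
stack, credit = [], 0
for _ in range(1000):
    op = random.choice(["push", "pop", "multipop"])
    if op == "push":
        stack.append(0)
        credit += 2 - 1        # charge $2, actual cost $1
    elif op == "pop" and stack:
        stack.pop()
        credit += 0 - 1        # charge $0, pay with stored dollar
    elif op == "multipop":
        popped = min(random.randint(1, 5), len(stack))
        del stack[len(stack) - popped:]
        credit += 0 - popped   # charge $0, pay per popped item
    assert credit == len(stack)   # one dollar rides on each item
assert credit >= 0
```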
Incrementing a binary counter 17.2.3
We also analyze the binary counter using the
accounting method. The running time of the
counter is proportional to the bits flipped,
which we use as our cost. For this analysis
we charge an amortized cost of $2 to set a bit
to 1, and nothing to set it to 0. When a bit
is set to 1, we pay for it with one dollar and
place the other dollar on that 1-bit, so if we
reset it later we pay for it with that dollar.
In INCREMENT, the cost of resetting bits in
the while loop is paid for by the dollars on
them. At most one bit is set, and therefore
the amortized cost of an INCREMENT is at most
$2. The number of 1's in the counter is never
negative, so the credit is always nonnegative.
Thus for n INCREMENT operations, the total
amortized cost is O(n), which also bounds the
actual cost.
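Likewise for the counter: the sketch below (a Python rendering; the helper returning both the actual flips and the amortized charge is an assumption of this write-up) verifies that the stored credit always equals the number of 1-bits.

```python
def increment(A):
    """Returns (actual flips, amortized charge) for one INCREMENT."""
    flips = i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0               # reset paid by the dollar stored on it
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1               # $1 pays this flip, $1 stays on the bit
        flips += 1
        return flips, 2
    return flips, 0            # overflow: nothing set, nothing charged

A = [0] * 8
actual = charged = 0
for _ in range(16):
    f, c = increment(A)
    actual += f
    charged += c
    # credit = charged so far - actual so far = number of 1-bits
    assert charged - actual == sum(A) >= 0
assert actual == 31 and charged == 32
```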
17.3 The potential method 17.3.1
Unlike the accounting method in which credit
is attached to data objects, in the potential
method, credit is attached to the entire data
structure. This credit or prepaid work is
called the "potential energy" or "potential",
that can be used to pay for future operations.
We perform n operations on an initial data
structure D_0. For i = 1,2,...,n, we let c_i
be the actual cost of the i-th operation and
D_i be the data structure that results after
applying the first i operations. A potential
function Phi maps each D_i to a real number
Phi(D_i), which is the potential associated
with D_i. The amortized cost c-hat_i of the
i-th operation with respect to Phi is defined
by equation (17.2):
c-hat_i = c_i + Phi(D_i) - Phi( D_(i-1) )
Thus the total amortized cost of the n
operations is:
   n
  Sum ( c-hat_i )                      (17.3)
  i = 1
       n
    = Sum ( c_i + Phi(D_i) - Phi(D_(i-1)) )
      i = 1
       n
    = Sum ( c_i ) + Phi(D_n) - Phi(D_0)
      i = 1
by equation (A.9) for telescoping sums.
If we define Phi so that                17.3.2
Phi(D_n) >= Phi(D_0), then the total
amortized cost Sum( c-hat_i ) is an upper
bound on the total actual cost Sum( c_i ).
We can guarantee this for all n if we require
that Phi(D_i) >= Phi(D_0) for all i, which
says we always "pay in advance". Usually we
define Phi(D_0) to be 0 (Exercise 17.3-1 shows
what to do if Phi(D_0) > 0).
If the difference Phi(D_i) - Phi( D_(i-1) )
is positive, c-hat_i represents an overcharge
to the i-th operation; and if the difference
is negative, the amortized cost c-hat_i
represents an undercharge and the actual cost
is paid by the decrease in the potential.
The amortized costs above depend on how Phi
is chosen. Different functions Phi can give
different amortized costs. There are often
trade-offs in choosing Phi; the best choice of
Phi depends on the desired time bounds.
Stack operations
For our stack example, we define Phi to be
the number of items on the stack. So for the
starting empty stack D_0, Phi(D_0) = 0. Since
the number of items on the stack is never
negative, D_i has nonnegative potential, so
Phi(D_i) >= 0 = Phi(D_0)
17.3.3
Thus the total amortized cost of n operations
represents an upper bound on the actual cost.
Now we compute the amortized cost of the
stack operations. If the i-th operation on a
stack containing s items is a PUSH, then the
potential difference is:
Phi(D_i) - Phi( D_(i-1) ) = (s + 1) - s = 1
So by equation (17.2), the amortized cost is:
c-hat_i = c_i + Phi(D_i) - Phi( D_(i-1) )
= 1 + 1 = 2
Now suppose that the i-th operation is a
MULTIPOP(S,k) and that k' = min(s,k) items are
popped off the stack. The actual cost of the
operation is k', and the potential difference
is: Phi(D_i) - Phi( D_(i-1) ) = -k'.
Thus the amortized cost of the MULTIPOP is:
c-hat_i = c_i + Phi(D_i) - Phi( D_(i-1) )
= k' + ( -k' ) = 0
Similarly, the amortized cost of POP is 0 too.
The amortized cost of the three operations is
O(1), so the total amortized cost of n
operations is O(n). Since we have argued that
Phi(D_i) >= Phi(D_0), the total amortized cost
is an upper bound for the total actual cost.
The worst-case actual cost of n operations is
therefore O(n) also.
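Equation (17.2) can be evaluated mechanically for each operation. In this sketch (Python; the `amortized` helper is an illustrative assumption), Phi is the stack size, and the computed amortized costs come out to 2 for PUSH and 0 for POP and MULTIPOP, as derived above.

```python
stack = []

def amortized(op, k=1):
    """Return c_i + Phi(D_i) - Phi(D_(i-1)) with Phi = stack size."""
    phi_before = len(stack)
    if op == "push":
        stack.append(0)
        actual = 1
    elif op == "pop":
        stack.pop()
        actual = 1
    else:                          # multipop: pops k' = min(k, s) items
        popped = min(k, len(stack))
        del stack[len(stack) - popped:]
        actual = popped
    return actual + len(stack) - phi_before

assert amortized("push") == 2
assert amortized("push") == 2
assert amortized("pop") == 0
for _ in range(5):
    amortized("push")
assert amortized("multipop", k=3) == 0
assert amortized("multipop", k=100) == 0   # pops only the items present
```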
Incrementing a binary counter 17.3.4
For the binary counter, we define the
potential to be b_i, the number of 1-bits in
the counter after the i-th operation.
Let us compute the amortized cost of an
INCREMENT operation. Suppose that the i-th
INCREMENT resets t_i bits. The actual cost of
the operation is then at most t_i + 1, since
at most one bit is set to 1.
If b_i = 0, then the i-th operation reset all
k bits, and so b_(i-1) = t_i = k. Therefore
b_i = 0 = b_(i-1) - t_i < b_(i-1) - t_i + 1.
If b_i > 0, then b_i = b_(i-1) - t_i + 1.
In either case, b_i <= b_(i-1) - t_i + 1,
and the potential difference is:
Phi(D_i) - Phi( D_(i-1) )
<= ( b_(i-1) - t_i + 1 ) - b_(i-1)
= 1 - t_i
The amortized cost is therefore:
c-hat_i = c_i + Phi(D_i) - Phi( D_(i-1) )
<= (t_i + 1) + ( 1 - t_i ) = 2
If the counter starts at zero, Phi(D_0) = 0.
Since Phi(D_i) >= Phi(D_0), the total
amortized cost of n INCREMENT operations is an
upper bound on the total actual cost, and so
the worst-case cost of n INCREMENT operations
is O(n).
17.3.5
The potential method also gives us a way to
analyze the counter even when it doesn't start
at zero. There are initially b_0 1's and
after n INCREMENT operations there are b_n 1's
where 0 <= b_0, b_n <= k, the number of bits
in the counter. We can rewrite (17.3) as:
n n
Sum(c_i) = Sum(c-hat_i) - Phi(D_n) + Phi(D_0)
i = 1 i = 1
We have c-hat_i <= 2, and since Phi(D_0) = b_0
and Phi(D_n) = b_n, the total actual cost is:
n n
Sum(c_i) <= Sum( 2 ) - b_n + b_0
i = 1 i = 1
= 2n - b_n + b_0
In particular, note that since b_0 <= k, as
long as k = O(n), the total actual cost is
O(n). In other words, if we execute at least
n = Omega(k) INCREMENT operations, the total
actual cost is O(n), no matter what the
initial value of the counter was.
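Both potential-method claims for the counter can be checked in one sketch (Python, mirroring the pseudocode; the starting bit pattern is an arbitrary assumption): each INCREMENT has amortized cost at most 2, and from a nonzero start the total actual cost of n INCREMENTs is at most 2n - b_n + b_0.

```python
def increment(A):
    """Flip bits as in INCREMENT; return the number of flips."""
    flips = i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1
        flips += 1
    return flips

A = [1, 0, 1, 1, 0]            # k = 5 bits, arbitrary start: b_0 = 3
b0, n, total = sum(A), 40, 0
for _ in range(n):             # wraps past 2^5, exercising overflow
    phi_before = sum(A)        # Phi = number of 1-bits
    c = increment(A)
    total += c
    assert c + sum(A) - phi_before <= 2   # amortized cost <= 2
assert total <= 2 * n - sum(A) + b0       # bound from section 17.3.5
```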
17.4 Dynamic Tables (skipped) 17.4.1