Analyzing run times of algorithms is often done in terms of "Big O" notation. This simplifies analysis, but requires some understanding of the rate of growth of some common functions.
For most practical algorithms the run times are polynomial - bounded by some polynomial in n, the size of the problem. Here are some functions that are bounded by a polynomial. (Logarithms here are base 2, though the base does not matter for Big O.) Note that n^k × log(n) is bounded by n^(k+1) for almost every n (all but finitely many values of n).
f(n) | n = 10^3 (1K) | n = 10^6 (1M) | n = 10^9 (1G) |
---|---|---|---|
1 | 1 | 1 | 1 |
log(n) | 10 | 20 | 30 |
√n | 3.2×10^1 (32) | 10^3 (1K) | 3.2×10^4 (32K) |
n | 10^3 (1K) | 10^6 (1M) | 10^9 (1G) |
n×log(n) | 10^4 (10K) | 2×10^7 (20M) | 3×10^10 (30G) |
n^2 | 10^6 (1M) | 10^12 (1T) | 10^18 |
n^3 | 10^9 (1G) | 10^18 | 10^27 |
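A few of the table entries, and the claim that n^k × log(n) is eventually bounded by n^(k+1), can be sanity-checked with a short Python sketch (logs base 2, as in the table):

```python
import math

# Spot-check entries from the table above (logs are base 2).
assert round(math.log2(10**3)) == 10      # log(n) column for n = 1K
assert round(math.log2(10**9)) == 30      # log(n) column for n = 1G
assert round(math.sqrt(10**9)) == 31623   # √n ≈ 3.2×10^4 (32K) for n = 1G

# n^k × log(n) <= n^(k+1) whenever log2(n) <= n, which holds for all n >= 1,
# so the bound kicks in immediately here.
for n in [2, 10, 1000]:
    for k in [1, 2, 3]:
        assert n**k * math.log2(n) <= n**(k + 1)
```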
Exhaustive search algorithms often have exponential or worse run times. Here are some functions that are exponential or worse. Note that the n values in this table are much smaller than in the table above.
f(n) | n = 10 | n = 20 | n = 30 |
---|---|---|---|
2^n | 10^3 (1K) | 10^6 (1M) | 10^9 (1G) |
3^n | 5.9×10^4 (59K) | 3.5×10^9 (3.5G) | 2.1×10^14 |
n! | 3.6×10^6 (3.6M) | 2.4×10^18 | 2.7×10^32 |
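These entries can be reproduced exactly; a quick Python check (factorials via `math.factorial`):

```python
import math

# 2^n reaches the first table's n values at tiny inputs: 2^10 ≈ 1K, 2^30 ≈ 1G.
assert 2**10 == 1024
assert 2**30 == 1073741824

# 3^n grows faster still: 3^10 = 59049 ≈ 5.9×10^4.
assert 3**10 == 59049

# n! overtakes any fixed exponential: 10! = 3628800 ≈ 3.6×10^6,
# and 20! is already ≈ 2.4×10^18.
assert math.factorial(10) == 3628800
assert math.factorial(20) == 2432902008176640000
```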
Some simple algorithms can be analyzed for run time by simply adding up the amount of time spent in each line of code. This produces one term per line, of the form
run-time-for-a-single-execution × number-of-executions
This is almost always easier with "Big O" notation, which can absorb unknown constants.
code | time |
---|---|
for j = 1 to A.length - 1 | O(n) |
key = A[j] | O(n) |
i = j | O(n) |
while i > 0 && A[i - 1] > key | Σ t_j × O(1) |
A[i] = A[i - 1] | Σ t_j × O(1) |
i = i - 1 | Σ t_j × O(1) |
A[i] = key | O(n) |
In this analysis t_j is the number of steps down from index j that are needed to find the proper place for key. In the worst case t_j = j. On average it is about half that, but still O(j). The sum of the t_j from j = 1 to j = n - 1 is then O(n^2).
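The pseudocode above is insertion sort. A runnable Python version is sketched below; the shift counter is added purely to illustrate the sum of the t_j and is not part of the algorithm:

```python
def insertion_sort(A):
    """Sort A in place; return the total number of element shifts (sum of t_j)."""
    shifts = 0
    for j in range(1, len(A)):           # for j = 1 to A.length - 1
        key = A[j]
        i = j
        while i > 0 and A[i - 1] > key:  # walk key down to its proper place
            A[i] = A[i - 1]              # each shift is one step of t_j
            i -= 1
            shifts += 1
        A[i] = key
    return shifts

# Worst case is reverse-sorted input: t_j = j, so the total is
# 1 + 2 + ... + (n - 1) = n(n - 1)/2, which is O(n^2).
A = list(range(9, -1, -1))               # [9, 8, ..., 0], n = 10
total = insertion_sort(A)
assert A == list(range(10))
assert total == 9 * 10 // 2              # 45 shifts
```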