Complexity of Algorithms

Efficiency is one of the three major desirable attributes that an algorithm should possess. [What are the other two?] It is also something that we must be able to measure or estimate if we are to discuss it with those who will examine or use our algorithms. Determining efficiency involves counting and analysis. Our focus here will be counting - determining exactly how fast algorithms are.

Let us begin by examining the two program fragments in figure 1.

Figure 1 - Counting Programs

What is the value of x at the end of each of them? In figure 1a we increment x by one n times in the first repeat loop and then do it again in the second loop. This sets x to n+n or 2n.

The code in figure 1b, on the other hand, contains a nested loop. In this program fragment, x is incremented n times in the inside loop as before. In other words, the inner loop adds n to the value of x. At the same time, the outside loop forces the inside loop to be executed n times. This means that we add n to x exactly n times. So, we end up with a value of n*n or n^2 for x.
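
A Python sketch of two fragments along these lines (using for loops in place of the 'repeat n times' construct, and assuming x starts at 0) might look like this:

    def fragment_1a(n):
        x = 0
        for _ in range(n):          # first repeat loop
            x = x + 1
        for _ in range(n):          # second repeat loop
            x = x + 1
        return x                    # x ends up as n + n = 2n

    def fragment_1b(n):
        x = 0
        for _ in range(n):          # outer loop: runs the inner loop n times
            for _ in range(n):      # inner loop: adds n to x
                x = x + 1
        return x                    # x ends up as n*n = n^2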

[This little example tells us something about complexity and program structure. It seems that addition takes place with unnested loops and multiplication can be done with nesting to depth two. What can be done with three nested loops?]

This example also illustrates an important rule concerning task dependence, of which loop nesting is a good example.

If tasks are carried out independently, we add the results together. But, if tasks are intertwined (e.g. in loops that depend upon each other), the results are multiplied.

Now, let us examine the complexity of these program fragments. Exactly how much time does each require for execution? To do this in actual physical clock time we must know exactly how the statements are translated into machine language and how many nanoseconds each machine instruction takes. Since this depends upon particular machines or systems and varies from machine to machine, we shall adopt a less stringent policy and just count the primitive operations that are apparent from the program. In the code of figure 1, additions, assignments, and compares appear and thus should be counted. Also, in order to execute a ‘repeat n times’ loop like those above we need a counter to keep track of the number of times the loop has been executed so far. We go through the following steps every time the loop is executed:

    1. add 1 to x

    2. store the value of x

    3. increment the loop counter

    4. store the value of the loop counter

    5. compare the loop counter to n

Thus, if we go through the loop n times we have executed 5n steps. Closer examination of the program fragment in figure 1a reveals that when we execute both loops and also initialize the variable x and the two loop counters, we have gone through a total of

10n + 3 steps.

Looking at the nested example in figure 1b, we note that the following sequence of events takes place during one execution of the outer loop.

    1. do the inner loop

    2. increment the loop counter

    3. store the value of the loop counter

    4. compare the loop counter to n

Since the inner loop requires 5n + 1 steps each time it is executed, going around the outer loop one time requires 5n + 4 steps. Doing this n times and initializing both the outer loop counter and x gives us a total of

5n^2 + 4n + 2 steps.
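
As a quick sanity check on this arithmetic, the bookkeeping above can be tallied directly. The following Python sketch (the function names and the five-steps-per-pass accounting simply mirror the counting rules described) reproduces both totals:

    def steps_1a(n):
        steps = 1                   # initialize x
        for _ in range(2):          # the two independent repeat loops
            steps += 1              # initialize that loop's counter
            steps += 5 * n          # 5 bookkeeping steps per pass
        return steps                # 10n + 3

    def steps_1b(n):
        steps = 2                   # initialize x and the outer loop counter
        for _ in range(n):          # each pass of the outer loop...
            steps += 5 * n + 1      # ...runs the inner loop: 5n + 1 steps
            steps += 3              # plus 3 steps of outer-loop bookkeeping
        return steps                # 5n^2 + 4n + 2

    print(steps_1a(10), steps_1b(10))   # prints 103 and 542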

It appears that figuring out the complexity of a computation is itself a complex process. Let us try to make it easier. To do so we shall need some assistance from mathematics. Consider the following definition about function growth.

Definition. The function g(n) grows at least as fast as the function f(n) if and only if there is an integer k such that f(n) ≤ k*g(n) for all but a finite number of n.

That is quite a mouthful. Another way to say the same thing is that if we have two functions f and g, and multiplying g by some constant makes its values larger than those of f after a certain point, then g grows at least as fast as f. The phrase 'grows at least as fast as' could also be read as 'grows at the same rate or faster'. So, we are talking about growth rate here, not actual values.

Examples probably provide the best intuitive explanation for the definition of growth rate. Consider f(n) = n and g(n) = n^2. We all would agree that g grows a lot faster (at a much quicker rate) than f since it is the larger of the two functions for all values of n greater than 1.

For a more interesting example, consider the functions g(n) = n/3 and f(n) = n+3. The function n+3 is of course larger than n/3 at every value of n greater than zero. So, we may immediately say that n+3 grows at least as fast as n/3. This was easy. But, if we multiply n/3 by the constant 6 we have built a function (namely 2n) which eventually becomes larger than n+3. In fact, it does so after n = 3. This is pictured in figure 2.

Figure 2 - Functions Growing at the Same Rate

Thus, by the definition of growth rate, we may assert that n/3 grows at least as fast as n+3. This means that (by our definition) they grow at the same rate!
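
Since the definition is easy to check mechanically, here is a small Python check of this example (the constant 6 and the test range are just for illustration):

    f = lambda n: n + 3
    g = lambda n: n / 3
    # 6*(n/3) = 2n, and 2n >= n + 3 for every n >= 3
    print(all(f(n) <= 6 * g(n) for n in range(3, 10000)))   # prints True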

[A physical analogue might be acceleration. Suppose you are at the top of a tall tower and drop a stone over the edge. When it is halfway to the ground, you release a second stone. Until it hits the ground, the first stone travels at a higher velocity than the second stone. But, we say that the velocity of both stones is increasing or growing at the same rate.]

An intuitive remark about function growth is that functions which grow as fast as each other have graphs of the same shape. For example, the functions pictured in figure 2 were all straight lines and they all grow at the same rate. We call them linear functions because they grow in a linear manner.

Speaking of shapes, examine figure 3. The functions n, n^2, and n^3 appear there. They obviously do not have the same shapes. Intuitively we would immediately guess that they grow at different rates. Examination of their values reveals that n^3 grows at least as fast as n^2, which in turn grows at least as fast as n according to the growth definition. Thus it seems that cubic functions grow at least as fast as quadratic functions and quadratic functions grow at least as fast as linear functions. This is exactly what we have always believed.

Figure 3 - Functions of Different Orders

But, the reverse relationships are not true. Close examination of the functions reveals that n^3 grows faster than n^2 since:

for every integer k, n^3 > k*n^2

for every value of n greater than k. That is, no matter what constant we multiply n^2 by, the result is larger than n^3 for at most a few small values of n. At some point n^3 becomes larger and remains so from there on. Thus n^2 does not grow as fast as n^3. The same is true for n^2 and n.

To describe this phenomenon, a bit of notation is in order. This has traditionally been called Big-Oh notation in mathematics and theoretical computer science.

Definition. The function f(n) is of order g(n) [written f(n) = O(g(n))] if and only if g(n) grows at least as fast as f(n).

Orders are really classes defined by growth rate. All functions that grow in a linear manner such as:

n/3, n, n+17, 3n/2, 5n, and even 175n+42

are of the same order and thus in the same order class. And as we noted earlier, if two functions are of the same order, then they grow exactly as fast as each other and have the same shape when plotted as graphs. Thus O(n/3) = O(n+3). Usually, however, we strip the functions of any constants when we mention orders. Thus all linear functions are referred to as O(n) rather than O(n+17) or O(175n).

Order classes are also collections of all functions that grow at rates no faster than the class name. Thus n is O(n^2) since n^2 grows at least as fast as n, but n^2 is not O(n) since n^2 grows faster than n.

We also think of functions of the same order as being the same size, up to a constant factor. All linear functions are of the same order and the powers of n form the order hierarchy:

O(n) < O(n^2) < O(n^3) < ...

Now let us return to the programming examples of figure 1. Recall that when we carefully counted the steps executed by each code fragment in figure 1, we computed 10n + 3 steps for figure 1a and 5n^2 + 4n + 2 steps for figure 1b with the nested loops. Since, for the practical reasons outlined above, our counting merely approximated complexity, we might decide that order notation nicely depicts the notion of comparative complexity. Thus our two fragments have complexities O(n) and O(n^2), the same as the values of x that were computed.

So, if we feel that order notation is good enough for complexity representation, all we actually need to do in the algorithms of figure 1 is count the number of times the assignment statement inside the loops is executed. This generalizes to determining a basic computational step and figuring out exactly how many times it is executed. Since code without loops has constant complexity, or O(1) in our new notation, we merely have to contend with loops. Our original guideline told us how to do this. We figure out the complexity of each cluster of nested loops and then add these together. In figure 1a this came out to n + n = 2n, while in figure 1b it was n^2. To turn this into order notation we remove the constants and get O(n) for the code in figure 1a and O(n^2) for the nested loop code of figure 1b.

This brings up another issue: modules. A general rule is:

If an algorithm consists of several independent modules, then the most complex module determines the complexity of the entire algorithm.

Suppose we combined the code in the two parts of figure 1 into a single algorithm. Then there would be three independent loops with the last a nested loop, and the value of x would be n^2 + 2n after execution. The simple loops have complexity O(n), and the nested loop has complexity O(n^2). Adding these together gives us O(n^2 + 2n) for the entire algorithm, which is just O(n^2) since the squared term dominates the complexity expression.

In another example, suppose we had independent modules of O(n^2), O(n^3), and O(n), as well as some straight-line code. The order of the entire algorithm is then:

O(n^2) + O(n^3) + O(n) + O(1) = O(n^3).

Let us try something with lots of nested loops. Suppose we wanted to find out exactly how many truth assignments satisfy a Boolean function f(x_1, ... , x_n). [A function is satisfied when it is true.] Consider the program with n nested loops in figure 4 which checks all possible truth assignments to the n Boolean variables for the function f( ) and computes the number of times it is true.

Figure 4 - Satisfiability Verification
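
A Python sketch of the same idea (the function name count_satisfying is an illustrative choice, and itertools.product stands in for writing the n nested loops out explicitly) is:

    from itertools import product

    def count_satisfying(f, n):
        count = 0
        # product([True, False], repeat=n) plays the role of the n nested
        # loops: each loop runs twice, once for true and once for false.
        for assignment in product([True, False], repeat=n):
            if f(*assignment):          # evaluate the Boolean function
                count = count + 1
        return count

    print(count_satisfying(lambda p, q: p or q, 2))   # prints 3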

To determine the complexity we must find out exactly how many times the function is evaluated. Recall that since we are dealing with nested loops, we multiply together the number of times each of the loops is executed. Since each loop is executed twice, this is:

2(2(2( ... (2(2)) ... ))) = 2*2*2* ... *2*2 = 2^n.

Thus the complexity of the satisfiability algorithm is O(2^n).

[It is interesting to note that this is exactly the same as the number of rows in a truth table for a Boolean function of n variables. This tells us that the above algorithm is the fastest that exists if we wish to examine every truth assignment in order to determine satisfiability. Thus complexity often depends upon the size of the solution space to a problem.]

Now let us turn to the complexity of programs for which we do not know exactly how many times a loop will be executed. Our first topic will be searching lists. Figure 5 contains a sequential search for x in an array of n elements named a.

Figure 5 - Sequential Search
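
A Python sketch of such a search (assuming the list is indexed from 0 and a failed search is signalled by returning -1) might read:

    def sequential_search(a, x):
        n = len(a)
        i = 0                           # one assignment to begin
        while i < n and a[i] != x:      # two comparisons per pass
            i = i + 1                   # one addition and one assignment
        return i if i < n else -1       # a final comparison and assignment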

Each time we execute the while loop we do two comparisons, one addition, and one assignment. At the beginning we do an assignment, and at the end we do a comparison and an assignment. So, if the loop is executed k times this adds up to 4k + 3 steps or O(k).

But, we now must determine how many times the loop is executed so that we will have a value for k. This is not as easy with a while loop since it could be executed anywhere from no times up to n times. The rule of thumb is always to be pessimistic and consider the worst case. Here this happens when we do not find x on the list and execute the loop n times. Thus the complexity of sequential searching is O(n) or linear.

[By the way, we should note that this is the best we can do. The rationale for this is since we do not know anything about the items on the list, we must examine every item in order to be sure we have found the target element.]

If we know that a list is sorted, then we may use another popular searching technique, the binary search. The strategy for this search is to look in the middle of the list and discard the half of the list that cannot possibly contain the target element. In figure 6 the algorithm for a binary search on the list segment a_b, ... , a_e is presented. We assume that the list is sorted so that small elements appear at the beginning of the list.

Figure 6 - Binary Search
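
A Python sketch of this strategy, assuming the segment is described by a lower index b and an upper index e as in the text, might be:

    def binary_search(a, x):
        b, e = 0, len(a) - 1            # the segment a[b..e] still to be searched
        while b <= e:
            m = (b + e) // 2            # look in the middle of the segment
            if a[m] == x:
                return m                # found it
            elif a[m] < x:
                b = m + 1               # discard the lower half
            else:
                e = m - 1               # discard the upper half
        return -1                       # x is not on the list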

We once more must consider only the worst case - that in which x was not on the list. It is not quite as straightforward this time to determine exactly how many times the loop is executed. We do have a hint though. Each time the loop is executed, the segment of the list being searched (namely a_b, ..., a_e) is halved in size. Thus if we keep track of the segment sizes, we shall know when the search is over. Assuming that we begin with a length n segment to search, and we do not find x, the number of elements in the range from a_b to a_e each time the loop is executed is shown by the following sequence:

n, n/2, n/4, ... , 2, 1

If we can figure out how long the sequence is, then we shall know the complexity of the binary search algorithm. There is a small trick that will help us determine this. Since the sequence involves dividing by two, if n were a power of two, then we might have an easier time of things. So, select k such that 2^k is the smallest power of two larger than n and substitute it in place of n in the above sequence. Now we have the sequence:

2^k, 2^(k-1), 2^(k-2), ... , 2^1, 2^0

and it is easy to see that the sequence is of length k+1. This means that the loop could possibly be executed k times. So, the complexity is O(k). But we now need to know exactly what k is equal to. Since n is between 2^(k-1) and 2^k, k is just slightly larger than log_2 n. Since logs to different bases differ only by constant factors, we write log_2 n without the base, as log n. Thus the complexity of binary search is O(log n), and this is quicker than sequential search since it does not grow as fast as O(n).

Let us move from searching to sorting. First we shall present an algorithm for a function that finds the maximum on a section of a list and use it in a sorting algorithm. It appears as figure 7.

Figure 7 - Finding the Maximum
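
A Python sketch of such a function (assuming it is handed the list a and the bounds b and e of the section to examine, and returns the position of the maximum) could be:

    def max_index(a, b, e):
        m = b                           # position of the largest element so far
        for i in range(b + 1, e + 1):   # examine a[b+1] through a[e]
            if a[i] > a[m]:
                m = i
        return m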

After all of our practice in determining complexities, we know that this algorithm is O(e-b) or, if n = e-b, O(n). So, we have found another algorithm with linear complexity.

In selection sorting, we use finding the maximum as our major tool in sorting a list of numbers. All we do is place the maximum number at the top of the list, the second largest in second place, and so forth. In this algorithm we use the procedure swap(a, i, j) which swaps the ith and jth elements on the list a. Selection sort appears as figure 8.

Figure 8 - Selection Sort
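
A Python sketch of this sort, reusing the max_index function sketched after figure 7 and spelling out the three steps of a swap, might look like:

    def swap(a, i, j):
        t = a[i]                        # the three steps of a swap
        a[i] = a[j]
        a[j] = t

    def selection_sort(a):
        n = len(a)
        for i in range(n - 1):
            # find the largest remaining element and put it in position i
            m = max_index(a, i, n - 1)
            swap(a, i, m)

This arranges the list with the largest element first, as described above.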

Since the swap takes three steps each time the loop is executed, it contributes O(n) steps to the complexity of the entire sort. Most of the work, however, is done by the max function. The first time through the loop it looks at n elements of the list and the second time through (n-1) are examined. This eventually adds up to:

n + (n-1) + (n-2) + ... + 3 + 2

Finding the sum is not difficult if we pair the first and last terms, the second and next-to-last terms, and so forth. Each pair adds up to n+2 and there are about half as many pairs as terms, so it ends up looking like this:

n + (n-1) + ... + 3 + 2 = (n+2)(n-1)/2 = (n^2 + n - 2)/2

Therefore, the selection sort has complexity O(n^2).

Other popular sorting methods such as bubble sort and insertion sort that depend on swapping adjacent elements also have O(n2) complexity.

That gives us a fairly good idea about how to compute complexity for algorithms that depend upon iteration for their power. But in many parts of computer science, recursion is used to provide repetition. For example, figure 9 presents the well-known recursive algorithm for binary search.

Figure 9 - Recursive Binary Search
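
A Python sketch of the recursive version, again assuming a non-empty segment given by its bounds b and e and that a failed search returns -1, might be:

    def recursive_search(a, x, b, e):
        if b == e:                      # only one element left to check
            return b if a[b] == x else -1
        m = (b + e) // 2                # the middle of the segment
        if x <= a[m]:
            return recursive_search(a, x, b, m)       # search the lower half
        return recursive_search(a, x, m + 1, e)       # search the upper half

A whole list would be searched with recursive_search(a, x, 0, len(a) - 1).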

Since this algorithm seems to go through exactly the same number of steps that the nonrecursive one does, we would guess that it is an O(log n) algorithm. But how do we actually determine the complexity of something like this? We first note that searching a list of n elements is merely:

    1. checking b = e

    2. computing m = (b+e)/2

    3. searching a sublist of n/2 elements.

Steps 1 and 2 are constant or O(1), so they just add one step to the algorithm. Step 3, however, adds some complexity. In fact, it adds the cost of searching a list half as long. We can write this as the equation:

S(n) = 1 + S(n/2)

and solve for S(n). This is done by expanding S(n/2). And, since searching a list of n/2 elements involves similar steps to searching a list of n elements, we state that:

S(n/2) = 1 + S(n/4)

and substituting this in the original equation, we get:

S(n) = 1 + 1 + S(n/4)

Recalling the little trick that we used when faced with a sequence where the elements are halved each time, we once more take k to be the smallest integer such that 2^k is larger than n and note that:

S(n) = 1 + 1 + S(n/4) = 1 + 1 + 1 + S(n/8) = ... = k + S(n/2^k) = k + S(1)

Thus, since k is roughly log n, recursive binary search is an O(log n) complexity search, just as we expected it to be.

We should finish with something even more difficult. Why not our favorite sorting algorithm: mergesort? Let us assume that we have a merging routine named merge(a, b, m, e) which merges the adjacent, sorted segments a_b, ..., a_m and a_(m+1), ..., a_e of a list together to form a sorted list. We may now put together a sort for an entire list a_b, ..., a_e in the recursive manner of figure 10.

Figure 10 - Merge Sort
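
A Python sketch of this recursive sort, with one possible O(n) implementation of the assumed merge(a, b, m, e) routine included so that the example is self-contained, might be:

    def merge(a, b, m, e):
        # merge the sorted segments a[b..m] and a[m+1..e] into one sorted run
        merged, i, j = [], b, m + 1
        while i <= m and j <= e:
            if a[i] <= a[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged.extend(a[i:m + 1])       # copy over whatever is left
        merged.extend(a[j:e + 1])
        a[b:e + 1] = merged             # put the merged result back in place

    def merge_sort(a, b, e):
        if b < e:
            m = (b + e) // 2
            merge_sort(a, b, m)         # sort the first half
            merge_sort(a, m + 1, e)     # sort the second half
            merge(a, b, m, e)           # merge the two sorted halves

An entire list would be sorted with merge_sort(a, 0, len(a) - 1).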

Sorting in this manner requires two sorts and a merge. To sort a list of size n one needs to sort two lists of size n/2 and merge them. For this we write the recurrence relation:

S(n) = M(n) + 2S(n/2)

where S(n) denotes the cost of sorting a list of n elements and M(n) denotes the cost of merging two sorted lists of n/2 elements each. Our standard algorithms for merging take O(n) steps, so we can substitute n for M(n) and we have:

S(n) = n + 2S(n/2)

Expanding S(n/2) gives us:

S(n) = n + 2(n/2 + 2S(n/4)) = 2n + 4S(n/4) = 3n + 8S(n/8) = ...

and taking k such that 2^k is just bigger than n as before, we find that:

S(n) = kn + 2^k S(1)

Since k is about log n and 2^k is about n, this is roughly n log n + n,

which means that mergesort is O(n log n).

In closing, here are several rules of thumb to be used in determining complexity of algorithms.

  1. Always assume the worst possible case.

  2. Add complexities for independent modules. Multiply complexities for nested loops.

  3. When recursion arises, construct and solve an equation that depicts the flow of computation.

  4. If you have a sequence like n, n/2, n/4, etc. to deal with, select a k so that 2^(k-1) ≤ n < 2^k and substitute 2^k for n in the formula.