The logic employed is based upon the observation (as discussed in lecture) that the sum of the elements in the array segment a[low..high) is

- zero, if the segment is empty (i.e., low == high), and
- a[low] plus the sum of the elements in the (shorter) segment a[low+1..high), otherwise.
The class is "instrumented" to keep track of how many recursive calls were made and the maximum recursion depth reached. (The private methods entering() and leaving(), which update the instance variables callCntr, callDepth, and maxCallDepth, are used for this purpose.)
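To help picture how these pieces fit together, here is a small, self-contained sketch (hypothetical code, not the class you downloaded, which certainly differs in its details): a segSumA() that peels one element off the segment per call, together with entering()/leaving() bookkeeping of the kind described above.

```java
// Hypothetical, simplified stand-in for the distributed class, shown only to
// illustrate the structure.  The names callCntr, callDepth, maxCallDepth,
// entering(), leaving(), and segSumA() follow the description above.
public class SegSumSketch {
    private int callCntr = 0;        // total number of calls made
    private int callDepth = 0;       // current depth of recursion
    private int maxCallDepth = 0;    // deepest level ever reached

    private void entering() {
        callCntr++;
        callDepth++;
        if (callDepth > maxCallDepth) { maxCallDepth = callDepth; }
    }

    private void leaving() {
        callDepth--;
    }

    // Sum of a[low..high): zero if the segment is empty, otherwise
    // a[low] plus the sum of the remaining segment a[low+1..high).
    public int segSumA(int[] a, int low, int high) {
        entering();
        int result = (low == high) ? 0 : a[low] + segSumA(a, low + 1, high);
        leaving();
        return result;
    }

    public static void main(String[] args) {
        SegSumSketch s = new SegSumSketch();
        int[] a = { 3, 1, 4, 1, 5, 9 };
        System.out.println("sum = " + s.segSumA(a, 0, a.length));
        System.out.println("calls = " + s.callCntr + ", max depth = " + s.maxCallDepth);
    }
}
```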
The class has a main() method that invites the user to create an array of a specified length, following which it computes the sum of the array's elements and gives a "report" of the results. (This repeats until the user enters a negative number for the array length.) That report includes not only the computed sum, but also the "call count" (i.e., the number of times that segSumA() was called during the computation), and the maximum depth to which the recursion ever got.
T(n) | Category | How to Tell
---|---|---
O(log_c n) | logarithmic | T(cn) ≈ T(n) + 1
O(n) | linear | T(cn)/T(n) ≈ c
O(n·log_c n) | log-linear | T(cn)/T(n) ≈ c + c/log_c n
O(n²) | quadratic | T(2n)/T(n) ≈ 4
O(n³) | cubic | T(2n)/T(n) ≈ 8
O(cⁿ) | exponential | T(n+1) ≈ c·T(n)
Write down the results and try to discern the asymptotic growth rates of the "call count" and the "max depth of recursion". The table above should provide some aid, as it shows, for each of several categories of asymptotic growth, the (approximate) relationship to expect between T(n) and T(cn) (or T(2n), or T(n+1)). Discuss your findings with your lab instructor.
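As a concrete illustration of how to apply the table, the throwaway snippet below (with made-up measurements, not output from the lab program) computes T(2n)/T(n) over a doubling sequence of n. A ratio near 2 points at linear growth, near 4 at quadratic, near 8 at cubic; a difference of roughly +1 (rather than a fixed ratio) points at logarithmic growth.

```java
// Illustrative only: the "measurements" below are made up.  Replace them with
// the call counts (or max depths) you actually record while doubling n.
public class RatioCheck {
    public static void main(String[] args) {
        long[] n = { 1000, 2000, 4000, 8000, 16000 };
        long[] t = { 1001, 2001, 4001, 8001, 16001 };   // hypothetical T(n) values
        for (int i = 1; i < n.length; i++) {
            double ratio = (double) t[i] / t[i - 1];
            long diff = t[i] - t[i - 1];
            System.out.printf("n: %d -> %d   T(2n)/T(n) = %.2f   T(2n)-T(n) = %d%n",
                              n[i - 1], n[i], ratio, diff);
        }
    }
}
```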
Now notice that the class also has two methods called segSumB(), the second of which is stubbed. Its intended purpose is also to compute the sum of the elements in a given array segment, using recursion. However, the code that you supply for it should be based upon these observations: The sum of the elements in the array segment a[low..high) is

- zero, if the segment is empty (i.e., low == high),
- a[low], if the segment contains exactly one element (i.e., high == low+1), and
- the sum of the elements in a[low..mid) plus the sum of the elements in a[mid..high), where mid = (low+high)/2, otherwise.
Supply the missing code for segSumB() (making sure to retain the calls to entering() and leaving()) and then run the program as you did before, with arrays of several lengths, doubling the length each time. Try to figure out the asymptotic growth rates of the "call count" and "max recursion depth", and notice how they compare to those of segSumA(). Discuss your findings with your lab instructor.
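If you get stuck, the observations above might translate into code along roughly these lines (a sketch only, meant to sit inside the class alongside segSumA(); keep whatever signature the stub declares and its entering()/leaving() calls).

```java
// Hypothetical sketch: sums a[low..high) by splitting the segment at its
// midpoint and recursively summing the two halves.
public int segSumB(int[] a, int low, int high) {
    entering();
    int result;
    if (low == high) {                 // empty segment
        result = 0;
    } else if (high - low == 1) {      // one-element segment
        result = a[low];
    } else {
        int mid = (low + high) / 2;    // split (roughly) in half
        result = segSumB(a, low, mid) + segSumB(a, mid, high);
    }
    leaving();
    return result;
}
```

The midpoint split is the design choice that distinguishes it from segSumA(): each recursive call works on a segment only about half as long as its caller's.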
Download the Fibonacci class. It includes methods fibA() and fibB() that use recursion to compute Fibonacci numbers. The first one is complete, and its recursive code mimics the definition given above.
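In case the file is not in front of you, fibA()'s recursive structure presumably looks something like the following sketch (hypothetical code, without the class's instrumentation calls; it assumes the convention fib(0) = 0, fib(1) = 1, which may differ slightly from the definition your lab uses).

```java
public class FibSketch {
    // Direct recursion that mirrors the definition: each call with n >= 2
    // spawns two further recursive calls.
    public static long fibA(int n) {
        if (n <= 1) {
            return n;                        // fib(0) = 0, fib(1) = 1
        }
        return fibA(n - 1) + fibA(n - 2);    // fib(n) = fib(n-1) + fib(n-2)
    }

    public static void main(String[] args) {
        System.out.println(fibA(10));        // prints 55
    }
}
```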
As in the previous lab activity, the class includes a main() method that allows a user to run experiments and get feedback on the "call count" and "max recursion depth". Run the program and have it compute several Fibonacci numbers. (Ignore the output referring to fibB().) Each time, increase the argument by 1 (rather than doubling it). Notice the ratio between successive call counts. Using the table above, try to discern fibA()'s asymptotic running time. Discuss it with your lab instructor.
Now turn your attention to the fibBAux() method, which is the "auxiliary" method used by fibB() to do all the real work. It goes about that work in a very different fashion from fibA(), which by now you know is horribly inefficient.
The "trick" to be employed by fibBAux() —it is up to you to supply the missing code— is to pass back to its caller not just the n-th Fibonacci number but both the n-th and (n-1)-st. It does so by returning an array of length two, with fib(n−1) stored in location 0 and fib(n) stored in location 1.
Complete fibBAux(), run some tests, and compare the resulting call count values to those of fibA(). They should be MUCH, MUCH lower. Discuss it with your instructor.