SUNY Geneseo Department of Computer Science
CSci 240, Spring 2007
Prof. Doug Baldwin
Due Friday, March 30
In studying Θ(n log n)-time sorting algorithms, you have encountered algorithms that use a combination of iteration and recursion, and whose analysis is well-suited to the Master Theorem. This exercise gives you a chance to practice such analyses on your own.
For information on the Master Theorem, see the lectures from February 19 and February 21. For applications of the Master Theorem to actual algorithms, see the lectures from March 23 and March 28.
Use the Master Theorem to find asymptotic execution times for each of the following algorithms. Express your execution times as functions of n.
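For reference, one common statement of the Master Theorem (the version in the lectures may phrase the cases slightly differently) covers recurrences of the following form:

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1.
% Case 1: if f(n) = O(n^{\log_b a - \epsilon}) for some \epsilon > 0,
%         then T(n) = \Theta(n^{\log_b a}).
% Case 2: if f(n) = \Theta(n^{\log_b a}),
%         then T(n) = \Theta(n^{\log_b a} \log n).
% Case 3: if f(n) = \Omega(n^{\log_b a + \epsilon}) for some \epsilon > 0,
%         and a\,f(n/b) \le c\,f(n) for some constant c < 1 and all large n,
%         then T(n) = \Theta(f(n)).
```

For each algorithm below, identify a (the number of recursive calls), b (the factor by which the problem size shrinks), and f(n) (the non-recursive work), then determine which case applies.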
Algorithm 1. This is a made-up algorithm that provides some initial practice.
ferambulate( array[] a, int n ) {
    array[] temp1;
    array[] temp2;
    if ( n > 0 ) {
        for ( int i = 0; i < n/2; i++ ) {
            temp1[i] = a[i];
            temp2[i] = a[i+n/2];
        }
        ferambulate( temp1, n/2 );
        ferambulate( temp2, n/2 );
    }
}
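Once you have derived a recurrence, you can sanity-check it empirically. The following Python transcription (hypothetical instrumentation, not part of the assignment) counts the total number of loop iterations the algorithm executes, which is proportional to the non-recursive work:

```python
# Direct Python transcription of ferambulate, instrumented to count
# the total number of copying-loop iterations executed at all levels
# of the recursion (a rough proxy for the algorithm's total work).
def ferambulate(a, n):
    iterations = 0
    if n > 0:
        temp1 = [0] * (n // 2)
        temp2 = [0] * (n // 2)
        for i in range(n // 2):          # n/2 iterations of copying
            temp1[i] = a[i]
            temp2[i] = a[i + n // 2]
            iterations += 1
        iterations += ferambulate(temp1, n // 2)   # first recursive call
        iterations += ferambulate(temp2, n // 2)   # second recursive call
    return iterations

print(ferambulate(list(range(8)), 8))    # prints 12 for n = 8
```

Comparing counts for n = 8, 16, 32, ... against the closed form you derive is a quick way to catch a mistake in setting up the recurrence.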
Algorithm 2. This is another made-up algorithm that is a little more interesting to analyze.
int trapalyze( array[] a, int n ) {
    if ( n > 3 ) {
        array[] first;
        array[] second;
        int f = 0;
        int s = 0;
        for ( int i = 0; i < n; i++ ) {
            if ( i % 3 == 0 ) {
                first[f++] = a[i];
            }
            else if ( i % 3 == 2 ) {
                second[s++] = a[i];
            }
        }
        return trapalyze( first, n/3 ) + trapalyze( second, n/3 );
    }
    else {
        return 0;
    }
}
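As with Algorithm 1, you can check your recurrence against an instrumented run. This Python transcription (again, hypothetical instrumentation added for this purpose) returns both the algorithm's original result and the number of loop iterations executed across the whole recursion:

```python
# Python transcription of trapalyze, instrumented to return a pair:
# (the original return value, total loop iterations executed).
def trapalyze(a, n):
    if n > 3:
        first = []
        second = []
        count = 0
        for i in range(n):
            count += 1
            if i % 3 == 0:
                first.append(a[i])       # every third element, from index 0
            elif i % 3 == 2:
                second.append(a[i])      # every third element, from index 2
        r1, c1 = trapalyze(first, n // 3)
        r2, c2 = trapalyze(second, n // 3)
        return r1 + r2, count + c1 + c2
    else:
        return 0, 0

print(trapalyze(list(range(27)), 27))    # prints (0, 45) for n = 27
```

Note that the pseudocode makes two recursive calls, each on a problem of size n/3, even though the loop splits off roughly 2n/3 of the elements in total; be careful to read a and b off the structure of the calls, not off the amount of data copied.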
Algorithm 3. This is a real, although not practically used, algorithm for multiplying n-by-n matrices. Don't worry if you don't know how to multiply matrices yourself; you can analyze the algorithm without knowing how it works. Assume that add(p,q,m) is a function that adds m-by-m matrices p and q in Θ(m²) time.
array[][] multiply( array[][] a, array[][] b, int n ) {
    array[][] result;
    if ( n <= 1 ) {
        result[0][0] = a[0][0] * b[0][0];
    }
    else {
        array[][] c, d, e, f, g, h, i, j;
        for ( int x = 0; x < n/2; x++ ) {
            for ( int y = 0; y < n/2; y++ ) {
                c[x][y] = a[x][y];
                d[x][y] = a[x][y+n/2];
                e[x][y] = a[x+n/2][y];
                f[x][y] = a[x+n/2][y+n/2];
                g[x][y] = b[x][y];
                h[x][y] = b[x][y+n/2];
                i[x][y] = b[x+n/2][y];
                j[x][y] = b[x+n/2][y+n/2];
            }
        }
        array[][] q1 = add( multiply(c,g,n/2), multiply(d,i,n/2), n/2 );
        array[][] q2 = add( multiply(c,h,n/2), multiply(d,j,n/2), n/2 );
        array[][] q3 = add( multiply(e,g,n/2), multiply(f,i,n/2), n/2 );
        array[][] q4 = add( multiply(e,h,n/2), multiply(f,j,n/2), n/2 );
        for ( int x = 0; x < n/2; x++ ) {
            for ( int y = 0; y < n/2; y++ ) {
                result[x][y] = q1[x][y];
                result[x][y+n/2] = q2[x][y];
                result[x+n/2][y] = q3[x][y];
                result[x+n/2][y+n/2] = q4[x][y];
            }
        }
    }
    return result;
}
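If you want to convince yourself that block decomposition really does compute a matrix product, here is a runnable Python transcription using nested lists. It follows the standard block scheme (e.g., the bottom-left quadrant of the result is A21·B11 + A22·B21) and assumes, as the pseudocode implicitly does, that n is a power of 2. Quadrant names a11...b22 correspond to c...j in the pseudocode:

```python
def add(p, q, m):
    # adds two m-by-m matrices entrywise, in Theta(m^2) time
    return [[p[x][y] + q[x][y] for y in range(m)] for x in range(m)]

def multiply(a, b, n):
    # recursive block multiplication of n-by-n matrices (n a power of 2)
    if n <= 1:
        return [[a[0][0] * b[0][0]]]
    h = n // 2

    def quadrant(m, r, c):
        # copies the h-by-h quadrant of m whose top-left corner is (r, c)
        return [[m[x + r][y + c] for y in range(h)] for x in range(h)]

    a11, a12 = quadrant(a, 0, 0), quadrant(a, 0, h)   # c, d in the pseudocode
    a21, a22 = quadrant(a, h, 0), quadrant(a, h, h)   # e, f
    b11, b12 = quadrant(b, 0, 0), quadrant(b, 0, h)   # g, h
    b21, b22 = quadrant(b, h, 0), quadrant(b, h, h)   # i, j

    # eight recursive multiplications, combined with four additions
    q1 = add(multiply(a11, b11, h), multiply(a12, b21, h), h)
    q2 = add(multiply(a11, b12, h), multiply(a12, b22, h), h)
    q3 = add(multiply(a21, b11, h), multiply(a22, b21, h), h)
    q4 = add(multiply(a21, b12, h), multiply(a22, b22, h), h)

    # reassemble the four quadrants into the n-by-n result
    result = [[0] * n for _ in range(n)]
    for x in range(h):
        for y in range(h):
            result[x][y] = q1[x][y]
            result[x][y + h] = q2[x][y]
            result[x + h][y] = q3[x][y]
            result[x + h][y + h] = q4[x][y]
    return result

print(multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2))
# prints [[19, 22], [43, 50]]
```

For the analysis, count how many recursive multiplications the algorithm makes, on what size of subproblem, and how much non-recursive work (copying, additions) happens per call.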
There is a practical recursive algorithm for multiplying matrices known as Strassen's algorithm. Strassen's algorithm is similar to Algorithm 3 above in that it divides an n-by-n matrix into n/2-by-n/2 sub-matrices, does some matrix arithmetic on them, and then combines the results into an n-by-n product matrix. However, Strassen's algorithm does seven, rather than eight, recursive multiplications of n/2-by-n/2 sub-matrices, plus Θ(n²) additional work to split and recombine the products. What is the asymptotic execution time of Strassen's algorithm?
I will grade this exercise in a face-to-face meeting with you. Make an appointment to meet with me at some time convenient for you, as long as that time is before the end of the due date above. Meetings only need to be 15 minutes long. You can make an appointment by signing up on the copy of my schedule on the bulletin board outside my office. Please bring written derivations for each algorithm to the meeting.