Consider the problem of computing the exponential of a given number. We would like a function that takes as arguments a base $b$ and a nonnegative integer exponent $n$ and computes $b^n$. One way to do this is via the recursive definition
\begin{eqnarray*} b^{n} &=& b\cdot b^{n-1}\\ b^{0} &=& 1 \end{eqnarray*}
which translates readily into the function

function expt(b, n) {
    return n === 0
           ? 1
           : b * expt(b, n - 1);
}

This is a linear recursive process, which requires $\Theta(n)$ steps and $\Theta(n)$ space. Just as with factorial, we can readily formulate an equivalent linear iteration:

function expt(b, n) {
    return expt_iter(b, n, 1);
}
function expt_iter(b, counter, product) {
    return counter === 0
           ? product
           : expt_iter(b, counter - 1, b * product);
}

This version requires $\Theta(n)$ steps and $\Theta(1)$ space.

We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing $b^8$ as
\[ b\cdot(b\cdot(b\cdot(b\cdot(b\cdot(b\cdot(b\cdot b)))))) \]
we can compute it using three multiplications:
\begin{eqnarray*} b^{2} &=& b\cdot b\\ b^{4} &=& b^{2}\cdot b^{2}\\ b^{8} &=& b^{4}\cdot b^{4} \end{eqnarray*}

This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule
\[ \begin{array}{ll} b^{n} =(b^{n/2})^{2} & \mbox{~if $n$ is even}\\ b^{n} =b\cdot b^{n-1} & \mbox{~if $n$ is odd}\\ \end{array} \]
We can express this method as a function:

function fast_expt(b, n) {
    return n === 0
           ? 1
           : is_even(n)
           ? square(fast_expt(b, n / 2))
           : b * fast_expt(b, n - 1);
}

function is_even(n) {
    return n % 2 === 0;
}
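The function `square` used by `fast_expt` is defined earlier in the text. For a self-contained check, here is a minimal sketch, with `square` and `is_even` repeated so the fragment runs on its own; the assertions are added here and are not part of the text:

```javascript
// square and is_even are assumed from earlier in the text
function square(x) { return x * x; }
function is_even(n) { return n % 2 === 0; }

function fast_expt(b, n) {
    return n === 0
           ? 1
           : is_even(n)
           ? square(fast_expt(b, n / 2))
           : b * fast_expt(b, n - 1);
}

// spot checks of small powers
console.assert(fast_expt(2, 10) === 1024);
console.assert(fast_expt(3, 5) === 243);
console.assert(fast_expt(7, 0) === 1);
```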

The process evolved by `fast_expt` grows logarithmically with $n$ in both
space and number of steps. To see this, observe that computing $b^{2n}$
using `fast_expt` requires only one more multiplication than computing
$b^n$. The size of the exponent we can compute therefore doubles
(approximately) with every new multiplication we are allowed. Thus, the
number of multiplications required for an exponent of $n$ grows about as
fast as the logarithm of $n$ to the base 2. The process has
$\Theta(\log n)$ growth.[1]

The difference between $\Theta(\log n)$ growth and $\Theta(n)$ growth
becomes striking as $n$ becomes large. For example, `fast_expt` for
$n=1000$ requires only 14 multiplications.[2] It is also possible to use
the idea of successive squaring to devise an iterative algorithm that
computes exponentials with a logarithmic number of steps
(see exercise 1.16), although, as is often the case with iterative
algorithms, this cannot be written down as straightforwardly as the
recursive algorithm.[3]
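The logarithmic step count can be made concrete with a small counting sketch. The helper `mult_count` below is hypothetical, not from the text; note that with the base case `n === 0`, the final step performs one extra trivial multiplication by `fast_expt(b, 0) = 1`, so the count comes out one higher than the analysis in the footnote:

```javascript
function is_even(n) { return n % 2 === 0; }

// count the multiplications fast_expt performs for exponent n:
// one squaring per even step, one multiplication by b per odd step
function mult_count(n) {
    return n === 0
           ? 0
           : is_even(n)
           ? 1 + mult_count(n / 2)
           : 1 + mult_count(n - 1);
}

// 15 rather than 14: the last step multiplies b by fast_expt(b, 0) = 1
console.assert(mult_count(1000) === 15);
// likewise 4 rather than 3 for b^8
console.assert(mult_count(8) === 4);
```

With a base case of `n === 1` returning `b`, the trivial final multiplication disappears and the count for $n=1000$ is the 14 quoted above.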

One such iterative version (the subject of exercise 1.16) carries a running product $a$, keeping the quantity $ab^{n}$ unchanged from state to state:

function fast_expt_iter(a, b, n) {
    return n === 0
           ? a
           : is_even(n)
           ? fast_expt_iter(a, b * b, n / 2)
           : fast_expt_iter(a * b, b, n - 1);
}
function fast_expt(b, n) {
    return fast_expt_iter(1, b, n);
}

For comparison, multiplication can be defined in terms of repeated addition, a linear recursive process (cf. exercise 1.17):

function times(a, b) {
    return b === 0
           ? 0
           : a + times(a, b - 1);
}

Using `double` and `halve`, the successive-squaring idea carries over to multiplication as successive doubling, giving a logarithmic recursive process:

function double(x) { return x + x; }
function halve(x) { return x / 2; }

function fast_times(a, b) {
    return b === 1
           ? a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? double(fast_times(a, halve(b)))
           : a + fast_times(a, b - 1);
}

The corresponding iterative version (cf. exercise 1.18) accumulates a running total, analogous to `fast_expt_iter`:

function double(x) { return x + x; }
function halve(x) { return x / 2; }

function fast_times_iter(total, a, b) {
    return b === 1
           ? total + a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? fast_times_iter(total, double(a), halve(b))
           : fast_times_iter(total + a, a, b - 1);
}
function times(a, b) {
    return fast_times_iter(0, a, b);
}
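A quick check of both multiplication variants, repeated here so the sketch runs on its own (the assertions are added for illustration and are not part of the text):

```javascript
function is_even(n) { return n % 2 === 0; }
function double(x) { return x + x; }
function halve(x) { return x / 2; }

// recursive successive-doubling multiplication
function fast_times(a, b) {
    return b === 1
           ? a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? double(fast_times(a, halve(b)))
           : a + fast_times(a, b - 1);
}

// iterative version with an accumulated total
function fast_times_iter(total, a, b) {
    return b === 1
           ? total + a
           : a === 0 || b === 0
           ? 0
           : is_even(b)
           ? fast_times_iter(total, double(a), halve(b))
           : fast_times_iter(total + a, a, b - 1);
}

console.assert(fast_times(7, 8) === 56);
console.assert(fast_times_iter(0, 7, 9) === 63);
console.assert(fast_times(5, 0) === 0);
```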

The same technique applies to computing Fibonacci numbers in logarithmic time (cf. exercise 1.19); the template below leaves the transformed coefficients $p'$ and $q'$ to be filled in:

function fib(n) {
    return fib_iter(1, 0, 0, 1, n);
}
function fib_iter(a, b, p, q, count) {
    return count === 0
           ? b
           : is_even(count)
           ? fib_iter(a,
                      b,
                      ??,  // compute p'
                      ??,  // compute q'
                      count / 2)
           : fib_iter(b * q + a * q + a * p,
                      b * p + a * q,
                      p,
                      q,
                      count - 1);
}
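The values of $p'$ and $q'$ come from composing the transformation with itself. Writing it as $a \leftarrow bq+aq+ap$, $b \leftarrow bp+aq$ (as in the odd case of the code above), applying it twice to $(a,b)$ and expanding the new value of $b$ gives
\begin{eqnarray*}
b'' &=& b'p + a'q\\
    &=& (bp+aq)\,p + (bq+aq+ap)\,q\\
    &=& b\,(p^{2}+q^{2}) + a\,(2pq+q^{2})
\end{eqnarray*}
which has the same form with $p' = p^{2}+q^{2}$ and $q' = 2pq+q^{2}$; expanding $a''$ in the same way confirms the same $p'$ and $q'$.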

With $p' = p^{2}+q^{2}$ and $q' = 2pq+q^{2}$ filled in:

function fib(n) {
    return fib_iter(1, 0, 0, 1, n);
}
function fib_iter(a, b, p, q, count) {
    return count === 0
           ? b
           : is_even(count)
           ? fib_iter(a,
                      b,
                      p * p + q * q,
                      2 * p * q + q * q,
                      count / 2)
           : fib_iter(b * q + a * q + a * p,
                      b * p + a * q,
                      p,
                      q,
                      count - 1);
}
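A quick check of the completed function, repeated here so the sketch runs on its own (assertions added for illustration):

```javascript
function is_even(n) { return n % 2 === 0; }

function fib(n) {
    return fib_iter(1, 0, 0, 1, n);
}
function fib_iter(a, b, p, q, count) {
    return count === 0
           ? b
           : is_even(count)
           ? fib_iter(a, b,
                      p * p + q * q,      // p'
                      2 * p * q + q * q,  // q'
                      count / 2)
           : fib_iter(b * q + a * q + a * p,
                      b * p + a * q,
                      p, q,
                      count - 1);
}

console.assert(fib(1) === 1);
console.assert(fib(10) === 55);
```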

[1] More precisely, the number of multiplications
required is equal to 1 less than the log base 2 of $n$, plus the number
of ones in the binary representation of $n$. This total is always
less than twice the log base 2 of $n$. The arbitrary constants
$k_1$ and $k_2$ in
the definition of order notation imply that, for a logarithmic
process, the base to which logarithms are taken does not matter, so
all such processes are described as $\Theta(\log n)$.

[2] You may wonder
why anyone would care about raising numbers to the 1000th power.
See section 1.2.6.

[3] This iterative
algorithm is ancient. It appears in the *Chandah-sutra* by
Áchárya, written before 200 B.C.
See Knuth 1981, section
4.6.3, for a full discussion and analysis of this and other methods of
exponentiation.

[4] This
algorithm, which is sometimes known as the Russian peasant method of
multiplication, is ancient. Examples of its use are found in the Rhind
Papyrus, one of the two oldest mathematical documents in existence,
written about 1700 B.C. (and copied from an even older document) by an
Egyptian scribe named A'h-mose.

[5] This exercise was
suggested to us by Joe Stoy, based on an example in
Kaldewaij 1990.

1.2.4 Exponentiation