We introduced compound functions in section 1.1.4 as a mechanism for abstracting patterns of numerical operations so as to make them independent of the particular numbers involved. With higher-order functions, such as the integral function of section 1.3.1, we began to see a more powerful kind of abstraction: functions used to express general methods of computation, independent of the particular functions involved. In this section we discuss two more elaborate examples—general methods for finding zeros and fixed points of functions—and show how these methods can be expressed directly as functions.
The half-interval method is a simple but powerful technique for
finding roots of an equation $f(x)=0$, where
$f$ is a continuous function. The idea is that,
if we are given points $a$ and
$b$ such that
$f(a) < 0 < f(b)$, then
$f$ must have at least one zero between
$a$ and $b$. To locate
a zero, let $x$ be the average of
$a$ and $b$ and
compute $f(x)$. If
$f(x) > 0$, then
$f$ must have a zero between
$a$ and $x$. If
$f(x) < 0$, then
$f$ must have a zero between
$x$ and $b$.
Continuing in this way, we can identify smaller and smaller intervals on
which $f$ must have a zero. When we reach a
point where the interval is small enough, the process stops. Since the
interval of uncertainty is reduced by half at each step of the process, the
number of steps required grows as
$\Theta(\log( L/T))$, where
$L$ is the length of the original interval and
$T$ is the error tolerance (that is, the size of
the interval we will consider small enough). Here is a
function that implements this strategy:[1]
function search(f, neg_point, pos_point) {
    const midpoint = average(neg_point, pos_point);
    if (close_enough(neg_point, pos_point)) {
        return midpoint;
    } else {
        const test_value = f(midpoint);
        if (positive(test_value)) {
            return search(f, neg_point, midpoint);
        } else if (negative(test_value)) {
            return search(f, midpoint, pos_point);
        } else {
            return midpoint;
        }
    }
}
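The function search relies on the helpers average, positive, and negative from earlier sections, as well as on close_enough, shown below. Since those originals live in the sicp support library, the following plain-JavaScript definitions are an assumption, but they capture the conventional meanings:

```javascript
// Assumed one-line definitions of the helpers used by search;
// the names follow the conventions of earlier sections.
function average(x, y) {
    return (x + y) / 2;
}
function positive(x) {
    return x > 0;
}
function negative(x) {
    return x < 0;
}
```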
We assume that we are initially given the function
$f$ together with points at which its values are
negative and positive. We first compute the midpoint of the two given
points. Next we check to see if the given interval is small enough, and if
so we simply return the midpoint as our answer. Otherwise, we compute as a
test value the value of $f$ at the midpoint. If
the test value is positive, then we continue the process with a new interval
running from the original negative point to the midpoint. If the test value
is negative, we continue with the interval from the midpoint to the positive
point. Finally, there is the possibility that the test value is 0, in
which case the midpoint is itself the root we are searching for.
To test whether the endpoints are close enough
we can use a
function
similar to the one used in section 1.1.7 for
computing square roots:[2]
function close_enough(x, y) {
    return abs(x - y) < 0.001;
}
The function search is awkward to use directly, because we can accidentally give it points at which $f$'s values do not have the required sign, in which case we get a wrong answer. Instead we will use search via the following function, which checks to see which of the endpoints has a negative function value and which has a positive value, and calls the search function accordingly. If the function has the same sign on the two given points, the half-interval method cannot be used, in which case the function signals an error.[3]
function half_interval_method(f, a, b) {
    const a_value = f(a);
    const b_value = f(b);
    return negative(a_value) && positive(b_value)
           ? search(f, a, b)
           : negative(b_value) && positive(a_value)
           ? search(f, b, a)
           : error("values are not of opposite sign");
}
For example, we can use the half-interval method to approximate $\pi$ as the root between 2 and 4 of $\sin x = 0$:

half_interval_method(math_sin, 2.0, 4.0); // approximately 3.14
Here is another example, using the half-interval method to search for a root of the equation $x^3 - 2x - 3 = 0$ between 1 and 2:

half_interval_method(x => x * x * x - 2 * x - 3, 1.0, 2.0); // approximately 1.89
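Putting the pieces together, here is a self-contained plain-JavaScript version of the two examples above; Math.sin, Math.abs, and a thrown error stand in for the sicp primitives math_sin, abs, and error:

```javascript
const abs = Math.abs;
const average = (x, y) => (x + y) / 2;
const positive = x => x > 0;
const negative = x => x < 0;
const close_enough = (x, y) => abs(x - y) < 0.001;

function search(f, neg_point, pos_point) {
    const midpoint = average(neg_point, pos_point);
    if (close_enough(neg_point, pos_point)) {
        return midpoint;
    } else {
        const test_value = f(midpoint);
        return positive(test_value)
               ? search(f, neg_point, midpoint)
               : negative(test_value)
               ? search(f, midpoint, pos_point)
               : midpoint;
    }
}
function half_interval_method(f, a, b) {
    const a_value = f(a);
    const b_value = f(b);
    if (negative(a_value) && positive(b_value)) {
        return search(f, a, b);
    } else if (negative(b_value) && positive(a_value)) {
        return search(f, b, a);
    } else {
        throw new Error("values are not of opposite sign");
    }
}

const pi_approx = half_interval_method(Math.sin, 2.0, 4.0);
const cubic_root = half_interval_method(x => x * x * x - 2 * x - 3,
                                        1.0, 2.0);
```

The first call converges to approximately 3.141, the zero of sine between 2 and 4; the second to approximately 1.893.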
A number $x$ is called a fixed point of a function $f$ if $x$ satisfies the equation $f(x)=x$. For some functions $f$ we can locate a fixed point by beginning with an initial guess and applying $f$ repeatedly, \[ f(x), f(f(x)), f(f(f(x))), \ldots \] until the value does not change very much. Using this idea, we can devise a function fixed_point that takes as inputs a function and an initial guess and produces an approximation to a fixed point of the function. We apply the function repeatedly until we find two successive values whose difference is less than some prescribed tolerance:
const tolerance = 0.00001;
function fixed_point(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        const next = f(guess);
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}
For example, we can use this method to approximate the fixed point of the cosine function, starting with 1 as an initial approximation:[4]
fixed_point(math_cos, 1.0); // approximately 0.7391
Similarly, we can find a solution to the equation $y=\sin y + \cos y$:
fixed_point(y => math_sin(y) + math_cos(y), 1.0); // approximately 1.2587
The fixed-point process is reminiscent of the process we used for finding square roots in section 1.1.7. Both are based on the idea of repeatedly improving a guess until the result satisfies some criterion. In fact, we can readily formulate the square-root computation as a fixed-point search. Computing the square root of some number $x$ requires finding a $y$ such that $y^2 = x$. Putting this equation into the equivalent form $y = x/y$, we recognize that we are looking for a fixed point of the function[5] $y \mapsto x/y$, and we can therefore try to compute square roots as
function sqrt(x) {
    return fixed_point(y => x / y, 1.0);
}
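Two iterations of this search, written out in plain JavaScript, show that successive guesses simply trade places rather than converge:

```javascript
const x = 2;
const f = y => x / y;   // the function whose fixed point we seek
const y1 = 1.0;         // initial guess
const y2 = f(y1);       // next guess: x / y1 = 2
const y3 = f(y2);       // x / (x / y1) = y1 = 1 again
```

Since close_enough(y1, y2) never becomes true, fixed_point would recurse forever on these two guesses.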
Unfortunately, this fixed-point search does not converge. Consider an initial guess $y_1$. The next guess is $y_2 = x/y_1$, and the guess after that is $y_3 = x/y_2 = x/(x/y_1) = y_1$. The search thus repeats the two guesses $y_1$ and $y_2$ over and over, oscillating about the answer. One way to control such oscillations is to prevent the guesses from changing so much. Since the answer is always between our guess $y$ and $x/y$, we can make a new guess that is not as far from $y$ as $x/y$ by averaging $y$ with $x/y$, so that the next guess after $y$ is $\frac{1}{2}(y+x/y)$ instead of $x/y$. The process of making such a sequence of guesses is simply the process of looking for a fixed point of $y \mapsto \frac{1}{2}(y+x/y)$:
function sqrt(x) {
    return fixed_point(y => average(y, x / y), 1.0);
}
With this modification, the square-root function works. In fact, if we unravel the definitions, we can see that the sequence of approximations to the square root generated here is precisely the same as the one generated by our original square-root function of section 1.1.7. This approach of averaging successive approximations to a solution, a technique we call average damping, often aids the convergence of fixed-point searches.
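As a check, here is a self-contained plain-JavaScript version of the damped search, restating fixed_point and average from above, with Math.abs standing in for abs:

```javascript
const abs = Math.abs;
const average = (x, y) => (x + y) / 2;
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        const next = f(guess);
        return close_enough(guess, next) ? next : try_with(next);
    }
    return try_with(first_guess);
}
// square root as a fixed point of the average-damped y ↦ x / y
function sqrt(x) {
    return fixed_point(y => average(y, x / y), 1.0);
}
```

With the damping in place, sqrt(2) yields approximately 1.41421 and sqrt(9) approximately 3, as expected.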
// the golden ratio φ satisfies φ = 1 + 1/φ, so it is a fixed point
// of x ↦ 1 + 1/x; this call approximates φ ≈ 1.6180
fixed_point(x => 1 + (1 / x), 1.0);
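A self-contained check of the value this call converges to, assuming the fixed_point above restated in plain JavaScript:

```javascript
const abs = Math.abs;
const tolerance = 0.00001;

function fixed_point(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        const next = f(guess);
        return close_enough(guess, next) ? next : try_with(next);
    }
    return try_with(first_guess);
}
// converges to the golden ratio (1 + √5)/2
const phi = fixed_point(x => 1 + 1 / x, 1.0);
```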
// variant of fixed_point that displays each guess as it is produced
const tolerance = 0.00001;
function fixed_point(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        display(guess);
        const next = f(guess);
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}
// the same search with average damping built in: the next guess is
// the average of the current guess and f(guess)
function fixed_point_with_average_dampening(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        display(guess);
        const next = (guess + f(guess)) / 2;
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}
// example call (cont_frac is defined below): a k-term continued
// fraction with all terms 1, which for large enough k
// approximates 1/φ ≈ 0.6180
cont_frac(i => 1.0, i => 1.0, k);
// cont_frac as a recursive process
function cont_frac(n, d, k) {
    function fraction(i) {
        return i > k
               ? 0
               : n(i) / (d(i) + fraction(i + 1));
    }
    return fraction(1);
}
// cont_frac as an iterative process, accumulating inward
// from the kth term
function cont_frac(n, d, k) {
    function fraction(i, current) {
        return i === 0
               ? current
               : fraction(i - 1, n(i) / (d(i) + current));
    }
    return fraction(k, 0);
}
// Euler's continued fraction for e - 2: the d terms run
// 1, 2, 1, 1, 4, 1, 1, 6, ...; the result approximates e ≈ 2.71828
2 + cont_frac(i => 1,
              i => (i + 1) % 3 < 1 ? 2 * (i + 1) / 3 : 1,
              20);
// Lambert's continued fraction for the tangent function:
// the n terms are x, -x², -x², ... and the d terms are 1, 3, 5, ...
function tan_cf(x, k) {
    return cont_frac(i => i === 1 ? x : - x * x,
                     i => 2 * i - 1,
                     k);
}
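As a sanity check, Lambert's fraction converges rapidly to the tangent. A self-contained plain-JavaScript sketch, restating the iterative cont_frac and comparing against Math.tan:

```javascript
// iterative cont_frac, as above
function cont_frac(n, d, k) {
    function fraction(i, current) {
        return i === 0
               ? current
               : fraction(i - 1, n(i) / (d(i) + current));
    }
    return fraction(k, 0);
}
function tan_cf(x, k) {
    return cont_frac(i => i === 1 ? x : - x * x,
                     i => 2 * i - 1,
                     k);
}
const approx = tan_cf(1.0, 20);   // compare with Math.tan(1)
```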
[2] We have used 0.001 as a representative "small" number to indicate a tolerance for the acceptable error in a calculation. The appropriate tolerance for a real calculation depends upon the problem to be solved and the limitations of the computer and the algorithm. This is often a very subtle consideration, requiring help from a numerical analyst or some other kind of magician.
[5] The symbol $\mapsto$ (pronounced "maps to") is the mathematician's way of writing lambda expressions. $y \mapsto x/y$ means y => x / y, that is, the function whose value at $y$ is $x/y$.