Note: this section is a work in progress!

In chapter 1 we stressed that computer science deals with imperative (how to) knowledge, whereas mathematics deals with declarative (what is) knowledge. Indeed, programming languages require that the programmer express knowledge in a form that indicates the step-by-step methods for solving particular problems. On the other hand, high-level languages provide, as part of the language implementation, a substantial amount of methodological knowledge that frees the user from concern with numerous details of how a specified computation will progress.

Most programming languages, including JavaScript, are organized around
computing the values of mathematical functions. Expression-oriented
languages capitalize on the pun
that an expression that describes the value of a function may also be
interpreted as a means of computing that value. Because of this, most
programming languages are strongly biased toward unidirectional
computations (computations with well-defined inputs and outputs).
There are, however, radically different programming languages that
relax this bias. We saw one such example in
section 3.3.5, where the objects of computation were
arithmetic constraints. In a constraint system the direction and the
order of computation are not so well specified; in carrying out a
computation the system must therefore provide more detailed "how to"
knowledge than would be the case with an ordinary arithmetic
computation. This does not mean, however, that the user is released
altogether from the responsibility of providing imperative knowledge.
There are many constraint networks that implement the same set of
constraints, and the user must choose from the set of mathematically
equivalent networks a suitable network to specify a particular
computation.

The nondeterministic program evaluator of
section 4.3 also moves away from the
view that programming is about constructing algorithms for computing
unidirectional functions. In a nondeterministic language, expressions
can have more than one value, and, as a result, the computation is
dealing with relations rather than with
single-valued functions. Logic programming extends this idea by
combining a relational vision of programming with a powerful kind of
symbolic pattern matching called
*unification*.[1]
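Unification can be pictured as two-way pattern matching: both sides of the match may contain variables, and matching succeeds by finding a consistent set of bindings for them. The following is only a rough sketch of the idea, not the unifier developed later in this section; the term representation (strings for constants, `{ var: name }` objects for variables, arrays for compound terms) and the helpers `walk` and `unify` are invented here for illustration, and the occurs check is omitted for brevity.

```javascript
// Sketch of unification. Terms are strings (constants),
// objects of the form { var: name } (variables), or arrays (compound terms).
// Bindings are a Map from variable names to terms. Names are invented.

function is_var(t) {
    return typeof t === "object" && t !== null && !Array.isArray(t);
}

// Follow bindings until reaching a non-variable or an unbound variable.
function walk(t, bindings) {
    while (is_var(t) && bindings.has(t.var)) {
        t = bindings.get(t.var);
    }
    return t;
}

// Returns extended bindings on success, or null on failure.
function unify(a, b, bindings) {
    a = walk(a, bindings);
    b = walk(b, bindings);
    if (is_var(a) && is_var(b) && a.var === b.var) {
        return bindings;                       // same unbound variable
    } else if (is_var(a)) {
        return new Map(bindings).set(a.var, b);
    } else if (is_var(b)) {
        return new Map(bindings).set(b.var, a);
    } else if (Array.isArray(a) && Array.isArray(b) && a.length === b.length) {
        for (let i = 0; i < a.length; i = i + 1) {
            bindings = unify(a[i], b[i], bindings);
            if (bindings === null) return null;
        }
        return bindings;
    } else {
        return a === b ? bindings : null;      // two constants must be equal
    }
}
```

For example, unifying `["append", {var: "x"}, "b"]` with `["append", "a", {var: "y"}]` succeeds, binding `x` to `"a"` and `y` to `"b"`; note that variables may appear on either side, which is what distinguishes unification from one-way pattern matching.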

This approach, when it works, can be a very powerful way to write
programs. Part of the power comes from the fact that a single "what is"
fact can be used to solve a number of different problems that
would have different "how to"
components. As an example, consider
the
`append` operation, which takes two lists as arguments and
combines their elements to form a single list. In a procedural
language such as JavaScript, we could define `append` in terms of the
basic list constructor `pair`, as we did in
section 2.2.1:

```javascript
function append(x, y) {
    return is_null(x)
           ? y
           : pair(head(x), append(tail(x), y));
}
```
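To try `append` outside the book's own library, the list primitives of section 2.2.1 can be approximated with ordinary JavaScript values; the stand-ins below (pairs as two-element arrays) are a sketch for experimentation, not the book's actual representation.

```javascript
// Invented stand-ins for the list primitives of section 2.2.1:
// a pair is a two-element array, and the empty list is null.
const pair = (h, t) => [h, t];
const head = p => p[0];
const tail = p => p[1];
const is_null = x => x === null;
const list = (...xs) => xs.reduceRight((t, h) => pair(h, t), null);

function append(x, y) {
    return is_null(x)
           ? y
           : pair(head(x), append(tail(x), y));
}

// append(list("a", "b"), list("c", "d")) yields list("a", "b", "c", "d")
```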

This function can be regarded as a translation into JavaScript of the
following two rules, the first of which covers the case where the
first list is empty and the second of which handles the case of a
nonempty list, which is a `pair` of two parts:

- For any list `y`, the empty list and `y` `append` to form `y`.
- For any `u`, `v`, `y`, and `z`, `pair(u, v)` and `y` `append` to form
  `pair(u, z)` if `v` and `y` `append` to form `z`.[2]

Using the `append`
function, we can answer questions such as

Find the `append` of `list("a", "b")` and `list("c", "d")`.

But the same two rules are also sufficient for answering the following sorts of questions, which the function can't answer:

Find a list `y` that `append`s with `list("a", "b")` to produce
`list("a", "b", "c", "d")`.

Find all `x` and `y` that `append` to form `list("a", "b", "c", "d")`.
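To make the contrast concrete, here is one way a JavaScript program could answer the second kind of question by explicit search, enumerating every way of splitting the target list; the logic interpreter derives this behavior automatically from the two rules, whereas here the "how to" search must be written out by hand. The list stand-ins and the helper `all_splits` are invented for this sketch.

```javascript
// Invented stand-ins for the list primitives of section 2.2.1.
const pair = (h, t) => [h, t];
const head = p => p[0];
const tail = p => p[1];
const is_null = x => x === null;
const list = (...xs) => xs.reduceRight((t, h) => pair(h, t), null);

// Enumerate every [x, y] such that appending x and y gives the target list.
// A list of n elements has n + 1 such splits.
function all_splits(target) {
    if (is_null(target)) {
        return [[null, target]];
    }
    // Every split of the tail, with the head moved onto the x side...
    const rest = all_splits(tail(target)).map(
        ([x, y]) => [pair(head(target), x), y]);
    // ...plus the split where x is empty.
    return [[null, target], ...rest];
}
```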

In a logic programming language, the programmer writes an `append`
"function" by stating the two rules about `append` given above.
"How to" knowledge is provided automatically by the interpreter to
allow this single pair of rules to be used to answer all three types
of questions about `append`.[3]

Contemporary logic programming languages (including the one we
implement here) have substantial deficiencies, in that their general
"how to" methods can lead them into spurious infinite loops or other
undesirable behavior.
Logic programming is an active field of research in computer
science.[4]

Earlier in this chapter we explored the technology of implementing
interpreters and described the elements that are essential to an
interpreter for a JavaScript-like language (indeed, to an interpreter for
any conventional language). Now we will apply these ideas to discuss
an interpreter for a logic programming language. We call this
language the *query language*, because it is very useful for
retrieving information from data bases by formulating
*queries*,
or questions, expressed in the language. Even though the query
language is very different from JavaScript, we will find it convenient to
describe the language in terms of the same general framework we have
been using all along: as a collection of primitive elements, together
with means of combination that enable us to combine simple elements to
create more complex elements and means of abstraction that enable us
to regard complex elements as single conceptual units. An interpreter
for a logic programming language is considerably more complex than an
interpreter for a language like JavaScript. Nevertheless, we will see
that our query-language interpreter contains many of the same elements
found in the interpreter of section 4.1. In particular,
there will be an `eval` part that classifies expressions according
to type and an `apply` part that implements the language's
abstraction mechanism (functions
in the case of JavaScript, and *rules*
in the case of logic programming). Also, a central role is played in
the implementation by a frame data structure, which determines the
correspondence between symbols and their associated values. One
additional interesting aspect of our query-language implementation is
that we make substantial use of streams, which were introduced in
chapter 3.
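As a preview of the role frames will play, here is a minimal sketch of a frame as a chain of variable-to-value bindings; the names `extend` and `binding_in_frame` and the object-chain representation are assumptions for this illustration, and the representation developed later in the implementation differs.

```javascript
// Hypothetical sketch of a frame: an immutable chain of bindings,
// where each link records one variable, its value, and the rest of the frame.
function extend(variable, value, frame) {
    return { variable, value, enclosing: frame };
}

// Find the binding for a variable, or null if the variable is unbound.
function binding_in_frame(variable, frame) {
    return frame === null
           ? null
           : frame.variable === variable
           ? frame
           : binding_in_frame(variable, frame.enclosing);
}
```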

[1] Logic programming has grown out of a long
history of research in automatic theorem proving. Early
theorem-proving programs could accomplish very little, because they
exhaustively searched the space of possible proofs. The major
breakthrough that made such a search plausible was the discovery in
the early 1960s of the
*unification algorithm* and the
*resolution principle* (Robinson 1965).
Resolution was used, for
example, by
Green and Raphael (1968) (see also Green 1969) as the
basis for a deductive question-answering system. During most of this
period, researchers concentrated on algorithms that are guaranteed to
find a proof if one exists. Such algorithms were difficult to control
and to direct toward a proof.
Hewitt (1969) recognized the
possibility of merging the control structure of a programming language
with the operations of a logic-manipulation system, leading to the
work in automatic search mentioned in section 4.3.1
(see the footnote on backtracking). At the same time that this was being done,
Colmerauer, in Marseille, was developing rule-based systems for
manipulating natural language
(see Colmerauer et al. 1973). He
invented a programming language called
Prolog for representing those
rules.
Kowalski (Kowalski 1973; Kowalski 1979),
in Edinburgh, recognized that execution
of a Prolog program could be interpreted as proving theorems (using a
proof technique called linear
Horn-clause resolution). The merging of
the last two strands led to the logic-programming movement. Thus, in
assigning credit for the development of logic programming, the French
can point to Prolog's genesis at the
University of Marseille, while
the British can highlight the work at the
University of Edinburgh.
According to people at
MIT, logic programming was developed by these
groups in an attempt to figure out what Hewitt was talking about in
his brilliant but impenetrable Ph.D. thesis. For a history of logic
programming, see Robinson 1983.

[2] To see the correspondence between the rules and the function, let
`x` in the function (where `x` is nonempty) correspond to
`pair(u, v)` in the rule. Then `z` in the rule corresponds to the
`append` of `tail(x)` and `y`.

[3] This certainly does not
relieve the user of the entire problem of how to compute the answer.
There are many different mathematically equivalent sets of rules for
formulating the `append` relation, only some of which can be
turned into effective devices for computing in any direction. In
addition, sometimes "what is" information gives no clue
"how to" compute an answer. For example, consider the problem of
computing the $y$ such that $y^2 = x$.
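The square-root example illustrates the gap: the equation $y^2 = x$ is pure "what is" knowledge, while an actual computation needs a method such as Newton's iteration, the approach used for square roots in chapter 1. The sketch below follows that chapter-1 style, though the tolerance and helper names here are chosen for illustration.

```javascript
// "What is": y is the square root of x when y * y === x (and y >= 0).
// "How to": Newton's iteration repeatedly improves a guess y
// by averaging it with x / y.
function sqrt(x) {
    function improve(guess) {
        return (guess + x / guess) / 2;
    }
    function good_enough(guess) {
        return Math.abs(guess * guess - x) < 1e-9;   // illustrative tolerance
    }
    function sqrt_iter(guess) {
        return good_enough(guess) ? guess : sqrt_iter(improve(guess));
    }
    return sqrt_iter(1);
}
```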

[4] Interest in logic programming peaked
during the early 80s when the Japanese government began an ambitious
project aimed at building superfast computers optimized to run logic
programming languages. The speed of such computers was to be measured
in LIPS (Logical Inferences Per Second) rather than the usual FLOPS
(FLoating-point Operations Per Second). Although the project
succeeded in developing hardware and software as originally planned,
the international computer industry moved in a different direction.
See
Feigenbaum and Shrobe 1993 for an overview evaluation of the
Japanese project. The logic programming community has also moved on
to consider relational programming based on techniques other than
simple pattern matching, such as the ability to deal with numerical
constraints such as the ones illustrated in the constraint-propagation
system of section 3.3.5.

4.4 Logic Programming