Review of A Discipline of Programming (1976)
Edsger W. Dijkstra (writer).
Read in 2021.
The subject is computer science in general and one approach to proving program correctness in particular. Dijkstra develops an Algol-like mini-language of his own, the guarded commands, to illustrate a series of solutions to known problems, one step at a time.
Provable correctness is not a feature of the new language but is investigated separately, by manual predicate calculus: for each statement and each desired postcondition, one derives the weakest precondition on the system state (the weakest-precondition function wp) under which the statement is guaranteed to establish that postcondition.
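For a flavour of the technique, take the rule for assignment, which obtains the weakest precondition by substituting the assigned expression into the postcondition. The rule is the book’s; the worked instance below is my own example, not one taken from the book:

    wp(“x := E”, R)  =  R[x := E]
    wp(“x := x + 1”, x > 10)  ≡  (x + 1 > 10)  ≡  (x > 9)

That is, the assignment is guaranteed to terminate in a state satisfying x > 10 exactly when it is started in a state satisfying x > 9.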
In a characteristic aside, Dijkstra hints that software managers should not grade their programmers by the number of lines of code produced per month; he would rather let them pay for the punched cards they use out of their own pocket.
The typography leaves something to be desired. Dijkstra mentions Bertrand Russell with respect, but in his formulae he uses both E and A without explanation. The reader must guess that in his notation these stand for ∃ (the existential quantifier) and ∀ (the all-quantifier of Russell’s tradition, which Dijkstra does name on page 67 of my first-edition copy), and are not new variables. Long discussions of coloured spanning trees and convex hulls are pure text; there is not a single illustration, as there would be in scientific texts just a few years later, once page layout was computerized. In general, the author assumes a reader who is already familiar with every concept in the book.
If you are familiar with the subject, then as a picture of computer science at the time, the book is a lot of fun. Dijkstra, who participated in a failed effort to rebrand what he did as “computing science”, writes with character, erudition and intelligence. He aptly points out that the particular challenge of programming is its “grain ratio”: 10¹ bits is easy, but 10¹⁰ bits is an incomprehensible amount of the same thing. He has opinions about the era that coined the term “software” and had not yet converged on a popular hardware architecture, or anything popular at all (p. 202):
Sad remark. Since then we have witnessed the proliferation of baroque, ill-defined and therefore, unstable software systems. Instead of working with a formal tool, which their task requires, many programmers now live in a limbo of folklore, in a vague and slippery world, in which they are never quite sure what the system will do to their programs. Under such regretful circumstances the whole notion of a correct program —let alone a program that has been proved to be correct— becomes void. What the proliferation of such systems has done to the morale of the computing community is more than I can describe. (End of sad remark.)
There is nothing wrong with formality, proof or the explicit application of mathematics to software engineering. These are still common in computer science, and rightly so, but they never caught on in what he calls “business-oriented computer applications”, that is, outside the academy.
In my opinion, Dijkstra went wrong in his choice of what to prove. His pseudocode is entirely of the kind he himself elsewhere named “structured programming”, in an imperative style, and his central concern is to defend that style. Whereas in mathematics a variable’s value does not change, in Dijkstra’s code it does, and the bulk of his calculus goes toward showing, case by case, that this mutation does not invalidate the results (the maths). The more general halting problem is a secondary concern. Dijkstra does not propose that his formality be automated or built into his novel language. He expects the programmer to prove everything manually, as if their life depended on it (p. 207):
We can imagine the following setup. In order to start the UM [the ideal Unbounded Machine], the user must push the start button, but in order to be able to do so, he must be in the computer room. For reasons of protection of the air conditioning, the UM can only be started provided that all doors of the computer room are closed and for safety’s sake, the UM (which has no stop button!) will keep the doors locked while in operation. This environment shows how deadly it can be to start a machine without having proved termination!
This rather violent joke betrays a flaw in reasoning. In 1976, there was already an alternative to the author’s defence of imperative programming. Since the author himself argues for a categorical distinction between the physical machine and the programmer’s abstraction, and considers the abstraction primary, he should have preferred a less stateful programming paradigm, one less tied to the physical coincidence that addresses in memory can be overwritten with new values. Mutation does indeed add complexity and makes termination harder to reason about, hence Dijkstra’s desire for a higher order of reasoning. The functional programming paradigm is less prone to this category of errors, and it was already available in Lisp. In fact, Scheme had appeared in 1975, but nothing like it is mentioned. I wonder why. Later in life, Dijkstra came to recommend Haskell over Java, C++ etc., because he understood then that functional programs are “more readily appreciated as mathematical objects than imperative ones”, hence more suited to “rigorous reasoning” (letter “to the members of the Budget Council”, 2001-04-12).
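To make the contrast concrete, here is a sketch of my own, not anything from the book: Euclid’s algorithm by repeated subtraction, written as a pure function in Haskell. Each guarded equation can be read as a mathematical identity, and substituting equals for equals is all the reasoning one needs; there is no mutable state to track through a weakest-precondition calculation.

    -- Euclid's algorithm by repeated subtraction, as a pure function.
    -- Precondition: both arguments are positive; the recursion would
    -- not terminate on zero or negative inputs.
    -- Named myGcd so as not to shadow the Prelude's gcd.
    myGcd :: Integer -> Integer -> Integer
    myGcd a b
      | a == b    = a
      | a > b     = myGcd (a - b) b
      | otherwise = myGcd a (b - a)

    main :: IO ()
    main = print (myGcd 111 259)   -- prints 37

Termination is still a proof obligation, as Dijkstra would insist, but here it follows from a single observation: a + b strictly decreases on every recursive call while remaining positive.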
Though alternatives did exist, imperative programming was already dominant and would remain so indefinitely. Dijkstra’s approach is an interesting thought experiment about it, but it did not catch on. In short, he chose a poor paradigm and justified the choice by placing expectations on the programmer that were rarely met. The reasons are embedded in the text itself. He talks about domain checking but not type checking (p. 50). He talks about changing algorithms for efficiency on real-world machines, though only as a secondary concern (p. 55). He notes that physical machines can malfunction, and that a later rewrite of a program can inadvertently invalidate the pen-and-paper mathematics that went into writing the first version (p. 56), since the proofs are not part of the program itself! This latter consideration would turn out to be important in a later era of agile and iterative development. In one of his own algorithms, Dijkstra deliberately and temporarily violates his own postcondition predicate (p. 71), which is hardly to be encouraged. He acknowledges that increased hardware power drives the need for greater discipline (p. 81), and he seems vaguely aware that the basic problem is what he calls “the smallness of our human skulls” (p. 190), but this does not lead him to simplify or to leverage intuition. Instead, he offers a rather arrogant anecdote about a student of his who was too stupid for his liking.
As with the typography, the author does very little to make his programs readable. He has no function names because he has no named functions. His variable names are short enough to be cryptic, yet not as short as mathematical convention would have them, and he never discusses how he names things. At one point he uses the variable name inf, possibly to mean “in file”, more generally to symbolize the high end of a domain, but never infinity. His statements are similarly poorly named (virvar, glocon etc.). He embeds comments in his pseudocode but does not specify their syntax as part of his language, a language which itself never caught on. Even so, with zero irony, he complains that long programs aren’t readable enough (p. 116), and he finds it “somewhat frightening to discover the devious ways” in which human culture affects human thought, apropos of which bucket students of different nationalities pick first in a sorting problem (ibid.).
The author does not anticipate Agda. In a book about proof, the near-absence of automatic static analysis is even more curious. Apropos of three of his own errors, Dijkstra claims that only the most trivial one “could have been detected by a syntactical checker” (p. 190), and this is true, but later linters, refactoring aids etc. proved more economically efficient than manual proofs for all but the most demanding applications. Unit tests didn’t come into play until the 1980s, and they certainly weren’t invented here.