Before we can ask whether a “safe” computer system is possible, we must have some definition of what we mean by “safe”.
An informal answer might be: a computer system which produces correct output for all possible input, (provided it is not physically damaged). Correct output for invalid input is some output which signifies that the input is invalid.
The above is unsatisfactory, however: just what does “correct” mean? Informally we might say that a correct system is one whose output can always be precisely predicted from the input in accordance with the system’s specification.
This leads us to: how can we tell that a system is behaving correctly in accordance with its specification? This notion of behaviour in terms of specification is in many respects the crux of the matter: if we can prove that a system behaves in accordance with some specification, we will have a system which is correct in terms of that specification. Such a system may not be “safe”: the specification could contain errors of, (but not in), logic which might lead the system to behave in a dangerous (but correct) way with certain input. It is because such errors cannot be found by mathematical means that this article focuses primarily on the question of correctness rather than safety, since we have some hope of finding practical mathematical methods to assist us in the search for correctness.
But before we look into specifying software, surely there is a simpler solution: why not simply test the software we develop; after all, if it performs correctly in all test cases then it must be correct?
It is simply not possible to test an arbitrary C or C++ program for all possible inputs. The number of possible input combinations usually increases at least exponentially, so even with modest test programmes we can find that the required testing time can easily exceed human life-spans. By using specifications which are expressed in a mathematical-logical language with a formal semantics we can reason with whole ranges of values. Instead of having to test the variable ‘x’ for the value ‘0’, then ‘1’, then ‘2’ and so on, up to some finite limit, we can reason about, for example, all natural number values of ‘x’, i.e. we can prove over a range, (∀x • x ∈ ℕ), “for all x such that x is an element of the set of natural numbers”. Thus where testing can only ever cover a finite number of cases, (and usually a very restricted range), by using formal mathematics we can prove correctness over infinite ranges of input data, and simply restrict ourselves to the finite ranges that computers can actually deal with in practice (when they are not working symbolically). Thus the only hope we have of proving the correctness of a non-trivial computer system is to prove mathematically that the system behaves in accordance with its specification.
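To put a figure on this (the testing rate below is an assumption chosen purely for illustration), consider a function taking just two 32-bit arguments: it has 2^64 distinct input combinations, and the following trivial program works out how long exhaustive testing would take.

#include <cstdio>

int main( )
{
    /* A function taking two 32-bit arguments has 2^64 distinct inputs.     */
    /* At an (assumed) rate of one thousand million test cases per second,  */
    /* exhaustive testing would take around six centuries, far beyond any   */
    /* human life-span.                                                     */
    const double cases            = 18446744073709551616.0 ;   /* 2^64      */
    const double tests_per_second = 1.0e9 ;
    const double seconds_per_year = 60.0 * 60.0 * 24.0 * 365.25 ;
    std::printf( "%.0f years\n", cases / tests_per_second / seconds_per_year ) ;
    return 0 ;
}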
Here is an example of a specification: “The system will output the square root of the input.”
This specification cannot be tested for correctness because it is both ambiguous and incomplete. It is ambiguous because we cannot predict its output for every possible input:
- It does not define the permitted input type(s): we might expect it to accept natural numbers, integers, and reals; but some users might wish to find the square root of some other kind of number or of an expression.
- It does not define what the program will output if the input is of a valid type but invalid in range: for example, the integer ‘-4’ has no real square root.
- It does not define what the program will output if the input is invalid, i.e. is not a valid representation of any of the accepted input types.
There are other reasons why the specification given is ambiguous, but the above suffices to make the point. It is incomplete for the same reasons. (In more complex examples ambiguity and incompleteness may well arise for different reasons.)
Certain theoretical results imply that we cannot prove the completeness or incompleteness of a specification, (Gödel’s work for example). Ambiguity, however, is something which can be tackled. At first we may think that by carefully considering all the cases we can flesh out the specification so that it is quite clear. But in practice the English language itself permits considerable ambiguity of meaning: a phrase or sentence which means one thing to one person may have a subtly (or even radically) different meaning to another person. Because of this, English, and indeed all human languages used today, are unsuitable for writing specifications. Instead we must use a language which is incapable of ambiguity: i.e. a mathematical or logical language.
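To give a flavour of what such a specification might look like (the notation and the names ‘output’ and ‘ERROR’ below are my own, for illustration only, and not drawn from any particular specification language), the square-root example might be pinned down along these lines:

∀x • ( x ∈ ℝ ∧ x ≥ 0 ) ⇒ ( output(x) ∈ ℝ ∧ output(x) ≥ 0 ∧ output(x) × output(x) = x )

∀x • ¬( x ∈ ℝ ∧ x ≥ 0 ) ⇒ output(x) = ERROR

Each statement can be read in only one way: for every input we can say exactly what the output must be, which is precisely what the English sentence above could not do.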
But why not use a computer language?
Existing computer languages are, in general, unsuitable for writing specifications because they have no formal semantics. What this means is that the meaning of a piece of code depends on the compiler translating it, rather than on some completely formal and unambiguous standard. Some mathematical and logical languages do, however, have formal semantics: so long as we understand these formal semantics we will always interpret the same statement in the same way. Contrariwise, in C or C++ we could write:
/* Line 1 */
a=b/*c;/* comment */
/* Line 2 */ ;
Some compilers would assign the value of b divided by the contents of the address pointed to by c, to a, on line 1, followed by an empty statement on line 2; others would assign the value of b to a on line 1, taking the ‘/*’ following the b as the beginning of a comment, and the ; on line 2 as the statement separator.
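Written out with the ambiguity removed (my own rendering of the two readings described above), the two interpretations are:

/* Reading 1: 'b' divided by the value pointed to by 'c'                 */
a = b / *c ;            /* line 1: an assignment, then a separate comment */
;                       /* line 2: an empty statement                     */

/* Reading 2: the slash-star after 'b' opens a comment, so the            */
/* assignment of b to a is only terminated by the semicolon on line 2     */
a = b ;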
There are of course computer languages, such as Pascal, whose syntax is defined by formal grammars and which have no syntactic ambiguities: nonetheless, it is still perfectly possible to have semantic ambiguity in such languages.
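To stay with C++ for a concrete illustration (the program is my own, not taken from any particular compiler's documentation): the order in which the two arguments to a function call are evaluated is not defined by the language, so the following syntactically unambiguous program may legitimately print either “1 2” or “2 1” depending on the compiler.

#include <cstdio>

int counter = 0 ;

int next( )                  /* returns 1 on the first call, 2 on the second */
{
    return ++counter ;
}

int main( )
{
    /* The language does not specify which argument is evaluated first,  */
    /* so different compilers may legitimately produce different output  */
    /* from this one, syntactically unambiguous, statement.              */
    std::printf( "%d %d\n", next( ), next( ) ) ;
    return 0 ;
}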
It is possible to find or create languages which do have formal semantics and use these to specify computer systems. To be of use, such languages must be able to be converted to machine-readable form and must be tractable in such a way that specifications written in them can be tested for logical consistency by automatic or semi-automatic proof checking systems. (Manual proof checking of even the simplest proofs is tedious, time-consuming and error-prone.)
To achieve this tractability such languages must be restricted in terms of what they can express. In general they are far less expressive than most computer languages. Since anything expressible in such a restricted language can also be expressed in a more powerful general-purpose language, it is relatively straightforward to convert a specification in such a language into an “equivalent” program in a general-purpose programming language. Thus the problem of correctness appears to be solvable. But what about the restrictions?
The restrictions on our formal language are likely to mean that we have guaranteed program termination, i.e. infinite loops would not be possible. Also we could fully analyse the behaviour of the program for all inputs, (since the proofs would range over all possible inputs rather than a narrow range of specific test cases), so we can calculate precisely the resources required. (Thus, if we used recursion, we could guarantee never running out of stack space because we can calculate in advance the maximum amount that could ever be used: and this amount will always be finite.)
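As a small illustration of calculating resource bounds in advance (the code and the figure of 1,024 entries are my own assumptions, not a prescribed technique): a recursive binary search halves its range on every call, so for a 1,024-element table it can never nest more than about eleven calls deep, and its worst-case stack usage is therefore known before the program is ever run.

/* Recursive binary search over a sorted table of SIZE entries.           */
/* The range halves on every call, so the depth of recursion is bounded   */
/* by roughly log2( SIZE ) + 1 calls (about 11 for SIZE = 1024), and the  */
/* worst-case stack usage can be calculated before the program runs.      */
const unsigned SIZE = 1024 ;

int find( const int table[ ], unsigned low, unsigned high, int wanted )
{
    if ( low >= high )
        return -1 ;                               /* not present           */
    unsigned mid = low + ( high - low ) / 2 ;
    if ( table[ mid ] == wanted )
        return ( int ) mid ;
    if ( table[ mid ] < wanted )
        return find( table, mid + 1, high, wanted ) ;
    return find( table, low, mid, wanted ) ;
}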
The two main disadvantages are that we cannot have unbounded “while” loops and we cannot compute everything which it is possible to compute with a less restrictive language. Furthermore, because our formal language is less expressive we cannot, in general, convert an arbitrary high-level language program into it, so arbitrary C or C++ programs cannot be proved to meet a specification.
The most obvious restriction is the loss of the unbounded loop. In C and C++ terms this would mean no while loops, and for loops which behaved like those found, for example, in Pascal, i.e. truly bounded. Of course this restriction is not as confining as it seems, for example:
while ( TRUE ) {
cin >> i ;
if( BREAK_VALUE == i )
break ;
}
would become:
typedef unsigned long ULong;
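/* Assumes 'i' has already been given a value other than BREAK_VALUE,   */
/* so that the first test of the loop condition is well-defined.        */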
for( ULong j = 0 ; ( j < VERY_BIG_NUMBER ) && ( i != BREAK_VALUE ) ; j++ ) {
cin >> i ;
}
The class of simultaneous primitive recursive functions can be given a formal semantics and is, on the one hand, restrictive enough to be tractable for semi-automatic proof checking and, on the other, expressive enough to express useful functionality. In terms of specifying hardware (which is in a sense software in silicon), some people believe that everything we would wish to compute can be computed by simultaneous primitive recursive functions. After all, hardware is a finite medium, so to take the above example, in reality we would never have an unbounded loop in hardware: there would always be some upper bound, dependent on the size in bits of the loop counter, e.g. a 16-bit counter might go from 65,534 to 65,535 to 0 to 1 and so on. (Even if we used a real number as a loop counter, the actual number of bits would eventually intrude: the machine may not be able to distinguish between 1,000,000,000,000.0 and 1,000,000,000,001.0.)
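To give a flavour of what a primitive recursive definition looks like (a standard textbook example, not anything specific to the tools discussed here), addition can be defined from the successor function alone: add(x, 0) = x and add(x, succ(y)) = succ(add(x, y)). The “loop” implicit in the definition unfolds exactly y times, a bound given by the input itself, and the same definition can be rendered directly as a bounded loop:

typedef unsigned long ULong;

/* A bounded-loop rendering of the primitive recursive definition         */
/*     add( x, 0 )        = x                                             */
/*     add( x, succ(y) )  = succ( add( x, y ) )                           */
/* The loop body runs exactly y times, a bound known from the input,      */
/* which is the property that makes such functions tractable for proof.   */
ULong add( ULong x, ULong y )
{
    ULong result = x ;
    for ( ULong k = 0 ; k < y ; k++ )
        result = result + 1 ;                 /* one application of succ  */
    return result ;
}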
Since we know that primitive recursive functions can provide us with what we need in terms of expressiveness and provability, why is it that formal languages based on such functions are not being used for writing hardware and software specifications?
- There are no commercially available tools which allow us to convert such specifications into a high-level language.
- There are no commercially available tools which allow us to perform automatic or semi-automatic proof checking on such specifications.
- The formal language in which such specifications would have to be expressed is so low-level that using it is practically the mathematical equivalent of programming a computer in raw binary: there are no commercially available tools for writing such specifications in a “high-level” formal language which can be converted by machine into the raw specification language.
- Although the theoretical work on primitive recursive functions has a long, (by computer science standards), history and is very rigorous, any proofs we obtain can still only increase our confidence rather than give absolute certainty, because we may not be able to model every aspect of a real-world system, which means that our proofs will be incomplete.
Furthermore, the will to implement such rigorous systems may be lacking. This might partly be explained by the fact that the use of “formal” languages has not so far been proved to improve software correctness: most projects which have used formal methods have reported that the number of errors, both at the design and the code level, was considerably lower than had occurred when similar projects were tackled without formal methods. However, the use of formal methods generally involves a lot more time and effort being spent on a project, and it could be argued that the improvements are simply a function of this increased effort. This latter argument is compounded by the fact that in general such projects have used “formal” specification languages which have been designed to be usable in real-world projects, but at the price of having no formal semantics (e.g. ‘Z’), even though a formal semantics is what makes their use purposeful in the first place!
But even stipulating that we did specify systems using a language for which we have a formal semantics and the necessary proof-checking tools, we are still not guaranteed correctness. To build a correct computer system we would have to specify and verify the correctness of the system’s microprocessor(s) in particular and its support chips in general; similarly for the operating system, the compiler for the high-level language we convert our specifications to, and the proof-checking and other tools relating to the specification language itself. We would also have to find a mathematical means of proving the equivalence of the compiled high-level language executables and their specification.
In the future I think that we can expect that the problems mentioned in the previous paragraph will be tackled with broad success. I would expect systems created using such tools and techniques to be correct in terms of their specification, i.e. to behave in a precisely predictable manner. I would still expect such specifications to have “gaps” because of practical and theoretical problems and limitations. Such systems might be sold as “safe”, but they will not be safe except by the remotest chance, because we cannot guarantee that our specification itself does not demand behaviour which under certain circumstances may lead to danger. (Furthermore a “safe” computer system must have some ability to cope with a certain amount of physical damage and transient corruption of volatile memory.) Truly safe computer systems are as desirable and as remote as the end of the rainbow: but for correct systems we may only have to wait until the next millennium.
M N Summerfield