The falsity of formally proven software

Eric Drexler falls into the hole of "imagine we can prove programs correct":
Why does this matter to us ordinary mortals? Because proof methods can be applied to digital systems, and in particular, will be able to verify the correctness (with respect to a formal specification) of compilers [pdf], microprocessor designs [pdf] (at the digital-abstraction level), and operating system microkernels [...] If this doesn’t seem important, it may be because we’re so accustomed to living with systems that have built on foundations made of mud, and thinking about a future likewise based on mud. All of us have difficulty imagining what could be developed in a world where computers didn’t crash, were guaranteed to be immune from virus attack, and could safely download code written by the devil himself, and where crucial pieces of software could be guaranteed to not leak data.
The bolding is mine. What Eric Drexler misses is that, if you have a formal specification for a program, then you already have the program. Programming is writing a formal specification. In the absence of hardware error, programs always behave according to their formal specification - their source code.

But what if the formal specification, when actually put to work, turns out to specify things that we didn't really want? Ah... Therein lies the rub.
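A toy illustration of that rub (the function and spec are invented for this example): a program can satisfy its specification to the letter while violating the intent behind it.

```python
# Hypothetical spec: "remove every occurrence of the substring '<script>'
# from the input". The implementation below satisfies that spec exactly.
def sanitize(html: str) -> str:
    return html.replace("<script>", "")

# The program does precisely what the specification says:
assert sanitize("hi <script>alert(1)</script>") == "hi alert(1)</script>"

# ...but the specification was not what we actually wanted:
# a case-variant tag sails straight through, spec-compliantly.
assert sanitize("<SCRIPT>alert(1)</SCRIPT>") == "<SCRIPT>alert(1)</SCRIPT>"
```

A proof of correctness here would prove conformance to the wrong wish.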

The reliability of software can be improved (a lot!) by designing and adopting new programming languages which restrain the programmer's freedom of expression in subtle and wise ways - steering him away from errors while preserving his ability to get things done. But that is a far cry from ever producing programs that are error-free.

Bugs are in the mind of the programmer. The computer will always do what the programmer tells it to do. Bugs indicate the programmer's lack of awareness: confusion about what he wants, or conflicting ideas. As long as human minds are faulty, the formal specifications they produce will be faulty as well - and this is the eternal source of bugs.


kyky said...

Although a program is a formal specification, not all formal specifications are programs. A formal specification does not even need to be complete in order to be useful.
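One way to see the distinction (sketch with invented names): a specification can be a predicate that states what a correct result looks like without saying how to compute it, and many different programs can satisfy the same predicate.

```python
from collections import Counter

# A (partial) formal specification of sorting, written as a predicate:
# it states what a correct output looks like, not how to produce one.
def satisfies_sort_spec(inp, out):
    is_permutation = Counter(inp) == Counter(out)
    is_ordered = all(a <= b for a, b in zip(out, out[1:]))
    return is_permutation and is_ordered

# Two very different programs can both conform to that one specification.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

assert satisfies_sort_spec([3, 1, 2], sorted([3, 1, 2]))
assert satisfies_sort_spec([3, 1, 2], insertion_sort([3, 1, 2]))
```

The predicate is also incomplete in the sense mentioned above - it says nothing about stability or running time - yet it is still useful for checking candidate outputs.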

Formal verification methods have been in use for decades (for example, this guy has been doing it for nearly thirty years), bringing valuable results. Naturally, he started with hardware because it was simpler to express as a finite state machine.

Today, there are formal methods for converting a program to a skeletal FSM, and then verifying that machine against certain assumptions expressed as temporal logic statements. Of course this is not the dream world that Eric describes, but it is not something that one could easily dismiss either, considering the useful results that it brings.
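A minimal sketch of the idea (the FSM and the property are invented for illustration): once a program is reduced to a finite state machine, a checker can exhaustively enumerate reachable states and verify a safety property - the simplest kind of temporal-logic claim, "the bad state never happens".

```python
from collections import deque

# Hypothetical FSM skeleton of a lock: labeled transitions between states.
# The ("locked", "acquire") entry models a bug we would like to catch.
transitions = {
    ("unlocked", "acquire"): "locked",
    ("locked", "release"): "unlocked",
    ("locked", "acquire"): "double_locked",
}

def reachable(start):
    # Breadth-first exploration of every state reachable from `start`.
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# Safety property, "globally not double_locked": check every reachable state.
violations = {s for s in reachable("unlocked") if s == "double_locked"}
assert violations == {"double_locked"}  # the exhaustive check finds the bug
```

Real model checkers verify much richer temporal properties than this reachability check, but the principle - exhaustive exploration of an abstracted state space - is the same.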

denis bider said...

kyky: I agree that this is progress, quite conceivably important progress: formal verification can validate a program's self-consistency, and self-consistency errors are an important class of software problems.

However, formal verification cannot discover incorrect assumptions about the environment the program runs in, or incompatibilities with a variety of other programs' separate specifications.
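A small illustration of this point (the example is invented): a parser that is provably correct against its spec - "split a line at the first '='" - yet misbehaves because the environment violated an unstated assumption about line endings.

```python
def parse_config_line(line: str) -> tuple:
    # Provably meets its spec: split at the first '=' into (key, value).
    key, _, value = line.partition("=")
    return key, value

# Correct in the environment the author assumed (Unix text, no newline):
assert parse_config_line("host=example.com") == ("host", "example.com")

# The spec said nothing about Windows line endings, so the "verified"
# program quietly yields a value with a trailing carriage return:
assert parse_config_line("host=example.com\r") == ("host", "example.com\r")
```

No proof about the function alone could have flagged this; the error lives in the mismatch between the specification and the world it runs in.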

I'm all in favor of giving programmers tools that help us write software with fewer errors, and I didn't mean to demean the work of people in this area, which I think is remarkable and worthwhile.

However, I think Drexler is wrong when he proposes that formal proofs are the cure for all software ills. Improvement yes. Significant improvement, quite possibly.

But remove all errors? Make software truly 100% dependable? No. Self-consistency is just one class of errors. And frankly, for competent and seasoned developers - admittedly a minority - self-consistency problems are much less frequent than incorrect environment assumptions.