Here’s Edsger Dijkstra discussing the birth of the use of axiomatics in computer science – the start of “formal methods” research. What’s striking is the assumed choice between “axiomatic” and “mechanistic,” as if there were no other way. In a later note he writes:

And now we are back at our old dilemma. Either we take by definition all properties of the model as relevant, or we specify in one way or another which of its properties are the relevant ones. In the first case we have failed to introduce in the name of “divide et impera” an interface that would allow us to divide and rule and the study of how we could build upon the (only implicitly defined) interface seems bound to deteriorate into a study of the model itself; in the second case we are again close to the axiomatic method….

[…]

Or, to put it in another way: if the traditional automata theory tends to make us insensitive to the role interfaces could and should play in coping with complex designs, should it then (continue to) occupy a central position in computing science curricula?

And I’m struck by the idea, which seems utterly wrong to me, that one either uses the methods of formal logic OR one is stuck without any ability to abstract or underspecify.

The reason mathematics has advanced so much was not because of the Euclidean axioms-lemma-theorem straitjacket, but in spite of it. Luckily, when we actually discover mathematics, we do it the Babylonian way, empirically and algorithmically. It is only when it is time to present it, that we put on the stifling Greek formal attire.

So says Doron Zeilberger.

