I’m thinking about distributed consensus algorithms, timestamping, and databases, and if you read that literature you will see many references to the Fischer, Lynch, Paterson “theorem”. Google Scholar tells me the paper has been cited more than 4500 times. The theorem can be paraphrased as follows:

If you cannot tell the difference between a network site or process that has failed and one that is just slow, then you can’t tell the difference between a network site or process that has failed and one that is just slow

One might respond: “surely this trivial tautology can’t be a famous result; it’s more probable that you are being hyperbolic”. But here is the problem statement as set down in the paper:

The problem is for all the data manager processes that have participated in the processing of a particular transaction to agree on whether to install the transaction’s results in the database or to discard them. The latter action might be necessary, for example, if some data managers were, for any reason, unable to carry out the required transaction processing. Whatever decision is made, all data managers must make the same decision in order to preserve the consistency of the database.

A set of data manager processes must come to a consensus about whether to commit or to discard. The problem statement requires that ALL of the processes agree either “yes” or “no” and, presumably, a single “no” vote must persuade the others. An implicit but key property of the desired consensus is that if a process fails, the others can ignore its opinion. That is, a dead process does not count in the consensus, but a “no” process can veto consensus. And that’s the core problem here. The processes consult with each other and possibly all agree on “yes” except for one process that does not answer. Is it a slow process that will say “yes”, is it a slow dissenter that will say “no”, or has it crashed so that its opinion can be ignored? This is a real and interesting problem – consider what happens if a router crashes and comes up 20 minutes later, after 99 processes agreed to commit a transaction, and suddenly the 100th process is back on line objecting.
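To make the dilemma concrete, here is a minimal sketch of the coordinator’s decision logic. It is my own illustration, not code from the paper; the `decide` function and `Vote` type are invented names. The point is the third branch: with one silent process, neither “commit” nor “abort” is safe.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"
    NO_ANSWER = "no answer"  # slow? dead? a dissenter about to say "no"?

def decide(votes):
    """Hypothetical coordinator logic for the commit/discard consensus.

    votes maps a process id to a Vote. Returns "commit", "abort", or
    None when no safe decision exists.
    """
    if any(v is Vote.NO for v in votes.values()):
        return "abort"    # a single "no" vetoes the transaction
    if all(v is Vote.YES for v in votes.values()):
        return "commit"   # unanimous "yes"
    # At least one process has not answered. Without a way to tell a
    # slow process (whose possible veto we must wait for) from a dead
    # one (whose opinion we may ignore), neither decision is safe.
    return None

print(decide({1: Vote.YES, 2: Vote.YES, 3: Vote.NO_ANSWER}))  # -> None
```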

In this paper, we show the surprising result [my bold] that no completely asynchronous consensus protocol can tolerate even a single unannounced process death. We do not consider Byzantine failures, and we assume that the message system is reliable: it delivers all messages correctly and exactly once. Nevertheless, even with these assumptions, the stopping of a single process at an inopportune time can cause any distributed commit protocol to fail to reach agreement.

It is “surprising” that a consensus algorithm can fail if a process dies “unannounced”. Surprising, because the participants in the consensus algorithm can detect the death and … well, they can’t:

Finally, we do not postulate the ability to detect the death of a process, so it is impossible for one process to tell whether another has died (stopped entirely) or is just running very slowly.

Here’s the key phrase: “it is impossible for one process to tell whether another has died (stopped entirely) or is just running very slowly”.

Implicit in this phrase and in the problem statement is that it is essential to distinguish between a process that is “very slow” and one that has failed. This is a key requirement, because otherwise the problem is still solvable. But as presented, we have a tautology.

Tautology: If it is impossible for one process to tell whether another has died or is just running slowly, and a protocol depends on making that distinction, the protocol cannot work.

Suppose, however, that you can tell that a process is either dead or very slow – something that is not necessarily ruled out by the problem statement. From a systems engineering perspective, we don’t care whether a process has crashed or not if it won’t or can’t participate in the consensus process on a timely basis. The obvious solution is something like a heartbeat message, so that when the slow process rejoins the network it knows it has probably been ruled out of the consensus group and can participate in some sort of catch-up protocol. This is actually a pretty simple and durable test: a process that has not been able to participate in the protocol for some period of time should consider itself suspect until it can rejoin the protocol, and waiting past that period without interaction is all the evidence the remaining processes need to conclude that the process is faulty.
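That heartbeat test is easy to sketch. The `HeartbeatMonitor` below is a hypothetical illustration with an invented `SUSPECT_AFTER` threshold; note that it relies only on a local elapsed-time clock (`time.monotonic` in Python), not on a clock synchronized with any other process.

```python
import time

# Hypothetical threshold: how long a peer may stay silent before the
# others treat it as failed. The value is illustrative, not prescriptive.
SUSPECT_AFTER = 5.0  # seconds

class HeartbeatMonitor:
    """Tracks elapsed time since each peer's last heartbeat, using a
    local monotonic clock rather than any synchronized clock."""

    def __init__(self, peers):
        now = time.monotonic()
        self.last_seen = {peer: now for peer in peers}

    def on_heartbeat(self, peer):
        # Record that this peer participated in the protocol just now.
        self.last_seen[peer] = time.monotonic()

    def suspects(self):
        """Peers silent longer than SUSPECT_AFTER are ruled out of the
        consensus group until they rejoin via a catch-up protocol."""
        now = time.monotonic()
        return [p for p, t in self.last_seen.items()
                if now - t > SUSPECT_AFTER]
```

However, I’m not sure that would be allowed: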

We also assume that processes do not have access to synchronized clocks, so algorithms based on time-outs, for example, cannot be used.

Synchronized clocks are not needed to solve the problem, but reasonable elapsed-time clocks, or some stand-in for them, are needed. A coordinating process simply needs to know that, having received no response after 4 milliseconds or 2 weeks or whatever duration is appropriate, it can conclude that the non-responding process has failed or is too slow.
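Here is a minimal sketch of that coordinator, again my own construction and not anything from the paper: `receive_vote` is an assumed stand-in for the message system, and the 4 millisecond deadline is only illustrative. All it needs is a local elapsed-time clock; nothing is shared or synchronized across processes.

```python
import time

def decide_with_timeout(receive_vote, processes, deadline=0.004):
    """Hypothetical coordinator using only a local elapsed-time clock.

    receive_vote() stands in for the message system: it returns
    (process_id, "yes" | "no"), or None if no message has arrived yet.
    deadline is in seconds (4 ms here, purely for illustration).
    """
    votes = {}
    start = time.monotonic()  # elapsed time, not a synchronized clock
    while time.monotonic() - start < deadline and len(votes) < len(processes):
        msg = receive_vote()
        if msg is not None:
            pid, vote = msg
            votes[pid] = vote
    if any(v == "no" for v in votes.values()):
        return "abort"  # a dissenter answered in time: veto
    # Anyone still silent is declared failed or too slow; its opinion
    # no longer counts, so a unanimous "yes" among survivors commits.
    # (A more cautious policy could abort instead; the point is only
    # that the elapsed-time deadline makes *some* decision possible.)
    return "commit" if votes else "abort"
```

If that kind of elapsed-time deadline is ruled out by what is meant by “asynchronous”, then we are back to the tautology.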

(edited Sept 3 2016, May 2018, December 2018, April 2024)

 

More on Fischer, Lynch, Paterson and the parrot theorem.