
Cascaded Events

Since network disruptions are random and unpredictable, it is natural to consider the possibility of so-called cascaded membership events. (In fact, cascaded events and their impact on group protocols are often considered in the group communication literature, but, alas, not often enough in the security literature.) A cascaded event occurs, in its simplest form, when one membership change occurs while another is being handled. An event here means any of: join, leave, partition, merge, or a combination thereof. For example, a partition can occur while a prior partition is being dealt with, resulting in a cascade of size two. In principle, cascaded events of arbitrary size can occur if the underlying network is highly volatile.

We claim that the TGDH partition protocol is self-stabilizing, i.e., robust against cascaded network events. This property is rare: most multi-round cryptographic protocols are not geared towards handling such events. In general, self-stabilization is a very desirable feature, since its absence requires extensive and complicated protocol "coating" to either 1) shield the protocol from cascaded events, or 2) harden the protocol sufficiently to make it robust with respect to such events (essentially, by making it re-entrant).

The high-level pseudocode for the self-stabilizing protocol is shown in figure 9. The changes from figure 8 are minimal.

Figure 9: Self-stabilizing protocol pseudocode
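
As a rough illustration of the loop structure in figure 9, here is a minimal runnable sketch in Python. The delivery-stream modelling, the round counts, and all names below are invented for illustration; this is not the paper's pseudocode. The key idea it mimics is that a cascaded event merely reschedules the outstanding rounds, and the protocol continues from the surviving partial state.

    def stabilize(deliveries, rounds_for_event):
        """Trace key agreement over an ordered delivery stream: a cascaded
        membership event (re)schedules the broadcast rounds; every other
        delivery is a sponsor broadcast that completes one round."""
        rounds_left = 0
        for item in deliveries:
            if item in rounds_for_event:          # cascaded event arrives
                rounds_left = rounds_for_event[item]
                print(f"{item}: tree updated, {rounds_left} round(s) to go")
            elif rounds_left > 0:                 # sponsor broadcast
                rounds_left -= 1
                print(f"{item}: {rounds_left} round(s) to go")
        return rounds_left == 0                   # stable iff nothing pending

    # The cascaded partition walked through below: the second partition
    # arrives after round 1, and the surviving partial state leaves only
    # one broadcast outstanding.
    stabilize(
        ["partition of M1, M4, M7", "sponsors M2, M5, M8 broadcast",
         "partition of M3, M8", "sponsor M6 broadcasts"],
        {"partition of M1, M4, M7": 2, "partition of M3, M8": 1})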

 
 

Figure 10: An Example of Cascaded Partition

Instead of providing a formal proof of self-stabilization (which we omit due to page limitations), we demonstrate it with an example. Figure 10 shows an example of a cascaded partition event. The first part of the figure depicts a partition of M1, M4 and M7 from the prior group of ten members {M1,...,M10}. This partition normally requires two broadcast rounds to complete the key agreement. As described in section 5.4, every member constructs the same tree after completing the initial round. The middle part shows the resulting tree. In it, all non-leaf nodes except K2,3 must be recomputed as follows (a toy sketch of the underlying per-node computation follows the list):

  1. First, M2 and M3 both compute K2,0, M5 and M6 compute K2,1, while M8, M9 and M10 compute K1,1. All blinded keys are broadcast by the sponsors M2, M5 and M8.
  2. Then, once all broadcasts are received, M2, M3, M5 and M6 compute K1,0 and K0,0. The blinded keys are broadcast by the sponsor M6.
  3. Finally, once all broadcasts are received, M8, M9 and M10 compute K0,0.
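
The per-node computation driving these rounds can be sketched as follows: each node key is obtained by raising the sibling subtree's blinded key to one's own key, i.e. K = (BK_sibling)^(K_own) mod p with BK = g^K mod p. The parameters and session randoms below are toy values (a real run would use a full-strength Diffie-Hellman group), and the helper names are ours, not the paper's.

    # Toy parameters: a 64-bit prime and generator; NOT cryptographically
    # secure, for illustration only.
    p, g = (1 << 64) - 59, 5

    def blind(k):
        """Blinded key BK = g^k mod p, safe to broadcast."""
        return pow(g, k, p)

    def node_key(own_key, sibling_blinded):
        """Parent key K = (BK_sibling)^own mod p = g^(K_left * K_right) mod p."""
        return pow(sibling_blinded, own_key, p)

    # Round 1 of the example: M2 and M3 derive the subtree key K(2,0)
    # from each other's blinded contributions; the sponsor M2 then
    # broadcasts blind(K(2,0)) for use in the next round.
    r2, r3 = 1234567, 7654321          # session randoms of M2 and M3
    k20_at_m2 = node_key(r2, blind(r3))
    k20_at_m3 = node_key(r3, blind(r2))
    assert k20_at_m2 == k20_at_m3      # both hold the same K(2,0)
    bk20 = blind(k20_at_m2)            # sponsor M2's round-1 broadcast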
Suppose that, in the midst of handling the first partition, another partition (of M3 and M8) takes place. Note that, regardless of which round (1, 2 or 3) of the first partition is in progress, the departure of M3 and M8 does not affect the keys (and blinded keys) in the subtrees formed by {M9, M10} and by {M5, M6}. All remaining members update the tree as shown in the rightmost part of figure 10. The blinded key of K1,0 is the only one missing from all members' view of the tree. It is computed by M2, M5 and M6 and broadcast by M6. When the broadcast is received, all members compute the root key.
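
The reason the cascade is cheap to absorb is that a departure invalidates only the keys on the departed leaf's path to the root. Below is a minimal sketch of this invalidation rule using the (level, index) numbering of the K notation above; the leaf positions are assumed from the middle tree of figure 10, and the subsequent tree restructuring (promoting the departed leaves' siblings) is omitted.

    def stale_nodes(removed_leaves):
        """Ancestors of the removed leaves: the only keys that must be
        recomputed.  Nodes are (level, index); the parent of (l, v)
        is (l - 1, v // 2)."""
        stale = set()
        for level, index in removed_leaves:
            while level > 0:
                level, index = level - 1, index // 2
                stale.add((level, index))
        return stale

    # M3 at (3, 1) and M8 at (2, 2) depart (positions assumed from the
    # middle tree of figure 10); everything outside their paths survives.
    print(sorted(stale_nodes({(3, 1), (2, 2)})))
    # -> [(0, 0), (1, 0), (1, 1), (2, 0)]
    # After sibling promotion collapses (2, 0) and (1, 1), only K(1,0)
    # and the root K(0,0) remain to be recomputed, as in the text.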

The only remaining issue is whether a broadcast from the first partition can be received after the notification of the second (cascaded) partition. Here we rely on the underlying group communication system to guarantee that all membership events are delivered in sequence after all outstanding messages are delivered. In other words, if a message is sent in one membership view and the membership changes before the message is delivered, the membership change must be postponed until the message has been delivered to the (surviving) subset of the original membership. This is essentially a restatement of View Synchrony (as discussed in section 3).
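
This delivery guarantee can be pictured as a queue that never surfaces a membership notification while messages from the current view are outstanding. The class below is an invented toy model of that ordering, not the API of any real group communication system.

    from collections import deque

    class ViewSyncQueue:
        """Toy model: view changes wait behind in-flight messages."""
        def __init__(self):
            self.msgs = deque()    # messages sent in the current view
            self.views = deque()   # pending membership notifications

        def send(self, msg):
            self.msgs.append(("msg", msg))

        def membership_change(self, view):
            self.views.append(("view", view))

        def deliver(self):
            # Outstanding messages always drain before a view change.
            if self.msgs:
                return self.msgs.popleft()
            if self.views:
                return self.views.popleft()
            return None

    q = ViewSyncQueue()
    q.send("blinded keys from sponsor M6")
    q.membership_change("partition of M3 and M8")
    print(q.deliver())   # ('msg', 'blinded keys from sponsor M6')
    print(q.deliver())   # ('view', 'partition of M3 and M8')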

