Immediate generalization. Given an item a and its immediate parent parent(a) in the taxonomy, the immediate generalization rule of a is a → parent(a).
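As a small illustration, here is a minimal Python sketch, assuming the taxonomy is stored as a dict mapping each item to its immediate parent and a generalization rule is represented as an (item, parent) pair; the item names and helper names (PARENT, immediate_generalization) are made up for the example.

```python
# Hypothetical toy taxonomy: each item maps to its immediate parent,
# taxonomy roots map to None.
PARENT = {
    "skim milk": "milk",
    "whole milk": "milk",
    "milk": "dairy",
    "dairy": None,
}

def immediate_generalization(item, parent=PARENT):
    """Return the immediate generalization rule item -> parent(item),
    or None if the item is a taxonomy root."""
    p = parent.get(item)
    return (item, p) if p is not None else None

print(immediate_generalization("skim milk"))  # ('skim milk', 'milk')
```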
Given a most simplified association rule set R, let A(R) be the set of the ancestor items of the rules in R. Naively, we could do an exhaustive search over the subsets of A(R): for each subset, we generate the corresponding immediate generalization rules, verify their validity, and compute the information loss. At the end we keep the valid generalization rule set with the least information loss.
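For contrast, here is a sketch of that exhaustive baseline under the same representation (rules as frozensets of item names). It reads "ancestor items of the rules" as "ancestors of the items appearing in the rules", and is_valid (do all rules in R stop holding after generalization?) and information_loss are hypothetical callables, since both depend on the underlying database and are not specified here.

```python
from itertools import combinations

def ancestors(item, parent):
    """Yield every ancestor of `item`, walking up to the taxonomy root."""
    p = parent.get(item)
    while p is not None:
        yield p
        p = parent.get(p)

def exhaustive_search(rules, parent, is_valid, information_loss):
    """Try every subset of A(R) and keep the valid one with least loss.
    Exponential in |A(R)|, so only feasible for tiny inputs."""
    candidates = sorted({a for r in rules for item in r
                         for a in ancestors(item, parent)})
    best, best_loss = None, float("inf")
    for k in range(len(candidates) + 1):
        for subset in combinations(candidates, k):
            g = {(a, parent[a]) for a in subset if parent.get(a) is not None}
            if is_valid(g, rules):
                loss = information_loss(g)
                if loss < best_loss:
                    best, best_loss = g, loss
    return best
```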
But this is clearly too expensive, since the number of subsets is exponential in |A(R)|. We therefore develop a greedy algorithm that is efficient and whose information loss is, in general, close to that of the optimal solution.
The algorithm works in several passes.
Pass One. We compute A_1, the set of ancestor items of the size-one association rules in R. We add the immediate generalization rules of the items in A_1 to the generalization rule set G. We then use G to prune R, eliminating the association rules that no longer hold in the database generalized with G. The result is R_1, which contains no size-one association rules.
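A sketch of Pass One under the same representation, reusing the ancestors() helper from the sketch above; prune is a hypothetical helper that removes the rules which no longer hold once the generalization rules in G are applied to the database.

```python
def pass_one(rules, parent, prune):
    """Generalize every ancestor item of a size-one rule by one level,
    then prune the rules that no longer hold."""
    size_one = [r for r in rules if len(r) == 1]
    a1 = {a for r in size_one for item in r for a in ancestors(item, parent)}
    g = {(a, parent[a]) for a in a1 if parent.get(a) is not None}
    r1 = prune(rules, g)   # intended result R1: no size-one rules survive
    return g, r1
```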
Pass Two. We compute A_2, the set of ancestor items of the size-two association rules in R_1. For each item a in A_2 we compute a heuristic function H(a); H will be explained later. Let a* be the item with the highest H value. We add the immediate generalization rule of a* to the generalization rule set G, and then use G to prune R_1, eliminating the association rules that no longer hold in the database generalized with G. We repeat this pass until A_2 is empty. The result is R_2, which contains no size-one or size-two association rules.
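A sketch of the greedy pass for size-two rules, with H as in the "Heuristic function H" paragraph below and the same hypothetical prune helper; one highest-H ancestor item is generalized per iteration until no size-two rule is left.

```python
def pass_two(rules, g, parent, prune, H):
    """Greedy pass: repeatedly generalize the ancestor item with the
    highest heuristic value until no size-two rules remain."""
    while True:
        size_two = [r for r in rules if len(r) == 2]
        a2 = {a for r in size_two for item in r
              for a in ancestors(item, parent) if parent.get(a) is not None}
        if not a2:
            return g, rules            # R2: no size-one or size-two rules left
        best = max(a2, key=lambda a: H(a, rules, parent))
        g.add((best, parent[best]))    # add the immediate generalization of best
        rules = prune(rules, g)
```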
Pass k. Repeat the steps of Pass Two for the size-k association rules, producing R_k.
Finally, we obtain the generalization rule set G. If A is B's ancestor in the taxonomy, and both A's and B's immediate generalization rules are in G, then we delete A's immediate generalization rule from G. At the end we get the pruned generalization rule set G'.
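A sketch of this final redundancy pruning step, again reusing the ancestors() helper: whenever both an item and one of its taxonomy ancestors have immediate generalization rules in G, the ancestor's rule is dropped.

```python
def prune_generalization_set(g, parent):
    """Drop A -> parent(A) from G whenever G also generalizes some
    descendant B of A."""
    items_in_g = {item for item, _ in g}
    redundant = {a for b in items_in_g
                 for a in ancestors(b, parent) if a in items_in_g}
    return {(item, p) for item, p in g if item not in redundant}
```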
Heuristic function H. The intuition behind the heuristic function H is that we should favor an item that appears in many association rules in R, and favor it even more when it appears in short association rules. One possible heuristic function is H(a) = Σ weight(r), where a ∈ r and r is an association rule in R. A possible weight function is one that decreases with the size of the rule, for example weight(r) = 1/|r|. Other heuristic functions with similar properties are possible as well.
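One concrete, purely illustrative instance of H, assuming an ancestor item "appears in" a rule if it equals or is an ancestor of one of the rule's items, and taking weight(r) = 1/|r| so that shorter rules count more; neither choice is prescribed by the text.

```python
def weight(rule):
    # Shorter rules get larger weight.
    return 1.0 / len(rule)

def H(a, rules, parent):
    """Sum the weights of the rules in which the ancestor item `a` appears."""
    return sum(weight(r) for r in rules
               if any(item == a or a in ancestors(item, parent) for item in r))
```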
This algorithm finds a valid generalization rule set efficiently. With a good heuristic function, the information loss is, in the average case, close to that of the optimal solution.