Flajolet–Martin algorithm

The Flajolet–Martin algorithm approximates the number of distinct elements in a stream in a single pass, using space logarithmic in the maximum number of possible distinct elements in the stream. The algorithm was introduced by Philippe Flajolet and G. Nigel Martin in their 1984 paper "Probabilistic Counting Algorithms for Data Base Applications".[1] It was later refined in the papers "LogLog Counting of Large Cardinalities" by Marianne Durand and Philippe Flajolet,[2] and "HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm" by Philippe Flajolet et al.[3]

In their 2010 paper "An optimal algorithm for the distinct elements problem",[4] Daniel M. Kane, Jelani Nelson and David P. Woodruff give an improved algorithm that uses nearly optimal space and has optimal O(1) update and reporting times.

The algorithm

Assume that we are given a hash function hash(x) that maps each input x to an integer in the range [0, 2^L - 1], with outputs that are sufficiently uniformly distributed. Note that the set of integers from 0 to 2^L - 1 corresponds to the set of binary strings of length L. For any non-negative integer y, define bit(y,k) to be the k-th bit in the binary representation of y, so that:


y = \sum_{k \geq 0} \text{bit}(y,k) 2^k

We then define a function \rho(y) which outputs the position of the least significant 1-bit in the binary representation of y:

\rho(y) = \min\{k \geq 0 : \text{bit}(y,k) \neq 0\}

where \rho(0) = L. Note that with the above definition we are using 0-indexing for the positions. For example, \rho(13) = \rho(1101_{2}) = 0 since the least significant bit is a 1, and \rho(8) = \rho(1000_{2}) = 3 since the least significant 1-bit is at position 3. Now observe that, under the assumption that the output of our hash function is uniformly distributed, the probability of observing an output whose binary representation ends in a one followed by exactly k zeroes (that is, an output y with \rho(y) = k) is 2^{-(k+1)}, since this corresponds to flipping k heads and then a tail with a fair coin.
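The definition of \rho can be checked against the examples above with a short helper function (a minimal sketch; the parameter L = 32 is an arbitrary word size, not fixed by the algorithm):

```python
def rho(y, L=32):
    """0-indexed position of the least significant 1-bit of y; L when y == 0."""
    if y == 0:
        return L
    k = 0
    while y % 2 == 0:  # shift off trailing zero bits
        y //= 2
        k += 1
    return k

print(rho(13))  # 13 = 1101_2, least significant bit is 1 -> 0
print(rho(8))   # 8 = 1000_2, least significant 1-bit at position 3 -> 3
```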

Now the Flajolet–Martin algorithm for estimating the cardinality of a multiset M is as follows:

  1. Initialize a bit-vector BITMAP to be of length L and contain all 0's.
  2. For each element x in M:
    1. index = \rho(\text{hash}(x)).
    2. BITMAP[index] = 1.
  3. Let R denote the smallest index i such that BITMAP[i] = 0.
  4. Estimate the cardinality of M as 2^R / \phi where \phi \approx 0.77351.
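The four steps above can be sketched in Python as follows. The hash function here (the first 32 bits of an MD5 digest) is only an illustrative stand-in for the uniformly distributed hash the algorithm assumes, and the function names are our own:

```python
import hashlib

L = 32  # number of bits in each hash output

def fm_hash(x):
    # Illustrative stand-in hash: first L bits of the MD5 digest of str(x).
    return int.from_bytes(hashlib.md5(str(x).encode()).digest()[:L // 8], "big")

def rho(y):
    # 0-indexed position of the least significant 1-bit; L when y == 0.
    return (y & -y).bit_length() - 1 if y else L

def flajolet_martin(stream):
    bitmap = [0] * L                             # step 1
    for x in stream:                             # step 2
        bitmap[rho(fm_hash(x))] = 1
    R = bitmap.index(0) if 0 in bitmap else L    # step 3
    return 2 ** R / 0.77351                      # step 4
```

Note that repeated elements hash to the same value and set the same bit, so duplicates in the stream do not change the estimate.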

The idea is that if n is the number of distinct elements in the multiset M, then BITMAP[0] is accessed approximately n/2 times, BITMAP[1] is accessed approximately n/4 times and so on. Consequently if i \gg \log_2 n then BITMAP[i] is almost certainly 0, and if i \ll \log_2 n then BITMAP[i] is almost certainly 1. If i \approx \log_2 n then BITMAP[i] can be expected to be either 1 or 0.

The correction factor \phi \approx 0.77351 is found by calculations which can be found in the original paper.

Improving accuracy

A problem with the Flajolet–Martin algorithm in the above form is that its results vary significantly. A common solution is to run the algorithm multiple times with k different hash functions and combine the results of the different runs. One idea is to take the mean of the k results from the different hash functions, obtaining a single estimate of the cardinality. The problem with this is that the mean is very susceptible to outliers (which are likely here). A different idea is to use the median, which is less prone to be influenced by outliers. The problem with this is that the result can only take the form 2^R / \phi, where R is an integer. A common solution is to combine both the mean and the median: create k \cdot \ell hash functions and split them into k distinct groups (each of size \ell). Within each group, use the median to aggregate the \ell results, and finally take the mean of the k group estimates as the final estimate.
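The combination step can be sketched as follows (the function name and the group-by-group layout of the input list are our own illustration):

```python
import statistics

def combine_estimates(estimates, k, l):
    """Aggregate k*l single-hash estimates, laid out group by group:
    take the median within each group of l, then the mean of the k medians."""
    assert len(estimates) == k * l
    medians = [statistics.median(estimates[i * l:(i + 1) * l]) for i in range(k)]
    return sum(medians) / k
```

For example, with k = 2 groups of l = 3 estimates, combine_estimates([1, 2, 3, 4, 5, 6], 2, 3) takes the medians 2 and 5 and returns their mean, 3.5 — a value no longer constrained to the form 2^R / \phi.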

References

  1. Flajolet, Philippe; Martin, G. Nigel (1985). "Probabilistic counting algorithms for data base applications" (PDF). Journal of Computer and System Sciences 31 (2): 182–209. doi:10.1016/0022-0000(85)90041-8.
  2. Durand, Marianne; Flajolet, Philippe (2003). "Loglog Counting of Large Cardinalities" (PDF). Algorithms - ESA 2003. Lecture Notes in Computer Science 2832. p. 605. doi:10.1007/978-3-540-39658-1_55. ISBN 978-3-540-20064-2.
  3. Flajolet, Philippe; Fusy, Éric; Gandouet, Olivier; Meunier, Frédéric (2007). "Hyperloglog: The analysis of a near-optimal cardinality estimation algorithm" (PDF). Discrete Mathematics and Theoretical Computer Science: 127–146. CiteSeerX: 10.1.1.76.4286.
  4. Kane, D. M.; Nelson, J.; Woodruff, D. P. (2010). "An optimal algorithm for the distinct elements problem" (PDF). Proceedings of the twenty-ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems of data - PODS '10. p. 41. doi:10.1145/1807085.1807094. ISBN 978-1-4503-0033-9.
