Maximum coverage problem

The maximum coverage problem is a classical question in computer science, computational complexity theory, and operations research. It is widely taught in courses on approximation algorithms.

As input you are given several sets and a number k. The sets may have some elements in common. You must select at most k of these sets such that the maximum number of elements are covered, i.e. the union of the selected sets has maximal size.

Formally, (unweighted) Maximum Coverage

Instance: A number k and a collection of sets S = \{S_1, S_2, \ldots, S_m\}.
Objective: Find a subset S' \subseteq S of sets, such that \left| S' \right| \leq k and the number of covered elements \left| \bigcup_{S_i \in S'} S_i \right| is maximized.

The maximum coverage problem is NP-hard, and cannot be approximated within a factor better than 1 - \frac{1}{e} + o(1) \approx 0.632 under standard complexity-theoretic assumptions. This result essentially matches the approximation ratio achieved by the generic greedy algorithm used for maximization of submodular functions with a cardinality constraint.[1]

ILP formulation

The maximum coverage problem can be formulated as the following integer linear program.

maximize \sum_{e_j \in E} y_j (maximizing the sum of covered elements)
subject to \sum{x_i} \leq k (no more than k sets are selected)
\sum_{e_j \in S_i} x_i \geq y_j (if y_j = 1 then at least one selected set contains e_j)
y_j \in \{0,1\} (if y_j=1 then e_j is covered)
x_i \in \{0,1\} (if x_i=1 then S_i is selected for the cover)
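
As an illustration, the formulation above can be transcribed almost directly into an off-the-shelf ILP modeller. The following is a minimal sketch assuming the Python library PuLP (with its bundled CBC solver) is available; the instance data is made up for demonstration and is not part of the problem statement.

    from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum, value

    # Hypothetical instance: four sets and k = 2 (illustrative data only).
    sets = {1: {"a", "b", "c"}, 2: {"c", "d"}, 3: {"d", "e", "f"}, 4: {"a", "f"}}
    k = 2
    universe = set().union(*sets.values())

    prob = LpProblem("maximum_coverage", LpMaximize)
    x = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in sets}      # x_i = 1 iff S_i is selected
    y = {e: LpVariable(f"y_{e}", cat=LpBinary) for e in universe}  # y_j = 1 iff e_j is covered

    prob += lpSum(y.values())        # maximize the number of covered elements
    prob += lpSum(x.values()) <= k   # at most k sets may be selected
    for e in universe:
        # an element may only count as covered if some selected set contains it
        prob += lpSum(x[i] for i in sets if e in sets[i]) >= y[e]

    prob.solve()
    print("chosen sets:", [i for i in sets if value(x[i]) == 1])
    print("covered elements:", int(value(prob.objective)))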

Greedy algorithm

The greedy algorithm for maximum coverage chooses sets according to one rule: at each stage, choose a set which contains the largest number of uncovered elements. It can be shown that this algorithm achieves an approximation ratio of 1 - \frac{1}{e}.[2] Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for maximum coverage.[3]
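
The rule translates into only a few lines of code. The sketch below is an illustrative Python implementation; the function name and example instance are not from the article.

    def greedy_max_coverage(sets, k):
        """Repeatedly pick the set covering the largest number of still-uncovered elements."""
        covered, chosen = set(), []
        for _ in range(k):
            best = max(sets, key=lambda s: len(set(s) - covered), default=None)
            if best is None or not set(best) - covered:
                break  # no remaining set adds any new element
            chosen.append(best)
            covered |= set(best)
        return chosen, covered

    # Example: with k = 2 the algorithm picks {1, 2, 3} and then {4, 5}, covering all five elements.
    print(greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5}], 2))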

Known extensions

The inapproximability results apply to all extensions of the maximum coverage problem since they contain the maximum coverage problem as a special case.

Weighted version

In the weighted version every element e_j has a weight w(e_j). The task is to find a coverage of maximum total weight. The basic version is the special case in which all weights are 1.

maximize \sum_{e_j \in E} w(e_j) \cdot y_j . (maximizing the weighted sum of covered elements).
subject to  \sum{x_i}  \leq k ; (no more than k sets are selected).
 \sum_{e_j \in S_i} x_i \geq y_j ; (if y_j = 1 then at least one selected set contains e_j).
y_j \in \{0,1\}; (if y_j=1 then e_j is covered)
x_i \in \{0,1\} (if x_i=1 then S_i is selected for the cover).

The greedy algorithm for the weighted maximum coverage at each stage chooses a set that contains the maximum weight of uncovered elements. This algorithm achieves an approximation ratio of 1 - \frac{1}{e}.[1]
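
A corresponding sketch of the weighted rule, reusing the structure of the unweighted greedy above; the weights and instance are illustrative.

    def greedy_weighted_max_coverage(sets, weights, k):
        """Repeatedly pick the set whose uncovered elements have the largest total weight."""
        covered, chosen = set(), []
        for _ in range(k):
            gain = lambda s: sum(weights[e] for e in set(s) - covered)
            best = max(sets, key=gain, default=None)
            if best is None or gain(best) == 0:
                break  # no remaining set adds any new weight
            chosen.append(best)
            covered |= set(best)
        return chosen, covered

    # Example: element 5 is heavy, so {4, 5} is picked first, then {1, 2, 3}.
    print(greedy_weighted_max_coverage([{1, 2, 3}, {3, 4}, {4, 5}],
                                       {1: 1, 2: 1, 3: 1, 4: 1, 5: 10}, 2))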

Budgeted maximum coverage

In the budgeted maximum coverage version, not only does every element e_j have a weight w(e_j), but every set S_i also has a cost c(S_i). Instead of a number k limiting how many sets may be selected, a budget B is given, which limits the total cost of the sets that can be chosen.

maximize \sum_{e_j \in E} w(e_j) \cdot y_j . (maximizing the weighted sum of covered elements).
subject to  \sum{c(S_i) \cdot x_i}  \leq B ; (the cost of the selected sets cannot exceed B).
 \sum_{e_j \in S_i} x_i \geq y_j ; (if y_j = 1 then at least one selected set contains e_j).
y_j \in \{0,1\}; (if y_j=1 then e_j is covered)
x_i \in \{0,1\} (if x_i=1 then S_i is selected for the cover).

A greedy algorithm no longer carries a performance guarantee here; its worst-case behaviour can be arbitrarily far from the optimal solution. The approximation algorithm is therefore extended in the following way. First, after finding a solution with the greedy algorithm, return the better of the greedy solution and the single set of largest weight; call this the modified greedy algorithm. Second, starting from all possible families of sets of sizes from one to (at least) three, augment these solutions with the modified greedy algorithm. Third, return the best of all augmented solutions. This algorithm achieves an approximation ratio of 1 - 1/e, which is the best possible unless NP \subseteq DTIME(n^{O(\log\log n)}).[4]
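
As an illustration, the following sketch implements one common reading of the modified greedy step: run a ratio greedy (best uncovered weight per unit cost among sets that still fit the budget) and return the better of its solution and the single best affordable set. The enumeration over small families of sets is omitted, and all names and data are illustrative.

    def modified_greedy_budgeted(sets, weights, costs, B):
        """Better of (a) ratio greedy within budget B and (b) the best single affordable set."""
        def total(elements):
            return sum(weights[e] for e in elements)

        # (a) ratio greedy: repeatedly add the affordable set with the best
        #     uncovered-weight-to-cost ratio.
        covered, chosen, spent = set(), [], 0
        while True:
            candidates = [i for i in range(len(sets))
                          if i not in chosen and spent + costs[i] <= B and set(sets[i]) - covered]
            if not candidates:
                break
            best = max(candidates, key=lambda i: total(set(sets[i]) - covered) / costs[i])
            chosen.append(best)
            covered |= set(sets[best])
            spent += costs[best]
        greedy_value = total(covered)

        # (b) the single set of largest weight that fits the budget on its own.
        singles = [i for i in range(len(sets)) if costs[i] <= B]
        single = max(singles, key=lambda i: total(sets[i]), default=None)
        single_value = total(sets[single]) if single is not None else 0

        return ([single], single_value) if single_value > greedy_value else (chosen, greedy_value)

    # Example: budget B = 10 with set costs [6, 5, 5].
    print(modified_greedy_budgeted([{1, 2, 3}, {3, 4}, {4, 5}],
                                   {1: 1, 2: 1, 3: 1, 4: 1, 5: 10}, [6, 5, 5], 10))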

Generalized maximum coverage

In the generalized maximum coverage version every set S_i has a cost c(S_i), and every element e_j has a weight and cost that depend on which set covers it. Namely, if e_j is covered by set S_i, the weight of e_j is w_i(e_j) and its cost is c_i(e_j). A budget B is given for the total cost of the solution.

maximize \sum_{e_j \in E, S_i} w_i(e_j) \cdot y_{ij} . (maximizing the weighted sum of covered elements in the sets in which they are covered).
subject to  \sum{c_i(e_j) \cdot y_{ij}} + \sum{c(S_i) \cdot x_i}  \leq B ; (the total cost of covering the elements plus the cost of the selected sets cannot exceed B).
 \sum_{i} y_{ij} \leq 1 ; (element e_j can be covered by at most one set).
 x_i \geq y_{ij} ; (if y_{ij} = 1 then S_i is selected).
y_{ij} \in \{0,1\} ; (if y_{ij}=1 then e_j is covered by set S_i)
x_i \in \{0,1\} (if x_i=1 then S_i is selected for the cover).

Generalized maximum coverage algorithm

The algorithm uses the concept of residual cost and residual weight. These are measured against a tentative solution: the residual cost (or weight) of an option is the difference between its cost (or weight) and the cost (or weight) already accounted for by the tentative solution.

The algorithm has several stages. First, find a solution using a greedy algorithm: in each iteration, add to the tentative solution the set that maximizes the residual weight of its elements divided by the residual cost of these elements together with the residual cost of the set. Second, compare the solution obtained in the first step to the best solution that uses only a small number of sets. Third, return the best of all examined solutions. This algorithm achieves an approximation ratio of 1 - 1/e - o(1).[5]
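
As a rough illustration of the selection rule, the sketch below computes a residual-weight to residual-cost ratio for one candidate set under one reading of the residual definitions above; it omits budget feasibility, the reassignment details, and the comparison stage, and all names and data are illustrative.

    def residual_ratio(candidate, tentative, w, c_elem, c_set):
        """Residual-weight / residual-cost ratio of one candidate set (illustrative reading).

        candidate: iterable of elements in the candidate set S_i.
        tentative: dict element -> (weight, cost) currently gained/paid for that element;
                   elements not yet covered are simply absent.
        w, c_elem: dicts element -> weight / cost if covered by the candidate set.
        c_set:     the cost c(S_i) of the candidate set itself.
        """
        gained = lambda e: tentative.get(e, (0, 0))
        # Elements whose weight would improve if re-covered by the candidate set.
        improved = [e for e in candidate if w[e] > gained(e)[0]]
        res_weight = sum(w[e] - gained(e)[0] for e in improved)
        res_cost = c_set + sum(c_elem[e] - gained(e)[1] for e in improved)
        return res_weight / res_cost if res_cost > 0 else float("inf")

    # Example: an empty tentative solution; the weights and costs are made-up numbers.
    print(residual_ratio({"a", "b"}, {}, {"a": 3, "b": 2}, {"a": 1, "b": 1}, 4))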

Notes

  1. G. L. Nemhauser, L. A. Wolsey and M. L. Fisher. "An analysis of approximations for maximizing submodular set functions I". Mathematical Programming 14 (1978), 265–294.
  2. Hochbaum, Dorit S. (1997). "Approximating Covering and Packing Problems: Set Cover, Vertex Cover, Independent Set, and Related Problems". In Hochbaum, Dorit S. (ed.), Approximation Algorithms for NP-Hard Problems. Boston: PWS Publishing Company. pp. 94–143. ISBN 0-534-94968-1.
  3. Feige, Uriel (July 1998). "A Threshold of ln n for Approximating Set Cover". Journal of the ACM 45 (4): 634–652. New York, NY, USA: Association for Computing Machinery. doi:10.1145/285055.285059. ISSN 0004-5411.
  4. Khuller, S., Moss, A. and Naor, J. (1999). "The budgeted maximum coverage problem". Information Processing Letters 70 (1): 39–45.
  5. Cohen, R. and Katzir, L. (2008). "The Generalized Maximum Coverage Problem". Information Processing Letters 108 (1): 15–22.
