Abstract: In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the large instance, we can efficiently find optimal feasible solutions to all k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming certain relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of this theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
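To make the notion of direct product feasibility concrete, here is a minimal Python sketch for Max-SAT, one of the problems covered above. The aggregation shown (a disjoint union of clause sets over renamed variables) and all function names are illustrative assumptions chosen for clarity, not the paper's actual construction.

```python
def aggregate(instances):
    """Combine k Max-SAT instances into one large instance over
    disjoint variable sets.

    Each instance is a list of clauses; a clause is a list of nonzero
    ints whose absolute value names a variable (1..n) and whose sign
    gives the polarity. Returns the aggregated clause list together
    with per-instance variable offsets used to split a solution of
    the large instance back into solutions of the small ones.
    """
    big, offsets, offset = [], [], 0
    for clauses in instances:
        offsets.append(offset)
        n_vars = max(abs(lit) for clause in clauses for lit in clause)
        for clause in clauses:
            # Shift variable indices so this instance's variables are
            # disjoint from every other instance's variables.
            big.append([lit + offset if lit > 0 else lit - offset
                        for lit in clause])
        offset += n_vars
    return big, offsets


def split_solution(assignment, instances, offsets):
    """Map an optimal assignment of the aggregated instance (a dict
    variable -> bool) back to one assignment per original instance.

    Because the variable sets are disjoint, the number of satisfied
    clauses in the large instance is the sum over the k instances, so
    an optimal global assignment restricts to an optimal assignment
    on every piece -- exactly the recovery property that direct
    product feasibility demands.
    """
    solutions = []
    for clauses, off in zip(instances, offsets):
        n_vars = max(abs(lit) for clause in clauses for lit in clause)
        solutions.append({v: assignment[v + off]
                          for v in range(1, n_vars + 1)})
    return solutions
```

Feeding the output of aggregate to any exact Max-SAT solver and then applying split_solution recovers optimal assignments for every input instance, which is the efficiency-of-recovery requirement in the definition above.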
Keywords: hardness amplification; average-case complexity; direct product; optimization problems; fine-grained complexity; TFNP