In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.

We say that an optimization problem $\Pi$ is direct product feasible if it is possible to efficiently aggregate any $k$ instances of $\Pi$ and form one large instance of $\Pi$ such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the $k$ smaller instances. Given a direct product feasible optimization problem $\Pi$, our hardness amplification theorem may be informally stated as follows:

If there is a distribution $\mathcal{D}$ over instances of $\Pi$ of size $n$ such that every randomized algorithm running in time $t(n)$ fails to solve $\Pi$ on a $1/\alpha(n)$ fraction of inputs sampled from $\mathcal{D}$, then, assuming certain relationships between $\alpha(n)$ and $t(n)$, there is a distribution $\mathcal{D}'$ over instances of $\Pi$ of size $O(n \cdot \alpha(n))$ such that every randomized algorithm running in time $t(n)/\mathrm{poly}(\alpha(n))$ fails to solve $\Pi$ on a $99/100$ fraction of inputs sampled from $\mathcal{D}'$.

As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
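To make the definition of direct product feasibility concrete, here is a minimal Python sketch for Max-SAT, where aggregation is the disjoint union of CNF formulas on shifted variable indices. The DIMACS-style encoding and the helper names `aggregate` and `split` are illustrative choices of ours, not the paper's notation; the sketch only shows why an optimal assignment to the combined formula projects back to optimal assignments for the $k$ sub-instances, and is not claimed to be the paper's exact construction.

```python
# Illustrative sketch of direct product feasibility for Max-SAT (assumed
# encoding: each instance is a pair (n_vars, clauses), each clause a list of
# nonzero integer literals over variables 1..n_vars, as in DIMACS format).
from typing import List, Tuple

Clause = List[int]                    # e.g. [1, -3] means (x1 OR NOT x3)
Instance = Tuple[int, List[Clause]]   # (number of variables, clauses)

def aggregate(instances: List[Instance]) -> Instance:
    """Combine k Max-SAT instances into one instance on disjoint variables."""
    clauses: List[Clause] = []
    offset = 0
    for n_vars, cls in instances:
        # Shift each literal by the running offset so variable sets are disjoint.
        clauses.extend(
            [lit + offset if lit > 0 else lit - offset for lit in c]
            for c in cls
        )
        offset += n_vars
    return offset, clauses

def split(instances: List[Instance], assignment: List[bool]) -> List[List[bool]]:
    """Project an optimal assignment of the big instance onto each small one.

    Because the variable sets are disjoint, the number of satisfied clauses
    decomposes as a sum over the k sub-instances, so the restriction of an
    optimal assignment to each block is optimal for that block.
    """
    solutions, offset = [], 0
    for n_vars, _ in instances:
        solutions.append(assignment[offset:offset + n_vars])
        offset += n_vars
    return solutions

# Example: two tiny instances; variable x1 of the second becomes x3 overall.
f1 = (2, [[1, 2], [-1]])
f2 = (1, [[1]])
big = aggregate([f1, f2])             # (3, [[1, 2], [-1], [3]])
parts = split([f1, f2], [True, True, True])  # [[True, True], [True]]
```

Both directions of the definition are efficient here: `aggregate` runs in time linear in the total instance size, and `split` is a simple slicing of the assignment, matching the requirement that solutions to the small instances be recoverable efficiently.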