Journal: Electronic Colloquium on Computational Complexity
Print ISSN: 1433-8092
Year: 2012
Volume: 2012
Publisher: Universität Trier, Lehrstuhl für Theoretische Informatik
Abstract: We address the following problem: how to execute any algorithm P, for an unbounded number of executions, in the presence of an adversary who observes partial information on the internal state of the computation during executions. The security guarantee is that the adversary learns nothing beyond P's input/output behavior.
This general problem is important for running cryptographic algorithms in the presence of side-channel attacks, as well as for running non-cryptographic algorithms, such as a proprietary search algorithm or a game, on a cloud server where parts of the execution's internals might be observed.
Our main result is a compiler, which takes as input an algorithm P and a security parameter k, and produces a functionally equivalent algorithm P'. The running time of P' is a factor of poly(k) slower than that of P and is composed of a series of calls to poly(k)-time computable sub-algorithms. During the executions of P', an adversary algorithm Adv, which can choose the inputs of P', can learn the results of adaptively chosen leakage functions -- each of bounded output size -- on the sub-algorithms of P' and the randomness they use.
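To make this interface concrete, here is a minimal, hypothetical sketch in Python of the compiled program's structure and the bounded-output leakage model. The names (SubAlgorithm, run_compiled, LEAKAGE_BOUND_BITS) and the one-leakage-query-per-sub-algorithm simplification are our own illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch of the leakage model: P' is a sequence of sub-algorithms,
# and the adversary gets a bounded-output leakage function on each sub-step's
# internal state and randomness. Names and structure are assumptions.
import secrets
from typing import Callable, List

LEAKAGE_BOUND_BITS = 16  # assumed bound on each leakage query's output size


class SubAlgorithm:
    """One poly(k)-time sub-computation of the compiled program P'."""

    def __init__(self, step: Callable[[bytes, bytes], bytes]):
        self.step = step
        self.state = b""        # internal state, visible only through leakage
        self.randomness = b""   # fresh randomness used by this sub-step

    def run(self, message: bytes) -> bytes:
        self.randomness = secrets.token_bytes(32)
        self.state = self.step(message, self.randomness)
        return self.state

    def leak(self, f: Callable[[bytes, bytes], bytes]) -> bytes:
        """Adaptively chosen leakage on this sub-algorithm's state and
        randomness, truncated to the output-size bound."""
        out = f(self.state, self.randomness)
        return out[: LEAKAGE_BOUND_BITS // 8]


def run_compiled(subalgs: List[SubAlgorithm], x: bytes,
                 leakage_queries: List[Callable[[bytes, bytes], bytes]]) -> bytes:
    """Execute P' on input x, answering one bounded leakage query per
    sub-algorithm (a simplification of the model described above)."""
    msg = x
    for sub, f in zip(subalgs, leakage_queries):
        msg = sub.run(msg)
        _ = sub.leak(f)   # the adversary observes only this bounded output
    return msg            # the input/output behavior matches P
```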
We prove that any computationally unbounded Adv, observing the results of computationally unbounded leakage functions, learns no more from its observations than it could have learned given only black-box access to the input-output behavior of P. This result is unconditional and does not rely on any secure hardware components.
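In simulation-based terms, the guarantee can be read roughly as follows; this is a schematic formulation of our reading of the abstract, not the paper's exact definition.

```latex
% Schematic simulation-based guarantee: for every (computationally unbounded)
% adversary Adv choosing inputs and leakage functions, there is a simulator Sim
% with only black-box access to P whose output is indistinguishable from Adv's
% view of the executions of P'.
\forall\, \mathsf{Adv} \;\; \exists\, \mathsf{Sim} : \quad
\mathrm{view}_{\mathsf{Adv}}\!\left(P'(x_1),\dots,P'(x_t)\right)
\;\approx\;
\mathsf{Sim}^{P(\cdot)}(x_1,\dots,x_t)
```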