Series: CORE Discussion Papers / Center for Operations Research and Econometrics (UCL), Louvain
Year: 2011
Volume: 2011
Issue: 1
Publisher: Center for Operations Research and Econometrics (UCL), Louvain
Abstract: In this paper, we prove complexity bounds for methods of Convex Optimization
based only on computation of the function value. The search directions of our schemes
are normally distributed random Gaussian vectors. It appears that such methods usually
need at most n times more iterations than the standard gradient methods, where n is the
dimension of the space of variables. This conclusion is true both for nonsmooth and
smooth problems. For the latter class, we also present an accelerated scheme with the
expected rate of convergence O(n^2/k^2), where k is the iteration counter. For Stochastic
Optimization, we propose a zero-order scheme and justify its expected rate of
convergence O(n/k^(1/2)). We also give some bounds for the rate of convergence of the
random gradient-free methods to stationary points of nonconvex functions, both for
smooth and nonsmooth cases. Our theoretical results are supported by preliminary
computational experiments.
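The basic idea described in the abstract — estimating a descent direction from function values alone, using a random Gaussian vector and a finite difference — can be illustrated with a minimal sketch. This is an assumption-laden toy implementation, not the paper's exact algorithm: the function name, step size, and smoothing parameter below are illustrative choices.

```python
import numpy as np

def gradient_free_step(f, x, mu=1e-6, step=0.05, rng=None):
    """One iteration of a random gradient-free scheme (sketch):
    sample a Gaussian search direction u, estimate the directional
    derivative of f along u by a forward finite difference, and
    take a step along -u scaled by that estimate."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)      # Gaussian search direction
    g = (f(x + mu * u) - f(x)) / mu       # finite-difference estimate of <grad f, u>
    return x - step * g * u               # zero-order update: only f-values used

# Toy usage: minimize the quadratic f(x) = ||x||^2 / 2 in dimension n = 5.
f = lambda x: 0.5 * float(x @ x)
x = np.ones(5)
for _ in range(2000):
    x = gradient_free_step(f, x)
```

In expectation, the product of the finite difference and the Gaussian vector behaves like the true gradient, which is why such schemes typically pay only a factor of order n in iteration count relative to standard gradient methods.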
Keywords: convex optimization, stochastic optimization, derivative-free methods,
random methods, complexity bounds.