Abstract: It is known that kernel regularized online learning has the advantages of low complexity and simple calculations, but is accompanied by slow convergence and low accuracy. Such algorithms are often designed with the help of the gradient of the loss function, so the complexity of the loss may influence the convergence. In this paper, we show that, to some extent, strong convexity of the loss can increase the learning rates.
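As a minimal illustration of the setting (the update rule below is the standard kernel regularized online gradient descent scheme, written in notation assumed here rather than taken from the abstract, and not necessarily the exact algorithm of this paper), let $K$ be a Mercer kernel with reproducing kernel Hilbert space $\mathcal{H}_K$, let $(x_t, y_t)$ be the sample arriving at step $t$, let $\ell$ be the loss, $\eta_t > 0$ the step sizes, and $\lambda > 0$ the regularization parameter. The iterates are
$$
f_{t+1} = f_t - \eta_t\bigl(\ell'\bigl(f_t(x_t), y_t\bigr)\, K(x_t, \cdot) + \lambda f_t\bigr), \qquad f_1 = 0,
$$
where $\ell'$ denotes the derivative of $\ell$ with respect to its first argument. For merely convex losses, schemes of this type typically attain rates of order $O(1/\sqrt{t})$, whereas strong convexity of the loss permits step sizes $\eta_t$ of order $1/(\lambda t)$ and rates of order $O(1/t)$ up to logarithmic factors; this contrast is the standard mechanism behind the improvement claimed above.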