Abstract: In this paper, we consider regularized learning schemes based on the l1-regularizer and the pinball loss in a data-dependent hypothesis space. The target is the error analysis of quantile regression learning. No regularity condition is imposed on the kernel function except continuity and boundedness. The graph-based semi-supervised algorithm introduces an extra error term, called the manifold error. New error bounds and convergence rates are explicitly derived with techniques based on the l1-empirical covering number and a boundedness-based error decomposition.
Keywords: Semi-Supervised Learning; Conditional Quantile Regression; l1-Regularizer; Manifold Regularizer; Pinball Loss
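For orientation, a minimal LaTeX sketch of the objects named in the abstract follows: the standard pinball loss $\rho_\tau$ at quantile level $\tau\in(0,1)$, and a generic coefficient-based objective combining the empirical pinball risk, an l1 penalty on the coefficients, and a graph-Laplacian (manifold) penalty. The symbols $m$, $u$, $c_j$, $K$, $L$, $\lambda$, $\gamma$ are illustrative placeholders; the paper's exact scheme may weight or normalize these terms differently.

% Standard pinball (quantile) loss at level tau in (0,1)
\[
  \rho_\tau(u) \;=\;
  \begin{cases}
    \tau\, u,        & u \ge 0,\\
    (\tau - 1)\, u,  & u < 0.
  \end{cases}
\]

% A generic coefficient-based, semi-supervised scheme of the type described
% in the abstract (sketch only): m labeled samples, u unlabeled samples,
% kernel K, graph Laplacian L, regularization parameters lambda and gamma.
\[
  f_{\mathbf{z},\lambda} \;=\; \sum_{j=1}^{m+u} c_j^{*}\, K(\cdot, x_j),
  \qquad
  \mathbf{c}^{*} \in \arg\min_{\mathbf{c}\in\mathbb{R}^{m+u}}
  \;\frac{1}{m}\sum_{i=1}^{m} \rho_\tau\!\bigl(y_i - f_{\mathbf{c}}(x_i)\bigr)
  \;+\; \lambda \sum_{j=1}^{m+u} |c_j|
  \;+\; \gamma\, \mathbf{f}_{\mathbf{c}}^{\top} L\, \mathbf{f}_{\mathbf{c}},
\]
where $\mathbf{f}_{\mathbf{c}} = \bigl(f_{\mathbf{c}}(x_1),\dots,f_{\mathbf{c}}(x_{m+u})\bigr)^{\top}$ collects the values of $f_{\mathbf{c}}$ on the labeled and unlabeled inputs; the Laplacian term is what produces the extra manifold error in the analysis.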