Publisher: The Japanese Society for Artificial Intelligence
Abstract: Support Vector Machines, when combined with kernels, achieve state-of-the-art accuracy on many datasets. However, their use in many real-world applications is hindered by the fact that their model size is often too large and their prediction function too expensive to evaluate. In this paper, to address these issues, we are interested in the problem of learning non-linear classifiers with a sparsity constraint. We first define an L1-regularized convex objective and show how to optimize it without constraints. Next, we show how our approach can be naturally extended to incorporate a sparsity constraint through constrained model selection. Experiments show that, compared to SVMs, our approach leads to much more parsimonious models with comparable or better accuracy.
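For concreteness, a minimal sketch of the kind of objective the abstract describes, assuming a kernel expansion f(x) = Σ_j β_j k(x_j, x) over the n training points with some convex loss ℓ (the exact loss and parameterization used in the paper may differ):

    \min_{\beta \in \mathbb{R}^n} \; \sum_{i=1}^{n} \ell\!\left(y_i, \sum_{j=1}^{n} \beta_j\, k(x_j, x_i)\right) + \lambda \|\beta\|_1

Here the L1 penalty λ‖β‖₁ drives many coefficients β_j to exactly zero, so only a small subset of training points remains in the prediction function, which is what reduces model size and evaluation cost relative to a standard kernel SVM.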