Article Information

  • Title: A Generalization of Haussler's Convolution Kernel: Mapping Kernel
  • Authors: Kilho Shin; Tetsuji Kuboyama
  • Journal: 人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence)
  • Print ISSN: 1346-0714
  • Online ISSN: 1346-8030
  • Year: 2009
  • Volume: 24
  • Issue: 2
  • Pages: 263-271
  • DOI: 10.1527/tjsai.24.263
  • Publisher: The Japanese Society for Artificial Intelligence
  • Abstract: Haussler's convolution kernel provides a successful framework for engineering new positive semidefinite kernels and has been applied to a wide range of data types and applications. In this framework, each data object is represented as a finite set of finer-grained components. Haussler's convolution kernel then takes a pair of data objects as input and returns the sum of the values of a predetermined primitive positive semidefinite kernel evaluated over all possible pairs of components of the input objects. The mapping kernel that we introduce in this paper is a natural generalization of Haussler's convolution kernel, in that the input to the primitive kernel ranges over a predetermined subset rather than the entire cross product (a contrast illustrated by the sketch after this list). Although several instances of the mapping kernel appear in the literature, their positive semidefiniteness has been investigated on a case-by-case basis and, worse yet, has sometimes been incorrectly concluded. In fact, there exists a simple and easily checkable necessary and sufficient condition, generic in the sense that it allows the positive semidefiniteness of an arbitrary instance of the mapping kernel to be investigated. This paper presents this condition and proves its validity. In addition, we introduce two important instances of the mapping kernel, which we refer to as the size-of-index-structure-distribution kernel and the edit-cost-distribution kernel. Both are naturally derived from well-known (dis)similarity measures in the literature (the maximum agreement tree and the edit distance) and can reasonably be expected to improve the performance of the existing measures by evaluating their distributional features rather than their peak (maximum/minimum) features.
  • Keywords: machine learning; kernel method; convolution kernel
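
The abstract above contrasts Haussler's convolution kernel, which sums a primitive kernel over every pair of components, with the mapping kernel, which sums it only over a predetermined subset of pairs. The following is a minimal sketch of that contrast; it is not taken from the paper, and the component lists, the primitive kernel `k`, and the mapping `M` below are hypothetical placeholders chosen only for illustration.

```python
# Toy sketch (not from the paper): convolution kernel vs. mapping kernel.

def convolution_kernel(comps_x, comps_y, k):
    """Haussler's convolution kernel: sum k over ALL pairs of components."""
    return sum(k(cx, cy) for cx in comps_x for cy in comps_y)

def mapping_kernel(comps_x, comps_y, k, M):
    """Mapping kernel: sum k only over a predetermined subset M of pairs.

    M is assumed here to be a collection of (component, component) pairs,
    i.e. a subset of the full cross product comps_x x comps_y.
    """
    return sum(k(cx, cy) for (cx, cy) in M)

# Hypothetical data objects decomposed into numeric components, with a
# simple primitive positive semidefinite kernel k(a, b) = a * b.
x_components = [1.0, 2.0]
y_components = [3.0, 4.0]
k = lambda a, b: a * b

# The convolution kernel ranges over the entire cross product (4 pairs).
print(convolution_kernel(x_components, y_components, k))  # 21.0

# The mapping kernel ranges over a chosen subset of pairs only.
M = [(1.0, 3.0), (2.0, 4.0)]
print(mapping_kernel(x_components, y_components, k, M))   # 11.0
```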