Article Information

  • Title: A Comparative Analysis of Semi-Supervised Learning in Detecting Burst Header Packet Flooding Attack in Optical Burst Switching Network
  • Authors: Md. Kamrul Hossain; Md. Mokammel Haque; M. Ali Akber Dewan
  • Journal: Computers
  • eISSN: 2073-431X
  • Year: 2021
  • Volume: 10
  • Issue: 8
  • Pages: 95
  • DOI: 10.3390/computers10080095
  • Language: English
  • Publisher: MDPI Publishing
  • Abstract: This paper presents a comparative analysis of four semi-supervised machine learning (SSML) algorithms for detecting malicious nodes in an optical burst switching (OBS) network. The SSML approaches include a modified version of K-means clustering, a Gaussian mixture model (GMM), a classical self-training (ST) model, and a modified self-training (MST) model. All four approaches work in a semi-supervised fashion, while the MST additionally uses an ensemble of classifiers for the final decision-making. SSML approaches are particularly useful when only a limited amount of labeled data is available for training and validating the classification model: manual labeling of a large dataset is complex and time-consuming, and even more so for OBS network data. SSML leverages the unlabeled data to make better predictions than training on the small labeled set alone. We evaluated the performance of the four SSML approaches on two-class (Behaving, Not-behaving), three-class (Behaving, Not-behaving, Potentially Not-behaving), and four-class (No-Block, Block, NB-Wait, NB-No-Block) classification tasks using precision, recall, and F1 score. For the two-class classification, the K-means- and GMM-based approaches performed better than the others. For the three-class classification, the K-means and classical ST approaches performed better than the others. For the four-class classification, the MST showed the best performance. Finally, the SSML approaches were compared with two supervised learning (SL) based approaches. The comparison showed that the SSML-based approaches outperform the SL-based ones when only a small labeled dataset is available to train the classification models.
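
To illustrate the semi-supervised setting described in the abstract above, here is a minimal sketch of classical self-training (ST) in Python using scikit-learn's SelfTrainingClassifier. This is not the authors' implementation: the dataset is synthetic (a stand-in for OBS traffic features), and the 5% label fraction, SVC base classifier, and 0.9 confidence threshold are illustrative assumptions; the paper's modified K-means, GMM, and MST ensemble variants are not reproduced here.

```python
# Sketch of classical self-training for scarce-label classification.
# scikit-learn marks unlabeled samples with the label -1, which
# SelfTrainingClassifier then iteratively pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_fscore_support

# Synthetic stand-in for a two-class OBS problem
# (e.g., Behaving vs. Not-behaving nodes) -- an assumption.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Simulate scarce labels: keep only 5% of the training labels
# and mark the rest as unlabeled (-1).
rng = np.random.RandomState(0)
y_semi = np.full_like(y_train, -1)
labeled_idx = rng.choice(len(y_train), size=int(0.05 * len(y_train)),
                         replace=False)
y_semi[labeled_idx] = y_train[labeled_idx]

# Classical self-training: retrain the base classifier after
# pseudo-labeling high-confidence unlabeled samples each round.
base = SVC(probability=True, gamma="auto")
st_model = SelfTrainingClassifier(base, threshold=0.9)
st_model.fit(X_train, y_semi)

# Evaluate with the same metrics the paper uses.
y_pred = st_model.predict(X_test)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="macro")
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

The -1 labeling convention is the key design point: it lets the same feature matrix carry both labeled and unlabeled samples, so the self-trainer can exploit the unlabeled portion rather than discarding it, which is exactly the regime where the paper reports SSML outperforming supervised baselines.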