Journal: Journal of King Saud University - Computer and Information Sciences
Print ISSN: 1319-1578
Year: 2022
Volume: 34
Issue: 6
Pages: 2787-2797
DOI: 10.1016/j.jksuci.2020.11.005
Language: English
Publisher: Elsevier
Abstract: Training an Artificial Neural Network (ANN) is not trivial: it requires optimizing a set of weights and biases that grows dramatically with the capacity of the network, yielding hard optimization problems. Over recent decades, stochastic search algorithms have shown remarkable ability to address such problems. At the same time, many real-world problems suffer from class imbalance, where the distribution of data varies considerably among classes, introducing additional training bias and variance and degrading the performance of the learning algorithm. This paper introduces three stochastic metaheuristic algorithms for training the Multilayer Perceptron (MLP) neural network for imbalanced classification: Grey Wolf Optimization (GWO), Particle Swarm Optimization (PSO), and the Salp Swarm Algorithm (SSA). The proposed GWO-MLP, PSO-MLP, and SSA-MLP are trained with different objective functions (accuracy, f1-score, and g-mean) and evaluated on 10 benchmark imbalanced datasets. The results show an advantage for the f1-score and g-mean fitness functions over accuracy when the datasets are imbalanced.
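The sketch below illustrates, under stated assumptions, how a metaheuristic-trained MLP might score a candidate weight vector with the three fitness functions named in the abstract (accuracy, f1-score, and g-mean). The single-hidden-layer architecture, the weight-decoding scheme, and all function names (`mlp_forward`, `decode_weights`, `fitness`) are illustrative assumptions, not taken from the paper.

```python
# Minimal, hypothetical sketch (not the authors' implementation): scoring one
# candidate weight vector for a single-hidden-layer MLP with the accuracy,
# f1-score, or g-mean objectives mentioned in the abstract.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

def mlp_forward(X, W1, b1, W2, b2):
    """Forward pass of an assumed one-hidden-layer MLP with sigmoid activations."""
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return (out.ravel() >= 0.5).astype(int)  # hard binary predictions

def decode_weights(vector, n_inputs, n_hidden):
    """Unflatten a 1-D search-agent vector into MLP weights and biases."""
    i = 0
    W1 = vector[i:i + n_inputs * n_hidden].reshape(n_inputs, n_hidden)
    i += n_inputs * n_hidden
    b1 = vector[i:i + n_hidden]; i += n_hidden
    W2 = vector[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = vector[i:i + 1]
    return W1, b1, W2, b2

def fitness(vector, X, y, n_inputs, n_hidden, objective="gmean"):
    """Fitness of one candidate solution; a swarm optimizer (e.g. GWO/PSO/SSA)
    would maximize this over flattened vectors of length
    n_inputs*n_hidden + 2*n_hidden + 1."""
    y_pred = mlp_forward(X, *decode_weights(vector, n_inputs, n_hidden))
    if objective == "accuracy":
        return accuracy_score(y, y_pred)
    if objective == "f1":
        return f1_score(y, y_pred)
    # g-mean: geometric mean of sensitivity (recall of the positive class)
    # and specificity (recall of the negative class).
    sensitivity = recall_score(y, y_pred, pos_label=1)
    specificity = recall_score(y, y_pred, pos_label=0)
    return np.sqrt(sensitivity * specificity)
```

On imbalanced data, the g-mean and f1-score objectives penalize a candidate that predicts only the majority class, whereas plain accuracy can still reward it; this is consistent with the abstract's reported advantage of f1-score and g-mean over accuracy.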