Paper Title
Using Fitness Dependent Optimizer for Training Multi-layer Perceptron
Paper Authors
Paper Abstract
This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has been verified, and its performance confirmed in both the exploration and exploitation stages, using standard measurements; this motivated us to evaluate the algorithm's performance in training multilayer perceptron neural networks (MLPs). The study combines FDO with an MLP (denoted FDO-MLP) to optimize the network's weights and biases for predicting student outcomes. The approach can improve the learning system with respect to students' educational background, in addition to raising their achievement. The experimental results are validated by comparison with the Back-Propagation algorithm (BP) and several evolutionary models: FDO with a cascade MLP (FDO-CMLP), the Grey Wolf Optimizer combined with an MLP (GWO-MLP), modified GWO combined with an MLP (MGWO-MLP), GWO with a cascade MLP (GWO-CMLP), and modified GWO with a cascade MLP (MGWO-CMLP). Both qualitative and quantitative results show that the proposed approach, using FDO as the trainer, outperforms the other approaches on this dataset in terms of convergence speed and local-optima avoidance. The proposed FDO-MLP approach achieves a classification rate of 0.97.
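The core idea behind this family of evolutionary MLP trainers — flattening all of the network's weights and biases into a single candidate vector and scoring each candidate by its prediction error — can be sketched as below. This is a minimal illustration, not the authors' implementation: the update rule is a heavily simplified fitness-dependent step (a pace toward the best candidate scaled by a fitness-weight ratio, loosely in the spirit of FDO), and the toy dataset, layer sizes, and hyperparameters are all assumptions standing in for the student-outcome data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (assumption: stands in for the student dataset)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(float)

n_in, n_hid, n_out = 4, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # all weights + biases

def unpack(v):
    """Slice a flat candidate vector back into MLP weight matrices and biases."""
    i = 0
    W1 = v[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = v[i:i + n_hid]; i += n_hid
    W2 = v[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = v[i:i + n_out]
    return W1, b1, W2, b2

def forward(v, X):
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def fitness(v):
    """Mean squared error of the network encoded by v (lower is better)."""
    return float(np.mean((forward(v, X).ravel() - y) ** 2))

# Population-based search: each agent is one full set of MLP parameters.
pop = rng.normal(scale=0.5, size=(30, dim))
fits = np.array([fitness(p) for p in pop])
best = pop[fits.argmin()].copy()

for it in range(200):
    for i in range(len(pop)):
        # Fitness-weight-style step (simplified from FDO): move toward the
        # best solution with a pace scaled by the best/own fitness ratio,
        # plus a small random walk.
        fw = fits.min() / (fits[i] + 1e-12)
        r = rng.uniform(-1, 1, size=dim)
        cand = pop[i] + (best - pop[i]) * fw + r * 0.1
        f = fitness(cand)
        if f < fits[i]:                 # greedy acceptance
            pop[i], fits[i] = cand, f
            if f < fitness(best):
                best = cand.copy()

acc = np.mean((forward(best, X).ravel() > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The gradient-free loop is what lets such trainers escape the local optima that Back-Propagation can get stuck in, at the cost of many more fitness (i.e. forward-pass) evaluations per update.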