Paper Title
An in-depth comparison of methods handling mixed-attribute data for general fuzzy min-max neural network
Paper Authors
Paper Abstract
The general fuzzy min-max (GFMM) neural network is an efficient neuro-fuzzy system for classification problems. However, a disadvantage of most current GFMM learning algorithms is that they can effectively handle only numerical-valued features. Therefore, this paper presents several potential approaches to adapting GFMM learning algorithms for classification problems with mixed-type or purely categorical features, as such features are very common in practical applications and often carry useful information. We compare and assess three main methods of handling datasets with mixed features: the use of categorical encoding methods, the combination of the GFMM model with other classifiers, and the employment of learning algorithms designed for both types of features. The experimental results show that target and James-Stein encoding are appropriate categorical encoding methods for the learning algorithms of GFMM models, while the combination of GFMM neural networks and decision trees is a flexible way to enhance the classification performance of GFMM models on datasets with mixed features. Learning algorithms that natively support mixed-type features are a potential approach to handling mixed-attribute data in a natural way, but they need further improvement to achieve better classification accuracy. Based on this analysis, we also identify the strengths and weaknesses of the different methods and propose potential research directions.
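To make the first approach concrete, the sketch below shows how categorical columns can be converted to numerical values with target or James-Stein encoding before training a classifier. This is a minimal illustration, not the paper's implementation: it assumes the third-party `category_encoders` package is installed, uses a toy dataset, and substitutes scikit-learn's DecisionTreeClassifier as a stand-in for a GFMM model, since no GFMM implementation is given in the source.

```python
# Minimal sketch of the categorical-encoding approach (assumptions noted above).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier  # stand-in for a GFMM classifier
import category_encoders as ce

# Toy mixed-attribute dataset: one numerical and one categorical feature.
X = pd.DataFrame({
    "num_feat": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "cat_feat": ["red", "blue", "red", "green", "blue", "green"],
})
y = pd.Series([0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y
)

# Target encoding replaces each category with a statistic of the class label,
# producing purely numerical inputs; James-Stein encoding works analogously.
encoder = ce.TargetEncoder(cols=["cat_feat"])   # or ce.JamesSteinEncoder(cols=["cat_feat"])
X_train_enc = encoder.fit_transform(X_train, y_train)
X_test_enc = encoder.transform(X_test)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train_enc, y_train)
print("Test accuracy:", clf.score(X_test_enc, y_test))
```

In practice, the encoder would be fitted on the training split only (as above) to avoid leaking label information into the test set, and the stand-in classifier would be replaced by whichever GFMM learning algorithm is being evaluated.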