Paper Title
Self-Optimizing Feature Transformation
Paper Authors
Paper Abstract
Feature transformation aims to extract a good representation (feature) space by mathematically transforming existing features. It is crucial for addressing the curse of dimensionality, enhancing model generalization, overcoming data sparsity, and expanding the applicability of classic models. Current research focuses on domain knowledge-based feature engineering or learning latent representations; however, these methods are not fully automated and cannot produce a traceable and optimal representation space. When rebuilding a feature space for a machine learning task, can these limitations be addressed concurrently? In this extension study, we present a self-optimizing framework for feature transformation. To achieve better performance, we improve the preliminary work by (1) obtaining an advanced state representation so that reinforced agents can better comprehend the current feature set; and (2) resolving Q-value overestimation so that reinforced agents learn unbiased and effective policies. Finally, to make the experiments more convincing than in the preliminary work, we add an outlier detection task with five datasets, evaluate various state representation approaches, and compare different training strategies. Extensive experiments and case studies demonstrate the effectiveness and superiority of our framework.
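For readers less familiar with the second improvement, the sketch below illustrates one common remedy for Q-value overestimation: a Double-DQN-style target in which the online network selects the next action and a separate, slowly updated target network evaluates it. This is a minimal, generic illustration under assumed PyTorch module names (`online_q`, `target_q`) and an assumed discount factor `gamma`; the exact correction used in the paper may differ.

```python
import torch

def double_q_target(reward, next_state, online_q, target_q, gamma=0.99):
    """Double-DQN-style target: decouple action selection from action
    evaluation to reduce the upward bias of max-based Q-targets.

    Assumes `online_q` and `target_q` map a batch of states to
    [batch, n_actions] Q-value tensors (hypothetical names).
    """
    with torch.no_grad():
        # Select the greedy next action with the online network.
        next_action = online_q(next_state).argmax(dim=1, keepdim=True)
        # Evaluate that action with the target network.
        next_value = target_q(next_state).gather(1, next_action).squeeze(1)
    # Bootstrapped target for the temporal-difference loss.
    return reward + gamma * next_value
```

In training, this target would replace the usual max-based target in the temporal-difference loss, so overestimated Q-values from the online network are less likely to propagate into the learned policy.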