@article{Ramos-Pérez2022,
title = {When is resampling beneficial for feature selection with imbalanced wide data?},
author = {Ismael Ramos-Pérez and Álvar Arnaiz-González and Juan José Rodríguez and César García-Osorio},
url = {https://www.sciencedirect.com/science/article/pii/S0957417421013622},
doi = {10.1016/j.eswa.2021.116015},
issn = {0957-4174},
year = {2022},
date = {2022-02-01},
journal = {Expert Systems with Applications},
volume = {188},
pages = {116015},
abstract = {This paper studies the effects that combinations of balancing and feature selection techniques have on wide data (many more attributes than instances) when different classifiers are used. For this, an extensive study is done using 14 datasets, 3 balancing strategies, and 7 feature selection algorithms. The evaluation is carried out using 5 classification algorithms, analyzing the results for different percentages of selected features, and establishing the statistical significance using Bayesian tests.
Some general conclusions of the study are that it is better to use RUS before the feature selection, while ROS and SMOTE offer better results when applied afterwards. Additionally, specific results are also obtained depending on the classifier used, for example, for Gaussian SVM the best performance is obtained when the feature selection is done with SVM-RFE before balancing the data with RUS.},
keywords = {Feature selection, High dimensional data, Machine learning, Unbalanced, Very low sample size, Wide data},
pubstate = {published},
tppubtype = {article}
}