Feature selection has several potentially beneficial uses in machine learning, such as improving the performance of the learning method by removing noisy features, reducing the set of features that must be collected, and aiding understanding of the data. In this report we present how to use empirical alignment, a well-known measure of how well a kernel fits the data labels, to perform feature selection for support vector machines. We show that this measure improves on the results obtained with other widely used feature selection measures (such as information gain or correlation) on linearly separable problems. We also show how alignment can successfully select relevant features in non-linearly separable problems when using support vector machines.
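To make the idea concrete, here is a minimal sketch of scoring features by empirical alignment with the ideal kernel yyᵀ, using the standard alignment definition A(K, yyᵀ) = ⟨K, yyᵀ⟩_F / (‖K‖_F ‖yyᵀ‖_F) and a single-feature linear kernel per feature. The function names and the toy data are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def alignment(K, y):
    # Empirical alignment between a kernel matrix K and the ideal
    # kernel y y^T: A = <K, yy^T>_F / (||K||_F * ||yy^T||_F).
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def feature_alignment_scores(X, y):
    # Hypothetical scorer: rank each feature by the alignment of its
    # single-feature linear kernel x_j x_j^T with the ideal kernel.
    scores = []
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        K = xj @ xj.T
        scores.append(alignment(K, y))
    return np.array(scores)

# Toy usage: feature 0 mirrors the labels, feature 1 is pure noise,
# so feature 0 should receive the higher alignment score.
rng = np.random.default_rng(0)
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
X = np.column_stack([y + 0.1 * rng.standard_normal(6),
                     rng.standard_normal(6)])
scores = feature_alignment_scores(X, y)
```

For a single-feature linear kernel this score reduces to the squared cosine between the feature column and the label vector, so it lies in [0, 1], with values near 1 indicating a feature that separates the classes well on its own.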
Citation: Català, N., Martín, M. "Feature selection for support vector machines by alignment with ideal kernel". 2007.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder. If you wish to make any use of the work not provided for in the law, please contact: firstname.lastname@example.org