On the behavior of convolutional nets for feature extraction
Cite as:
hdl:2117/115533
Document type: Article
Defense date: 2018-03
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Abstract
Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors previously learnt by the CNN after processing millions of images, without requiring an expensive training phase. Contributions to this field (commonly known as feature representation transfer or transfer learning) have been purely empirical so far, extracting all CNN features from a single layer close to the output and testing their performance by feeding them to a classifier. This approach has provided consistent results, although its relevance is limited to classification tasks. Taking a completely different approach, in this paper we statistically measure the discriminative power of every single feature found within a deep CNN, when used for characterizing every class of 11 datasets. We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning. Our results confirm that low and middle level features may behave differently from high level features, but only under certain conditions. We find that all CNN features can be used for knowledge representation purposes both by their presence and by their absence, doubling the information a single CNN feature may provide. We also study how much noise these features may include, and propose a thresholding approach to discard most of it. All these insights have a direct application to the generation of CNN embedding spaces.
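To make the feature-extraction setting described above concrete, the following is a minimal sketch of extracting activations from every convolutional layer of a pretrained CNN, pooling each feature map to a single value, and binarizing each feature by presence or absence with a simple threshold. The choice of network (torchvision's VGG-16), the spatial average pooling, the input file `example.jpg`, and the 0.5 threshold are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a CNN pretrained on ImageNet (VGG-16 here; an illustrative choice).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

# Collect one vector per convolutional layer via forward hooks.
activations = []

def hook(_module, _input, output):
    # Average each feature map over its spatial dimensions: one value per filter.
    activations.append(output.mean(dim=(2, 3)))

for layer in model.features:
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(hook)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical input file
with torch.no_grad():
    model(image)

# Concatenate the per-layer vectors into one descriptor for the image,
# then binarize by presence/absence (the 0.5 threshold is arbitrary here).
features = torch.cat(activations, dim=1)
present = (features > 0.5).float()
print(features.shape, int(present.sum().item()), "features marked as present")
```

Repeating this over all images of a class yields, per feature, a distribution of pooled activations whose discriminative power can then be measured, which is the kind of per-feature statistical analysis the paper performs.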
Citation: García-Gasulla, D., Parés, F., Vilalta, A., Moreno, J., Ayguadé, E., Labarta, J., Cortés, U., Suzumura, T. On the behavior of convolutional nets for feature extraction. "Journal of Artificial Intelligence Research", March 2018, vol. 61, p. 563-592.
ISSN: 1076-9757
Publisher version: https://www.jair.org/index.php/jair/article/view/11184
Collections
- Doctorat en Intel·ligència Artificial - Articles de revista [50]
- Computer Sciences - Articles de revista [341]
- Departament d'Arquitectura de Computadors - Articles de revista [1.098]
- Departament de Ciències de la Computació - Articles de revista [1.083]
- KEMLG - Grup d'Enginyeria del Coneixement i Aprenentatge Automàtic - Articles de revista [124]
- CAP - Grup de Computació d'Altes Prestacions - Articles de revista [382]
Files | Description | Size | Format | View
---|---|---|---|---
On the Behavior ... for Feature Extraction.pdf | | 9,072Mb | PDF | View/Open