dc.contributor.author | Rey-Arena, Manuel |
dc.contributor.author | Guirado, Emilio |
dc.contributor.author | Tabik, Siham |
dc.contributor.author | Ruiz Hidalgo, Javier |
dc.contributor.other | Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions |
dc.date.accessioned | 2020-09-18T12:53:58Z |
dc.date.available | 2022-06-30T00:28:14Z |
dc.date.issued | 2020-10 |
dc.identifier.citation | Rey-Arena, M. [et al.]. FuCiTNet: improving the generalization of deep learning networks by the fusion of learned class-inherent transformations. "Information fusion", October 2020, vol. 63, p. 188-195. |
dc.identifier.issn | 1566-2535 |
dc.identifier.uri | http://hdl.handle.net/2117/328939 |
dc.description | © 2020. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.description.abstract | It is widely known that very small datasets produce overfitting in Deep Neural Networks (DNNs), i.e., the network becomes highly biased to the data it has been trained on. This issue is often alleviated using transfer learning, regularization techniques and/or data augmentation. This work presents a new approach, independent of but complementary to the previously mentioned techniques, for improving the generalization of DNNs on very small datasets in which the involved classes share many visual features. The proposed model, called FuCiTNet (Fusion Class inherent Transformations Network), inspired by GANs, creates as many generators as classes in the problem. Each generator, k, learns the transformations that bring the input image into the k-class domain. We introduce a classification loss in the generators to drive the learning of specific k-class transformations. Our experiments demonstrate that the proposed transformations improve the generalization of the classification model on three diverse datasets. |
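The abstract describes the overall idea: one generator per class, a classification loss that drives each generator toward its class domain, and a fusion of the resulting predictions. The sketch below illustrates that idea in PyTorch under stated assumptions; the module sizes, the content-loss term, the loss weighting, and the score-averaging fusion rule are illustrative guesses, not the authors' exact architecture or training objective.

```python
# Minimal sketch of the FuCiTNet idea from the abstract (assumed PyTorch).
# ClassGenerator, FuCiTNetSketch and the fusion/loss choices below are
# hypothetical names and design assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassGenerator(nn.Module):
    """Small residual generator G_k that nudges an image toward class k's domain."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # The learned transformation is applied as a residual on the input image.
        return x + self.body(x)


class FuCiTNetSketch(nn.Module):
    """As many generators as classes, plus a shared classifier (assumed layout)."""

    def __init__(self, num_classes: int, classifier: nn.Module):
        super().__init__()
        self.generators = nn.ModuleList(ClassGenerator() for _ in range(num_classes))
        self.classifier = classifier  # e.g. a torchvision ResNet adapted to num_classes

    def forward(self, x):
        # Fusion at inference: run the image through every class-specific
        # generator, classify each transformed image, and average the logits
        # (simple averaging is an assumption; the paper's fusion may differ).
        logits = torch.stack([self.classifier(g(x)) for g in self.generators])
        return logits.mean(dim=0)

    def generator_loss(self, x, alpha: float = 0.1):
        # Assumed training signal for the generators: a content term keeping
        # G_k(x) close to x, plus the classification loss toward class k that
        # the abstract says drives the k-class transformations.
        total = x.new_zeros(())
        for k, g in enumerate(self.generators):
            out = g(x)
            target = torch.full((x.size(0),), k, dtype=torch.long, device=x.device)
            total = total + F.mse_loss(out, x) + alpha * F.cross_entropy(self.classifier(out), target)
        return total
```

A usage sketch: build `FuCiTNetSketch(num_classes, classifier)` with a standard CNN classifier, alternate (or combine) optimizer steps on `generator_loss` and on the ordinary cross-entropy of the fused `forward` output, and use the fused logits at test time.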
dc.description.sponsorship | This work was partially supported by the Spanish Ministry of Science and Technology under project TIN2017-89517-P and project TEC2016-75976-R, financed by the Spanish Ministerio de Economía, Industria y Competitividad and the European Regional Development Fund (ERDF). S. Tabik was supported by the Ramon y Cajal Programme (RYC-2015-18136). E.G. was supported by the European Research Council (ERC Grant agreement 647038 [BIODESERT]), with additional support from Generalitat Valenciana (CIDEGENT/2018/041). |
dc.format.extent | 8 p. |
dc.language.iso | eng |
dc.publisher | Elsevier |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 Spain |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ |
dc.subject | Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Aprenentatge automàtic |
dc.subject | Àrees temàtiques de la UPC::Enginyeria de la telecomunicació::Telemàtica i xarxes d'ordinadors |
dc.subject.lcsh | Machine learning |
dc.subject.lcsh | Neural networks (Computer science) |
dc.subject.other | Deep neural networks |
dc.subject.other | Generalization |
dc.subject.other | Pre-processing |
dc.subject.other | Transformation |
dc.subject.other | GANs (Generative Adversarial Networks) |
dc.subject.other | Classification |
dc.subject.other | Small dataset |
dc.title | FuCiTNet: improving the generalization of deep learning networks by the fusion of learned class-inherent transformations |
dc.type | Article |
dc.subject.lemac | Aprenentatge automàtic |
dc.subject.lemac | Xarxes neuronals (Informàtica) |
dc.contributor.group | Universitat Politècnica de Catalunya. GPI - Grup de Processament d'Imatge i Vídeo |
dc.identifier.doi | 10.1016/j.inffus.2020.06.015 |
dc.description.peerreviewed | Peer Reviewed |
dc.relation.publisherversion | https://www.sciencedirect.com/science/article/abs/pii/S1566253520303122 |
dc.rights.access | Open Access |
local.identifier.drac | 28853434 |
dc.description.version | Postprint (author's final draft) |
dc.relation.projectid | info:eu-repo/grantAgreement/MINECO/1PE/TEC2016-75976-R |
local.citation.author | Rey-Arena, M.; Guirado, E.; Tabik, S.; Ruiz-Hidalgo, J. |
local.citation.publicationName | Information fusion |
local.citation.volume | 63 |
local.citation.startingPage | 188 |
local.citation.endingPage | 195 |