How concepts emerge in neural networks
Covenantee: Massachusetts Institute of Technology
Document type: Master thesis
Rights access: Restricted access - author's decision
Deep learning models, and computer vision systems in particular, have achieved impressive results in recent years. However, the interpretability and understanding of these models are still in their early stages. Interpretability can be approached from a low-level, filter-by-filter perspective, but the representations learned by neural networks encode much higher-level knowledge that must be approached semantically, with concepts in mind. The goal of this project is to investigate the concepts neural networks learn implicitly when trained in an unsupervised setting, with a special focus on the multimodal matching of words to visual objects and attributes. We study how these concepts can be detected, as well as how networks can be encouraged to learn more meaningful ones, providing both analytical insights and practical results.
To be defined at MIT.
Subjects: Neural networks (Computer science), Computer vision
Degree: Master's Degree in Telecommunications Engineering (2013 syllabus)
File: TFM Didac Suris.pdf (20.42 MB) - Restricted access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without the permission of the copyright holder.