Visual interpretability of deep learning algorithms in medical applications

Cite as: hdl:2117/349401
Document type: Master thesis
Date: 2020-09
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.
Abstract
Artificial intelligence has experienced a major boost in recent years, and following the success of deep learning algorithms across many applications, they are now delivering strong results in medical imaging as well, largely thanks to the performance of convolutional neural networks. However, the black-box behaviour of these networks makes it difficult to entrust them with tasks normally performed by a human expert. This project aims to interpret, in human terms, what a convolutional neural network trained to classify different fetal ultrasound planes bases its decisions on. We use transfer learning to build a network that performs well on the classification task and apply interpretability techniques to it. These methods include Activation Maximization, Saliency Maps, Occlusion Sensitivity Maps, Class Activation Mapping and LIME. The trained network classifies fetal ultrasound images with an accuracy of 91.7%, and we provide a robust interpretation of its behaviour that lets us understand which characteristics of each class matter most to the model.
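The full methodology is in the attached PDF; as an illustration of the kind of technique the abstract lists, the sketch below shows how a gradient-based saliency map (one of the methods named above) is typically computed. It assumes a PyTorch model; the ResNet-18 stand-in, the random input tensor and the class index are placeholders for illustration, not the network or data used in the thesis.

```python
# Minimal sketch of a gradient-based saliency map, assuming a PyTorch
# model. The ResNet-18 stand-in, random input and class index are
# placeholders, not the fine-tuned network or data from the thesis.
import torch
from torchvision import models

model = models.resnet18(weights=None)   # stand-in for the fine-tuned classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy input image
target_class = 0                        # hypothetical fetal-plane class index

logits = model(image)                   # forward pass
logits[0, target_class].backward()      # gradient of the class score w.r.t. pixels

# Per-pixel saliency: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                   # torch.Size([224, 224])
```

High-saliency pixels are those whose small changes most affect the class score, which is what lets the maps in the thesis highlight the anatomical structures each fetal-plane class relies on.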
Subjects: Machine learning, Artificial intelligence, Neural networks (Computer science)
Degree: MÀSTER UNIVERSITARI EN FÍSICA PER A L'ENGINYERIA (Pla 2018)
Files | Description | Size | Format
---|---|---|---
MASTER_THESIS_christian.pdf | | 5.237 MB | PDF