Establishing links between image segmentation and deep learning interpretability methods
Cite as: hdl:2117/371827
Carried out at/with: Amrita Vishwa Vidyapeetham
Document type: Official master's degree final project (TFM)
Date: 2022-07-15
Access conditions: Restricted access by decision of the author
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to existing legal exemptions, its reproduction, distribution, public communication or transformation without the authorization of the rights holder is prohibited.
Abstract
Traditional machine learning methods segment an image into different regions on the basis of pixel attributes; among the best-known approaches are clustering and thresholding algorithms. Convolutional neural networks (CNNs) are designed to imitate the human visual cortex by applying convolutional filters, organized into layers, to the input images, thereby extracting deep features. Because it is unknown what goes on inside the model, this process is often described as "black box" behaviour, and deep learning (DL) interpretability methods have been introduced that highlight the parts of an image that are important for the algorithm's classification decision. Experiments show that it is possible to establish links between the segmented regions extracted by traditional methods and the deep features extracted by CNNs. These results depend heavily on the DL interpretability method used and on the type of dataset. This extended TFM was carried out in cooperation with a research group at Amrita University's school of medicine in India that focuses on biomedical image processing and computer vision. The collaboration started in November 2021.
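The thesis does not publish its pipeline here, so the following is only a minimal, illustrative sketch of the kind of comparison described in the abstract: a grayscale image is segmented with Otsu thresholding, a crude occlusion-based saliency map is computed for an arbitrary classifier `predict_fn` (a stand-in for a CNN and for a real interpretability method such as Grad-CAM), and the overlap between the two binary masks is scored with intersection-over-union. All names, the toy classifier, and the IoU criterion are assumptions made for demonstration, not the author's actual method.

```python
# Illustrative sketch only: thresholding segmentation vs. a saliency mask,
# compared by intersection-over-union. Not the thesis's actual pipeline.
import numpy as np
from skimage.filters import threshold_otsu


def segment_by_threshold(image: np.ndarray) -> np.ndarray:
    """Binary segmentation of a grayscale image via Otsu's threshold."""
    return image > threshold_otsu(image)


def occlusion_saliency(image: np.ndarray, predict_fn, patch: int = 8) -> np.ndarray:
    """Crude saliency map: score drop when each patch is occluded with the mean value."""
    base_score = predict_fn(image)
    saliency = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()
            saliency[y:y + patch, x:x + patch] = base_score - predict_fn(occluded)
    return saliency


def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / union if union else 0.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    image[16:48, 16:48] += 1.0  # a bright square acts as the "object"

    # Toy classifier: mean intensity of the central region (stand-in for a CNN score).
    predict_fn = lambda img: img[16:48, 16:48].mean()

    seg_mask = segment_by_threshold(image)
    sal_map = occlusion_saliency(image, predict_fn)
    sal_mask = sal_map > sal_map.mean()  # binarise the saliency map

    print(f"IoU between segmentation and saliency masks: {mask_iou(seg_mask, sal_mask):.2f}")
```

A higher IoU under such a criterion would indicate that the regions the classifier relies on coincide with the regions a traditional segmentation method extracts, which is the type of link the abstract refers to.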
Files | Description | Size | Format | View
---|---|---|---|---
extended-tfm-c8ck3nn68-benjamin-tezcan.pdf | | 4,272Mb | | Restricted access