Emulate Laser Doppler Imager (LDI) with machine learning - making diagnosing burn wounds more accessible
Abstract
In this bachelor thesis, a method for assessing burn wounds was developed by emulating Laser Doppler Imaging (LDI) from ordinary Red-Green-Blue (RGB) photographs. Since traditional LDI devices are costly and complex, a deep learning model was proposed to translate digital images of burns into synthetic perfusion maps that indicate tissue viability and healing potential. The study began by gathering a dataset of paired RGB and LDI images from the Burn Wound Center of UZGen and then developing a customized data-alignment pipeline: each photograph was resized to a uniform resolution, its background was removed, and the remaining regions were aligned with the corresponding perfusion data through feature matching and geometric transformation. Poorly aligned or nearly uniform dark images were filtered out, so that only high-quality pairs entered the training pipeline. Next, the model architecture of A. Rozo et al. was replicated: an image-to-image translation approach that estimates burn healing times by approximating the LDI output from a digital image [1]. The architecture was designed to reflect the ordinal nature of the healing-time categories. The continuous perfusion values of LDI were discretized into six ordered classes, and this information was encoded through a U-Net structure with a pretrained VGG encoder. The network first produced six binary maps indicating whether each pixel's true perfusion exceeded successive thresholds, then combined them into a continuous map via a small convolutional refinement module. During training, a composite loss function balanced the ordinal classification error against the mean absolute error of the final map, guiding the model to respect both class order and precise perfusion values. Reproducing this published baseline confirmed the validity of the implementation, yielding a normalized mean absolute error of approximately 0.25.
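The ordinal encoding described above can be illustrated with a minimal NumPy sketch. The threshold values below are hypothetical placeholders (the thesis's actual cut-points over the normalized perfusion range may differ); the sketch only shows the mechanism: each of the six binary maps marks whether a pixel's perfusion exceeds one threshold, and summing the maps recovers a coarse ordinal index (the number of thresholds exceeded).

```python
import numpy as np

# Hypothetical thresholds over normalized perfusion in [0, 1];
# the actual cut-points used in the thesis may differ.
THRESHOLDS = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85])

def encode_ordinal(perfusion):
    """Turn a perfusion map (H, W) into six cumulative binary maps (6, H, W).

    Map k is 1 where the pixel's perfusion exceeds the k-th threshold,
    mirroring the ordinal targets the network is trained to predict.
    """
    return (perfusion[None, :, :] > THRESHOLDS[:, None, None]).astype(np.float32)

def decode_ordinal(binary_maps):
    """Collapse the six maps to a coarse ordinal index per pixel (0..6),
    i.e. the count of thresholds exceeded."""
    return binary_maps.sum(axis=0).astype(np.int64)

# Example on a tiny 2x2 perfusion map
perf = np.array([[0.05, 0.30],
                 [0.60, 0.90]])
maps = encode_ordinal(perf)        # shape (6, 2, 2)
classes = decode_ordinal(maps)     # [[0, 2], [4, 6]]
```

In the full model, the refinement module replaces the plain sum with learned convolutions that merge the six maps into a continuous perfusion estimate.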
Building on this foundation, the encoder was upgraded to an EfficientNet-B7 backbone and the decoder was extended accordingly. This enhancement reduced the test error to about 0.21, a sixteen percent improvement over the baseline, and also lowered the discrepancy between the colour distributions that reflect healing-time regions. However, overfitting was observed, likely because burn depth is hard to distinguish by sight alone, so more expressive models and novel approaches may be needed to capture better patterns and achieve better results. In conclusion, this work demonstrates that a deep learning image-to-image translation approach can estimate burn perfusion from simple RGB images, while leaving clear room for improvement before the estimates are fully reliable, potentially bringing LDI-like diagnostics to settings where the standard equipment is unavailable. To further boost performance, future work will explore better balancing of underrepresented healing-time classes, automatic burn segmentation to remove healthy skin, and prospective validation with larger multicentre datasets.
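The composite training objective described in the abstract (ordinal classification error balanced against the mean absolute error of the refined map) can be sketched as follows. This is a simplified NumPy illustration, not the thesis's exact implementation: the weighting factor `alpha` is a hypothetical parameter, and the ordinal term is shown as a plain per-pixel binary cross-entropy over the six threshold maps.

```python
import numpy as np

def composite_loss(pred_maps, pred_perf, true_maps, true_perf, alpha=0.5):
    """Composite loss: alpha * ordinal BCE + (1 - alpha) * MAE.

    pred_maps: (6, H, W) sigmoid outputs for the six ordinal binary maps.
    pred_perf: (H, W) refined continuous perfusion estimate.
    true_maps, true_perf: the corresponding ground-truth targets.
    alpha is a hypothetical balancing weight; the thesis's value may differ.
    """
    eps = 1e-7
    p = np.clip(pred_maps, eps, 1 - eps)
    # Ordinal term: binary cross-entropy over all six threshold maps
    bce = -np.mean(true_maps * np.log(p) + (1 - true_maps) * np.log(1 - p))
    # Regression term: mean absolute error of the final perfusion map
    mae = np.mean(np.abs(pred_perf - true_perf))
    return alpha * bce + (1 - alpha) * mae

# Perfect predictions drive the loss towards zero
t_maps = np.ones((6, 2, 2))
t_perf = np.full((2, 2), 0.5)
loss = composite_loss(t_maps, t_perf, t_maps, t_perf)
```

Balancing the two terms pushes the network to get both the discrete class order and the continuous perfusion values right, which is the behaviour the abstract attributes to the training objective.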