Multi-task Deep Learning models for real-time deployment in embedded systems
Tutor / director / evaluator: Pardàs Feliu, Montse
Document type: Master thesis
Rights access: Open Access
Multitask Learning (MTL) was conceived as an approach to improve the generalization ability of machine learning models. When applied to neural networks, multitask models take advantage of shared resources to reduce total inference time, memory footprint and model size. We propose MTL as a way to speed up deep learning models for applications in which multiple tasks must be solved simultaneously, which is particularly useful in embedded, real-time systems such as those found in autonomous cars or UAVs. To study this approach, we apply MTL to a Computer Vision problem in which both Object Detection and Semantic Segmentation are solved, based on the Single Shot Multibox Detector and Fully Convolutional Networks with skip connections respectively, using a ResNet-50 as the base network. We train multitask models on two different datasets: Pascal VOC, which is used to validate the design decisions, and a combination of datasets with aerial-view images captured from UAVs. Finally, we analyse the challenges that arise when training multitask networks and try to overcome them. These challenges nevertheless prevent our multitask models from reaching the performance of the best single-task models, which are trained without the constraints imposed by MTL. Even so, multitask networks benefit from sharing resources: they are 1.6x faster, lighter and use less memory than deploying the single-task models in parallel, which becomes essential when running on a Jetson TX1 SoC, where the parallel approach does not fit into memory. We conclude that MTL has the potential to deliver superior performance on the object detection and semantic segmentation tasks, in exchange for a more complex training process that requires overcoming challenges not present when training single-task models.
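The reported speedup over running the single-task models in parallel comes from computing the shared ResNet-50 backbone only once per frame. The back-of-the-envelope sketch below illustrates the mechanism; the per-stage costs are purely hypothetical (they are not measurements from the thesis), chosen only so the resulting ratio lands near the reported 1.6x figure.

```python
# Hypothetical per-frame inference costs in milliseconds.
# These numbers are illustrative assumptions, not measured values.
backbone = 30.0   # shared ResNet-50 feature extraction
det_head = 8.0    # SSD object-detection head
seg_head = 12.0   # FCN semantic-segmentation head (with skip connections)

# Two single-task models deployed in parallel each recompute the backbone.
single_task_total = (backbone + det_head) + (backbone + seg_head)

# The multitask model runs the backbone once, then both task heads.
multitask_total = backbone + det_head + seg_head

speedup = single_task_total / multitask_total
print(f"speedup: {speedup:.2f}x")  # prints "speedup: 1.60x" with these made-up costs
```

The same sharing argument applies to memory and model size: the backbone's weights and activations are stored once instead of twice, which is why the multitask model fits on the Jetson TX1 while the parallel single-task deployment does not.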