Simulation of queueing networks for the validation of real-time processing systems
Tutor / director / evaluator: Salamí San Juan, Esther
Document type: Bachelor thesis
Rights access: Open Access
In recent years, the use of Unmanned Aerial Vehicles (UAVs, or drones) has increased considerably, mainly for remote sensing tasks. In most cases, the UAV collects data during the flight and these data are processed afterwards, in a post-processing phase that may take between one and two days before the final result is generated. In some applications, such as support tasks for forest fires or sea rescue, this delay is not acceptable. In these cases, the UAV can be equipped with a processing system that runs the algorithms on board and generates the result in real time. In this context, the need arises to determine, for each mission, the maximum data acquisition rate at which the system can operate, according to the algorithms involved and the resources (processing cores) available, while ensuring proper system performance.

A multiprocessing system can be interpreted as a network of shared resources that provides service to a large number of customers. Queueing theory provides the basis for analyzing the degree of service the system can deliver. For models in which both the inter-arrival time and the service time follow exponential distributions, the system can be solved analytically. In the more general case, where the times follow non-exponential distributions, it may be necessary to resort to simulation techniques to analyze the system. In our case, we have a queueing system in which the images act as incoming customers requesting service, and service is provided by the execution of the algorithms on the processing cores.

This research focuses on analyzing how the queues behave depending on how the system is modeled. For the evaluation, we concentrate on the average waiting time in queue. A small average waiting time in queue guarantees that the queues remain within manageable limits.
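For the exponential case mentioned above, the average waiting time in queue can be computed in closed form. As a minimal sketch (not taken from the thesis, which uses R for this step), the following Python function evaluates the standard Erlang C formula for an M/M/c system; the arrival rate, service rate, and number of cores in the example call are illustrative values, not figures from the thesis:

```python
from math import factorial

def mmc_wq(lam, mu, c):
    """Average waiting time in queue (Wq) for an M/M/c system.

    lam : arrival rate (customers per unit time)
    mu  : service rate of each of the c servers
    """
    a = lam / mu      # offered load in Erlangs
    rho = a / c       # per-server utilisation; must be < 1 for stability
    if rho >= 1:
        raise ValueError("unstable system: utilisation >= 1")
    # Erlang C: probability that an arriving customer has to wait
    partial_sum = sum(a**k / factorial(k) for k in range(c))
    p_wait = (a**c / factorial(c)) / ((1 - rho) * partial_sum
                                      + a**c / factorial(c))
    # Mean wait of all customers (those who wait and those who do not)
    return p_wait / (c * mu - lam)

# Illustrative example: images arrive at 0.8 img/s, each of 2 cores
# processes 0.5 img/s on average
print(mmc_wq(0.8, 0.5, 2))
```

For `c = 1` the formula reduces to the familiar M/M/1 result `Wq = lam / (mu * (mu - lam))`, which is a quick sanity check on the implementation.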
However, a very small value may also indicate an inefficient use of resources. By contrast, a large average waiting time in queue indicates many customers waiting; beyond a certain limit, the system becomes unstable.

To this end, we begin by studying the behavior of five processing algorithms under exponential distributions. The system has been solved in two ways: analytically (using the R programming language) and through simulation (using the OMNeT++ framework), with the aim of showing that the simulated values agree with the theoretical ones and thus validating the simulation environment used throughout the project.

Nevertheless, when a real case is evaluated, both the inter-arrival time and the execution time of the algorithms turn out to follow distributions that are far from exponential. This means that a schedule based on the exponential model may be too conservative, preventing the system from processing images at its full capacity. For this reason, the parameters of the service-time probability distribution must be fitted, so that each algorithm is assigned the distribution that best matches its measured execution times; this allows a more aggressive scheduling and, therefore, processing the maximum number of images per unit of time. Once the fitting is done, the simulations are run again with the chosen distribution configured in each algorithm's servers. This yields the maximum input rate of each algorithm, which is used to decide the rate that defines the system limits.

Finally, the results obtained are applied to two use cases. In the first, a network composed of four algorithms is implemented in order to locate hot spots in areas affected by fire. In the second, the mission consists of a single algorithm aimed at detecting jellyfish swarms.
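The effect of swapping the exponential service times for a fitted distribution can be illustrated with a very small simulation. The sketch below (an assumption for illustration, not the thesis's OMNeT++ model) estimates the mean waiting time of a single-server FIFO queue with the Lindley recursion, accepting arbitrary samplers so that any fitted distribution can be plugged in; the gamma parameters are invented for the example and merely keep the mean service time equal to the exponential case while reducing its variance:

```python
import random

def simulate_wq(n, interarrival, service, seed=0):
    """Estimate the mean waiting time in queue of a G/G/1 FIFO queue.

    `interarrival` and `service` are zero-argument samplers, so exponential,
    gamma, or any other fitted distribution can be supplied.
    """
    random.seed(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        # Lindley recursion: wait of next customer = max(0, current wait
        # + current service time - next inter-arrival time)
        wait = max(0.0, wait + service() - interarrival())
    return total / n

lam, mu = 0.8, 1.0  # illustrative rates, utilisation 0.8

# Exponential service vs. a gamma service with the same mean (1/mu)
# but smaller variance (shape 4)
exp_wq = simulate_wq(200_000, lambda: random.expovariate(lam),
                     lambda: random.expovariate(mu))
gamma_wq = simulate_wq(200_000, lambda: random.expovariate(lam),
                       lambda: random.gammavariate(4, 1 / (4 * mu)))
print(exp_wq, gamma_wq)
```

Because the gamma service time is less variable, the simulated queue waits are noticeably shorter at the same utilisation, which is exactly why a schedule derived from the exponential model can be overly conservative and why fitting the real distributions permits a higher input rate.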