Analysis of explainable artificial intelligence on time series data
Cite as: hdl:2117/376475
Document type: Official master's degree final project
Date: 2022-10-19
Access conditions: Open access
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any applicable legal exemptions, its reproduction, distribution, public communication, or transformation without the authorization of the rights holder is prohibited.
Abstract
In recent years, interest in Artificial Intelligence (AI) has grown significantly, contributing to the emergence of new research directions such as Explainable Artificial Intelligence (XAI). The ability to apply AI approaches to problems across many industrial areas has been achieved mainly by increasing model complexity and by using black-box models that lack transparency. Deep neural networks in particular excel at problems that are too difficult for classic machine learning methods, but answering why a neural network made one decision rather than another is often a major challenge. Answering this question is essential to ensure that ML models are reliable and that their decision-making can be held accountable. Over a relatively short period, a plethora of methods to tackle this problem have been proposed, but mainly in the areas of computer vision and natural language processing; few publications have so far addressed explainability for time series. This thesis provides a comprehensive literature review of XAI research for time series data, and achieves and evaluates local explainability for a model on a time series forecasting problem. The solution frames the forecasting task as Remaining Useful Life (RUL) prognosis for turbofan engines. We trained two Bi-LSTM models, with and without an attention layer, on the C-MAPSS data set. Local explainability was achieved using two post-hoc explainability techniques, SHAP and LIME, as well as by extracting and interpreting the attention weights. The resulting explanations were compared and evaluated using an evaluation metric that incorporates the temporal dimension of the data. The results indicate that LIME outperforms the other methods in terms of the fidelity of local explanations. Moreover, we demonstrated the potential of attention mechanisms to make a deep learning model for time series forecasting more interpretable. The approach presented in this work can readily be applied to any time series forecasting or classification scenario that requires model interpretability and evaluation of the generated explanations.
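To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the thesis code: a Bi-LSTM regressor with a simple additive attention layer for RUL prediction, an inspection of its attention weights, and a LIME explanation of one input window. The window shape, hyperparameters, feature names, and random placeholder data are illustrative assumptions; the C-MAPSS preprocessing, the SHAP comparison, and the temporal evaluation metric are omitted.

```python
# Minimal sketch, assuming sliding windows of shape (TIMESTEPS, FEATURES).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

TIMESTEPS, FEATURES = 30, 14  # assumed window length and sensor count

inputs = layers.Input(shape=(TIMESTEPS, FEATURES))
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
# Additive attention: score each timestep, normalize, take a weighted sum.
scores = layers.Dense(1, activation="tanh")(h)        # (batch, T, 1)
attn = layers.Softmax(axis=1)(scores)                 # weight per timestep
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, attn])
outputs = layers.Dense(1)(context)                    # predicted RUL
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Random placeholder data standing in for preprocessed C-MAPSS windows.
X = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# Route 1: the attention weights themselves give a per-timestep explanation.
attn_model = Model(inputs, attn)
print(attn_model.predict(X[:1], verbose=0).squeeze())

# Route 2: LIME is tabular, so a common workaround is to flatten each window
# and reshape it back inside the prediction wrapper.
from lime.lime_tabular import LimeTabularExplainer

def predict_flat(x_flat):
    return model.predict(x_flat.reshape(-1, TIMESTEPS, FEATURES), verbose=0).ravel()

explainer = LimeTabularExplainer(
    X.reshape(len(X), -1), mode="regression",
    feature_names=[f"t{t}_s{s}" for t in range(TIMESTEPS) for s in range(FEATURES)],
)
exp = explainer.explain_instance(X[0].reshape(-1), predict_flat, num_features=10)
print(exp.as_list())  # (timestep, sensor) pairs driving this local prediction
```

Flattening each window is only one of several ways to adapt tabular LIME to sequences; segment-based perturbation schemes are an alternative when per-timestep attributions are too fine-grained.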
Degree: MÀSTER UNIVERSITARI EN INTEL·LIGÈNCIA ARTIFICIAL (Pla 2017)
Files | Description | Size | Format
---|---|---|---
171756.pdf | | 6.757 MB | PDF