Understanding complex predictive models with ghost variables

Cite as: hdl:2117/383386
Document type: Article
Defense date: 2022-08-24
Publisher: Springer
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution 4.0 International
Projects: NARROWING THE GAP BETWEEN STATISTICS AND DATA SCIENCE (AEI-MTM2017-88142-P)
ADVANCED STATISTICS AND DATA SCIENCE: INTERPRETING BLACK-BOX MODELS AND ANALYZING LARGE AND COMPLEX DATA SETS (AEI-PID2020-116294GB-I00)
Abstract
Framed in the literature on Interpretable Machine Learning, we propose a new procedure to assign a measure of relevance to each explanatory variable in a complex predictive model. We assume that we have a training set to fit the model and a test set to check its out-of-sample performance. We propose to measure the individual relevance of each variable by comparing the predictions of the model in the test set with those obtained when the variable of interest is replaced (in the test set) by its ghost variable, defined as the prediction of this variable from the remaining explanatory variables. In linear models it is shown that, on the one hand, the proposed measure gives results similar to leave-one-covariate-out (loco) at a much lower computational cost and outperforms random permutations, and, on the other hand, it is strongly related to the usual F-statistic measuring the significance of a variable. In nonlinear predictive models (such as neural networks or random forests) the proposed measure reveals the relevance of the variables in an efficient way, as shown by a simulation study comparing ghost variables with alternative methods (including loco and random permutations, as well as knockoff variables and estimated conditional distributions). Finally, we study the joint relevance of the variables by defining the relevance matrix as the covariance matrix of the vectors of effects on predictions when using each ghost variable. Our proposal is illustrated with simulated examples and the analysis of a large real data set.
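The individual relevance measure described in the abstract can be sketched in a few lines: fit a "ghost" of one test-set column from the remaining columns, substitute it, and measure how much the model's predictions move. This is a minimal illustration, not the authors' implementation; the ghost estimator here is ordinary least squares, and all function names are ours.

```python
import numpy as np

def ghost_relevance(model_predict, X_test, j):
    """Relevance of variable j measured via its ghost variable.

    The ghost variable is the prediction of column j from the remaining
    columns (here by ordinary least squares, an assumption of this sketch).
    Relevance is the mean squared change in the model's predictions when
    column j is replaced by its ghost in the test set.
    """
    X = np.asarray(X_test, dtype=float)
    others = np.delete(X, j, axis=1)
    # Fit the ghost: regress X[:, j] on the remaining covariates (+ intercept).
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    X_ghost = X.copy()
    X_ghost[:, j] = A @ coef
    # Compare predictions with and without the substitution.
    diff = model_predict(X) - model_predict(X_ghost)
    return float(np.mean(diff ** 2))

# Toy example: a "black-box" model that depends only on the first covariate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
predict = lambda X: 2.0 * X[:, 0]
r0 = ghost_relevance(predict, X, 0)  # relevant variable: large value
r2 = ghost_relevance(predict, X, 2)  # irrelevant variable: exactly 0
```

With independent covariates the ghost of the first column carries little information about it, so substituting it shifts the predictions substantially, while substituting an ignored column leaves them unchanged.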
Description
The version of record of this article, first published in Test, is available online at Publisher’s website: http://dx.doi.org/10.1007/s11749-022-00826-x
Citation: Delicado, P.; Peña, D. Understanding complex predictive models with ghost variables. "Test", 24 August 2022, vol. 32, no. 1, p. 107–145
ISSN: 1863-8260
Publisher version: https://link.springer.com/article/10.1007/s11749-022-00826-x
| Files | Description | Size | View |
|---|---|---|---|
| Relevance_matrix_TEST_authors_version.pdf | Authors' version of the paper | 841.5 Kb | View/Open |
| Relevance_matrix_TEST_Suppls.pdf | Supplements | 401.6 Kb | View/Open |