Conference papers and presentations (Ponències/Comunicacions de congressos)
http://hdl.handle.net/2117/3689
Tue, 23 Jan 2018 14:14:45 GMT
http://hdl.handle.net/2117/110970
A prospective fuzzy approach for the development of integral seismic risk scenarios for Barcelona, Spain
González Cárdenas, Rubén; Múgica Álvarez, Francisco; Nebot Castells, M. Àngela
We create a set of synthetic seismic risk scenarios by combining stochastic seismic simulations with social fragility indicators by means of a nested fuzzy Mamdani-type inference model. The original values of the socio-economic variables were modified by arbitrary increments to simulate either constraints on or improvements in their reported levels, and the Fuzzy Seismic Risk Model was applied again to each of these variations to produce a range of final integral seismic risk levels. Even though this experiment clearly needs further tuning, the use of fuzzy inference makes the creation of risk scenarios a simpler task once suitable membership functions have been defined, since the non-linear influence of each of the variables involved can be easily quantified. The final product can facilitate the prospective view needed in decision-making and planning while avoiding the compensability issues commonly encountered when composite indicators are used to represent social dimensions.
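The nested Mamdani-type inference the abstract describes can be illustrated with a minimal sketch. The variable names (damage, fragility), membership functions and rule base below are invented for illustration only and do not reproduce the paper's actual Fuzzy Seismic Risk Model:

```python
# Minimal Mamdani-type fuzzy inference sketch (pure Python):
# triangular memberships, min implication, max aggregation,
# centroid defuzzification over a sampled universe.

def tri(a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Inputs: physical damage and social fragility, both on a 0-1 scale.
damage = {"low": tri(-0.5, 0.0, 0.5), "high": tri(0.5, 1.0, 1.5)}
fragility = {"low": tri(-0.5, 0.0, 0.5), "high": tri(0.5, 1.0, 1.5)}
# Output: integral seismic risk, also on a 0-1 scale.
risk = {"low": tri(-0.5, 0.0, 0.5), "high": tri(0.5, 1.0, 1.5)}

# Rule base: antecedents combined with min, consequents clipped (Mamdani).
rules = [
    (("low", "low"), "low"),
    (("high", "high"), "high"),
    (("low", "high"), "high"),
    (("high", "low"), "high"),
]

def infer(d, f, n=201):
    xs = [i / (n - 1) for i in range(n)]
    agg = [0.0] * n
    for (md, mf), out in rules:
        strength = min(damage[md](d), fragility[mf](f))
        for i, x in enumerate(xs):
            agg[i] = max(agg[i], min(strength, risk[out](x)))
    num = sum(x * m for x, m in zip(xs, agg))
    den = sum(agg)
    return num / den if den else 0.0  # centroid defuzzification

print(infer(0.9, 0.8))  # high damage + high fragility -> high risk
```

Once the memberships are fixed, re-running `infer` over systematically perturbed inputs is all it takes to generate a scenario range, which is the simplification the abstract points to.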
Tue, 21 Nov 2017 08:54:40 GMT
http://hdl.handle.net/2117/110097
Modeling a flue-gas desulfurization plant with a fuzzy methodology to optimize the SO2 absorption process
Escobet Canal, Antoni; Nebot Castells, M. Àngela; Múgica Álvarez, Francisco; Gamisans Noguera, Javier; Guimerà Villalba, Xavier
Tue, 07 Nov 2017 16:25:47 GMT
http://hdl.handle.net/2117/103878
Bayesian semi non-negative matrix factorisation
Vilamala Muñoz, Albert; Vellido Alcacena, Alfredo; Belanche Muñoz, Luis Antonio
Non-negative Matrix Factorisation (NMF) has become a standard method for source identification when data, sources and mixing coefficients are constrained to be positive-valued. The method has recently been extended to allow for negative-valued data and sources in the form of Semi- and Convex-NMF. In this paper, we re-elaborate Semi-NMF within a full Bayesian framework. This provides solid foundations for parameter estimation and, importantly, a principled method to address the problem of choosing the most adequate number of sources to describe the observed data. The proposed Bayesian Semi-NMF is preliminarily evaluated here in a real neuro-oncology problem.
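A minimal, non-Bayesian Semi-NMF sketch (the multiplicative updates of Ding, Li and Jordan) shows the factorisation the paper's Bayesian treatment builds on; the data and rank below are illustrative, and the priors and model-selection machinery of the paper are not reproduced:

```python
# Semi-NMF sketch: X ~ F G^T with G >= 0 and F unconstrained,
# following the multiplicative updates of Ding, Li & Jordan (2010).
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((m, k))           # non-negative mixing coefficients
    F = rng.standard_normal((n, k))  # unconstrained sources
    pos = lambda A: (np.abs(A) + A) / 2
    neg = lambda A: (np.abs(A) - A) / 2
    for _ in range(n_iter):
        # Closed-form least-squares update of F given G.
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # Multiplicative, non-negativity-preserving update of G.
        XtF = X.T @ F
        FtF = F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

X = np.random.default_rng(1).standard_normal((20, 30))  # mixed-sign data
F, G = semi_nmf(X, k=5)
err = np.linalg.norm(X - F @ G.T) / np.linalg.norm(X)
print(f"relative error: {err:.3f}")
```

The Bayesian version replaces the point estimates of F and G with posterior distributions, which is what allows the number of sources k to be selected in a principled way rather than fixed by hand.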
Tue, 02 May 2017 08:11:19 GMT
http://hdl.handle.net/2117/102221
A methodological approach for algorithmic composition systems' parameter spaces aesthetic exploration
Paz Ortiz, Iván; Nebot Castells, M. Àngela; Romero Merino, Enrique; Múgica Álvarez, Francisco; Vellido Alcacena, Alfredo
Algorithmic composition is the process of creating musical material by means of formal methods. As a consequence of their design, algorithmic composition systems are (explicitly or implicitly) described in terms of parameters. Thus, parameter space exploration plays a key role in learning a system's capabilities. However, in the computer music field this task has received little attention, partly because the changes in human perception of the outputs in response to changes in the parameters can be highly nonlinear, so models with strongly predictable outputs are needed. The present work describes a methodology for the human perceptual (or aesthetic) exploration of generative systems' parameter spaces. As the systems' outputs are intended to produce an aesthetic experience in humans, audition plays a central role in the process. The methodology starts from a set of parameter combinations that are perceptually evaluated by the user. The sampling process for these combinations depends on the system under study and possibly on heuristic considerations. The evaluated set is processed by a compaction algorithm able to generate linguistic rules describing the distinct perceptions (classes) in the user evaluation. The semantic level of the extracted rules allows for interpretability, while showing great potential for describing high- and low-level musical entities. As the resulting rules represent discrete points in the parameter space, possible extensions for interpolation between points are also discussed. Finally, some practical implementations and paths for further research are presented.
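The evaluate-then-compact loop can be sketched as follows. The per-class bounding-box "compaction" and the hypothetical synth parameters below are deliberately crude stand-ins for the paper's actual compaction algorithm and systems:

```python
# Toy sketch: parameter combinations are labelled by a listener, then
# collapsed into linguistic interval rules, one per perceptual class.

def compact(evaluated, names):
    """evaluated: list of (params_tuple, class_label) pairs."""
    rules = {}
    for params, label in evaluated:
        # Track per-class min/max of every parameter (a bounding box).
        lo, hi = rules.setdefault(label, (list(params), list(params)))
        for i, p in enumerate(params):
            lo[i] = min(lo[i], p)
            hi[i] = max(hi[i], p)
    # Render each bounding box as a conjunctive linguistic rule.
    return {label: " AND ".join(f"{n} in [{l}, {h}]"
                                for n, l, h in zip(names, lo, hi))
            for label, (lo, hi) in rules.items()}

# Hypothetical synth parameters rated by a listener as pleasant or harsh.
evaluated = [((440, 0.1), "pleasant"), ((880, 0.2), "pleasant"),
             ((3000, 0.9), "harsh"), ((5000, 0.8), "harsh")]
rules = compact(evaluated, ["freq", "amp"])
print(rules["pleasant"])  # freq in [440, 880] AND amp in [0.1, 0.2]
```

The interval endpoints are exactly the discrete evaluated points the abstract mentions, which is why interpolation between them is a natural extension.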
Thu, 09 Mar 2017 15:00:28 GMT
http://hdl.handle.net/2117/102219
A decision making support tool: The resilience management fuzzy controller
González Cardenas, Rubén; Nebot Castells, M. Àngela; Múgica Álvarez, Francisco; Vellido Alcacena, Alfredo
In this paper, a fuzzy controller capable of automatically estimating the period of time necessary to recover a resilience level is proposed. Estimations were made by considering realistic time-dependent action changes for a set of resilience indicators originally proposed by Cardona (2001) and modified by Cardenas et al. (2015). The fuzzy resilience controller uses two output control variables and four input variables designed to resemble policy decisions made about resilience recovery while considering a national economic growth factor. We applied the fuzzy controller to Barcelona, Spain, where different recovery times were estimated in terms of variations in the inter-annual rate of change of the Spanish GDP (gross domestic product). This decision support system might help disaster reduction planning by allowing decision makers, governments or institutions to obtain reliable recovery time estimations while the progress of resilience indicators is properly supervised and controlled and applied policies are openly evaluated and scrutinised.
Thu, 09 Mar 2017 14:48:13 GMT
http://hdl.handle.net/2117/102167
Multivariate dynamic kernels for financial time series forecasting
Peña, Mauricio; Arratia Quesada, Argimiro Alejandro; Belanche Muñoz, Luis Antonio
We propose a forecasting procedure based on multivariate dynamic kernels, with the capability of integrating information measured at different frequencies and at irregular time intervals in financial markets. A data compression process redefines the original financial time series into temporal data blocks, analyzing the temporal information of multiple time intervals. The analysis is done through multivariate dynamic kernels within support vector regression. We also propose two kernels for financial time series that are computationally efficient without sacrificing accuracy. The efficacy of the methodology is demonstrated by empirical experiments on forecasting the challenging S&P500 market.
The final publication is available at http://link.springer.com/chapter/10.1007/978-3-319-44781-0_40
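The blocks-then-kernel structure can be sketched as follows; the per-block summary statistics and the RBF form below are simplifying assumptions for illustration, not the paper's actual multivariate dynamic kernels:

```python
# Sketch: compress each multivariate series into temporal blocks,
# summarise each block, and apply an RBF kernel to the stacked summaries.
import numpy as np

def block_summary(series, n_blocks):
    """series: (T, d) array -> concatenated per-block means and stds."""
    blocks = np.array_split(series, n_blocks)
    return np.concatenate([np.r_[b.mean(axis=0), b.std(axis=0)]
                           for b in blocks])

def dynamic_kernel(s1, s2, n_blocks=4, gamma=0.5):
    z1 = block_summary(s1, n_blocks)
    z2 = block_summary(s2, n_blocks)
    return np.exp(-gamma * np.sum((z1 - z2) ** 2))

rng = np.random.default_rng(0)
a = rng.standard_normal((60, 3))  # two toy multivariate series
b = rng.standard_normal((60, 3))
print(dynamic_kernel(a, a))  # 1.0 for identical series
```

In practice such a kernel would be supplied to support vector regression as a precomputed Gram matrix over the training series.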
Thu, 09 Mar 2017 08:50:56 GMT
http://hdl.handle.net/2117/99395
Automated quality control for proton magnetic resonance spectroscopy data using convex non-negative matrix factorization
Mocioiu, Victor; Kyathanahally, Sreenath P.; Arús, Carles; Vellido Alcacena, Alfredo; Julià Sapé, Margarida
Proton Magnetic Resonance Spectroscopy (1H MRS) has proven its diagnostic potential in a variety of conditions. However, MRS is not yet widely used in clinical routine because of the lack of experts in its diagnostic interpretation. Although data-based decision support systems exist to aid diagnosis, they often take for granted that the data are of good quality, which is not always the case in a real application context. Systems based on models built with bad-quality data are likely to underperform in their decision support tasks. In this study, we propose a system to filter out such bad-quality data. It is based on convex Non-negative Matrix Factorization models, used as a dimensionality reduction procedure, and on the use of several classifiers to discriminate between good- and bad-quality data.
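The two-stage filter can be sketched as below. For brevity an SVD projection stands in for the paper's convex NMF, a nearest-centroid rule stands in for its classifier ensemble, and the synthetic "spectra" are invented for illustration:

```python
# Two-stage quality filter sketch: dimensionality reduction, then a
# classifier separating good from bad spectra.
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(0, 1, 100)
peak = np.exp(-((freq - 0.5) / 0.1) ** 2)            # clean spectral peak
good = peak + 0.05 * rng.standard_normal((40, 100))  # low-noise spectra
bad = 0.4 * rng.standard_normal((40, 100))           # noise-dominated spectra
X = np.vstack([good, bad])
y = np.array([1] * 40 + [0] * 40)

# Stage 1: project onto the top-k right singular directions.
k = 3
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T

# Stage 2: nearest-centroid decision in the reduced space.
centroids = {c: Z[y == c].mean(axis=0) for c in (0, 1)}
pred = np.array([min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
                 for z in Z])
print((pred == y).mean())  # training accuracy on the synthetic data
```

Convex NMF would replace the SVD step with factors constrained to be convex combinations of the input spectra, which keeps the reduced dimensions interpretable as spectrum-like prototypes.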
Tue, 17 Jan 2017 08:51:09 GMT
http://hdl.handle.net/2117/97584
A machine learning pipeline for supporting differentiation of glioblastomas from single brain metastases
Mocioiu, Victor; de Barros, Nuno M. Pedrosa; Ortega Martorell, Sandra; Slotboom, Johannes; Knecht, Urspeter; Arús, Carles; Vellido Alcacena, Alfredo; Julià Sapé, Margarida
Machine learning has provided, over the last decades, tools for knowledge extraction in complex medical domains. Most of these tools, though, are ad hoc solutions and lack the systematic approach that would be required to become mainstream in medical practice. In this brief paper, we define a machine learning-based analysis pipeline for helping with a difficult problem in the field of neuro-oncology, namely the discrimination of brain glioblastomas from single brain metastases. This pipeline involves source extraction using k-Means-initialized Convex Non-negative Matrix Factorization and a collection of classifiers, including Logistic Regression, Linear Discriminant Analysis, AdaBoost, and Random Forests.
Thu, 01 Dec 2016 10:29:18 GMT
http://hdl.handle.net/2117/97582
Instance and feature weighted k-nearest-neighbors algorithm
Prat, Gabriel; Belanche Muñoz, Luis Antonio
We present a novel method that aims at providing a more stable selection of feature subsets when variations in the training process occur. This is accomplished by using an instance-weighting process (assigning different importances to instances) as a preprocessing step to a feature-weighting method that is independent of the learner, and then making good use of both sets of computed weights in a standard nearest-neighbours classifier. We report extensive experimentation on well-known benchmark datasets as well as some challenging microarray gene expression problems. Our results show increases in stability for most subset sizes and most problems, without compromising prediction accuracy.
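The combination of feature weights (applied in the distance) and instance weights (applied in the vote) can be sketched as follows; the weights themselves are illustrative placeholders for the learner-independent weighting methods of the paper:

```python
# Minimal feature- and instance-weighted k-NN sketch (pure Python).

def knn_predict(query, X, y, f_w, i_w, k=3):
    """Feature weights shape the distance; instance weights shape the vote."""
    dists = []
    for xi, yi, wi in zip(X, y, i_w):
        d = sum(w * (a - b) ** 2 for w, a, b in zip(f_w, query, xi)) ** 0.5
        dists.append((d, yi, wi))
    votes = {}
    for _, label, wi in sorted(dists)[:k]:
        votes[label] = votes.get(label, 0.0) + wi
    return max(votes, key=votes.get)

# Two classes separated on feature 0; feature 1 is noise and gets weight 0.
X = [(0.0, 9.0), (0.2, -3.0), (1.0, 5.0), (1.2, -7.0)]
y = ["a", "a", "b", "b"]
f_w = (1.0, 0.0)            # feature weights: ignore the noisy feature
i_w = [1.0, 1.0, 1.0, 1.0]  # instance weights: uniform here
print(knn_predict((0.1, -8.0), X, y, f_w, i_w, k=3))  # -> a
```

Because both sets of weights are computed before classification, the nearest-neighbours rule itself stays unchanged, which is the learner-independence the abstract emphasises.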
Thu, 01 Dec 2016 10:15:32 GMT
http://hdl.handle.net/2117/97581
Physics and machine learning: Emerging paradigms
Martín Guerrero, José; Lisboa, Paulo J G; Vellido Alcacena, Alfredo
Current research in Machine Learning (ML) combines the study of variations on well-established methods with cutting-edge breakthroughs based on completely new approaches. Among the latter, emerging paradigms from Physics have gained special relevance in recent years. Although still in its initial stages, Quantum Machine Learning (QML) shows promising ways to speed up some of the costly ML calculations with similar or even better performance than existing approaches. Two additional advantages are related to the intrinsic probabilistic approach of QML, since quantum states are genuinely probabilistic, and to the capability of finding the global optimum of a given cost function by means of adiabatic quantum optimization, thus circumventing the usual problem of local minima. Another Physics approach to ML comes from Statistical Physics and is linked to Information Theory in supervised and semi-supervised learning frameworks. On the other hand, and from the perspective of Physics, ML can provide solutions by extracting knowledge from huge amounts of data, as is common in many experiments in the field, such as those related to High Energy Physics for elementary-particle research and Observational Astronomy.
Thu, 01 Dec 2016 10:08:53 GMT