Reports de recerca
http://hdl.handle.net/2117/3688
Wed, 17 Jan 2018 13:11:07 GMT
http://hdl.handle.net/2117/99450
Similarity networks for classification: a case study in the Horse Colic problem
Belanche Muñoz, Luis Antonio; Hernández González, Jerónimo
This paper develops a two-layer neural network in which the neuron model computes a user-defined similarity function between inputs and weights. The neuron transfer function is formed by composing an adapted logistic function with the mean of the partial input-weight similarities. The resulting neuron model can deal directly with variables of potentially different nature (continuous, fuzzy, ordinal, categorical), and there is also provision for missing values. The network is trained using a two-stage procedure very similar to that used to train a radial basis function (RBF) neural network. The network is compared to two types of RBF networks on a non-trivial dataset: the Horse Colic problem, taken as a case study and analyzed in detail.
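A minimal sketch of such a neuron follows. Only two of the four variable types are illustrated, and the [0, 1] scaling, the steepness parameter `k`, and the exact form of the "adapted" logistic are assumptions, not the paper's definitions:

```python
import math

def partial_similarity(x, w, kind):
    """Partial similarity between one input component and one weight
    component; 'kind' picks a per-variable measure (illustrative choices:
    only 'categorical' and 'continuous' are sketched here)."""
    if x is None or w is None:       # provision for missing values:
        return None                  # the component is simply skipped
    if kind == "categorical":
        return 1.0 if x == w else 0.0
    if kind == "continuous":         # assumes values pre-scaled to [0, 1]
        return 1.0 - abs(x - w)
    raise ValueError(kind)

def similarity_neuron(x, w, kinds, k=4.0):
    """Mean of the partial input-weight similarities, composed with a
    logistic squash (assumed parametrization)."""
    parts = [partial_similarity(xi, wi, t) for xi, wi, t in zip(x, w, kinds)]
    parts = [p for p in parts if p is not None]
    s = sum(parts) / len(parts)      # mean of the available partials
    return 1.0 / (1.0 + math.exp(-k * (s - 0.5)))
```

Because missing components are dropped from the mean rather than imputed, a partially observed input can still score as similar as a fully observed one.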
Tue, 17 Jan 2017 12:32:58 GMT
http://hdl.handle.net/2117/99401
Exploiting the accumulated evidence for gene selection in microarray gene expression data
Prat, Gabriel; Belanche Muñoz, Luis Antonio
Machine learning methods have lately made significant progress in solving multidisciplinary problems in the field of cancer classification using microarray gene expression data. Feature subset selection methods can play an important role in the modeling process, since these tasks are characterized by a large number of features and few observations, making the modeling a non-trivial undertaking. In this scenario, it is extremely important to select genes by taking into account their possible interactions with other gene subsets. This paper shows that, by accumulating the evidence in favour of (or against) each gene along the search process, the obtained gene subsets may constitute better solutions, in terms of predictive accuracy, subset size, or both. The proposed technique is extremely simple and adds negligible computational overhead.
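The evidence-accumulation idea can be sketched as follows. The random-subset search and the scoring rule (crediting every gene in a subset with that subset's fitness) are illustrative assumptions, not the paper's exact procedure:

```python
import random

def accumulated_evidence_search(fitness, n_genes, iters=200, seed=0):
    """Toy sketch: sample random gene subsets, evaluate each with the
    supplied fitness function, and credit every gene in the subset with
    that fitness. Returns each gene's average accumulated evidence."""
    rng = random.Random(seed)
    evidence = [0.0] * n_genes
    counts = [0] * n_genes
    for _ in range(iters):
        subset = [g for g in range(n_genes) if rng.random() < 0.5]
        if not subset:
            continue
        f = fitness(subset)
        for g in subset:            # accumulate evidence in favour
            evidence[g] += f
            counts[g] += 1
    return [evidence[g] / counts[g] if counts[g] else 0.0
            for g in range(n_genes)]
```

Genes whose presence consistently coincides with high-fitness subsets end up with high evidence scores, which can then guide the final subset choice.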
Tue, 17 Jan 2017 09:31:03 GMT
http://hdl.handle.net/2117/97975
Similarity and dissimilarity concepts in machine learning
Orozco Luquero, Jorge
Similarity and dissimilarity are rarely formalized concepts in Artificial Intelligence (AI). Both have a psychological origin and have been adapted to AI; in this field, however, the choice of similarity or dissimilarity measure does not always depend on the problem to be solved. This paper presents a formalization of similarity and dissimilarity, aiming to contribute to their design and understanding in AI and to increase their general utility. A formal definition and some basic properties are introduced, together with some transformation functions and similarity and dissimilarity operators.
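As a concrete illustration, the sketch below checks a common set of similarity axioms on a finite sample and applies a standard similarity-to-dissimilarity transformation. These axioms (boundedness, reflexivity, symmetry) and the transformation d = 1 - s are textbook conventions; the paper's exact formalization may differ:

```python
def is_similarity(sim, points, tol=1e-9):
    """Check basic similarity axioms on a finite sample: values in [0, 1],
    reflexivity sim(x, x) = 1, and symmetry sim(x, y) = sim(y, x)."""
    for x in points:
        if abs(sim(x, x) - 1.0) > tol:          # reflexivity
            return False
        for y in points:
            v = sim(x, y)
            if not (-tol <= v <= 1.0 + tol):    # boundedness
                return False
            if abs(v - sim(y, x)) > tol:        # symmetry
                return False
    return True

def to_dissimilarity(sim):
    """A standard transformation turning a similarity into a
    dissimilarity: d(x, y) = 1 - s(x, y)."""
    return lambda x, y: 1.0 - sim(x, y)
```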
Mon, 12 Dec 2016 09:41:25 GMT
http://hdl.handle.net/2117/97971
Studying embedded human EEG dynamics using generative topographic mapping
Vellido Alcacena, Alfredo; El-Deredy, W.; Lisboa, Paulo J G
A method has recently been proposed [1] to extract multiple signal source information from single-channel electroencephalogram (EEG) recordings. A dynamical systems approach is used to analyze the resulting EEG time series, and its dynamics are captured by transforming the original data into an embedding matrix residing in a Euclidean embedding space. Measurements in [1] are taken to be of ongoing, unbounded EEG recordings. Many experiments concerning the study of cognitive tasks, though, are developed in a multi-subject repetitive setting where time boundaries are defined in relation to the onset time of certain stimuli. Each repetition of an experiment is known as a trial and, although the experimental setting might lead one to expect little variability amongst responses, in practice inter-trial and inter-subject variability is usually high, so pooling all responses may mislead their interpretation. In this paper we resort to the Generative Topographic Mapping (GTM, [2]), a neural-network-inspired but statistically principled unsupervised model, to achieve the following goals: first, the definition of groups of trials with intra-group similarities and inter-group differences, in order to improve the interpretability of the results in the aforementioned experimental settings; second, the visualization of embedded EEG dynamics in a 2-dimensional latent space; finally, the study of the trajectories of these EEG dynamics over the GTM latent space representation, showing that transitions and stationary states in these trajectories correspond to special features in the time-power and time-frequency representations of the EEG data.
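The embedding matrix mentioned above is the standard delay embedding from dynamical systems theory: each row collects successive delayed samples of the scalar series. A minimal sketch (the dimension and delay values are illustrative, not those used in [1]):

```python
def embedding_matrix(series, dim, delay=1):
    """Delay-embedding matrix of a scalar time series: row t is
    (x_t, x_{t+delay}, ..., x_{t+(dim-1)*delay}), so each row is a point
    in a 'dim'-dimensional Euclidean embedding space."""
    n = len(series) - (dim - 1) * delay
    return [[series[t + j * delay] for j in range(dim)] for t in range(n)]
```

The rows of this matrix are the high-dimensional points that a model such as GTM can then project onto a 2-dimensional latent space for visualization.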
Mon, 12 Dec 2016 09:34:58 GMT
http://hdl.handle.net/2117/97970
Exploring dopamine-mediated reward processing through the analysis of EEG-measured gamma-band brain oscillations
Vellido Alcacena, Alfredo; El-Deredy, W.
The central role of the dopamine system in reward brain processing is now quite well delimited. Its influence on other brain areas for learning and decision-making is still a matter of intense research. Most of this research is based on fMRI imaging methods, which excel in terms of spatial resolution for source localization but lack the ability to trace the time course of the signals. Incipient efforts have been made to address this issue from the point of view of EEG-measured brain oscillation theories. We review recent advances in this area and propose a broad framework for EEG-based reward processing analysis.
Mon, 12 Dec 2016 09:24:53 GMT
http://hdl.handle.net/2117/97911
Generative topographic mapping as a constrained mixture of student t-distributions: theoretical developments
Vellido Alcacena, Alfredo
The Generative Topographic Mapping (GTM: Bishop et al. 1998a), a non-linear latent variable model, was originally defined as a constrained mixture of Gaussians. Gaussian mixture models are known to lack robustness in the presence of outlier observations in the data sample, and multivariate Student t-distributions have recently been put forward as a more robust alternative for dealing with continuous data in this context.
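The robustness argument can be seen directly by comparing log-densities. The sketch below uses univariate densities for simplicity (the report concerns the multivariate case), with an assumed nu = 3 degrees of freedom:

```python
import math

def gaussian_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of N(mu, sigma^2); the quadratic tail makes distant
    outliers extremely improbable."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def student_t_logpdf(x, mu=0.0, sigma=1.0, nu=3.0):
    """Log-density of a Student t with nu degrees of freedom; the tail is
    only logarithmic in z^2, so outliers are penalized far less."""
    z = (x - mu) / sigma
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return c - (nu + 1) / 2 * math.log1p(z * z / nu)
```

An outlier ten standard deviations out is vastly more plausible under the t-distribution, so a single aberrant point distorts a t-mixture fit far less than a Gaussian one.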
Fri, 09 Dec 2016 08:46:55 GMT
http://hdl.handle.net/2117/97853
Maximizing the margin with feed-forward neural networks
Romero Merino, Enrique
Feed-forward Neural Networks (FNNs) and Support Vector Machines (SVMs) are two machine learning frameworks developed from very different starting points, and the solutions obtained by the respective frameworks may be very different. In this work a new learning model for FNNs is proposed that, in the linearly separable case, tends to obtain the same solution as SVMs. The key idea of the model is a weighting of the sum-of-squares error function, inspired by the AdaBoost algorithm. The model depends on a parameter that controls the hardness of the margin, as in SVMs, so it can be used in the non-linearly separable case as well. In addition, it deals with multiclass and multilabel problems in a natural way (as FNNs usually do), and it is not restricted to the use of kernel functions. Finally, it is independent of the concrete training algorithm used. Both theoretical and experimental results are presented to support these ideas.
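The weighted error idea can be sketched as follows. The exponential weighting of each example by its margin is the AdaBoost-inspired ingredient; the exact weighting function and the role of the hardness parameter `beta` are assumptions of this sketch, not the report's precise formulation:

```python
import math

def weighted_sse(outputs, targets, beta=2.0):
    """Sum-of-squares error where each example's contribution is weighted
    by exp(-beta * margin), margin = y * f(x) with y in {-1, +1}.
    Small or negative margins dominate the error, pushing training toward
    large-margin solutions; beta controls the margin 'hardness'."""
    total = 0.0
    for f, y in zip(outputs, targets):
        margin = y * f
        weight = math.exp(-beta * margin)
        total += weight * (f - y) ** 2
    return total
```

With `beta = 0` the weighting disappears and the ordinary sum-of-squares error is recovered; larger `beta` concentrates the error ever more sharply on the examples closest to (or on the wrong side of) the decision boundary.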
Wed, 07 Dec 2016 12:25:29 GMT
http://hdl.handle.net/2117/97845
Long-term prediction of ozone concentration using the Fuzzy Inductive Reasoning methodology
Gómez Miranda, Pilar; Nebot Castells, M. Àngela; Múgica Álvarez, Francisco
This report presents a first study of the ability of the Fuzzy Inductive Reasoning (FIR) methodology to identify models for the long-term prediction of ozone concentrations in the central zone of Mexico City. The work described in this article is a first step towards obtaining models capable of predicting possible environmental contingencies in this city. The research focuses on the identification of ozone prediction models from two distinct perspectives. The first aims to forecast the non-rainy period of the year 2000, using consecutive data from January to May 2000. The second works with data from the month of January of several years, 1996 to 2000, to forecast January of the latter year. The study uses hourly data provided by the Red Automática de Monitoreo Atmosférico del Valle de México (RAMA). The complexity of this application lies mainly in the high frequency of the available data (hourly records), the number of variables involved, and the high proportion of missing data. The results obtained are promising, although we consider that, in order to obtain models capable of predicting possible environmental contingencies, some aspects of the Fuzzy Inductive Reasoning methodology need to be further developed and improved.
Wed, 07 Dec 2016 11:46:36 GMT
http://hdl.handle.net/2117/97844
Function approximation with SAOCIF: a general sequential method and a particular algorithm with feed-forward neural networks
Romero Merino, Enrique
A sequential method for approximating vectors in Hilbert spaces, called Sequential Approximation with Optimal Coefficients and Interacting Frequencies (SAOCIF), is presented. SAOCIF combines two key ideas: the optimization of the coefficients (the linear part of the approximation) and the flexibility to choose the frequencies (the non-linear part), whose only relation with the previous residue concerns its capability to approximate the target vector f. The approximations defined by SAOCIF always exist and maintain orthogonal-like properties. The theoretical results obtained prove that, under reasonable conditions, the residue of the approximation obtained with SAOCIF is (in the limit) the best one that can be obtained with any subset of the given set of vectors. In the particular case of L^2, the method can be applied to approximations by algebraic polynomials, Fourier series, wavelets and feed-forward neural networks, among others. A particular algorithm with neural networks is also presented. The resulting method combines the locality of sequential approximations, where only one frequency is found at every step, with the globality of non-sequential methods, such as backpropagation, where every frequency interacts with the others. Experimental results show a very satisfactory performance of this new method and suggest several ideas for future experiments.
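One SAOCIF-style step can be sketched for finite-dimensional vectors: try each candidate frequency, jointly recompute the optimal coefficients for the enlarged basis, and keep the candidate with the smallest residue. This is a simplified sketch; candidate generation and the full stopping criteria are assumed given:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def least_squares_coeffs(f, basis):
    """Optimal coefficients for approximating f in span(basis): solve the
    normal equations G c = b by Gaussian elimination with pivoting."""
    n = len(basis)
    G = [[dot(basis[i], basis[j]) for j in range(n)] + [dot(basis[i], f)]
         for i in range(n)]
    for i in range(n):                        # forward elimination
        p = max(range(i, n), key=lambda r: abs(G[r][i]))
        G[i], G[p] = G[p], G[i]
        for r in range(i + 1, n):
            m = G[r][i] / G[i][i]
            for c in range(i, n + 1):
                G[r][c] -= m * G[i][c]
    coeffs = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coeffs[i] = (G[i][n] - sum(G[i][j] * coeffs[j]
                                   for j in range(i + 1, n))) / G[i][i]
    return coeffs

def saocif_step(f, chosen, candidates):
    """Pick the candidate frequency whose addition gives the smallest
    squared residue, recomputing ALL coefficients jointly: the chosen
    frequencies 'interact' rather than fitting residues greedily."""
    best = None
    for v in candidates:
        basis = chosen + [v]
        c = least_squares_coeffs(f, basis)
        approx = [sum(ci * bi[k] for ci, bi in zip(c, basis))
                  for k in range(len(f))]
        res = sum((fk - ak) ** 2 for fk, ak in zip(f, approx))
        if best is None or res < best[0]:
            best = (res, v, c)
    return best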
Wed, 07 Dec 2016 11:37:52 GMT
http://hdl.handle.net/2117/97843
Local training algorithms for RBF networks
Quartier, Benoit; Belanche Muñoz, Luis Antonio
The aim of this work is to study the effect of locality in classification tasks with radial basis function neural networks (RBFNNs). The networks are trained in a three-stage process. Firstly, the data are decomposed into their natural clusters, using clustering algorithms of different complexity. Secondly, a local RBFNN is fit to each cluster; these RBFNNs are local in the sense that each models only a part of the problem, as given by the previous stage, and any RBFNN training algorithm can be used here. Thirdly, the local networks are fused together, for which we propose several simple techniques. The results are analyzed in light of the following aspects: overall feasibility of the idea, influence of clustering algorithm complexity, influence of specific training algorithms, and selection of the fusing method.
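The third stage can be sketched with the simplest possible fusion scheme, hard gating by nearest cluster centre. This is one plausible fusing method among the several the report considers; it is not claimed to be the report's exact technique:

```python
def nearest(x, centers):
    """Index of the cluster centre closest to x (squared Euclidean)."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centers[i])))

def fuse_predict(x, centers, local_models):
    """Fuse the local networks by hard gating: route the input to the
    local model of its nearest cluster, as found in the first stage."""
    return local_models[nearest(x, centers)](x)
```

Softer alternatives (e.g. distance-weighted averaging of the local outputs) fit the same interface: only `fuse_predict` changes, while the clustering and local-training stages stay untouched.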
Wed, 07 Dec 2016 11:32:05 GMT