Conference presentations/papers (Ponències/Comunicacions de congressos)
http://hdl.handle.net/2117/3335
Wed, 25 May 2016 03:48:13 GMT
http://hdl.handle.net/2117/87191
Supervised assessment of segmentation hierarchies
Pont Tuset, Jordi; Marqués Acosta, Fernando
This paper addresses the problem of the supervised assessment of hierarchical region-based image representations. Given the large number of partitions represented in such structures, the supervised assessment approaches in the literature select a reduced set of representative partitions and evaluate their quality. Assessment results therefore depend on the partition selection strategy used. Instead, we propose to find the partition in the tree that best matches the ground-truth partition, that is, the upper-bound partition selection. We show that different partition selection algorithms can lead to different conclusions regarding the quality of the assessed trees, and that upper-bound partition selection provides the following advantages: 1) it does not limit the assessment to a reduced set of partitions, and 2) it better discriminates random trees from actual ones, which reflects a better qualitative behavior. We model the problem as a Linear Fractional Combinatorial Optimization (LFCO) problem, which makes the upper-bound selection feasible and efficient.
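To illustrate what upper-bound partition selection computes, the sketch below enumerates every partition representable by a toy hierarchy and keeps the one that best overlaps the ground truth. The hierarchy encoding (nested tuples of pixel sets) and the Jaccard-style score are illustrative assumptions, not the paper's actual quality measure; brute-force enumeration is exponential in general, which is precisely why the paper resorts to an LFCO formulation instead.

```python
from itertools import product

def regions(node):
    """All pixels under a node (leaves are frozensets, internal nodes are tuples)."""
    if isinstance(node, frozenset):
        return node
    return frozenset().union(*(regions(c) for c in node))

def all_partitions(node):
    """Enumerate every partition obtainable by cutting the hierarchy."""
    whole = [[regions(node)]]               # keep this node as one region
    if isinstance(node, frozenset):
        return whole
    child_parts = [all_partitions(c) for c in node]
    combined = [sum(combo, []) for combo in product(*child_parts)]
    return whole + combined                 # ...or recurse into the children

def score(partition, gt):
    """Mean best Jaccard overlap of each ground-truth region (illustrative)."""
    def j(a, b):
        return len(a & b) / len(a | b)
    return sum(max(j(g, p) for p in partition) for g in gt) / len(gt)

# Toy two-level hierarchy over pixels 1..6 and a two-region ground truth.
hierarchy = ((frozenset({1, 2}), frozenset({3})), frozenset({4, 5, 6}))
ground_truth = [frozenset({1, 2, 3}), frozenset({4, 5, 6})]

best = max(all_partitions(hierarchy), key=lambda p: score(p, ground_truth))
```

Here the upper-bound partition is the cut `{1,2,3} | {4,5,6}`, which matches the ground truth exactly even though neither the root partition nor the leaf partition does.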
Thu, 19 May 2016 11:58:33 GMT
http://hdl.handle.net/2117/87164
Multiclass cancer-microarray classification algorithm with Pair-Against-All redundancy
Bosio, Mattia; Bellot Pujalte, Pau; Salembier Clairon, Philippe Jean; Oliveras Vergés, Albert
Multiclass cancer classification is still a challenging task in the field of machine learning. A novel multiclass approach is proposed in this work as a combination of multiple binary classifiers. It is an example of Error Correcting Output Codes (ECOC) algorithms, which apply data-transmission coding techniques to improve classification by combining binary classifiers. The proposed method combines the One-Against-All (OAA) approach with a set of classifiers separating each class pair from the rest, called Pair-Against-All (PAA). The OAA+PAA approach has been tested on seven publicly available datasets and compared with the common OAA approach and with state-of-the-art alternatives. The obtained results show that the OAA+PAA algorithm consistently improves on the OAA results, unlike other ECOC algorithms presented in the literature.
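The OAA+PAA construction can be pictured as an ECOC coding matrix whose columns are the binary problems: K One-Against-All columns plus one column per class pair. The sketch below builds such a matrix and decodes with Hamming distance; the ±1 encoding and the minimum-distance decoder are standard ECOC machinery used here for illustration, not the paper's exact training or decoding details.

```python
from itertools import combinations

def oaa_paa_matrix(k):
    """ECOC coding matrix: rows = class codewords, columns = binary classifiers.
    First k columns: One-Against-All; remaining k*(k-1)/2: Pair-Against-All."""
    cols = []
    for c in range(k):                      # OAA: class c vs the rest
        cols.append([1 if r == c else -1 for r in range(k)])
    for i, j in combinations(range(k), 2):  # PAA: pair {i, j} vs the rest
        cols.append([1 if r in (i, j) else -1 for r in range(k)])
    return [list(row) for row in zip(*cols)]  # transpose to per-class codewords

def decode(outputs, matrix):
    """Assign the class whose codeword is closest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(range(len(matrix)), key=lambda c: hamming(matrix[c], outputs))

M = oaa_paa_matrix(4)   # 4 classes -> 4 OAA + 6 PAA = 10 binary classifiers
```

The redundancy is what buys error correction: with 4 classes any two codewords differ in 6 of the 10 positions, so a single binary classifier voting wrongly still leaves the correct class nearest.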
Wed, 18 May 2016 13:46:21 GMT
http://hdl.handle.net/2117/87114
Feature set enhancement via hierarchical clustering for microarray classification
Bosio, Mattia; Bellot Pujalte, Pau; Salembier Clairon, Philippe Jean; Oliveras Vergés, Albert
A new method for gene expression classification is proposed in this paper. In a first step, the original feature set is enriched by including new features, called metagenes, produced via hierarchical clustering. In a second step, a reliable classifier is built from a wrapper feature selection process. The selection relies on two criteria: the classical classification error rate and a new reliability measure. As a result, a classifier with good predictive ability using as few features as possible to reduce the risk of overfitting is obtained. This method has been tested on three public cancer datasets: leukemia, lymphoma and colon. The proposed method has obtained interesting classification results, and the experiments have confirmed the utility of both the metagenes and the feature ranking criterion to improve the final classifier.
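A minimal sketch of the metagene idea: agglomeratively merge the two most-similar features and add their element-wise mean to the pool as a new candidate feature. The similarity measure (Pearson correlation) and the averaging rule are assumptions chosen for illustration; they stand in for the paper's actual hierarchical clustering construction.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def build_metagenes(features, n_merges):
    """Repeatedly merge the two most-correlated features; each merge emits a
    'metagene' (their element-wise mean) that joins the working pool."""
    pool = [list(f) for f in features]
    metagenes = []
    for _ in range(n_merges):
        pairs = [(pearson(pool[i], pool[j]), i, j)
                 for i in range(len(pool)) for j in range(i + 1, len(pool))]
        _, i, j = max(pairs)                            # most-correlated pair
        merged = [(a + b) / 2 for a, b in zip(pool[i], pool[j])]
        metagenes.append(merged)
        pool = [f for k, f in enumerate(pool) if k not in (i, j)] + [merged]
    return metagenes

# Two strongly correlated genes get summarized into one smoother metagene.
genes = [[1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 2.9, 4.2], [4.0, 1.0, 3.0, 2.0]]
metas = build_metagenes(genes, 1)
```

The enriched set (original genes plus metagenes) is what the wrapper selection step would then search over.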
Tue, 17 May 2016 12:59:21 GMT
http://hdl.handle.net/2117/86695
Hierarchical clustering combining numerical and biological similarities for gene expression data classification
Bosio, Mattia; Salembier Clairon, Philippe Jean; Bellot Pujalte, Pau; Oliveras Vergés, Albert
High-throughput data analysis is a challenging problem due to the vast amount of available data. A major concern is to develop algorithms that provide accurate numerical predictions and biologically relevant results. A wide variety of tools in the literature use biological knowledge to evaluate analysis results. Only recently have some works included biological knowledge inside the analysis process itself, improving the prediction results.
Fri, 06 May 2016 12:31:26 GMT
http://hdl.handle.net/2117/86209
Filtrado transversal adaptativo de varianza constante para la ecualización de canal
Vázquez Grau, Gregorio; Gasull Llampallas, Antoni; Sánchez Umbría, Juan; Oliveras Vergés, Albert
This paper describes the problem of linear filtering of noisy data under a Maximum Likelihood objective. The paper shows that this leads to a weighted square-error cost function: the filtering error sequence must be weighted by a factor that basically depends on the probability density function of the error sequence and on its first derivative. As is well known, this information is usually not available and other approaches must be taken. To get around this problem, the paper discusses the design of this weighting factor so as to include some kind of data-selection mechanism in the design of the final filter weight-vector solution. The core of the proposal is the development of a recursive algorithm in such a way that, for any measure or observation, its associated
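The idea of weighting the error sequence by a pdf-dependent factor can be sketched as an LMS-style transversal update whose a-priori error is passed through a score function g(e) = −p′(e)/p(e): for a Gaussian error pdf, g(e) ∝ e and the recursion reduces to standard LMS; for a Laplacian pdf, g(e) ∝ sign(e) gives the sign-error variant. The transversal structure, step size and score interface below are illustrative assumptions, not the paper's exact recursion.

```python
def weighted_lms(x, d, taps, mu, score):
    """Adaptive transversal filter with an ML-motivated error weighting.
    'score' maps the raw error e to g(e) = -p'(e)/p(e) for the assumed pdf:
    score = lambda e: e            -> standard LMS (Gaussian errors)
    score = lambda e: (e > 0) - (e < 0) -> sign-error LMS (Laplacian errors)."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]           # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[n] - y                                   # a-priori error
        g = score(e)                                   # weighted error
        w = [wi + mu * g * xi for wi, xi in zip(w, window)]
    return w
```

With a noiseless identity channel (d = x) and the Gaussian score, the weight vector converges to [1, 0], as expected for a 2-tap equalizer.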
Tue, 26 Apr 2016 14:37:29 GMT
http://hdl.handle.net/2117/86207
Procesado digital de imágenes para la detección y seguimiento de células
Sayrol Clols, Elisa; Gasull Llampallas, Antoni
A new computerized methodology is described in which the detection and tracking of human spermatozoa in semen are analyzed using a personal computer. Several approaches are studied in order to quantify the number of spermatozoa and to characterize their swimming motion.
Tue, 26 Apr 2016 14:20:20 GMT
http://hdl.handle.net/2117/86197
Measuring true spectral density from ML filters (NMLM and q-NMLM spectral estimates)
Lagunas Hernandez, Miguel A.; Gasull Llampallas, Antoni
Starting from the classical procedure reported by Capon for power-level estimation from ML filters, the authors present how this method can be modified to obtain a power spectral density estimate. The basic idea is to compute the effective bandwidth of the ML filter and to normalize with it the power-level estimate at the output of a quadratic detector following the filter. The effective bandwidth is obtained by an equal-area constraint criterion. Furthermore, the above-mentioned estimate, which we call NMLM, converges in the distributional sense to the true spectral power density. This suggests the use of new estimates, denoted in the text as q-NMLM, which improve the mentioned convergence in both 1-D and 2-D spectral analysis problems.
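The normalization step can be sketched numerically: estimate the filter's effective bandwidth under an equal-area constraint (the width of a rectangle with the filter's peak gain and the same area as the |H(f)|² curve), then divide the power-level estimate by it to obtain a density. The specific equal-area definition below is an assumption consistent with, but not copied from, the paper.

```python
def effective_bandwidth(h_mag2, df):
    """Equal-area effective bandwidth of a filter, given samples of |H(f)|^2
    spaced df apart: area under the curve divided by the peak gain."""
    area = sum(h_mag2) * df          # Riemann-sum approximation of the integral
    return area / max(h_mag2)

def nmlm_density(power_level, h_mag2, df):
    """Turn an ML power-*level* estimate into a spectral *density* estimate
    by normalizing with the analysis filter's effective bandwidth."""
    return power_level / effective_bandwidth(h_mag2, df)

# Sanity check: for an ideal rectangular filter of width 0.1, a power level
# of 2.0 spread over that band corresponds to a density of 20.0.
density = nmlm_density(2.0, [1.0] * 10, 0.01)   # -> 20.0
```

For non-rectangular ML filters the effective bandwidth varies with the analysis frequency, which is why the normalization must be recomputed per filter.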
Tue, 26 Apr 2016 12:42:25 GMT
http://hdl.handle.net/2117/86195
Cross spectrum ML estimate
Lagunas Hernandez, Miguel A.; Santamaría Pérez, María Eugenia; Gasull Llampallas, Antoni; Moreno Bilbao, M. Asunción
This work reports how to include the general concepts of the one-dimensional MLM procedure in a two-channel cross-spectrum estimation problem. It is shown that there is no problem in extrapolating the well-known procedures for auto-spectrum estimation to the cross-spectrum, provided the original procedure can be explained as a filter-bank analysis procedure. The resulting cross-spectrum estimate formally appears to retain, in terms of resolution and low side-lobe behavior, the excellent features that the normalized maximum likelihood procedure, reported previously by the authors, exhibits in the auto-spectrum problem.
Tue, 26 Apr 2016 12:35:55 GMT
http://hdl.handle.net/2117/86149
Método MLNq para arrays de alta resolución
Gasull Llampallas, Antoni; Lagunas Hernandez, Miguel A.; Fernández Rubio, Juan Antonio; Moreno Bilbao, M. Asunción
Spectral analysis techniques are applied to the bearing estimation problem. Each of them yields a different array beamforming. We present here a generalized normalized Maximum Likelihood method which offers a resolution comparable to that of singular value decomposition methods, but with a smaller computational load.
Mon, 25 Apr 2016 13:36:49 GMT
http://hdl.handle.net/2117/86134
Data pre-processing for high-resolution adaptive algorithms
Vázquez Grau, Gregorio; Gasull Llampallas, Antoni
The inclusion of adaptive methods in high-resolution spectral estimation algorithms is considered. The generation of a complete family of spectral estimators from the normalized maximum-likelihood method (MLM) is discussed. It is shown how the generalized power MLM can be used to generate adaptive schemes for improving resolution. The authors propose substituting quadratic objectives, built as inner products of the coefficient error vector of the estimator filter, for the conventional mean-square filtering error.
Mon, 25 Apr 2016 12:34:14 GMT