Conference presentations/papers
http://hdl.handle.net/2117/3974
Tue, 06 Dec 2016 19:55:24 GMT

Look-ahead with mini-bucket heuristics for MPE
http://hdl.handle.net/2117/97599
Dechter, Rina; Kask, Kalev; Lam, William; Larrosa Bondia, Francisco Javier
The paper investigates the potential of look-ahead in the context of AND/OR search in graphical models using the Mini-Bucket heuristic for combinatorial optimization tasks (e.g., MAP/MPE or weighted CSPs). We present and analyze the complexity of computing the residual (a.k.a. Bellman update) of the Mini-Bucket heuristic and show how this can be used to identify which parts of the search space are more likely to benefit from look-ahead and how to bound its overhead. We also rephrase the look-ahead computation as a graphical model, to facilitate structure-exploiting inference schemes. We demonstrate empirically that augmenting Mini-Bucket heuristics by look-ahead is a cost-effective way of increasing the power of Branch-and-Bound search.
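The residual (Bellman update) the abstract refers to has a compact generic form: the gap between a node's heuristic value and the best one-step look-ahead value through its children. The sketch below is illustrative only; the names `h`, `children`, and `cost` are stand-ins, not the authors' code.

```python
# Hypothetical sketch of the residual (Bellman update) of a heuristic h at a
# node, for a minimization task: res(n) = h(n) - min_c [ cost(n, c) + h(c) ].
# A large residual signals a node where look-ahead is likely to pay off.

def residual(h, children, cost, node):
    """Gap between h(node) and the best one-step look-ahead value."""
    best = min(cost(node, c) + h(c) for c in children(node))
    return h(node) - best

# Toy example: a node "n" with two children "a" and "b", unit edge costs.
h = {"n": 5.0, "a": 3.0, "b": 6.0}.__getitem__
children = lambda n: ["a", "b"]
cost = lambda n, c: 1.0
print(residual(h, children, cost, "n"))  # 5 - min(1+3, 1+6) = 1.0
```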
Thu, 01 Dec 2016 11:47:58 GMT

Limited discrepancy AND/OR search and its application to optimization tasks in graphical models
http://hdl.handle.net/2117/97596
Larrosa Bondia, Francisco Javier; Rollón Rico, Emma; Dechter, Rina
Many combinatorial problems are solved with a Depth-First Search (DFS) guided by a heuristic, and it is well-known that this method is very fragile with respect to heuristic mistakes. One standard way to make DFS more robust is to search by an increasing number of discrepancies. This approach has been found useful in several domains where the search structure is a height-bounded OR tree. In this paper we investigate the generalization of discrepancy-based search to AND/OR search trees and propose an extension of the Limited Discrepancy Search (LDS) algorithm. We demonstrate the relevance of our proposal in the context of Graphical Models. These problems can be solved with either a standard OR search tree or an AND/OR tree, and we show the superiority of our approach. For a fixed number of discrepancies, the search space visited by the AND/OR algorithm strictly contains the search space visited by standard LDS, and many more nodes can be visited due to the multiplicative effect of the AND/OR decomposition. Moreover, if the AND/OR tree achieves a significant size reduction with respect to the standard OR tree, the cost of each iteration of the AND/OR algorithm is asymptotically lower than in standard LDS. We report experiments on the min-sum problem in different domains and show that the AND/OR version of LDS usually obtains better solutions given the same CPU time.
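For readers unfamiliar with LDS, here is a minimal sketch of the standard OR-tree version (not the AND/OR extension the paper proposes): paths that deviate at most k times from the heuristic's preferred child are enumerated. The binary tree and the "child 0 is preferred" convention are assumptions for illustration.

```python
# Minimal sketch of Limited Discrepancy Search on a binary OR tree.
# At each node the heuristic prefers child 0; taking child 1 costs one
# discrepancy, and the total budget is k.

def lds_paths(depth, k):
    """Yield all root-to-leaf paths (tuples of 0/1) with at most k discrepancies."""
    def rec(path, used):
        if len(path) == depth:
            yield tuple(path)
            return
        yield from rec(path + [0], used)       # preferred move: free
        if used < k:                           # discrepancy: only if budget left
            yield from rec(path + [1], used + 1)
    yield from rec([], 0)

# Depth 3, one discrepancy: the heuristic path plus its three one-deviation variants.
print(sorted(lds_paths(3, 1)))  # [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
```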
Thu, 01 Dec 2016 11:41:04 GMT

A machine learning pipeline for supporting differentiation of glioblastomas from single brain metastases
http://hdl.handle.net/2117/97584
Mocioiu, Victor; de Barros, Nuno M. Pedrosa; Ortega Martorell, Sandra; Slotboom, Johannes; Knecht, Urspeter; Arús, Carles; Vellido Alcacena, Alfredo; Julià Sapé, Margarida
Machine learning has provided, over the last decades, tools for knowledge extraction in complex medical domains. Most of these tools, though, are ad hoc solutions and lack the systematic approach that would be required to become mainstream in medical practice. In this brief paper, we define a machine learning-based analysis pipeline to help with a difficult problem in the field of neuro-oncology, namely the discrimination of brain glioblastomas from single brain metastases. This pipeline involves source extraction using k-Means-initialized Convex Non-negative Matrix Factorization and a collection of classifiers, including Logistic Regression, Linear Discriminant Analysis, AdaBoost, and Random Forests.
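The two-stage structure described above (unsupervised source extraction followed by classification) can be sketched with off-the-shelf tools. Note the assumptions: scikit-learn's plain NMF stands in for k-Means-initialized Convex NMF (which scikit-learn does not provide), and the data here are synthetic, not the spectra used in the paper.

```python
# Illustrative two-stage pipeline: NMF-based source extraction feeding a
# classifier. Plain sklearn NMF is a stand-in for Convex NMF; data are synthetic.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((60, 20))       # synthetic non-negative "spectra"
y = rng.integers(0, 2, 60)     # synthetic binary labels (e.g., GBM vs. metastasis)

clf = make_pipeline(NMF(n_components=4, init="nndsvd", max_iter=500),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict(X[:5]).shape)  # (5,)
```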
Thu, 01 Dec 2016 10:29:18 GMT

Instance and feature weighted k-nearest-neighbors algorithm
http://hdl.handle.net/2117/97582
Prat, Gabriel; Belanche Muñoz, Luis Antonio
We present a novel method that aims at providing a more stable selection of feature subsets when variations in the training process occur. This is accomplished by using an instance-weighting process, which assigns different degrees of importance to instances, as a preprocessing step to a feature weighting method that is independent of the learner, and then making good use of both sets of computed weights in a standard Nearest-Neighbours classifier. We report extensive experimentation on well-known benchmarking datasets as well as some challenging microarray gene expression problems. Our results show increases in stability for most subset sizes and most problems, without compromising prediction accuracy.
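One plausible way to use both sets of weights in a nearest-neighbour classifier is to weight features inside the distance and weight instances inside the vote. This is our illustrative reading of the idea, not the authors' implementation; all names and data are made up.

```python
# Sketch of a k-NN classifier using both feature weights (in the distance)
# and instance weights (in the vote). Illustrative only.
import numpy as np

def weighted_knn_predict(X, y, inst_w, feat_w, x, k=3):
    d = np.sqrt(((X - x) ** 2 * feat_w).sum(axis=1))    # feature-weighted distance
    nn = np.argsort(d)[:k]                              # k nearest training points
    votes = {}
    for i in nn:
        votes[y[i]] = votes.get(y[i], 0.0) + inst_w[i]  # instance-weighted vote
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.ones(4), np.ones(2), np.array([0.05, 0.0])))  # 0
```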
Thu, 01 Dec 2016 10:15:32 GMT

Physics and machine learning: Emerging paradigms
http://hdl.handle.net/2117/97581
Martín Guerrero, José; Lisboa, Paulo J G; Vellido Alcacena, Alfredo
Current research in Machine Learning (ML) combines the study of variations on well-established methods with cutting-edge breakthroughs based on completely new approaches. Among the latter, emerging paradigms from Physics have taken special relevance in recent years. Although still in its initial stages, Quantum Machine Learning (QML) shows promising ways to speed up some of the costly ML calculations with a similar or even better performance than existing approaches. Two additional advantages are related to the intrinsic probabilistic approach of QML, since quantum states are genuinely probabilistic, and to the capability of finding the global optimum of a given cost function by means of adiabatic quantum optimization, thus circumventing the usual problem of local minima. Another Physics approach for ML comes from Statistical Physics and is linked to Information Theory in supervised and semi-supervised learning frameworks. On the other hand, and from the perspective of Physics, ML can provide solutions by extracting knowledge from huge amounts of data, as is common in many experiments in the field, such as those related to High Energy Physics for elementary-particle research and Observational Astronomy.
Thu, 01 Dec 2016 10:08:53 GMT

A methodology for maintaining consistency between conceptual interpretations of nested partitions
http://hdl.handle.net/2117/97066
Sevilla-Villanueva, Beatriz; Gibert, Karina; Sànchez-Marrè, Miquel
The relationship between interpretations of nested partitions is analyzed in this work, since there are multiple situations where a refinement of the original partition arises. As a result, a new methodology, NCIIMS, is proposed in order to maintain the consistency between interpretations of nested partitions. This methodology extends a previous one that obtains class descriptors by determining the robustness of the characteristics' significance. NCIIMS then takes advantage of the robustness of the descriptors to obtain a deeper analysis of the relations between the superclass's and the subclasses' descriptors. © 2016 The authors and IOS Press. All rights reserved.
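The nesting relation at the heart of the abstract (each subclass fully contained in one superclass) is easy to state precisely. The helper below is our illustrative formalization, not part of the methodology in the paper.

```python
# Illustrative check that one partition is nested in (refines) another:
# every block of the finer partition must be a subset of some block of the
# coarser one. Partitions are given as lists of sets over the same universe.
def is_refinement(sub, sup):
    """True iff every block of `sub` is contained in some block of `sup`."""
    return all(any(block <= big for big in sup) for block in sub)

sup = [{1, 2, 3}, {4, 5}]
sub = [{1}, {2, 3}, {4, 5}]
print(is_refinement(sub, sup))   # True: sub splits {1,2,3} into {1} and {2,3}
print(is_refinement(sup, sub))   # False: {1,2,3} fits in no block of sub
```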
Wed, 23 Nov 2016 07:56:29 GMT

A Framework for animation in global illumination environments
http://hdl.handle.net/2117/96741
Martín Campos, Ignacio Clemente; Pueyo, X; Tost Pardell, Daniela
This paper proposes a framework for the production of animations of high-quality radiosity environments which makes use of the a priori knowledge of the dynamic properties of the scene in order to exploit temporal coherence. A discussion of the previous work leads to the design of a two-pass strategy that extends to the temporal dimension a related static method that computes a coarse global solution and then performs a final gathering step using hardware graphics accelerators. The input data are a dynamic model of the environment through a period of time corresponding to the same camera recording. The strategy consists of two processes: a preprocessing stage and a production stage. The aim of the former is to build a data structure able to store the eventual changes in the radiosity interactions throughout the sequence, along with the predictable modifications in the image. The data structure is hierarchical so that maximum adaptability can be achieved in traversing it. The production stage computes all the frames by traversing the data structure and computing both the global and the local pass incrementally. A first implementation of the general framework is discussed, along with future development lines.
Wed, 16 Nov 2016 14:33:08 GMT

Detecting which teaching competences should be reinforced in an engineering lecturer training program
http://hdl.handle.net/2117/96518
López Álvarez, David; Pérez Poch, Antoni
In Catalonia (Spain), each university is required by law to offer lecturers a learning framework. This requirement is met by specialized centers: at our technical university, Universitat Politècnica de Catalunya (UPC), this center is the Institute for Education Sciences (ICE), to which the authors of this article belong. This institute offers training to both new and senior lecturers. This training is voluntary because no specific teacher-training background is required for teaching at the university, other than knowledge of the subject to be taught.
UPC only offers degrees in architecture, mathematics and engineering. We do not have schools and departments of psychology or education, or a tradition of social science methods among our faculty. Our lecturers do have the technical competences required for teaching, but not necessarily the professional competences required for good teaching practice. This is particularly problematic in university engineering studies, which traditionally have one of the highest dropout rates in higher education.
The opinions of lecturers on their own teaching depend on the students they have had, the subject they teach, their previous experience and the beliefs that guide their work. These beliefs are consistent with and depend on the teaching style of each lecturer, so they are fairly stable and resistant to change. It is difficult for a lecturer to change her or his beliefs, particularly if they are intuitively reasonable. For such a change to occur, the lecturer has to feel somewhat dissatisfied. In addition, the lecturer must be offered an intelligible and apparently useful alternative; and finally, the lecturer has to find a way to connect these new beliefs with their previous ones. Lecturer training in Engineering has been studied in recent years. These studies focus on the methods and tools required for quality teaching practice. However, a paradigm shift in learning is taking place. In the European Higher Education Area we are moving from content-based to competence-based learning. We therefore believe that lecturer training should also be based on competences such as communication capability, and syllabus planning and management.
Fri, 11 Nov 2016 09:41:32 GMT

Ring oscillator clocks and margins
http://hdl.handle.net/2117/96481
Cortadella Fortuny, Jordi; Lupon Navazo, Marc; Moreno Vega, Alberto; Roca Pérez, Antoni; Sapatnekar, Sachin
How much margin do we have to add to the delay lines of a bundled-data circuit? This paper is an attempt to give a methodical answer to this question, taking into account all sources of variability and the existing EDA machinery for timing analysis and sign-off. The paper is based on the study of the margins of a ring oscillator that substitutes a PLL as the clock generator. A timing model is proposed that shows that a 12% margin for delay lines can be sufficient to cover variability in a 65nm technology. In a typical scenario, performance and energy improvements between 15% and 35% can be obtained by using a ring oscillator instead of a PLL. The paper concludes that a synchronous circuit with a ring oscillator clock shows similar benefits in performance and energy to those of bundled-data asynchronous circuits.
Thu, 10 Nov 2016 12:52:04 GMT

Kauffman's adjacent possible in word order evolution
http://hdl.handle.net/2117/96478
Ferrer Cancho, Ramon
Word order evolution has been hypothesized to be constrained by a word order permutation ring: transitions involving orders that are closer in the permutation ring are more likely. The hypothesis can be seen as a particular case of Kauffman's adjacent possible in word order evolution.
Here we consider the problem of the association of the six possible orders of S, V and O to yield a couple of primary alternating orders as a window to word order evolution. We evaluate the suitability of various competing hypotheses to predict one member of the couple from the other with the help of information-theoretic model selection. Our ensemble of models includes a six-way model that is based on the word order permutation ring (Kauffman's adjacent possible) and another model based on the dual two-way model of standard typology, which reduces word order to basic order preferences (e.g., a preference for SV over VS and another for SO over OS). Our analysis indicates that the permutation ring yields the best model when parsimony is favored strongly, providing support for Kauffman's general view and for a six-way typology.
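The permutation ring can be written down explicitly: the six orders of S, V and O arranged so that neighbours differ by a single swap of adjacent constituents. The sketch below encodes that construction and a ring distance; it is our illustration of the hypothesis, not the paper's model.

```python
# The word order permutation ring: neighbouring orders differ by one swap
# of adjacent constituents (SOV->SVO swaps O and V, SVO->VSO swaps S and V,
# and so on around the cycle). Transitions between nearby orders are
# hypothesized to be more likely.
RING = ["SOV", "SVO", "VSO", "VOS", "OVS", "OSV"]

def ring_distance(a, b):
    """Smallest number of steps around the ring between two orders."""
    i, j = RING.index(a), RING.index(b)
    d = abs(i - j)
    return min(d, len(RING) - d)

print(ring_distance("SOV", "SVO"))  # 1: adjacent on the ring
print(ring_distance("SOV", "VOS"))  # 3: maximally distant
```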
Thu, 10 Nov 2016 12:33:17 GMT