Book chapters
http://hdl.handle.net/2117/3976
2017-10-20T19:59:23Z
http://hdl.handle.net/2117/104935
Parallel algorithms for two processors precedence constraint scheduling
Serna Iglesias, María José
The final publication is available at link.springer.com
2017-05-26T14:27:35Z
http://hdl.handle.net/2117/104934
Randomized parallel approximations to max flow
Serna Iglesias, María José
The final publication is available at link.springer.com
2017-05-26T14:21:33Z
http://hdl.handle.net/2117/102181
OperA/ALIVE/OperettA
Aldewereld, Huib; Álvarez Napagao, Sergio; Dignum, Virginia; Jiang, Jie; Vasconcelos, Wamberto; Vázquez Salceda, Javier
Comprehensive models for organizations must, on the one hand, be able to specify global goals and requirements but, on the other hand, cannot assume that particular actors will always act according to the needs and expectations of the system design. Concepts such as organizational rules (Zambonelli 2002), norms and institutions (Dignum and Dignum 2001; Esteva et al. 2002), and social structures (Parunak and Odell 2002) arise from the idea that the effective engineering of organizations needs high-level, actor-independent concepts and abstractions that explicitly define the organization in which agents live (Zambonelli 2002).
2017-03-09T10:21:57Z
http://hdl.handle.net/2117/102084
Data mining to support tutoring in virtual learning communities: experiences and challenges
Gaudioso, Elena; Talavera Méndez, Luis Jose
Computers and the Internet are becoming widely used in educational contexts. In particular, the wide availability of Learning Management Systems (LMS) makes it easy to set up virtual communities that provide channels and workspaces to facilitate communication and information sharing. Most of these systems can track students' interactions within the workspaces and store them in a database that can later be analyzed to assess student behavior. In this chapter we review some experiences of using data mining to analyze data obtained from e-learning courses based upon virtual communities. We illustrate several issues that arise in this task, providing real-world examples and applications, and discuss the challenges that must be addressed in order to integrate data mining technologies into LMS.
2017-03-07T16:47:56Z
http://hdl.handle.net/2117/101895
A norm-aware multi-agent system for social simulations in a river basin
Gómez Sebastià, Ignasi; Oliva Felipe, Luís Javier; Cortés García, Claudio Ulises; Verdaguer, Marta; Poch Espallargas, Manel; Rodríguez Roda, Ignasi; Vázquez Salceda, Javier
Wastewater management is a complex task involving a wide range of technical, environmental and social factors. Furthermore, it typically requires the coordination of a heterogeneous society of actors with different goals. Regulations and protocols can be effectively used to tackle this complexity. In this chapter we present a norm-aware multi-agent system for social simulations in a river basin. The norms we present are inspired by European policies for wastewater management, and they can evolve over time.
2017-03-03T08:22:36Z
http://hdl.handle.net/2117/101472
Measuring the quality of open source software ecosystems using QuESo
Franco Bedoya, Óscar Hernán; Ameller, David; Costal Costa, Dolors; Franch Gutiérrez, Javier
2017-02-23T14:04:22Z
http://hdl.handle.net/2117/100685
Impostor-based crowd rendering
Beacco Porres, Alejandro; Pelechano Gómez, Núria; Andújar Gran, Carlos Antonio
Real-time rendering of detailed animated characters in crowd simulations is still a challenging problem in computer graphics. State-of-the-art approaches can render up to several thousand agents by consuming most of the graphics processing unit (GPU) resources, leaving little room for other GPU uses such as driving the crowd simulation. Polygonal meshes deformed through skinning in real time are suitable for simulations involving a relatively small number of agents, since the rendering cost of each animated character is proportional to the complexity of its polygonal representation.
A number of techniques have been proposed to accelerate the rendering of animated characters. Besides view-frustum and occlusion-culling techniques, related work has focused mainly on providing level-of-detail (LoD) representations. Unfortunately, most surface simplification methods do not work well with dynamic articulated meshes. As a consequence, the simplified versions of each character are often created manually, and they still suffer from a substantial loss of detail. Image-based precomputed impostors for the whole character provide substantial speed improvements by rendering distant characters as a textured polygon, but suffer from two major limitations: all animation cycles have to be known in advance (and thus animation blending is not supported), and the resulting textures are huge (for each view angle and animation frame, an image has to be stored). In this chapter, we give an overview of different approaches to crowd rendering, focusing on impostor-based techniques. We summarize and compare two recent approaches [1,2] based on rigidly animated impostors per body limb. Compared to impostors representing an entire character, animated per-joint impostors provide a more memory-efficient approach.
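The texture blow-up of full-character impostors can be made concrete with a back-of-the-envelope count; all figures below are hypothetical, chosen only for illustration:

```python
def impostor_storage_bytes(view_angles, frames, width, height, bytes_per_texel=4):
    """Memory needed for precomputed full-character impostors:
    one RGBA image per (view angle, animation frame) pair."""
    return view_angles * frames * width * height * bytes_per_texel

# Hypothetical figures: 16 x 8 sampled view directions, a 25-frame
# animation cycle, 128 x 128 RGBA images.
total = impostor_storage_bytes(view_angles=16 * 8, frames=25, width=128, height=128)
print(total // 2**20, "MiB")  # 3200 images -> 200 MiB for one animation cycle
```

Since the cost multiplies across view angles and frames, every extra animation cycle adds another block of this size, which is one reason the per-joint impostors discussed in this chapter can be more memory-efficient.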
2017-02-08T13:43:38Z
http://hdl.handle.net/2117/100347
Learning probability distributions generated by finite-state machines
Castro Rabal, Jorge; Gavaldà Mestre, Ricard
We review methods for inference of probability distributions generated by probabilistic automata and related models for sequence generation. We focus on methods that can be proved to learn in the inference-in-the-limit and PAC formal models. The methods we review are state-merging and state-splitting methods for probabilistic deterministic automata, and the recently developed spectral method for nondeterministic probabilistic automata. In both cases, we derive them from a high-level algorithm described in terms of the Hankel matrix of the distribution to be learned, given as an oracle, and then describe how to adapt that algorithm to account for the error introduced by a finite sample.
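The Hankel matrix underlying the abstract's high-level algorithm can be illustrated with a toy oracle. The basis, the distribution, and the helper below are invented for illustration; the chapter's actual algorithms (state merging/splitting, the spectral method) work from such a matrix:

```python
def hankel(dist, prefixes, suffixes):
    """Hankel matrix of a string distribution over a prefix/suffix basis:
    entry (u, v) holds the probability of the concatenation u + v."""
    return [[dist(u + v) for v in suffixes] for u in prefixes]

# Toy oracle: a 1-state machine over {a, b} that stops with probability
# 0.5 and otherwise emits 'a', so P(a^n) = 0.5**(n + 1), and P(w) = 0
# for any w containing a 'b'.
def dist(w):
    return 0.5 ** (len(w) + 1) if set(w) <= {"a"} else 0.0

H = hankel(dist, prefixes=["", "a", "b"], suffixes=["", "a", "b"])
```

The rank of the (infinite) Hankel matrix equals the number of states of a minimal weighted automaton generating the distribution, which is precisely what the spectral method exploits; here every nonzero row is a multiple of the first, consistent with the single-state oracle.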
2017-01-31T09:07:39Z
http://hdl.handle.net/2117/100157
Fast calculation of entropy with Zhang's estimator
Lozano Bojados, Antoni; Casas Fernández, Bernardino; Bentz, Chris; Ferrer Cancho, Ramon
Entropy is a fundamental property of a repertoire. Here, we present an efficient algorithm to estimate the entropy of types with the help of Zhang's estimator. The algorithm takes advantage of the fact that the number of distinct frequencies in a text is in general much smaller than the number of types. We justify the usefulness of the algorithm by means of an analysis of the statistical properties of texts from more than 1000 languages. Our work opens up various possibilities for future research.
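The speedup idea in the abstract (summing over distinct frequencies rather than over types) can be sketched with the plug-in maximum-likelihood estimator standing in for Zhang's estimator, whose exact formula the abstract does not give:

```python
from collections import Counter
from math import log

def entropy_by_frequency(tokens):
    """Plug-in entropy computed over the frequency spectrum:
    m_f = number of types occurring exactly f times, so the sum has
    one term per distinct frequency instead of one term per type."""
    freqs = Counter(tokens)             # type -> frequency
    spectrum = Counter(freqs.values())  # frequency f -> m_f
    n = sum(freqs.values())
    return sum(m * (f / n) * log(n / f) for f, m in spectrum.items())

print(entropy_by_frequency(list("abracadabra")))
```

In a natural-language text the frequency spectrum is far smaller than the vocabulary (many types share low frequencies), so grouping terms this way shortens the sum without changing its value.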
2017-01-27T08:06:04Z
http://hdl.handle.net/2117/99193
Identifying nutritional patterns through integrative multiview clustering
Sevilla-Villanueva, Beatriz; Gibert, Karina; Sànchez-Marrè, Miquel
The main goal of this work is to develop a methodology for finding nutritional patterns based on a variety of subject characteristics, which can contribute to a better understanding of the interactions between nutrition and health, given that the complexity of the phenomenon leads to poor performance with classical approaches. An innovative methodology based on advanced clustering techniques is proposed in order to find more compact patterns or clusters. Integrative Multiview Clustering (IMC) combines the Multiview Clustering approach with crossing operations over the several partitions obtained. A comparison with other classical clustering techniques is provided to assess the performance of our approach. The Dunn-like cluster validity index proposed by Bezdek and Pal is used for the comparison from a structural point of view, as it is more robust than the original Dunn index. According to this index, IMC performs better than other popular clustering techniques. Our findings suggest that Integrative Multiview Clustering provides more compact and separated clusters. In addition, IMC helps to reduce the high dimensionality of the data through the multiview division of attributes, and the resulting partition is easier to interpret. Using the Integrative Multiview Clustering approach, a good partition is obtained from a structural point of view, and its interpretation is clearer than that obtained by classical approaches.
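One natural reading of the "crossing" operation over per-view partitions is intersecting cluster assignments: two subjects end up together only if every view clustered them together. The sketch below illustrates that idea under this assumption; it is not the chapter's actual IMC algorithm, and the view data is hypothetical:

```python
def cross_partitions(*partitions):
    """Intersect cluster assignments from several views: each object
    gets a new label identified by the tuple of its per-view labels."""
    labels = {}
    crossed = []
    for combo in zip(*partitions):
        crossed.append(labels.setdefault(combo, len(labels)))
    return crossed

# Two hypothetical views clustering the same five subjects.
diet_view   = [0, 0, 1, 1, 1]
health_view = [0, 1, 1, 1, 0]
print(cross_partitions(diet_view, health_view))  # -> [0, 1, 2, 2, 3]
```

Crossing refines every input partition, which is consistent with the abstract's claim of obtaining more compact clusters, at the price of potentially many small groups.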
2017-01-13T09:28:14Z