DSpace Collection:
http://hdl.handle.net/2117/3093
Sat, 02 Aug 2014 07:01:29 GMT
webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació

Single-entry single-exit decomposed conformance checking
http://hdl.handle.net/2117/23672
Title: Single-entry single-exit decomposed conformance checking
Authors: Muñoz Gama, Jorge; Carmona Vargas, Josep; Van der Aalst, Wil M. P.
Abstract: An exponential growth of event data can be witnessed across all industries. Devices connected to the internet (internet of things), social interaction, mobile computing, and cloud computing provide new sources of event data and this trend will continue. The omnipresence of large amounts of event data is an important enabler for process mining. Process mining techniques can be used to discover, monitor and improve real processes by extracting knowledge from observed behavior. However, unprecedented volumes of event data also provide new challenges and often state-of-the-art process mining techniques cannot cope. This paper focuses on “conformance checking in the large” and presents a novel decomposition technique that partitions larger process models and event logs into smaller parts that can be analyzed independently. The so-called Single-Entry Single-Exit (SESE) decomposition not only helps to speed up conformance checking, but also provides improved diagnostics. The analyst can zoom in on the problematic parts of the process. Importantly, the conditions under which the conformance of the whole can be assessed by verifying the conformance of the SESE parts are described, which enables the decomposition and distribution of large conformance checking problems. All the techniques have been implemented in ProM, and experimental results are provided.
Fri, 01 Aug 2014 12:25:50 GMT
Process mining, Conformance checking, Decomposition, Process diagnosis

Reasoning about orchestrations of web services using partial correctness
http://hdl.handle.net/2117/23530
Title: Reasoning about orchestrations of web services using partial correctness
Authors: Stewart, Alan; Gabarró Vallès, Joaquim; Keenan, Anthony
Abstract: A service is a remote computational facility which is made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared state services and services with states which are customised for individual users. A service-based orchestration is a multi-threaded computation which invokes remote services in order to deliver results back to a user (publication). In this paper a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable, the termination of even finite orchestrations cannot be guaranteed. For this reason a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
Wed, 16 Jul 2014 12:57:09 GMT
World Wide Web – Service – Specification – Orchestration – Orc – Partial correctness – Preorders – Fixed points – Powerdomains

Excessively duplicating patterns represent non-regular languages
http://hdl.handle.net/2117/23504
Title: Excessively duplicating patterns represent non-regular languages
Authors: Creus López, Carles; Godoy Balil, Guillem; Ramos Garrido, Lander
Abstract: A constrained term pattern s:φ represents the language of all instances of the term s satisfying the constraint φ. For each variable in s, this constraint specifies the language of its allowed substitutions. Regularity of languages represented by sets of patterns has been studied for a long time. This problem is known to be coNP-complete when the constraints allow each variable to be replaced by any term over a fixed signature, and EXPTIME-complete when the constraints restrict each variable to a regular set. In both cases, duplication of variables in the terms of the patterns is a necessary condition for non-regularity. This is because duplications force the recognizer to test equality between subterms. Hence, for the specific classes of constraints mentioned above, if all patterns are linear, then the represented language is necessarily regular. In this paper we focus on the opposite case, that is when there are patterns with
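The linearity condition that separates the regular case from the one studied here is easy to test mechanically. A minimal sketch, assuming terms are nested tuples headed by a function symbol and variables are the strings x, y, z (an illustrative convention, not the paper's formalism):

```python
from collections import Counter

VARS = ("x", "y", "z")  # illustrative variable names

def variables(term):
    """Collect every variable occurrence in a term, with multiplicity."""
    if isinstance(term, str):
        return [term] if term in VARS else []
    occs = []
    for arg in term[1:]:   # term[0] is the function symbol
        occs += variables(arg)
    return occs

def is_linear(term):
    """A pattern is linear iff no variable occurs more than once."""
    return all(n == 1 for n in Counter(variables(term)).values())

# f(x, g(y)) is linear; f(x, g(x)) duplicates x and may force
# the recognizer to test equality between subterms.
linear_ok = is_linear(("f", "x", ("g", "y")))
linear_bad = is_linear(("f", "x", ("g", "x")))
```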
Mon, 14 Jul 2014 12:29:43 GMT
Theory of computation, Pattern, Regular tree language, Tree automaton, Tree homomorphism

Turing's algorithmic lens: from computability to complexity theory
http://hdl.handle.net/2117/22738
Title: Turing's algorithmic lens: from computability to complexity theory
Authors: Díaz Cort, Josep; Torras, Carme
Abstract: The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in nondeterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still-open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing’s way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is.
Mon, 28 Apr 2014 17:13:15 GMT
automation
Author keywords: Turing, computer science, computability, complexity theory, algorithmics

Emptiness and finiteness for tree automata with global reflexive disequality constraints
http://hdl.handle.net/2117/22680
Title: Emptiness and finiteness for tree automata with global reflexive disequality constraints
Authors: Creus López, Carles; Gascon Caro, Adrian; Godoy Balil, Guillem
Abstract: In recent years, several extensions of tree automata have been considered. Most of them are related to the capability of testing equality or disequality of certain subterms of the term evaluated by the automaton. In particular, tree automata with global constraints are able to test equality and disequality of subterms depending on the state to which they are evaluated. The emptiness problem is known to be decidable for this kind of automata, but with a non-elementary time complexity, and the finiteness problem remains open. In this paper, we consider the particular case of tree automata with global constraints in which the constraint is a conjunction of disequalities between states and the disequality predicate is forced to be reflexive. This restriction is significant in the context of XML definitions with monadic key constraints. We prove that emptiness and finiteness are decidable in triple exponential time for this kind of automata. © 2012 Springer Science+Business Media Dordrecht.
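For ordinary bottom-up tree automata, without the global constraints considered here, emptiness is decidable in polynomial time by marking the states that some tree can reach; the constrained case of the paper needs far more machinery, but the baseline marking algorithm is worth seeing (the transition encoding is an assumption for illustration):

```python
def nonempty_states(rules):
    """rules: list of transitions (symbol, (q1, ..., qk), q), meaning
    symbol applied to trees in states q1..qk yields a tree in state q.
    Returns the states reached by at least one tree (bottom-up marking)."""
    reach = set()
    changed = True
    while changed:
        changed = False
        for _, args, q in rules:
            if q not in reach and all(a in reach for a in args):
                reach.add(q)
                changed = True
    return reach

rules = [
    ("a", (), "q0"),             # constant a reaches q0
    ("f", ("q0", "q0"), "q1"),   # f(a, a) reaches q1
    ("g", ("q2",), "q3"),        # q2 is never produced, so q3 stays empty
]
reachable = nonempty_states(rules)
```

The language at a final state is non-empty exactly when that state is marked; here q1 is inhabited (by f(a, a)) while q3 is not.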
Wed, 23 Apr 2014 14:55:34 GMT
Decision problems, Disequality constraints, Global constraints, Tree automata

Non-linear rewrite closure and weak normalization
http://hdl.handle.net/2117/20443
Title: Non-linear rewrite closure and weak normalization
Authors: Creus López, Carles; Godoy Balil, Guillem; Massanes Basi, Francesc d'Assis; Tiwari, Ashish Kumar
Abstract: A rewrite closure is an extension of a term rewrite system with new rules, usually deduced by transitivity. Rewrite closures have the nice property that all rewrite derivations can be transformed into derivations of a simple form. This property has been useful for proving decidability results in term rewriting. Unfortunately, when the term rewrite system is not linear, the construction of a rewrite closure is quite challenging. In this paper, we construct a rewrite closure for term rewrite systems that satisfy two properties: the right-hand side term in each rewrite rule contains no repeated variable (right-linear) and contains no variable occurring at depth greater than one (right-shallow). The left-hand side term is unrestricted, and in particular, it may be non-linear. As a consequence of the rewrite closure construction, we are able to prove decidability of the weak normalization problem for right-linear right-shallow term rewrite systems. Proving this result also requires tree automata theory. We use the fact that right-shallow right-linear term rewrite systems are regularity preserving. Moreover, their set of normal forms can be represented with a tree automaton with disequality constraints, and emptiness of this kind of automata, as well as its generalization to reduction automata, is decidable. A preliminary version of this work was presented at LICS 2009.
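The two syntactic restrictions are straightforward to test on a rule. A sketch, assuming rules are (lhs, rhs) pairs of nested tuples headed by a function symbol, with variables named x, y, z (an illustrative encoding, not the paper's notation):

```python
VARS = ("x", "y", "z")  # illustrative variable names

def vars_at_depth(term, depth=0):
    """Yield (variable, depth) for every variable occurrence in a term."""
    if isinstance(term, str):
        if term in VARS:
            yield term, depth
    else:
        for arg in term[1:]:   # term[0] is the function symbol
            yield from vars_at_depth(arg, depth + 1)

def right_linear_shallow(rule):
    """Check the paper's two conditions on the right-hand side only:
    no repeated variable, and no variable deeper than depth one.
    The left-hand side is unrestricted."""
    _, rhs = rule
    occs = list(vars_at_depth(rhs))
    names = [v for v, _ in occs]
    return len(names) == len(set(names)) and all(d <= 1 for _, d in occs)

ok = right_linear_shallow((("f", "x", "x"), ("g", "x")))      # non-linear LHS is fine
deep = right_linear_shallow((("a",), ("f", ("g", "x"))))       # x at depth 2: not shallow
dup = right_linear_shallow((("f", "x"), ("h", "x", "x")))      # x repeated: not right-linear
```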
Tue, 22 Oct 2013 12:42:27 GMT
Rewrite closure, Term rewriting, Tree automata, Weak normalization

Architectural exploration of large-scale hierarchical chip multiprocessors
http://hdl.handle.net/2117/20442
Title: Architectural exploration of large-scale hierarchical chip multiprocessors
Authors: Nikitin, Nikita; San Pedro Martín, Javier de; Cortadella Fortuny, Jordi
Abstract: The continuous scaling of nanoelectronics is increasing the complexity of chip multiprocessors (CMPs) and exacerbating the memory wall problem. As CMPs become more complex, the memory subsystem is organized into more hierarchical structures to better exploit locality. To efficiently discover promising architectures within the rapidly growing search space, exhaustive exploration is replaced with tools that implement intelligent search strategies. Moreover, faster analytical models are preferred to costly simulations for estimating the performance and power of CMP architectures. The memory traffic generated by CMP cores has a cyclic dependency with the latency of the memory subsystem, which critically affects the overall system performance. Based on this observation, a novel scalable analytical method is proposed to estimate the performance of highly parallel CMPs (hundreds or thousands of cores) with hierarchical interconnect networks. The method can use customizable probabilistic models and solves the cyclic dependencies between traffic and latency by using a fixed-point strategy. By using the analytical model as a performance and power estimator, an efficient metaheuristic-based search is proposed for the exploration of large design spaces. The proposed techniques are shown to be very accurate and a promising strategy when compared to the results obtained by simulation.
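The traffic/latency cycle can be illustrated with a deliberately simplified model: higher latency throttles the rate at which cores issue requests, while higher traffic congests the interconnect and raises latency. The linear congestion and throttling functions below are invented for illustration (the paper uses customizable probabilistic models), but the fixed-point iteration is the same idea:

```python
def solve_fixed_point(base_lat, cong, demand, tol=1e-9, max_iter=1000):
    """Iterate the mutually dependent traffic and latency estimates
    until they stop changing (a fixed point of the combined map)."""
    lat = base_lat
    traffic = demand / (1.0 + lat)
    for _ in range(max_iter):
        traffic = demand / (1.0 + lat)        # higher latency throttles cores
        new_lat = base_lat + cong * traffic   # more traffic raises latency
        if abs(new_lat - lat) < tol:
            return new_lat, traffic
        lat = new_lat
    return lat, traffic

# Example: base latency 10 cycles, congestion factor 5, demand 2.
lat, traffic = solve_fixed_point(10.0, 5.0, 2.0)
```

At the returned point both equations hold simultaneously, which is exactly the self-consistency the analytical model needs before it can feed a performance estimate to the search.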
Tue, 22 Oct 2013 12:34:01 GMT
Analytical modeling, chip multiprocessing, design space exploration, metaheuristics, numerical methods

Decidable classes of tree automata mixing local and global constraints modulo flat theories
http://hdl.handle.net/2117/20262
Title: Decidable classes of tree automata mixing local and global constraints modulo flat theories
Authors: Barguño, Luis; Creus López, Carles; Godoy Balil, Guillem; Jacquemard, Florent; Vacher, Camille
Abstract: We define a class of ranked tree automata TABG generalizing both the tree automata with local tests between brothers of Bogaert and Tison (1992) and the tree automata with global equality and disequality constraints (TAGED) of Filiot et al. (2007). TABG can test for equality and disequality modulo a given flat equational theory between brother subterms and between subterms whose positions are defined by the states reached during a computation. In particular, TABG can check that all the subterms reaching a given state are distinct. This constraint is related to monadic key constraints for XML documents, meaning that every two distinct positions of a given type have different values. We prove decidability of the emptiness problem for TABG. This solves, in particular, the open question of the decidability of emptiness for TAGED. We further extend our result by allowing global arithmetic constraints for counting the number of occurrences of some state or the number of different equivalence classes of subterms (modulo a given flat equational theory) reaching some state during a computation. We also adapt the model to unranked ordered terms. As a consequence of our results for TABG, we prove the decidability of a fragment of the monadic second-order logic on trees extended with predicates for equality and disequality between subtrees, and cardinality.
Wed, 02 Oct 2013 11:49:06 GMT
Automata theory, Computability and decidability, Equivalence classes, Forestry, XML

Reference databases for taxonomic assignment in metagenomics
http://hdl.handle.net/2117/20108
Title: Reference databases for taxonomic assignment in metagenomics
Authors: Santamaria, Monica; Fosso, Bruno; Consiglio, Arianna; De Caro, Giorgio; Grillo, Giorgio; Licciulli, Flavio; Liuni, Sabino; Marzano, Marinella; AlonsoAlemany, Daniel; Valiente Feruglio, Gabriel Alejandro; Pesole, Graziano
Abstract: Metagenomics is providing unprecedented access to environmental microbial diversity. The amplicon-based metagenomics approach involves the PCR-targeted sequencing of a genetic locus fitting several criteria: it must be ubiquitous in the taxonomic range of interest, variable enough to discriminate between different species yet flanked by highly conserved sequences, and of suitable size to be sequenced through next-generation platforms. The internal transcribed spacers 1 and 2 (ITS1 and ITS2) of the ribosomal DNA operon and one or more hypervariable regions of the 16S ribosomal RNA gene are typically used to identify fungal and bacterial species, respectively.
In this context, reliable reference databases and taxonomies are crucial to assign amplicon sequence reads to the correct phylogenetic ranks. Several resources provide consistent phylogenetic classification of publicly available 16S ribosomal DNA sequences, whereas the state of ribosomal internal transcribed spacer reference databases is notably less advanced. In this review, we aim to give an overview of existing reference resources for both types of markers, highlighting strengths and possible shortcomings of their use for metagenomics purposes. Moreover, we present a new database, ITSoneDB, of well-annotated and phylogenetically classified ITS1 sequences to be used as a reference collection in metagenomic studies of environmental fungal communities. ITSoneDB is available for download and browsing at http://itsonedb.ba.itb.cnr.it/.
Mon, 09 Sep 2013 11:44:31 GMT
Metagenomics, reference database, ITS, 16S rRNA, microbial communities

Evaluation of struggle strategy in genetic algorithms for ground stations scheduling problem
http://hdl.handle.net/2117/19681
Title: Evaluation of struggle strategy in genetic algorithms for ground stations scheduling problem
Authors: Xhafa Xhafa, Fatos; Herrero, Xavier; Barolli, Admir; Barolli, Leonard; Takizawa, Makoto
Abstract: The ground station scheduling problem arises in spacecraft operations and aims to allocate ground stations to spacecraft to make communication between operations teams and spacecraft systems possible. The problem belongs to the family of satellite scheduling for the specific case of mapping communications to ground stations. The allocation of a ground station to a mission (e.g. telemetry, tracking information, etc.) has a high cost, and automation of the process provides many benefits, not only in terms of management but in economic terms as well. The problem is known for its high complexity, as it is an over-constrained problem. In this paper, we present the resolution of the problem through Struggle Genetic Algorithms, a version of GAs distinguished by its efficiency in maintaining the diversity of the population during genetic evolution. We present some computational results obtained with the Struggle GA using the STK simulation toolkit, which show the efficiency of the method in solving the problem.
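The struggle replacement rule itself is simple: a newly generated child competes with the most similar member of the population rather than the worst one, which is what preserves diversity. A minimal sketch (the encoding, fitness and distance functions are illustrative, not the STK experimental setup):

```python
def struggle_replace(population, child, fitness, distance):
    """Struggle strategy: the child replaces its most similar
    individual, but only if the child is fitter."""
    rival = min(population, key=lambda ind: distance(ind, child))
    if fitness(child) > fitness(rival):
        population[population.index(rival)] = child
    return population

def hamming(a, b):
    """Distance between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

# Toy demo: the child "111" is closest to "110" and fitter, so it
# replaces "110" while the dissimilar "000" survives, keeping diversity.
pop = ["000", "110"]
struggle_replace(pop, "111", fitness=lambda s: s.count("1"), distance=hamming)
```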
Wed, 26 Jun 2013 15:48:39 GMT
Ground station scheduling, Satellite scheduling, Struggle Genetic Algorithms, Constraint programming, Simulation

P2P data replication and trustworthiness for a JXTA-Overlay P2P system using fuzzy logic
http://hdl.handle.net/2117/19618
Title: P2P data replication and trustworthiness for a JXTA-Overlay P2P system using fuzzy logic
Authors: Spaho, Evjola; Barolli, Leonard; Xhafa Xhafa, Fatos; Biberaj, A.; Shurdi, O.
Abstract: P2P systems are very important for future distributed systems and applications. In such systems, the computational burden of the system can be distributed to the peer nodes of the system. Therefore, in decentralized systems users become actors themselves by sharing, contributing and controlling the resources of the system. This characteristic makes P2P systems very interesting for the development of decentralized applications. Data replication techniques are commonplace in P2P systems. Data replication means storing copies of the same data at multiple peers, thus improving availability and scalability. The trustworthiness of peers is also very important for safe communication in P2P systems. The trustworthiness of a peer can be evaluated based on the reputation and actual behaviour of peers in providing services to other peers. In this paper, we propose two fuzzy-based systems for data replication and peer trustworthiness for the JXTA-Overlay P2P platform. The simulation results show that in the first system the replication factor increases proportionally with the number of documents per peer, the replication percentage and the scale of replication per peer, and that the second system can be used successfully to select the most reliable peer candidate to execute the tasks.
Fri, 21 Jun 2013 16:08:34 GMT
Data replication, Fuzzy system, JXTA-Overlay, P2P, Trustworthiness

Ant colony optimization theory: a survey
http://hdl.handle.net/2117/18911
Title: Ant colony optimization theory: a survey
Authors: Dorigo, Marco; Blum, Christian
Abstract: Research on a new metaheuristic for optimization is often initially focused on proof-of-concept applications. It is only after experimental work has shown the practical interest of the method that researchers try to deepen their understanding of the method's functioning not only through more and more sophisticated experiments but also by means of an effort to build a theory. Tackling questions such as "how and why the method works" is important, because finding an answer may help in improving its applicability. Ant colony optimization, which was introduced in the early 1990s as a novel technique for solving hard combinatorial optimization problems, finds itself currently at this point of its life cycle. With this article we provide a survey on theoretical results on ant colony optimization. First, we review some convergence results. Then we discuss relations between ant colony optimization algorithms and other approximate methods for optimization. Finally, we focus on some research efforts directed at gaining a deeper understanding of the behavior of ant colony optimization algorithms. Throughout the paper we identify some open questions with a certain interest of being solved in the near future.
Description: "Theoretical Computer Science Top Cited Article 2005-2010"
Mon, 22 Apr 2013 10:09:40 GMT
http://hdl.handle.net/2117/18911
2013-04-22T10:09:40Z
Dorigo, Marco; Blum, Christian
no
Ant colony optimization, Metaheuristics, Combinatorial optimization, Convergence, Stochastic gradient descent, Model-based search, Approximate algorithms
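As a companion to the surveyed technique, the original Ant System can be sketched in a few lines for a small TSP instance. This is an illustrative toy, not code from the article: the function name, the parameter defaults, and the instance size are assumptions; the probabilistic edge choice, evaporation, and tour-length-weighted reinforcement follow the standard Ant System scheme.

```python
import random

def aco_tsp(dist, n_ants=20, n_iter=30, rho=0.5, alpha=1.0, beta=2.0, seed=0):
    """Minimal Ant System for the TSP.

    dist is a symmetric matrix of positive distances. Each ant builds a
    tour city by city, picking the next city with probability proportional
    to pheromone**alpha * (1/distance)**beta; afterwards pheromone
    evaporates (factor 1-rho) and each ant deposits 1/tour_length on the
    edges of its tour.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then reinforcement, as in the original Ant System.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

The survey's convergence results concern exactly this kind of iterated sampling-and-reinforcement loop: evaporation bounds the pheromone values, while reinforcement biases future sampling toward previously good tours.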

Computation of several power indices by generating functions
http://hdl.handle.net/2117/17925
Title: Computation of several power indices by generating functions
Authors: Alonso Meijide, José María; Freixas Bosch, Josep; Molinero Albareda, Xavier
Abstract: In this paper we propose methods to compute the Deegan-Packel, the Public Good, and the Shift power indices by generating functions for the particular case of weighted voting games. Furthermore, we define a new power index which combines the ideas of the Shift and the Deegan-Packel power indices and also propose a method to compute it with generating functions. We conclude with some comments on the complexity of computing these power indices.
Fri, 22 Feb 2013 10:25:06 GMT
http://hdl.handle.net/2117/17925
2013-02-22T10:25:06Z
Alonso Meijide, José María; Freixas Bosch, Josep; Molinero Albareda, Xavier
no
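The generating-function machinery behind such computations can be illustrated on the classical raw Banzhaf counts for a weighted voting game; the Deegan-Packel and Shift indices treated in the paper use the same polynomial technique, restricted to minimal (respectively shift-minimal) winning coalitions. The function name and interface below are assumptions for this sketch, not the paper's algorithm.

```python
def banzhaf_via_gf(weights, quota):
    """Raw Banzhaf (swing) counts via generating functions.

    For player i, the polynomial prod over j != i of (1 + x**w_j) has, as
    coefficient of x**s, the number of coalitions of total weight s that
    exclude i. Player i is a swing for exactly the coalitions with
    quota - w_i <= s < quota.
    """
    total = sum(weights)
    swings = []
    for i, wi in enumerate(weights):
        coeff = [0] * (total + 1)
        coeff[0] = 1                      # the empty coalition
        for j, wj in enumerate(weights):
            if j == i:
                continue
            # Multiply by (1 + x**wj): add a shifted copy, highest degree
            # first so each factor is applied only once.
            for s in range(total - wj, -1, -1):
                coeff[s + wj] += coeff[s]
        swings.append(sum(coeff[s] for s in range(max(quota - wi, 0), quota)))
    return swings
```

For the game [4; 3, 2, 1] this yields the swing counts (3, 1, 1). The point of the generating-function approach, as in the paper, is that the counting runs in time polynomial in the total weight rather than enumerating all 2^n coalitions.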

Coaching on new technologies: programming workshop on Android applications for Google phones
http://hdl.handle.net/2117/16927
Title: Coaching on new technologies: programming workshop on Android applications for Google phones
Authors: Blesa Aguilera, Maria Josep; Duch Brown, Amalia; Gabarró Vallès, Joaquim; Hernández, Hugo; Serna Iglesias, María José
Abstract: In this work we describe our experience teaching an innovative Android programming workshop organized by the Universitat Politècnica de Catalunya (UPC) within the Android EDU Google EMEA Program. The growing interest in Android has allowed us to apply proactive learning techniques with very good results. As teachers, this was a challenging experience that forced us to rethink our role, to create educational material suited to the new communication media (forums, YouTube, etc.), and to make up for the lack of expertise through an interesting collaboration between teachers and students. After three semesters teaching this workshop, we are convinced that this is an experience worth sharing, since the results have far exceeded our expectations.
Thu, 15 Nov 2012 11:15:38 GMT
http://hdl.handle.net/2117/16927
2012-11-15T11:15:38Z
Blesa Aguilera, Maria Josep; Duch Brown, Amalia; Gabarró Vallès, Joaquim; Hernández, Hugo; Serna Iglesias, María José
no
Android EDU Google EMEA Program

Iterated greedy algorithms for the maximal covering location problem
http://hdl.handle.net/2117/16393
Title: Iterated greedy algorithms for the maximal covering location problem
Authors: Rodríguez, Francisco J.; Blum, Christian; Lozano, Manuel; García Martínez, Carlos
Abstract: The problem of allocating a set of facilities in order to maximise the sum of the demands of the covered clients is known as the maximal covering location problem. In this work we tackle this problem by means of iterated greedy algorithms. These algorithms iteratively refine a solution by partial destruction and reconstruction, using a greedy constructive procedure. Iterated greedy algorithms have been applied successfully to solve a considerable number of problems. With the aim of providing additional results and insights along this line of research, this paper proposes two new iterated greedy algorithms that incorporate two innovative components: a population of solutions optimised in parallel by the iterated greedy algorithm, and an improvement procedure that explores a large neighbourhood by means of an exact solver. The benefits of the proposal in comparison to a recently proposed decomposition heuristic and a standalone exact solver are experimentally shown.
Tue, 28 Aug 2012 10:04:07 GMT
http://hdl.handle.net/2117/16393
2012-08-28T10:04:07Z
Rodríguez, Francisco J.; Blum, Christian; Lozano, Manuel; García Martínez, Carlos
no
Maximal covering location problem
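The destruction/reconstruction loop described in the abstract can be sketched as follows. This is a simplified single-solution variant, without the paper's solution population or exact-solver neighbourhood; the function name, data layout, and parameters are illustrative assumptions.

```python
import random

def iterated_greedy_mclp(cover, demand, p, n_iter=200, destroy=1, seed=0):
    """Iterated greedy sketch for the maximal covering location problem.

    cover[f] is the set of clients covered by facility f, demand[c] the
    demand of client c, and p the number of facilities to open. Each round
    removes `destroy` random facilities from the incumbent, greedily
    re-adds the facilities covering the most additional demand, and keeps
    the result if the total covered demand does not decrease.
    """
    rng = random.Random(seed)
    facilities = list(cover)

    def covered(sol):
        clients = set().union(*(cover[f] for f in sol)) if sol else set()
        return sum(demand[c] for c in clients)

    def greedy_complete(sol):
        sol = set(sol)
        while len(sol) < p:
            clients = set().union(*(cover[f] for f in sol)) if sol else set()
            best = max((f for f in facilities if f not in sol),
                       key=lambda f: sum(demand[c] for c in cover[f] - clients))
            sol.add(best)
        return sol

    best_sol = greedy_complete(set())          # initial greedy solution
    best_val = covered(best_sol)
    for _ in range(n_iter):
        # Destruction: drop `destroy` facilities; reconstruction: greedy refill.
        partial = rng.sample(sorted(best_sol), len(best_sol) - destroy)
        sol = greedy_complete(partial)
        val = covered(sol)
        if val >= best_val:                    # accept ties to keep moving
            best_sol, best_val = sol, val
    return best_sol, best_val
```

The paper's contributions replace the single incumbent with a population optimised in parallel and swap the simple greedy refill for a large-neighbourhood step solved exactly, but the destroy/rebuild/accept skeleton is the same.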