DSpace Collection:
http://hdl.handle.net/2117/3095
Thu, 27 Nov 2014 01:50:14 GMT
webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació

Event-based real-time decomposed conformance analysis
http://hdl.handle.net/2117/24754
Authors: vanden Broucke, Seppe; Muñoz Gama, Jorge; Carmona Vargas, Josep; Baesens, Bart; Vanthienen, Jan
Abstract: Process mining deals with the extraction of knowledge from event logs. One important task within this research field is conformance checking, which aims to diagnose deviations and discrepancies between modeled behavior and real-life, observed behavior. Conformance checking techniques still face challenges, among them scalability, timeliness, and traceability. In this paper, we propose a novel conformance analysis methodology to support the real-time monitoring of event-based data streams, which is shown to be more efficient than related approaches and able to localize deviations in a more fine-grained manner. Our approach can be directly applied in business process contexts where rapid reaction times are crucial; an exhaustive case example is provided to demonstrate the validity of the approach.
Wed, 19 Nov 2014 12:31:08 GMT
Keywords: Real-time monitoring, Process decomposition, Conformance checking, Conformance analysis, Process mining, Event logs
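For readers new to the area, conformance checking can be made concrete with a toy token-replay check on a hand-written Petri net. Everything below (the net encoding, the place and transition names) is an illustrative assumption, not the decomposed real-time technique of the paper:

```python
# Hypothetical sketch: token replay of a trace on a tiny Petri net.
# A trace conforms if every event can fire in order and the run ends
# in the final marking.

def replay(net, initial, final, trace):
    """net maps a transition to (input_places, output_places)."""
    marking = dict(initial)
    for t in trace:
        pre, post = net[t]
        if any(marking.get(p, 0) < 1 for p in pre):
            return False                 # deviation: t is not enabled
        for p in pre:
            marking[p] -= 1
        for p in post:
            marking[p] = marking.get(p, 0) + 1
    # conforming traces must end in the final marking
    return {p: n for p, n in marking.items() if n > 0} == final

# "a" then "b" is the only conforming behavior of this toy net
net = {"a": (("start",), ("p1",)), "b": (("p1",), ("end",))}
print(replay(net, {"start": 1}, {"end": 1}, ["a", "b"]))  # True
print(replay(net, {"start": 1}, {"end": 1}, ["b"]))       # False
```

The paper's contribution is to decompose and stream this kind of check; the sketch only fixes what "conforming" means.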

Decomposing alignment-based conformance checking of data-aware process models
http://hdl.handle.net/2117/24753
Authors: Leoni, Massimiliano de; Muñoz Gama, Jorge; Carmona Vargas, Josep; Van der Aalst, Wil M. P.
Abstract: Process mining techniques relate observed behavior to modeled behavior, e.g., the automatic discovery of a Petri net based on an event log. Process mining is not limited to process discovery and also includes conformance checking. Conformance checking techniques are used to evaluate the quality of discovered process models and to diagnose deviations from some normative model (e.g., to check compliance). Existing conformance checking approaches typically focus on the control-flow, and are thus unable to diagnose deviations concerning data. This paper proposes a technique to check the conformance of data-aware process models. We use so-called Petri nets with Data to model data variables, guards, and read/write actions. The data-aware conformance checking problem may be very time-consuming and sometimes even intractable when there are many transitions and data variables. Therefore, we propose a technique to decompose large data-aware conformance checking problems into smaller problems that can be solved more efficiently. We provide a general correctness result showing that decomposition does not influence the outcome of conformance checking. The approach is supported through ProM plugins, and experimental results show significant performance improvements. Experiments have also been conducted on a real-life case study, showing that the approach is also relevant in real business settings.
Wed, 19 Nov 2014 12:13:07 GMT
Keywords: Process mining, Conformance checking, Divide-and-conquer techniques, Multi-perspective process modelling

A recommender system for process discovery
http://hdl.handle.net/2117/24752
Authors: Ribeiro, Joel; Carmona Vargas, Josep; Misir, Mustafa; Sebag, Michele
Abstract: Over the last decade, several algorithms for process discovery and process conformance have been proposed. Still, it is well accepted that there is no dominant algorithm in either of these two disciplines, and thus it is often difficult to apply them successfully. Most of these algorithms require close-to-expert knowledge to be applied satisfactorily. In this paper, we present a recommender system that uses portfolio-based algorithm selection strategies to address two problems: finding the best discovery algorithm for the data at hand, and bridging the gap between general users and process mining algorithms. Experiments performed with the developed tool demonstrate the usefulness of the approach for a variety of instances.
Wed, 19 Nov 2014 11:43:24 GMT
Keywords: Algorithm selection, Process mining, Recommender systems

Narrow proofs may be maximally long
http://hdl.handle.net/2117/24406
Authors: Atserias Peri, Albert; Lauria, Massimo; Nordström, Jakob
Abstract: We prove that there are 3-CNF formulas over n variables that can be refuted in resolution in width w but require resolution proofs of size n^Ω(w). This shows that the simple counting argument that any formula refutable in width w must have a proof of size n^O(w) is essentially tight. Moreover, our lower bounds can be generalized to polynomial calculus resolution (PCR) and Sherali-Adams, implying that the corresponding size upper bounds in terms of degree and rank are tight as well. Our results do not extend all the way to Lasserre, however: the formulas we study have Lasserre proofs of constant rank and size polynomial in both n and w.
Fri, 17 Oct 2014 11:02:37 GMT
Keywords: Degree, Lasserre, Length, PCR, Polynomial calculus, Proof complexity, Rank, Resolution, Sherali-Adams, Size, Width

Correctness of incremental model synchronization with triple graph grammars
http://hdl.handle.net/2117/24364
Authors: Orejas Valdés, Fernando; Pino Blanco, Elvira
Abstract: In model-driven software development, we may have several models describing the same system or artifact by providing different views on it. In this case, we say that these models are consistently integrated. Triple Graph Grammars (TGGs), defined by Schürr, are a general and powerful tool to describe (bidirectional) model transformations. In this context, model synchronization is the operation that, given two consistent models and an update or modification of one of them, finds the corresponding update on the other model so that consistency is restored. There are different approaches to describing this operation in terms of TGGs, but most of them have a computational cost that depends on the size of the given models. In general this may be very costly, since these models may be quite large. To avoid this problem, Giese and Wagner have advocated the need for incremental synchronization procedures, meaning that their cost should depend only on the size of the given update, and they proposed one such procedure. Unfortunately, the correctness of their approach was not studied and, in any case, it could only be ensured under severe restrictions on the kind of TGGs considered. In the present work, we study the problem from a different point of view. First, we discuss what it means for a procedure to be incremental, defining a correctness notion that we call incremental consistency. Moreover, we present a general incremental synchronization procedure and show its correctness, completeness, and incrementality.
Tue, 14 Oct 2014 11:38:12 GMT
Keywords: Incremental model synchronization, Model synchronization, Model transformation, Triple graph grammars

Process-oriented analysis for medical devices
http://hdl.handle.net/2117/24363
Authors: Sfyrla, Vassiliki; Carmona Vargas, Josep; Henck, Pascal
Abstract: Medical Cyber-Physical Systems are widely used in modern healthcare environments. Such systems are considered life-critical due to the severity of the consequences that faults may cause. Effective methods, techniques, and tools for modeling and analyzing medical critical systems are of major importance for ensuring system reliability and patient safety. This work looks at issues concerning different types of medical industry needs, including safety analysis, testing, conformance checking, performance analysis, and optimization. We explore the possibility of addressing these issues by exploiting information recorded in logs generated by medical devices during execution. Process-oriented analysis of logs is known as process mining, a novel field that has gained considerable interest in several contexts in the last decade. Process mining techniques will be applied to an industrial use case provided by Fresenius, a manufacturer of medical devices, to analyze process logs generated by an infusion pump.
Tue, 14 Oct 2014 09:28:19 GMT
Keywords: Discovery, Formal analysis, Infusion pump, Process logs, Process mining

On the average performance of fixed partial match queries in random relaxed K-d trees
http://hdl.handle.net/2117/24268
Authors: Duch Brown, Amalia; Lau LaynesLozada, Gustavo Salvador; Martínez Parra, Conrado
Abstract: We obtain an explicit formula for the expected cost of a fixed partial match query in a random relaxed K-d tree, that is, the expected cost for a query of the form q = (q_0, q_1, ..., q_{K-1}) with 0 < s < K specified coordinates with values z_0, ..., z_{s-1} ∈ (0, 1). This is a generalization of previous results in the literature for s = 1. Qualitatively similar results hold for standard K-d trees, and we conjecture that this also holds for other multidimensional tree structures such as quadtrees and K-d tries.
Mon, 06 Oct 2014 09:14:06 GMT
Keywords: Partial match, K-d trees, Multidimensional data structures, Associative queries
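For readers unfamiliar with partial match queries, here is a minimal sketch of the query itself, assuming points in (0, 1)^K and None marking an unspecified coordinate. The node layout is an illustrative assumption; the paper analyzes the expected cost of this traversal, not its implementation:

```python
# Hypothetical sketch of a partial match query on a (relaxed) K-d tree.
# In a relaxed K-d tree each node carries an arbitrary discriminant
# coordinate rather than one fixed by its depth.

class Node:
    def __init__(self, point, disc, left=None, right=None):
        self.point = point    # tuple of K coordinates in (0, 1)
        self.disc = disc      # discriminant coordinate of this node
        self.left = left
        self.right = right

def partial_match(node, query, eps=1e-9):
    """Return all points whose specified coordinates match the query."""
    if node is None:
        return []
    found = []
    if all(q is None or abs(p - q) < eps
           for p, q in zip(node.point, query)):
        found.append(node.point)
    q = query[node.disc]
    if q is None:                      # unspecified: visit both subtrees
        found += partial_match(node.left, query, eps)
        found += partial_match(node.right, query, eps)
    elif q < node.point[node.disc]:    # specified: one subtree suffices
        found += partial_match(node.left, query, eps)
    else:
        found += partial_match(node.right, query, eps)
    return found

root = Node((0.5, 0.5), 0,
            left=Node((0.2, 0.8), 1),
            right=Node((0.9, 0.1), 0))
print(partial_match(root, (None, 0.8)))  # [(0.2, 0.8)]
```

The expected cost studied in the paper counts the nodes this traversal visits, as a function of which s of the K coordinates are specified.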

Automatic evaluation of reductions between NP-complete problems
http://hdl.handle.net/2117/24254
Authors: Creus López, Carles; Fernández Durán, Pau; Godoy Balil, Guillem
Abstract: We implement an online judge for evaluating the correctness of reductions between NP-complete problems. The site has a list of exercises asking for a reduction between two given problems. Internally, the reduction is evaluated by means of a set of tests. Each test consists of an input of the first problem and gives rise to an input of the second problem through the reduction. The correctness of the reduction, that is, the preservation of the answer between both problems, is checked by applying additional reductions to SAT and using a state-of-the-art SAT solver. In order to represent the reductions, we have defined a new programming language called REDNP. On the one hand, REDNP has specific features for describing typical concepts that frequently appear in reductions, like graphs and clauses. On the other hand, we impose severe limitations on REDNP in order to prevent malicious submissions, such as ones with an embedded SAT solver.
Sun, 05 Oct 2014 06:34:30 GMT
Keywords: NP-completeness, Reductions, SAT application, Self-learning
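The test-based check of answer preservation can be illustrated on a classic textbook reduction. This toy stand-in uses brute-force deciders on small instances instead of the SAT-based machinery of the judge; the problem pair and all names below are illustrative assumptions:

```python
# Illustrative check of answer preservation for a reduction:
# Independent-Set -> Vertex-Cover, decided by brute force on tiny graphs.

from itertools import combinations

def has_independent_set(n, edges, k):
    """Is there a set of k vertices with no edge inside it?"""
    return any(all(not (u in s and v in s) for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def has_vertex_cover(n, edges, k):
    """Is there a set of k vertices touching every edge?"""
    return any(all(u in s or v in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def reduction(instance):
    """Classic fact: G has an IS of size k iff G has a VC of size n - k."""
    n, edges, k = instance
    return (n, edges, n - k)

def answers_preserved(instances):
    """The judge's core question: same yes/no answer on every test?"""
    return all(has_independent_set(*inst) == has_vertex_cover(*reduction(inst))
               for inst in instances)

triangle = [(0, 1), (1, 2), (0, 2)]
print(answers_preserved([(3, triangle, 1), (3, triangle, 2)]))  # True
```

The judge does the same check at scale, with the two deciders replaced by reductions to SAT plus a SAT solver.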

Automatic evaluation of context-free grammars (system description)
http://hdl.handle.net/2117/24226
Authors: Creus López, Carles; Godoy Balil, Guillem
Abstract: We implement an online judge for context-free grammars. Our system contains a list of problems describing formal languages and asking for grammars generating them. A submitted grammar receives a verdict of acceptance or rejection depending on whether the judge determines that it is equivalent to the reference solution grammar provided by the problem setter. Since equivalence of context-free grammars is an undecidable problem, we consider a maximum length l and only test equivalence of the generated languages up to words of length l. This length restriction is very often sufficient for well-meant submissions. Since this restricted problem is still NP-complete, we design and implement methods based on hashing, SAT, and automata that perform well in practice.
Fri, 03 Oct 2014 07:01:26 GMT
Keywords: Automata theory, Context-free grammars, Context-free languages, Automata, Equivalence, Grammars, Hashing, SAT
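The length-restricted problem the judge solves can be stated very directly: enumerate every word of length at most l generated by each grammar and compare the two sets. The brute force below is only meant to pin down that problem (the system itself uses hashing, SAT, and automata); the grammar encoding is an illustrative assumption:

```python
# Naive bounded equivalence test for context-free grammars.
# A grammar maps each nonterminal (uppercase letter) to a list of
# productions, each a string of terminals (lowercase) and nonterminals.

def language_up_to(grammar, start, l):
    """All terminal words of length <= l derivable from start."""
    words, seen, stack = set(), set(), [start]
    while stack:
        form = stack.pop()
        nt = next((i for i, s in enumerate(form) if s.isupper()), None)
        if nt is None:              # no nonterminal left: a finished word
            words.add(form)
            continue
        for prod in grammar[form[nt]]:
            new = form[:nt] + prod + form[nt + 1:]
            # prune: terminals already present can only grow
            if sum(not s.isupper() for s in new) <= l and new not in seen:
                seen.add(new)
                stack.append(new)
    return words

def equivalent_up_to(g1, s1, g2, s2, l):
    return language_up_to(g1, s1, l) == language_up_to(g2, s2, l)

g1 = {"S": ["aSb", ""]}           # a^n b^n
g2 = {"T": ["aTb", "ab", ""]}     # the same language, written differently
print(equivalent_up_to(g1, "S", g2, "T", 4))  # True
```

Exhaustive enumeration is exponential in l, which is why the paper develops the smarter hashing-, SAT-, and automata-based methods.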

Tree automata with height constraints between brothers
http://hdl.handle.net/2117/24225
Authors: Creus López, Carles; Godoy Balil, Guillem
Abstract: We define tree automata with height constraints between brothers (TACBB_H). In TACBB_H, constraints of equalities and inequalities between the heights of siblings restrict the applicability of the rules. These constraints make it possible to express natural tree languages such as complete or balanced (e.g., AVL) trees. We prove decidability of emptiness and finiteness for TACBB_H, and also for a more general class that additionally allows combining equality and disequality constraints between brothers.
Fri, 03 Oct 2014 06:29:03 GMT
Keywords: Constraints, Emptiness, Finiteness, Tree automata
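The AVL-shaped trees mentioned above are a typical language in this class. As a hypothetical sketch (not the automaton construction of the paper), the height constraint between brothers that a TACBB_H rule can impose at every node is simply:

```python
# Check the sibling height constraint |h(left) - h(right)| <= 1 at every
# node of a binary tree given as nested (left, right) tuples; None is
# the empty tree. Illustrative only.

def height(t):
    if t is None:
        return -1
    return 1 + max(height(t[0]), height(t[1]))

def is_avl_shaped(t):
    """Does the tree satisfy the height constraint between brothers everywhere?"""
    if t is None:
        return True
    return (abs(height(t[0]) - height(t[1])) <= 1
            and is_avl_shaped(t[0]) and is_avl_shaped(t[1]))

leaf = (None, None)
print(is_avl_shaped((leaf, leaf)))          # True
print(is_avl_shaped(((leaf, None), None)))  # False: unbalanced chain
```

A TACBB_H accepts exactly such sets of trees while keeping emptiness and finiteness decidable, which is the paper's contribution.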

Metastability in better-than-worst-case designs
http://hdl.handle.net/2117/24174
Authors: Beer, Salomon; Cannizzaro, Marco; Cortadella Fortuny, Jordi; Ginosar, Ran; Lavagno, Luciano
Abstract: Better-than-worst-case designs use timing speculation to run with a cycle period shorter than the one required under worst-case conditions. This speculation may produce timing violations and metastability that result in failures and non-deterministic timing behavior. The effects of these phenomena are not always well understood by designers and researchers in this area. This paper analyzes the impact of timing speculation and the reasons why it is difficult to adopt this paradigm in industrial designs.
Fri, 26 Sep 2014 10:14:18 GMT
Keywords: Metastability, Better-than-worst-case designs, Timing violations, Non-deterministic timing behavior, Timing speculation, Industrial designs

Hardware primitives for the synthesis of multithreaded elastic systems
http://hdl.handle.net/2117/24157
Authors: Dimitrakopoulos, George N.; Seitanidis, I.; Psarras, A.; Tsiouris, K.; Mattheakis, Pavlos M.; Cortadella Fortuny, Jordi
Abstract: Elastic systems operate in a dataflow-like mode, using distributed scalable control and tolerating variable-latency computations. At the same time, multithreading increases the utilization of processing units and hides the latency of each operation by time-multiplexing operations of different threads in the datapath. This paper proposes a model to unify multithreading and elasticity. A new multithreaded elastic control protocol is introduced, supported by low-cost elastic buffers that minimize the storage requirements without sacrificing performance. To enable the synthesis of multithreaded elastic architectures, new hardware primitives are proposed and applied in two circuit examples to prove the applicability of the proposed approach.
Thu, 25 Sep 2014 09:32:15 GMT
Keywords: Elasticity, Hardware

An operational framework to reason about policy behavior in trust management systems
http://hdl.handle.net/2117/24093
Authors: Pasarella Sánchez, Ana Edelmira; Lobo, Jorge
Abstract: In this paper we show that the logical framework proposed by Becker et al. to reason about security policy behavior in a trust management context can be captured by an operational framework based on the language proposed by Miller in 1989 to deal with scoping and/or modules in logic programming. The framework of Becker et al. uses propositional Horn clauses to represent both policies and credentials, implications in clauses are interpreted in counterfactual logic, a Hilbert-style proof system is defined, and a system based on SAT is used to prove whether properties about credentials, permissions, and policies are valid in trust management systems, i.e., formulas that are true for all possible policies. Our contribution is to show that, instead of using a SAT system, this kind of validation can rely on the operational semantics (derivability relation) of Miller's language, which is very close to derivability in logic programs, opening up the possibility of extending Becker et al.'s framework to the more practical first-order case, since Miller's language is first-order.
Thu, 18 Sep 2014 08:12:23 GMT
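The derivability relation for propositional Horn clauses that the operational reading relies on can be sketched by forward chaining. The clause encoding and the policy atoms below are illustrative assumptions; Miller's language adds scoping and modules on top, which this sketch omits:

```python
# Hypothetical sketch: derivability for propositional Horn clauses by
# forward chaining to a fixed point.

def derives(clauses, goal):
    """clauses: list of (body, head); body is a set of atoms (empty for facts)."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

policy = [(set(), "admin(alice)"),
          ({"admin(alice)"}, "may_read(alice)")]
print(derives(policy, "may_read(alice)"))  # True
print(derives(policy, "may_read(bob)"))    # False
```

The paper's point is that validity questions about all possible policies can be reduced to this kind of derivability instead of a SAT encoding.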

On the existence of Nash equilibria in strategic search games
http://hdl.handle.net/2117/23999
Authors: Álvarez Faura, M. del Carme; Duch Brown, Amalia; Serna Iglesias, María José; Thilikos Touloupas, Dimitrios
Abstract: We consider a general multi-agent framework in which a set of n agents roam a network where m valuable and sharable goods (resources, services, information, ...) are hidden in m different vertices of the network. We analyze several strategic situations that arise in this setting by means of game theory. To do so, we introduce a class of strategic games that we call strategic search games. In those games, agents have to select a simple path in the network that starts from a predetermined set of initial vertices. Depending on how the value of the retrieved goods is split among the agents, we consider two game types: finders-share, in which the agents that find a good split the corresponding benefit among them, and firsts-share, in which only the agents that first find a good share the corresponding benefit. We show that finders-share games always have pure Nash equilibria (PNE). To obtain this result, we introduce the notion of Nash-preserving reduction between strategic games and show, through a series of Nash-preserving reductions, that finders-share games are Nash-reducible to single-source network congestion games. For firsts-share games we show the existence of games with and without PNE. Furthermore, we identify some graph families in which the firsts-share game always has a PNE that is computable in polynomial time.
Mon, 08 Sep 2014 11:01:23 GMT
Keywords: Nash equilibria existence, Strategic search games, General multi-agent framework

Beam-ACO for the repetition-free longest common subsequence problem
http://hdl.handle.net/2117/23345
Authors: Blum, Christian; Blesa Aguilera, Maria Josep; Calvo, Borja
Abstract: In this paper we propose a Beam-ACO approach for a combinatorial optimization problem known as the repetition-free longest common subsequence problem. Given two input sequences x and y over a finite alphabet Σ, this problem consists in finding a longest common subsequence of x and y in which no letter is repeated. Beam-ACO algorithms are combinations of the ant colony optimization metaheuristic and a deterministic tree search technique called beam search. The algorithm we present is an adaptation of a previously published Beam-ACO algorithm for the classical longest common subsequence problem. The results of the proposed algorithm outperform existing heuristics from the literature.
Tue, 01 Jul 2014 08:44:15 GMT
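To pin down the objective that the Beam-ACO heuristic optimizes, here is a brute-force repetition-free LCS for tiny inputs. It is exponential and purely illustrative, not the paper's algorithm:

```python
# Brute-force repetition-free longest common subsequence (RFLCS):
# the longest common subsequence of x and y with no repeated letter.

from itertools import combinations

def is_subsequence(s, t):
    it = iter(t)
    return all(c in it for c in s)   # membership consumes the iterator

def rflcs(x, y):
    for r in range(len(x), 0, -1):           # try lengths from long to short
        for idx in combinations(range(len(x)), r):
            cand = "".join(x[i] for i in idx)
            if len(set(cand)) == r and is_subsequence(cand, y):
                return cand                  # first hit at the largest r
    return ""

print(rflcs("aabb", "abab"))  # ab
```

The NP-hardness of this objective is what motivates heuristics such as the Beam-ACO hybrid proposed in the paper.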