DSpace Community:
http://hdl.handle.net/2117/3092
Sun, 05 Jul 2015 02:59:42 GMT
webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació
http://hdl.handle.net/2117/28288
Title: The ordering principle in a fragment of approximate counting
Authors: Atserias Peri, Albert; Thapen, Neil
Abstract: The ordering principle states that every finite linear order has a least element. We show that, in the relativized setting, the surjective weak pigeonhole principle for polynomial-time functions does not prove a Herbrandized version of the ordering principle over T^1_2. This answers an open question raised in Buss et al. [2012] and completes their program to compare the strength of Jeřábek's bounded arithmetic theory for approximate counting with weakened versions of it.
Keywords: Theory, Algorithms, Bounded arithmetic, Propositional proof complexity, Polynomial local search, Weak pigeonhole principle
Date: Fri, 12 Jun 2015 08:41:44 GMT
http://hdl.handle.net/2117/28272
Title: Process discovery algorithms using numerical abstract domains
Authors: Carmona Vargas, Josep; Cortadella Fortuny, Jordi
Abstract: The discovery of process models from event logs has emerged as one of the crucial problems for enabling continuous support in the life-cycle of an information system. However, in a decade of process discovery research, the algorithms and tools that have appeared are known to have strong limitations in several dimensions. The size of the logs and the formal properties of the discovered model are the two main challenges nowadays. In this paper we propose the use of numerical abstract domains to tackle these two problems, for the particular case of the discovery of Petri nets. First, numerical abstract domains enable the discovery of general process models, requiring no prior knowledge (e.g., the bound of the Petri net to derive) for the discovery algorithm. Second, by using divide-and-conquer techniques we are able to control the size of the process discovery problems. The methods proposed in this paper have been implemented in a prototype tool, and experiments are reported illustrating the significance of this fresh view of the process discovery problem.
Keywords: Process discovery, Numerical abstract domains, Formal methods, Concurrency, Process models
Date: Thu, 11 Jun 2015 09:06:20 GMT
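The simplest numerical abstract domain, the interval domain, can illustrate the idea behind the abstract. The sketch below is a hypothetical toy, not the authors' algorithm (which targets full Petri net discovery with richer domains and divide-and-conquer): it abstracts the token count of one candidate place over all prefixes of an event log, and the resulting interval tells us whether the place is a valid invariant.

```python
# Hedged sketch: interval-domain abstraction of a candidate Petri-net place.
# The place gains a token on each `produce` event and loses one on each
# `consume` event; the interval over all reachable counts is the invariant.
# The log and the place candidate below are hypothetical.

def interval_join(iv, x):
    """Widen an interval (lo, hi) to include the new value x."""
    lo, hi = iv
    return (min(lo, x), max(hi, x))

def discover_place_bounds(traces, produce, consume):
    """Abstract the token count of the place over all trace prefixes."""
    bounds = (0, 0)  # the empty prefix has zero tokens
    for trace in traces:
        tokens = 0
        for event in trace:
            if event == produce:
                tokens += 1
            elif event == consume:
                tokens -= 1
            bounds = interval_join(bounds, tokens)
    return bounds

# Hypothetical log: 'a' always precedes its matching 'b'.
log = [["a", "b", "a", "b"], ["a", "a", "b", "b"]]
print(discover_place_bounds(log, "a", "b"))  # (0, 2): count never negative
```

Because the lower bound is 0, a place from `a` to `b` never goes negative on this log, so it is consistent with the observed behavior.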
http://hdl.handle.net/2117/28174
Title: Better feedback for educational online judges
Authors: Mani, Anaga; Venkataramani, Divya; Petit Silvestre, Jordi; Roura Ferret, Salvador
Abstract: The verdicts of most online programming judges are essentially binary: the submitted code is either "good enough" or not. Whilst this policy is appropriate for competitive or recruitment platforms, it can hinder the adoption of online judges in educational settings, where it would be helpful to give better feedback to a student (or instructor) who has submitted a wrong program. An obvious option would be to show him or her an instance where the code fails. However, that particular instance may not be very significant, and so could encourage unreflective patching of the code. The approach considered in this paper is to data-mine all the past incorrect submissions by all the users of the judge, so as to extract a small subset of private test cases that is likely to be relevant to most future users. Our solution is based on parsing the test files, building a bipartite graph, and solving a Set Cover problem by means of Integer Linear Programming. We have tested our solution with a hundred problems in Jutge.org. These experiments suggest that our approach is general, efficient, and provides high-quality results.
Keywords: Online programming judges, Automatic assessment, Data mining
Date: Thu, 04 Jun 2015 08:47:03 GMT
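The abstract's core combinatorial step is a Set Cover over a bipartite graph of test cases versus wrong submissions. The paper solves it exactly with Integer Linear Programming; as a hedged stand-in with no ILP solver dependency, here is the classic greedy ln(n)-approximation on hypothetical judge data.

```python
# Hedged sketch: the paper formulates an exact Set Cover ILP over a bipartite
# graph of private test cases vs. historical wrong submissions. This stand-in
# uses the classic greedy approximation instead; the data is hypothetical.

def greedy_test_selection(fails):
    """fails maps each test case to the set of wrong submissions it catches.
    Return a small subset of tests that together catch every submission."""
    uncovered = set().union(*fails.values())
    chosen = []
    while uncovered:
        # Pick the test catching the most still-uncovered submissions.
        best = max(fails, key=lambda t: len(fails[t] & uncovered))
        if not fails[best] & uncovered:
            break  # remaining submissions are caught by no test
        chosen.append(best)
        uncovered -= fails[best]
    return chosen

# Hypothetical judge data: tests t1..t3 and wrong submissions s1..s4.
fails = {"t1": {"s1", "s2"}, "t2": {"s2", "s3"}, "t3": {"s3", "s4"}}
print(greedy_test_selection(fails))  # ['t1', 't3'] catches all four submissions
```

The greedy choice gives a logarithmic approximation guarantee; the ILP of the paper trades solver time for an exactly minimal test set.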
http://hdl.handle.net/2117/28172
Title: Firefighting as a game
Authors: Álvarez Faura, M. del Carme; Blesa Aguilera, Maria Josep; Molter, Hendrik
Abstract: The Firefighter Problem was proposed in 1995 [16] as a deterministic discrete-time model for the spread (and containment) of a fire. Its applications reach from real fires to the spreading of diseases and the containment of floods. Furthermore, it can be used to model the spread of computer viruses or viral marketing in communication networks.
In this work, we study the problem from a game-theoretical perspective. Such a context seems very appropriate when applied to large networks, where entities may act and make decisions based on their own interests, without global coordination.
We model the Firefighter Problem as a strategic game where there is one player for each time step who decides where to place the firefighters. We show that the Price of Anarchy is linear in the general case, but at most 2 for trees. We prove that the quality of the equilibria improves when allowing coalitional cooperation among players. In general, the Price of Anarchy is in Θ(n/k), where k is the coalition size. Furthermore, we show that there are topologies which have a constant Price of Anarchy even when constant-sized coalitions are considered.
Keywords: Firefighter problem, Spreading models for networks, Algorithmic game theory, Nash equilibria, Price of anarchy, Coalitions
Date: Thu, 04 Jun 2015 08:27:45 GMT
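The underlying dynamics are simple enough to sketch: each time step one vertex is protected, then the fire spreads to every unprotected neighbour of a burning vertex. The run below is a hypothetical illustration of those dynamics on a path graph, not the paper's game-theoretic analysis; the per-step `strategy` callback plays the role of a player's move.

```python
# Hedged sketch: one deterministic round-based run of the Firefighter Problem.
# Each time step a player protects one vertex, then the fire spreads to every
# unprotected neighbour. The graph and strategy are hypothetical examples.

def simulate(adj, source, strategy):
    """adj: vertex -> neighbours; strategy picks one vertex to protect per step.
    Returns the set of burned vertices once the fire can no longer spread."""
    burned, protected = {source}, set()
    while True:
        choice = strategy(burned, protected, adj)
        if choice is not None and choice not in burned:
            protected.add(choice)
        frontier = {v for u in burned for v in adj[u]
                    if v not in burned and v not in protected}
        if not frontier:
            return burned
        burned |= frontier

# Path graph 0-1-2-3-4 with fire starting at 0; always defend vertex 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = simulate(adj, 0, lambda b, p, a: 2)
print(sorted(result))  # [0, 1]: the fire is contained at vertex 1
```

In the strategic game of the paper, each step's `strategy` choice belongs to a different self-interested player, and the Price of Anarchy compares the equilibrium outcomes of such choices with the coordinated optimum.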
http://hdl.handle.net/2117/28085
Title: Lower bounds for DNF-refutations of a relativized weak pigeonhole principle
Authors: Atserias Peri, Albert; Müller, Moritz; Oliva Valls, Sergi
Abstract: The relativized weak pigeonhole principle states that if at least 2n out of n^2 pigeons fly into n holes, then some hole must be doubly occupied. We prove that every DNF-refutation of the CNF encoding of this principle requires size 2^{(log n)^{3/2 - ε}} for every ε > 0 and every sufficiently large n. By reducing it to the standard weak pigeonhole principle with 2n pigeons and n holes, we also show that this lower bound is essentially tight, in that there exist DNF-refutations of size 2^{(log n)^{O(1)}} even in R(log). For the lower bound proof we need to discuss the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.
Keywords: Proof complexity, Bounded arithmetic, Weak pigeonhole principles, Approximate counting, Bounded-depth Frege, Propositional proof systems, Resolution lower bounds, Random formulas, Complexity gap, Primes, Size
Date: Thu, 28 May 2015 07:25:26 GMT
http://hdl.handle.net/2117/28034
Title: Tableau-based reasoning for graph properties
Authors: Lambers, Leen; Orejas Valdés, Fernando
Abstract: Graphs are ubiquitous in Computer Science. For this reason, in many areas it is very important to have the means to express and reason about graph properties. A simple way is to define an appropriate encoding of graphs in terms of classical logic; this approach has been followed by Courcelle. The alternative is to define a specialized logic, as done by Habel and Pennemann, who defined a logic of nested graph conditions, where graph properties are formulated explicitly making use of graphs and graph morphisms, and which has the expressive power of Courcelle's first-order logic of graphs. In particular, in his thesis, Pennemann defined and implemented a sound proof system for reasoning in this logic. Moreover, he showed that his tools outperform some standard provers when working over encoded graph conditions. Unfortunately, Pennemann did not prove the completeness of his proof system. One of the main contributions of this paper is the solution to this open problem: we prove the (refutational) completeness of a tableau method based on Pennemann's rules, which provides a specific theorem-proving procedure for this logic. This procedure is our second contribution. Finally, our tableaux are not standard; we had to define a new notion of nested tableaux that could be useful for other formalisms where formulas have a hierarchical structure, like nested graph conditions.
Keywords: Automated deduction, Graph logic, Graph properties, Graph transformation, Visual modelling
Date: Mon, 25 May 2015 14:03:48 GMT
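For readers unfamiliar with the tableau method itself, a minimal satisfiability tableau for ordinary propositional logic shows the mechanics: alpha rules extend a branch, beta rules split it, and a branch closes on a contradictory pair of literals. This is only a plain-logic analogue sketched by us, far simpler than the nested tableaux for graph conditions developed in the paper.

```python
# Hedged sketch: the classic tableau method on propositional formulas, as a
# plain analogue of the (much richer) nested tableaux for graph conditions.
# Formulas are tuples: ('var', p), ('not', f), ('and', f, g), ('or', f, g).

def tableau(todo, branch=frozenset()):
    """Return True iff the formulas in the tuple `todo` are jointly satisfiable."""
    if not todo:
        return True                                   # open branch: a model exists
    f, rest = todo[0], todo[1:]
    if f[0] == 'var':
        lit, neg = '+' + f[1], '-' + f[1]
    elif f[0] == 'not' and f[1][0] == 'var':
        lit, neg = '-' + f[1][1], '+' + f[1][1]
    else:
        lit = neg = None
    if lit is not None:                               # literal: close or extend
        return neg not in branch and tableau(rest, branch | {lit})
    if f[0] == 'not' and f[1][0] == 'not':            # double negation
        return tableau((f[1][1],) + rest, branch)
    if f[0] == 'and':                                 # alpha rule: both conjuncts
        return tableau((f[1], f[2]) + rest, branch)
    if f[0] == 'not' and f[1][0] == 'and':            # De Morgan, then split
        return (tableau((('not', f[1][1]),) + rest, branch)
                or tableau((('not', f[1][2]),) + rest, branch))
    if f[0] == 'or':                                  # beta rule: split the tableau
        return (tableau((f[1],) + rest, branch)
                or tableau((f[2],) + rest, branch))
    if f[0] == 'not' and f[1][0] == 'or':             # De Morgan alpha
        return tableau((('not', f[1][1]), ('not', f[1][2])) + rest, branch)
    raise ValueError(f"unknown connective: {f!r}")

p, q = ('var', 'p'), ('var', 'q')
print(tableau((('and', p, ('not', p)),)))             # False: the branch closes
print(tableau((('or', p, q), ('not', p))))            # True: the q-branch stays open
```

Refutational completeness, the property proved in the paper for its graph-condition tableaux, means that whenever a formula set is unsatisfiable, every fully expanded tableau for it closes.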
http://hdl.handle.net/2117/27967
Title: Adaptive clock with useful jitter
Authors: Cortadella Fortuny, Jordi; Lavagno, Luciano; López Muñoz, Pedro; Lupon Navazo, Marc; Moreno Vega, Alberto; Roca Pérez, Antoni; Sapatnekar, Sachin S.
Abstract: The growing variability in nanoelectronic devices, due to uncertainties from the manufacturing process and environmental conditions (power supply, temperature, aging), requires increasing design guardbands, forcing circuits to work with conservative clock frequencies. Various schemes for clock generation based on ring oscillators have been proposed with the goal of mitigating the power and performance losses attributable to variability. However, there has been no systematic analysis to quantify the benefits of such schemes. This paper presents and analyzes an Adaptive Clocking scheme with Useful Jitter (ACUJ) that uses variability as an opportunity to reduce power, by adapting the clock frequency to the varying environmental conditions and thus reducing guardband margins significantly. Power can be reduced between 20% and 40% at iso-performance, and performance can be boosted by similar amounts at iso-power. Additionally, the energy savings can be translated into substantial advantages in terms of reliability and thermal management. More importantly, the technology can be adopted with minimal modifications to conventional EDA flows.
Description: Report - Departament Ciències de la Computació
Date: Tue, 19 May 2015 13:40:18 GMT
http://hdl.handle.net/2117/27400
Title: On the complexity of exchanging
Authors: Molinero Albareda, Xavier; Olsen, Martin; Serna Iglesias, María José
Abstract: We analyze the computational complexity of the problem of deciding whether, for a given simple game, it is possible to rearrange the participants in a set of j given losing coalitions into a set of j winning coalitions. We also look at the problem of turning winning coalitions into losing coalitions. We analyze the problem when the simple game is represented by a list of winning, losing, minimal winning, or maximal losing coalitions.
Keywords: Tradeness of simple games, Computational complexity
Date: Thu, 16 Apr 2015 17:13:57 GMT
http://hdl.handle.net/2117/27300
Title: Bounded-width QBF is PSPACE-complete
Authors: Atserias Peri, Albert; Oliva Valls, Sergi
Abstract: Tree-width and path-width are two well-studied parameters of structures that measure their similarity to a tree and a path, respectively. We show that QBF on instances with constant path-width, and hence constant tree-width, remains PSPACE-complete. This answers a question by Vardi. We also show that on instances with constant path-width and a very slow-growing number of quantifier alternations (roughly inverse-Ackermann many in the number of variables), the problem remains NP-hard. Additionally, we introduce a family of formulas with bounded tree-width that do have short refutations in Q-resolution, the natural generalization of resolution for quantified Boolean formulas.
Keywords: Tree-width, Path-width, Quantified Boolean formulas, PSPACE-complete
Date: Tue, 14 Apr 2015 08:29:15 GMT
http://hdl.handle.net/2117/27267
Title: A boolean rule-based approach for manufacturability-aware cell routing
Authors: Cortadella Fortuny, Jordi; Petit Silvestre, Jordi; Gómez Fernández, Sergio; Moll Echeto, Francisco de Borja
Abstract: An approach for cell routing using gridded design rules is proposed. It is technology-independent and parameterizable for different fabrics and design rules, including support for multiple-patterning lithography. The core contribution is a detailed-routing algorithm based on a Boolean formulation of the problem. The algorithm uses a novel encoding scheme, graph theory to support floating terminals, efficient heuristics to reduce the computational cost, and minimization of the number of unconnected pins in case the cell is unroutable. The versatility of the algorithm is demonstrated by routing single- and double-height cells. The efficiency is ascertained by synthesizing a library with 127 cells in about an hour and a half of CPU time. The layouts derived by the implemented tool have also been compared with those from a commercial library, thus showing the competitiveness of the approach for gridded geometries.
Keywords: Cell generation, Design for manufacturability, Detailed routing, Satisfiability, Regular logic-bricks, Combinatorial optimization, Design, Layout, Lithography, Circuits
Date: Mon, 13 Apr 2015 09:46:45 GMT
http://hdl.handle.net/2117/27265
Title: Area-optimal transistor folding for 1-D gridded cell design
Authors: Cortadella Fortuny, Jordi
Abstract: The 1-D design style with gridded design rules is gaining ground for addressing the printability issues in subwavelength photolithography. One of the synthesis problems in cell generation is transistor folding, which consists of breaking large transistors into smaller ones (legs) that can be placed in the active area of the cell. In the 1-D style, diffusion sharing between differently sized transistors is not allowed, implying a significant area overhead when active areas with different sizes are required. This paper presents a new formulation of the transistor folding problem in the context of the 1-D design style and a mathematical model that delivers area-optimal solutions. The mathematical model can be customized for different variants of the problem, considering flexible transistor sizes and multiple-height cells. An innovative feature of the method is that area optimality can be guaranteed without calculating the actual location of the transistors. The model can also be enhanced to deliver solutions with good routability properties.
Keywords: Cell generation, Design for manufacturability, Linear programming, Transistor folding, Transistor sizing
Date: Mon, 13 Apr 2015 09:20:30 GMT
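The basic arithmetic behind folding a single transistor is easy to state: a transistor of width w that must fit a row of height h needs at least ceil(w/h) legs, and splitting into equal-width legs preserves the total width (and hence the drive strength). The sketch below shows only this per-transistor step with hypothetical numbers; the paper's contribution is the area-optimal ILP over whole cells, which this does not reproduce.

```python
# Hedged sketch: folding one transistor of width `w` into equal legs that fit
# an active area of height `h`. All numbers are hypothetical grid units; the
# paper's area-optimal ILP formulation over whole cells is not shown here.
import math

def fold(w, h):
    """Return (number of legs, width of each leg) for transistor width w."""
    legs = math.ceil(w / h)   # fewest legs whose individual width fits the row
    return legs, w / legs     # equal-width legs preserve the total width

for w in (3.0, 7.0, 10.0):
    legs, leg_w = fold(w, h=4.0)
    print(f"width {w}: {legs} leg(s) of width {leg_w:g}")
```

For example, fold(7.0, 4.0) gives 2 legs of width 3.5 each; the 1-D constraint that differently sized active areas cannot share diffusion is what makes choosing a common leg height across transistors an optimization problem.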
http://hdl.handle.net/2117/27193
Title: Partially definable forcing and bounded arithmetic
Authors: Atserias Peri, Albert; Müller, Moritz
Abstract: We describe a method of forcing against weak theories of arithmetic and its applications in propositional proof complexity.
Keywords: Bounded arithmetic, Forcing, Proof complexity, Propositional proof systems, Depth Frege proofs, Pigeonhole principle, Complexity gap, Resolution, Size
Date: Thu, 09 Apr 2015 07:48:00 GMT
http://hdl.handle.net/2117/27102
Title: Trustworthiness in P2P: performance behaviour of two fuzzy-based systems for JXTA-overlay platform
Authors: Spaho, Evjola; Sakamoto, Shinji; Barolli, Leonard; Xhafa Xhafa, Fatos; Ikeda, Makoto
Abstract: Peer-to-peer (P2P) networks will be very important for future distributed systems and applications. In such networks, peers are heterogeneous in the services they provide and do not all have the same reliability. Therefore, it is necessary to estimate whether a peer is trustworthy for file sharing and other services. In this paper, we propose two fuzzy-based trustworthiness systems for P2P communication in JXTA-overlay. System 1 has only one fuzzy logic controller (FLC) and uses four input parameters: mutually agreed behaviour (MAB), actual behaviour criterion (ABC), peer disconnections (PD) and number of uploads (NU); the output is peer reliability (PR). System 2 has two FLCs. FLC1 uses three input parameters: number of jobs (NJ), number of connections (NC) and connection lifetime (CL); its output is the actual behaviour criterion (ABC). We use ABC and reputation (R) as input linguistic parameters for FLC2, whose output is peer reliability (PR). We evaluate the proposed systems by computer simulations. The simulation results show that the proposed systems behave well and can be used successfully to evaluate the reliability of a new peer connected to JXTA-overlay.
Keywords: P2P systems, Fuzzy system, Peer reliability, Controller
Date: Fri, 27 Mar 2015 15:44:52 GMT
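The general shape of such an FLC can be sketched: crisp inputs are fuzzified through membership functions, rule strengths are combined with fuzzy AND (min), and a crisp reliability is recovered by a weighted average. Everything below is invented for illustration (a two-input toy, made-up membership functions and rules); the paper's controllers use the four and three inputs listed above with their own rule bases.

```python
# Hedged sketch of a single fuzzy logic controller in the style of System 1:
# fuzzify with triangular membership functions, fire simple rules with
# min as fuzzy AND, defuzzify by weighted average. The membership functions
# and rules are invented for illustration, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def peer_reliability(behaviour, disconnections):
    """Two-input toy FLC: good behaviour and few disconnections -> reliable."""
    good = tri(behaviour, 0.3, 1.0, 1.7)         # behaviour scored in [0, 1]
    bad = tri(behaviour, -0.7, 0.0, 0.7)
    few = tri(disconnections, -5, 0, 5)          # disconnection count
    many = tri(disconnections, 0, 10, 20)
    # Each rule's firing strength (min = fuzzy AND) votes for an output level.
    rules = [(min(good, few), 1.0),              # reliable
             (min(good, many), 0.5),             # moderately reliable
             (min(bad, few), 0.4),
             (min(bad, many), 0.0)]              # unreliable
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(round(peer_reliability(0.9, 1), 2))   # 0.94: well-behaved, rarely disconnects
print(round(peer_reliability(0.0, 15), 2))  # 0.0: badly behaved, often disconnects
```

A well-behaved, stable peer scores near 1, a misbehaving one near 0, mirroring how the PR output of the paper's systems ranks new peers.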
http://hdl.handle.net/2117/26860
Title: WMN-SA system for node placement in WMNs: evaluation for different realistic distributions of mesh clients
Authors: Sakamoto, Shinji; Oda, Tetsuya; Bravo, Albert; Barolli, Leonard; Ikeda, Makoto; Xhafa Xhafa, Fatos
Abstract: One of the key advantages of Wireless Mesh Networks (WMNs) is their importance for providing cost-efficient broadband connectivity. There are issues in achieving network connectivity and user coverage, which are related to the node placement problem. We implemented a simulation system that considers the router node placement problem in WMNs: we want to find the distribution of router nodes that provides the best network connectivity and the best coverage of a set of randomly distributed clients. We modeled three different realistic distributions of mesh clients (Subway, Boulevard and Stadium models). From simulation results, we found that, in the case of the Subway model distribution of mesh clients, connectivity and coverage reach maximum performance. For all three models, when the instance size increases, the performance decreases.
Keywords: Connectivity, Coverage, Number of mesh clients, Number of phases, Simulated annealing, WMN-SA, WMNs
Date: Thu, 19 Mar 2015 14:21:51 GMT
http://hdl.handle.net/2117/26715
Title: Measuring precision of modeled behavior
Authors: Adriansyah, Arya; Muñoz Gama, Jorge; Carmona Vargas, Josep; Van Dongen, Boudewijn; van der Aalst, Wil M. P.
Abstract: Conformance checking techniques compare observed behavior (i.e., event logs) with modeled behavior for a variety of reasons. For example, discrepancies between a normative process model and recorded behavior may point to fraud or inefficiencies, and the resulting diagnostics can be used for auditing and compliance management. Conformance checking can also be used to judge a process model automatically discovered from an event log; models discovered using different process discovery techniques need to be compared objectively. These examples illustrate just a few of the many use cases for aligning observed and modeled behavior. Thus far, most conformance checking techniques have focused on replay fitness, i.e., the ability to reproduce the event log. However, it is easy to construct models that allow for lots of behavior (including the observed behavior) without being precise. In this paper, we propose a method to measure the precision of process models with respect to their event logs, by first aligning the logs to the models. This way, the measurement is not sensitive to non-fitting executions, and more accurate values can be obtained for non-fitting logs. Furthermore, we introduce several variants of the technique to deal better with incomplete logs and to reduce possible bias due to behavioral properties of process models. The approach has been implemented in the ProM 6 framework and tested against both artificial and real-life cases. Experiments show that the approach is robust to noise and able to handle logs and models of real-life complexity.
Keywords: Precision measurement, Log-model alignment, Conformance checking, Process mining
Date: Mon, 16 Mar 2015 11:04:52 GMT