DSpace Collection:
http://hdl.handle.net/2117/3093
Last updated: Mon, 27 Apr 2015 10:37:10 GMT
Contact: webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació
http://hdl.handle.net/2117/27300
Title: Bounded-width QBF is PSPACE-complete
Authors: Atserias Peri, Albert; Oliva Valls, Sergi
Abstract: Tree-width and path-width are two well-studied parameters of structures that measure their similarity to a tree and a path, respectively. We show that QBF on instances with constant path-width, and hence constant tree-width, remains PSPACE-complete. This answers a question by Vardi. We also show that on instances with constant path-width and a very slow-growing number of quantifier alternations (roughly inverse-Ackermann many in the number of variables), the problem remains NP-hard. Additionally, we introduce a family of formulas with bounded tree-width that do have short refutations in Q-resolution, the natural generalization of resolution for quantified Boolean formulas.
Keywords: Tree-width, Path-width, Quantified Boolean formulas, PSPACE-complete
Date: Tue, 14 Apr 2015 08:29:15 GMT
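To make the QBF decision problem concrete, here is a minimal sketch (not from the paper) of how a prenex quantified Boolean formula is evaluated by recursion on its quantifier prefix; the `('A'|'E', variable)` encoding and the `matrix` callback are our own illustrative conventions.

```python
def eval_qbf(prefix, matrix, env=None):
    """Evaluate a prenex QBF by recursion on the quantifier prefix.
    prefix: list of ('A' or 'E', variable); matrix: dict -> bool."""
    env = env or {}
    if not prefix:
        return matrix(env)
    quantifier, var = prefix[0]
    # Branch on both truth values of the outermost variable.
    sub = [eval_qbf(prefix[1:], matrix, {**env, var: b}) for b in (False, True)]
    return all(sub) if quantifier == 'A' else any(sub)

# Forall x exists y: x != y. True, since y can always copy the negation of x.
print(eval_qbf([('A', 'x'), ('E', 'y')], lambda a: a['x'] != a['y']))  # True
```

The exponential branching over all variables is exactly why restrictions such as bounded path-width are natural candidates for tractability, and why the paper's negative answer is notable.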
http://hdl.handle.net/2117/27267
Title: A boolean rule-based approach for manufacturability-aware cell routing
Authors: Cortadella Fortuny, Jordi; Petit Silvestre, Jordi; Gómez Fernández, Sergio; Moll Echeto, Francisco de Borja
Abstract: An approach for cell routing using gridded design rules is proposed. It is technology-independent and parameterizable for different fabrics and design rules, including support for multiple-patterning lithography. The core contribution is a detailed-routing algorithm based on a Boolean formulation of the problem. The algorithm uses a novel encoding scheme, graph theory to support floating terminals, efficient heuristics to reduce the computational cost, and minimization of the number of unconnected pins in case the cell is unroutable. The versatility of the algorithm is demonstrated by routing single- and double-height cells. Its efficiency is ascertained by synthesizing a library of 127 cells in about an hour and a half of CPU time. The layouts derived by the implemented tool have also been compared with those from a commercial library, showing the competitiveness of the approach for gridded geometries.
Keywords: Cell generation, Design for manufacturability, Detailed routing, Satisfiability, Regular logic-bricks, Combinatorial optimization, Design, Layout, Lithography, Circuits
Date: Mon, 13 Apr 2015 09:46:45 GMT
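The flavor of a Boolean formulation of routing can be illustrated with a toy track-assignment problem (this is our own simplification, not the paper's encoding): each net must be assigned a track, and conflicting (overlapping) nets may not share one. A SAT solver would explore Boolean variables x[net][track]; here we brute-force the same search space directly.

```python
from itertools import product

def route(nets, tracks, conflicts):
    """Return a net -> track map in which conflicting nets use different
    tracks, or None if no assignment exists (the 'unroutable' case)."""
    for choice in product(range(tracks), repeat=len(nets)):
        assign = dict(zip(nets, choice))
        if all(assign[a] != assign[b] for a, b in conflicts):
            return assign
    return None

# Two overlapping nets need distinct tracks; a third, disjoint net can share.
print(route(['n1', 'n2', 'n3'], 2, [('n1', 'n2')]))
```

A real detailed router replaces the brute-force loop with a CNF encoding handed to a SAT solver, which is what makes cell-level instances tractable.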
http://hdl.handle.net/2117/27265
Title: Area-optimal transistor folding for 1-D gridded cell design
Authors: Cortadella Fortuny, Jordi
Abstract: The 1-D design style with gridded design rules is gaining ground for addressing the printability issues in subwavelength photolithography. One of the synthesis problems in cell generation is transistor folding, which consists of breaking large transistors into smaller ones (legs) that can be placed in the active area of the cell. In the 1-D style, diffusion sharing between differently sized transistors is not allowed, which implies a significant area overhead when active areas with different sizes are required. This paper presents a new formulation of the transistor folding problem in the context of the 1-D design style and a mathematical model that delivers area-optimal solutions. The mathematical model can be customized for different variants of the problem, considering flexible transistor sizes and multiple-height cells. An innovative feature of the method is that area optimality can be guaranteed without calculating the actual location of the transistors. The model can also be enhanced to deliver solutions with good routability properties.
Keywords: Cell generation, Design for manufacturability, Linear programming, Transistor folding, Transistor sizing
Date: Mon, 13 Apr 2015 09:20:30 GMT
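The basic folding operation can be sketched as follows. This is a common greedy rule, shown only to illustrate what "breaking a transistor into legs" means; the paper's contribution is an exact mathematical program, not this heuristic.

```python
import math

def fold(width, max_leg):
    """Split a transistor of the given width into the fewest equal-width
    legs that each fit within max_leg (the height of the active area)."""
    legs = math.ceil(width / max_leg)
    return [width / legs] * legs

print(fold(10.0, 4.0))  # three legs, each of width 10/3
```

Equal-width legs keep the diffusion height uniform, which matters in the 1-D style precisely because diffusion sharing between differently sized transistors is forbidden.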
http://hdl.handle.net/2117/27193
Title: Partially definable forcing and bounded arithmetic
Authors: Atserias Peri, Albert; Müller, Moritz
Abstract: We describe a method of forcing against weak theories of arithmetic and its applications in propositional proof complexity.
Keywords: Bounded arithmetic, Forcing, Proof complexity, Propositional proof systems, Depth Frege proofs, Pigeonhole principle, Complexity gap, Resolution, Size
Date: Thu, 09 Apr 2015 07:48:00 GMT
http://hdl.handle.net/2117/27102
Title: Trustworthiness in P2P: performance behaviour of two fuzzy-based systems for JXTA-overlay platform
Authors: Spaho, Evjola; Sakamoto, Shinji; Barolli, Leonard; Xhafa Xhafa, Fatos; Ikeda, Makoto
Abstract: Peer-to-peer (P2P) networks will be very important for future distributed systems and applications. In such networks, peers are heterogeneous in the services they provide and are not equally reliable. Therefore, it is necessary to estimate whether a peer is trustworthy for file sharing and other services. In this paper, we propose two fuzzy-based trustworthiness systems for P2P communication in JXTA-overlay. System 1 has a single fuzzy logic controller (FLC) with four input parameters: mutually agreed behaviour (MAB), actual behaviour criterion (ABC), peer disconnections (PD) and number of uploads (NU); its output is peer reliability (PR). System 2 has two FLCs. FLC1 uses three input parameters: number of jobs (NJ), number of connections (NC) and connection lifetime (CL); its output is the actual behaviour criterion (ABC). ABC and reputation (R) are the input linguistic parameters of FLC2, whose output is peer reliability (PR). We evaluate the proposed systems by computer simulations. The simulation results show that the proposed systems behave well and can be used successfully to evaluate the reliability of a new peer connected to JXTA-overlay.
Keywords: P2P systems, Fuzzy system, Peer reliability, Controller
Date: Fri, 27 Mar 2015 15:44:52 GMT
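The mechanics of such an FLC can be sketched in a few lines. Everything below is illustrative: the triangular membership functions, the two rules, the 0..1 scales and the weighted-average defuzzification are our assumptions, not the paper's actual rule base for FLC2.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reliability(abc, reputation):
    """Fire two illustrative rules on ABC and reputation (both scaled to
    0..1) and defuzzify peer reliability by weighted average."""
    low_abc, high_abc = tri(abc, -0.5, 0.0, 0.5), tri(abc, 0.5, 1.0, 1.5)
    low_rep, high_rep = tri(reputation, -0.5, 0.0, 0.5), tri(reputation, 0.5, 1.0, 1.5)
    rules = [
        (min(high_abc, high_rep), 1.0),  # good behaviour, good reputation -> reliable
        (min(low_abc, low_rep), 0.0),    # bad behaviour, bad reputation -> unreliable
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5
```

A full controller would cover all input combinations with more rules and linguistic terms; the min/weighted-average pattern is the standard Mamdani-style shortcut.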
http://hdl.handle.net/2117/26715
Title: Measuring precision of modeled behavior
Authors: Adriansyah, Arya; Muñoz Gama, Jorge; Carmona Vargas, Josep; Van Dongen, Boudewijn; van der Aalst, Wil M. P.
Abstract: Conformance checking techniques compare observed behavior (i.e., event logs) with modeled behavior for a variety of reasons. For example, discrepancies between a normative process model and recorded behavior may point to fraud or inefficiencies. The resulting diagnostics can be used for auditing and compliance management. Conformance checking can also be used to judge a process model automatically discovered from an event log. Models discovered using different process discovery techniques need to be compared objectively. These examples illustrate just a few of the many use cases for aligning observed and modeled behavior. Thus far, most conformance checking techniques have focused on replay fitness, i.e., the ability to reproduce the event log. However, it is easy to construct models that allow for lots of behavior (including the observed behavior) without being precise. In this paper, we propose a method to measure the precision of process models given their event logs, by first aligning the logs to the models. This way, the measurement is not sensitive to non-fitting executions and more accurate values can be obtained for non-fitting logs. Furthermore, we introduce several variants of the technique that deal better with incomplete logs and reduce possible bias due to behavioral properties of process models. The approach has been implemented in the ProM 6 framework and tested against both artificial and real-life cases. Experiments show that the approach is robust to noise and able to handle logs and models of real-life complexity.
Keywords: Precision measurement, Log-model alignment, Conformance checking, Process mining
Date: Mon, 16 Mar 2015 11:04:52 GMT
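The intuition behind a precision metric can be sketched with a simplified "escaping edges" style computation (our simplification; the paper's metric additionally aligns non-fitting traces to the model first): at each prefix seen in the log, compare the activities actually observed next with those the model allows next.

```python
from collections import defaultdict

def precision(log, allowed):
    """log: list of traces (lists of activities); allowed: function mapping
    a prefix tuple to the set of activities the model permits next."""
    observed = defaultdict(set)
    weight = defaultdict(int)
    for trace in log:
        for i in range(len(trace)):
            prefix = tuple(trace[:i])
            observed[prefix].add(trace[i])
            weight[prefix] += 1
    num = sum(weight[p] * len(observed[p]) / len(allowed(p)) for p in observed)
    return num / sum(weight.values())

# A "flower" model allowing any of a, b, c at every point is very imprecise.
log = [['a', 'b'], ['a', 'b'], ['a', 'c']]
print(precision(log, lambda prefix: {'a', 'b', 'c'}))
```

A model that allows exactly the observed continuations scores 1.0; the more unused behavior the model permits, the lower the score, which is precisely the dimension replay fitness ignores.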
http://hdl.handle.net/2117/26697
Title: Degree lower bounds of tower-type for approximating formulas with parity quantifiers
Authors: Atserias Peri, Albert; Dawar, Anuj
Abstract: Kolaitis and Kopparty have shown that for any first-order formula with parity quantifiers over the language of graphs, there is a family of multivariate polynomials of constant degree that agree with the formula on all but a 2^(-Omega(n))-fraction of the graphs with n vertices. The proof bounds the degree of the polynomials by a tower of exponentials whose height is the nesting depth of parity quantifiers in the formula. We show that this tower-type dependence is necessary: we build a family of formulas of depth q whose approximating polynomials must have degree bounded from below by a tower of exponentials of height proportional to q. Our proof has two main parts. First, we adapt and extend the results by Kolaitis and Kopparty that describe the joint distribution of the parities of the numbers of copies of small subgraphs in a random graph to the setting of graphs of growing size. Second, we analyze a variant of Karp's graph canonical labeling algorithm and exploit its massive parallelism to get a formula of low depth that defines an almost canonical pre-order on a random graph.
Keywords: Algorithms, Theory, Canonical labeling algorithm, Convergence laws, Gowers uniformity norm, Logic, Parity quantifiers, Random graphs, Graphs
Date: Fri, 13 Mar 2015 13:18:21 GMT
http://hdl.handle.net/2117/26600
Title: Cooperation through social influence
Authors: Molinero Albareda, Xavier; Riquelme Csori, Fabián; Serna Iglesias, María José
Abstract: We consider a simple and altruistic multiagent system in which the agents are eager to perform a collective task but where their real engagement depends on the willingness to perform the task of other influential agents. We model this scenario by an influence game, a cooperative simple game in which a team (or coalition) of players succeeds if it is able to convince enough agents to participate in the task (to vote in favor of a decision). We take the linear threshold model as the influence model. We first show the expressiveness of influence games by proving that they capture the class of simple games. Then we characterize the computational complexity of various problems on influence games, including measures (length and width), values (Shapley-Shubik and Banzhaf) and properties (of teams and players). Finally, we analyze those problems for some extremal cases with respect to the propagation of influence, showing tighter complexity characterizations.
Keywords: Computational complexity, Influence games, Simple games, Spread of influence
Date: Thu, 05 Mar 2015 17:47:09 GMT
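The linear threshold model the abstract refers to can be sketched directly: an inactive agent activates once the total weight of its already-active in-neighbors reaches its threshold, and activation propagates until a fixed point. The dict-based graph encoding below is ours, not the paper's notation.

```python
def spread(seed, neighbors, weight, threshold):
    """Linear threshold propagation from a seed coalition.
    neighbors[v]: in-neighbors of v; weight[(u, v)]: influence of u on v."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in threshold:
            if v not in active:
                influence = sum(weight[(u, v)] for u in neighbors.get(v, []) if u in active)
                if influence >= threshold[v]:
                    active.add(v)
                    changed = True
    return active

# a influences b and c; b and c together are needed to convince d.
nbrs = {'b': ['a'], 'c': ['a'], 'd': ['b', 'c']}
w = {('a', 'b'): 1, ('a', 'c'): 1, ('b', 'd'): 1, ('c', 'd'): 1}
th = {'a': 1, 'b': 1, 'c': 1, 'd': 2}
print(sorted(spread({'a'}, nbrs, w, th)))  # ['a', 'b', 'c', 'd']
```

In the corresponding influence game, a coalition wins exactly when its spread reaches enough agents, which is how the linear threshold model induces a simple game.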
http://hdl.handle.net/2117/25093
Title: On the complexity of problems on simple games
Authors: Freixas Bosch, Josep; Molinero Albareda, Xavier; Olsen, Martin; Serna Iglesias, María José
Abstract: Simple games cover voting systems in which a single alternative, such as a bill or an amendment, is pitted against the status quo. A simple game or a yes-no voting system is a set of rules that specifies exactly which collections of "yea" votes yield passage of the issue at hand. Each of these collections of "yea" voters forms a winning coalition. We are interested in performing a complexity analysis of problems defined on such families of games. This analysis, as usual, depends on the game representation used as input. We consider four natural explicit representations: winning, losing, minimal winning, and maximal losing. We first analyze the complexity of testing whether a game is simple and testing whether a game is weighted, and show that, for the four types of representations, both problems can be solved in polynomial time. We then provide results on the complexity of testing whether a simple game or a weighted game is of a special type: we analyze strongness, properness, weightedness, homogeneousness, decisiveness and majorityness, which are desirable properties for a simple game to fulfill. Finally, we consider the possibility of representing a game in a more succinct and natural way and show that the corresponding recognition problem is hard.
Keywords: Simple, Weighted, Majority games, NP-completeness
Date: Thu, 18 Dec 2014 19:12:37 GMT
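Two of the properties analyzed, properness and strongness, are straightforward to test when the game is given by its full set of winning coalitions (the "winning" explicit representation): a game is proper if the complement of every winning coalition loses, and strong if the complement of every losing coalition wins. A brute-force sketch, assuming this representation:

```python
from itertools import chain, combinations

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_proper(players, winning):
    """Proper: the complement of every winning coalition is losing."""
    return all(frozenset(players - w) not in winning for w in winning)

def is_strong(players, winning):
    """Strong: the complement of every losing coalition is winning."""
    losing = {frozenset(c) for c in powerset(players)} - winning
    return all(frozenset(players - l) in winning for l in losing)

# Majority game on 3 players: coalitions with at least 2 players win.
players = frozenset({1, 2, 3})
winning = {frozenset(c) for c in powerset(players) if len(c) >= 2}
print(is_proper(players, winning), is_strong(players, winning))  # True True
```

With the winning representation the checks are polynomial in the input size, matching the paper's point that complexity hinges on the representation; for succinct representations these tests become hard.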
http://hdl.handle.net/2117/24421
Title: Long non-coding RNAs as a source of new peptides
Authors: Ruiz Orera, Jorge; Messeguer Peypoch, Xavier; Subirana Torrent, Juan A.; Albà Soler, M. Mar
Abstract: Deep transcriptome sequencing has revealed the existence of many transcripts that lack long or conserved open reading frames (ORFs) and which have been termed long non-coding RNAs (lncRNAs). The vast majority of lncRNAs are lineage-specific and do not yet have a known function. In this study, we test the hypothesis that they may act as a repository for the synthesis of new peptides. We find that a large fraction of the lncRNAs expressed in cells from six different species is associated with ribosomes. The patterns of ribosome protection are consistent with the translation of short peptides. lncRNAs show coding potential and sequence constraints similar to those of evolutionarily young protein-coding sequences, indicating that they play an important role in de novo protein evolution.
Keywords: Open reading frames, Polycistronic messenger-RNA, Transcriptional landscape, Saccharomyces cerevisiae, Gene expression, Disease genes, Human cells, Evolution, Reveals
Date: Mon, 20 Oct 2014 08:38:40 GMT
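The notion of an open reading frame used above is simple to operationalize. This toy scanner (our illustration, forward strand only, DNA alphabet) finds ATG-to-stop stretches; lncRNAs by definition lack long ones, but can still harbor the short ORFs whose translation the study detects.

```python
def find_orfs(seq, min_codons=2):
    """Return in-frame ATG...stop open reading frames of at least
    min_codons codons (stop excluded from the length requirement)."""
    stops = {'TAA', 'TAG', 'TGA'}
    orfs = []
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != 'ATG':
            continue
        for i in range(start + 3, len(seq) - 2, 3):
            if seq[i:i + 3] in stops:
                if (i - start) // 3 >= min_codons:
                    orfs.append(seq[start:i + 3])
                break
    return orfs

print(find_orfs('CCATGAAATTTTAGGG'))  # ['ATGAAATTTTAG']
```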
http://hdl.handle.net/2117/24245
Title: Power indices of influence games and new centrality measures for agent societies and social networks
Authors: Molinero Albareda, Xavier; Riquelme Csori, Fabián; Serna Iglesias, María José
Abstract: We propose as centrality measures for social networks two classical power indices, Banzhaf and Shapley-Shubik, and two new measures, effort and satisfaction, related to the spread-of-influence process that emerges from the underlying influence game. We compare these measures with three well-known centrality measures, degree, closeness and betweenness, applied to three simple social networks.
Keywords: Social network, Centrality, Power index, Influence game, Simple game
Date: Fri, 03 Oct 2014 13:58:17 GMT
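Of the two classical indices, Banzhaf is the simpler to sketch: a player's raw Banzhaf count is the number of coalitions in which that player is critical, i.e. the coalition wins with them and loses without them. A brute-force version for a weighted voting game (the paper applies the index to influence games, which generalize this):

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Raw Banzhaf count of each player in the weighted game [quota; weights]."""
    players = list(weights)
    swings = dict.fromkeys(players, 0)
    for r in range(len(players) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:
                for p in coalition:
                    if total - weights[p] < quota:  # p is critical here
                        swings[p] += 1
    return swings

# [4; 3, 2, 1]: the heaviest player is critical in far more coalitions.
print(banzhaf({'a': 3, 'b': 2, 'c': 1}, 4))
```

Normalizing the counts to sum to 1 gives the usual Banzhaf index; used on an influence game, the same computation ranks network nodes by power rather than by mere degree.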
http://hdl.handle.net/2117/23672
Title: Single-entry single-exit decomposed conformance checking
Authors: Muñoz Gama, Jorge; Carmona Vargas, Josep; Van der Aalst, Wil M. P.
Abstract: An exponential growth of event data can be witnessed across all industries. Devices connected to the internet (internet of things), social interaction, mobile computing, and cloud computing provide new sources of event data, and this trend will continue. The omnipresence of large amounts of event data is an important enabler for process mining. Process mining techniques can be used to discover, monitor and improve real processes by extracting knowledge from observed behavior. However, unprecedented volumes of event data also bring new challenges, and state-of-the-art process mining techniques often cannot cope. This paper focuses on "conformance checking in the large" and presents a novel decomposition technique that partitions larger process models and event logs into smaller parts that can be analyzed independently. The so-called Single-Entry Single-Exit (SESE) decomposition not only helps to speed up conformance checking, but also provides improved diagnostics: the analyst can zoom in on the problematic parts of the process. Importantly, we describe the conditions under which the conformance of the whole can be assessed by verifying the conformance of the SESE parts, which enables the decomposition and distribution of large conformance checking problems. All the techniques have been implemented in ProM, and experimental results are provided.
Keywords: Process mining, Conformance checking, Decomposition, Process diagnosis
Date: Fri, 01 Aug 2014 12:25:50 GMT
http://hdl.handle.net/2117/23530
Title: Reasoning about orchestrations of web services using partial correctness
Authors: Stewart, Alan; Gabarró Vallès, Joaquim; Keenan, Anthony
Abstract: A service is a remote computational facility which is made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared-state services and services with states which are customised for individual users. A service-based orchestration is a multi-threaded computation which invokes remote services in order to deliver results back to a user (publication). In this paper a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable, the termination of even finite orchestrations cannot be guaranteed. For this reason a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
Keywords: World Wide Web, Service, Specification, Orchestration, Orc, Partial correctness, Pre-orders, Fixed-points, Powerdomains
Date: Wed, 16 Jul 2014 12:57:09 GMT
http://hdl.handle.net/2117/23504
Title: Excessively duplicating patterns represent non-regular languages
Authors: Creus López, Carles; Godoy Balil, Guillem; Ramos Garrido, Lander
Abstract: A constrained term pattern s:φ represents the language of all instances of the term s satisfying the constraint φ. For each variable in s, this constraint specifies the language of its allowed substitutions. Regularity of languages represented by sets of patterns has been studied for a long time. This problem is known to be co-NP-complete when the constraints allow each variable to be replaced by any term over a fixed signature, and EXPTIME-complete when the constraints restrict each variable to a regular set. In both cases, duplication of variables in the terms of the patterns is a necessary condition for non-regularity, because duplications force the recognizer to test equality between subterms. Hence, for the specific classes of constraints mentioned above, if all patterns are linear, then the represented language is necessarily regular. In this paper we focus on the opposite case, that is, when there are patterns with
Keywords: Theory of computation, Pattern, Regular tree language, Tree automaton, Tree homomorphism
Date: Mon, 14 Jul 2014 12:29:43 GMT
http://hdl.handle.net/2117/22738
Title: Turing's algorithmic lens: from computability to complexity theory
Authors: Díaz Cort, Josep; Torras, Carme
Abstract: The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still-open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is.
Keywords: Automation, Turing, Computer science, Computability, Complexity theory, Algorithmics
Date: Mon, 28 Apr 2014 17:13:15 GMT