Ponències/Comunicacions de congressos
http://hdl.handle.net/2117/3095
2016-02-11T19:28:38Z
http://hdl.handle.net/2117/82625
Reasoning about policy behavior in logic-based trust management systems: Some complexity results and an operational framework
Pasarella Sánchez, Ana Edelmira; Lobo, Jorge
In this paper we show that the logical framework proposed by Becker et al. [1] to reason about security policy behavior in a trust management context can be captured by an operational framework based on the language proposed by Miller in 1989 to deal with scoping and/or modules in logic programming. The framework of Becker et al. uses propositional Horn clauses to represent both policies and credentials; implications in clauses are interpreted in counterfactual logic; a Hilbert-style proof system is defined; and a SAT-based system is used to prove whether properties about credentials, permissions and policies are valid, i.e. true under all possible policies. Our contributions in this paper are threefold. First, we show that this kind of validation can rely on an operational semantics (derivability relation) of a language very similar to Miller's, which is very close to derivability in logic programs. Second, we establish that, as in propositional logic, validity of formulas is a co-NP-complete problem. Third, we present a provably correct implementation of a goal-oriented algorithm for validity.
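The notion of derivability in logic programs on which the paper's operational semantics builds can be sketched for propositional Horn clauses. The following is a minimal forward-chaining illustration in plain Python; the clause and atom names are made up, and this is not the paper's goal-oriented validity algorithm:

```python
# Minimal forward-chaining derivability for propositional Horn clauses.
# A clause is (head, frozenset(body)); facts have an empty body.
# Illustrates the derivability relation of logic programs only; it is
# not the goal-oriented algorithm described in the paper.

def derivable(clauses, goal):
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return goal in known

# Hypothetical toy policy: a credential grants access.
policy = [
    ("has_credential", frozenset()),                      # fact
    ("may_access", frozenset({"has_credential"})),        # rule
]
print(derivable(policy, "may_access"))  # True
```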
2016-02-05T12:38:45Z
http://hdl.handle.net/2117/82536
A cost-benefit analysis of continuous assessment
Duch Brown, Amalia; Gabarró Vallès, Joaquim; Petit Silvestre, Jordi; Blesa Aguilera, Maria Josep; Serna Iglesias, María José
The first course on programming is fundamental in the Facultat d’Informàtica de Barcelona. After a major redesign of the Programming-1 course in 2006 to give it a more practical flavor, an increasing number of measures have been undertaken over the years to try to increase its pass rate while maintaining a fixed quality level. These measures, which can be roughly summarized as a substantial increase in assessment, imply an increase in the workload of both students and instructors that is not always matched by the increase in pass rate they provide. In this paper, and within the context of this course, we quantitatively analyze the amount of work required from faculty to implement this series of measures, and we conclude that, within this course, continuous assessment is expensive and has reached its limit.
2016-02-04T09:49:32Z
http://hdl.handle.net/2117/82017
The robustness of periodic orchestrations in uncertain evolving environments
Castro Rabal, Jorge; Gabarró Vallès, Joaquim; Serna Iglesias, María José; Stewart, Alan
A framework for assessing the robustness of long-duration repetitive orchestrations in uncertain evolving environments is proposed. The model assumes that service-based evaluation environments are stable over short time-frames only; over longer periods service-based environments evolve as demand fluctuates and contention for shared resources varies.
The behaviour of a short-duration orchestration E in a stable environment is assessed by an uncertainty profile U and a corresponding zero-sum angel-daemon game Γ(U).
Here the angel-daemon approach is extended to assess evolving environments by means of a subfamily of stochastic games. These games are called strategy oblivious because their transition probabilities are strategy independent. It is shown that the value of a strategy oblivious stochastic game is well defined and that it can be computed by solving a linear system. Finally, the proposed stochastic framework is used to assess the evolution of the Gabrmn IT system.
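The linear-system characterization can be sketched under an assumed discounted formulation: because transitions are strategy-independent, the value vector v satisfies v = u + beta·P·v, where u collects the (precomputed) stage-game values and P is the transition matrix. The numbers below are toy illustration data, not taken from the paper:

```python
# Sketch: value of a strategy-oblivious stochastic game via its linear
# system v = u + beta * P v, solved here by fixed-point iteration
# (a contraction for beta < 1). Stage-game values u are assumed to be
# precomputed; all numbers are made-up illustration data.

def solve_value(u, P, beta, iters=10000):
    v = [0.0] * len(u)
    for _ in range(iters):
        v = [u[s] + beta * sum(P[s][t] * v[t] for t in range(len(u)))
             for s in range(len(u))]
    return v

u = [1.0, 0.0]                    # stage-game values at each state
P = [[0.5, 0.5], [0.2, 0.8]]      # strategy-independent transitions
beta = 0.9
v = solve_value(u, P, beta)
```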
2016-01-26T08:45:30Z
http://hdl.handle.net/2117/79017
Entailment among probabilistic implications
Atserias, Albert; Balcázar Navarro, José Luis
We study a natural variant of the implicational fragment of propositional logic. Its formulas are pairs of conjunctions of positive literals, related by an implication-like connective; the semantics of this sort of implication is defined in terms of a threshold on the conditional probability of the consequent given the antecedent: we are dealing with what the data analysis community calls the confidence of partial implications, or association rules. Existing studies of redundancy among these partial implications have so far characterized only entailment from one premise and entailment from two premises. By exploiting a previously noted alternative view of this entailment in terms of linear programming duality, we characterize exactly the cases of entailment from arbitrary numbers of premises. As a result, we obtain decision algorithms of better complexity; additionally, for each potential case of entailment, we identify a critical confidence threshold and show that it is, in fact, intrinsic to each set of premises and antecedent of the conclusion.
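The semantic notion involved, the confidence of a partial implication over a dataset of transactions, can be computed in a few lines. This sketch (toy data, plain Python) only illustrates the quantity being thresholded, not the paper's entailment characterization:

```python
# Confidence of a partial implication A -> B over a transaction dataset:
# conf(A -> B) = support(A ∪ B) / support(A).
# Toy transactions; illustrative only, not the paper's procedure.

def confidence(transactions, antecedent, consequent):
    antecedent, consequent = set(antecedent), set(consequent)
    covers_a = [t for t in transactions if antecedent <= t]
    if not covers_a:
        return 0.0
    covers_ab = [t for t in covers_a if consequent <= t]
    return len(covers_ab) / len(covers_a)

data = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]
print(confidence(data, {"a"}, {"b"}))  # 2 of the 3 'a'-transactions contain 'b'
```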
2015-11-11T12:38:12Z
http://hdl.handle.net/2117/76990
A retargetable and accurate methodology for logic-IP-internal electromigration assessment
Jain, Palkesh; Sapatnekar, Sachin S.; Cortadella Fortuny, Jordi
A new methodology for SoC-level logic-IP-internal EM verification is presented, which provides an on-the-fly retargeting capability for reliability constraints. This flexibility is available at the design verification stage, in the form of allowing arbitrary specifications (of lifetimes, temperatures, voltages and failure rates), as well as interoperability of IPs across foundries. The methodology is characterization- and reuse-based, and naturally incorporates complex effects such as clock gating and variable switching rates at different pins. The benefit of such a framework is demonstrated on a 28nm design, with close SPICE correlation and verification under a retargeted reliability condition.
2015-09-21T11:28:48Z
http://hdl.handle.net/2117/76532
On the stability of generalized second price auctions with budgets
Díaz Cort, Josep; Giotis, Ioannis; Kirousis, Lefteris; Markakis, Evangelos; Serna Iglesias, María José
The Generalized Second Price (GSP) auction, typically used to model sponsored search auctions, does not include the notion of budget constraints, which is present in practice. Motivated by this, we introduce different variants of GSP auctions that take budgets into account in natural ways. We examine their stability by focusing on the existence of Nash equilibria and envy-free assignments. We highlight the differences between these mechanisms and find that only some of them exhibit both notions of stability. This shows the importance of carefully picking the right mechanism to ensure stable outcomes in the presence of budgets.
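The budget-free baseline that the paper's variants extend can be sketched directly: slot i goes to the i-th highest bidder, who pays the (i+1)-th highest bid per click. Bids and slot counts below are made-up illustration data, and this sketch omits the budget handling that is the paper's subject:

```python
# Standard GSP pricing (no budgets): rank bidders by bid; the winner of
# slot i pays the next-ranked bid per click. Illustration only; the
# paper studies budget-aware variants of this baseline.

def gsp(bids, num_slots):
    # bids: {bidder: bid}; returns [(bidder, price_paid_per_click), ...]
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for i in range(min(num_slots, len(ranked))):
        bidder, _ = ranked[i]
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        outcome.append((bidder, price))
    return outcome

print(gsp({"x": 5.0, "y": 3.0, "z": 1.0}, 2))  # [('x', 3.0), ('y', 1.0)]
```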
2015-08-28T11:00:33Z
http://hdl.handle.net/2117/28174
Better feedback for educational online judges
Mani, Anaga; Venkataramani, Divya; Petit Silvestre, Jordi; Roura Ferret, Salvador
The verdicts of most online programming judges are, essentially, binary: the submitted codes are either “good enough” or not. Whilst this policy is appropriate for competitive or recruitment platforms, it can hinder the adoption of online judges in educational settings, where it would be appropriate to provide better feedback to a student (or instructor) who has submitted a wrong code. An obvious option would be to just show them an instance where the code fails. However, that particular instance might not be very significant, and so could induce unreflective patching of the code. The approach considered in this paper is to data-mine all the past incorrect submissions by all the users of the judge, so as to extract a small subset of private test cases that may be relevant to most future users. Our solution is based on parsing the test files, building a bipartite graph, and solving a Set Cover problem by means of Integer Linear Programming. We have tested our solution with a hundred problems in Jutge.org. Those experiments suggest that our approach is general, efficient, and provides high-quality results.
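The covering step admits a small sketch: each candidate test case covers the set of wrong submissions it rejects, and we want few test cases covering them all. The paper solves this Set Cover instance exactly with Integer Linear Programming; the greedy approximation below, over made-up data, only illustrates the underlying covering problem:

```python
# Greedy set cover: repeatedly pick the test case that rejects the most
# still-uncovered wrong submissions. The paper uses an exact ILP instead;
# this greedy stand-in only illustrates the covering formulation.

def greedy_cover(coverage):
    # coverage: {test_case: set of wrong submissions it rejects}
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining submissions cannot be covered
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

tests = {                       # hypothetical test cases / submissions
    "t1": {"s1", "s2"},
    "t2": {"s2", "s3", "s4"},
    "t3": {"s4"},
}
print(greedy_cover(tests))  # ['t2', 't1']
```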
2015-06-04T08:47:03Z
http://hdl.handle.net/2117/28034
Tableau-based reasoning for graph properties
Lambers, Leen; Orejas Valdés, Fernando
Graphs are ubiquitous in Computer Science. For this reason, in many areas it is very important to have the means to express and reason about graph properties. A simple way is based on defining an appropriate encoding of graphs in terms of classical logic; this approach has been followed by Courcelle. The alternative is the definition of a specialized logic, as done by Habel and Pennemann, who defined a logic of nested graph conditions, where graph properties are formulated explicitly making use of graphs and graph morphisms, and which has the expressive power of Courcelle's first-order logic of graphs. In particular, in his thesis, Pennemann defined and implemented a sound proof system for reasoning in this logic. Moreover, he showed that his tools outperform some standard provers when working over encoded graph conditions. Unfortunately, Pennemann did not prove the completeness of his proof system. One of the main contributions of this paper is the solution to this open problem: we prove the (refutational) completeness of a tableau method based on Pennemann's rules, which yields a specific theorem-proving procedure for this logic. This procedure can be considered our second contribution. Finally, our tableaux are not standard: we had to define a new notion of nested tableaux, which could be useful for other formalisms where formulas have a hierarchical structure, like nested graph conditions.
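As background, the mechanics of refutational tableau reasoning can be sketched in miniature for propositional formulas in negation normal form. This classical procedure only illustrates the tableau idea of branching and closing branches on contradictions; the paper's tableaux operate on nested graph conditions, which are far richer:

```python
# Classical analytic tableau for propositional formulas in NNF.
# Formulas: ("lit", name, polarity) | ("and", f, g) | ("or", f, g).
# A branch closes when it holds complementary literals; a formula set is
# unsatisfiable iff every branch closes. Illustration only; the paper's
# tableaux work over nested graph conditions, not propositions.

def satisfiable(formulas, literals=frozenset()):
    if not formulas:
        return True  # open, fully expanded branch: a model exists
    f, rest = formulas[0], formulas[1:]
    tag = f[0]
    if tag == "lit":
        _, name, pol = f
        if (name, not pol) in literals:
            return False  # branch closes on complementary literals
        return satisfiable(rest, literals | {(name, pol)})
    if tag == "and":
        return satisfiable([f[1], f[2]] + rest, literals)
    # "or": split the branch
    return satisfiable([f[1]] + rest, literals) or \
           satisfiable([f[2]] + rest, literals)

# (p or q) and (not p) and (not q): every branch closes.
p, np_ = ("lit", "p", True), ("lit", "p", False)
q, nq = ("lit", "q", True), ("lit", "q", False)
print(satisfiable([("and", ("or", p, q), ("and", np_, nq))]))  # False
```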
2015-05-25T14:03:48Z
http://hdl.handle.net/2117/26860
WMN-SA system for node placement in WMNs: evaluation for different realistic distributions of mesh clients
Sakamoto, Shinji; Oda, Tetsuya; Bravo, Albert; Barolli, Leonard; Ikeda, Makoto; Xhafa Xhafa, Fatos
One of the key attractions of Wireless Mesh Networks (WMNs) is that they provide cost-efficient broadband connectivity. Achieving network connectivity and user coverage raises issues that are related to the node placement problem. We implemented a simulation system where we consider the router node placement problem in WMNs. We want to find the optimal distribution of router nodes in order to provide the best network connectivity and the best coverage for a set of randomly distributed clients. We modeled 3 different realistic distributions of mesh clients (Subway, Boulevard and Stadium models). From the simulation results, we found that, in the case of the Subway model distribution of mesh clients, connectivity and coverage reach maximum performance. For all 3 models, when the instance size increases, the performance decreases.
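The system's name suggests a Simulated Annealing search. A generic SA sketch for the coverage side of the placement problem is shown below, with made-up coordinates, radii and cooling parameters; it is an illustration of the optimization style, not the WMN-SA system itself:

```python
import math, random

# Generic simulated-annealing sketch for router placement: maximize the
# number of clients within radio range of some router. Toy coordinates
# and parameters; this is not the WMN-SA implementation.

random.seed(0)
CLIENTS = [(random.random(), random.random()) for _ in range(30)]
RADIUS = 0.25

def coverage(routers):
    return sum(any(math.dist(c, r) <= RADIUS for r in routers) for c in CLIENTS)

def anneal(num_routers=3, steps=2000, temp=1.0, cooling=0.995):
    routers = [(random.random(), random.random()) for _ in range(num_routers)]
    best, best_cov = list(routers), coverage(routers)
    cov = best_cov
    for _ in range(steps):
        cand = list(routers)
        i = random.randrange(num_routers)               # perturb one router
        cand[i] = (random.random(), random.random())
        c = coverage(cand)
        # Accept improvements always; worse moves with temperature-dependent odds.
        if c >= cov or random.random() < math.exp((c - cov) / temp):
            routers, cov = cand, c
            if cov > best_cov:
                best, best_cov = list(routers), cov
        temp *= cooling
    return best, best_cov
```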
2015-03-19T14:21:51Z
http://hdl.handle.net/2117/26167
PMLAB: A scripting environment for process mining
Carmona Vargas, Josep; Solé, Marc
In a decade of process mining research, several algorithms have been proposed to solve particular process mining tasks. At the same pace, tools have appeared in both the academic and commercial domains. These tools have enabled the use of process mining practices only to a rather limited extent. In this paper we advocate for a change in mentality: process mining may be an exploratory discipline, and PMLAB, a Python-based scripting environment supporting this view, is proposed. This demo presents the main features of the PMLAB environment.
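The exploratory, scripting-oriented style the paper advocates can be illustrated with a few lines of plain Python computing a directly-follows relation from an event log, one of the basic building blocks of process discovery. This is generic Python over a made-up log, not PMLAB's actual API:

```python
from collections import Counter

# Count directly-follows pairs in an event log: how often activity a is
# immediately followed by activity b within some trace. A basic
# process-mining primitive; illustrative only, not PMLAB's API.

def directly_follows(log):
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

log = [                         # hypothetical traces
    ["register", "check", "pay"],
    ["register", "pay"],
    ["register", "check", "check", "pay"],
]
print(directly_follows(log)[("register", "check")])  # 2
```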
2015-01-30T08:58:16Z