LOGPROG - Lògica i Programació
http://hdl.handle.net/2117/3516 (2024-03-29)

Efficient deduction in equality Horn logic by Horn-completion
http://hdl.handle.net/2117/370508 (2022-07-19)
Nieuwenhuis, Robert Lukas Mario; Nivela Alós, M. Pilar Brígida
We present a new unfailing completion procedure for Horn clauses with equality, including goal clauses. It is refutationally complete, and improves previous methods in that superpositions are computed only with unconditional equations, and on a single, arbitrarily chosen equation in the condition of each Horn axiom (i.e., not necessarily on the maximal ones). An interesting aspect of this note is the proof of completeness, which is based on the techniques defined by Bachmair and Ganzinger (1990). It is given here in detail and is short, simple and highly self-contained.
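The flavor of reasoning with unconditional equations can be glimpsed in miniature through plain ground rewriting. The sketch below is only a toy illustration, not the paper's completion procedure: it orients two hypothetical ground equations (over invented constants a, b and a unary f) as left-to-right rewrite rules and checks that both sides of a goal equation reach the same normal form.

```python
# Toy illustration (not the paper's procedure): rewriting ground terms
# with unconditional equations oriented as rewrite rules.
# Terms are nested tuples: ('f', ('a',)) stands for f(a).

def rewrite_once(term, rules):
    """Apply the first matching rule at the root or inside a subterm."""
    for lhs, rhs in rules:
        if term == lhs:
            return rhs, True
    if len(term) > 1:  # the term has arguments
        head, *args = term
        for i, arg in enumerate(args):
            new_arg, changed = rewrite_once(arg, rules)
            if changed:
                return (head, *args[:i], new_arg, *args[i + 1:]), True
    return term, False

def normal_form(term, rules):
    """Rewrite until no rule applies anywhere."""
    changed = True
    while changed:
        term, changed = rewrite_once(term, rules)
    return term

# Hypothetical equations, oriented by size: b -> a and f(a) -> a.
rules = [(('b',), ('a',)),
         (('f', ('a',)), ('a',))]

# Goal equation f(f(b)) = a: both sides normalize to a, so it holds.
print(normal_form(('f', ('f', ('b',))), rules) == normal_form(('a',), rules))  # True
```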
An implementation of the KNS ordering
http://hdl.handle.net/2117/370491 (2022-07-19)
Rivero Almeida, José Miguel
We describe an implementation of the KNS ordering within the TRIP system, a laboratory written in Quintus Prolog for experimenting with new rewrite-like approaches to theorem proving in first-order logic. The aim of the TRIP system is not to obtain high-performance theorem provers, but rather to test techniques that could be the basis for such provers. The KNS ordering is a partial ordering scheme for proving the uniform termination of term rewriting systems, which is a crucial aspect of Knuth-Bendix completion-like theorem proving procedures. The KNS ordering is used within TRIP as a subsystem of a very general completion procedure for first-order clauses with equality, based on recent work in this field by Bachmair and Ganzinger (1990, 1991) subsuming and generalizing many known results. Our system contributes to the further development of these methods in several ways, especially with respect to new techniques that are necessary for exploiting in their full power the redundancy notions that have now become available. We apply results from previous work, in which a highly simplified framework for defining Knuth-Bendix-like completion procedures for full first-order clauses with equality is defined and instantiated by a new, simpler and more restrictive inference system. This implementation uses the notion of clausal rewriting (Nieuwenhuis and Orejas, 1991) to define a single formalized method for proving the redundancy of clauses and inferences.
Automatic generation of polynomial loop invariants for imperative programs
http://hdl.handle.net/2117/369922 (2022-07-11)
Rodríguez Carbonell, Enric; Kapur, Deepak
A general framework is presented for automating the discovery of loop invariants for imperative programs. Theoretical results about the correctness and completeness of the proposed method are given. More importantly, it is shown how this abstract approach can be used to automatically infer polynomial invariants. The method has been implemented in Maple. Evidence of its practical interest is shown by means of several non-trivial examples, for which the polynomial loop invariants generated are directly applicable for proving correctness by means of a simple verifier.
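As a flavor of what such invariants look like, here is a hand-written toy check (the paper's implementation is in Maple; this is an independent Python sketch on an invented classic example): a loop that computes squares by summing successive odd numbers, whose candidate polynomial invariants x = i² and y = 2i + 1 are asserted at every iteration and at exit.

```python
# Toy check (not the paper's generator): a candidate polynomial loop
# invariant, verified dynamically by asserting it on every iteration.

def run_loop(n):
    x, y, i = 0, 1, 0
    while i < n:
        # candidate polynomial invariants: x == i**2 and y == 2*i + 1
        assert x == i ** 2 and y == 2 * i + 1
        x, y, i = x + y, y + 2, i + 1
    assert x == i ** 2 and y == 2 * i + 1  # also holds on exit
    return x

print(run_loop(7))  # 49: at exit the invariant yields x == n**2
```

A verifier in the paper's sense would prove these equalities symbolically rather than test them at run time.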
Choosing the root of the tree decomposition when solving WCSPs: preliminary results
http://hdl.handle.net/2117/356860 (2021-11-22)
Petrova, Aleksandra; Larrosa Bondia, Francisco Javier; Rollón Rico, Emma
In this paper we analyze the effect of selecting the root in a tree decomposition when using decomposition-based backtracking algorithms. We focus on optimization tasks for Graphical Models using the BTD algorithm. We show that the choice of the root typically has a dramatic effect on solving performance. We then investigate different simple measures to predict near-optimal roots. Our study shows that the correlations are often low, so the automatic selection of a near-optimal root will require more sophisticated techniques.
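One way to picture such a simple predictive measure (the concrete measure below is purely illustrative, not necessarily one evaluated in the paper): score each candidate root of the decomposition tree by the height of the resulting rooted tree and keep the minimizer.

```python
# Illustrative root-selection measure: minimize the rooted-tree height.
# The decomposition tree is given as an adjacency dict over cluster ids.

from collections import deque

def height_from(root, adj):
    """Height of the tree when rooted at `root` (BFS level count)."""
    seen, frontier, h = {root}, deque([(root, 0)]), 0
    while frontier:
        node, d = frontier.popleft()
        h = max(h, d)
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return h

def best_root(adj):
    """Candidate root minimizing the height measure."""
    return min(adj, key=lambda r: height_from(r, adj))

# A path-shaped decomposition 0-1-2-3-4: the middle cluster wins.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(best_root(adj))  # 2
```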
Employee scheduling with SAT-based pseudo-boolean constraint solving
http://hdl.handle.net/2117/355638 (2021-11-04)
Nieuwenhuis, Robert Lukas Mario; Oliveras Llunell, Albert; Rodríguez Carbonell, Enric; Rollón Rico, Emma
The aim of this paper is practical: to show that, for at least one important real-world problem, modern SAT-based technology can beat the extremely mature branch-and-cut solving methods implemented in well-known state-of-the-art commercial solvers such as CPLEX or Gurobi. The problem of employee scheduling consists in assigning a work schedule to each of the employees of an organization, in such a way that demands are met, legal and contractual constraints are respected, and staff preferences are taken into account. This problem is typically handled by first modeling it as a 0-1 integer linear program (ILP). Our experimental setup considers as a case study the 0-1 ILPs obtained from the staff scheduling of a real-world car rental company, and carefully compares the performance of CPLEX and Gurobi with our own simple conflict-driven constraint-learning pseudo-Boolean solver.
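A minimal sketch of the kind of 0-1 model involved (invented toy data, not the paper's car-rental benchmarks): variables x[e][s] say whether employee e works shift s, and the constraints are pseudo-Boolean equalities over those 0-1 variables.

```python
# Toy 0-1 scheduling model: x[e][s] = 1 iff employee e works shift s.
# Both constraint families below are pseudo-Boolean (0-1 linear) equalities.

from itertools import product

EMPLOYEES, SHIFTS = 3, 2
DEMAND = [2, 1]                       # required staff per shift

def feasible(x):
    # each employee works exactly one shift
    if any(sum(x[e]) != 1 for e in range(EMPLOYEES)):
        return False
    # each shift is staffed exactly to its demand
    return all(sum(x[e][s] for e in range(EMPLOYEES)) == DEMAND[s]
               for s in range(SHIFTS))

def all_assignments():
    for bits in product((0, 1), repeat=EMPLOYEES * SHIFTS):
        yield [list(bits[e * SHIFTS:(e + 1) * SHIFTS])
               for e in range(EMPLOYEES)]

# Brute force over the 0-1 cube; a PB solver searches it with
# conflict-driven learning instead of enumeration.
sols = [x for x in all_assignments() if feasible(x)]
print(len(sols))  # 3: choose which of the 3 employees takes shift 1
```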
A heuristic approach to the design of optimal cross-docking boxes
http://hdl.handle.net/2117/354905 (2021-10-28)
Nieuwenhuis, Robert Lukas Mario; Oliveras Llunell, Albert; Rodríguez Carbonell, Enric
Multinational companies frequently work with manufacturers that receive large orders for different products (or product varieties: size, shape, color, texture, material), to serve thousands of different final destinations (e.g., shops) requesting a combination of different quantities of each product. It is not the manufacturers’ task to create the individual shipments for each final destination. But manufacturers can deliver part of their production in so-called cross-docking boxes (or other containers) of a few (say, three) types, each type containing a given assortment, i.e., different quantities of different products. At a logistics center, the shipments for each destination are then made of cross-docking boxes plus additional “picking” units. The expensive part is the picking, since cross-docking boxes require little or no manipulation. The problem we solve in this paper is, given a large set of orders for each destination, to design the cross-docking box types in order to minimize picking. We formally define a variant of this problem and develop a heuristic method to solve it. Finally, we present extensive experimental results on a large set of real-world benchmarks proving that our approach gives high-quality solutions (optimal or near optimal) in a very limited amount of time.
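To make the objective concrete, here is an invented greedy toy (emphatically not the paper's heuristic): treat the k most frequent whole-destination assortments as box types, and count the units that must still be picked by hand for all destinations not covered by a box type.

```python
# Invented toy for the picking objective: orders are per-destination
# assortments (units per product); a box type covers a destination only
# if it matches the whole assortment; everything else is hand-picked.

from collections import Counter

def picking_cost(orders, k):
    """Units picked by hand when the k most frequent assortments
    become cross-docking box types (a crude greedy choice)."""
    box_types = [a for a, _ in Counter(orders).most_common(k)]
    cost = 0
    for o in orders:
        if o in box_types:
            continue           # served entirely by one box
        cost += sum(o)         # all units picked individually
    return cost

orders = [(2, 1), (2, 1), (2, 1), (1, 0), (0, 3)]
print(picking_cost(orders, 1))  # 4 = (1+0) + (0+3) left to pick
```

The real problem is harder: boxes may cover destinations partially and in multiples, which is what makes a dedicated heuristic worthwhile.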
Stuck-at-off fault analysis in memristor-based architecture for synchronization
http://hdl.handle.net/2117/353813 (2021-10-19)
Escudero, Manuel; Vourkas, Ioannis; Rubio Gimeno, Alberto
Nonlinear circuits may be interconnected and organized in networks to couple their dynamics and achieve synchronization, a process that is commonly observed in nature. Recent works have shown that memristors may be used as coupling elements in synchronization applications. However, these devices may suffer from low endurance and thus switching faults. Hence, evaluating redundancy and adaptive properties against possible device failures is important. In this direction, we study the performance of a network of chaotic circuits that are coupled using memristors organized in a crossbar geometry. This topology is analyzed in terms of robustness while assuming the existence of defective coupling devices. Our simulation results, based on a physics-based model of a bipolar memristor, demonstrate the adaptive behavior of the crossbar architecture and reveal network topologies that outperform the fully functional crossbar.
Decision levels are stable: towards better SAT heuristics
http://hdl.handle.net/2117/349696 (2021-07-20)
Nieuwenhuis, Robert Lukas Mario; Lozano Navarro, Adrián; Oliveras Llunell, Albert; Rodríguez Carbonell, Enric
We shed new light on the Literal Block Distance (LBD) and glue-based heuristics used in current SAT solvers. For this, we first introduce the concept of stickiness: given a run of a CDCL SAT solver, for each pair of literals we define, by a real value between 0 and 1, how sticky they are, that is, how frequently they are set at the same decision level.
By means of a careful and detailed experimental setup and analysis, we confirm the following quite surprising fact: given a SAT instance, when running different CDCL SAT solvers on it, no matter their settings or random seeds, the stickiness relation between literals is always very similar, in a precisely defined sense.
We analyze how quickly stickiness stabilizes in a run (quite quickly), and show that it is stable even under different encodings of cardinality constraints. We then describe how and why these solid new insights lead to heuristics refinements for SAT (and extensions, such as SMT) and improved information sharing in parallel solvers.
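Reading the informal definition above, the statistic can be sketched as follows (the trace format is hypothetical; real solvers record decision levels internally, per conflict or restart snapshot):

```python
# Minimal sketch of the stickiness statistic for one pair of literals:
# the fraction of paired snapshots in which both were set at the same
# decision level. The list-of-levels trace format is hypothetical.

def stickiness(trace_a, trace_b):
    """trace_a, trace_b: decision levels at which each literal was set,
    aligned snapshot by snapshot."""
    same = sum(1 for la, lb in zip(trace_a, trace_b) if la == lb)
    return same / len(trace_a)

# Literals a and b were set at these levels over five snapshots:
print(stickiness([3, 5, 2, 7, 4], [3, 5, 1, 7, 9]))  # 0.6
```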
First-order completion with ordering constraints: some positive and some negative results
http://hdl.handle.net/2117/330398 (2020-10-19)
Nieuwenhuis, Robert Lukas Mario; Rubio Gimeno, Alberto
We show by means of counterexamples that some well-known results on the completeness of deduction methods with ordering constraints are incorrect. The problem is caused by the fact that the usual lifting lemmata do not hold.
In this paper, these problems are overcome by using a different lifting method. We define a completion procedure for ordering-constrained first-order clauses with equality, including notions of redundant inferences and clauses, as done in [BG 91] for clauses without constraints. This completion procedure is refutationally complete if the initial set of constrained clauses fulfills a property that we have called pureness. In particular, clauses without constraints are pure. Pureness is preserved during the completion process. Since the constraints generated during completion reduce the search space considerably, our results allow very efficient theorem proving in first-order logic with equality. Moreover, complete sets of axioms (canonical sets of rewrite rules in the equational case) can be obtained in more cases.
Basic superposition is complete
http://hdl.handle.net/2117/330207 (2020-10-14)
Nieuwenhuis, Robert Lukas Mario; Rubio Gimeno, Alberto
We define a formalism of equality constraints and use it to prove the completeness of what we have called basic superposition: a restricted form of superposition in which only subterms not originating from previous inferences are superposed upon. We first apply our results to the equational case and to unfailing Knuth-Bendix completion. Second, we extend the techniques to the case of full first-order clauses with equality, proving the refutational completeness of a new simple inference system. Finally, we briefly outline how this method can be applied to further restrict inference systems by the use of ordering constraints.