Reports de recerca (Research reports)
http://hdl.handle.net/2117/3973
2016-10-27T20:44:49Z

Geometry simplification
http://hdl.handle.net/2117/91170
Andújar Gran, Carlos Antonio
In this work we present the principles and applications of geometry
simplification, focusing on simplification of polygonal representations
of solids and surfaces. Related concepts such as multiresolution,
level-of-detail and geometry compression are also discussed. A
characterization of surface simplification methods is presented,
including a classification, review and evaluation of the most relevant
methods.
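The report surveys concrete simplification methods; none of them is reproduced here. As a generic, hypothetical sketch of the edge-collapse operation that underlies many polygonal simplification algorithms (all names are illustrative):

```python
def collapse_edge(triangles, keep, remove):
    """Collapse the edge (keep, remove) in an indexed triangle list:
    merge vertex `remove` into `keep` and drop degenerate triangles."""
    simplified = []
    for tri in triangles:
        # Remap every reference to the removed vertex onto the kept one.
        tri = tuple(keep if v == remove else v for v in tri)
        # A triangle with a repeated vertex has collapsed to an edge or point.
        if len(set(tri)) == 3:
            simplified.append(tri)
    return simplified

# A small triangle fan; collapsing edge (2, 3) removes one triangle.
mesh = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
result = collapse_edge(mesh, keep=2, remove=3)
```

Real methods differ mainly in how they choose which edge to collapse (e.g. by an error metric), not in the collapse step itself.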
2016-10-27T13:44:05Z

The Events method for temporal integrity constraint handling in bitemporal deductive databases
http://hdl.handle.net/2117/91169
Martín Escofet, Carme
A bitemporal deductive database is a deductive database that supports both
valid time and transaction time. A temporal integrity constraint may involve
only valid time, only transaction time, or both. Facts can be inserted into
and deleted from a bitemporal deductive database at a past, present or future
valid time, always at the current transaction time. Handling temporal
integrity constraints makes consistency maintenance in bitemporal deductive
databases more complex than in other databases. The events method is based on
applying transition and event rules, which explicitly define the insertions
and deletions induced by a database update. In the conceptual model, we
augment the database with temporal transition and event rules, so that
standard SLDNF resolution can be used to verify that a transaction does not
violate any temporal integrity constraint. In the representational data
model, we use time-point-based intervals to store temporal information. In
this paper, we adapt the events method to handle temporal integrity
constraints. Finally, we present the interaction between the above-mentioned
conceptual and representational data models.
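The report's event rules are defined logically over temporal predicates; as a rough, hypothetical illustration of what they compute, the insertion and deletion events induced by an update can be derived by comparing the database state before and after it:

```python
def events(old_facts, new_facts):
    """Insertion events: facts true after the update but not before.
    Deletion events: facts true before the update but not after."""
    inserted = new_facts - old_facts
    deleted = old_facts - new_facts
    return inserted, deleted

# Hypothetical update: bob leaves, eve joins.
old = {("employee", "ann"), ("employee", "bob")}
new = {("employee", "ann"), ("employee", "eve")}
ins, dele = events(old, new)
```

In the events method proper, such events are not computed by diffing states but defined by rules, which is what allows integrity checking before the update is committed.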
2016-10-27T13:35:15Z

Higher-order recursive path orderings
http://hdl.handle.net/2117/91168
Jouannaud, J P; Rubio Gimeno, Alberto
This paper extends the termination proof techniques based on reduction
orderings to a higher-order setting, by adapting the recursive path
ordering definition to higher-order simply-typed lambda-terms. The
main result is that this ordering is well-founded, compatible with
beta-reductions, and with polymorphic typing. We also restrict the
ordering so as to obtain a new ordering operating on higher-order
terms in eta-long beta-normal form. Both orderings are powerful
enough to allow for complex examples, including the polymorphic
version of Gödel's recursor for simple inductive types.
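The higher-order ordering itself operates on simply-typed lambda-terms and is not reproduced here. As a first-order point of comparison only, a minimal sketch of the standard (lexicographic) RPO over ground terms with a total precedence, all names hypothetical:

```python
def rpo_greater(s, t, prec):
    """s > t in a simplified lexicographic RPO over ground terms.
    Terms are (function_symbol, [subterms]); prec maps symbols to ints."""
    f, ss = s
    g, ts = t
    # Case 1: some immediate subterm of s equals or dominates t.
    if any(si == t or rpo_greater(si, t, prec) for si in ss):
        return True
    # Case 2: head of s is greater in the precedence and s dominates
    # every immediate subterm of t.
    if prec[f] > prec[g]:
        return all(rpo_greater(s, tj, prec) for tj in ts)
    # Case 3: equal heads: compare arguments lexicographically, and
    # require s to dominate every immediate subterm of t.
    if f == g:
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return rpo_greater(si, ti, prec) and \
                   all(rpo_greater(s, tj, prec) for tj in ts)
    return False

# Distributivity instance a*(b+c) > a*b + a*c with precedence * > +.
prec = {"*": 3, "+": 2, "a": 1, "b": 1, "c": 1}
a, b, c = ("a", []), ("b", []), ("c", [])
s = ("*", [a, ("+", [b, c])])
t = ("+", [("*", [a, b]), ("*", [a, c])])
```

The paper's contribution is precisely the non-trivial extension of this scheme to lambda-terms, where beta-reduction and typing must be respected.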
2016-10-27T13:26:10Z

The complexity of game isomorphism
http://hdl.handle.net/2117/91166
Gabarró Vallès, Joaquim; García, Alina; Serna Iglesias, María José
We address the question of whether two multiplayer strategic games are equivalent, and the computational complexity of deciding such a property. We introduce two notions of isomorphism, strong and weak, each preserving a different structure of the game. Strong isomorphisms are defined to preserve the utility functions and Nash equilibria. Weak isomorphisms preserve only the players' preference relations and thus pure Nash equilibria. We show that the computational complexity of the game isomorphism problem depends on the level of succinctness of the description of the input games, but is independent of which of the two types of isomorphism is considered. Utilities in games can be given succinctly by Turing machines, Boolean circuits or Boolean formulas, or explicitly by tables. Actions can also be given explicitly or succinctly. When the games are given in general form, we assume an explicit description of actions and a succinct description of utilities. We show that the game isomorphism problem for general-form games is equivalent to the circuit isomorphism problem when utilities are described by Turing machines, and to the Boolean formula isomorphism problem when utilities are described by formulas. When the games are given in explicit form, we show that the game isomorphism problem is equivalent to the graph isomorphism problem.
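Since the explicit-form problem is graph-isomorphism-equivalent, no polynomial algorithm is known; still, a naive brute-force check of strong isomorphism for two-player games in explicit (table) form conveys the definition. This is a hypothetical sketch that, for brevity, keeps player roles fixed and only relabels actions:

```python
from itertools import permutations

def strongly_isomorphic(u1, u2):
    """u1, u2: dicts mapping (row_action, col_action) -> (payoff1, payoff2)
    over action sets {0..m-1} x {0..n-1}. Checks whether some relabelling
    of each player's actions maps one utility table onto the other."""
    rows = sorted({a for a, _ in u1})
    cols = sorted({b for _, b in u1})
    for pr in permutations(rows):
        for pc in permutations(cols):
            if all(u1[a, b] == u2[pr[a], pc[b]] for a in rows for b in cols):
                return True
    return False

# Matching pennies and a row-relabelled copy of it are strongly isomorphic.
pennies = {(0, 0): (1, -1), (0, 1): (-1, 1), (1, 0): (-1, 1), (1, 1): (1, -1)}
relabelled = {(1 - a, b): v for (a, b), v in pennies.items()}
```

Both candidate permutations and the cell-by-cell check make the exponential cost of the naive approach visible; the paper locates the problem's exact complexity instead.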
2016-10-27T12:23:55Z

Skeletonless porosimeter simulation
http://hdl.handle.net/2117/91165
Rodríguez, Jorge; Cruz Matías, Irving; Vergés Garcia, Eduard; Ayala Vallespí, M. Dolors
We introduce a new approach to simulate mercury intrusion porosimetry (MIP) virtually, using neither skeleton computation nor seed-growing methods. Most existing methods to determine local pore sizes in a porous medium require computing the skeleton of the pore space, but skeleton computation is a very time-consuming process. Instead, our approach uses a particular spatial enumeration encoding of the porous medium, a set of disjoint boxes, together with an algorithm able to determine the set of boxes invaded by the mercury at each iteration without any prior skeleton computation. The algorithm detects all the pores that must be filled for a given mercury intrusion pressure, which is related to a diameter by the Washburn equation. The presented method is able to detect narrow throats and one-dimensional transitions between pores in order to prevent incorrect full fluid invasion of the whole sample. The particular encoding used in this work is a new compact version of an existing model, the Ordered Union of Disjoint Boxes (OUDB). Finally, the pore-size distribution of the porous medium and the corresponding pore graph can be obtained from the analyzed sample.
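The Washburn equation mentioned above relates intrusion pressure to the smallest invadable pore diameter. A worked sketch with typical mercury parameters (the surface tension and contact angle values below are illustrative defaults, not taken from the report):

```python
import math

def washburn_diameter(pressure_pa, surface_tension=0.485,
                      contact_angle_deg=140.0):
    """Pore diameter d (m) invadable at a given intrusion pressure:
    P = -4 * gamma * cos(theta) / d  =>  d = -4 * gamma * cos(theta) / P.
    For mercury, theta > 90 degrees, so cos(theta) < 0 and d is positive."""
    return (-4.0 * surface_tension
            * math.cos(math.radians(contact_angle_deg)) / pressure_pa)

# At 100 kPa the invadable pore diameter is on the order of 15 micrometres.
d = washburn_diameter(100e3)
```

Higher pressure invades smaller pores, which is why sweeping the pressure and recording the invaded volume yields a pore-size distribution.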
2016-10-27T12:06:51Z

3D pore analysis of sedimentary rocks
http://hdl.handle.net/2117/91163
Vergés Garcia, Eduard; Tost Pardell, Daniela; Ayala Vallespí, M. Dolors; Ramos Guerrero, Emilio; Grau Carrion, Sergi
A 3D representation of the internal structure and fabric of sedimentary rocks is of paramount interest for evaluating structural parameters such as porosity, pore-size distribution and permeability. The classical experimental technique to evaluate the pore-space volume and pore-size distribution is Mercury Intrusion Porosimetry (MIP). Computer-based methods use 3D imaging technologies such as Computed Tomography (CT) scans to construct and evaluate a 3D virtual representation of the internal pore distribution. In this work, based on a set of three sandstone samples, we apply two numerical (computer-based) methods to reconstruct and analyse the internal pore network, and compare the results with those obtained by MIP analysis. The first numerical method performs a virtual simulation of MIP. The second obtains a graph of pores using a sphere-filling approach. For all methods, we compute the global porosity and the pore-size distribution. Moreover, with the numerical methods, we obtain the total porosity and a graph representing the pore space that can be visualized with 3D illustration techniques.
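Global porosity in a segmented CT volume is simply the fraction of pore voxels. A minimal sketch on a toy binary volume (1 = pore, 0 = solid), independent of the specific methods compared in the report:

```python
def porosity(volume):
    """volume: nested lists forming a 3D binary grid, 1 marking pore voxels.
    Returns pore voxels divided by total voxels."""
    flat = [v for plane in volume for row in plane for v in row]
    return sum(flat) / len(flat)

# A 2x2x2 sample with two pore voxels out of eight.
sample = [[[1, 0], [0, 0]], [[0, 1], [0, 0]]]
```

The harder quantities discussed in the report, pore-size distribution and the pore graph, additionally require grouping pore voxels into pores, which is what the two numerical methods do differently.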
2016-10-27T11:48:27Z

Symmetry breaking in tournaments
http://hdl.handle.net/2117/91158
Lozano Bojados, Antoni
We provide upper bounds for the determining number and the metric dimension of tournaments. A set of vertices S is a determining set for a tournament T if every nontrivial automorphism of T moves at least one vertex of S, while S is a resolving set for T if every two distinct vertices in T have different distances to some vertex in S. We show that the minimum size of a determining set for an order n tournament (its determining number) is bounded by n/3, while the minimum size of a resolving set for an order n strong tournament (its metric dimension) is bounded by n/2. Both bounds are optimal.
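The resolving-set condition can be checked directly from directed distances. A brute-force sketch (helper names hypothetical) on the 3-cycle tournament, where a single vertex already resolves the digraph:

```python
from collections import deque

def distances_to(adj, target):
    """Shortest directed path length from every vertex to `target`
    in a digraph given as {vertex: set of out-neighbours}."""
    dist = {target: 0}
    frontier = deque([target])
    while frontier:
        v = frontier.popleft()
        # BFS on reversed edges: visit the predecessors of v.
        for u in adj:
            if v in adj[u] and u not in dist:
                dist[u] = dist[v] + 1
                frontier.append(u)
    return dist

def is_resolving(adj, S):
    """S resolves the tournament if distinct vertices get distinct
    vectors of distances to the vertices of S."""
    seen = set()
    for v in adj:
        vec = tuple(distances_to(adj, s).get(v) for s in S)
        if vec in seen:
            return False
        seen.add(vec)
    return True

# Cyclic tournament 0 -> 1 -> 2 -> 0: the single vertex {0} resolves it,
# since the distances to 0 are 0, 2 and 1 respectively.
cycle = {0: {1}, 1: {2}, 2: {0}}
```

The determining-set condition is checked analogously but over automorphisms rather than distance vectors; the report's contribution is the n/3 and n/2 bounds, not this check.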
2016-10-27T11:29:19Z

SALMon: A SOA system for monitoring service level agreements
http://hdl.handle.net/2117/91157
Oriol Hilari, Marc; Franch Gutiérrez, Javier; Marco Gómez, Jordi
In this paper we present SALMon, a tool for assessing the satisfaction of service level agreement (SLA) clauses by service-oriented systems. SALMon is itself organized as a service-oriented system that offers two kinds of services: 1) the Monitor service, which measures at execution time the values of dynamic quality attributes (such as response time or availability), and 2) the Analyzer service, which detects and reports violations of SLA clauses based on the values obtained by the Monitor. The SALMon tool is highly versatile, allowing: 1) both active testing and passive monitoring as strategies, 2) different types of technologies for the monitored/tested systems (e.g., Web services, RESTful services), and 3) agile definition of measurement instruments for new quality attributes. The service-oriented nature of SALMon makes it scalable and easy to integrate with other services that need its functionalities.
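SALMon's actual interfaces are not reproduced here; as a generic, hypothetical sketch of the Monitor/Analyzer split, a monitor can time a service invocation while an analyzer flags the measurements that violate an SLA clause:

```python
import time

def monitor(service, *args):
    """Invoke a service operation and record its response time in seconds."""
    start = time.perf_counter()
    result = service(*args)
    return result, time.perf_counter() - start

def analyze(measurements, max_response_time):
    """Return the indices of the measurements violating the SLA clause."""
    return [i for i, rt in enumerate(measurements) if rt > max_response_time]

# Hypothetical usage with a stub service and a 1-second response-time clause.
_, rt = monitor(lambda x: x * 2, 21)
violations = analyze([0.1, 3.5, 0.2], max_response_time=1.0)
```

Separating measurement from analysis, as above, is what lets the real tool swap in new quality attributes and SLA clauses independently.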
2016-10-27T11:19:11Z

Fuzzy inputs and missing data in similarity-based heterogeneous neural networks
http://hdl.handle.net/2117/91156
Belanche Muñoz, Luis Antonio; Valdés Ramos, Julio José
Fuzzy heterogeneous networks are recently introduced feed-forward
neural network models composed of neurons of a general class whose
inputs and weights are mixtures of continuous variables (crisp and/or
fuzzy) with discrete quantities, also admitting missing data. These
networks have net input functions based on similarity relations
between the inputs to and the weights of a neuron. They thus accept
heterogeneous (possibly missing) inputs, and can be coupled with
classical neurons in hybrid network architectures, trained by means of
genetic algorithms or other evolutionary methods.
This report compares the effectiveness of the fuzzy heterogeneous
model based on similarity with that of the classical feed-forward one,
in the context of an investigation in the field of environmental
sciences, namely, the geochemical study of natural waters in the
Arctic (Spitzbergen). Classification accuracy, the effect of working
with crisp or fuzzy inputs, the use of traditional scalar-product vs.
similarity-based functions, and the presence of missing data are
studied.
The results obtained show that, from these standpoints, fuzzy
heterogeneous networks based on similarity perform better than classical
feed-forward models. This behaviour is consistent with previous
results in other application domains.
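The report defines the neuron model formally; as a loose illustration of a similarity-based net input over heterogeneous, possibly missing features, a Gower-style measure (all specifics below are assumptions, not the report's definition) can be sketched as:

```python
def similarity(x, w, ranges):
    """Mean per-feature similarity between input x and weight vector w.
    Numeric features: 1 - |x_i - w_i| / range_i; categorical features
    (range is None): exact match; None marks a missing value and the
    feature is skipped."""
    scores = []
    for xi, wi, r in zip(x, w, ranges):
        if xi is None or wi is None:
            continue  # missing data: ignore this feature
        if r is None:  # categorical feature
            scores.append(1.0 if xi == wi else 0.0)
        else:          # numeric feature with known range
            scores.append(1.0 - abs(xi - wi) / r)
    return sum(scores) / len(scores) if scores else 0.0

# One numeric feature, one categorical feature, one missing value.
x = [0.5, "river", None]
w = [0.75, "river", 7.0]
ranges = [1.0, None, 10.0]
```

A neuron using such a net input can compare crisp, categorical and missing inputs directly against its weights, which is what a scalar product cannot do.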
2016-10-27T10:37:51Z

A fully syntactic AC-RPO
http://hdl.handle.net/2117/91121
Rubio Gimeno, Alberto
We present the first fully syntactic (i.e., non-interpretation-based)
AC-compatible recursive path ordering (RPO). It is very simple, and
hence easy to implement, and its behaviour is intuitive as in the
standard RPO. The ordering is AC-total, and defined uniformly for
both ground and non-ground terms, as well as for partial precedences.
More importantly, it is the first one that can deal incrementally with
partial precedences, an aspect that is essential, together with its
intuitive behaviour, for interactive applications like Knuth-Bendix
completion.
2016-10-26T14:54:29Z