Reports de recerca
http://hdl.handle.net/2117/3973
Thu, 11 Feb 2016 21:55:03 GMT
http://hdl.handle.net/2117/82838
Rewriting in categories of spans
Monserrat, Miquel; Rosselló, Francesc; Torrens, Joan; Valiente Feruglio, Gabriel Alejandro
This paper studies single-pushout transformation in a category of spans, which
in some sense generalizes the usual notion of partial morphism. In contrast
with the usual notion of partial morphism, spans are single objects rather
than equivalence classes. A necessary condition for the existence of the
pushout of two spans is established; it involves properties of the base
category from which the category of spans is derived, as well as properties of
the spans themselves. Several interesting categories of partial morphisms of
hypergraphs are proved to satisfy this necessary condition.
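For orientation, the underlying notion can be recalled as follows (a standard textbook definition, not quoted from the report itself):

```latex
% A span from A to B over a base category is a pair of morphisms with a
% common source D:
\[
  A \xleftarrow{\;d\;} D \xrightarrow{\;f\;} B
\]
% When d is a monomorphism, (d, f) reads as a partial morphism from A
% to B: d carves the domain of definition D out of A, and f maps that
% domain into B.  The usual notion of partial morphism identifies spans
% that differ by an isomorphism of D; here each span is kept as a
% single object rather than an equivalence class.
```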
Thu, 11 Feb 2016 11:24:40 GMT
http://hdl.handle.net/2117/82835
Algebraic transformation of unary partial algebras I: double-pushout approach
Burmeister, P.; Rosselló, Francesc; Torrens, Joan; Valiente Feruglio, Gabriel Alejandro
The transformation of total graph structures has been studied from the
algebraic point of view over more than two decades now, and it has
motivated the development of the so-called double-pushout and
single-pushout approaches to graph transformation. In this
article we extend the double-pushout approach to the algebraic
transformation of partial many-sorted unary algebras.
Such a generalization has been motivated by the need to model the
transformation of structures which are richer and more complex than
acyclic graphs and hypergraphs. The main result presented in this
article is an algebraic characterization of the double-pushout
transformation in the categories of all homomorphisms and all closed
homomorphisms of unary partial algebras over a given signature,
together with a corresponding operational characterization which may
serve as a basis for implementation.
Moreover, both categories are shown to satisfy the strongest of the
HLR (High Level Replacement) conditions with respect to closed
monomorphisms. HLR conditions are
fundamental to rewriting because they guarantee the satisfaction of many
rewriting theorems concerning confluence, parallelism and
concurrency.
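As a rough illustration of the double-pushout scheme, here is a sketch over plain directed graphs (a simplification of the partial-algebra setting treated in the article; the graph encoding, function name, and the `("new", v)` renaming convention are our own): a rule L ⊇ K ⊆ R is applied to a graph G by deleting the matched image of L minus K and gluing in a copy of R minus K.

```python
def apply_dpo(G, L, K, R, match):
    """Apply a DPO rule (L >= K <= R) to graph G at an injective match.

    Graphs are pairs (nodes, edges) with nodes a set and edges a set of
    (src, tgt) pairs; K is assumed to be a subgraph of both L and R, and
    `match` is assumed to map L-nodes injectively to G-nodes, preserving
    edges.
    """
    (Gn, Ge), (Ln, Le), (Kn, Ke), (Rn, Re) = G, L, K, R
    # Items to delete: images of the parts of L that are not in K.
    del_nodes = {match[v] for v in Ln - Kn}
    del_edges = {(match[s], match[t]) for (s, t) in Le - Ke}
    # Dangling condition: no surviving edge may touch a deleted node;
    # otherwise the pushout complement (the context D) does not exist.
    if any(s in del_nodes or t in del_nodes for (s, t) in Ge - del_edges):
        raise ValueError("dangling condition violated: no pushout complement")
    Dn = Gn - del_nodes
    De = Ge - del_edges
    # Glue in R: K-nodes reuse their G-images, new R-nodes get fresh names.
    embed = {v: match[v] for v in Kn}
    embed.update({v: ("new", v) for v in Rn - Kn})
    Hn = Dn | {embed[v] for v in Rn}
    He = De | {(embed[s], embed[t]) for (s, t) in Re}
    return (Hn, He)
```

The two pushouts of the DPO diagram correspond to the two phases above: deletion (constructing the context D) and gluing (constructing the result H).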
Thu, 11 Feb 2016 11:17:38 GMT
http://hdl.handle.net/2117/82830
Logic as general rationality: a survey
Sales Porta, Ton
Logic and probability, which share historical origins,
are nowadays asked to solve new problems such as reasoning under
uncertainty, with incomplete information, or with imprecisely
formulated statements. This paper surveys how both fields have striven
to solve these problems, and how a common formalism, already suggested by
Kolmogorov and Popper, may be what both disciplines lack in order to become
a general theory of rationality.
Thu, 11 Feb 2016 10:59:05 GMT
http://hdl.handle.net/2117/82614
Limited logical belief analysis
Moreno Ribas, Antonio
The process of rational inquiry can be defined as the
evolution of the beliefs of a rational agent as a consequence
of its internal inference procedures and its interaction with the
environment. These beliefs can be modelled in a formal way
using doxastic logics.
The possible worlds model and its associated Kripke semantics
provide an intuitive semantics for these logics, but they seem to commit us to
model agents that are logically omniscient and
perfect reasoners. These problems can be avoided with a syntactic
view of possible worlds, defining them as arbitrary sets of sentences
in a propositional belief logic.
In this article this syntactic view of possible worlds is taken, and
a dynamic analysis of the agent's beliefs
is suggested in order to model the process of
rational inquiry in which the agent is permanently
engaged. One component of this analysis, the logical one, is
summarily described. This dimension of analysis is performed using a
modified version of the analytic tableaux method, and it models the
evolution of the beliefs due to the agent's inference power. It
is shown how non-perfect reasoning is achieved in two ways: on the one
hand, the agent's deductive abilities
can be controlled by restricting the tautologies that it is
allowed to use in the course of this logical
analysis; on the other hand, the agent is not obliged to perform an
exhaustive analysis of the initial tableau.
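A step-bounded variant of the analytic tableaux method can be sketched as follows (a toy propositional version of our own devising, not the modified method of the article): the budget caps how many expansion steps the agent performs, so the analysis need not be exhaustive, and exhausting the budget leaves the agent's belief status undecided.

```python
def tableau(formulas, budget=100):
    """Step-bounded analytic tableau for propositional formulas.

    An atom is a string; compound formulas are ('not', f), ('and', f, g)
    or ('or', f, g).  Returns 'closed' (unsatisfiable), 'open'
    (satisfiable) or 'unknown' (step budget exhausted).
    """
    def closed(branch):
        return any(('not', f) in branch for f in branch)

    def expandable(f):
        return isinstance(f, tuple) and (f[0] != 'not' or isinstance(f[1], tuple))

    branches = [frozenset(formulas)]
    while branches:
        branch = branches.pop()
        if closed(branch):
            continue                       # contradictory branch: discard it
        f = next((g for g in branch if expandable(g)), None)
        if f is None:
            return 'open'                  # fully expanded open branch found
        if budget == 0:
            return 'unknown'               # the agent stops analysing here
        budget -= 1
        rest = branch - {f}
        if f[0] == 'and':
            branches.append(rest | {f[1], f[2]})
        elif f[0] == 'or':                 # branching rule
            branches.append(rest | {f[1]})
            branches.append(rest | {f[2]})
        elif f[1][0] == 'not':             # double negation
            branches.append(rest | {f[1][1]})
        elif f[1][0] == 'and':             # de Morgan: negated conjunction
            branches.append(rest | {('or', ('not', f[1][1]), ('not', f[1][2]))})
        else:                              # de Morgan: negated disjunction
            branches.append(rest | {('and', ('not', f[1][1]), ('not', f[1][2]))})
    return 'closed'
```

Restricting the budget is one of the two control mechanisms mentioned above; restricting the allowed tautologies would be implemented by filtering which expansion rules may fire.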
Fri, 05 Feb 2016 11:52:44 GMT
http://hdl.handle.net/2117/82613
Using bidirectional chart parsing for corpus analysis
Ageno Pulido, Alicia; Rodríguez Hontoria, Horacio
Several experiments have been carried out with a bidirectional island-driven
chart parser. The system basically follows the approach of Stock, Satta and
Corazza, and the experiments have been designed and performed with the purpose
of examining several avenues of improvement: the basic strategy of the
algorithm (pure island-driven versus mixed island-driven/bottom-up
approaches), strategies for extending the islands, strategies for selecting
the initial islands, ways of scoring the possible extensions, etc. Both the
system and the results obtained to date are presented in this paper.
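The flavour of such a parser can be conveyed with a much-simplified agenda-driven chart parser over a binary grammar (our own toy sketch, not the Stock-Satta-Corazza algorithm): the `score` hook that orders the agenda is where island-selection and extension-scoring strategies would plug in.

```python
from collections import deque

def chart_parse(tokens, lexicon, rules, score=lambda edge: 0):
    """Toy agenda-driven chart parser for a binary grammar.

    `lexicon` maps words to categories, `rules` maps (B, C) to A for
    productions A -> B C, and `score` orders the agenda (lowest first).
    """
    n = len(tokens)
    chart = set()
    # Seed the agenda with lexical edges (i, i+1, category); in an
    # island-driven parser only high-confidence islands would be seeded.
    agenda = sorted(((i, i + 1, lexicon[w]) for i, w in enumerate(tokens)),
                    key=score)
    agenda = deque(agenda)
    while agenda:
        edge = agenda.popleft()
        if edge in chart:
            continue
        chart.add(edge)
        i, j, cat = edge
        # Extend the new edge both rightward and leftward: this is the
        # bidirectional step, combining with adjacent complete edges.
        for (k, l, other) in list(chart):
            if k == j and (cat, other) in rules:
                agenda.append((i, l, rules[(cat, other)]))
            if l == i and (other, cat) in rules:
                agenda.append((k, j, rules[(other, cat)]))
        # Re-sorting by score gives a crude best-first strategy.
        agenda = deque(sorted(agenda, key=score))
    return (0, n, 'S') in chart
```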
Fri, 05 Feb 2016 11:45:16 GMT
http://hdl.handle.net/2117/82611
POS tagging using relaxation techniques
Padró, Lluís
Relaxation labelling is an optimization technique used in many fields
to solve constraint satisfaction problems. The algorithm finds a
combination of values for a set of variables that satisfies, to the
maximum possible degree, a set of given constraints. This paper
describes some experiments in applying it to POS tagging, together with
the results obtained. It also considers the possibility of applying
it to word sense disambiguation.
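A minimal version of the relaxation-labelling update can be sketched as follows (the update rule is the classic one from the relaxation literature; the toy tags, weights, and compatibility values in the usage below are invented for illustration):

```python
def relax(probs, compat, iterations=10):
    """Relaxation labelling over a sequence of variables (words).

    probs[i] maps each candidate label of variable i to its current
    weight; compat[(a, b)] scores how compatible label a on word i is
    with label b on word i+1 (absent pairs count as 0).  Each step
    raises the weight of labels with positive support from the
    neighbours' current label distributions, then renormalizes.
    """
    for _ in range(iterations):
        new = []
        for i, dist in enumerate(probs):
            support = {}
            for label in dist:
                s = 0.0
                if i > 0:                  # support from the left neighbour
                    s += sum(p * compat.get((other, label), 0.0)
                             for other, p in probs[i - 1].items())
                if i + 1 < len(probs):     # support from the right neighbour
                    s += sum(p * compat.get((label, other), 0.0)
                             for other, p in probs[i + 1].items())
                support[label] = s
            raw = {lab: dist[lab] * max(0.0, 1.0 + support[lab])
                   for lab in dist}
            total = sum(raw.values())
            if total == 0:                 # degenerate case: keep old weights
                new.append(dist)
            else:
                new.append({lab: v / total for lab, v in raw.items()})
        probs = new
    return probs
```

For example, an ambiguous word like "can" (noun or modal) after a determiner drifts towards the noun reading when (DT, NN) is rewarded and (DT, MD) is penalized.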
Fri, 05 Feb 2016 11:36:56 GMT
http://hdl.handle.net/2117/82609
Towards learning a constraint grammar from annotated corpora using decision trees
Màrquez Villodre, Lluís; Rodríguez Hontoria, Horacio
Within the framework of robust parsers for the syntactic analysis of
unrestricted text, the aim of this work is the construction of a system
capable of automatically learning Constraint Grammar rules from a
POS-annotated corpus. The system presented is currently able to acquire
constraint rules for POS tagging, and we plan to extend it to cover syntactic
rules. The learning process uses a supervised learning algorithm based on
building a discrimination forest, with a decision tree attached to each
case of POS ambiguity. The system has been applied to four representative
cases of ambiguity in a Spanish corpus. The results obtained in these
experiments, together with some discussion of the appropriateness of the
proposed learning technique, are presented in this paper.
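The idea of attaching a decision tree to each ambiguity class can be illustrated with a one-level tree (a decision stump) chosen by information gain; this is a drastic simplification of a discrimination forest, and the features and training data in the usage below are invented:

```python
from collections import Counter
from math import log2

def entropy(tags):
    n = len(tags)
    return -sum(c / n * log2(c / n) for c in Counter(tags).values())

def learn_stump(examples):
    """Learn a one-level decision tree for one POS-ambiguity class.

    Each example is (features, tag), with `features` a tuple of context
    attributes (e.g. previous tag, next tag).  The stump splits on the
    single feature with the highest information gain and predicts the
    majority tag at each leaf; unseen values fall back to the overall
    majority tag.
    """
    tags = [t for _, t in examples]
    best = None
    for i in range(len(examples[0][0])):
        values = {}
        for feats, tag in examples:
            values.setdefault(feats[i], []).append(tag)
        # Expected entropy after splitting on feature i.
        rest = sum(len(ts) / len(tags) * entropy(ts) for ts in values.values())
        gain = entropy(tags) - rest
        if best is None or gain > best[0]:
            leaves = {v: Counter(ts).most_common(1)[0][0]
                      for v, ts in values.items()}
            best = (gain, i, leaves)
    default = Counter(tags).most_common(1)[0][0]
    _, i, leaves = best
    return lambda feats: leaves.get(feats[i], default)
```

A full tree (and a forest of them, one per ambiguity class) would recurse on the remaining features instead of stopping after one split.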
Fri, 05 Feb 2016 11:23:26 GMT
http://hdl.handle.net/2117/82607
Word sense disambiguation using conceptual density
Agirre, Eneko; Rigau Claramunt, German
This paper presents a method for the resolution of lexical ambiguity and its
automatic evaluation over the Brown Corpus. The method relies on the use of
the wide-coverage noun taxonomy of WordNet and the notion of conceptual
distance among concepts, captured by a Conceptual Density formula developed
for this purpose. This fully automatic method requires no hand-coding of
lexical entries, no hand-tagging of text, nor any kind of training process. The
results of the experiment have been automatically evaluated against SemCor,
the sense-tagged version of the Brown Corpus.
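A much-simplified stand-in for the Conceptual Density idea can be sketched as follows (the paper's actual formula involves the expected number of hyponyms per node in the WordNet hierarchy; here we just score each candidate sense by the ratio of context senses captured to subtree size, and the toy taxonomy below is invented):

```python
def subtree(taxonomy, root):
    """All concepts at or below `root` in a child -> parent taxonomy."""
    nodes = {root}
    changed = True
    while changed:
        changed = False
        for child, parent in taxonomy.items():
            if parent in nodes and child not in nodes:
                nodes.add(child)
                changed = True
    return nodes

def disambiguate(taxonomy, senses, context_senses):
    """Pick the candidate sense whose hypernym's subtree captures the
    most context senses per concept -- a crude density measure standing
    in for the paper's Conceptual Density formula.
    """
    context = set(context_senses)

    def density(sense):
        sub = subtree(taxonomy, taxonomy.get(sense, sense))
        return len(sub & context) / len(sub)

    return max(senses, key=density)
```

With fish-related nouns in the context, the fish sense of an ambiguous word like "bass" wins because its hypernym's subtree is densely populated by context senses.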
Fri, 05 Feb 2016 10:58:03 GMT
http://hdl.handle.net/2117/82606
A Proposal for word sense disambiguation using conceptual distance
Agirre, Eneko; Rigau Claramunt, German
This paper presents a method for the resolution of lexical ambiguity and its
automatic evaluation over the Brown Corpus. The method relies on the use of
the wide-coverage noun taxonomy of WordNet and the notion of conceptual
distance among concepts, captured by a Conceptual Density formula developed
for this purpose. This fully automatic method requires no hand-coding of
lexical entries, no hand-tagging of text, nor any kind of training process. The
results of the experiment have been automatically evaluated against SemCor,
the sense-tagged version of the Brown Corpus.
Fri, 05 Feb 2016 10:47:39 GMT
http://hdl.handle.net/2117/82554
Compressibility of infinite binary sequences
Balcázar Navarro, José Luis; Gavaldà Mestre, Ricard; Hermo Huguet, Montserrat
It is known that infinite binary sequences of constant
Kolmogorov complexity are exactly the recursive ones.
Such a statement no longer holds in the presence of resource bounds.
Contrary to what intuition might suggest, there are sequences of
constant, polynomial-time bounded Kolmogorov complexity that are
not polynomial-time computable. This motivates the study of
several resource-bounded variants in search of a characterization,
similar in spirit, of the polynomial-time computable sequences.
We propose some definitions, based on Kobayashi's notion of
compressibility, and compare them to both the standard resource-bounded
Kolmogorov complexity of infinite strings, and the uniform complexity.
Some nontrivial coincidences and disagreements are proved.
The resource-unbounded case is also considered.
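The classical fact alluded to in the opening sentence is usually stated as follows (standard notation, with $C(\cdot\mid\cdot)$ plain conditional Kolmogorov complexity and $\alpha{\upharpoonright}n$ the length-$n$ prefix of $\alpha$):

```latex
\[
  \alpha \ \text{is recursive}
  \quad\Longleftrightarrow\quad
  \exists c \; \forall n \quad C(\alpha{\upharpoonright}n \mid n) \le c .
\]
```

The abstract's point is that the analogous equivalence fails once the complexity is time-bounded: constant polynomial-time bounded complexity does not imply polynomial-time computability.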
Thu, 04 Feb 2016 14:10:32 GMT