Reports de recerca
http://hdl.handle.net/2117/3973
Limited logical belief analysis
http://hdl.handle.net/2117/82614
Limited logical belief analysis
Moreno Ribas, Antonio
The process of rational inquiry can be defined as the
evolution of the beliefs of a rational agent as a consequence
of its internal inference procedures and its interaction with the
environment. These beliefs can be formally modelled
using doxastic logics.
The possible-worlds model and its associated Kripke semantics
provide an intuitive semantics for these logics, but they seem to commit us to
modelling agents that are logically omniscient and
perfect reasoners. These problems can be avoided with a syntactic
view of possible worlds, defining them as arbitrary sets of sentences
in a propositional belief logic.
In this article this syntactic view of possible worlds is adopted, and
a dynamic analysis of the agent's beliefs
is proposed in order to model the process of
rational inquiry in which the agent is permanently
engaged. One component of this analysis, the logical one, is
briefly described. This dimension of the analysis is performed using a
modified version of the analytic tableaux method, and it models the
evolution of the beliefs due to the agent's inference power. It
is shown how non-perfect reasoning is achieved in two ways: on the one
hand, the agent's deductive abilities
can be controlled by restricting the tautologies that it is
allowed to use in the course of the logical
analysis; on the other, the agent is not obliged to perform an
exhaustive analysis of the initial tableau.
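The two resource bounds described above can be illustrated with a toy sketch. The code below is not the paper's method: it is a minimal, depth-limited analytic tableau prover for propositional formulas (encoded as nested tuples), where capping the number of expansion steps models a non-exhaustive reasoner that may fail to close a branch it lacks the resources to analyse.

```python
# Minimal sketch (invented for illustration) of depth-limited analytic
# tableaux. A branch closes when it contains both p and ('not', p);
# with steps == 0 the bounded agent gives up and leaves the branch open.

def expand(branch, steps):
    """Return True if every branch closes within `steps` expansions."""
    literals = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch
               if not isinstance(f, str) and f[0] == 'not'
               and isinstance(f[1], str)}
    if literals & negated:
        return True                 # contradiction found: branch closes
    if steps == 0:
        return False                # resource bound hit: give up
    for f in branch:
        if isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str)):
            continue                # literal, nothing to expand
        rest = [g for g in branch if g != f]
        if f[0] == 'and':           # alpha rule: both conjuncts on branch
            return expand(rest + [f[1], f[2]], steps - 1)
        if f[0] == 'or':            # beta rule: both new branches must close
            return (expand(rest + [f[1]], steps - 1) and
                    expand(rest + [f[2]], steps - 1))
        if f[0] == 'not' and f[1][0] == 'not':   # double negation
            return expand(rest + [f[1][1]], steps - 1)
        if f[0] == 'not' and f[1][0] == 'and':   # de Morgan
            return expand(rest + [('or', ('not', f[1][1]),
                                         ('not', f[1][2]))], steps - 1)
        if f[0] == 'not' and f[1][0] == 'or':
            return expand(rest + [('and', ('not', f[1][1]),
                                          ('not', f[1][2]))], steps - 1)
    return False

# p & ~p is inconsistent, but the bounded agent only detects it
# when allowed enough expansion steps:
contradiction = [('and', 'p', ('not', 'p'))]
print(expand(contradiction, 0))  # False: bound too tight
print(expand(contradiction, 2))  # True: tableau closes
```

Restricting the usable tautologies would correspond to removing expansion rules from the `if` chain; restricting `steps` corresponds to the non-exhaustive analysis of the initial tableau.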
Using bidirectional chart parsing for corpus analysis
http://hdl.handle.net/2117/82613
Using bidirectional chart parsing for corpus analysis
Ageno Pulido, Alicia; Rodríguez Hontoria, Horacio
Several experiments have been carried out with a bidirectional island-driven
chart parser. The system basically follows the approach of Stock, Satta and
Corazza, and the experiments have been designed and performed with the purpose
of examining several avenues of improvement: the basic strategy of the
algorithm (pure island-driven versus mixed island-driven/bottom-up
approaches), strategies for extending the islands, strategies for selecting
the initial islands, ways of scoring the possible extensions, etc. Both the
system and the results obtained to date are presented in this paper.
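The chart data structure at the core of such a parser can be sketched briefly. The toy recognizer below is not island-driven: it simply closes a chart of edges `(start, end, category)` under binary grammar rules, bottom-up. The paper's bidirectional strategy would instead seed the chart at high-confidence words ("islands") and extend edges leftwards and rightwards from them; the grammar and lexicon here are invented.

```python
# Minimal bottom-up chart recognizer (illustration only, not the
# island-driven algorithm of the paper). Edges are (start, end, cat).

def chart_parse(words, lexicon, rules):
    chart = set()
    for i, w in enumerate(words):          # lexical edges, one per sense
        for cat in lexicon[w]:
            chart.add((i, i + 1, cat))
    added = True
    while added:                           # close chart under binary rules
        added = False
        for (i, k, b) in list(chart):
            for (k2, j, c) in list(chart):
                if k != k2:
                    continue               # edges must be adjacent
                for lhs, rhs in rules:
                    if rhs == (b, c) and (i, j, lhs) not in chart:
                        chart.add((i, j, lhs))
                        added = True
    return chart

lexicon = {'the': ['Det'], 'cat': ['N'], 'sleeps': ['V']}
rules = [('NP', ('Det', 'N')), ('S', ('NP', 'V'))]
chart = chart_parse(['the', 'cat', 'sleeps'], lexicon, rules)
print((0, 3, 'S') in chart)  # True: the sentence is recognized
```

A spanning `S` edge over the whole input signals a successful parse; scoring and ordering the candidate extensions, as the experiments above explore, would replace the blind closure loop with an agenda.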
POS tagging using relaxation techniques
http://hdl.handle.net/2117/82611
POS tagging using relaxation techniques
Padró, Lluís
Relaxation labelling is an optimization technique used in many fields
to solve constraint satisfaction problems. The algorithm finds a
combination of values for a set of variables that satisfies -
to the maximum possible degree - a given set of constraints. This
paper describes some experiments in applying it to POS tagging,
and the results obtained. It also considers the possibility of applying
it to word sense disambiguation.
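The iterative scheme can be sketched on a toy tagging problem. In the sketch below (the constraints, weights and tag names are invented, not the paper's), each word keeps a probability distribution over its candidate tags; at every iteration the support each tag receives from compatible tags on neighbouring words raises or lowers its weight, and the weights are renormalized.

```python
# Minimal relaxation-labelling sketch for POS tagging (toy constraints).

def relax(word_tags, compat, iters=20):
    """word_tags: one {tag: prob} dict per word; compat[(t1,t2)] in [-1,1]."""
    probs = [dict(d) for d in word_tags]
    for _ in range(iters):
        new = []
        for i, dist in enumerate(probs):
            updated = {}
            for tag, p in dist.items():
                s = 0.0                       # support from the neighbours
                for j in (i - 1, i + 1):
                    if 0 <= j < len(probs):
                        for t2, p2 in probs[j].items():
                            s += compat.get((tag, t2), 0.0) * p2
                updated[tag] = p * (1.0 + s)  # reward supported tags
            z = sum(updated.values())
            new.append({t: v / z for t, v in updated.items()})
        probs = new
    return probs

# "the can": 'can' is Noun/Modal ambiguous; a preceding Det supports Noun.
compat = {('N', 'Det'): 1.0, ('MD', 'Det'): -0.5,
          ('Det', 'N'): 1.0, ('Det', 'MD'): -0.5}
words = [{'Det': 1.0}, {'N': 0.5, 'MD': 0.5}]
result = relax(words, compat)
print(max(result[1], key=result[1].get))  # 'N' wins
```

In a real tagger the compatibility values would encode the constraints (e.g. bigram statistics or hand-written rules), and the same update scheme carries over directly to sense labels, which is what makes the extension to word sense disambiguation natural.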
Towards learning a constraint grammar from annotated corpora using decision trees
http://hdl.handle.net/2117/82609
Towards learning a constraint grammar from annotated corpora using decision trees
Màrquez Villodre, Lluís; Rodríguez Hontoria, Horacio
Within the framework of robust parsers for the syntactic analysis of
unrestricted text, the aim of this work is the construction of a system
capable of automatically learning Constraint Grammar rules from a
POS-annotated corpus. The system presented is currently able to acquire
constraint rules for POS tagging, and we plan to extend it to cover
syntactic rules.
The learning process uses a supervised learning algorithm based on
building a discrimination forest, with a decision tree attached to each
case of POS ambiguity. The system has been applied to four representative
cases of ambiguity in a Spanish corpus. The results obtained
in these experiments and some discussion of the appropriateness of the
proposed learning technique are presented in this paper.
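The core of such a learner can be sketched as follows. The toy example below (the corpus examples and feature names are invented) resolves one Noun/Verb ambiguity class from context tags, and shows only the root split of an ID3-style decision tree, chosen by information gain over a handful of supervised examples.

```python
# Minimal information-gain split for one POS-ambiguity class
# (the root node of an ID3-style decision tree; toy data).
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_split(examples):
    """examples: list of (features_dict, label). Return best feature."""
    labels = [y for _, y in examples]
    base = entropy(labels)
    best, best_gain = None, -1.0
    for f in examples[0][0]:
        remainder = 0.0                     # expected entropy after split
        for v in {x[f] for x, _ in examples}:
            sub = [y for x, y in examples if x[f] == v]
            remainder += len(sub) / len(examples) * entropy(sub)
        gain = base - remainder
        if gain > best_gain:
            best, best_gain = f, gain
    return best

# Toy corpus: the previous tag decides; the next tag is noise here.
data = [({'prev': 'Det', 'next': 'V'}, 'N'),
        ({'prev': 'Det', 'next': 'N'}, 'N'),
        ({'prev': 'PRP', 'next': 'N'}, 'V'),
        ({'prev': 'PRP', 'next': 'V'}, 'V')]
print(best_split(data))  # 'prev': the most informative context feature
```

Recursing on each value of the chosen feature yields the full tree for that ambiguity class; one such tree per class gives the discrimination forest the abstract describes.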
Word sense disambiguation using conceptual density
http://hdl.handle.net/2117/82607
Word sense disambiguation using conceptual density
Agirre, Eneko; Rigau Claramunt, German
This paper presents a method for the resolution of lexical ambiguity and its
automatic evaluation over the Brown Corpus. The method relies on the use of
the wide-coverage noun taxonomy of WordNet and the notion of conceptual
distance among concepts, captured by a Conceptual Density formula developed
for this purpose. This fully automatic method requires no hand coding of
lexical entries, no hand tagging of text, and no training process of any
kind. The results of the experiment have been automatically evaluated
against SemCor, the sense-tagged version of the Brown Corpus.
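The intuition behind the approach can be sketched with a simplified stand-in for the Conceptual Density formula (not the paper's exact formula, and over an invented toy taxonomy rather than WordNet): for each candidate sense of an ambiguous noun, count how many context senses fall under its subhierarchy, normalize by the subhierarchy's size, and pick the densest sense.

```python
# Simplified conceptual-density sketch over an invented toy taxonomy.

def descendants(taxonomy, node):
    """All nodes in the subhierarchy rooted at `node` (inclusive)."""
    out = {node}
    for child in taxonomy.get(node, []):
        out |= descendants(taxonomy, child)
    return out

def densest_sense(senses, context_senses, taxonomy):
    best, best_density = None, -1.0
    for sense in senses:
        sub = descendants(taxonomy, sense)
        hits = sum(1 for c in context_senses if c in sub)
        density = hits / len(sub)     # context hits per subhierarchy node
        if density > best_density:
            best, best_density = sense, density
    return best

# 'bank' as financial institution vs. river slope (toy hierarchy):
taxonomy = {'institution': ['bank_fin', 'school'],
            'bank_fin': ['money', 'loan'],
            'geological-formation': ['bank_riv', 'shore'],
            'bank_riv': ['riverbed']}
context = ['money', 'loan']           # senses of nearby context words
print(densest_sense(['bank_fin', 'bank_riv'], context, taxonomy))
# 'bank_fin': its subhierarchy contains both context senses
```

Because the evidence comes only from the taxonomy and the surrounding text, no lexical hand coding, hand tagging or training is needed, which is the property the abstract emphasizes.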
A Proposal for word sense disambiguation using conceptual distance
http://hdl.handle.net/2117/82606
A Proposal for word sense disambiguation using conceptual distance
Agirre, Eneko; Rigau Claramunt, German
This paper presents a method for the resolution of lexical ambiguity and its
automatic evaluation over the Brown Corpus. The method relies on the use of
the wide-coverage noun taxonomy of WordNet and the notion of conceptual
distance among concepts, captured by a Conceptual Density formula developed
for this purpose. This fully automatic method requires no hand coding of
lexical entries, no hand tagging of text, and no training process of any
kind. The results of the experiment have been automatically evaluated
against SemCor, the sense-tagged version of the Brown Corpus.
Compressibility of infinite binary sequences
http://hdl.handle.net/2117/82554
Compressibility of infinite binary sequences
Balcázar Navarro, José Luis; Gavaldà Mestre, Ricard; Hermo Huguet, Montserrat
It is known that the infinite binary sequences of constant
Kolmogorov complexity are exactly the recursive ones.
Statements of this kind no longer hold in the presence of resource bounds.
Contrary to what intuition might suggest, there are sequences of
constant, polynomial-time-bounded Kolmogorov complexity that are
not polynomial-time computable. This motivates the study of
several resource-bounded variants in search of a characterization,
similar in spirit, of the polynomial-time computable sequences.
We propose some definitions, based on Kobayashi's notion of
compressibility, and compare them both to the standard resource-bounded
Kolmogorov complexity of infinite strings and to the uniform complexity.
Some nontrivial coincidences and disagreements are proved.
The resource-unbounded case is also considered.
Change of belief in SKL model frames (automatization based on analytic tableaux)
http://hdl.handle.net/2117/82550
Change of belief in SKL model frames (automatization based on analytic tableaux)
Alvarado, G.; Núñez Esquer, Gustavo
A recent major approach to dealing, formally and computationally, with knowledge and belief is the AGM theory. We show how to adequately represent this paradigm using three-valued model frames. AGM expansion, contraction, and revision operations are introduced, together with modal epistemic formulas for a priori knowledge, a posteriori knowledge, belief, and potential knowledge. The proposal is made operative, in an automated-deduction perspective that nevertheless preserves the underlying intuitions, by means of the (three-valued) analytic tableaux (AT) method. The treatment of the update theory of Katsuno and Mendelzon - the other major approach - within our framework is outlined.
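The three AGM operations can be sketched in a drastically simplified setting (real AGM operates on logically closed theories, and the paper works in three-valued model frames; here the belief base is just a set of propositional literals). Revision follows the Levi identity: K * p = (K - ¬p) + p.

```python
# Minimal sketch of AGM expansion, contraction and revision on a
# belief base of literals (illustration only, not the paper's frames).

def negate(p):
    return p[1:] if p.startswith('~') else '~' + p

def expand(base, p):            # K + p: simply add the new belief
    return base | {p}

def contract(base, p):          # K - p: give up belief in p
    return base - {p}

def revise(base, p):            # K * p: Levi identity, (K - ~p) + p
    return expand(contract(base, negate(p)), p)

beliefs = {'rains', '~wet'}
beliefs = revise(beliefs, 'wet')   # learning 'wet' forces '~wet' out
print(sorted(beliefs))  # ['rains', 'wet']
```

On closed theories, contraction must also remove everything that entails the retracted belief, which is where the deductive machinery (here, the analytic tableaux method) comes in.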
Skip trees, an alternative data structure to skip-lists in a concurrent approach
http://hdl.handle.net/2117/82542
Skip trees, an alternative data structure to skip-lists in a concurrent approach
Messeguer Peypoch, Xavier
We present a new type of search tree isomorphic to Skip-lists, i.e.,
there is a one-to-one mapping between the two structures that commutes
with the update algorithms. Moreover, as Skip-trees inherit all the nice
properties of Skip-lists, they can be chosen in the same situations as
Skip-lists. We design a concurrent, on-the-fly algorithm to update
Skip-trees. Among other advantages, this algorithm is more compact
than the one designed by Pugh for Skip-lists. It is based on the
close relationship between Skip-trees and the family of B-trees, which
allows the translation of local rules between them. From a practical
point of view, although a Skip-list should reside in main memory,
Skip-trees can be stored in secondary external storage, so it is
worth analysing their ability to manage databases.
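For reference, the Skip-list that Skip-trees are isomorphic to can be sketched compactly: a sorted linked structure in which each key is replicated on a random number of levels, so that a search can skip ahead on the upper levels before descending. The sketch below is sequential; the concurrency aspects discussed above are beyond it.

```python
# Minimal sequential Skip-list sketch (insert and membership only).
import random

class SkipList:
    MAX_LEVEL = 8

    def __init__(self):
        # forward[i] of the head points to the first node of level i
        self.head = {'key': None, 'forward': [None] * self.MAX_LEVEL}

    def _random_level(self):
        lvl = 1                      # geometric distribution, p = 1/2
        while lvl < self.MAX_LEVEL and random.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key):
        update, node = [None] * self.MAX_LEVEL, self.head
        for i in reversed(range(self.MAX_LEVEL)):   # top-down search
            while node['forward'][i] and node['forward'][i]['key'] < key:
                node = node['forward'][i]
            update[i] = node         # rightmost node before key, level i
        new = {'key': key, 'forward': [None] * self._random_level()}
        for i in range(len(new['forward'])):        # splice into each level
            new['forward'][i] = update[i]['forward'][i]
            update[i]['forward'][i] = new

    def __contains__(self, key):
        node = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while node['forward'][i] and node['forward'][i]['key'] < key:
                node = node['forward'][i]
        nxt = node['forward'][0]
        return nxt is not None and nxt['key'] == key

s = SkipList()
for k in [5, 1, 9, 3]:
    s.insert(k)
print(3 in s, 4 in s)  # True False
```

The one-to-one mapping mentioned in the abstract groups consecutive keys of equal level into tree nodes, which is what relates Skip-trees to the B-tree family and makes them suitable for secondary storage.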
(Pure) logic out of probability
http://hdl.handle.net/2117/82541
(Pure) logic out of probability
Sales Porta, Ton
Logic and Probability are seen today as independent fields, but they share a considerable common ground, which historically underlies both disciplines and has prompted Reichenbach, Carnap or Popper to consider connection-building treatments of Logic and Probability as desirable. In this spirit we delineate a logic based on an additive non-functional truth valuation which, though technically indistinguishable from (axiomatic) Probability, can however be "decontaminated" from parasitical probabilistic readings (such as "event", "probability" or "conditioning") and be given instead a logical reading (in terms of "sentence", "truth" or "relativity"). The resulting assertion-based sentential calculus becomes a very natural extension of ordinary two-valued reasoning.
The text of the paper corresponds to the author's invited contribution to the Workshop on "Aspects of mechanizing inference", held in Naples, Oct. 30 - Nov. 2, 1995.