LARCA - Laboratori d'Algorísmia Relacional, Complexitat i Aprenentatge
http://hdl.handle.net/2117/3486
2016-02-13T21:33:53Z
http://hdl.handle.net/2117/82818
Computational coverage of TLG: displacement
Morrill, Glyn; Valentín Fernández Gallart, José Oriol
This paper reports on the coverage of the TLG of Morrill (1994) and Moortgat (1997), and on how it has been computer implemented. We computer-analyse examples of displacement: discontinuous idioms, quantification, (medial) relativisation, VP ellipsis, (medial) pied piping, appositive relativisation, parentheticals, gapping, comparative subdeletion, and reflexivisation, and, in the appendix, Dutch verb raising and cross-serial dependency.
2016-02-11T09:25:44Z
http://hdl.handle.net/2117/82813
Computational coverage of TLG: nonlinearity
Morrill, Glyn; Valentín Fernández Gallart, José Oriol
We study nonlinear connectives (exponentials) in the context of Type Logical Grammar (TLG). We devise four conservative extensions of the displacement calculus with brackets, Db!, Db!?, Db!b and Db!b?r, which contain the universal and existential exponential modalities of linear logic (LL). These modalities do not exhibit the same structural properties as in LL; in TLG they are specially adapted for linguistic purposes. The universal modality ! for TLG allows only the commutative and contraction rules, but not weakening, whereas the existential modality ? allows the so-called (intuitionistic) Mingle rule, which derives a restricted version of weakening. We provide a Curry-Howard labelling for both exponential connectives. As it turns out, controlled contraction by ! gives a way to account for so-called parasitic gaps, and controlled Mingle by ? accounts for iteration, in particular iterated coordination.
Finally, the four calculi are proved to be cut-free, and decidability is proved for a linguistically sufficient special case of Db!b?r (and hence of Db!b).
2016-02-11T09:02:19Z
http://hdl.handle.net/2117/82778
Characterizing chronic disease and polymedication prescription patterns from electronic health records
Zamora, Martí; Baradad, Manel; Amado, Ester; Cordomí, Sílvia; Limón, Esther; Ribera, Juliana; Arias Vicente, Marta; Gavaldà Mestre, Ricard
Population aging in developed countries brings an increased prevalence of chronic disease and of polymedication: patients with several prescribed types of medication. Attention to chronic, polymedicated patients is a priority because of its high cost and the associated risks, and tools for analyzing, understanding, and managing this reality are becoming necessary. We describe a prototype of a system for discovering, analyzing, and visualizing the co-occurrence of diagnostics, interventions, and medication prescriptions in a large patient database. The final tool is intended to be used both by health managers and planners and by primary care clinicians in direct contact with patients (for example, for detecting unusual disease patterns and incorrect or missing medication). At the core of the analysis module is a representation of diagnostics and medications as a hypergraph, and the most crucial functionalities rely on hypergraph transversals and variants of association rule discovery methods, with particular emphasis on discovering surprising or alarming combinations. The test database comes from the primary care system in the area of Barcelona for 2013, with over 1.6 million potential patients and almost 20 million diagnostics and prescriptions.
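A minimal, hypothetical sketch of the kind of association rule discovery the abstract alludes to: plain support/confidence rules over toy records with invented codes. This is only an illustration of the general technique, not the paper's hypergraph-based method.

```python
from collections import Counter
from itertools import combinations

def pair_rules(records, min_support=2, min_confidence=0.5):
    """Rules x -> y between single items (e.g. diagnosis or drug codes):
    support = number of records containing both x and y, confidence =
    support divided by the number of records containing x."""
    item_count, pair_count = Counter(), Counter()
    for items in records:
        items = set(items)
        item_count.update(items)
        pair_count.update(combinations(sorted(items), 2))
    rules = []
    for (a, b), n in pair_count.items():
        if n < min_support:
            continue
        for x, y in ((a, b), (b, a)):   # try the rule in both directions
            conf = n / item_count[x]
            if conf >= min_confidence:
                rules.append((x, y, n, conf))
    return rules

# toy records with invented codes (I10, C10AA, E11 stand in for real ones)
records = [{"I10", "C10AA"}, {"I10", "C10AA", "E11"}, {"I10"}, {"E11"}]
for x, y, n, conf in pair_rules(records):
    print(f"{x} -> {y}  support={n}  confidence={conf:.2f}")
```

In a real deployment the interesting output would be the opposite: combinations whose observed support deviates sharply from what the single-item frequencies predict.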
2016-02-10T13:17:39Z
http://hdl.handle.net/2117/82769
Multiplicative-additive focusing for parsing as deduction
Morrill, Glyn; Valentín Fernández Gallart, José Oriol
Spurious ambiguity is the phenomenon whereby distinct derivations in a grammar may assign the same structural reading, resulting in redundancy in the parse search space and inefficiency in parsing. Understanding the problem depends on identifying the essential mathematical structure of derivations. This is trivial in the case of context-free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets. However, with respect to multiplicatives, intrinsic proof nets have not yet been given for the displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterise. Here we approach multiplicative-additive spurious ambiguity by means of the proof-theoretic technique of focalisation.
2016-02-10T12:25:04Z
http://hdl.handle.net/2117/82703
Structural ambiguity in Montague Grammar and categorial grammar
Morrill, Glyn
We give a type logical categorial grammar for the syntax and semantics of Montague's seminal fragment, which includes ambiguities of quantification and intensionality and their interactions, and we present the analyses assigned by the parser/theorem prover CatLog to the examples in the first half of Chapter 7 of the classic text Introduction to Montague Semantics of Dowty, Wall and Peters (1981).
2016-02-09T10:42:01Z
http://hdl.handle.net/2117/82062
Approximating the expressive power of logics in finite models
Arratia Quesada, Argimiro Alejandro; Ortiz, Carlos E.
We present a probability logic (essentially a first-order language extended with quantifiers that count the fraction of elements in a model satisfying a first-order formula) which, on the one hand, captures uniform circuit classes such as AC0 and TC0 over arithmetic models, namely finite structures with linear order and arithmetic relations, and whose semantics, on the other hand, can be closely approximated with respect to our arithmetic models by interpreting its formulas on finite structures where all relations (including the order) are restricted to be "modular" (i.e. to act subject to an integer modulus). In order to give a precise measure of the proximity between satisfaction of a formula in an arithmetic model and satisfaction of the same formula in the "approximate" model, we define approximate formulas and develop a notion of approximate truth. We also indicate how to enhance the expressive power of our probability logic in order to capture polynomial-time decidable queries.
There are various motivations for this work. As of today, there is no known logical description of any computational complexity class below NP that does not require a built-in linear order. Also, it is widely recognized that many model-theoretic techniques for showing definability in logics on finite structures become almost useless when order is present. Hence, if we want to obtain significant lower bound results in computational complexity via logical descriptions, we ought to find ways of bypassing the ordering restriction. With this work we take steps towards understanding how well we can approximate, without a true order, the expressive power of logics that capture complexity classes on ordered structures.
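The fraction-counting quantifier at the heart of the abstract can be illustrated with a small sketch; the model, the "modular" formula, and the threshold below are invented for illustration only.

```python
def frac_quantifier(universe, phi, r):
    """The counting quantifier holds in a finite model when at least a
    fraction r of its elements satisfy the formula phi."""
    elems = list(universe)
    satisfying = sum(1 for x in elems if phi(x))
    return satisfying >= r * len(elems)

# a 10-element model with a "modular" relation x = 0 (mod 3):
# only 4 of the 10 elements satisfy it, so 'at least half' fails
print(frac_quantifier(range(10), lambda x: x % 3 == 0, 0.5))  # False
```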
2016-01-26T13:41:06Z
http://hdl.handle.net/2117/82017
The robustness of periodic orchestrations in uncertain evolving environments
Castro Rabal, Jorge; Gabarró Vallès, Joaquim; Serna Iglesias, María José; Stewart, Alan
A framework for assessing the robustness of long-duration repetitive orchestrations in uncertain evolving environments is proposed. The model assumes that service-based evaluation environments are stable over short time-frames only; over longer periods service-based environments evolve as demand fluctuates and contention for shared resources varies.
The behaviour of a short-duration orchestration E in a stable environment is assessed by an uncertainty profile U and a corresponding zero-sum angel-daemon game Gamma(U).
Here the angel-daemon approach is extended to assess evolving environments by means of a subfamily of stochastic games. These games are called strategy oblivious because their transition probabilities are strategy-independent. It is shown that the value of a strategy-oblivious stochastic game is well defined and that it can be computed by solving a linear system. Finally, the proposed stochastic framework is used to assess the evolution of the Gabrmn IT system.
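A sketch of why strategy-independent transitions yield a linear system: once the one-shot game value at each state is known, the long-run values decouple from the players' strategies. Everything below is invented for illustration (the states, stage values, transition matrix, and the discounted criterion are assumptions, not the paper's model), and the system is solved by fixed-point iteration rather than matrix inversion.

```python
def chain_values(stage, P, beta=0.9, tol=1e-12):
    """Long-run values of a chain whose transitions P do not depend on
    the players' strategies: given the one-shot game value stage[s] at
    each state s, the values solve the linear system
        v[s] = stage[s] + beta * sum_t P[s][t] * v[t],
    iterated here to its fixed point (a contraction for beta < 1)."""
    v = [0.0] * len(stage)
    while True:
        nv = [stage[s] + beta * sum(p * v[t] for t, p in enumerate(P[s]))
              for s in range(len(stage))]
        if max(abs(a - b) for a, b in zip(nv, v)) < tol:
            return nv
        v = nv

# two invented environment states: stage values 1 and 0, fixed transitions
v = chain_values([1.0, 0.0], [[0.5, 0.5], [0.2, 0.8]])
print(v)  # approximately [3.84, 2.47]
```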
2016-01-26T08:45:30Z
http://hdl.handle.net/2117/81250
Pattern Structures and Concept Lattices for Data Mining and Knowledge Processing
Kaytoue, Mehdi; Codocedo, Victor; Buzmakov, Aleksey; Baixeries i Juvillà, Jaume
This article aims at presenting recent advances in Formal Concept Analysis (2010-2015), especially for dealing with complex data (numbers, graphs, sequences, etc.) in domains such as databases (functional dependencies), data mining (local pattern discovery), information retrieval and information fusion. As these advances are mainly published in artificial intelligence and FCA-dedicated venues, dissemination towards the data mining and machine learning communities is worthwhile.
2016-01-11T18:47:36Z
http://hdl.handle.net/2117/79981
Absolute-type shaft encoding using LFSR sequences with a prescribed length
Fuertes Armengol, José Mª; Balle Pigem, Borja de; Ventura Capell, Enric
Maximal-length binary sequences have existed for a long time. They have many interesting properties, and one of them is that, when taken in blocks of n consecutive positions, they form 2^n - 1 different codes in a closed circular sequence. This property can be used to measure absolute angular positions, as the circle can be divided into as many parts as different codes can be retrieved. This paper describes how a closed binary sequence with an arbitrary length can be effectively designed with the minimal possible block length using linear feedback shift registers. Such sequences can be used to measure a specified exact number of angular positions using the minimal possible number of sensors that linear methods allow.
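The window property behind this kind of encoder can be demonstrated for the classical maximal-length case (the paper's actual contribution, sequences of arbitrary prescribed length, is not reproduced here): every block of n consecutive bits of an m-sequence, read circularly, is a distinct non-zero code.

```python
def lfsr_sequence(taps, n):
    """One full period of a Fibonacci LFSR over GF(2) with the given
    feedback taps (0-indexed positions into the n-bit state)."""
    state = [1] * n                      # any non-zero seed works
    bits = []
    for _ in range(2 ** n - 1):          # maximal period for primitive taps
        bits.append(state[-1])           # output the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift, feeding the XOR back in
    return bits

# taps (3, 2) realise a primitive degree-4 polynomial, so the period is 15
seq = lfsr_sequence((3, 2), 4)
# window property: the 15 blocks of 4 consecutive bits (read circularly)
# are all distinct, giving 2^4 - 1 = 15 absolute angular positions
windows = {tuple(seq[(i + j) % len(seq)] for j in range(4))
           for i in range(len(seq))}
print(len(seq), len(windows))  # 15 15
```

An encoder disc printed with this circular sequence needs only 4 single-bit sensors in consecutive positions to recover which of the 15 sectors it is in.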
2015-11-26T17:41:12Z
http://hdl.handle.net/2117/79345
Non-crossing dependencies: Least effort, not grammar
Ferrer Cancho, Ramon
The use of null hypotheses (in a statistical sense) is common in the hard sciences but not in theoretical linguistics. Here the null hypothesis that the low frequency of syntactic dependency crossings is expected from an arbitrary ordering of words is rejected. It is shown that this would require star dependency structures, which are both unrealistic and too restrictive. The hypothesis of the limited resources of the human brain is revisited. Stronger null hypotheses that take into account actual dependency lengths for the likelihood of crossings are presented. Those hypotheses suggest that crossings are likely to be reduced when dependencies are shortened. A hypothesis based on pressure to reduce dependency lengths is more parsimonious than a principle of minimization of crossings or a grammatical ban that is totally dissociated from the general and non-linguistic principle of economy.
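The notion of a crossing used here is standard: links (i, j) and (k, l) cross when exactly one endpoint of one link falls strictly between the endpoints of the other. A brute-force counter makes this concrete (the example links are invented):

```python
from itertools import combinations

def crossings(deps):
    """Count crossing pairs among dependency links, each link given as a
    pair of word positions: after sorting each link's endpoints,
    (i, j) and (k, l) cross iff i < k < j < l (or symmetrically)."""
    def cross(a, b):
        (i, j), (k, l) = sorted(a), sorted(b)
        return i < k < j < l or k < i < l < j
    return sum(cross(a, b) for a, b in combinations(deps, 2))

# invented 5-word sentence: links (3,5) and (2,4) interleave, so they cross;
# links that share an endpoint or nest inside each other do not
print(crossings([(1, 2), (3, 5), (2, 4)]))  # 1
print(crossings([(1, 4), (2, 3)]))          # 0 (nested)
```

Shortening a link pulls its endpoints together, shrinking the interval another link's endpoint could fall into, which is the intuition behind the dependency-length null hypotheses above.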
2015-11-17T09:45:11Z