DSpace Community:
http://hdl.handle.net/2117/3486
Last updated: Sun, 05 Jul 2015 02:50:07 GMT
Contact: webmaster.bupc@upc.edu
Universitat Politècnica de Catalunya. Servei de Biblioteques i Documentació
http://hdl.handle.net/2117/28349
Title: Displacement logic for anaphora
Authors: Morrill, Glyn; Valentín Fernández Gallart, José Oriol
Abstract: The displacement calculus of Morrill, Valentín and Fadda (2011) [25] aspires to replace the calculus of Lambek (1958) [13] as the foundation of categorial grammar by accommodating intercalation as well as concatenation, while remaining free of structural rules and enjoying Cut-elimination and its good corollaries. Jäger (2005) [11] proposes a type logical treatment of anaphora with syntactic duplication using limited contraction. Morrill and Valentín (2010) [24] apply (modal) displacement calculus to anaphora with lexical duplication and propose an extension with negation as failure, in conjunction with additives, to capture binding conditions. In this paper we present an account of anaphora developing characteristics and employing machinery from both of these proposals.
Date: Fri, 19 Jun 2015 09:15:45 GMT
Keywords: Anaphora, Binding principles, Categorial logic, Cut-elimination, Displacement calculus, Negation as failure
http://hdl.handle.net/2117/28306
Title: The placement of the head that minimizes online memory: a complex systems approach
Authors: Ferrer Cancho, Ramon
Abstract: It is well known that the length of a syntactic dependency determines its online memory cost. Thus, the problem of placing a head and its dependents (complements or modifiers) so as to minimize online memory is equivalent to the minimum linear arrangement problem for a star tree. However, how that length is translated into cognitive cost is not known. This study shows that the online memory cost is minimized when the head is placed at the center, regardless of the function that transforms length into cost, provided only that this function is strictly monotonically increasing. Online memory defines a quasi-convex adaptive landscape with a single central minimum if the number of elements is odd and two central minima if that number is even. We discuss various aspects of the dynamics of the word order of subject (S), verb (V) and object (O) from a complex systems perspective and suggest that word orders tend to evolve by swapping adjacent constituents from an initial or early SOV configuration that is attracted towards a central word order by online memory minimization. We also suggest that the stability of SVO is due to at least two factors: the quasi-convex shape of the adaptive landscape in the online memory dimension, and online memory adaptations that avoid regression to SOV. Although OVS is also optimal for placing the verb at the center, its low frequency is explained by its long distance from the seminal SOV in the permutation space.
Date: Mon, 15 Jun 2015 11:41:48 GMT
Keywords: Language dynamics, Neutrality, Adaptive landscape, Head placement, Language evolution, Word order
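The central-head result is easy to check numerically. The sketch below is our own toy illustration, not the paper's proof: for a star tree with one head and n-1 dependents laid out on a line, it brute-forces the total cost of each head position under an arbitrary strictly increasing cost function g.

```python
# Toy check (ours, not the paper's): total online-memory cost of placing the
# head at position p in a sequence of n elements, where each head-dependent
# distance d incurs cost g(d) for a generic strictly increasing g.

def total_cost(n, p, g):
    """Sum of g(|p - i|) over all dependent positions i (0-indexed)."""
    return sum(g(abs(p - i)) for i in range(n) if i != p)

def best_positions(n, g):
    """All head positions achieving the minimum total cost."""
    costs = [total_cost(n, p, g) for p in range(n)]
    m = min(costs)
    return [p for p, c in enumerate(costs) if c == m]

# Any strictly increasing g yields central minima:
print(best_positions(5, lambda d: d))        # odd n: one central minimum, [2]
print(best_positions(6, lambda d: d ** 2))   # even n: two central minima, [2, 3]
```

Varying g (linear, quadratic, square root) leaves the minima at the center, in line with the abstract's claim that only strict monotonicity matters.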
http://hdl.handle.net/2117/28305
Title: Reply to the commentary "Be careful when assuming the obvious", by P. Alday
Authors: Ferrer Cancho, Ramon
Abstract: Here we respond to some comments by Alday concerning headedness in linguistic theory and the validity of the assumptions of a mathematical model of word order. For brevity, we focus on only two assumptions: the unit of measurement of dependency length, and the monotonicity of the cost of a dependency as a function of its length. We also review the implicit psychological bias in Alday's comments. Notwithstanding, Alday points the way for linguistic research with his unusual concern for parsimony across multiple dimensions.
Date: Mon, 15 Jun 2015 11:27:58 GMT
Keywords: Clitics, Units of measurement, Headedness, Language evolution, Word order, Principles and parameters theory
http://hdl.handle.net/2117/28279
Title: The risks of mixing dependency lengths from sequences of different length
Authors: Ferrer Cancho, Ramon; Liu, Haitao
Abstract: Mixing dependency lengths from sequences of different length is a common practice in language research. However, the empirical distribution of dependency lengths of sentences of the same length differs from that of sentences of varying length. The distribution of dependency lengths depends on sentence length both for real sentences and under the null hypothesis that dependencies connect vertices located at random positions in the sequence. This suggests that certain results, such as the distribution of syntactic dependency lengths obtained by mixing dependencies from sentences of varying length, could be a mere consequence of that mixing. Furthermore, differences between two languages in the global average of dependency length (mixing lengths from sentences of varying length) do not by themselves imply that one language optimizes dependency lengths better than the other, because those differences could be due to differences in the distribution of sentence lengths, among other factors.
Date: Thu, 11 Jun 2015 11:35:43 GMT
Keywords: Syntactic dependency, Syntax, Dependency length
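The sentence-length dependence under the null hypothesis can be seen in a few lines. This simulation is our own illustration (not the paper's analysis): a dependency links two distinct positions drawn uniformly at random from a sentence of length n, and the expected length then grows as (n + 1)/3, so mixing sentences of different lengths mixes different distributions.

```python
# Illustrative simulation (ours): under the random-positions null model the
# expected dependency length in a sentence of n words is (n + 1) / 3, so the
# length distribution necessarily depends on sentence length.

import random

def random_dependency_length(n, rng):
    """Length |i - j| of a dependency between two random distinct positions."""
    i, j = rng.sample(range(1, n + 1), 2)
    return abs(i - j)

def mean_length(n, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(random_dependency_length(n, rng) for _ in range(trials)) / trials

for n in (5, 10, 20):
    # empirical mean vs. the analytic value (n + 1) / 3
    print(n, round(mean_length(n), 2), round((n + 1) / 3, 2))
```

For n = 2 the formula gives exactly 1 (the only possible length), and the empirical means track (n + 1)/3 closely for larger n.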
http://hdl.handle.net/2117/28273
Title: Beyond description: Comment on “Approaching human language with complex networks” by Cong and Liu
Authors: Ferrer Cancho, Ramon
Date: Thu, 11 Jun 2015 10:03:57 GMT
http://hdl.handle.net/2117/28269
Title: A categorial type logic
Authors: Morrill, Glyn
Abstract: In logical categorial grammar [23,11], syntactic structures are categorial proofs and semantic structures are intuitionistic proofs, and the syntax-semantics interface comprises a homomorphism from syntactic proofs to semantic proofs. Thereby, logical categorial grammar embodies in a pure logical form the principles of compositionality, lexicalism, and parsing as deduction. Interest has focused on multimodal versions, but the advent of the (dis)placement calculus of Morrill, Valentín and Fadda [21] suggests that the role of structural rules can be reduced, which facilitates computational implementation. In this paper we specify a comprehensive formalism of (dis)placement logic for the parser/theorem prover CatLog, integrating the categorial logic connectives proposed to date, and illustrate it with a cover grammar of the Montague fragment.
Date: Thu, 11 Jun 2015 08:22:10 GMT
http://hdl.handle.net/2117/28256
Title: Adaptively learning probabilistic deterministic automata from data streams
Authors: Balle Pigem, Borja de; Castro Rabal, Jorge; Gavaldà Mestre, Ricard
Abstract: Markovian models with hidden state are widely used formalisms for modeling sequential phenomena. The learnability of these models has been well studied when the sample is given in batch mode, and algorithms with PAC-like learning guarantees exist for specific classes of models such as Probabilistic Deterministic Finite Automata (PDFA). Here we focus on PDFA and give an algorithm for inferring models in this class in the restrictive data stream scenario: unlike existing methods, our algorithm works incrementally and in one pass, uses memory sublinear in the stream length, and processes input items in amortized constant time. We also present extensions of the algorithm that (1) reduce to a minimum the need for guessing parameters of the target distribution and (2) are able to adapt to changes in the input distribution, relearning new models when needed. We provide rigorous PAC-like bounds for all of the above. Our algorithm makes key use of stream sketching techniques to reduce memory and processing time, and is modular in that it can use different tests for state equivalence and for change detection in the stream.
Date: Wed, 10 Jun 2015 12:27:22 GMT
Keywords: PAC learning, Data streams, Probabilistic automata, PDFA, Stream sketches
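To convey the flavor of the sublinear-memory, one-pass summaries such stream learners rely on, here is a classic generic sketch (Space-Saving heavy hitters). It is shown purely as an illustration of stream sketching; it is not the paper's algorithm, and the example stream is made up.

```python
# Space-Saving sketch: one pass over the stream, at most k counters kept.
# Each stored count overestimates the true count by at most the value of the
# smallest counter at eviction time; frequent items are retained accurately.

def space_saving(stream, k):
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # Evict the smallest counter and inherit its count (+1).
            victim = min(counters, key=counters.get)
            counters[x] = counters.pop(victim) + 1
    return counters

stream = list("ababaabacabadabae")
print(space_saving(stream, 3))  # 'a' and 'b' are tracked exactly; rare items only approximately
```

With k = 3 the frequent symbols 'a' (9 occurrences) and 'b' (5) keep exact counts, while the rare tail items share the remaining counter.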
http://hdl.handle.net/2117/28159
Title: Learning read-constant polynomials of constant degree modulo composites
Authors: Chattopadhyay, Arkadev; Gavaldà Mestre, Ricard; Arnsfelt Hansen, Kristoffer; Thérien, Denis
Abstract: Boolean functions that have a constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC0. They are also precisely the functions computable efficiently by programs over fixed, finite nilpotent groups. This class is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable in the target polynomial appears in a constant number of monomials. Our algorithm extends to superconstant but low degree polynomials and still runs in quasipolynomial time.
Date: Wed, 03 Jun 2015 09:08:42 GMT
Keywords: Polynomials over finite rings, Exact learning, Membership queries, Modular gates
http://hdl.handle.net/2117/28156
Title: Building green cloud services at low cost
Authors: Berral García, Josep Lluís; Goiri, Iñigo; Nguyen, Thu D.; Gavaldà Mestre, Ricard; Torres Viñals, Jordi; Bianchini, Ricardo
Abstract: Interest in powering datacenters at least partially from on-site renewable sources, e.g. solar or wind, has been growing. In fact, researchers have studied distributed services comprising networks of such "green" datacenters, and load distribution approaches that "follow the renewables" to maximize their use. However, prior works have not considered where to site such a network for efficient production of renewable energy while minimizing both datacenter and renewable plant building costs. Moreover, researchers have not built real load management systems for follow-the-renewables services. Thus, in this paper we propose a framework, an optimization problem, and a solution approach for siting and provisioning green datacenters for a follow-the-renewables HPC cloud service. We illustrate the location selection tradeoffs by quantifying the minimum cost of achieving different amounts of renewable energy. Finally, we design and implement a system capable of migrating virtual machines across the green datacenters to follow the renewables. Among other interesting results, we demonstrate that one can build green HPC cloud services at a relatively low additional cost compared to existing services.
Date: Wed, 03 Jun 2015 08:45:58 GMT
Keywords: Datacenter, Renewable energy, Green computing
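A drastically simplified version of the siting question can make the tradeoff concrete. The toy below is our own sketch with entirely hypothetical site names, costs, and renewable yields (none of it from the paper): it brute-forces the cheapest subset of candidate sites meeting a renewable-energy target.

```python
# Toy siting problem (ours, not the paper's framework): choose datacenter
# sites meeting a renewable-energy target at minimum total build cost.

from itertools import combinations

# Hypothetical candidates: (name, build cost, renewable MWh per year)
sites = [("A", 10, 40), ("B", 7, 25), ("C", 12, 60), ("D", 5, 15)]

def cheapest_siting(sites, target_renewable):
    """Brute force over subsets; fine for a handful of candidates."""
    best = None
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            cost = sum(s[1] for s in combo)
            green = sum(s[2] for s in combo)
            if green >= target_renewable and (best is None or cost < best[0]):
                best = (cost, [s[0] for s in combo])
    return best

print(cheapest_siting(sites, 70))  # cheapest subset yielding >= 70 MWh/year
```

Raising the target traces out the cost-versus-renewables curve the abstract alludes to; the paper's real formulation would of course also account for load migration and plant provisioning.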
http://hdl.handle.net/2117/28038
Title: Semantically inactive multiplicatives and words as types
Authors: Morrill, Glyn; Valentín Fernández Gallart, José Oriol
Abstract: The literature on categorial type logic includes proposals for semantically inactive additives, quantifiers, and modalities (Morrill 1994 [17]; Hepple 1990 [2]; Moortgat 1997 [9]), but to our knowledge there has been no proposal for semantically inactive multiplicatives. In this paper we formulate such a proposal (thus filling a gap in the typology of categorial connectives) in the context of the displacement calculus of Morrill et al. (2011 [16]), and we give a formulation of words as types whereby for every expression w there is a corresponding type W(w). We show how this machinery can treat the syntax and semantics of collocations involving apparently contentless words such as expletives, particle verbs, and (discontinuous) idioms. In addition, we give an account in these terms of the only known examples treated by Hybrid Type Logical Grammar (henceforth HTLG; Kubota and Levine 2012 [4]) that are beyond the scope of the unaugmented displacement calculus: gapping of particle verbs and discontinuous idioms.
Date: Mon, 25 May 2015 15:23:38 GMT
Keywords: Calculations, Semantics
http://hdl.handle.net/2117/27644
Title: ¿Libro de texto o material digital? ¿Aprendizaje autónomo o dirigido? : La enseñanza de la tecnología en distintos entornos personales de aprendizaje
Authors: Hernández Fernández, Antonio; Siscart, Bibiana
Abstract: The aim of this study was to compare different learning methodologies (autonomous, or directed by the teacher) according to the material used (digital, or a traditional textbook), creating a distinct Personal Learning Environment for each group of students. The students then evaluated the teaching proposal, assessing the weaknesses and strengths of each methodology. This pilot experience, carried out in secondary school, will be the starting point for its extension to university-level teaching.
Date: Wed, 29 Apr 2015 08:55:47 GMT
Keywords: Digital learning, Personal Learning Environment, Textbook, Autonomous learning, Directed learning
http://hdl.handle.net/2117/27512
Title: Isometries on L^2(X) and monotone functions
Authors: Boza Rocho, Santiago; Soria, Javier
Abstract: We study necessary and sufficient conditions for a bounded operator T defined on the Hilbert space L^2(X) to be an isometry, and show that, under suitable hypotheses, it suffices to restrict T to a smaller class of functions (e.g., if X = R+, to the cone of positive and decreasing functions). We also consider the problem of characterizing the sets Y ⊂ X for which the orthogonal projection of the operator T on L^2(Y) is also an isometry. Finally, we illustrate our results with several examples involving classical operators in different settings. (C) 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Date: Wed, 22 Apr 2015 11:07:17 GMT
Keywords: Isometries, Hardy operator, Hardy operator minus identity, Monotone functions, Decreasing functions, Measure spaces, Inequalities, Cone
http://hdl.handle.net/2117/27198
Title: When is Menzerath-Altmann law mathematically trivial? A new approach
Authors: Ferrer Cancho, Ramon; Hernández Fernández, Antonio; Baixeries i Juvillà, Jaume; Debowski, Lukasz; Macutek, Jan
Abstract: Menzerath's law, the tendency of Z (the mean size of the parts) to decrease as X (the number of parts) increases, is found in language, music and genomes. Recently, it has been argued that the presence of the law in genomes is an inevitable consequence of the fact that Z = Y/X, which would imply that Z scales with X as Z ~ 1/X. That scaling is a very particular case of the Menzerath-Altmann law that has been rejected by means of a correlation test between X and Y in genomes, where X is the number of chromosomes of a species, Y its genome size in bases, and Z the mean chromosome size. Here we review the statistical foundations of that test and consider three non-parametric tests, based upon different correlation metrics, and one parametric test to evaluate whether Z ~ 1/X in genomes. The most powerful test is a new non-parametric one based upon the correlation ratio, which is able to reject Z ~ 1/X in nine out of 11 taxonomic groups and detects a borderline group. Rather than a fact, Z ~ 1/X is a baseline that real genomes do not meet. The view of the Menzerath-Altmann law as inevitable is seriously flawed.
Date: Thu, 09 Apr 2015 09:05:34 GMT
Keywords: Menzerath-Altmann law, Power laws
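The logic of the correlation test is simple: since Z = Y/X, the scaling Z ~ 1/X holds exactly when Y carries no dependence on X. The sketch below is our own illustration on synthetic data (not the paper's genomes or its correlation-ratio test), using plain Pearson correlation to separate the two regimes.

```python
# Minimal illustration (synthetic data, ours): Z = Y/X scales as 1/X exactly
# when Y is independent of X, so correlating X with Y probes the "trivial"
# baseline the paper discusses.

import random

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
X = [rng.randint(2, 40) for _ in range(500)]       # e.g. number of parts
Y_null = [rng.gauss(100, 5) for _ in X]            # Y independent of X: Z ~ 1/X
Y_real = [x * rng.gauss(10, 1) for x in X]         # Y grows with X: Z deviates from 1/X

print(round(pearson(X, Y_null), 2))   # near zero: the trivial baseline holds
print(round(pearson(X, Y_real), 2))   # clearly positive: baseline rejected
```

The paper's point is that real genomes pattern with the second case, so Z ~ 1/X is a rejectable baseline rather than a mathematical necessity.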
http://hdl.handle.net/2117/26625
Title: Stopping criteria in contrastive divergence: Alternatives to the reconstruction error
Authors: Buchaca, David; Romero Merino, Enrique; Mazzanti Castrillejo, Fernando Pablo; Delgado Pin, Jordi
Abstract: Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices for ascertaining generative models of data distributions. RBMs are often trained using the Contrastive Divergence (CD) learning algorithm, an approximation to the gradient of the data log-likelihood. A simple reconstruction error is often used to decide whether the approximation provided by the CD algorithm is good enough, though several authors (Schulz et al., 2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of this procedure. However, not many alternatives to the reconstruction error have been used in the literature. In this manuscript we investigate simple alternatives to the reconstruction error in order to detect as soon as possible the decrease in the log-likelihood during learning.
Date: Mon, 09 Mar 2015 12:28:53 GMT
Keywords: Restricted Boltzmann Machines, Contrastive divergence learning algorithm
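The reconstruction error under scrutiny can be made concrete in a few lines. This is our own minimal mean-field sketch of a Bernoulli RBM with arbitrary small random weights, not the authors' code or their proposed alternatives: it measures how well one up-down Gibbs half-step reproduces a visible vector.

```python
# Bare-bones sketch (ours) of the reconstruction error used as a CD stopping
# heuristic: propagate v -> hidden probabilities -> reconstructed visibles,
# then take the mean squared deviation from v.

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 3
W = rng.normal(0, 0.1, (n_vis, n_hid))  # small random weights (untrained RBM)
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_error(v):
    """MSE between v and its mean-field CD-1 reconstruction."""
    p_h = sigmoid(v @ W + b_hid)        # visible -> hidden probabilities
    p_v = sigmoid(p_h @ W.T + b_vis)    # hidden -> visible reconstruction
    return float(np.mean((v - p_v) ** 2))

v = rng.integers(0, 2, n_vis).astype(float)
print(reconstruction_error(v))
```

As the cited critiques note, this quantity can keep shrinking while the log-likelihood has already started to decrease, which is precisely why the paper looks for alternative stopping criteria.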
http://hdl.handle.net/2117/24652
Title: Characterization of database dependencies with FCA and pattern structures
Authors: Baixeries i Juvillà, Jaume; Kaytoue, Mehdi; Napoli, Amedeo
Abstract: In this review paper, we present some recent results on the characterization of Functional Dependencies and their variations with the formalism of Pattern Structures and Formal Concept Analysis. Although these dependencies have been paramount in database theory, they have also been used in other fields, such as artificial intelligence and knowledge discovery.
Date: Mon, 10 Nov 2014 16:42:21 GMT
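For readers outside database theory, the core notion being characterized is easy to state operationally. The helper below is our own illustration with a made-up toy table, not the paper's pattern-structure machinery: a functional dependency X -> Y holds in a relation when no two tuples agree on the attributes X but disagree on Y.

```python
# Small helper (ours): check whether a functional dependency X -> Y holds
# in a table given as a list of dicts, by hashing each X-projection and
# verifying it always maps to the same Y-projection.

def fd_holds(rows, X, Y):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in X)
        val = tuple(row[a] for a in Y)
        if seen.setdefault(key, val) != val:
            return False  # two rows agree on X but differ on Y
    return True

# Hypothetical toy relation:
rows = [
    {"emp": 1, "dept": "IT", "city": "Lyon"},
    {"emp": 2, "dept": "IT", "city": "Lyon"},
    {"emp": 3, "dept": "HR", "city": "Nancy"},
]
print(fd_holds(rows, ["dept"], ["city"]))   # True: dept -> city holds
print(fd_holds(rows, ["city"], ["emp"]))    # False: city does not determine emp
```

Pattern structures generalize exactly this agree/disagree test from equality of values to similarity over richer descriptions, which is what lets FCA characterize the variations of functional dependencies the review surveys.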