Reports de recerca
http://hdl.handle.net/2117/3943
Sun, 17 Dec 2017 19:44:05 GMT
http://hdl.handle.net/2117/111105
Fundamentos teóricos del análisis de correspondencias
Martí Recober, Manuel; Aluja Banet, Tomàs; Bécue Bertaut, Mónica María
Thu, 23 Nov 2017 09:03:26 GMT
http://hdl.handle.net/2117/111092
Análisis de correspondencias múltiples sobre un grafo
Aluja Banet, Tomàs; Martí Recober, Manuel
In data analysis, one often analyzes data matrices formed by nominal variables, correlated with others known as variables
Wed, 22 Nov 2017 18:23:20 GMT
http://hdl.handle.net/2117/110727
Local and partial correspondence analysis application to the analysis of electoral data
Aluja Banet, Tomàs
In data analysis we must often analyze data sets whose observations are related by a graph structure. This is the case for electoral data, where the electoral units correspond to definite geographical areas. In such cases it can be interesting to analyze the same phenomenon while fixing some a priori relation.
In the first part we present the rationale of these methods. The local analysis aims to eliminate the effect of the geographical position of individuals, represented by a contiguity graph, in an exploratory factorial analysis of spatial data. It also proves interesting to analyze the electoral results while keeping the socio-economic position constant, by means of a similarity graph. This is called partial analysis because it is based on the same idea as Rao's instrumental variables and partial correlation analysis.
In the second part of the article, this methodology is applied to the data matrix formed by 1059 electoral units, called sections, giving the electoral results of the 1984 autonomous election in Barcelona. Moreover, it is interesting to define regions of units with homogeneous electoral behaviour, obtained by an algorithm of clustering with a contiguity constraint.
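The contiguity-graph idea behind the local analysis can be sketched as follows. This is a minimal illustration with invented data: each unit is replaced by its deviation from the mean of its graph neighbours, and the principal axes of that local variation are extracted. The report itself works on correspondence-analysis profiles, not a plain PCA.

```python
import numpy as np

# Hypothetical toy data: rows are electoral units, columns are party vote shares.
rng = np.random.default_rng(0)
X = rng.random((6, 3))

# Contiguity graph as an adjacency matrix (1 = neighbouring units); here a chain.
A = np.array([
    [0, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

# Local analysis sketch: deviation of each unit from its neighbourhood mean,
# then an ordinary PCA on those local deviations.
neighbour_mean = (A @ X) / A.sum(axis=1, keepdims=True)
D = X - neighbour_mean                     # local (within-neighbourhood) variation
D = D - D.mean(axis=0)                     # centre

C = D.T @ D / len(D)                       # covariance of local deviations
eigvals, eigvecs = np.linalg.eigh(C)       # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # reorder axes by explained variance
scores = D @ eigvecs[:, order]             # factor coordinates of the units
```

Units close on these axes differ from their geographical neighbours in similar ways, which is the effect the local analysis isolates.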
Thu, 16 Nov 2017 09:07:20 GMT
http://hdl.handle.net/2117/110692
Complementary remarks and improvements to a lagrangean heuristic for capacitated plant location problems
Barceló Bugeda, Jaime; Casanovas Garcia, Josep
In a previous paper [1], a heuristic using multipliers from a Lagrangean relaxation was proposed for obtaining feasible solutions to a class of pure integer capacitated plant location problems. The heuristic consisted of three steps, the last one being a plant interchange step. Further computational experience has shown that the proposed interchange procedure could fail. In this paper we investigate the computational behaviour of the heuristic without the interchange procedure, and we report the results of our computational experience.
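The Lagrangean-relaxation machinery such heuristics build on can be sketched as follows. All cost data are invented, the subproblem uses a simplified greedy knapsack, and this is not the three-step heuristic of [1]: it only shows how multipliers for the dualized assignment constraints are updated by subgradient steps.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5                        # plants, customers (invented sizes)
c = rng.random((m, n))             # assignment costs c[i, j]
f = rng.random(m)                  # fixed plant-opening costs
d = np.ones(n)                     # unit demands
cap = np.full(m, 3.0)              # plant capacities

lam = np.zeros(n)                  # multipliers for sum_i x[i, j] = 1
for it in range(50):
    # The relaxed problem decomposes by plant: serve customers with the most
    # negative reduced cost, up to capacity (greedy stand-in for the knapsack).
    x = np.zeros((m, n))
    for i in range(m):
        red = c[i] - lam
        for j in np.argsort(red):
            if red[j] < 0 and x[i].sum() + d[j] <= cap[i]:
                x[i, j] = 1
        if f[i] + (red * x[i]).sum() >= 0:
            x[i] = 0               # cheaper to keep plant i closed
    # Subgradient of the dualized constraints and a diminishing-step update.
    g = 1.0 - x.sum(axis=0)
    lam = lam + (1.0 / (it + 1)) * g
```

The multipliers from such a loop are then fed into a rounding heuristic to build feasible solutions, which is the role of the first steps of the heuristic discussed above.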
Wed, 15 Nov 2017 15:24:05 GMT
http://hdl.handle.net/2117/104928
Clinical trial designs using CompARE. An on-line exploratory tool for investigators
Gómez Mateu, Moisés; Gómez Melis, Guadalupe
Conclusions from randomized clinical trials (RCT) rely primarily on the primary endpoint (PE) chosen at the design stage of the study. There should generally be only one PE, which should be able to provide the most clinically relevant and scientific evidence regarding the potential efficacy of the new treatment. Therefore, it is of the utmost importance to select it appropriately.
Composite endpoints (CE), consisting of the union of several endpoints, are often used as PE in RCT. Gomez and Lagakos (2013) developed a statistical methodology to evaluate the convenience of using a CE as opposed to one of its components. Their strategy is based on the asymptotic relative efficiency (ARE), relating the efficiency of using the logrank test based on the CE versus the efficiency based on one of its components. This paper introduces the freeware online platform CompARE, which facilitates the study of the performance of different candidate endpoints which could be used as PE at the design stage of a trial. CompARE, through an intuitive interface, implements the novel ARE method.
Research report approved by the doctoral and research committee of the EIO Department
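The trade-off that the ARE method quantifies analytically can be illustrated by simulation: compare the logrank test on a composite endpoint (time to the first of two events) against the test on a single component. All rates, sample sizes and follow-up below are invented illustrative values, not those of Gomez and Lagakos (2013), and this brute-force comparison stands in for their closed-form ARE.

```python
import numpy as np

def logrank_z(time, event, group):
    """Two-sample logrank z-statistic (group coded 0/1, no tied event times)."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    n = len(time)
    n_risk = n - np.arange(n)                    # total subjects at risk
    n1_risk = np.cumsum(group[::-1])[::-1]       # group-1 subjects at risk
    ev = event.astype(bool)
    o1 = group[ev].sum()                         # observed group-1 events
    e1 = (n1_risk[ev] / n_risk[ev]).sum()        # expected count under H0
    v = (n1_risk[ev] * (n_risk[ev] - n1_risk[ev]) / n_risk[ev] ** 2).sum()
    return (o1 - e1) / np.sqrt(v)

rng = np.random.default_rng(2)
n, tau, reps = 100, 1.0, 200                     # per-arm size, follow-up, sims
group = np.repeat([0, 1], n)
rejections = {"composite": 0, "component": 0}
for _ in range(reps):
    t1 = rng.exponential(np.where(group == 1, 2.0, 1.0))  # treated endpoint
    t2 = rng.exponential(1.5, size=2 * n)                 # unaffected endpoint
    for name, t in (("composite", np.minimum(t1, t2)), ("component", t1)):
        obs, ev = np.minimum(t, tau), (t <= tau).astype(int)
        if abs(logrank_z(obs, ev, group)) > 1.96:
            rejections[name] += 1
```

When the second event carries no treatment effect, diluting the composite, the component test tends to reject more often; CompARE lets investigators explore such scenarios without simulating.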
Fri, 26 May 2017 12:58:29 GMT
http://hdl.handle.net/2117/99456
Indústria 4.0 / Status Report Marc de referència sobre la Indústria 4.0 octubre 2016
Fonseca Casas, Pau
The purpose of this document is to make the elements of Industry 4.0 known to engineers, to the Catalan industrial fabric and to society, so that it can be used as an instrument to facilitate debate and the construction of a standardized discourse around it. There is an ongoing debate about the extent to which the marketing of Industry 4.0 runs ahead of reality, or the other way around. In any case, the objective of the i4.0 Commission of Enginyers de Catalunya is to contribute to establishing solid foundations and to formalizing the body of knowledge of Industry 4.0.
Tue, 17 Jan 2017 12:53:31 GMT
http://hdl.handle.net/2117/97832
Generación automàtica de reglas difusas en dominios poco estructurados con variables numéricas
Vazquez, Fernando; Gibert, Karina
In this report, an application of a methodology for the automatic generation of conceptual descriptions characterizing a given partition in an ill-structured domain is presented. A specific application to a wastewater treatment process (WWTP) illustrates the behaviour of this methodology. The methodology is based on the combination of statistical tools and inductive learning, in such a way that the nature of the data is preserved, avoiding prior transformations of the variables. Thus qualitative and quantitative information can be induced from data. This information is useful for the automatic generation of a system of fuzzy rules, which, in turn, allows the posterior recognition of the obtained classes.
Previous work has shown that the multiple box-plot is a useful and powerful statistical tool for distinguishing classes by means of numerical variables. It constitutes the basis for the methodology presented here, which permits the detection of the relevant variables that characterize each class.
In this report, we propose the first version of a formal methodology whose objective is the automatic generation of conceptual class descriptions. The goal is to characterize the various situations that can arise in a day at a wastewater treatment plant (relevant information for facilitating the plant's decision-making processes).
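The box-plot-to-rule step can be sketched as follows. The data, class labels and variable are all invented, and for brevity the rules are kept as crisp interquartile intervals rather than the fuzzy memberships the report derives from them.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-class samples of one numerical variable (e.g. an effluent
# quality measurement under two invented plant situations).
data = {
    "normal":   rng.normal(5.0, 0.5, 40),
    "overload": rng.normal(8.0, 0.7, 40),
}

# One interval rule per class from its box-plot: "if q1 <= x <= q3 then class".
rules = {cls: tuple(np.percentile(x, [25, 75])) for cls, x in data.items()}

def classify(value):
    """Classes whose interquartile box contains the value."""
    return [cls for cls, (lo, hi) in rules.items() if lo <= value <= hi]
```

A variable is relevant for a class when its box barely overlaps the boxes of the other classes, which is exactly what the multiple box-plot makes visible.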
Wed, 07 Dec 2016 10:28:46 GMT
http://hdl.handle.net/2117/97830
A Methodology of knowledge discovery in serial measurement applied to a psychiatric domain
Rodas Osollo, Jorge Enrique; Gibert, Karina; Rojo, Emilio; Cortés García, Claudio Ulises
The paper introduces a methodology of Knowledge Discovery in Serial Measurement (KDSM) for analyzing repeated very short time series with a blocking factor in ill-structured domains. This proposal focuses on results obtained in a real application to psychiatry, where common statistical analysis (time series analysis, multivariate methods, etc.) and artificial intelligence techniques (knowledge-based methods, inductive learning), used independently, are often inadequate because of the intrinsic characteristics of the domain. This work shows how the limitations of the classical approaches are overcome by using KDSM. KDSM is built as the combination of clustering based on rules, introduced by Gibert (1994), with some inductive learning (AI) and clustering (statistics) techniques.
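The serial-measurement setting can be pictured with a toy grouping step. Each row below is one subject's very short series over four invented time points; a plain k-means stands in for the clustering component only, since KDSM itself combines clustering based on rules with inductive learning.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two invented profile shapes stand in for the psychiatric serial measurements.
series = np.vstack([
    rng.normal(0.0, 0.3, (10, 4)),                   # flat profiles
    rng.normal(0.0, 0.3, (10, 4)) + np.arange(4),    # rising profiles
])

# Plain k-means on the raw 4-point profiles.
k = 2
centers = series[rng.choice(len(series), k, replace=False)]
for _ in range(20):
    dist = ((series[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dist.argmin(axis=1)                     # nearest-centre assignment
    centers = np.vstack([
        series[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
        for j in range(k)
    ])
```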
Wed, 07 Dec 2016 10:14:43 GMT
http://hdl.handle.net/2117/97644
Determinación de factores influyentes sobre una respuesta en un dominio poco estructurado
Rodas Osollo, Jorge Enrique; Gibert, Karina; Rojo, Emilio
This report focuses on results obtained from a classification technique applied to time series data in a medical ill-structured domain. Statistical analysis and classification of such data in ill-structured domains are often inadequate because of the intrinsic characteristics of those domains.
The database in this analysis contains information on patients with major depressive disorders or schizophrenia; as a consequence, a large number of database variables contain data corresponding to measurements taken at different instants of time, forming curves.
This motivates the question of how to establish a useful classification technique for curves in a medical ill-structured domain.
Thu, 01 Dec 2016 17:26:04 GMT
http://hdl.handle.net/2117/97551
Using the partial least squares (PLS) method to establish critical success factor interdependence in ERP implementation projects
Esteves, José; Pastor Collado, Juan Antonio; Casanovas Garcia, Josep
This technical research report proposes the use of a statistical approach named Partial Least Squares (PLS) to define the relationships between critical success factors for ERP implementation projects. In previous research work, we developed a unified model of critical success factors for ERP implementation projects. Some researchers have evidenced the relationships between these critical success factors; however, no one has defined these relationships in a formal way. PLS is one of the techniques of the structural equation modeling approach, so this report presents an overview of that approach. We provide an example application of PLS modelling, in this case using two critical success factors. However, our project will be extended to all the critical success factors of our unified model. To process the data, we will use PLS-Graph, developed by Wynne Chin.
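The core PLS iteration that tools like PLS-Graph automate can be sketched for the two-factor case. The indicator data below are invented: two blocks of observed items driven by one shared latent variable stand in for two critical success factors, and the alternating weight/score updates are a bare-bones version of the algorithm, not the full PLS-Graph procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
latent = rng.normal(size=n)                     # shared construct (invented)
X = np.column_stack([latent + 0.3 * rng.normal(size=n) for _ in range(3)])
Y = np.column_stack([latent + 0.3 * rng.normal(size=n) for _ in range(2)])
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize indicators
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)

# Alternating estimation of outer weights and latent scores.
t = Y.mean(axis=1)                              # initial score for block Y
for _ in range(100):
    w_x = X.T @ t                               # outer weights of block X
    s = X @ w_x
    s = s / s.std()                             # latent score of block X
    w_y = Y.T @ s
    t = Y @ w_y
    t = t / t.std()                             # latent score of block Y

path = float(np.corrcoef(s, t)[0, 1])           # structural coefficient estimate
```

The correlation between the two estimated latent scores plays the role of the path coefficient linking the two critical success factors in the structural model.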
Wed, 30 Nov 2016 15:55:49 GMT