Journal articles
http://hdl.handle.net/2117/3499
2015-11-27T05:09:21Z

Optimal buildings’ energy consumption calculus through a distributed experiment execution
http://hdl.handle.net/2117/78876
Fonseca Casas, Pau; Fonseca i Casas, Antoni; Garrido Soriano, Núria; Ortiz, Joana; Casanovas Garcia, Josep; Solom, Jaume
The calculus of building energy consumption is a demanding task because multiple factors must be considered during experimentation. Additionally, the definition of the model and the experiments is complex because the problem is multidisciplinary. When we face complex models and experiments that require a considerable amount of computational resources, the application of distributed solutions is imperative to reduce the time needed to define the model and the experiments and to obtain the answers. In this paper, we first address the definition and the implementation of an environmental model that describes the behavior of a building from a sustainability point of view and enables the use of several simulation and calculus engines in a co-simulation scenario. Second, we define a distributed experimental framework that enables us to obtain results in a reasonable amount of time. This methodology has been applied to the energy consumption calculation, but it can also be applied to other modeling problems that usually require a considerable amount of resources, reducing the time needed to perform modeling, implementation, verification, and experimentation.
2015-11-06T11:02:33Z
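The distributed experimental framework described above amounts to farming out independent simulation runs over a parameter grid and collecting the results. A minimal sketch of that pattern, using Python threads as stand-ins for distributed workers and a toy quadratic demand function in place of a real calculus engine (both the `energy_demand` function and the grid values are illustrative assumptions, not the paper's implementation):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def energy_demand(params):
    """Toy stand-in for one costly simulation run (e.g. a calculus-engine call)."""
    insulation, glazing = params
    return (insulation - 0.3) ** 2 + (glazing - 0.6) ** 2

# experiment design: every combination of two building parameters
grid = list(product([0.1, 0.2, 0.3, 0.4, 0.5], [0.4, 0.5, 0.6, 0.7, 0.8]))

# dispatch the independent runs in parallel and keep the best configuration
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(energy_demand, grid))
best = min(zip(results, grid))
```

Swapping the thread pool for a process pool or a job queue over several machines changes the transport, not the structure of the experiment.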

Using specification and description language to formalize multiagent systems
http://hdl.handle.net/2117/28481
Fonseca Casas, Pau
Simulation is a multidisciplinary field of study used in different scopes, involving people with different areas of knowledge and backgrounds. Formal languages are important tools for building, understanding, and maintaining simulation models. The formalization of an intelligent agent is not an easy task because of its complex behavior. In this study, we apply a formal and graphical language, Specification and Description Language, to formalize an intelligent agent. This formalization captures the complete and unambiguous behavior of the agents, and the graphic structure of the language simplifies the understanding of their behavior. This formal representation of the model also simplifies joining multiagent system (MAS) models and interaction models. In addition, because Specification and Description Language is a standard language, several tools are capable of understanding the model, which leads to an automatic implementation.
2015-07-01T08:31:37Z

Passenger flow simulation in a hub airport: an application to the Barcelona International Airport
http://hdl.handle.net/2117/28478
Fonseca Casas, Pau; Casanovas Garcia, Josep; Ferran, Xavier
This paper describes a conceptual model intended to be applied in a general approach to the microsimulation of hub airport terminals. The proposed methodology is illustrated with the development of a simulation model originally intended to help in the design of the new terminal at Barcelona International Airport. This model represents in detail, among many other elements, passenger flows in the different areas of these complex facilities. Agent-based simulation techniques were included to represent the behavior of the different actors, and a formal representation of the model using Specification and Description Language (SDL) was used to capture the complexity of all the system elements. To preprocess the diverse and considerable amount of raw data provided by airport designers and other sources that feeds the simulation environment, Flight Planner Manager was developed as a toolkit to parameterize the different model factors and to generate the required specific input data. The project was conducted over three years, leading to a system conceived not only to assist in the airport's initial design process but also to serve as a recurrent decision-making instrument to dynamically optimize terminal management and operations.
2015-07-01T07:58:10Z
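At its core, a passenger-flow microsimulation advances passengers through service points such as check-in counters. A minimal FIFO queue sketch (the arrival times, service time, and counter count are illustrative, not data from the Barcelona model):

```python
import heapq

def simulate_queue(arrivals, service_time, counters):
    """FIFO service: each passenger (arrivals sorted by time) goes to the
    earliest-free counter; returns the average waiting time."""
    free_at = [0.0] * counters          # time at which each counter frees up
    heapq.heapify(free_at)
    waits = []
    for t in arrivals:
        free = heapq.heappop(free_at)   # earliest-available counter
        start = max(t, free)            # wait if every counter is busy
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return sum(waits) / len(waits)
```

With one counter, service time 5, and arrivals at 0, 1 and 2, the waits are 0, 4 and 8; adding counters drives the waits to zero, which is the kind of sizing question such a model answers.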

Geographical differences in whooping cough in Catalonia, Spain, from 1990 to 2010
http://hdl.handle.net/2117/28471
Crespo, Inma; Soldevila, Nuria; Muñoz Gracia, María del Pilar; Godoy, Pere; Carmona, Gloria; Domínguez García, Angela
Whooping cough is a communicable disease whose incidence has increased in recent years in some countries with vaccination. Since 1981, in Catalonia (Spain), cases must be reported to the Public Health Department. In 1997, surveillance changed from aggregated counts to individual reports, and the surveillance system was improved after 2002. Catalan public health coverage is universal and geographically uniform. The aim of this study was to determine whether there are differences in whooping cough incidence between rural and urban counties.
2015-06-30T11:56:03Z

Comments on: space-time wind speed forecasting for improved power system dispatch
http://hdl.handle.net/2117/28469
Muñoz Gracia, María del Pilar
2015-06-30T11:41:06Z

Formal simulation model to optimize building sustainability
http://hdl.handle.net/2117/28468
Fonseca Casas, Pau; Fonseca Casas, Antoni; Garrido Soriano, Núria; Casanovas Garcia, Josep
In this work, we present a simulation model that makes it possible to find optimal values for various building parameters, and the associated impacts, that reduce the energy demand or consumption of the building. In the study, we consider several situations with different levels of thermal insulation. To define and integrate the different models, a formal language (Specification and Description Language, SDL) is used. The main reason for using this formal language is that it makes it possible to define simulation models from graphical diagrams in an unambiguous and standard way, which simplifies the multidisciplinary interaction between team members. Additionally, the fact that SDL is an ISO standard simplifies its implementation, because several tools understand this language. This simplification increases the model's credibility and eases the validation and verification processes. In the present project, the simulation tools used were SDLPS (to rule the main simulation process) and Energy+ (as a calculus engine for energy demand). The interactions between all these tools are detailed and specified in the model, allowing a deeper comprehension of the processes that define the life of a building from the point of view of its sustainability. © 2014 Elsevier Ltd. All rights reserved.
2015-06-30T11:35:46Z

Deadlock-free scheduling method for flexible manufacturing systems based on timed colored Petri nets and Anytime Heuristic Search
http://hdl.handle.net/2117/28096
Baruwa, Olatunde T.; Piera, Miquel Angel; Guasch Petit, Antonio
This paper addresses the deadlock-free (DL-free) scheduling problem of flexible manufacturing systems (FMS) characterized by resource sharing, limited buffer capacity, routing flexibility, and the availability of material handling systems. The FMS scheduling problem is formulated using timed colored Petri net (TCPN) modeling, where each operation has a certain number of preconditions, an estimated duration, and a set of postconditions. Based on the reachability analysis of TCPN modeling, we propose a new anytime heuristic search algorithm which finds optimal or near-optimal DL-free schedules with respect to makespan as the performance criterion. The methodology tackles the time-constrained problem of very demanding systems (high-diversity production and make-to-order) in which computational time is a critical factor to produce optimal schedules that are DL-free. In such a rapidly changing environment and under tight customer due dates, producing optimal schedules becomes intractable given the time limitations and the NP-hard nature of scheduling problems. The proposed anytime search algorithm combines breadth-first iterative-deepening A* with suboptimal breadth-first heuristic search and backtracking. It guarantees that the search produces the best solution obtained so far within the allotted computation time and provides better solutions when given more time. The effectiveness of the approach is evaluated on a comprehensive benchmark set of DL-prone FMS examples, and the computational results show the superiority of the proposed approach over the previous works.
2015-05-28T13:24:17Z
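The anytime property, keeping the best schedule found so far and improving it while time allows, can be illustrated with a generic branch-and-bound over machine assignments. This sketches the principle only, not the authors' TCPN-based algorithm; `anytime_schedule` and its node budget are illustrative names:

```python
def anytime_schedule(jobs, machines, node_budget=None):
    """Anytime branch-and-bound: assign jobs to machines, minimising makespan.
    The incumbent (best schedule found so far) is always available, and the
    search can be cut off early with a node budget, trading quality for time."""
    best = {"makespan": float("inf"), "assignment": None}
    visited = [0]

    def dfs(i, loads, assignment):
        if node_budget is not None and visited[0] >= node_budget:
            return                          # out of time: keep the incumbent
        visited[0] += 1
        if max(loads) >= best["makespan"]:
            return                          # prune: cannot beat the incumbent
        if i == len(jobs):
            best["makespan"] = max(loads)   # improved incumbent found
            best["assignment"] = assignment[:]
            return
        for m in range(machines):           # branch: place job i on machine m
            loads[m] += jobs[i]
            assignment.append(m)
            dfs(i + 1, loads, assignment)
            assignment.pop()
            loads[m] -= jobs[i]

    dfs(0, [0] * machines, [])
    return best["makespan"], best["assignment"]
```

Interrupting the search returns the best schedule seen so far; letting it run to completion, as in the assertions below, yields the optimum.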

Estimate of influenza cases using generalized linear, additive and mixed models
http://hdl.handle.net/2117/27118
Oviedo de La Fuente, Manuel; Domínguez García, Angela; Muñoz Gracia, María del Pilar
We investigated reported cases of influenza in Catalonia (Spain) during 2010-2014, using data obtained from the SISAP program (Institut Català de la Salut, Generalitat of Catalonia). The covariates analyzed were population, age, date of report, and health region. Reported cases were first related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were then used to estimate the evolution of the transmission of influenza. Additive models can estimate nonlinear effects of the covariates by smooth functions, and mixed models can estimate data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated per 100,000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models adapted better to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
2015-03-31T18:04:48Z
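A Poisson GLM with log link, the simplest member of the model families compared above, can be fitted by Newton's method (equivalently, IRLS). A self-contained sketch on toy monthly counts with a winter indicator (the data are illustrative, not the SISAP records):

```python
import numpy as np

def poisson_glm(X, y, iters=50):
    """Fit a Poisson GLM with log link by Newton's method (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)              # fitted means under current beta
        W = mu[:, None] * X                # diag(mu) @ X, the IRLS weights
        beta = beta + np.linalg.solve(X.T @ W, X.T @ (y - mu))
    return beta

# toy monthly case counts, with a winter indicator as the single covariate
y = np.array([3, 4, 3, 4, 13, 14, 13, 16], dtype=float)
x = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
X = np.column_stack([np.ones_like(x), x])
beta = poisson_glm(X, y)
```

For a single binary covariate the fit reproduces the group means: `exp(beta[0])` is the off-season mean and `exp(beta[0] + beta[1])` the winter mean.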

MFACT, a new functionality in MFA FactoMineR
http://hdl.handle.net/2117/22005
Kostov, Belchin Adriyanov; Bécue Bertaut, Mónica María; Husson, François
We present multiple factor analysis for contingency tables (MFACT) and its implementation in the FactoMineR package. This method, through an option of the MFA function, allows us to deal with multiple contingency or frequency tables, in addition to the categorical and quantitative multiple tables already considered in previous versions of the package. Thanks to this revised function, either a multiple contingency table or a mixed multiple table integrating quantitative, categorical and frequency data can be tackled.

The FactoMineR package (Lê et al., 2008; Husson et al., 2011) offers the most commonly used principal component methods: principal component analysis (PCA), correspondence analysis (CA; Benzécri, 1973), multiple correspondence analysis (MCA; Lebart et al., 2006) and multiple factor analysis (MFA; Escofier and Pagès, 2008). Detailed presentations of these methods enriched by numerous examples can be consulted at the website http://factominer.free.fr/. An extension of the MFA function that considers contingency or frequency tables as proposed by Bécue-Bertaut and Pagès (2004, 2008) is detailed in this article.

First, an example is presented in order to motivate the approach. Next, the mortality data used to illustrate the method are introduced. Then we briefly describe multiple factor analysis (MFA) and present the principles of its extension to contingency tables. A real example on mortality data illustrates the handling of the MFA function to analyse these multiple tables and, finally, conclusions are presented.
2014-03-12T13:04:40Z
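The balancing step at the heart of MFA can be stated compactly: centre each table, divide it by its first singular value so that every table's leading axis carries unit inertia, then run a global SVD on the concatenation. A numpy sketch of that principle (an illustration only, not the FactoMineR implementation, which additionally handles contingency-table weighting):

```python
import numpy as np

def mfa(blocks):
    """Core step of multiple factor analysis: balance each table so its
    leading axis has unit inertia, then factor the concatenation."""
    weighted = []
    for X in blocks:
        Xc = X - X.mean(axis=0)                        # centre the columns
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]    # first singular value
        weighted.append(Xc / s1)                       # no table dominates
    Z = np.hstack(weighted)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return weighted, U * S                             # global row coordinates

rng = np.random.default_rng(0)
weighted, coords = mfa([rng.normal(size=(10, 4)), rng.normal(size=(10, 3))])
```

After the division, the first singular value of every weighted block is exactly 1, which is what prevents a single large table from dominating the global axes.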

Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information
http://hdl.handle.net/2117/21263
Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor
In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missing values are typically lower than inbreeding coefficients estimated by discarding them. Accounting for missing values by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
2014-01-17T12:14:15Z
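The quality-control check the paper starts from is the one-degree-of-freedom chi-square test for Hardy-Weinberg proportions on complete genotype counts. A short sketch (the genotype counts used in the assertions are illustrative):

```python
import math

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square test of Hardy-Weinberg proportions (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)           # allele frequency of A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival fn, df = 1
    return chi2, p_value
```

Discarding missing genotypes before forming these counts is precisely the step the paper shows can bias the test, which motivates the multiple-imputation approach.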