Departament d'Estadística i Investigació Operativa
http://hdl.handle.net/2117/3941
Sat, 21 Oct 2017 14:21:11 GMT
http://hdl.handle.net/2117/108911
Demand aggregator flexibility forecast: price incentives sensitivity assessment
Kotsis, Grigorios; Moschos, Ioannis; Corchero García, Cristina; Cruz Zambrano, Miguel
This work seeks to determine the potential of a Demand Aggregator within the Demand Response scheme. The authors describe and validate the optimization technique used by the Aggregator to enable demand flexibility in domestic microgrid premises. The microgrid comprises Distributed Generation and shiftable load devices. By applying a monetary incentive signal to the microgrid's Energy Management System, the Aggregator induces a change in the load profile, which demonstrates the potential of this concept for future electricity market and grid applications.
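The incentive mechanism described above can be illustrated with a minimal sketch: shiftable loads are placed in their cheapest admissible hours, and a price discount from the aggregator pulls consumption toward a target hour. All figures and the greedy placement rule are illustrative assumptions, not the paper's optimization model.

```python
import numpy as np

# Illustrative hourly price and aggregator incentive in EUR/kWh
# (hypothetical numbers, not taken from the paper).
price = np.array([0.10, 0.12, 0.20, 0.25, 0.22, 0.15])
incentive = np.array([0.00, -0.04, 0.00, 0.00, 0.00, 0.00])  # discount at hour 1

def schedule_shiftable(loads_kwh, allowed_hours, effective_price):
    """Place each shiftable load run in its cheapest allowed hour."""
    profile = np.zeros_like(effective_price)
    for kwh, hours in zip(loads_kwh, allowed_hours):
        cheapest = min(hours, key=lambda h: effective_price[h])
        profile[cheapest] += kwh
    return profile

loads = [2.0, 1.5]                 # two shiftable appliance runs (kWh)
windows = [[0, 1, 2], [1, 2, 3]]   # admissible hours for each run

base = schedule_shiftable(loads, windows, price)
flex = schedule_shiftable(loads, windows, price + incentive)
```

With the discount applied, both runs move to hour 1, changing the aggregate load profile exactly as the monetary incentive signal is meant to do.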
Fri, 20 Oct 2017 11:22:46 GMT
http://hdl.handle.net/2117/108748
Nonlinear loads model for harmonics flow prediction, using multivariate regression
Lamich Arocas, Manuel; Balcells Sendra, Josep; Corbalán Fuertes, Montserrat; Griful Ponsati, Eulàlia
This paper describes a method for obtaining a model of a single nonlinear load or a set of nonlinear loads (NLL) connected to a certain point of an electrical network. The basic assumption is that the network supplying the NLL has significant series impedances and is disturbed by other parallel, random, and unknown neighbor loads sharing part of the supply system with the NLL. The main interest of the model is its further use to predict the magnitude and flow of the harmonic currents generated by the NLL when a filter is added to reduce harmonic distortion. The modeling technique used in the paper is based on multivariate multiple-output regression and leads to a set of equations describing the NLL behavior (one for each of the harmonic currents). The model is obtained from data taken at the measuring point and is only valid for predicting the NLL behavior when new loads are connected at this point. The modeling method was first tested with V, I data coming from simulations using the MATLAB-Simulink SimPowerSystems toolbox. Finally, the method was validated using V, I data taken in a real installation with different neighbor loads and under different load conditions.
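The core fitting step, a multiple-output regression yielding one equation per harmonic current, can be sketched in a few lines. The synthetic data, the choice of predictors, and the coefficient matrix are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measured data: predictors X (e.g. voltage harmonics
# at the point of connection) and responses Y (harmonic currents of the NLL).
n, p, h = 200, 3, 2          # samples, predictors, harmonic orders modelled
X = rng.normal(size=(n, p))
B_true = np.array([[1.0, 0.5], [0.0, 2.0], [-1.0, 0.3]])  # hypothetical
Y = X @ B_true + 0.01 * rng.normal(size=(n, h))

# Multivariate multiple-output regression: one least-squares fit yields a
# separate linear equation for each harmonic current.
B_hat, *_ = np.linalg.lstsq(np.hstack([np.ones((n, 1)), X]), Y, rcond=None)
intercept, coefs = B_hat[0], B_hat[1:]
```

Each column of `coefs` is the fitted equation for one harmonic current, which is the form the paper uses to predict harmonic flows once a filter changes the network seen by the NLL.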
Tue, 17 Oct 2017 12:00:15 GMT
http://hdl.handle.net/2117/108718
Comparison of production strategies and degree of postponement when incorporating additive manufacturing to product supply chains
Minguella Canela, Joaquim; Muguruza Blanco, Asier; Bonada Bo, Jordi; Ramón Lumbierres, Daniel Jacobo; Heredia, F.-Javier (Francisco Javier); Gimeno Feu, Robert; Guo, Ping; Hamilton, Mary; Shastry, Kiron; Webb, Sunny
The best-selling products manufactured nowadays are made in long series along rigid product value chains. Product repetition and continuous, stable manufacturing is seen as a chance to achieve economies of scale. Nevertheless, these speculative strategies fail to meet special customer demands, thus reducing the effective market share of a product in a range. Additive Manufacturing technologies open promising product customization opportunities; however, achieving this requires delaying the production operations in order to incorporate the customer's inputs into the product materialization. The study offered in the present paper compares different possible production strategies for a product (via conventional technologies and Additive Manufacturing) and assesses the degree of postponement that would be recommended in order to meet a certain demand distribution. The problem is solved by a program containing a stochastic mathematical model that incorporates extensive information on costs and lead times for the required manufacturing operations.
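The speculation-versus-postponement trade-off can be sketched with a newsvendor-style Monte Carlo comparison: commit to a quantity before demand is known (conventional long series) versus produce to order via AM at a higher unit cost. All cost figures and the demand distribution are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.poisson(100, size=10_000)   # illustrative demand scenarios

# Hypothetical unit economics (not the paper's cost data):
c_conv, c_am = 5.0, 9.0                  # unit cost: long series vs AM
overstock, understock = 3.0, 12.0        # per-unit speculation penalties

def speculative_cost(q):
    """Conventional strategy: commit to q units before demand is known."""
    over = np.maximum(q - demand, 0)
    under = np.maximum(demand - q, 0)
    return c_conv * q + (overstock * over + understock * under).mean()

def postponed_cost():
    """Full postponement via AM: produce exactly the realized demand."""
    return c_am * demand.mean()

best_q = min(range(60, 160), key=speculative_cost)
```

Comparing `speculative_cost(best_q)` against `postponed_cost()` under different cost ratios and demand spreads is the kind of question the paper's stochastic model answers with real cost and lead-time information.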
Mon, 16 Oct 2017 11:33:58 GMT
http://hdl.handle.net/2117/108716
Choosing the most relevant level sets for depicting a sample of densities
Delicado Useros, Pedro Francisco; Vieu, Philippe
When exploring a sample composed of a set of bivariate density functions, visualising the data requires choosing the relevant level set(s). The approach proposed in this paper consists in defining the optimal level set(s) as the one(s) allowing for the best reconstitution of the whole density. A fully data-driven procedure is developed to estimate the link between the level set(s) and their corresponding density, to construct optimal level set(s), and to choose automatically the number of relevant level set(s). The method is based on recent advances in functional data analysis where both response and predictors are functional. After a detailed description of the methodology, finite-sample studies are presented (including both real and simulated data), while theoretical results are deferred to a final appendix.
The final publication is available at link.springer.com
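A toy version of the "best reconstitution" idea: for a density on a grid, score each candidate level by how well a scaled indicator of its level set approximates the density, and keep the best level. This least-squares scoring is a simplified stand-in, not the paper's functional-regression procedure.

```python
import numpy as np

# Toy bivariate density on a grid (a single Gaussian), standing in for one
# of the sample's density functions.
x = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)

def reconstruction_error(level):
    """Approximate f by a * 1{f >= level}; the best a is the mean of f on the set."""
    mask = f >= level
    a = f[mask].mean()
    return np.sum((f - a * mask) ** 2)

levels = np.linspace(f.max() * 0.05, f.max() * 0.95, 50)
best = min(levels, key=reconstruction_error)
```

The selected `best` level defines the single most informative level set for depicting this density; the paper extends this to several level sets chosen jointly and automatically.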
Mon, 16 Oct 2017 11:23:54 GMT
http://hdl.handle.net/2117/108513
A linear optimization based method for data privacy in statistical tabular data
Castro Pérez, Jordi; González Alastrué, José Antonio
National Statistical Agencies routinely disseminate large amounts of data. Prior to dissemination, these data have to be protected to avoid releasing confidential information. Controlled tabular adjustment (CTA) is one of the available methods for this purpose. CTA formulates an optimization problem that looks for the safe table which is closest to the original one. The standard CTA approach results in a mixed integer linear optimization (MILO) problem, which is very challenging for current technology. In this work we present a much less costly variant of CTA that formulates a multiobjective linear optimization (LO) problem, where binary variables are pre-fixed and the resulting continuous problem is solved by lexicographic optimization. Extensive computational results are reported using both commercial (CPLEX and XPRESS) and open-source (Clp) solvers, with either simplex or interior-point methods, on a set of real instances. Most instances were successfully solved with the LO-CTA variant in less than one hour, while many of them are computationally very expensive with the MILO-CTA formulation. The interior-point method outperformed simplex in this particular application.
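A minimal single-objective sketch of CTA with the protection direction pre-fixed (as the LO variant does with its binary variables): minimize the L1 distance of cell deviations subject to table additivity and a lower bound on the sensitive cell's perturbation. The 2x2 table and the protection level are hypothetical; the paper's lexicographic multiobjective refinement is omitted.

```python
import numpy as np
from scipy.optimize import linprog

s = 5.0  # hypothetical protection level for the sensitive cell (1,1)

# Cell deviations d are split as d = d_pos - d_neg (8 nonnegative variables,
# ordered d11+, d12+, d21+, d22+, d11-, d12-, d21-, d22-); the objective is
# the L1 distance to the original table.
c = np.ones(8)

# Additivity: deviations in each row and column must sum to zero
# (one redundant constraint dropped).
A_eq = np.array([
    [1, 1, 0, 0, -1, -1, 0, 0],   # row 1
    [0, 0, 1, 1, 0, 0, -1, -1],   # row 2
    [1, 0, 1, 0, -1, 0, -1, 0],   # column 1
])
b_eq = np.zeros(3)

# Protection with the direction pre-fixed upward: d11 >= s.
A_ub = np.array([[-1, 0, 0, 0, 1, 0, 0, 0]])
b_ub = np.array([-s])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
d = res.x[:4] - res.x[4:]
```

Additivity forces the compensating pattern d = (s, -s, -s, s), so the closest safe table deviates by 4s in total; real instances add many cells, bounds, and the lexicographic objective.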
Mon, 09 Oct 2017 10:13:22 GMT
http://hdl.handle.net/2117/108510
On geometrical properties of preconditioners in IPMs for classes of block-angular problems
Castro Pérez, Jordi; Nasini, Stefano
One of the most efficient interior-point methods for some classes of block-angular structured problems solves the normal equations by a combination of Cholesky factorizations and preconditioned conjugate gradient for, respectively, the block and linking constraints. In this work we show that the choice of a good preconditioner depends on geometrical properties of the constraint structure. In particular, the principal angles between the subspaces generated by the diagonal blocks and the linking constraints can be used to estimate ex ante the efficiency of the preconditioner. Numerical validation is provided with some generated optimization problems. An application to the solution of multicommodity network flow problems with nodal capacities and equal flows of up to 64 million variables and up to 7.9 million constraints is also presented. These computational results also show that predictor-corrector directions combined with iterative system solves can be a competitive option for large instances.
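The principal angles used above to estimate preconditioner efficiency ex ante can be computed from the SVD of the product of orthonormal bases of the two subspaces. A small numpy sketch on toy subspaces (not the paper's block and linking-constraint matrices):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two planes in R^3 sharing the x-axis, the second tilted 45 degrees in the
# remaining direction: expected angles are 0 and pi/4.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
angles = principal_angles(A, B)
```

Small principal angles mean the subspaces are nearly aligned; in the paper's setting, the angles between the diagonal-block and linking-constraint subspaces serve as the geometric predictor of conjugate-gradient behavior.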
Mon, 09 Oct 2017 09:31:06 GMT
http://hdl.handle.net/2117/107818
Puntos de servicio en aseos públicos para minimizar y equilibrar los tiempos de espera de hombres y mujeres [Service points in public toilets to minimize and balance the waiting times of men and women]
Grima Cintas, Pedro; Marco Almagro, Lluís; Tort-Martorell Llabrés, Xavier
The article raises the issue of the queues that usually form at the ladies' toilets, in contrast to the fluidity with which gentlemen can access theirs, and points out that this situation is due solely to bad dimensioning of the toilets, so the problem has an easy solution. After describing and parameterizing the queue-forming processes in ladies' and gentlemen's toilets, a simulation program is used to analyze in each case the influence of the number of service points on waiting times. Finally, the results obtained are used to give recommendations for balancing the waiting times of ladies and gentlemen while keeping them below reasonable levels.
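A bare-bones version of such a simulation: a FIFO queue served by a configurable number of identical service points, with exponential interarrival and occupation times. Parameters are illustrative, not the article's; the qualitative effect of adding a service point is what matters.

```python
import heapq
import numpy as np

def mean_wait(arrival_rate, service_mean, servers, n=20_000, seed=0):
    """Mean wait (same time units as the rates) in a FIFO multi-server queue."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1 / arrival_rate, n))
    service = rng.exponential(service_mean, n)
    free_at = [0.0] * servers          # min-heap of server-free times
    heapq.heapify(free_at)
    waits = np.empty(n)
    for i, (t, s) in enumerate(zip(arrivals, service)):
        start = max(t, heapq.heappop(free_at))
        waits[i] = start - t
        heapq.heappush(free_at, start + s)
    return waits.mean()

# Illustrative parameters (minutes): one arrival per minute, mean occupation
# of 1.5 minutes, comparing two versus three service points.
w_two = mean_wait(arrival_rate=1.0, service_mean=1.5, servers=2)
w_three = mean_wait(arrival_rate=1.0, service_mean=1.5, servers=3)
```

Running the same simulation with the longer occupation times typical of the ladies' toilets shows how many extra service points are needed to equalize `mean_wait` across both, which is the article's dimensioning question.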
Wed, 20 Sep 2017 10:48:33 GMT
http://hdl.handle.net/2117/107812
CHROMA: a maturity model for the information-driven decision-making process
Tort-Martorell Llabrés, Xavier; Parra, Xileidys
A novel maturity model for the information-driven decision-making process (DMP) in organisations is presented. The 'circumplex hierarchical representation of organisation maturity assessment' (CHROMA) model was developed for evaluating organisations regarding their competence and readiness in using information to support decisions. This model groups the most important informed-decision factors into five dimensions: data availability, data quality, data analysis and insights, information use, and decision-making. The model addresses these dimensions in an organised and systematic way, providing a framework for characterising the organisation's use of information in DMPs from an uninitiated stage to a completely embedded one. The model was tested in a pilot study on three small and medium-sized enterprises. The assessment involves interviewing key company personnel and evaluating the attributes and dimensions of the CHROMA model. Results indicate that the model is useful for identifying strengths and weaknesses, thereby providing insights for prioritising improvement actions.
Wed, 20 Sep 2017 10:11:28 GMT
http://hdl.handle.net/2117/107553
A methodology to discover and understand complex patterns: interpreted integrative multiview clustering (I2MC)
Sevilla-Villanueva, Beatriz; Gibert, Karina; Sànchez-Marrè, Miquel
The main goal of this work is to develop a methodology for finding nutritional patterns from a variety of individual characteristics, which can contribute to a better understanding of the interactions between nutrition and health, given that the complexity of the phenomenon leads to poor performance with classical approaches. An innovative methodology based on a combination of advanced clustering techniques and consistent conceptual interpretation of clusters is proposed to find more understandable patterns or clusters. The Interpreted Integrative Multiview Clustering (I2MC) combines the previously proposed Integrative Multiview Clustering (IMC) with a new interpretation methodology, NCIMS. IMC uses crossing operations over the several partitions obtained with the different views, and helps to reduce the high dimensionality of the data through a multiview division of variables. A comparison with other classical clustering techniques is provided to assess the performance of this approach. Two innovative cluster interpretation methodologies are proposed to support the understanding of the clusters: automatic methods to detect the significant variables that describe the clusters, together with a mechanism to check the consistency between the interpretations of the clusters of a single partition (CI-IMS) or between pairs of nested partitions (NCIMS). Some formal concepts are specifically introduced for use in NCIMS. I2MC is used to validate the interpretability of the participants' profiles from a nutritional intervention study. The method has advantages for dealing with complex datasets that include heterogeneous variables corresponding to different topics, and is able to provide meaningful partitions.
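The crossing operation at the heart of IMC can be sketched simply: the crossed partition's cells are the nonempty intersections of clusters from each view. The labels below are hypothetical; the full method adds interpretation and consistency checks on top of this.

```python
import numpy as np

def cross_partitions(labels_a, labels_b):
    """Cross two partitions: each cell of the result is a nonempty
    intersection of one cluster from each view."""
    pairs = list(zip(labels_a, labels_b))
    cells = sorted(set(pairs))
    lookup = {cell: k for k, cell in enumerate(cells)}
    return np.array([lookup[p] for p in pairs])

# Two views of six individuals, each clustered separately (toy labels).
view1 = [0, 0, 1, 1, 1, 0]
view2 = [0, 1, 0, 0, 1, 1]
crossed = cross_partitions(view1, view2)
```

Two individuals end up in the same crossed cell only if they agree in both views, which is how the multiview division of variables refines the clustering while keeping each view low-dimensional.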
Tue, 12 Sep 2017 11:46:22 GMT
http://hdl.handle.net/2117/107521
Measuring non-linear dependence for two random variables distributed along a curve
Delicado Useros, Pedro Francisco; Smrekar, Marcelo
We propose new dependence measures for two real random variables not necessarily linearly related. Covariance and linear correlation are expressed in terms of principal components and are generalized for variables distributed along a curve. Properties of these measures are discussed. The new measures are estimated using principal curves and are computed for simulated and real data sets. Finally, we present several statistical applications for the new dependence measures.
The final publication is available at link.springer.com
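The linear starting point of the abstract, expressing correlation through principal components, is easy to verify numerically: for two standardized variables, the absolute correlation equals the normalized gap between the eigenvalues of the correlation matrix. The sample below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bivariate sample with a known linear dependence (population corr = 0.8).
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)

# For standardized bivariate data the correlation matrix is [[1, r], [r, 1]],
# with eigenvalues 1 + |r| and 1 - |r|, so |r| = (lam1 - lam2) / (lam1 + lam2).
R = np.corrcoef(x, y)
lam = np.linalg.eigvalsh(R)[::-1]          # eigenvalues, descending
pc_corr = (lam[0] - lam[1]) / (lam[0] + lam[1])
```

The paper generalizes exactly this principal-component reading of correlation, replacing the first principal component by a principal curve so that dependence along a nonlinear curve is captured too.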
Fri, 08 Sep 2017 10:12:24 GMT