GNOM - Grup d'Optimització Numèrica i Modelització
http://hdl.handle.net/2117/3320
Sun, 22 Oct 2017 04:50:37 GMT
http://hdl.handle.net/2117/108911
Demand aggregator flexibility forecast: price incentives sensitivity assessment
Kotsis, Grigorios; Moschos, Ioannis; Corchero García, Cristina; Cruz Zambrano, Miguel
This work seeks to determine the potential of a Demand Aggregator within the Demand Response scheme. The authors describe and validate the optimization technique used by the Aggregator to enable demand flexibility in domestic microgrid premises. The microgrid comprises Distributed Generation and shiftable load devices. By applying a monetary incentive signal to the microgrid's Energy Management System, the Aggregator induces a change in the load profile, which demonstrates the potential of this concept for future electricity market and grid applications.
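A minimal sketch of the mechanism described above, assuming a toy tariff and a hypothetical rebate window rather than the paper's actual model or data: a deferrable load is scheduled by a linear program, and an aggregator incentive that lowers the effective price in a few hours pulls load into them.

    import numpy as np
    from scipy.optimize import linprog

    T, E, cap = 24, 10.0, 2.0           # hours, energy to schedule (kWh), power limit (kW)
    price = np.linspace(0.08, 0.16, T)  # illustrative tariff, rising through the day
    incentive = np.zeros(T)
    incentive[12:16] = 0.06             # hypothetical aggregator rebate at midday

    def schedule(effective_price):
        # min sum_t p_t x_t   s.t.   sum_t x_t = E,   0 <= x_t <= cap
        res = linprog(c=effective_price, A_eq=np.ones((1, T)), b_eq=[E],
                      bounds=[(0, cap)] * T, method="highs")
        return res.x

    base = schedule(price)
    shifted = schedule(price - incentive)
    print("load moved into rebate hours:", shifted[12:16].sum() - base[12:16].sum())

With these illustrative numbers all four rebate hours fill to capacity, which is the kind of load-profile change the incentive signal is meant to induce.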
Fri, 20 Oct 2017 11:22:46 GMT
http://hdl.handle.net/2117/108718
Comparison of production strategies and degree of postponement when incorporating additive manufacturing to product supply chains
Minguella Canela, Joaquim; Muguruza Blanco, Asier; Bonada Bo, Jordi; Ramón Lumbierres, Daniel Jacobo; Heredia, F.-Javier (Francisco Javier); Gimeno Feu, Robert; Guo, Ping; Hamilton, Mary; Shastry, Kiron; Webb, Sunny
The best-selling products manufactured nowadays are made in long series along rigid product value chains. Product repetition and continuous, stable manufacturing are seen as a way of achieving economies of scale. Nevertheless, these speculative strategies fail to meet special customer demands, thus reducing the effective market share of a product in a range. Additive Manufacturing technologies open promising product customization opportunities; to seize them, however, production operations must be delayed so that the customer's inputs can be incorporated into the product's materialization. The study offered in the present paper compares different possible production strategies for a product (via conventional technologies and Additive Manufacturing) and assesses the degree of postponement that would be recommended to meet a certain demand distribution. The problem is solved by a program implementing a stochastic mathematical model that incorporates extensive information on costs and lead times for the required manufacturing operations.
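The paper's stochastic model is far richer, but the core trade-off it prices can be sketched in a few lines: a speculative long-series run commits a quantity before demand is known and pays mismatch costs, while fully postponed additive manufacturing pays a higher unit cost but no mismatch. Every cost and the demand distribution below are illustrative assumptions, not data from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    demand = rng.normal(1000, 300, 100_000).clip(min=0)  # illustrative demand

    c_mts, c_am = 5.0, 9.0         # unit cost: long series vs. printed on demand
    overage, underage = 2.0, 12.0  # scrap cost vs. lost-margin cost per unit

    def mts_cost(Q):               # produce Q before demand is observed
        return (c_mts * Q
                + overage * np.maximum(Q - demand, 0)
                + underage * np.maximum(demand - Q, 0)).mean()

    Q_grid = np.arange(500, 2001, 25)
    best_mts = min(mts_cost(Q) for Q in Q_grid)
    am_cost = c_am * demand.mean()  # fully postponed: no mismatch cost
    print(f"best make-to-stock: {best_mts:.0f}, postponed AM: {am_cost:.0f}")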
Mon, 16 Oct 2017 11:33:58 GMT
http://hdl.handle.net/2117/108513
A linear optimization based method for data privacy in statistical tabular data
Castro Pérez, Jordi; González Alastrué, José Antonio
National Statistical Agencies routinely disseminate large amounts of data. Prior to dissemination these data have to be protected to avoid releasing confidential information. Controlled tabular adjustment (CTA) is one of the available methods for this purpose. CTA formulates an optimization problem that looks for the safe table closest to the original one. The standard CTA approach results in a mixed integer linear optimization (MILO) problem, which is very challenging for current technology. In this work we present a much less costly variant of CTA that formulates a multiobjective linear optimization (LO) problem, where binary variables are pre-fixed and the resulting continuous problem is solved by lexicographic optimization. Extensive computational results are reported using both commercial (CPLEX and XPRESS) and open source (Clp) solvers, with either simplex or interior-point methods, on a set of real instances. Most instances were successfully solved with the LO-CTA variant in less than one hour, while many of them are computationally very expensive with the MILO-CTA formulation. The interior-point method outperformed simplex in this particular application.
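The lexicographic step the LO-CTA variant relies on can be sketched generically: optimize the first objective, pin its optimal value with a constraint (up to a tolerance), then optimize the next objective over the remaining feasible set. The LP below is a toy instance, not a CTA table.

    import numpy as np
    from scipy.optimize import linprog

    A_ub = np.array([[1.0, 1.0], [-1.0, 2.0]])
    b_ub = np.array([4.0, 2.0])
    bounds = [(0, None), (0, None)]
    c1 = np.array([-1.0, 0.0])   # first priority: maximize x1
    c2 = np.array([0.0, -1.0])   # second priority: maximize x2

    r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    # Pin the first objective at its optimum: c1 @ x <= z1* + tol
    A2 = np.vstack([A_ub, c1])
    b2 = np.append(b_ub, r1.fun + 1e-9)
    r2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=bounds, method="highs")
    print("lexicographic optimum:", r2.x)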
Mon, 09 Oct 2017 10:13:22 GMT
http://hdl.handle.net/2117/108510
On geometrical properties of preconditioners in IPMs for classes of block-angular problems
Castro Pérez, Jordi; Nasini, Stefano
One of the most efficient interior-point methods for some classes of block-angular structured problems solves the normal equations by a combination of Cholesky factorizations and preconditioned conjugate gradient for, respectively, the block and linking constraints. In this work we show that the choice of a good preconditioner depends on geometrical properties of the constraint structure. In particular, the principal angles between the subspaces generated by the diagonal blocks and the linking constraints can be used to estimate ex ante the efficiency of the preconditioner. Numerical validation is provided with some generated optimization problems. An application to the solution of multicommodity network flow problems with nodal capacities and equal flows of up to 64 million variables and up to 7.9 million constraints is also presented. These computational results also show that predictor-corrector directions combined with iterative system solves can be a competitive option for large instances.
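The geometric quantity in question, the principal angles between two subspaces, can be computed with the standard Björck-Golub recipe: orthonormalize both bases and take the SVD of their cross-product. The matrices below are random stand-ins, not a real block-angular constraint structure.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 5))   # spans one subspace of R^50
    B = rng.standard_normal((50, 3))   # spans another

    Qa, _ = np.linalg.qr(A)            # orthonormal bases
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(sigma, -1.0, 1.0))  # principal angles, radians
    print("principal angles (degrees):", np.degrees(angles))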
Mon, 09 Oct 2017 09:31:06 GMT
http://hdl.handle.net/2117/106526
The probabilistic p-center problem: Planning service for potential customers
Martínez Merino, Luisa I.; Albareda Sambola, Maria; Rodríguez Chía, Antonio Manuel
This work deals with the probabilistic p-center problem, which aims at minimizing the expected maximum distance between any site with demand and its center, considering that each site has demand with a specific probability. The problem is of interest when emergencies may occur at predefined sites with known probabilities. For this problem we propose and analyze different formulations, as well as a Variable Neighborhood Search heuristic. Computational tests are reported, showing the potential and limits of each formulation, the impact of their enhancements, and the effectiveness of the heuristic.
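As a sketch of the objective only (not of the paper's formulations or its Variable Neighborhood Search), the expected maximum distance for a fixed set of centers can be estimated by sampling which sites have demand; brute-force search over small center sets then illustrates the problem. Sites and probabilities below are invented.

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)
    sites = rng.uniform(0, 10, (12, 2))
    prob = rng.uniform(0.2, 0.9, 12)           # P(site i has demand)
    dist = np.linalg.norm(sites[:, None] - sites[None, :], axis=2)
    scenarios = rng.random((5000, 12)) < prob  # sampled demand scenarios

    def expected_max_dist(centers):
        d = dist[:, list(centers)].min(axis=1)  # each site's closest center
        # max over sites with demand in each scenario (0 if none is active)
        return np.where(scenarios, d, 0.0).max(axis=1).mean()

    p = 2
    best = min(itertools.combinations(range(12), p), key=expected_max_dist)
    print("best centers:", best, "objective:", round(expected_max_dist(best), 3))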
Mon, 17 Jul 2017 10:35:32 GMT
http://hdl.handle.net/2117/105002
Optimización de costos logísticos: un caso de estudio de una empresa de plásticos
Mata Pérez, Miguel; Heredia, F.-Javier (Francisco Javier); Morales Carreón, Claudia Maribel
Nowadays, logistics costs represent a major improvement opportunity for companies, with transportation and inventory costs being the most significant. This work presents a case study of a company located in the region which currently incurs high logistics costs in its process of importing raw material from Asia to its subsidiary in Monterrey, N.L. By means of a mixed-integer mathematical model, the aforementioned costs are minimized.
The model has the following characteristics: it is a four-echelon facility location model, and it is multi-product, multi-period, and multi-transport.
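A heavily simplified sketch of the model family: a single-product, single-period, one-echelon capacitated facility location MIP with fixed opening costs and transport costs, far smaller than the paper's four-echelon, multi-product, multi-period, multi-transport model. All data are illustrative.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    F = np.array([100.0, 120.0])                         # fixed cost to open warehouse j
    C = np.array([[4.0, 6.0], [5.0, 3.0], [7.0, 2.0]])   # ship cost, client i from j
    demand = np.array([10.0, 15.0, 20.0])
    capacity = np.array([30.0, 40.0])
    nI, nJ = C.shape

    # variables: flows x_ij (row-major), then binaries y_j
    c = np.concatenate([C.ravel(), F])
    integrality = np.concatenate([np.zeros(nI * nJ), np.ones(nJ)])
    ub = np.concatenate([np.full(nI * nJ, np.inf), np.ones(nJ)])

    A_dem = np.zeros((nI, nI * nJ + nJ))   # sum_j x_ij = d_i
    for i in range(nI):
        A_dem[i, i * nJ:(i + 1) * nJ] = 1.0
    A_cap = np.zeros((nJ, nI * nJ + nJ))   # sum_i x_ij <= cap_j * y_j
    for j in range(nJ):
        A_cap[j, j:nI * nJ:nJ] = 1.0
        A_cap[j, nI * nJ + j] = -capacity[j]

    res = milp(c,
               constraints=[LinearConstraint(A_dem, demand, demand),
                            LinearConstraint(A_cap, -np.inf, 0.0)],
               integrality=integrality, bounds=Bounds(0, ub))
    print("total cost:", res.fun, "open warehouses:", res.x[nI * nJ:].round())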
Mon, 29 May 2017 12:54:47 GMT
http://hdl.handle.net/2117/103249
Taking advantage of unexpected WebCONSORT results
Cobo Valeri, Erik; González Alastrué, José Antonio
To estimate treatment effects, trials begin by randomising patients to the interventions under study and end by comparing patient evolution. To improve trial reports, the CONSORT statement provides authors and peer reviewers with a guide to the essential items that would allow research replication. Additionally, WebCONSORT aims to facilitate author reporting by providing the items from the different CONSORT extensions that are relevant to the trial being reported. WebCONSORT was estimated to improve the proportion of reported items by 0.04 (95% CI, –0.02 to 0.10), interpreted as “no important difference” relative to the pre-specified target scenario of a 0.15 effect-size improvement. However, an unplanned analysis found that, despite clear instructions, around a third of the manuscripts the editorial staff selected as trials were not actually randomised trials. We argue that surprises benefit science, and that further research should be conducted to improve the performance of editorial staff.
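For readers unfamiliar with the arithmetic behind such an interval, a Wald 95% confidence interval for a difference in proportions is diff ± 1.96·SE. The counts below are invented purely to illustrate the computation; they are not the WebCONSORT data.

    from math import sqrt

    x1, n1, x2, n2 = 312, 600, 288, 600   # hypothetical item counts
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"diff = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

The reported interval (–0.02 to 0.10) excludes the 0.15 target effect, which is why the result reads as “no important difference”.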
Tue, 04 Apr 2017 09:20:35 GMT
http://hdl.handle.net/2117/103229
A second order cone formulation of continuous CTA model
Lesaja, Goran; Castro Pérez, Jordi; Oganian, Anna
In this paper we consider a minimum-distance Controlled Tabular Adjustment (CTA) model for statistical disclosure limitation (control) of tabular data. The goal of the CTA model is to find the closest safe table to some original tabular data set that contains sensitive information. Closeness is usually measured using the l1 or l2 norm, each measure having its advantages and disadvantages. Recently, in [4], a regularization of l1-CTA using the Pseudo-Huber function was introduced in an attempt to combine positive characteristics of both l1-CTA and l2-CTA. All three models can be solved using appropriate versions of Interior-Point Methods (IPM). It is known that IPMs generally work better on well-structured problems such as conic optimization problems; thus, reformulating these CTA models as conic optimization problems may be advantageous. We present reformulations of Pseudo-Huber-CTA and l1-CTA as Second-Order Cone (SOC) optimization problems and test the validity of the approach on a small example of a two-dimensional tabular data set.
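For reference, the Pseudo-Huber function and the standard second-order cone encoding of an absolute-value bound take the following forms (our notation, not necessarily the paper's):

    \[
      \phi_\delta(x) = \delta^2\Bigl(\sqrt{1 + (x/\delta)^2} - 1\Bigr),
      \qquad \delta > 0,
    \]
    \[
      |x_i| \le t_i \;\Longleftrightarrow\; (t_i, x_i) \in \mathcal{Q}^2
      = \{(t, u) : t \ge \|u\|_2\},
    \]
    \[
      \min_x \|x\|_1
      = \min_{x,\,t} \Bigl\{\, \textstyle\sum_i t_i :
        (t_i, x_i) \in \mathcal{Q}^2,\ i = 1, \dots, n \,\Bigr\}.
    \]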
The final publication is available at link.springer.com
Mon, 03 Apr 2017 14:27:25 GMT
http://hdl.handle.net/2117/103224
Revisiting interval protection, a.k.a. partial cell suppression, for tabular data
Castro Pérez, Jordi; Via Baraldés, Anna
Interval protection or partial cell suppression was introduced in “M. Fischetti, J.-J. Salazar, Partial cell suppression: A new methodology for statistical disclosure control, Statistics and Computing, 13, 13–21, 2003” as a “linearization” of the difficult cell suppression problem. Interval protection replaces some cells by intervals containing the original cell value, unlike cell suppression, where the values are suppressed. Although the resulting optimization problem is still huge, as in cell suppression, it is linear, which allows the application of efficient procedures. In this work we present preliminary results with a prototype implementation of Benders decomposition for interval protection. Although the seminal publication on partial cell suppression applied a similar methodology, our approach differs in two aspects: (i) the boundaries of the intervals are completely independent in our implementation, whereas the 2003 implementation solved a simpler variant where the boundaries must satisfy a certain ratio; (ii) our prototype is applied to a set of seven general and hierarchical tables, whereas only three two-dimensional tables were solved with the 2003 implementation.
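The generic Benders loop behind such a prototype alternates a master problem that proposes the complicating variables with a dual subproblem that prices the proposal and returns a cut. The toy LP below illustrates the loop only; it is not the interval-protection model.

    import numpy as np
    from scipy.optimize import linprog

    # subproblem:  min 2*x1 + 3*x2  s.t.  x1 + x2 >= 5 - y,  x >= 0
    def subproblem_dual(y):
        # dual: max (5 - y) * u  s.t.  u <= 2, u <= 3, u >= 0
        r = linprog([-(5.0 - y)], A_ub=[[1.0], [1.0]], b_ub=[2.0, 3.0],
                    bounds=[(0, None)], method="highs")
        u = r.x[0]
        return u, (5.0 - y) * u        # dual price and subproblem cost

    cuts = []                          # stored dual prices u_k
    lb, ub = -np.inf, np.inf
    y = 0.0
    for _ in range(20):
        u, sub_cost = subproblem_dual(y)
        ub = min(ub, y + sub_cost)     # cost of a feasible solution
        cuts.append(u)
        # master: min y + eta  s.t.  eta >= u_k*(5 - y), 0 <= y <= 10, eta >= 0
        A = [[-uk, -1.0] for uk in cuts]
        b = [-5.0 * uk for uk in cuts]
        m = linprog([1.0, 1.0], A_ub=A, b_ub=b,
                    bounds=[(0, 10), (0, None)], method="highs")
        y, lb = m.x[0], m.fun
        if ub - lb < 1e-6:             # bounds meet: optimal
            break
    print(f"y* = {y:.2f}, optimal cost = {ub:.2f}")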
The final publication is available at link.springer.com
Mon, 03 Apr 2017 13:07:52 GMT
http://hdl.handle.net/2117/102005
Hub network design problems with profits
Alibeyg, Armaghan; Contreras Aguilar, Ivan; Fernández Aréizaga, Elena
This paper presents a class of hub network design problems with profit-oriented objectives, which extend several families of classical hub location problems. Potential applications arise in the design of air and ground transportation networks. These problems include decisions on the origin/destination nodes that will be served as well as the activation of different types of edges, and consider the simultaneous optimization of the collected profit, setup cost of the hub network and transportation cost. Alternative models and integer programming formulations are proposed and analyzed. Results from computational experiments show the complexity of such models and highlight their superiority for decision-making.
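Schematically, the profit-oriented objective of this model class trades collected profit against hub, edge, and transportation costs (the notation is ours, not the paper's):

    \[
      \max \;\; \sum_{k \in K} p_k z_k
      \;-\; \sum_{j \in H} f_j y_j
      \;-\; \sum_{e \in E} g_e s_e
      \;-\; \sum_{k \in K} \sum_{e \in E} c_e x^k_e,
    \]

where z_k activates service of origin/destination pair k with profit p_k, y_j and s_e are the hub and edge activation decisions with setup costs f_j and g_e, and x^k_e routes commodity k at transportation cost c_e, with routing restricted to activated edges (x^k_e <= s_e) incident to open hubs.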
Tue, 07 Mar 2017 09:01:52 GMT