GNOM - Grup d'Optimització Numèrica i Modelització
http://hdl.handle.net/2117/3320
Thu, 20 Apr 2017 20:56:14 GMT
http://hdl.handle.net/2117/103249
Taking advantage of unexpected WebCONSORT results
Cobo Valeri, Erik; González Alastrué, José Antonio
To estimate treatment effects, trials begin by randomising patients to the interventions under study and end by comparing patient outcomes. To improve trial reporting, the CONSORT statement provides authors and peer reviewers with a guide to the essential items that would allow research replication. Additionally, WebCONSORT aims to facilitate author reporting by providing the items from the different CONSORT extensions that are relevant to the trial being reported. WebCONSORT was estimated to improve the proportion of reported items by 0.04 (95% CI, –0.02 to 0.10), interpreted as “no important difference” relative to the pre-specified target scenario of a 0.15 improvement. However, a non-prespecified analysis found that, despite clear instructions, around a third of the manuscripts the editorial staff selected as trials were not actually randomised trials. We argue that surprises benefit science, and that further research should be conducted to improve the performance of editorial staff.
Tue, 04 Apr 2017 09:20:35 GMT
http://hdl.handle.net/2117/103229
A second order cone formulation of continuous CTA model
Lesaja, Goran; Castro Pérez, Jordi; Oganian, Anna
In this paper we consider a minimum-distance Controlled Tabular Adjustment (CTA) model for statistical disclosure limitation (control) of tabular data. The goal of the CTA model is to find the closest safe table to some original tabular data set that contains sensitive information. Closeness is usually measured using the l1 or l2 norm, each measure having its advantages and disadvantages. Recently, in [4] a regularization of l1-CTA using the pseudo-Huber function was introduced in an attempt to combine the positive characteristics of both l1-CTA and l2-CTA. All three models can be solved using appropriate versions of interior-point methods (IPM). It is known that IPMs generally work better on well-structured problems such as conic optimization problems; thus, reformulating these CTA models as conic optimization problems may be advantageous. We present reformulations of pseudo-Huber-CTA and l1-CTA as second-order cone (SOC) optimization problems and test the validity of the approach on a small example of a two-dimensional tabular data set.
The final publication is available at link.springer.com
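The pseudo-Huber regularization mentioned in the abstract interpolates between the l2 penalty (near zero) and the l1 penalty (in the tails), which is why it can blend the characteristics of l1-CTA and l2-CTA. A minimal sketch of the standard pseudo-Huber function (not code from the paper; `delta` is the usual smoothing parameter):

```python
import math

def pseudo_huber(z: float, delta: float = 1.0) -> float:
    """Standard pseudo-Huber function: smooth and convex everywhere.

    It behaves like z**2 / 2 near zero (l2-like) and grows linearly,
    like delta * |z|, for large |z| (l1-like).
    """
    return delta**2 * (math.sqrt(1.0 + (z / delta) ** 2) - 1.0)

# Near zero it tracks the quadratic penalty z**2 / 2 ...
assert abs(pseudo_huber(0.01) - 0.01**2 / 2) < 1e-6
# ... while for large |z| it approaches the linear penalty delta * |z|.
assert abs(pseudo_huber(1000.0) / 1000.0 - 1.0) < 1e-2
```

Because the function is twice differentiable, unlike the l1 norm itself, it is well suited to interior-point methods.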
Mon, 03 Apr 2017 14:27:25 GMT
http://hdl.handle.net/2117/103224
Revisiting interval protection, a.k.a. partial cell suppression, for tabular data
Castro Pérez, Jordi; Via Baraldés, Anna
Interval protection, or partial cell suppression, was introduced in “M. Fischetti, J.-J. Salazar, Partial cell suppression: A new methodology for statistical disclosure control, Statistics and Computing, 13, 13–21, 2003” as a “linearization” of the difficult cell suppression problem. Interval protection replaces some cells by intervals containing the original cell value, unlike cell suppression, where the values are suppressed. Although the resulting optimization problem is still huge, as in cell suppression, it is linear, thus allowing the application of efficient procedures. In this work we present preliminary results with a prototype implementation of Benders decomposition for interval protection. Although the above seminal publication on partial cell suppression applied a similar methodology, our approach differs in two aspects: (i) the boundaries of the intervals are completely independent in our implementation, whereas the 2003 work solved a simpler variant where the boundaries must satisfy a certain ratio; (ii) our prototype is applied to a set of seven general and hierarchical tables, whereas only three two-dimensional tables were solved with the 2003 implementation.
The final publication is available at link.springer.com
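To see why publishing an interval for one sensitive cell constrains the rest of the table, consider a 2x2 table with published marginals: additivity propagates the interval to every other internal cell. A toy illustration with hypothetical numbers (the actual optimization in the paper chooses the tightest such intervals subject to protection requirements):

```python
# 2x2 internal table with published row and column totals (hypothetical data).
table = [[20, 30], [25, 25]]
row_sums = [50, 50]
col_sums = [45, 55]

def implied_intervals(interval_00):
    """Publish cell (0,0) as an interval containing its true value; additivity
    with the published marginals then forces an interval on every other cell."""
    lo, hi = interval_00
    return {
        (0, 0): (lo, hi),
        (0, 1): (row_sums[0] - hi, row_sums[0] - lo),
        (1, 0): (col_sums[0] - hi, col_sums[0] - lo),
        (1, 1): (col_sums[1] - row_sums[0] + lo, col_sums[1] - row_sums[0] + hi),
    }

# Replacing the sensitive value 20 by the interval [15, 25] ...
intervals = implied_intervals((15, 25))
# ... implies, for instance, that cell (0,1) must lie in [25, 35].
assert intervals[(0, 1)] == (25, 35)
# Every implied interval still contains the true value of its cell.
assert all(lo <= table[i][j] <= hi for (i, j), (lo, hi) in intervals.items())
```

An attacker who knows the marginals can recover exactly these implied ranges, which is why protection requirements must be imposed on all sensitive cells simultaneously.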
Mon, 03 Apr 2017 13:07:52 GMT
http://hdl.handle.net/2117/102005
Hub network design problems with profits
Alibeyg, Armaghan; Contreras Aguilar, Ivan; Fernández Aréizaga, Elena
This paper presents a class of hub network design problems with profit-oriented objectives, which extend several families of classical hub location problems. Potential applications arise in the design of air and ground transportation networks. These problems include decisions on the origin/destination nodes that will be served as well as the activation of different types of edges, and consider the simultaneous optimization of the collected profit, setup cost of the hub network and transportation cost. Alternative models and integer programming formulations are proposed and analyzed. Results from computational experiments show the complexity of such models and highlight their superiority for decision-making.
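The profit-oriented trade-off described above (revenue from served origin/destination pairs versus hub setup and transportation cost) can be illustrated by brute-force enumeration on a toy instance. All data and names below are hypothetical, and the paper's models are integer programs solved with dedicated formulations, not enumeration:

```python
from itertools import combinations

hubs = {"H1": 10.0, "H2": 12.0}          # hub -> setup cost
# (origin, destination) -> revenue collected if the pair is served
profit = {("A", "B"): 9.0, ("A", "C"): 4.0, ("B", "C"): 7.0}
# transport cost of routing a pair through a given open hub
route_cost = {
    ("A", "B", "H1"): 3.0, ("A", "B", "H2"): 2.0,
    ("A", "C", "H1"): 5.0, ("A", "C", "H2"): 6.0,
    ("B", "C", "H1"): 2.0, ("B", "C", "H2"): 4.0,
}

def best_design():
    """Enumerate hub subsets; a pair is served only when its net profit
    (revenue minus cheapest routing cost via an open hub) is positive."""
    best = (0.0, frozenset(), {})        # (net profit, open hubs, served pairs)
    for r in range(1, len(hubs) + 1):
        for open_hubs in combinations(hubs, r):
            net = -sum(hubs[h] for h in open_hubs)
            served = {}
            for (o, d), rev in profit.items():
                cost = min(route_cost[o, d, h] for h in open_hubs)
                if rev - cost > 0:       # serving a pair is optional
                    net += rev - cost
                    served[o, d] = cost
            if net > best[0]:
                best = (net, frozenset(open_hubs), served)
    return best
```

On this instance, opening only H1 and leaving the unprofitable pair (A, C) unserved is optimal, which is exactly the kind of decision classical hub location models (which must serve all demand) cannot make.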
Tue, 07 Mar 2017 09:01:52 GMT
http://hdl.handle.net/2117/100936
Introducing capacities in the location of unreliable facilities
Albareda Sambola, Maria; Landete, Mercedes; Monge Ivars, Juan Francisco; Sainz Pardo, José Luis
The goal of this paper is to introduce facility capacities into the Reliability Fixed-Charge Location Problem in a sensible way. To this end, we develop and compare different models, which represent a tradeoff between the extreme models currently available in the literature, where a priori assignments are either fixed or can be fully modified after failures occur. In a series of computational experiments we analyze the obtained solutions and study the price of introducing capacity constraints under the alternative models, both in terms of computational burden and of solution cost.
Mon, 13 Feb 2017 15:30:49 GMT
http://hdl.handle.net/2117/100390
An evaluation of urban consolidation centers through continuous analysis with non-equal market share companies
Roca Riu, Mireia; Estrada Romeu, Miguel Ángel; Fernández Aréizaga, Elena
This paper analyzes the logistic cost savings produced by the implementation of Urban Consolidation Centers (UCC) in a dense area of a city. In these urban terminals, freight flows from interurban carriers are consolidated and transferred to a neutral last-mile carrier that performs the final deliveries. This operation reduces both the last-mile fleet size and the average distance cost. Our UCC modeling approach is based on continuous analytic models for the general case of carriers with different market shares. Savings are highly sensitive to the design of the system: the capacity increment of interurban vehicles and the proximity of the UCC terminal to the area relative to current distribution centers. An exhaustive collection of possible market share distributions is discussed. Results show that the market share distribution does not affect cost savings significantly. The analysis of the proposed model also highlights the trade-off between savings in the system and a minimum market share per company when the consolidation center is established.
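The core consolidation effect can be sketched with a textbook continuous approximation of routing distance (visiting n scattered customers in a region of area A costs roughly k*sqrt(n*A)). This is the generic Daganzo-style approximation, not the paper's model, and all numbers are hypothetical:

```python
import math

def local_tour_distance(n_customers, area, k=0.57):
    """Continuous approximation of the distance needed to visit n scattered
    customers in a region of the given area; k ~ 0.57 for Euclidean travel.
    A textbook approximation, used here only to illustrate consolidation."""
    return k * math.sqrt(n_customers * area)

area, per_carrier = 25.0, 400        # km^2, deliveries per carrier
carriers = 4
# Without a UCC, each carrier tours its own customers over the whole area.
without_ucc = carriers * local_tour_distance(per_carrier, area)
# With a UCC, a single neutral carrier tours all consolidated deliveries.
with_ucc = local_tour_distance(carriers * per_carrier, area)
# Consolidation cuts local distance by roughly a factor sqrt(carriers).
```

This square-root scaling is one reason savings depend more on the number of participating carriers and the system design than on how market shares are split among them.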
Tue, 31 Jan 2017 15:28:37 GMT
http://hdl.handle.net/2117/90193
Diseño de rutas de recogida de residuos sólidos urbanos en el área metropolitana de Barcelona
Bautista Valhondo, Joaquín; Pereira Gude, Jordi; Fernández Aréizaga, Elena
The problems associated with urban solid waste collection are very varied. This work presents the collection route design problem and reports the results obtained by an ant-colony-based procedure applied to collection in an urban district of the Barcelona metropolitan area.
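An ant-colony procedure builds candidate routes probabilistically, biased by pheromone trails and inverse distance, then evaporates and reinforces pheromone along good routes. A bare-bones Ant System on a toy 4-point instance (hypothetical distances; the procedure applied in the paper to waste-collection routing is considerably more elaborate):

```python
import math
import random

# Toy symmetric distance matrix between 4 collection points (hypothetical data).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
N = len(D)

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

def ant_colony(iters=100, ants=10, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Bare-bones Ant System: each ant builds a tour guided by pheromone
    (weight tau**alpha) and visibility (weight (1/d)**beta); pheromone then
    evaporates at rate rho and is reinforced along the tours found."""
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]                  # pheromone levels
    best_tour, best_len = None, math.inf
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour, unvisited = [0], set(range(1, N))
            while unvisited:
                i, cand = tour[-1], sorted(unvisited)
                w = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour, tour_length(tour)))
        for i in range(N):                               # evaporation
            for j in range(N):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:                       # reinforcement
            for k in range(N):
                a, b = tour[k], tour[(k + 1) % N]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
        for tour, length in tours:
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len
```

With only six distinct tours in this instance the colony quickly finds the optimum of length 23; the value of the metaheuristic shows on instances far too large to enumerate.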
Mon, 26 Sep 2016 11:33:11 GMT
http://hdl.handle.net/2117/90150
Interior-point solver for convex separable block-angular problems
Castro Pérez, Jordi
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solves the normal equations using sparse Cholesky factorizations for the block constraints, and a preconditioned conjugate gradient (PCG) method for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work, we present an efficient solver based on this algorithm. Some of its features are as follows: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor–corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to the structure-conveying modelling language (SML) based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using l1 and l2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to l1 which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is 2 to 3 orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
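The power-series preconditioner rests on the Neumann-series identity (I - M)^-1 = I + M + M^2 + ..., valid when the spectral radius of M is below 1, which is why estimating that radius matters for the preconditioner's quality. A minimal numeric illustration on a toy 2x2 matrix (not the solver's actual linking-constraint system):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neumann_inverse(M, terms):
    """Approximate (I - M)^-1 by the truncated series I + M + ... + M^terms.
    Valid when the spectral radius of M is below 1; each extra term improves
    the approximation, mirroring extra terms in the preconditioner."""
    n = len(M)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    S, P = I, I
    for _ in range(terms):
        P = mat_mul(P, M)
        S = mat_add(S, P)
    return S

M = [[0.1, 0.2], [0.3, 0.1]]   # spectral radius well below 1
approx = neumann_inverse(M, 20)
# Exact inverse of I - M is (1/0.75) * [[0.9, 0.2], [0.3, 0.9]].
assert abs(approx[0][0] - 1.2) < 1e-6
```

The closer the spectral radius is to 1, the more terms are needed for a given accuracy, which is the trade-off the solver manages by estimating the radius.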
Thu, 22 Sep 2016 15:31:40 GMT
http://hdl.handle.net/2117/90149
A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method
Castro Pérez, Jordi; Nasini, Stefano; Saldanha da Gama, Francisco
We propose a cutting-plane approach (namely, Benders decomposition) for a class of capacitated multi-period facility location problems. The novelty of this approach lies in the use of a specialized interior-point method for solving the Benders subproblems. The primal block-angular structure of the resulting linear optimization problems is exploited by the interior-point method, allowing the (either exact or inexact) efficient solution of large instances. The consequences of different modeling conditions and problem specifications on the computational performance are also investigated, both theoretically and empirically, providing a deeper understanding of the significant factors influencing the overall efficiency of the cutting-plane method. The proposed methodology allowed the solution of instances of up to 200 potential locations, one million customers and three periods, resulting in mixed integer linear optimization problems with up to 600 binary and 600 million continuous variables. Those problems were solved by the specialized approach in less than an hour and a half, outperforming other state-of-the-art methods, which exhausted the (144 Gigabytes of) available memory on the largest instances.
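Benders decomposition alternates between a master problem over the complicating (here, binary location) variables and a subproblem whose dual solution yields a cut that tightens the master's value-function estimate. The paper solves its subproblems with a specialized interior-point method; the sketch below instead uses a one-variable subproblem solvable in closed form, just to show the cut loop (all data hypothetical):

```python
def benders_toy(f=1.0, c=1.0, b=3.0, u=2.0, max_iter=10):
    """Benders for: min f*y + c*x  s.t.  x >= b - u*y, x >= 0, y in {0,1}.
    The master keeps a value-function estimate theta bounded below by
    optimality cuts theta >= pi*(b - u*y), where pi is the subproblem's
    optimal dual multiplier."""
    cuts = []                                  # dual multipliers pi
    best_ub = float("inf")
    for _ in range(max_iter):
        # Master: enumerate y in {0,1} (stand-in for a MILP master solve).
        def master_obj(y):
            theta = max([pi * (b - u * y) for pi in cuts], default=0.0)
            return f * y + theta
        y = min((0, 1), key=master_obj)
        lb = master_obj(y)                     # lower bound
        # Subproblem: min c*x s.t. x >= b - u*y, x >= 0 (closed form).
        x = max(0.0, b - u * y)
        pi = c if b - u * y > 0 else 0.0       # optimal dual multiplier
        best_ub = min(best_ub, f * y + c * x)  # upper bound
        if best_ub - lb < 1e-9:                # bounds meet: optimal
            return y, x, best_ub
        cuts.append(pi)
    return y, x, best_ub
```

On this instance the loop converges in two iterations to y = 1, x = 1 with cost 2. The practical point of the paper is that when the subproblem is a huge block-angular LP, the cost of each such iteration is dominated by the subproblem solve, so a specialized interior-point method pays off.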
Thu, 22 Sep 2016 15:07:07 GMT
http://hdl.handle.net/2117/89385
Numerical implementation and computational results of nonlinear network optimization with linear side constraints
Heredia, F.-Javier (Francisco Javier); Nabona Francisco, Narcís
Thu, 25 Aug 2016 13:03:53 GMT