Journal articles
http://hdl.handle.net/2117/3321
Fri, 15 Dec 2017 18:04:43 GMT
http://hdl.handle.net/2117/111476
Heuristic solutions to the facility location problem with general Bernoulli demands
Albareda Sambola, Maria; Fernández Aréizaga, Elena; Saldanha da Gama, Francisco
In this paper, a heuristic procedure is proposed for the facility location problem with general Bernoulli demands. This is a discrete facility location problem with stochastic demands that can be formulated as a two-stage stochastic program with recourse. In particular, facility locations and customer assignments must be decided here and now, i.e., before knowing which customers will actually require service. In a second stage, service decisions are made according to the actual requests. The proposed heuristic consists of a greedy randomized adaptive search procedure (GRASP) followed by path relinking. The heterogeneous Bernoulli demands make the computational effort of evaluating feasible solutions prohibitive; thus, the expected cost of a feasible solution is simulated when necessary. The results of extensive computational tests performed to evaluate the quality of the heuristic are reported, showing that high-quality feasible solutions can be obtained in fairly small computational times.
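The simulation step mentioned in this abstract can be illustrated with a small sketch. All names, costs and capacities below are hypothetical and not taken from the paper; the point is only that, under independent Bernoulli demands and facility capacities, the expected second-stage cost has no convenient closed form, so it is estimated by Monte Carlo:

```python
import random

def simulated_cost(assign, probs, serve_cost, capacity, penalty,
                   n_sims=2000, seed=0):
    """Monte Carlo estimate of the expected second-stage service cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        load = {}
        cost = 0.0
        for cust, fac in assign.items():
            if rng.random() < probs[cust]:      # customer turns out to need service
                load[fac] = load.get(fac, 0) + 1
                if load[fac] <= capacity[fac]:
                    cost += serve_cost[cust]
                else:
                    cost += penalty             # capacity exceeded: recourse cost
        total += cost
    return total / n_sims
```

A heuristic such as the one described would call an estimator like this only for promising candidate solutions, since each evaluation is itself costly.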
Fri, 01 Dec 2017 15:29:15 GMT
http://hdl.handle.net/2117/110899
Comparison of production strategies and degree of postponement when incorporating additive manufacturing to product supply chains
Minguella Canela, Joaquim; Muguruza Blanco, Asier; Ramón Lumbierres, Daniel Jacobo; Heredia, F.-Javier (Francisco Javier); Gimeno Feu, Robert; Guo, Ping; Hamilton, Mary; Shastry, Kiron; Webb, Sunny
The best-selling products manufactured nowadays are made in long series along rigid product value chains. Product repetition and continuous, stable manufacturing is seen as an opportunity to achieve economies of scale. Nevertheless, these speculative strategies fail to meet special customer demands, thus reducing the effective market share of a product within its range.
Additive Manufacturing technologies open promising product customization opportunities; to seize them, however, production operations must be delayed so that the customer's inputs can be incorporated into the product materialization.
The study offered in the present paper compares different possible production strategies for a product (via conventional technologies and Additive Manufacturing) and assesses the degree of postponement that would be recommended in order to meet a certain demand distribution. The problem is solved by a program implementing a stochastic mathematical model that incorporates extensive information on costs and lead times for the required manufacturing operations.
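The tradeoff the abstract describes can be sketched in miniature. This is not the paper's stochastic model; it is a hedged toy comparison, with made-up costs, between a speculative strategy (produce before demand is known, paying for over- and under-production) and a fully postponed strategy (build to order with Additive Manufacturing at a higher unit cost):

```python
def expected_cost_speculative(q, demand_pmf, unit_cost, salvage_loss, shortage_cost):
    # conventional strategy: produce q units before demand is known
    exp = 0.0
    for d, p in demand_pmf.items():
        over, short = max(q - d, 0), max(d - q, 0)
        exp += p * (q * unit_cost + over * salvage_loss + short * shortage_cost)
    return exp

def expected_cost_postponed(demand_pmf, am_unit_cost):
    # fully postponed strategy: additive manufacturing builds to order,
    # so there is no over/under-production, only a higher unit cost
    return sum(p * d * am_unit_cost for d, p in demand_pmf.items())
```

Comparing the two expected costs for a given demand distribution indicates which degree of postponement would be preferable under those (assumed) cost figures.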
Mon, 20 Nov 2017 08:13:49 GMT
http://hdl.handle.net/2117/108513
A linear optimization based method for data privacy in statistical tabular data
Castro Pérez, Jordi; González Alastrué, José Antonio
National Statistical Agencies routinely disseminate large amounts of data. Prior to dissemination, these data have to be protected to avoid releasing confidential information. Controlled tabular adjustment (CTA) is one of the available methods for this purpose. CTA formulates an optimization problem that looks for the safe table closest to the original one. The standard CTA approach results in a mixed integer linear optimization (MILO) problem, which is very challenging for current technology. In this work we present a much less costly variant of CTA that formulates a multiobjective linear optimization (LO) problem, where binary variables are pre-fixed and the resulting continuous problem is solved by lexicographic optimization. Extensive computational results are reported using both commercial (CPLEX and XPRESS) and open-source (Clp) solvers, with either simplex or interior-point methods, on a set of real instances. Most instances were successfully solved with the LO-CTA variant in less than one hour, while many of them are computationally very expensive with the MILO-CTA formulation. The interior-point method outperformed simplex in this particular application.
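The lexicographic principle used in this LO-CTA variant can be illustrated independently of any LP solver. The sketch below (purely illustrative; the paper works with continuous LPs, not finite candidate sets) minimizes the objectives in priority order, at each stage keeping only the minimizers of the previous one:

```python
def lexicographic_min(candidates, objectives, tol=1e-9):
    # restrict the candidate set to the minimizers of each objective in turn
    feasible = list(candidates)
    for f in objectives:
        best = min(f(x) for x in feasible)
        feasible = [x for x in feasible if f(x) <= best + tol]
    return feasible
```

In the CTA setting the first objective would measure total deviation from the original table and later objectives would break ties, but here both the candidates and the objectives are placeholders.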
Mon, 09 Oct 2017 10:13:22 GMT
http://hdl.handle.net/2117/108510
On geometrical properties of preconditioners in IPMs for classes of block-angular problems
Castro Pérez, Jordi; Nasini, Stefano
One of the most efficient interior-point methods for some classes of block-angular structured problems solves the normal equations by a combination of Cholesky factorizations and preconditioned conjugate gradient for, respectively, the block and linking constraints. In this work we show that the choice of a good preconditioner depends on geometrical properties of the constraint structure. In particular, the principal angles between the subspaces generated by the diagonal blocks and the linking constraints can be used to estimate ex ante the efficiency of the preconditioner. Numerical validation is provided with some generated optimization problems. An application to the solution of multicommodity network flow problems with nodal capacities and equal flows of up to 64 million variables and up to 7.9 million constraints is also presented. These computational results also show that predictor-corrector directions combined with iterative system solves can be a competitive option for large instances.
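The geometric quantity the abstract relies on, the principal angle between subspaces, is easy to compute in the simplest case. The sketch below handles only one-dimensional subspaces (spanned by a single vector each); for higher-dimensional subspaces the cosines of the principal angles are the singular values of the product of the orthonormal bases, which this toy does not implement:

```python
import math

def principal_angle_1d(a, b):
    # principal angle between the one-dimensional subspaces span{a} and span{b};
    # for higher-dimensional subspaces the cosines are the singular values of Qa^T Qb
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(min(1.0, abs(dot) / (na * nb)))
```

An angle near zero means the two subspaces nearly coincide, while an angle near pi/2 means they are nearly orthogonal; the paper uses such angles to estimate ex ante how well the preconditioner will perform.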
Mon, 09 Oct 2017 09:31:06 GMT
http://hdl.handle.net/2117/106526
The probabilistic p-center problem: Planning service for potential customers
Martínez Merino, Luisa I.; Albareda Sambola, Maria; Rodríguez Chía, Antonio Manuel
This work deals with the probabilistic p-center problem, which aims at minimizing the expected maximum distance between any site with demand and its center, considering that each site has demand with a specific probability. The problem is of interest when emergencies may occur at predefined sites with known probabilities. For this problem we propose and analyze different formulations as well as a Variable Neighborhood Search heuristic. Computational tests are reported, showing the potentials and limits of each formulation, the impact of their enhancements, and the effectiveness of the heuristic.
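The objective the abstract describes, the expected maximum distance when each site has demand only with some probability, can be evaluated exactly for a fixed set of centers. The sketch below assumes independent site demands (an assumption of this illustration, not a claim about the paper's exact model): sorting sites by distance, a distance d is the realized maximum exactly when its site is active and no farther site is.

```python
def expected_max_distance(dists, probs):
    # exact E[max distance among active sites], each site i being active
    # (i.e. having demand) independently with probability probs[i]
    pairs = sorted(zip(dists, probs))        # ascending distance
    exp, no_farther_active = 0.0, 1.0
    for d, p in reversed(pairs):
        exp += d * p * no_farther_active     # d is the max iff i is active and
        no_farther_active *= 1.0 - p         # no farther active site exists
    return exp
```

With all probabilities equal to one this reduces to the classical p-center objective (the plain maximum distance), which is a useful sanity check.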
Mon, 17 Jul 2017 10:35:32 GMT
http://hdl.handle.net/2117/103249
Taking advantage of unexpected WebCONSORT results
Cobo Valeri, Erik; González Alastrué, José Antonio
To estimate treatment effects, trials begin by randomising patients to the interventions under study and end by comparing patient evolution. To improve trial reports, the CONSORT statement provides authors and peer reviewers with a guide to the essential items that would allow research replication. Additionally, WebCONSORT aims to facilitate author reporting by providing the items from the different CONSORT extensions that are relevant to the trial being reported. WebCONSORT was estimated to improve the proportion of reported items by 0.04 (95% CI, –0.02 to 0.10), interpreted as "no important difference" relative to the prespecified target of a 0.15 effect-size improvement. However, in a non-scheduled analysis, it was found that, despite clear instructions, around a third of the manuscripts identified as trials by the editorial staff were not actually randomised trials. We argue that surprises benefit science, and that further research should be conducted to improve the performance of editorial staff.
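The reported interval admits a small arithmetic check. Assuming a symmetric normal-approximation 95% CI (an assumption of this note, not stated in the abstract), the standard error can be backed out from the interval's width, and one can verify that the 0.15 target lies outside the reported interval:

```python
def point_estimate(lo, hi):
    # midpoint of a symmetric confidence interval
    return (lo + hi) / 2

def implied_se(lo, hi, z=1.96):
    # back out the standard error from a symmetric 95% confidence interval
    return (hi - lo) / (2 * z)
```

Applied to the reported CI of (-0.02, 0.10), the midpoint recovers the 0.04 estimate and the implied standard error is roughly 0.031.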
Tue, 04 Apr 2017 09:20:35 GMT
http://hdl.handle.net/2117/102005
Hub network design problems with profits
Alibeyg, Armaghan; Contreras Aguilar, Ivan; Fernández Aréizaga, Elena
This paper presents a class of hub network design problems with profit-oriented objectives, which extend several families of classical hub location problems. Potential applications arise in the design of air and ground transportation networks. These problems include decisions on the origin/destination nodes that will be served as well as the activation of different types of edges, and consider the simultaneous optimization of the collected profit, setup cost of the hub network and transportation cost. Alternative models and integer programming formulations are proposed and analyzed. Results from computational experiments show the complexity of such models and highlight their superiority for decision-making.
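The profit-oriented flavour of these models can be conveyed with a deliberately tiny sketch. This is not one of the paper's formulations: it is a brute-force enumeration over hypothetical hub sets in which an origin/destination pair is served only when the collected profit exceeds its cheapest routing cost through an open hub, trading profit against setup and transportation cost:

```python
from itertools import combinations

def best_hub_set(nodes, setup, profit, route_cost, max_hubs=2):
    # enumerate candidate hub sets; an O/D pair is served only when its
    # profit exceeds its cheapest routing cost through an open hub
    best = (float("-inf"), None)
    for r in range(1, max_hubs + 1):
        for hubs in combinations(nodes, r):
            value = -float(sum(setup[h] for h in hubs))
            for od, p in profit.items():
                cheapest = min(route_cost[od][h] for h in hubs)
                value += max(p - cheapest, 0.0)   # unprofitable pairs are skipped
            if value > best[0]:
                best = (value, hubs)
    return best
```

Real instances require the integer programming formulations the paper proposes; enumeration is only viable at this toy scale.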
Tue, 07 Mar 2017 09:01:52 GMT
http://hdl.handle.net/2117/100936
Introducing capacities in the location of unreliable facilities
Albareda Sambola, Maria; Landete, Mercedes; Monge Ivars, Juan Francisco; Sainz Pardo, José Luis
The goal of this paper is to introduce facility capacities into the Reliability Fixed-Charge Location Problem in a sensible way. To this end, we develop and compare different models, which represent a tradeoff between the extreme models currently available in the literature, where a priori assignments are either fixed or can be fully modified after failures occur. In a series of computational experiments we analyze the obtained solutions and study the price of introducing capacity constraints under the alternative models, both in terms of computational burden and of solution cost.
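The "fixed a priori assignments" extreme mentioned in the abstract has a very direct expected-cost expression. The sketch below is a minimal illustration under assumptions of its own (each customer has one primary and one backup facility fixed in advance, and primaries fail independently with a common probability), not the paper's capacitated models:

```python
def expected_total_cost(assignments, fail_prob):
    # assignments: (primary_distance, backup_distance) per customer, fixed a priori;
    # each primary facility fails independently with probability fail_prob
    return sum((1.0 - fail_prob) * p + fail_prob * b for p, b in assignments)
```

Once capacities enter the picture, a failed primary may overload the backups, which is precisely why the paper's intermediate models are needed.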
Mon, 13 Feb 2017 15:30:49 GMT
http://hdl.handle.net/2117/90150
Interior-point solver for convex separable block-angular problems
Castro Pérez, Jordi
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solves the normal equations using sparse Cholesky factorizations for the block constraints and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work, we present an efficient solver based on this algorithm. Some of its features are as follows: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor-corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; and, for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to the structure-conveying modelling language (SML) based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using ℓ1 and ℓ2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to the ℓ1 distance which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
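The power-series idea behind the preconditioner can be shown on a toy scale. Writing the matrix as I - A, its inverse equals the Neumann series I + A + A^2 + ..., which converges when the spectral radius of A is below 1; this is exactly why the abstract stresses estimating that spectral radius. The dense pure-Python sketch below is illustrative only (the actual solver never forms these matrices explicitly):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def neumann_inverse(A, terms):
    # approximate (I - A)^-1 by the truncated power series I + A + A^2 + ...,
    # valid when the spectral radius of A is below 1
    n = len(A)
    identity = [[float(i == j) for j in range(n)] for i in range(n)]
    acc = [row[:] for row in identity]
    power = [row[:] for row in identity]
    for _ in range(terms):
        power = mat_mul(power, A)
        acc = [[acc[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return acc
```

The smaller the spectral radius, the fewer terms are needed for a good approximation, which is why the estimate is instrumental for the preconditioner's quality.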
Thu, 22 Sep 2016 15:31:40 GMT
http://hdl.handle.net/2117/90149
A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method
Castro Pérez, Jordi; Nasini, Stefano; Saldanha da Gama, Francisco
We propose a cutting-plane approach (namely, Benders decomposition) for a class of capacitated multi-period facility location problems. The novelty of this approach lies in the use of a specialized interior-point method for solving the Benders subproblems. The primal block-angular structure of the resulting linear optimization problems is exploited by the interior-point method, allowing the efficient (either exact or inexact) solution of large instances. The consequences of different modeling conditions and problem specifications on the computational performance are also investigated, both theoretically and empirically, providing a deeper understanding of the significant factors influencing the overall efficiency of the cutting-plane method. The proposed methodology allowed the solution of instances with up to 200 potential locations, one million customers and three periods, resulting in mixed integer linear optimization problems with up to 600 binary and 600 million continuous variables. These problems were solved by the specialized approach in less than an hour and a half, outperforming other state-of-the-art methods, which exhausted the available memory (144 gigabytes) on the largest instances.
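The master/subproblem interplay of Benders decomposition can be demonstrated on a toy facility problem. This sketch is a hedged illustration, not the paper's method: the master is solved by plain enumeration instead of a MILP solver, the recourse function has a single linear piece so one optimality cut suffices, and all data are invented. It still shows the defining loop: the master proposes a design with an underestimate theta of the recourse cost, the subproblem evaluates the true cost, and a cut is added until they agree.

```python
from itertools import product

def benders_toy(fixed_cost, capacity, demand, penalty, max_iters=20):
    # min over y in {0,1}^n of  sum_i fixed_cost[i]*y_i + Q(y),
    # with recourse cost Q(y) = penalty * max(demand - installed capacity, 0)
    n = len(fixed_cost)
    have_cut = False
    for _ in range(max_iters):
        best = (float("inf"), None, None)
        for y in product((0, 1), repeat=n):     # master: enumerate designs
            served = sum(c * v for c, v in zip(capacity, y))
            theta = max(0.0, penalty * (demand - served)) if have_cut else 0.0
            cost = sum(f * v for f, v in zip(fixed_cost, y)) + theta
            if cost < best[0]:
                best = (cost, y, theta)
        _, y, theta = best
        served = sum(c * v for c, v in zip(capacity, y))
        q = penalty * max(demand - served, 0.0)  # subproblem: true recourse cost
        if q <= theta + 1e-9:                    # cut satisfied: y is optimal
            return y, sum(f * v for f, v in zip(fixed_cost, y)) + q
        have_cut = True                          # add the optimality cut
    return None
```

In the paper, the subproblems are large linear programs whose block-angular structure makes the specialized interior-point method the right tool; the toy replaces them with a closed-form expression.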
Thu, 22 Sep 2016 15:07:07 GMT