Journal articles
http://hdl.handle.net/2117/3942
2016-09-29T15:28:55Z
http://hdl.handle.net/2117/90156
Herbivores, saprovores and natural enemies respond differently to within-field plant characteristics of wheat fields
Caballero López, Berta; Blanco Moreno, José M.; Pujade Villar, Juli; Ventura, Daniel; Sánchez Espigares, Josep Anton; Sans Serra, Francesc Xavier
Understanding ecosystem functioning in a farmland context by considering the variety of ecological strategies employed by arthropods is a core challenge in ecology and conservation science. We adopted a functional approach in an assessment of the relationship between three functional plant groups (grasses, broad-leaves and legumes) and the arthropod community in winter wheat fields in a Mediterranean dryland context. We sampled the arthropod community as thoroughly as possible with a combination of suction catching and flight-interception trapping. All specimens were identified to the appropriate taxonomic level (family, genus or species) and classified according to their form of feeding: chewing-herbivores, sucking-herbivores, flower-consumers, omnivores, saprovores, parasitoids or predators. We found that a richer plant community favoured a greater diversity of herbivores and that, in turn, a greater richness of herbivores and saprovores enhanced the communities of their natural enemies, which supports the classical trophic-structure hypothesis. Grass cover had a positive effect on sucking-herbivores, saprovores and their natural enemies, probably because grasses provide, either directly or indirectly, alternative resources, or simply offer better environmental conditions. By including legumes in agroecosystems we can improve the conservation of beneficial arthropods such as predators and parasitoids, and enhance the provision of ecosystem services such as natural pest control.
2016-09-23T10:04:24Z
http://hdl.handle.net/2117/90150
Interior-point solver for convex separable block-angular problems
Castro Pérez, Jordi
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solves the normal equations using sparse Cholesky factorizations for the block constraints, and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work, we present an efficient solver based on this algorithm. Some of its features are as follows: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor–corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of normal equations; the preconditioner may include any number of terms of the power series; for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to the structure-conveying modelling language (SML) based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using ℓ1 and ℓ2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to ℓ1 which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is from 2 to 3 orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
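The power-series idea behind the preconditioner can be sketched generically: truncate the Neumann series of an inverse and plug it into PCG. The sketch below uses a simple Jacobi splitting A = D − R on a small dense SPD matrix, not the block-angular linking system of the paper, and all names are ours; it also computes the spectral radius of D⁻¹R, which (as the abstract notes) governs the quality of the truncated series.

```python
import numpy as np

def neumann_prec(A, h):
    """Apply M^{-1} v = sum_{k=0}^{h} (D^{-1} R)^k D^{-1} v,
    where A = D - R and D = diag(A). This truncated power series
    approximates A^{-1} when the spectral radius of D^{-1} R is < 1."""
    d = np.diag(A).copy()
    R = np.diag(d) - A
    def apply(v):
        term = v / d                # k = 0 term: D^{-1} v
        out = term.copy()
        for _ in range(h):
            term = (R @ term) / d   # multiply by D^{-1} R for the next term
            out += term
        return out
    return apply

def pcg(A, b, prec, tol=1e-10, maxit=1000):
    """Standard preconditioned conjugate gradient for SPD A."""
    x = np.zeros_like(b)
    r = b.copy()
    z = prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(0)
n = 80
M = rng.standard_normal((n, n))
A = M @ M.T / n + n * np.eye(n)     # SPD and strongly diagonally dominant
b = rng.standard_normal(n)

# Spectral radius of D^{-1} R: the series converges iff rho < 1.
G = (np.diag(np.diag(A)) - A) / np.diag(A)[:, None]
rho = max(abs(np.linalg.eigvals(G)))
x, its = pcg(A, b, neumann_prec(A, h=2))
```

More terms (larger h) give a better preconditioner at a higher cost per iteration; the solver described in the abstract makes exactly this trade-off configurable.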
2016-09-22T15:31:40Z
http://hdl.handle.net/2117/90149
A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method
Castro Pérez, Jordi; Nasini, Stefano; Saldanha da Gama, Francisco
We propose a cutting-plane approach (namely, Benders decomposition) for a class of capacitated multi-period facility location problems. The novelty of this approach lies in the use of a specialized interior-point method for solving the Benders subproblems. The primal block-angular structure of the resulting linear optimization problems is exploited by the interior-point method, allowing the efficient (either exact or inexact) solution of large instances. The consequences of different modeling conditions and problem specifications on the computational performance are also investigated, both theoretically and empirically, providing a deeper understanding of the significant factors influencing the overall efficiency of the cutting-plane method. The proposed methodology allowed the solution of instances with up to 200 potential locations, one million customers and three periods, resulting in mixed-integer linear optimization problems with up to 600 binary and 600 million continuous variables. Those problems were solved by the specialized approach in less than an hour and a half, outperforming other state-of-the-art methods, which exhausted the 144 gigabytes of available memory in the largest instances.
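The Benders mechanics can be illustrated on a toy capacitated facility-location subproblem: for fixed open/close decisions y, the transportation LP is solved and its duals yield an optimality cut that is tight at the current y and a valid lower bound at any other y. This sketch is ours, with invented data, and it uses a generic LP solver (`scipy.optimize.linprog`) where the paper's contribution is precisely to replace that generic solve with a specialized interior-point method.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[1.0, 4.0],   # shipping cost facility i -> customer j
                 [3.0, 1.0]])
cap = np.array([8.0, 8.0])     # facility capacities
dem = np.array([3.0, 3.0])     # customer demands
nI, nJ = cost.shape

def subproblem(y):
    """Transportation LP for fixed facility decisions y:
    min sum c_ij x_ij  s.t.  sum_i x_ij = d_j,  sum_j x_ij <= cap_i y_i,
    with x flattened row-major (x[i*nJ + j])."""
    A_eq = np.zeros((nJ, nI * nJ))
    for j in range(nJ):
        A_eq[j, j::nJ] = 1.0                 # demand of customer j
    A_ub = np.zeros((nI, nI * nJ))
    for i in range(nI):
        A_ub[i, i * nJ:(i + 1) * nJ] = 1.0   # capacity of facility i
    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=cap * y,
                  A_eq=A_eq, b_eq=dem, method="highs")
    # Duals: pi for demand constraints, sigma (<= 0) for capacities.
    return res.fun, res.eqlin.marginals, res.ineqlin.marginals

z0, pi, sigma = subproblem(np.array([1.0, 1.0]))

# Benders optimality cut generated at y = (1, 1):
cut = lambda y: dem @ pi + (cap * y) @ sigma
```

By LP strong duality the cut equals the subproblem value at the y where it was generated, and by weak duality it lower-bounds the value at every other y, which is what lets the master problem accumulate cuts safely.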
2016-09-22T15:07:07Z
http://hdl.handle.net/2117/89602
A unified approach to authorship attribution and verification
Puig Oriol, Xavier; Font Valverde, Martí; Ginebra Molins, Josep
In authorship attribution, one assigns texts from an unknown author to one of two or more candidate authors by comparing the disputed texts with texts known to have been written by the candidate authors. In authorship verification, one decides whether a text or a set of texts could have been written by a given author. These two problems are usually treated separately. By assuming an open-set classification framework for the attribution problem, contemplating the possibility that none of the candidate authors is the unknown author, the verification problem becomes a special case of the attribution problem. Here both problems are posed as a formal Bayesian multinomial model selection problem and are given a closed-form solution, tailored for categorical data, naturally incorporating text length and dependence in the analysis, and coping well with settings with a small number of training texts. The approach to authorship verification is illustrated by exploring whether a court ruling sentence could have been written by the judge who signs it, and the approach to authorship attribution is illustrated by revisiting the authorship attribution of the Federalist papers and through a small simulation study.
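The closed-form flavour of such Bayesian multinomial comparisons can be sketched with the textbook Dirichlet-multinomial marginal likelihood. This is not the paper's model (which additionally handles text length and word dependence); it is a simplified open-set scorer with invented counts, where an "unknown author" candidate carrying only the prior turns attribution into verification.

```python
from math import lgamma

def log_marginal(counts, alpha):
    """Log marginal likelihood of category counts under a Dirichlet(alpha)
    prior. The multinomial coefficient is omitted: it is identical for every
    candidate author, so it cancels when candidates are compared."""
    a_tot, n_tot = sum(alpha), sum(counts)
    return (lgamma(a_tot) - lgamma(a_tot + n_tot)
            + sum(lgamma(a + c) - lgamma(a) for a, c in zip(alpha, counts)))

def attribute(disputed, training, prior_alpha=1.0):
    """Score each candidate by the posterior predictive of the disputed
    counts given that candidate's training counts. The '<unknown>' entry
    (prior only, no training data) makes the classification open-set."""
    scores = {}
    for author, train in training.items():
        scores[author] = log_marginal(disputed,
                                      [prior_alpha + t for t in train])
    scores["<unknown>"] = log_marginal(disputed,
                                       [prior_alpha] * len(disputed))
    return max(scores, key=scores.get), scores

# Invented function-word counts for two candidates and a disputed text:
training = {"A": [90, 5, 5], "B": [10, 10, 80]}
disputed = [45, 3, 2]
best, scores = attribute(disputed, training)
```

If no candidate's score beats the "<unknown>" baseline, the disputed text is attributed to none of them, which is exactly the verification decision.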
2016-09-06T10:09:51Z
http://hdl.handle.net/2117/89135
Reliability versus mass optimization of CO2 extraction technologies for long duration missions
Detrell Domingo, Gisela; Griful Ponsati, Eulàlia; Messerschmid, Ernst
The aim of this paper is to optimize the reliability and mass of three CO2 extraction technologies/components: the 4-Bed Molecular Sieve, the Electrochemical Depolarized Concentrator and the Solid Amine Water Desorption. The first one is currently used in the International Space Station, and the last two are being developed and could be used for future long duration missions. This work is part of a complex study of the reliability of the Environmental Control and Life Support System (ECLSS). The result of this paper is a methodology to analyze reliability and mass at the component level, which is used here for the CO2 extraction technologies but can be applied to ECLSS technologies that perform other tasks, such as oxygen generation or water recycling, and which will be a required input for the analysis of an entire ECLSS. The key parameter in evaluating any system to be used in space is mass, as it is directly related to launch cost. Moreover, for long duration missions, reliability plays an even more important role, as no resupply or rescue mission is taken into consideration. Each technology is studied as a repairable system, for which the number of spare parts to be taken on a specific mission must be selected to maximize the reliability and minimize the mass of the system. The problem faced is a Multi-Objective Optimization Problem (MOOP), which does not have a single solution. Thus, Pareto-optimal solutions of the MOOP, those that cannot be improved in one of the two objectives without degrading the other, are found for each selected technology. The solutions of the MOOP for the three technologies are analyzed and compared, considering other parameters such as the type of mission, the maturity of the technology and potential interactions/synergies with other technologies of the ECLSS.
2016-07-25T10:43:03Z
http://hdl.handle.net/2117/86925
REVASCAT: a randomized trial of revascularization with SOLITAIRE FR® device vs. best medical therapy in the treatment of acute stroke due to anterior circulation large vessel occlusion presenting within eight-hours of symptom onset
Molina, Carlos A.; Chamorro, Ángel; Rovira, Alex; de Miquel, Maria Angeles; Serena Leal, Joaquín; Sanroman, Luis; Jovin, Tudor G.; Dávalos Errando, Antoni; Cobo Valeri, Erik
REVASCAT is a prospective, multicenter, randomized trial seeking to establish whether subjects meeting the following main inclusion criteria: age 18-80, baseline National Institutes of Health Stroke Scale (NIHSS) score ≥ 6, evidence of intracranial internal carotid artery or proximal (M1 segment) middle cerebral artery occlusion, Alberta Stroke Program Early Computed Tomography score of > 7 on non-contrast CT or > 6 on diffusion-weighted magnetic resonance imaging, ineligible for or with persistent occlusion after intravenous alteplase, and procedure start within 8 hours from symptom onset, have higher rates of favorable outcome when treated with the Solitaire™ FR embolectomy device compared to standard medical therapy alone. The primary end-point, based on intention-to-treat criteria, is the distribution of modified Rankin Scale scores at 90 days. Projected sample size is 690 patients. Estimated common odds ratio is 1.615. Randomization is performed under a minimization process using age, baseline NIHSS, therapeutic window, occlusion location and investigational center. The study follows a sequential analysis (triangular model) with the first approach to test efficacy at 174 patients and subsequent analyses (if necessary) at 346, 518, and 690 subjects. Secondary end-points are infarct volume evaluated on CT at 24 h, dramatic early favorable response, defined as NIHSS of 0–2 or NIHSS improvement ≥ 8 points at 24 h, and successful recanalization in the Solitaire arm according to the thrombolysis in cerebral infarction (TICI) classification, defined as TICI 2b or 3. Safety variables are mortality at 90 days, symptomatic intracranial haemorrhage rates at 24 hours and procedure-related complications.
2016-05-11T10:43:51Z
http://hdl.handle.net/2117/86538
A characterization of the innovations of first order autoregressive models
Moriña, David; Puig, Pedro; Valero Baya, Jordi
Suppose that X_t follows a simple AR(1) model, that is, it can be expressed as X_t = φX_{t-1} + W_t, where W_t is a white noise with mean μ and variance σ². There are many examples in practice where these assumptions hold very well. We shall show that the autocorrelation function of a suitable transformation of X_t characterizes the distribution of W_t.
"The final publication is available at Springer via http://dx.doi.org/10.1007/s00184-014-0497-5"
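Why a transformation is needed can be checked by simulation: two AR(1) series with the same φ but different innovation distributions (matched mean and variance) share the same autocorrelation function φ^k, so the second-order structure of the series itself cannot identify the innovation law. The parameters and helper names below are illustrative.

```python
import numpy as np

def simulate_ar1(phi, innovations):
    """X_t = phi * X_{t-1} + W_t, started at 0, with burn-in discarded."""
    x = np.zeros(len(innovations))
    for t in range(1, len(innovations)):
        x[t] = phi * x[t - 1] + innovations[t]
    return x[1000:]

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).sum() / (x * x).sum()

rng = np.random.default_rng(42)
n, phi = 200_000, 0.6
w_normal = rng.standard_normal(n)              # N(0, 1) innovations
w_expo = rng.exponential(1.0, n) - 1.0         # same mean 0 and variance 1

acf_normal = acf(simulate_ar1(phi, w_normal), 1)
acf_expo = acf(simulate_ar1(phi, w_expo), 1)
```

Both lag-1 autocorrelations come out near φ = 0.6 despite the very different (symmetric vs. skewed) innovations, which is exactly the identification gap the paper's characterization addresses.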
2016-05-03T14:42:27Z
http://hdl.handle.net/2117/86293
Discussion of “Analysis of spatio-temporal mobile phone data: a case study in the metropolitan area of Milan” by Piercesare Secchi, Simone Vantini and Valeria Vitelli
Delicado Useros, Pedro Francisco
The paper under discussion is a very well-written and interesting piece of work by Secchi et al. (2015) dealing with spatio-temporal data on mobile phone use in the area of Milan. I congratulate the authors for such a stimulating and interesting paper. It clearly points out that Erlang data on mobile phone use contain a large amount of rich information. The paper is an excellent example of statistical analysis of Big Data. I discuss briefly two alternative ways of dimension reduction of spatio-temporal data and illustrate them with artificial data that has been simulated according to the scheme proposed by the authors.
2016-04-27T17:40:12Z
http://hdl.handle.net/2117/86110
Towards a generic benchmarking platform for origin–destination flows estimation/updating algorithms: design, demonstration and validation
Antoniou, Constantinos; Barceló Bugeda, Jaime; Breen, Martijn; Bullejos, Manuel; Casas, Jordi; Cipriani, Ernesto; Ciuffo, Biagio; Djukic, Tamara; Hoogendoorn, Serge; Marzano, Vittorio; Montero Mercadé, Lídia; Nigro, Marialisa; Perarnau, Josep; Punzo, Vincenzo; Toledo, Tomer; van Lint, Hans
Estimation/updating of origin-destination (OD) flows and other traffic state parameters is a classical, widely adopted procedure in transport engineering, in both off-line and on-line contexts. Notwithstanding the numerous approaches proposed in the literature, there is still room for considerable improvement, also leveraging the unprecedented opportunity offered by information and communication technologies and big data. A key issue relates to the unobservability of OD flows in real networks – except in closed highway systems – thus leading to inherent difficulties in measuring the performance of OD flow estimation/updating methods and algorithms. Starting from these premises, the paper proposes a common evaluation and benchmarking framework, providing a synthetic test bed, which enables implementation and comparison of OD estimation/updating algorithms and methodologies under "standardized" conditions. The framework, implemented in a platform available to interested parties upon request, has been flexibly designed and allows comparing a variety of approaches under various settings and conditions. Specifically, the structure and the key features of the framework are presented, along with a detailed experimental design for the application of different dynamic OD flow estimation algorithms. By way of example, applications to both off-line/planning and on-line algorithms are presented, together with a demonstration of the extensibility of the presented framework to accommodate additional data sources.
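The core estimation step that such a test bed exercises can be sketched in its simplest off-line form: update a prior (target) OD matrix so that assigned flows match observed link counts, posed as one regularized least-squares problem. The toy network, assignment matrix and weight below are our own assumptions, not part of the benchmarking platform.

```python
import numpy as np

# Assignment matrix A maps OD flows to link counts (invented toy network):
# rows = counted links, columns = OD pairs; A[l, k] is the fraction of OD
# pair k routed over link l (here plain 0/1 route incidence).
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
x_true = np.array([10.0, 20.0, 5.0, 15.0])
y = A @ x_true                                 # observed link counts (noise-free)
x_prior = np.array([12.0, 17.0, 6.0, 17.0])    # outdated target OD matrix

w = 0.5  # weight tying the update to the prior (more links <-> smaller w)
# Solve  min ||A x - y||^2 + w ||x - x_prior||^2  as one stacked least squares.
A_stack = np.vstack([A, np.sqrt(w) * np.eye(A.shape[1])])
b_stack = np.concatenate([y, np.sqrt(w) * x_prior])
x_hat, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)
```

The prior term is what resolves the unobservability the abstract highlights: with fewer counted links than OD pairs, the link counts alone leave the OD flows underdetermined.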
2016-04-22T17:11:10Z
http://hdl.handle.net/2117/86075
FARMS: a new algorithm for variable selection
Pérez Álvarez, Susana; Gómez Melis, Guadalupe; Brander, Christian
Large datasets including an extensive number of covariates are generated these days in many different situations, for instance, in detailed genetic studies of outbred human populations or in complex analyses of immune responses to different infections. Aiming at informing clinical interventions or vaccine design, methods for variable selection that identify the variables with the optimal prediction performance for a specific outcome are crucial. However, testing all potential subsets of variables is not feasible, and alternatives to existing methods are needed. Here, we describe a new method to handle such complex datasets, referred to as FARMS, that combines forward and all-subsets regression for model selection. We apply FARMS to a host genetic and immunological dataset of over 800 individuals from Lima (Peru) and Durban (South Africa) who were HIV infected and tested for antiviral immune responses. This dataset includes more than 500 explanatory variables: around 400 variables with information on HIV immune reactivity and around 100 individual genetic characteristics. We have implemented FARMS in the R statistical language and we show that FARMS is fast and outcompetes other comparable, commonly used approaches, thus providing a new tool for the thorough analysis of complex datasets without the need for massive computational infrastructure.
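The forward-plus-all-subsets combination can be sketched generically: greedy forward selection shrinks the candidate set to a small pool, and an exhaustive subset search within that pool then becomes tractable. This is not the authors' FARMS implementation (which is in R, and whose selection criterion may differ); it is a minimal NumPy sketch using AIC on invented data.

```python
import numpy as np
from itertools import combinations

def aic(X, y, subset):
    """AIC of an OLS fit on the given column subset (intercept included)."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * (len(subset) + 1)

def forward_then_subsets(X, y, pool_size=4):
    """Forward selection builds a small candidate pool; an exhaustive
    all-subsets search inside the pool then picks the AIC-best model.
    All-subsets over the pool costs 2^pool_size fits, not 2^p."""
    remaining, pool = set(range(X.shape[1])), []
    while remaining and len(pool) < pool_size:
        best_j = min(remaining, key=lambda j: aic(X, y, pool + [j]))
        pool.append(best_j)
        remaining.discard(best_j)
    best = min((s for r in range(1, len(pool) + 1)
                for s in combinations(pool, r)),
               key=lambda s: aic(X, y, list(s)))
    return set(best)

rng = np.random.default_rng(7)
n, p = 300, 10
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.5 * rng.standard_normal(n)
selected = forward_then_subsets(X, y)
```

The pool size is the knob that trades the greediness of pure forward selection against the combinatorial cost of a full subset search.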
2016-04-21T14:44:57Z