Journal articles
http://hdl.handle.net/2117/3942
http://hdl.handle.net/2117/89135
Reliability versus mass optimization of CO2 extraction technologies for long duration missions
Detrell Domingo, Gisela; Griful Ponsati, Eulàlia; Messerschmid, Ernst
The aim of this paper is to optimize the reliability and mass of three CO2 extraction technologies/components: the 4-Bed Molecular Sieve, the Electrochemical Depolarized Concentrator and the Solid Amine Water Desorption. The first is currently used on the International Space Station; the last two are under development and could be used for future long duration missions. This work is part of a broader study of Environmental Control and Life Support System (ECLSS) reliability. The result of this paper is a methodology to analyze reliability and mass at the component level, applied here to the CO2 extraction technologies but equally applicable to ECLSS technologies that perform other tasks, such as oxygen generation or water recycling, which will be a required input for the analysis of an entire ECLSS. The key parameter in evaluating any system to be used in space is mass, as it is directly related to launch cost. Moreover, for long duration missions reliability plays an even more important role, as no resupply or rescue mission is taken into consideration. Each technology is studied as a repairable system, where the number of spare parts to be taken for a specific mission must be selected to maximize the reliability and minimize the mass of the system. The problem faced is a Multi-Objective Optimization Problem (MOOP), which does not have a single solution. Thus, optimum solutions of the MOOP, those that cannot be improved in one of the two objectives without degrading the other, are found for each selected technology. The solutions of the MOOP for the three technologies are analyzed and compared, considering other parameters such as the type of mission, the maturity of the technology and potential interactions/synergies with other ECLSS technologies.
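The spares trade-off described in the abstract above can be illustrated with a short sketch. This is not the paper's model: purely for illustration it assumes failures follow a Poisson process, so reliability with n spares is the probability of at most n failures over the mission, and the failure rate, mission length and mass figures are made-up placeholders.

```python
import math

def reliability(n_spares, failure_rate, mission_days):
    # P(number of failures <= n_spares) under an assumed Poisson failure process
    lam = failure_rate * mission_days
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(n_spares + 1))

def pareto_front(options):
    # keep the (mass, reliability) points not dominated by any other point
    front = [(m, r) for m, r in options
             if not any(m2 <= m and r2 >= r and (m2, r2) != (m, r)
                        for m2, r2 in options)]
    return sorted(front)

# hypothetical technology: 0.002 failures/day, 900-day mission,
# 40 kg base mass, 8 kg per spare part
opts = [(40 + 8 * n, reliability(n, 0.002, 900)) for n in range(6)]
front = pareto_front(opts)
```

Because adding a spare always increases both reliability and mass, every spare count is non-dominated here; the interesting comparisons arise when several technologies' fronts are overlaid, as the paper does.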
http://hdl.handle.net/2117/86925
REVASCAT: a randomized trial of revascularization with SOLITAIRE FR® device vs. best medical therapy in the treatment of acute stroke due to anterior circulation large vessel occlusion presenting within eight-hours of symptom onset
Molina, Carlos A.; Chamorro, Ángel; Rovira, Alex; de Miquel, Maria Angeles; Serena Leal, Joaquín; Sanroman, Luis; Jovin, Tudor G.; Dávalos Errando, Antoni; Cobo Valeri, Erik
REVASCAT is a prospective, multicenter, randomized trial seeking to establish whether subjects meeting the following main inclusion criteria: age 18–80, baseline National Institutes of Health Stroke Scale (NIHSS) score ≥ 6, evidence of intracranial internal carotid artery or proximal (M1 segment) middle cerebral artery occlusion, Alberta Stroke Program Early Computed Tomography score of ≥ 7 on non-contrast CT or ≥ 6 on diffusion-weighted magnetic resonance imaging, ineligible for or with persistent occlusion after intravenous alteplase, and procedure start within 8 hours from symptom onset, have higher rates of favorable outcome when treated with the Solitaire™ FR embolectomy device compared to standard medical therapy alone. The primary end-point, based on intention-to-treat criteria, is the distribution of modified Rankin Scale scores at 90 days. Projected sample size is 690 patients. Estimated common odds ratio is 1.615. Randomization is performed under a minimization process using age, baseline NIHSS, therapeutic window, occlusion location and investigational center. The study follows a sequential analysis (triangular model) with the first approach to test efficacy at 174 patients and subsequent analyses (if necessary) at 346, 518, and 690 subjects. Secondary end-points are infarct volume evaluated on CT at 24 h; dramatic early favorable response, defined as NIHSS of 0–2 or NIHSS improvement ≥ 8 points at 24 h; and successful recanalization in the Solitaire arm according to the thrombolysis in cerebral infarction (TICI) classification, defined as TICI 2b or 3. Safety variables are mortality at 90 days, symptomatic intracranial haemorrhage rates at 24 hours, and procedure-related complications.
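The minimization randomization mentioned in the abstract above can be sketched as follows. This is a generic, deterministic Pocock–Simon-style illustration, not the trial's actual procedure (real trials typically add a random element to the arm choice); the factor names and levels are hypothetical placeholders.

```python
from collections import defaultdict

ARMS = ("thrombectomy", "medical")
# hypothetical stratification factors standing in for the trial's five
FACTORS = ("age_group", "nihss_group", "window", "occlusion_site", "center")

# per-arm counts of each (factor, level) pair seen so far
counts = {arm: defaultdict(int) for arm in ARMS}

def imbalance(arm, patient):
    # total across-factor imbalance if `patient` were assigned to `arm`
    total = 0
    for f in FACTORS:
        level = (f, patient[f])
        added = {a: counts[a][level] + (1 if a == arm else 0) for a in ARMS}
        total += max(added.values()) - min(added.values())
    return total

def assign(patient):
    # deterministic minimization: pick the arm with the smaller projected imbalance
    arm = min(ARMS, key=lambda a: imbalance(a, patient))
    for f in FACTORS:
        counts[arm][(f, patient[f])] += 1
    return arm
```

Two identical patients in a row end up in different arms, which is exactly the balancing behaviour minimization is designed to produce.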
http://hdl.handle.net/2117/86538
A characterization of the innovations of first order autoregressive models
Moriña, David; Puig, Pedro; Valero Baya, Jordi
Suppose that X_t follows a simple AR(1) model, that is, it can be expressed as X_t = φX_{t−1} + W_t, where W_t is a white noise with mean μ and variance σ². There are many examples in practice where these assumptions hold very well. We shall show that the autocorrelation function of a process derived from X_t characterizes the distribution of W_t.
“The final publication is available at Springer via http://dx.doi.org/10.1007/s00184-014-0497-5”
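A quick simulation illustrates the AR(1) setting of the abstract above: for a stationary AR(1) process the theoretical autocorrelation at lag k is φ^k, which a sample estimate should approximately recover. This sketch uses Gaussian innovations purely as an example; the choice of innovation distribution is exactly what the paper's characterization result is about.

```python
import random

def simulate_ar1(phi, mu, sigma, n, seed=1):
    # X_t = phi * X_{t-1} + W_t, with W_t ~ Normal(mu, sigma) innovations
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(mu, sigma))
    return x

def acf(x, lag):
    # sample autocorrelation of the series at the given lag
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t + lag] - m) for t in range(len(x) - lag))
    return cov / var

series = simulate_ar1(phi=0.6, mu=0.0, sigma=1.0, n=20000)
```

With φ = 0.6 the sample autocorrelations at lags 1 and 2 should sit close to 0.6 and 0.36.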
http://hdl.handle.net/2117/86293
Discussion of “Analysis of spatio-temporal mobile phone data: a case study in the metropolitan area of Milan” by Piercesare Secchi, Simone Vantini and Valeria Vitelli
Delicado Useros, Pedro Francisco
The paper under discussion is a very well-written and interesting piece of work by Secchi et al. (2015) dealing with spatio-temporal data on mobile phone use in the area of Milan. I congratulate the authors for such a stimulating and interesting paper. It clearly points out that Erlang data on mobile phone use contain a large amount of rich information. The paper is an excellent example of statistical analysis of Big Data. I discuss briefly two alternative ways of dimension reduction of spatio-temporal data and illustrate them with artificial data that has been simulated according to the scheme proposed by the authors.
http://hdl.handle.net/2117/86110
Towards a generic benchmarking platform for origin–destination flows estimation/updating algorithms: design, demonstration and validation
Antoniou, Constantinos; Barceló Bugeda, Jaime; Breen, Martijn; Bullejos, Manuel; Casas, Jordi; Cipriani, Ernesto; Ciuffo, Biagio; Djukic, Tamara; Hoogendoorn, Serge; Marzano, Vittorio; Montero Mercadé, Lídia; Nigro, Marialisa; Perarnau, Josep; Punzo, Vincenzo; Toledo, Tomer; van Lint, Hans
Estimation/updating of origin-destination (OD) flows and other traffic state parameters is a classical, widely adopted procedure in transport engineering, both in off-line and in on-line contexts. Notwithstanding numerous approaches proposed in the literature, there is still room for considerable improvements, also leveraging the unprecedented opportunity offered by information and communication technologies and big data. A key issue relates to the unobservability of OD flows in real networks – except from closed highway systems – thus leading to inherent difficulties in measuring performance of OD flows estimation/updating methods and algorithms. Starting from these premises, the paper proposes a common evaluation and benchmarking framework, providing a synthetic test bed, which enables implementation and comparison of OD estimation/updating algorithms and methodologies under “standardized” conditions. The framework, implemented in a platform available to interested parties upon request, has been flexibly designed and allows comparing a variety of approaches under various settings and conditions. Specifically, the structure and the key features of the framework are presented, along with a detailed experimental design for the application of different dynamic OD flow estimation algorithms. By way of example, applications to both off-line/planning and on-line algorithms are presented, together with a demonstration of the extensibility of the presented framework to accommodate additional data sources.
http://hdl.handle.net/2117/86075
FARMS: a new algorithm for variable selection
Pérez Álvarez, Susana; Gómez Melis, Guadalupe; Brander, Christian
Large datasets including an extensive number of covariates are generated these days in many different situations, for instance, in detailed genetic studies of outbred human populations or in complex analyses of immune responses to different infections. Aiming at informing clinical interventions or vaccine design, methods for variable selection that identify the variables with the best prediction performance for a specific outcome are crucial. However, testing all potential subsets of variables is not feasible and alternatives to existing methods are needed. Here, we describe a new method to handle such complex datasets, referred to as FARMS, that combines forward and all-subsets regression for model selection. We apply FARMS to a host genetic and immunological dataset of over 800 HIV-infected individuals from Lima (Peru) and Durban (South Africa) who were tested for antiviral immune responses. This dataset includes more than 500 explanatory variables: around 400 variables with information on HIV immune reactivity and around 100 individual genetic characteristics. We have implemented FARMS in the R statistical language and show that FARMS is fast and outcompetes other comparable, commonly used approaches, thus providing a new tool for the thorough analysis of complex datasets without the need for massive computational infrastructure.
http://hdl.handle.net/2117/86017
Optimal level sets for bivariate density representation
Delicado Useros, Pedro Francisco; Vieu, Philippe
In bivariate density representation there is an extensive literature on level set estimation when the level is fixed, but much less on choosing which level is (or which levels are) of most interest. This is an important practical question, which depends on the kind of problem at hand as well as the feature one wishes to highlight in the density; answering it requires both a definition of what the optimal level is and a method for finding it. We consider two scenarios for this problem. The first corresponds to situations in which one has just a single density function to be represented. However, as a result of technical progress in data collection, problems are emerging in which one has to deal with a sample of densities. In these situations the need arises to develop a joint representation for all these densities, and this is the second scenario considered in this paper. For each case, we provide consistency results for the estimated levels and present extensive Monte Carlo simulation experiments illustrating the interest and feasibility of the proposed method. © 2015 Elsevier Inc. All rights reserved.
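One common way to pick a density level, which may help fix ideas (it is not necessarily the optimality criterion of the paper above), is the highest-density-region rule: choose the level c so that the set {f ≥ c} holds a given probability mass. A stdlib-only sketch with a plug-in Gaussian kernel density estimate:

```python
import math

def kde(points, x, y, h=0.5):
    # bivariate Gaussian kernel density estimate at (x, y), bandwidth h
    n = len(points)
    s = sum(math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * h * h))
            for px, py in points)
    return s / (n * 2 * math.pi * h * h)

def hdr_level(points, coverage, h=0.5):
    # level c such that {f_hat >= c} holds roughly `coverage` probability mass:
    # evaluate the density at the sample points and take the (1-coverage) quantile
    vals = sorted(kde(points, px, py, h) for px, py in points)
    return vals[int((1 - coverage) * len(vals))]
```

Asking for less coverage yields a higher level, i.e. a smaller, denser region, which is what one would plot to highlight modes.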
http://hdl.handle.net/2117/85774
Testing for Hardy–Weinberg equilibrium at biallelic genetic markers on the X chromosome
Graffelman, Jan; Weir, B.S.
Testing genetic markers for Hardy–Weinberg equilibrium (HWE) is an important tool for detecting genotyping errors in large-scale genotyping studies. For markers on the X chromosome, typically the χ2 or exact test is applied to the females only, and the hemizygous males are considered to be uninformative. In this paper we show that the males are relevant, because a difference in allele frequency between males and females may indicate that HWE does not hold. The testing of markers on the X chromosome has received little attention, and in this paper we lay down the foundation for testing biallelic X-chromosomal markers for HWE. We develop four frequentist statistical test procedures for X-linked markers that take both males and females into account: the χ2 test, likelihood ratio test, exact test and permutation test. Exact tests that include males are shown to have a better Type I error rate. Empirical data from the GENEVA project on venous thromboembolism is used to illustrate the proposed tests. Results obtained with the new tests differ substantially from tests that are based on female genotype counts only. The new tests detect differences in allele frequencies and seem able to uncover additional genotyping error that would have gone unnoticed in HWE tests based on females only.
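The joint use of male and female counts described above can be made concrete with a χ2 sketch. This follows the general idea of the paper's χ2 test but is an illustrative simplification, not the authors' implementation: expected male counts come from the pooled allele frequency, and expected female genotype counts from Hardy–Weinberg proportions at that same frequency, so both a male/female frequency difference and female disequilibrium inflate the statistic.

```python
def x_hwe_chisq(ma, mb, faa, fab, fbb):
    # chi-square statistic for HWE at a biallelic X-chromosomal marker,
    # using male (hemizygous) and female genotype counts jointly;
    # assumes all expected counts are positive
    m = ma + mb                  # number of males
    f = faa + fab + fbb          # number of females
    p = (ma + 2 * faa + fab) / (m + 2 * f)   # pooled A-allele frequency
    q = 1 - p
    observed = [ma, mb, faa, fab, fbb]
    expected = [m * p, m * q, f * p * p, 2 * f * p * q, f * q * q]
    # approximately chi-square distributed under H0 (reference df per the paper)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Perfectly equilibrated counts give a statistic of zero, while a male–female allele frequency difference alone, invisible to a females-only test, makes it positive.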
http://hdl.handle.net/2117/85771
Correlation between tobacco control policies, consumption of rolled tobacco and e-cigarettes, and intention to quit conventional tobacco, in Europe
Lidón-Moyano, Cristina; Martín Sánchez, Juan Carlos; Saliba, Patrick; Graffelman, Jan; Martínez Sánchez, Jose Maria
http://hdl.handle.net/2117/85754
A mathematical programming approach for different scenarios of bilateral bartering
Nasini, Stefano; Castro Pérez, Jordi; Fonseca Casas, Pau
The analysis of markets with indivisible goods and fixed exogenous prices has played an important role in economic models, especially in relation to wage rigidity and unemployment. This paper provides a novel mathematical-programming-based approach to study pure exchange economies where discrete amounts of commodities are exchanged at fixed prices. Barter processes, consisting of sequences of elementary reallocations of pairs of commodities among pairs of agents, are formalized as local searches converging to equilibrium allocations. A direct application of the analysed processes in the context of computational economics is provided, along with a Java implementation of the described approaches.
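The elementary-reallocation local search mentioned above can be sketched directly. This is an illustrative simplification, not the paper's algorithm (which the authors implement in Java): agents have linear utilities, only price-balanced one-for-one swaps between goods of equal unit price are considered, and the search stops when no swap strictly benefits both agents.

```python
def barter_local_search(alloc, utils, prices):
    """Greedy barter: repeat price-balanced 1-for-1 swaps of goods between
    pairs of agents while both agents strictly gain utility.
    alloc[a][g] = integer units of good g held by agent a;
    utils[a][g] = agent a's (linear) per-unit utility for good g."""
    n_agents, n_goods = len(alloc), len(alloc[0])
    improved = True
    while improved:
        improved = False
        for i in range(n_agents):
            for j in range(n_agents):
                for g in range(n_goods):
                    for h in range(n_goods):
                        if (i != j and g != h and prices[g] == prices[h]
                                and alloc[i][g] > 0 and alloc[j][h] > 0):
                            # i gives one unit of g to j for one unit of h
                            gain_i = utils[i][h] - utils[i][g]
                            gain_j = utils[j][g] - utils[j][h]
                            if gain_i > 0 and gain_j > 0:
                                alloc[i][g] -= 1; alloc[j][g] += 1
                                alloc[j][h] -= 1; alloc[i][h] += 1
                                improved = True
    return alloc
```

With two agents who each hold only the good the other prefers, the search trades unit by unit until each holds the good they value more, a tiny equilibrium allocation.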