Journal articles
http://hdl.handle.net/2117/3942
2016-12-07T14:39:17Z

Cost-effective analysis for selecting energy efficiency measures for refurbishment of residential buildings in Catalonia
http://hdl.handle.net/2117/96940
Ortiz, Joana; Fonseca Casas, Antoni; Salom, Jaume; Garrido Soriano, Núria; Fonseca Casas, Pau
This paper presents the results of a detailed method for developing cost-optimal studies for the energy refurbishment of residential buildings. The method forms part of an innovative approach: a two-step evaluation considering thermal comfort, energy and economic criteria. The first step, the passive evaluation, was presented previously [1], and its results are used to develop the active evaluation, which is the focus of this paper. The active evaluation develops a cost-optimal analysis to compare a set of passive and active measures for the refurbishment of residential buildings. The cost-optimal methodology follows the European Directives and analyses the measures from the point of view of non-renewable primary energy consumption and global costs over 30 years. The energy uses included in the study are heating, domestic hot water, cooling, lighting and appliances. In addition, the results have been represented following the energy labelling scale. The paper shows the results for a multi-family building built between 1990 and 2007 and located in Barcelona, with two configurations: with and without natural ventilation. The method provides technical and economic information about the energy efficiency measures, with the objective of supporting the decision process.
© 2016. This version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
2016-11-21T16:38:50Z
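The cost-optimal comparison described above boils down to ranking measures by global cost, i.e. investment plus discounted running costs over the 30-year horizon, against primary energy use. Below is a minimal Python sketch of that calculation; all measures, prices and energy figures are purely illustrative assumptions, not data from the paper.

```python
# Hedged sketch: comparing refurbishment measures by global cost over a
# 30-year horizon, in the spirit of the EU cost-optimal methodology.
# Measures, costs and energy figures below are illustrative, not from the paper.

def global_cost(investment, annual_cost, years=30, discount_rate=0.03):
    """Investment plus discounted annual running costs (net present value)."""
    npv_running = sum(annual_cost / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return investment + npv_running

# (name, investment EUR, annual energy cost EUR, primary energy kWh/m2-year)
measures = [
    ("baseline",          0, 1200, 180.0),
    ("wall insulation", 9000,  800, 120.0),
    ("new boiler",      4000,  950, 140.0),
]

for name, inv, annual, energy in measures:
    print(f"{name:15s} global cost = {global_cost(inv, annual):10.0f} EUR,"
          f" primary energy = {energy} kWh/m2-year")
```

Plotting each measure's (primary energy, global cost) pair yields the familiar cost-optimal cloud, whose lower envelope identifies the cost-optimal measures.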

Global and local distance-based generalized linear models
http://hdl.handle.net/2117/96440
Boj, Eva; Caballé, Adrià; Delicado Useros, Pedro Francisco; Esteve, Anna; Fortiana, Josep
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first to the generalized linear model framework; then, a nonparametric version of these models is proposed by means of local fitting. Distances between individuals are the only predictor information needed to fit these models. Therefore, they are applicable, among other settings, to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. An implementation is provided by the R package dbstats, which also implements other distance-based prediction methods. Supplementary material for this article, which reproduces all of its results, is available online.
2016-11-09T16:30:39Z
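The idea of predicting from distances alone can be illustrated with a kernel-weighted local fit. The sketch below is not the dbstats algorithm, only a minimal Python illustration of building a local prediction from a distance matrix, with made-up toy data.

```python
import numpy as np

# Hedged sketch: a kernel-weighted local prediction that uses only
# inter-individual distances, in the spirit of local distance-based models.
# This is NOT the dbstats algorithm, just an illustration of predicting
# from a distance matrix alone.

def local_distance_predict(D_new, y_train, bandwidth=1.0):
    """Predict responses for new points given only their distances to the
    training individuals (rows of D_new) and the training responses."""
    # Gaussian kernel weights computed from distances; raw covariates
    # are never needed, so any metric (e.g. Gower, functional) would do.
    W = np.exp(-(D_new / bandwidth) ** 2)
    return (W @ y_train) / W.sum(axis=1)

# Toy data: 1-D coordinates are used here only to *build* the distances.
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.0, 1.0, 4.0, 9.0])
x_new = np.array([1.5])
D_new = np.abs(x_new[:, None] - x_train[None, :])

print(local_distance_predict(D_new, y_train, bandwidth=0.5))
```

The prediction at 1.5 falls between the responses of its two nearest training points, as expected from local averaging.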

Analysis of applications to improve the energy savings in residential buildings based on Systemic Quality Model
http://hdl.handle.net/2117/91044
Fonseca Casas, Antoni; Fonseca Casas, Pau; Casanovas Garcia, Josep
Creating a definition of the features and the architecture of a new Energy Management Software (EMS) is complex, because different professionals will be involved in creating that definition and in using the tool. To simplify this definition and aid in the eventual selection of an existing EMS to fit a specific need, a set of metrics that considers the primary issues and drawbacks of the EMS is decisive. This study proposes a set of metrics to evaluate and compare EMS applications. Using these metrics will allow professionals to highlight the tendencies and detect the drawbacks of current EMS applications, and eventually to develop new EMS applications based on the results of the analysis. This study presents a list of the applications to be examined and describes the primary issues to be considered in the development of a new application. It follows the Systemic Quality Model (SQMO), which has been used as a starting point to develop new EMS but can also be used to select an existing EMS that fits the goals of a company. Using this type of analysis, we were able to detect the primary features desired in EMS software. These features are numerically scaled, allowing professionals to select the EMS that best fits their purposes. This allows the development of EMS using an iterative and user-centric approach. We can apply this methodology to guide the development of future EMS and to define the priorities that are desired in this type of software.
2016-10-25T08:50:13Z

Effect of different dispersing agents in the non-isothermal kinetics and thermomechanical behavior of PET/TiO2 composites
http://hdl.handle.net/2117/90738
Cayuela Marín, Diana; Cot Valle, María Ana; Algaba Joaquín, Inés María; Manich Bou, Albert M.
This work is based on the analysis of the influence of dispersing agents on the non-isothermal kinetics, thermomechanical behavior and dispersing action of PET/TiO2 nanocomposites. The influence of two montanic waxes and an amide wax, used as dispersing agents, on the nucleating effect of the nanoparticles is studied. The dispersing agents are the following: a) a partly saponified ester of montanic acids (PSEMA), b) an ester of montanic acids with multifunctional alcohols (MAWMA) and c) an amide wax based on N,N′-bis-stearoyl ethylenediamine (AW). The non-isothermal kinetics based on the Avrami method revealed that MAWMA and PSEMA favor the nucleating effect of the nanoparticles when they are included in PET. Birefringence microscopy points out the good dispersing capacity of MAWMA and AW, and the thermomechanical analysis confirmed that the ester of montanic acids with multifunctional alcohols (MAWMA) shows the best dispersing properties and best promotes the nucleating effect of the TiO2 nanoparticles when used for PET/TiO2 nanocomposite production.
2016-10-13T12:07:45Z
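The Avrami analysis mentioned above rests on the linearized Avrami equation, ln(-ln(1 - X)) = ln k + n ln t, from which the exponent n follows by linear regression of the transformed crystallinity data. A minimal sketch with synthetic data (not measurements from this work):

```python
import math

# Hedged sketch: estimating the Avrami exponent n from crystallinity data
# via the linearized Avrami equation  ln(-ln(1 - X)) = ln(k) + n*ln(t).
# The data below are synthetic, not measurements from the paper.

def fit_avrami(times, X):
    """Least-squares fit of n and ln(k) on the linearized Avrami plot."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - x)) for x in X]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    n = (sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
         / sum((a - mean_x) ** 2 for a in xs))
    ln_k = mean_y - n * mean_x
    return n, ln_k

# Synthetic data generated with n = 3, k = 0.01 (the fit recovers them)
times = [1.0, 2.0, 3.0, 4.0, 5.0]
X = [1.0 - math.exp(-0.01 * t ** 3) for t in times]
n, ln_k = fit_avrami(times, X)
print(n, math.exp(ln_k))
```

A larger fitted n is conventionally read as more three-dimensional, nucleation-driven crystal growth, which is how a nucleating effect of the nanoparticles would show up in the data.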

Herbivores, saprovores and natural enemies respond differently to within-field plant characteristics of wheat fields
http://hdl.handle.net/2117/90156
Caballero López, Berta; Blanco Moreno, José M.; Pujade Villar, Juli; Ventura, Daniel; Sánchez Espigares, Josep Anton; Sans Serra, Francesc Xavier
Understanding ecosystem functioning in a farmland context by considering the variety of ecological strategies employed by arthropods is a core challenge in ecology and conservation science. We adopted a functional approach in an assessment of the relationship between three functional plant groups (grasses, broadleaves and legumes) and the arthropod community in winter wheat fields in a Mediterranean dryland context. We sampled the arthropod community as thoroughly as possible with a combination of suction catching and flight-interception trapping. All specimens were identified to the appropriate taxonomic level (family, genus or species) and classified according to their form of feeding: chewing herbivores, sucking herbivores, flower consumers, omnivores, saprovores, parasitoids or predators. We found that a richer plant community favoured a greater diversity of herbivores and, in turn, that the richness of herbivores and saprovores enhanced the communities of their natural enemies, which supports the classical trophic structure hypothesis. Grass cover had a positive effect on sucking herbivores, saprovores and their natural enemies, probably owing to grasses' ability to provide, directly or indirectly, alternative resources, or simply to offer better environmental conditions. By including legumes in agroecosystems we can improve the conservation of beneficial arthropods such as predators and parasitoids, and enhance the provision of ecosystem services such as natural pest control.
2016-09-23T10:04:24Z

Interior-point solver for convex separable block-angular problems
http://hdl.handle.net/2117/90150
Castro Pérez, Jordi
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solves the normal equations using sparse Cholesky factorizations for the block constraints and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work, we present an efficient solver based on this algorithm. Some of its features are as follows: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor–corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; and, for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to the structure-conveying modelling language (SML), based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using ℓ1 and ℓ2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to the ℓ1 distance which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
2016-09-22T15:31:40Z
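A power-series preconditioner of this kind rests on the Neumann-series identity (I - A)^(-1) = I + A + A² + …, valid when the spectral radius of A is below 1; truncating the series gives an approximate inverse usable inside PCG. A toy illustration (the matrix below is illustrative, not the linking-constraints system of the paper):

```python
import numpy as np

# Hedged sketch of the idea behind a power-series (Neumann) preconditioner:
# if rho(A) < 1, then (I - A)^{-1} = sum_k A^k, so truncating the series
# yields an approximate inverse. The matrix here is a toy example.

def neumann_inverse(A, terms):
    """Truncated Neumann series approximation of (I - A)^{-1} using `terms` terms."""
    n = A.shape[0]
    approx = np.eye(n)          # k = 0 term
    power = np.eye(n)
    for _ in range(1, terms):
        power = power @ A       # A^k
        approx += power
    return approx

A = np.array([[0.1, 0.2],
              [0.0, 0.3]])      # spectral radius 0.3 < 1
exact = np.linalg.inv(np.eye(2) - A)

for terms in (2, 5, 10):
    err = np.abs(neumann_inverse(A, terms) - exact).max()
    print(terms, err)           # error shrinks as more terms are kept
```

The smaller the spectral radius, the faster the truncation error decays, which is why estimating that radius is instrumental for judging preconditioner quality.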

A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method
http://hdl.handle.net/2117/90149
Castro Pérez, Jordi; Nasini, Stefano; Saldanha da Gama, Francisco
We propose a cutting-plane approach (namely, Benders decomposition) for a class of capacitated multi-period facility location problems. The novelty of this approach lies in the use of a specialized interior-point method for solving the Benders subproblems. The primal block-angular structure of the resulting linear optimization problems is exploited by the interior-point method, allowing the (either exact or inexact) efficient solution of large instances. The consequences of different modeling conditions and problem specifications on the computational performance are also investigated, both theoretically and empirically, providing a deeper understanding of the significant factors influencing the overall efficiency of the cutting-plane method. The proposed methodology allowed the solution of instances of up to 200 potential locations, one million customers and three periods, resulting in mixed integer linear optimization problems with up to 600 binary and 600 million continuous variables. Those problems were solved by the specialized approach in less than an hour and a half, outperforming other state-of-the-art methods, which exhausted the available memory (144 gigabytes) on the largest instances.
2016-09-22T15:07:07Z

A unified approach to authorship attribution and verification
http://hdl.handle.net/2117/89602
Puig Oriol, Xavier; Font Valverde, Martí; Ginebra Molins, Josep
In authorship attribution, one assigns texts from an unknown author to one of two or more candidate authors by comparing the disputed texts with texts known to have been written by the candidate authors. In authorship verification, one decides whether a text or a set of texts could have been written by a given author. These two problems are usually treated separately. By assuming an open-set classification framework for the attribution problem, which contemplates the possibility that none of the candidate authors is the unknown author, the verification problem becomes a special case of the attribution problem. Here both problems are posed as a formal Bayesian multinomial model selection problem and are given a closed-form solution, tailored for categorical data, naturally incorporating text length and dependence in the analysis, and coping well with settings with a small number of training texts. The approach to authorship verification is illustrated by exploring whether a court ruling could have been written by the judge who signs it, and the approach to authorship attribution is illustrated by revisiting the authorship attribution of the Federalist papers and through a small simulation study.
2016-09-06T10:09:51Z
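Closed-form Bayesian model selection for count data can be illustrated with a Dirichlet-multinomial marginal likelihood per candidate author: the disputed text goes to the author whose trained model gives it the highest marginal likelihood. This is only a hedged sketch of the general idea, not the paper's exact model, and all counts below are made up.

```python
from math import lgamma

# Hedged sketch: attributing a disputed text by comparing closed-form
# Dirichlet-multinomial marginal likelihoods, one per candidate author.
# Illustration of the model-selection idea only; counts are invented.

def log_marginal(counts, alpha):
    """Log marginal likelihood of word-category counts under a
    Dirichlet(alpha) prior on multinomial category probabilities.
    (The multinomial coefficient, constant across authors, is dropped.)"""
    A, N = sum(alpha), sum(counts)
    return (lgamma(A) - lgamma(A + N)
            + sum(lgamma(a + n) - lgamma(a) for a, n in zip(alpha, counts)))

# Word-category counts from known training texts of two candidates:
train = {"author1": [90, 5, 5], "author2": [30, 40, 30]}
disputed = [28, 2, 1]    # the disputed text leans toward author1's profile

prior = [1.0, 1.0, 1.0]  # symmetric prior; posterior alpha = prior + counts
scores = {a: log_marginal(disputed, [p + c for p, c in zip(prior, t)])
          for a, t in train.items()}
best = max(scores, key=scores.get)
print(best)
```

An open-set variant would add one more "model" with a non-informative prior, standing in for "none of the candidates", which is how verification becomes a special case of attribution.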

Reliability versus mass optimization of CO2 extraction technologies for long duration missions
http://hdl.handle.net/2117/89135
Detrell Domingo, Gisela; Griful Ponsati, Eulàlia; Messerschmid, Ernst
The aim of this paper is to optimize the reliability and mass of three CO2 extraction technologies/components: the 4-Bed Molecular Sieve, the Electrochemical Depolarized Concentrator and the Solid Amine Water Desorption. The first is currently used on the International Space Station; the last two are under development and could be used for future long-duration missions. This work is part of a broader study of Environmental Control and Life Support System (ECLSS) reliability. The result of this paper is a methodology to analyze reliability and mass at the component level, applied here to the CO2 extraction technologies but equally applicable to ECLSS technologies that perform other tasks, such as oxygen generation or water recycling, which will be a required input for the analysis of an entire ECLSS. The key parameter in evaluating any system to be used in space is mass, as it is directly related to launch cost. Moreover, for long-duration missions, reliability plays an even more important role, as no resupply or rescue mission is taken into consideration. Each technology is studied as a repairable system, where the number of spare parts to be taken for a specific mission must be selected to maximize the reliability and minimize the mass of the system. The problem faced is a Multi-Objective Optimization Problem (MOOP), which does not have a single solution. Thus, optimal solutions of the MOOP, those that cannot be improved in one of the two objectives without degrading the other, are found for each selected technology. The solutions of the MOOP for the three technologies are analyzed and compared, considering other parameters such as the type of mission, the maturity of the technology and potential interactions/synergies with other ECLSS technologies.
2016-07-25T10:43:03Z
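The MOOP structure can be illustrated by enumerating spare-part configurations, scoring each by a Poisson-based mission reliability and a total mass, and keeping only the non-dominated points. All failure rates and masses below are illustrative assumptions, not the paper's component data.

```python
from math import exp, factorial

# Hedged sketch of the reliability-vs-mass trade-off: for each technology,
# carrying s spares gives mission success if at most s failures occur
# (Poisson failure model). All rates and masses are illustrative.

def reliability(spares, lam):
    """P(failures <= spares) for Poisson(lam) failures over the mission."""
    return sum(exp(-lam) * lam ** k / factorial(k) for k in range(spares + 1))

def pareto_front(options):
    """Keep options not dominated in (maximize reliability, minimize mass)."""
    return [o for o in options
            if not any(o2[0] >= o[0] and o2[1] <= o[1] and o2 != o
                       for o2 in options)]

# Two hypothetical technologies:
# (expected failures over the mission, base mass kg, mass per spare kg)
technologies = {"tech_A": (0.8, 150.0, 12.0), "tech_B": (1.5, 120.0, 20.0)}

options = [(reliability(s, lam), base + s * unit)
           for lam, base, unit in technologies.values() for s in range(5)]
front = pareto_front(options)
print(f"{len(front)} of {len(options)} configurations are Pareto-optimal")
```

Within a single technology every extra spare raises both reliability and mass, so dominated configurations appear only when competing technologies are compared on the same plot, which mirrors the cross-technology comparison made in the paper.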

REVASCAT: a randomized trial of revascularization with SOLITAIRE FR® device vs. best medical therapy in the treatment of acute stroke due to anterior circulation large vessel occlusion presenting within eight hours of symptom onset
http://hdl.handle.net/2117/86925
Molina, Carlos A.; Chamorro, Ángel; Rovira, Alex; de Miquel, Maria Angeles; Serena Leal, Joaquín; Sanroman, Luis; Jovin, Tudor G.; Dávalos Errando, Antoni; Cobo Valeri, Erik
REVASCAT is a prospective, multicenter, randomized trial seeking to establish whether subjects meeting the following main inclusion criteria have higher rates of favorable outcome when treated with the Solitaire™ FR embolectomy device than with standard medical therapy alone: age 18–80; baseline National Institutes of Health Stroke Scale (NIHSS) score ≥ 6; evidence of intracranial internal carotid artery or proximal (M1 segment) middle cerebral artery occlusion; Alberta Stroke Program Early Computed Tomography score > 7 on non-contrast CT or > 6 on diffusion-weighted magnetic resonance imaging; ineligibility for, or persistent occlusion after, intravenous alteplase; and procedure start within 8 hours from symptom onset. The primary endpoint, based on intention-to-treat criteria, is the distribution of modified Rankin Scale scores at 90 days. The projected sample size is 690 patients, with an estimated common odds ratio of 1.615. Randomization is performed under a minimization process using age, baseline NIHSS, therapeutic window, occlusion location and investigational center. The study follows a sequential analysis (triangular model), with a first test of efficacy at 174 patients and subsequent analyses (if necessary) at 346, 518, and 690 subjects. Secondary endpoints are infarct volume evaluated on CT at 24 hours; dramatic early favorable response, defined as an NIHSS of 0–2 or an NIHSS improvement ≥ 8 points at 24 hours; and successful recanalization in the Solitaire arm according to the thrombolysis in cerebral infarction (TICI) classification, defined as TICI 2b or 3. Safety variables are mortality at 90 days, symptomatic intracranial haemorrhage rates at 24 hours and procedure-related complications.
2016-05-11T10:43:51Z