Journal articles
http://hdl.handle.net/2117/3942
Thu, 27 Oct 2016 03:34:59 GMT
http://hdl.handle.net/2117/91044
Analysis of applications to improve the energy savings in residential buildings based on Systemic Quality Model
Fonseca Casas, Antoni; Fonseca Casas, Pau; Casanovas Garcia, Josep
Creating a definition of the features and the architecture of a new Energy Management Software (EMS) is complex because different professionals will be involved in creating that definition and in using the tool. To simplify this definition and aid in the eventual selection of an existing EMS to fit a specific need, a set of metrics that considers the primary issues and drawbacks of the EMS is essential. This study proposes a set of metrics to evaluate and compare EMS applications. Using these metrics will allow professionals to highlight the tendencies and detect the drawbacks of current EMS applications and to eventually develop new EMS applications based on the results of the analysis. This study presents a list of the applications to be examined and describes the primary issues to be considered in the development of a new application. This study follows the Systemic Quality Model (SQMO), which has been used as a starting point to develop new EMS, but can also be used to select an existing EMS that fits the goals of a company. Using this type of analysis, we were able to detect the primary features desired in EMS software. These features are numerically scaled, allowing professionals to select the most appropriate EMS that fits their purposes. This allows the development of EMS utilizing an iterative and user-centric approach. We can apply this methodology to guide the development of future EMS and to define the priorities that are desired in this type of software.
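As a toy illustration of how numerically scaled metrics could drive the selection of an EMS, the sketch below computes a weighted score per candidate; the metric names, weights and ratings are invented for the example and are not taken from the study.

```python
# Hypothetical sketch: ranking EMS candidates with numerically scaled metrics.
# Metric names, weights and ratings below are illustrative only.

def score_ems(ratings, weights):
    """Weighted average of metric ratings (each rating on a 0-10 scale)."""
    total_weight = sum(weights.values())
    return sum(ratings[m] * w for m, w in weights.items()) / total_weight

weights = {"usability": 3, "interoperability": 2, "accuracy": 5}
candidates = {
    "EMS-A": {"usability": 8, "interoperability": 6, "accuracy": 7},
    "EMS-B": {"usability": 5, "interoperability": 9, "accuracy": 8},
}
best = max(candidates, key=lambda name: score_ems(candidates[name], weights))
```

With these invented weights, accuracy dominates and EMS-B wins despite its weaker usability score.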
Tue, 25 Oct 2016 08:50:13 GMT
http://hdl.handle.net/2117/90738
Effect of different dispersing agents in the non-isothermal kinetics and thermomechanical behavior of PET/TiO2 composites
Cayuela Marín, Diana; Cot Valle, María Ana; Algaba Joaquín, Inés María; Manich Bou, Albert M.
This work is based on the analysis of the influence of dispersing agents on the non-isothermal kinetics, thermomechanical behavior and dispersing action of PET/TiO2 nanocomposites. The influence of two montanic waxes and an amide wax used as dispersing agents in the nucleating effect of the nanoparticles is studied. The dispersing agents are the following: a) a partly saponified ester of montanic acids (PSEMA), b) an ester of montanic acids with multifunctional alcohols (MAWMA) and c) an amide wax based on N,N'-Bisstearoyl ethylenediamine (AW). The non-isothermal kinetics based on the Avrami method revealed that MAWMA and PSEMA favor the nucleating effect of the nanoparticles when they are included in PET. Birefringence microscopy points out the good dispersing capacity of MAWMA and AW, and the thermomechanical analysis confirmed that the ester of montanic acids with multifunctional alcohols MAWMA shows the best dispersing properties and best promotes the nucleating effect of the TiO2 nanoparticles when used for PET/TiO2 nanocomposites production.
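As background, Avrami-type analyses fit the relative crystallinity X(t) = 1 - exp(-k*t^n) to experimental data; the exponent n and rate constant k follow from the linearization ln(-ln(1 - X)) = ln(k) + n*ln(t). The sketch below uses synthetic data and the basic isothermal form only; the non-isothermal variants used in the paper apply corrections on top of this.

```python
import math

# Fit the Avrami parameters n and k from crystallinity data via the
# linearization  ln(-ln(1 - X)) = ln(k) + n * ln(t)  (ordinary least squares).
# The data points below are synthetic, for illustration only.

def fit_avrami(times, crystallinity):
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - x)) for x in crystallinity]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, math.exp(intercept)   # (n, k)

# Synthetic data generated from n = 2, k = 0.1, so the fit should recover them.
times = [1.0, 2.0, 3.0, 4.0]
crystallinity = [1.0 - math.exp(-0.1 * t ** 2) for t in times]
n, k = fit_avrami(times, crystallinity)
```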
Thu, 13 Oct 2016 12:07:45 GMT
http://hdl.handle.net/2117/90156
Herbivores, saprovores and natural enemies respond differently to within-field plant characteristics of wheat fields
Caballero López, Berta; Blanco Moreno, José M.; Pujade Villar, Juli; Ventura, Daniel; Sánchez Espigares, Josep Anton; Sans Serra, Francesc Xavier
Understanding ecosystem functioning in a farmland context by considering the variety of ecological strategies employed by arthropods is a core challenge in ecology and conservation science. We adopted a functional approach in an assessment of the relationship between three functional plant groups (grasses, broad-leaves and legumes) and the arthropod community in winter wheat fields in a Mediterranean dryland context. We sampled the arthropod community as thoroughly as possible with a combination of suction catching and flight-interception trapping. All specimens were identified to the appropriate taxonomic level (family, genus or species) and classified according to their form of feeding: chewing-herbivores, sucking-herbivores, flower-consumers, omnivores, saprovores, parasitoids or predators. We found that a richer plant community favoured a greater diversity of herbivores and that, in turn, the richness of herbivores and saprovores enhanced the communities of their natural enemies, which supports the classical trophic structure hypothesis. Grass cover had a positive effect on sucking-herbivores, saprovores and their natural enemies, probably owing to grasses’ ability to provide, either directly or indirectly, alternative resources, or simply to their offering better environmental conditions. By including legumes in agroecosystems we can improve the conservation of beneficial arthropods like predators or parasitoids, and enhance the provision of ecosystem services such as natural pest control.
Fri, 23 Sep 2016 10:04:24 GMT
http://hdl.handle.net/2117/90150
Interior-point solver for convex separable block-angular problems
Castro Pérez, Jordi
Constraint matrices with block-angular structures are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solves the normal equations using sparse Cholesky factorizations for the block constraints, and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work, we present an efficient solver based on this algorithm. Some of its features are as follows: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor–corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of normal equations; the preconditioner may include any number of terms of the power series; for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to the structure-conveying modelling language (SML) based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using ℓ1 and ℓ2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to the ℓ1 distance which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is from 2 to 3 orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
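The idea of a power-series preconditioner can be illustrated generically (this is a simplified sketch, not the solver described in the abstract): writing A = D - N with D easily invertible, A⁻¹ = (Σᵢ (D⁻¹N)ⁱ) D⁻¹, and truncating the series after a few terms gives an approximate inverse usable inside PCG. Here D is simply the diagonal of A, and plain Python lists keep the example self-contained.

```python
# Truncated power-series preconditioner inside a conjugate gradient solve.
# Simplified illustration only; not the paper's specialized implementation.

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def power_series_precond(A, r, terms):
    """Apply M^{-1} r, where M^{-1} truncates the series for A^{-1} = sum_i (D^{-1}N)^i D^{-1}."""
    d = [A[i][i] for i in range(len(A))]       # D = diag(A), N = D - A
    y = [ri / di for ri, di in zip(r, d)]      # i = 0 term: D^{-1} r
    z = y[:]
    for _ in range(terms - 1):
        Ay = mat_vec(A, y)
        y = [y[i] - Ay[i] / d[i] for i in range(len(y))]   # y <- (I - D^{-1}A) y
        z = [zi + yi for zi, yi in zip(z, y)]
    return z

def pcg(A, b, terms=3, tol=1e-10, max_iter=100):
    n = len(b)
    x, r = [0.0] * n, b[:]
    z = power_series_precond(A, r, terms)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = mat_vec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = power_series_precond(A, r, terms)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)          # exact solution is [1/11, 7/11]
```

The quality of the preconditioner grows with `terms` provided the spectral radius of D⁻¹N is below one, which is exactly why the abstract highlights estimating that radius.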
Thu, 22 Sep 2016 15:31:40 GMT
http://hdl.handle.net/2117/90149
A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method
Castro Pérez, Jordi; Nasini, Stefano; Saldanha da Gama, Francisco
We propose a cutting-plane approach (namely, Benders decomposition) for a class of capacitated multi-period facility location problems. The novelty of this approach lies in the use of a specialized interior-point method for solving the Benders subproblems. The primal block-angular structure of the resulting linear optimization problems is exploited by the interior-point method, allowing the efficient (either exact or inexact) solution of large instances. The consequences of different modeling conditions and problem specifications on the computational performance are also investigated both theoretically and empirically, providing a deeper understanding of the significant factors influencing the overall efficiency of the cutting-plane method. The methodology proposed allowed the solution of instances of up to 200 potential locations, one million customers and three periods, resulting in mixed integer linear optimization problems of up to 600 binary and 600 million continuous variables. Those problems were solved by the specialized approach in less than an hour and a half, outperforming other state-of-the-art methods, which exhausted the (144 Gigabytes of) available memory in the largest instances.
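For readers unfamiliar with Benders decomposition, the toy sketch below shows the cut-generation loop on an invented two-variable problem; nothing here comes from the paper, and the master problem is minimized by grid search only to keep the sketch free of an LP solver dependency.

```python
# Toy Benders decomposition:
#   minimize  y + 2*x   subject to  x >= 3 - y,  x >= 0,  0 <= y <= 3.
# Fixing the complicating variable y leaves the subproblem
#   q(y) = min {2*x : x >= 3 - y, x >= 0} = 2*max(0, 3 - y),
# whose dual solution lam in [0, 2] yields the Benders cut  eta >= lam*(3 - y).

def subproblem(y):
    """Optimal value and dual multiplier of the inner minimization."""
    if 3 - y > 0:
        return 2 * (3 - y), 2.0
    return 0.0, 0.0

def solve_benders(tol=1e-9, max_iter=20):
    cuts = []                                  # dual multipliers defining cuts
    grid = [i * 0.01 for i in range(301)]      # candidate y values in [0, 3]
    upper, best_y = float("inf"), 0.0
    for _ in range(max_iter):
        def master_obj(y):
            eta = max([0.0] + [lam * (3 - y) for lam in cuts])
            return y + eta                     # master: min y + eta s.t. cuts
        y = min(grid, key=master_obj)
        lower = master_obj(y)                  # master value is a lower bound
        value, lam = subproblem(y)
        if y + value < upper:                  # feasible point gives upper bound
            upper, best_y = y + value, y
        if upper - lower <= tol:
            break
        cuts.append(lam)
    return best_y, upper

best_y, best_value = solve_benders()           # optimum is y = 3, cost 3
```

On this problem the loop converges in two iterations: the first cut eta >= 2*(3 - y) is enough to push the master solution to y = 3, where the bounds meet.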
Thu, 22 Sep 2016 15:07:07 GMT
http://hdl.handle.net/2117/89602
A unified approach to authorship attribution and verification
Puig Oriol, Xavier; Font Valverde, Martí; Ginebra Molins, Josep
In authorship attribution, one assigns texts from an unknown author to one of two or more candidate authors by comparing the disputed texts with texts known to have been written by the candidate authors. In authorship verification, one decides whether a text or a set of texts could have been written by a given author. These two problems are usually treated separately. If one assumes an open-set classification framework for the attribution problem, contemplating the possibility that none of the candidate authors is the unknown author, the verification problem becomes a special case of the attribution problem. Here both problems are posed as a formal Bayesian multinomial model selection problem and are given a closed-form solution, tailored for categorical data, naturally incorporating text length and dependence in the analysis, and coping well with settings with a small number of training texts. The approach to authorship verification is illustrated by exploring whether a court ruling could have been written by the judge who signs it, and the approach to authorship attribution is illustrated by revisiting the authorship attribution of the Federalist papers and through a small simulation study.
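A minimal sketch of this kind of Bayesian multinomial comparison (the paper's model is richer; the counts, categories and Dirichlet pseudo-count below are invented): each candidate author is scored by the Dirichlet-multinomial evidence of the disputed word-category counts given that author's training counts.

```python
import math

# Score candidate authors by Dirichlet-multinomial evidence of disputed counts.
# Illustrative only: categories, counts and the pseudo-count alpha are made up.

def log_evidence(disputed, training, alpha=1.0):
    """log p(disputed counts | training counts) under a Dirichlet-multinomial,
    dropping the multinomial coefficient (identical for every candidate)."""
    prior = [alpha + m for m in training]
    post = [a + n for a, n in zip(prior, disputed)]

    def log_beta(v):
        return sum(math.lgamma(x) for x in v) - math.lgamma(sum(v))

    return log_beta(post) - log_beta(prior)

# Counts of three hypothetical function-word categories per author.
training = {"author_A": [30, 10, 5], "author_B": [10, 25, 20]}
disputed = [6, 2, 1]                 # proportions match author_A's profile
best = max(training, key=lambda a: log_evidence(disputed, training[a]))
```

An open-set (verification) variant would add a reference model for "none of the candidates" and compare its evidence against the best candidate's.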
Tue, 06 Sep 2016 10:09:51 GMT
http://hdl.handle.net/2117/89135
Reliability versus mass optimization of CO2 extraction technologies for long duration missions
Detrell Domingo, Gisela; Griful Ponsati, Eulàlia; Messerschmid, Ernst
The aim of this paper is to optimize reliability and mass of three CO2 extraction technologies/components: the 4-Bed Molecular Sieve, the Electrochemical Depolarized Concentrator and the Solid Amine Water Desorption. The first one is currently used in the International Space Station and the last two are being developed and could be used for future long duration missions. This work is part of a complex study of the Environmental Control and Life Support System (ECLSS) reliability. The result of this paper is a methodology to analyze the reliability and mass at a component level, which is used here for the CO2 extraction technologies but can be applied to the ECLSS technologies that perform other tasks, such as oxygen generation or water recycling, and which will be a required input for the analysis of an entire ECLSS. The key parameter to evaluate any system to be used in space is mass, as it is directly related to the launch cost. Moreover, for long duration missions, reliability will play an even more important role, as no resupply or rescue mission is taken into consideration. Each technology is studied as a repairable system, where the number of spare parts to be taken for a specific mission needs to be selected to maximize the reliability and minimize the mass of the system. The problem faced is a Multi-Objective Optimization Problem (MOOP), which does not have a single solution. Thus, the optimal solutions of the MOOP, those that cannot be improved in one of the two objectives without degrading the other, are found for each selected technology. The solutions of the MOOP for the three technologies are analyzed and compared, considering other parameters such as the type of mission, the maturity of the technology and potential interactions/synergies with other technologies of the ECLSS.
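A hypothetical sketch of the reliability-versus-mass trade-off (all figures invented; the paper's reliability model is more detailed): if component failures are modeled as Poisson over the mission, a component with k spares survives when at most k failures occur, and the Pareto-optimal (mass, reliability) configurations are those not dominated by any lighter and at least as reliable alternative.

```python
import math

# Pareto front of spare-count configurations for two hypothetical technologies.
# Base masses, spare masses and the failure rate are invented for illustration.

def reliability(k, expected_failures):
    """P(at most k failures) under a Poisson model with the given mean."""
    return sum(math.exp(-expected_failures) * expected_failures ** i / math.factorial(i)
               for i in range(k + 1))

techs = {"tech_A": (50.0, 4.0), "tech_B": (45.0, 10.0)}   # (base mass, mass per spare)
points = []
for name, (base, spare) in techs.items():
    for k in range(4):                                    # 0..3 spares
        points.append((name, k, base + k * spare, reliability(k, 1.5)))

# Keep configurations not dominated by a lighter, at-least-as-reliable one.
front = [(n, k, m, r) for n, k, m, r in points
         if not any(m2 <= m and r2 >= r and (m2, r2) != (m, r)
                    for _, _, m2, r2 in points)]
```

Here tech_B wins only at zero spares (lowest base mass); as soon as spares are added, tech_A's lighter spares dominate.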
Mon, 25 Jul 2016 10:43:03 GMT
http://hdl.handle.net/2117/86925
REVASCAT: a randomized trial of revascularization with SOLITAIRE FR® device vs. best medical therapy in the treatment of acute stroke due to anterior circulation large vessel occlusion presenting within eight-hours of symptom onset
Molina, Carlos A.; Chamorro, Ángel; Rovira, Alex; de Miquel, Maria Angeles; Serena Leal, Joaquín; Sanroman, Luis; Jovin, Tudor G.; Dávalos Errando, Antoni; Cobo Valeri, Erik
REVASCAT is a prospective, multicenter, randomized trial seeking to establish whether subjects meeting the following main inclusion criteria: age 18-80, baseline National Institutes of Health Stroke Scale score ≥ 6, evidence of intracranial internal carotid artery or proximal (M1 segment) middle cerebral artery occlusion, Alberta Stroke Program Early Computed Tomography score of > 7 on non-contrast CT or > 6 on diffusion-weighted magnetic resonance imaging, ineligible for or with persistent occlusion after intravenous alteplase, and procedure start within 8 hours from symptom onset, have higher rates of favorable outcome when treated with the Solitaire™ FR embolectomy device compared to standard medical therapy alone. The primary end-point, based on intention-to-treat criteria, is the distribution of modified Rankin Scale scores at 90 days. Projected sample size is 690 patients. Estimated common odds ratio is 1.615. Randomization is performed under a minimization process using age, baseline NIHSS, therapeutic window, occlusion location and investigational center. The study follows a sequential analysis (triangular model) with the first approach to test efficacy at 174 patients and subsequent analyses (if necessary) at 346, 518, and 690 subjects. Secondary end-points are infarct volume evaluated on CT at 24 h, dramatic early favorable response, defined as NIHSS of 0–2 or NIHSS improvement ≥ 8 points at 24 h, and successful recanalization in the Solitaire arm according to the thrombolysis in cerebral infarction (TICI) classification, defined as TICI 2b or 3. Safety variables are mortality at 90 days, symptomatic intracranial haemorrhage rates at 24 hours and procedure-related complications.
Wed, 11 May 2016 10:43:51 GMT
http://hdl.handle.net/2117/86538
A characterization of the innovations of first order autoregressive models
Moriña, David; Puig, Pedro; Valero Baya, Jordi
Suppose that X_t follows a simple AR(1) model, that is, it can be expressed as X_t = φX_{t-1} + W_t, where W_t is a white noise with mean equal to μ and variance σ². There are many examples in practice where these assumptions hold very well. Consider Y_t = exp(X_t). We shall show that the autocorrelation function of Y_t characterizes the distribution of W_t.
"The final publication is available at Springer via http://dx.doi.org/10.1007/s00184-014-0497-5"
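A quick numerical illustration of the AR(1) setting (not from the paper): simulating X_t = φX_{t-1} + W_t with Gaussian innovations, the lag-1 sample autocorrelation of X_t should recover φ.

```python
import random

# Simulate an AR(1) process and check its lag-1 sample autocorrelation.
# phi = 0.6 and standard-normal innovations are arbitrary example choices.

def simulate_ar1(phi, n, seed=42, burn_in=500):
    rng = random.Random(seed)
    x, xs = 0.0, []
    for t in range(n + burn_in):
        x = phi * x + rng.gauss(0.0, 1.0)
        if t >= burn_in:          # discard transient so the series is stationary
            xs.append(x)
    return xs

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t + 1] - m) for t in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

xs = simulate_ar1(phi=0.6, n=20000)
rho1 = lag1_autocorr(xs)          # close to phi = 0.6
```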
Tue, 03 May 2016 14:42:27 GMT
http://hdl.handle.net/2117/86293
Discussion of “Analysis of spatio-temporal mobile phone data: a case study in the metropolitan area of Milan” by Piercesare Secchi, Simone Vantini and Valeria Vitelli
Delicado Useros, Pedro Francisco
The paper under discussion is a very well-written and interesting piece of work by Secchi et al. (2015) dealing with spatio-temporal data on mobile phone use in the area of Milan. I congratulate the authors for such a stimulating and interesting paper. It clearly points out that Erlang data on mobile phone use contain a large amount of rich information. The paper is an excellent example of statistical analysis of Big Data. I discuss briefly two alternative ways of dimension reduction of spatio-temporal data and illustrate them with artificial data that has been simulated according to the scheme proposed by the authors.
Wed, 27 Apr 2016 17:40:12 GMT