Journal articles
http://hdl.handle.net/2117/3747
2019-09-21T19:21:14Z
2019-09-21T19:21:14Z
ADDTID: An alternative tool for studying earthquake/tsunami signatures in the ionosphere. Case of the 2011 Tohoku earthquake
Yang, Heng
Monte Moreno, Enrique
Hernández Pajares, Manuel
http://hdl.handle.net/2117/168528
2019-09-21T05:19:36Z
2019-09-20T12:50:58Z
This work applies the Atomic Decomposition Detector of Traveling Ionospheric Disturbances (ADDTID) algorithm, which automatically detects and characterizes Traveling Ionospheric Disturbances (TIDs) from Global Navigation Satellite System (GNSS) measurements. The high-precision propagation parameters estimated by ADDTID make it easier to distinguish TIDs of different origins, in particular the characteristics of the acoustic gravity waves driven by the earthquake/tsunami. The method does not assume that disturbances follow a circular propagation pattern, and can estimate the source location from the propagation pattern of the tsunami wavefronts and related TIDs. In this work, we present in a single framework a description of phenomena observed by different researchers. By means of the ADDTID algorithm, we detect: (a) simultaneous TIDs of different characteristics, where the detection was robust against the curvature of the wave fronts of the perturbations; the results were double-checked by visual inspection of detrended Vertical Total Electron Content (VTEC) maps and keogram plots, and the parameters of the slow-speed TIDs were consistent with tsunami waveform measurements; (b) different wavefronts between the westward and eastward TIDs around the epicenter, consistent in time and space with the post-earthquake tsunami; (c) the complete evolution of the circular TIDs driven by the tsunami within the GNSS observable area; (d) fast and short circular TIDs related to the acoustic waves of the earthquake; (e) pre-seismic activity consisting of a set of fast westward TIDs, compared with neighboring days; (f) an estimate of the location of the tsunami wavefront along the coast and its possible use for early warning. Finally, we report disturbances that had not been previously published, with a possible application to local and real-time detection of tsunamis.
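The detrended VTEC series mentioned in the abstract can be illustrated with a minimal sketch. The exact ADDTID preprocessing is not described here, so the double-difference window below (a common choice in the TID literature) and all signal parameters are assumptions for illustration only.

```python
import numpy as np

def detrend_vtec(vtec, tau_samples):
    """Double-difference detrending: subtract the mean of the values
    tau samples before and after each point, removing slow diurnal
    trends so that wave-like TID oscillations stand out."""
    v = np.asarray(vtec, dtype=float)
    d = np.full_like(v, np.nan)            # edges stay undefined
    d[tau_samples:-tau_samples] = (
        v[tau_samples:-tau_samples]
        - 0.5 * (v[:-2 * tau_samples] + v[2 * tau_samples:])
    )
    return d

# Synthetic example: slow trend plus a 15-minute TID-like oscillation,
# sampled every 30 s; the window tau is 300 s (10 samples).
t = np.arange(0, 7200, 30.0)                   # 2 h of samples
trend = 20 + 0.001 * t                         # slow background TEC
tid = 0.3 * np.sin(2 * np.pi * t / 900.0)      # 15-min oscillation
detrended = detrend_vtec(trend + tid, tau_samples=10)
```

A linear trend cancels exactly under this filter, while the oscillation survives (scaled by 1 − cos(2πτ/T)), which is what makes the TID visible in the detrended maps.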
Neural networks principal component analysis for estimating the generative multifactor model of returns under a statistical approach to the arbitrage pricing theory: Evidence from the mexican stock exchange
Ladrón de Guevara Cortés, Rogelio
Torra Porras, Salvador
Monte Moreno, Enrique
http://hdl.handle.net/2117/168380
2019-09-19T05:26:04Z
2019-09-18T16:24:15Z
Nonlinear principal component analysis (NLPCA) extends standard principal component analysis (PCA) by removing the PCA assumption that the model is linear. NLPCA belongs to the family of nonlinear dimension-reduction and underlying-feature-extraction techniques, including nonlinear factor analysis and nonlinear independent component analysis, in which the principal components are generalized from straight lines to curves. NLPCA can be achieved via an artificial neural network in which the classic PCA model is generalized to a nonlinear one, namely Neural Networks Principal Component Analysis (NNPCA). In order to extract a set of nonlinear underlying systematic risk factors, we estimate the generative multifactor model of returns in a statistical version of the Arbitrage Pricing Theory (APT), in the context of the Mexican Stock Exchange. We used an auto-associative multilayer perceptron neural network, or autoencoder, where the 'bottleneck' layer represents the nonlinear principal components, or in our context, the scores of the underlying systematic risk factors. This network performs a nonlinear transformation of the observed variables into the nonlinear principal components and a nonlinear mapping that reproduces the original variables. We propose a network architecture capable of generating a loading matrix that enables a first approach to the interpretation of the extracted latent risk factors.
In addition, we used a two-stage methodology for the econometric contrast of the APT: first, a simultaneous estimation of the system of equations via Seemingly Unrelated Regression (SUR); and second, a cross-section estimation via Ordinary Least Squares corrected for heteroskedasticity and autocorrelation by means of the Newey-West heteroskedasticity- and autocorrelation-consistent (HAC) covariance estimates. The evidence shows that the reproductions of the observed returns using the components estimated via NNPCA are suitable in almost all cases; nevertheless, the econometric contrast leads us to a partial acceptance of the APT in the samples and periods studied.
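The auto-associative bottleneck idea can be sketched in a few lines of numpy. This is not the paper's architecture; the layer sizes, training scheme, and data below are illustrative assumptions. A toy returns matrix driven by two latent factors is compressed through a two-unit tanh bottleneck and linearly reconstructed; the bottleneck activations play the role of the NNPCA factor scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "returns" matrix: 200 observations of 8 assets driven by
# 2 latent factors plus noise (a stand-in for real return data).
F = rng.normal(size=(200, 2))
W_true = rng.normal(size=(2, 8))
X = np.tanh(F @ W_true) + 0.05 * rng.normal(size=(200, 8))
X = (X - X.mean(0)) / X.std(0)

# Auto-associative MLP: 8 -> 2 (tanh bottleneck) -> 8 (linear),
# trained by full-batch gradient descent on squared reconstruction error.
n_in, n_bot = X.shape[1], 2
W1 = 0.1 * rng.normal(size=(n_in, n_bot)); b1 = np.zeros(n_bot)
W2 = 0.1 * rng.normal(size=(n_bot, n_in)); b2 = np.zeros(n_in)
lr = 0.1

def forward(X):
    Z = np.tanh(X @ W1 + b1)       # bottleneck = nonlinear principal components
    return Z, Z @ W2 + b2          # linear reconstruction of the inputs

for _ in range(5000):
    Z, X_hat = forward(X)
    err = X_hat - X                       # gradient of squared error wrt X_hat
    gW2 = Z.T @ err / len(X); gb2 = err.mean(0)
    dZ = (err @ W2.T) * (1 - Z**2)        # backprop through tanh
    gW1 = X.T @ dZ / len(X); gb1 = dZ.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

scores, X_hat = forward(X)         # factor scores and reconstructed returns
mse = np.mean((X_hat - X) ** 2)
```

Because the decoder is linear, the columns of `W2` act as a loading matrix, which is the handle the paper uses to interpret the extracted latent risk factors.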
Chinese-Catalan: A neural machine translation approach based on pivoting and attention mechanisms
Ruiz Costa-Jussà, Marta
Casas Manzanares, Noé
Escolano Peinado, Carlos
Rodríguez Fonollosa, José Adrián
http://hdl.handle.net/2117/165888
2019-07-17T03:31:23Z
2019-07-10T07:08:24Z
This article addresses machine translation from Chinese to Catalan using neural pivot strategies trained without any direct parallel data. Catalan is linguistically very close to Spanish, which motivates the use of Spanish as the pivot language. Regarding the neural architecture, we use the current state of the art, the Transformer model, which is based solely on attention mechanisms. Additionally, this work provides new resources to the community: a human-developed gold standard of 4,000 sentences between Catalan and Chinese and all the other official United Nations languages (Arabic, English, French, Russian, and Spanish). Results show that the standard pseudo-corpus, or synthetic, pivot approach performs better than the cascade approach.
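The two pivot strategies compared in the abstract can be sketched as follows; the `translate_*` functions are hypothetical stand-ins for trained NMT systems, not the article's models.

```python
# Spanish (es) acts as the pivot between Chinese (zh) and Catalan (ca).

def translate_zh_es(text: str) -> str:      # stand-in zh->es model
    return {"你好": "hola"}.get(text, text)

def translate_es_ca(text: str) -> str:      # stand-in es->ca model
    return {"hola": "hola"}.get(text, text)     # "hola" is shared by es/ca

# 1) Cascade: chain the two systems at inference time.
def cascade_zh_ca(text: str) -> str:
    return translate_es_ca(translate_zh_es(text))

# 2) Pseudo-corpus: translate the Spanish side of a zh-es corpus into
#    Catalan, yielding a synthetic zh-ca corpus on which a direct
#    zh->ca model can then be trained.
zh_es_corpus = [("你好", "hola")]
zh_ca_pseudo = [(zh, translate_es_ca(es)) for zh, es in zh_es_corpus]
```

The cascade pays for two inference passes and compounds errors at test time, while the pseudo-corpus approach moves the pivot cost to training, which is one intuition for why the synthetic approach can win.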
Unemployment expectations: A socio-demographic analysis of the effect of news
Soric, Petar
Lolic, Ivana
Claveria González, Oscar
Monte Moreno, Enrique
Torra Porras, Salvador
http://hdl.handle.net/2117/134944
2019-06-22T05:21:39Z
2019-06-21T16:13:15Z
In this study, we evaluate the effect of news on consumer unemployment expectations for sixteen socio-demographic groups. To this end, we construct an unemployment sentiment indicator and extract news about several economic variables. By means of genetic programming we estimate symbolic regressions that link unemployment rates in the Euro Area to qualitative expectations about a wide range of economic variables. We then use the evolved expressions to compute unemployment expectations for each consumer group. We first assess the out-of-sample forecast accuracy of the evolved indicators, obtaining better forecasts for the leading unemployment sentiment indicator than for the coincident one. Results are similar across the socio-demographic groups. The best forecast results are obtained for respondents aged 30 to 49. The grouping with the largest differences among categories is occupation, where the lowest forecast errors are obtained for unemployed respondents. Next, we link news about inflation, industrial production, and stock markets to unemployment expectations. To this end we match positive and negative news with consumers' unemployment sentiment using a distributed lag regression model for each news item. We find asymmetries in the responses of consumers' unemployment expectations to economic news: they tend to be stronger for negative news, especially in the case of inflation.
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
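The distributed lag regression used to match news with sentiment can be sketched as an ordinary least-squares fit of current and lagged news terms; the data and lag order below are synthetic illustrations, not the study's series.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_distributed_lag(y, x, n_lags):
    """Least-squares fit of y_t = alpha + sum_{i=0..n_lags} beta_i * x_{t-i};
    returns [alpha, beta_0, ..., beta_n_lags]."""
    T = len(y)
    cols = [np.ones(T - n_lags)]                         # intercept
    cols += [x[n_lags - i: T - i] for i in range(n_lags + 1)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y[n_lags:], rcond=None)
    return coef

# Synthetic "news" series x driving "sentiment" y at lags 1 and 2.
x = rng.normal(size=300)
noise = 0.05 * rng.normal(size=300)
y = np.empty(300)
y[:2] = 0.0
y[2:] = 0.5 + 0.8 * x[1:-1] - 0.3 * x[:-2] + noise[2:]

coef = fit_distributed_lag(y, x, n_lags=3)   # ~ [0.5, 0, 0.8, -0.3, 0]
```

Fitting one such regression per news item, separately for positive and negative news, is how asymmetric responses like the ones reported in the abstract can be read off the estimated lag coefficients.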
Incorporation of acoustic sensors in the regulation of a mobile robot
Luna Aguilar, Christian Alejandro
Morales Diaz, América
Castelán, Mario
Nadeu Camprubí, Climent
http://hdl.handle.net/2117/130465
2019-03-15T04:19:56Z
2019-03-14T14:49:04Z
This article introduces the incorporation of acoustic sensors for the localization of a mobile robot. The robot is treated as a sound source and its position is located by applying a Time Difference of Arrival (TDOA) method. Since the accuracy of this method varies with the microphone array, an acoustic navigation map that indicates the localization errors is built. This map also provides the robot with point-to-point navigation trajectories, and the controller is capable of driving the robot through these trajectories to a desired configuration. The proposed localization method is thoroughly tested using both a 900 Hz square signal and the natural sound of the robot, which is driven near the desired point with an average error of 0.067 m.
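The delay estimation underlying a TDOA method can be sketched as a cross-correlation between two microphone channels. This is a generic illustration, not the paper's implementation; a broadband burst stands in for the robot's natural sound, since a purely periodic 900 Hz source would make plain cross-correlation ambiguous across periods.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time difference of sig_b relative to sig_a,
    in seconds, as the lag that maximizes their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)   # centre of 'full' output
    return lag / fs

# Broadband burst as a stand-in for the robot's natural sound.
fs = 16000
src = rng.normal(size=2048)
delay_samples = 25                                   # ~1.56 ms
mic_a = src
mic_b = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])

tdoa = estimate_delay(mic_a, mic_b, fs)
```

With delays estimated for several microphone pairs, each TDOA constrains the source to a hyperbola, and intersecting these constraints yields the position estimate whose spatial error pattern the paper's acoustic map records.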
Knowledge sharing in the health scenario
LLuch Ariet, Magi
Brugues de la Torre, Albert
Vallverdú Bayés, Sisco
Pegueroles Vallés, Josep R.
http://hdl.handle.net/2117/126996
2019-04-30T15:08:30Z
2019-01-16T17:55:11Z
The understanding of certain data often requires collecting similar data from different places for analysis and interpretation. Interoperability standards and ontologies are facilitating data interchange around the world. However, beyond the existing networks and advances in data transfer, data-sharing protocols that support multilateral agreements are useful for exploiting the knowledge of distributed Data Warehouses. Access to a certain data set in a federated Data Warehouse may be constrained by the requirement to deliver another specific data set. When bilateral agreements between two nodes of a network are not enough to resolve the constraints on accessing a certain data set, multilateral agreements for data exchange are needed.
We present the implementation of a Multi-Agent System for multilateral exchange agreements of clinical data, and evaluate how those multilateral agreements increase the percentage of the data available in the network that a single node can collect. Different strategies to reduce the number of messages needed to reach an agreement are also considered. The results show that, in this collaborative sharing scenario, the percentage of data collected improves dramatically when moving from bilateral to multilateral agreements, reaching almost all the data available in the network.
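Why multilateral agreements unlock exchanges that bilateral ones cannot can be illustrated with a toy model (this is a hypothetical sketch, not the paper's agent protocol): when each node's wanted data set is held by a third party, no mutual swap exists, but an exchange cycle does.

```python
# Each node holds one data set and wants another. A bilateral agreement
# requires a mutual swap (a 2-cycle); a multilateral agreement is any
# longer cycle in the "wants" graph, resolved by exchanging along it.

holds = {"A": "d1", "B": "d2", "C": "d3"}
wants = {"A": "d2", "B": "d3", "C": "d1"}   # no mutual pair exists

owner = {ds: node for node, ds in holds.items()}

def find_exchange_cycle(start):
    """Follow want -> owner links until we loop back to `start`."""
    path, node = [start], owner[wants[start]]
    while node != start:
        if node in path:            # a loop that excludes start: no deal
            return None
        path.append(node)
        node = owner[wants[node]]
    return path

cycle = find_exchange_cycle("A")            # A -> B -> C -> A
bilateral_ok = cycle is not None and len(cycle) == 2
multilateral_ok = cycle is not None
```

Here no two nodes can strike a bilateral deal, yet the three-node cycle lets every node obtain its wanted data set, which is the effect the paper measures as the jump in the percentage of data collected.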
The use of long-term features for GMM- and i-vector-based speaker diarization systems
Zewoudie, Abraham Woubie
Luque, Jordi
Hernando Pericás, Francisco Javier
http://hdl.handle.net/2117/123773
2019-01-24T11:47:47Z
2018-11-08T16:08:02Z
Several factors contribute to the performance of speaker diarization systems. For instance, the appropriate selection of speech features is one of the key aspects affecting speaker diarization systems; others include the techniques employed to perform both segmentation and clustering. While static mel-frequency cepstral coefficients are the most widely used features in speech-related tasks, including speaker diarization, several studies have shown the benefits of augmenting them with additional features.
In this work, we have proposed and assessed the use of voice-quality features (i.e., jitter, shimmer, and Glottal-to-Noise Excitation ratio) within the framework of speaker diarization. These acoustic attributes are employed together with the state-of-the-art short-term cepstral and long-term prosodic features. Additionally, the use of delta dynamic features is also explored separately both for segmentation and bottom-up clustering sub-tasks. The combination of the different feature sets is carried out at several levels. At the feature level, the long-term speech features are stacked in the same feature vector. At the score level, the short- and long-term speech features are independently modeled and fused at the score likelihood level.
Various feature combinations have been applied to both Gaussian-mixture-model- and i-vector-based speaker diarization systems. The experiments have been carried out on the Augmented Multi-party Interaction (AMI) meeting corpus. The best result, in terms of diarization error rate, is obtained using i-vector-based cosine-distance clustering together with a signal parameterization combining static cepstral coefficients, delta, voice-quality, and prosodic features. The best result shows about 24% relative diarization error rate improvement compared to the baseline system, which is based on Gaussian mixture modeling and short-term static cepstral coefficients.
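The score-level fusion described above can be sketched as per-stream log-likelihoods combined by a weighted sum. The diagonal Gaussians below stand in for full GMM speaker models, and the 0.9/0.1 weighting is illustrative, not the paper's tuned value.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Per-frame log-likelihood under a diagonal Gaussian (a stand-in
    for a full GMM speaker model)."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var,
                         axis=-1)

def fused_score(cep, pros, models, w=0.9):
    """Score-level fusion: the cepstral and prosodic streams are modeled
    independently and their average log-likelihoods combined by weight w."""
    scores = {}
    for spk, m in models.items():
        ll_cep = gaussian_loglik(cep, m["cep_mean"], m["cep_var"]).mean()
        ll_pros = gaussian_loglik(pros, m["pros_mean"], m["pros_var"]).mean()
        scores[spk] = w * ll_cep + (1 - w) * ll_pros
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
models = {
    "spk1": {"cep_mean": np.zeros(12), "cep_var": np.ones(12),
             "pros_mean": np.zeros(3), "pros_var": np.ones(3)},
    "spk2": {"cep_mean": np.full(12, 2.0), "cep_var": np.ones(12),
             "pros_mean": np.full(3, 2.0), "pros_var": np.ones(3)},
}
# Frames drawn near speaker 2's model should be attributed to spk2.
cep = rng.normal(2.0, 1.0, size=(50, 12))
pros = rng.normal(2.0, 1.0, size=(50, 3))
winner = fused_score(cep, pros, models)
```

Fusing at the score level rather than stacking features lets each stream keep its own model and frame rate, which is why the short- and long-term features are modeled independently in the paper.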
Detection and description of the different ionospheric disturbances that appeared during the solar eclipse of 21 August 2017
Yang, Heng
Monte Moreno, Enrique
Hernández Pajares, Manuel
http://hdl.handle.net/2117/123433
2019-09-21T04:06:14Z
2018-10-31T19:01:48Z
This work provides a detailed characterization of the travelling ionospheric disturbances (TIDs) created by the solar eclipse of 21 August 2017, the shadow of which crossed the United States from the Pacific to the Atlantic ocean. The analysis is done by means of the Atomic Decomposition Detector of Traveling Ionospheric Disturbances (ADDTID) algorithm, which automatically detects and characterizes multiple TIDs from global navigation satellite system (GNSS) observations. The set of disturbances generated by the eclipse has a richer and more varied behavior than that associated with the shock wave directly produced by the cooling effect of the moon's shadow. It can be modeled in part as if the umbra and penumbra of the eclipse were moving cylinders intersecting a curved surface at a variable elevation angle. This projection gives rise to regions of equal penumbra with shapes similar to ellipses, with different centers and foci. This is reflected in the time evolution of the TID wavelengths produced by the eclipse, which depend on the vertical angle of the sun with respect to the surface of the earth, and also in a double bow-wave phenomenon, in which the bow waves are generated ahead of the umbra. We show that the delays in the appearance of the disturbances with the transit of the eclipse are compatible with the physical explanations, linked to the different origins of the disturbances and the wavelengths. Finally, we detected a consistent pattern, in location and time, of disturbances ahead of the penumbra, as a set of medium-scale TIDs that could be hypothesized to be soliton waves of the bow wave. In all cases, the detected disturbances were checked visually on the detrended vertical total electron content (TEC) maps.
Economic uncertainty: a geometric indicator of discrepancy among experts’ expectations
Claveria González, Oscar
Monte Moreno, Enrique
Torra Porras, Salvador
http://hdl.handle.net/2117/121115
2019-08-18T00:25:52Z
2018-09-13T16:09:55Z
In this study we present a geometric approach to proxy economic uncertainty. We design a positional indicator of disagreement among survey-based agents’ expectations about the state of the economy. Previous dispersion-based uncertainty indicators derived from business and consumer surveys exclusively make use of the two extreme pieces of information: the percentage of respondents expecting a variable to rise and to fall. With the aim of also incorporating the information coming from the share of respondents expecting a variable to remain constant, we propose a geometrical framework and use a barycentric coordinate system to generate a measure of disagreement, referred to as a discrepancy indicator. We assess its performance both empirically and experimentally by comparing it to the standard deviation of the share of positive and negative responses. When applied in sixteen European countries, we find that both time-varying metrics co-evolve in most countries for expectations about the country’s overall economic situation in the present, but not in the future. Additionally, we obtain their simulated sampling distributions and we find that the proposed indicator gravitates uniformly towards the three vertices of the simplex representing the three answering categories, as opposed to the standard deviation, which tends to overestimate the level of uncertainty as a result of ignoring the no-change responses. Consequently, we find evidence that the information coming from agents expecting a variable to remain constant has an effect on the measurement of disagreement.
The final publication is available at Springer via http://dx.doi.org/10.1007/s11205-018-1984-2
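The geometric idea can be sketched by mapping the three answer shares (rise, unchanged, fall) to barycentric coordinates on a triangle. The paper's exact discrepancy indicator is not reproduced here; the code below only illustrates the simplex view and a standard dispersion measure of the balance statistic that it is compared against.

```python
import numpy as np

# The three answer shares sum to one, so each survey response
# distribution is a point in a 2-simplex; mapping it onto a triangle
# gives the positional (barycentric) view of disagreement.
VERTICES = np.array([[0.0, 0.0],                 # "rise" vertex
                     [1.0, 0.0],                 # "unchanged" vertex
                     [0.5, np.sqrt(3) / 2]])     # "fall" vertex

def to_cartesian(shares):
    """Map (rise, unchanged, fall) shares to a point in the triangle."""
    return np.asarray(shares) @ VERTICES

def balance_std(p_rise, p_fall):
    """Classic dispersion of the balance statistic; note that it
    ignores the 'unchanged' share entirely."""
    return np.sqrt(p_rise + p_fall - (p_rise - p_fall) ** 2)

consensus = (0.1, 0.8, 0.1)   # most respondents expect no change
split = (0.5, 0.0, 0.5)       # respondents split between the extremes

d1, d2 = balance_std(0.1, 0.1), balance_std(0.5, 0.5)
p1, p2 = to_cartesian(consensus), to_cartesian(split)
```

Geometrically, `consensus` sits near the "unchanged" vertex while `split` sits on the midpoint of the opposite edge; a positional indicator can separate these two very different situations using all three shares, whereas the balance-based dispersion discards the no-change information.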
TEC forecasting based on manifold trajectories
Monte Moreno, Enrique
García Rigo, Alberto
Hernández Pajares, Manuel
Yang, Heng
http://hdl.handle.net/2117/118444
2019-09-21T03:08:03Z
2018-06-25T10:48:29Z
In this paper, we present a method for forecasting the ionospheric Total Electron Content (TEC) distribution from the International GNSS Service's Global Ionospheric Maps. The forecasting system estimates the TEC distribution from a linear combination of previous TEC maps (i.e., a set of 2D arrays indexed by time) and the computation of a tangent subspace of a manifold associated with each map. The use of the tangent space of each map is justified because it allows modeling the possible distortions from one observation to the next as a trajectory on the tangent manifold of the map. The coefficients of the linear combination of the last observations, along with the tangent space, are estimated at each time stamp to minimize the mean square forecasting error with a regularization term. The estimation is made at each time stamp to adapt the forecast to short-term variations in solar activity.
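The linear-combination part of the forecast can be sketched as a ridge-regularized least-squares fit of coefficients over past maps. The tangent-space term of the paper's method is omitted here, and the synthetic maps and regularization weight are illustrative assumptions.

```python
import numpy as np

def fit_combination(past_maps, target_map, lam=1e-3):
    """Ridge-regularized coefficients c minimizing
    || sum_i c_i * past_maps[i] - target_map ||^2 + lam * ||c||^2.
    Each 2D map is flattened into one regressor column."""
    A = np.stack([m.ravel() for m in past_maps], axis=1)   # (pixels, k)
    b = target_map.ravel()
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

# Synthetic sequence of five 8x8 "TEC maps" evolving smoothly in time.
phase = np.arange(64).reshape(8, 8) / 10.0
maps = [np.sin(0.1 * t + phase) for t in range(5)]

coef = fit_combination(maps[:4], maps[4])        # weights on the last 4 maps
forecast = sum(c * m for c, m in zip(coef, maps[:4]))
err = np.mean((forecast - maps[4]) ** 2)
```

Re-estimating `coef` at every time stamp, as the paper does, is what adapts the forecast to short-term changes in solar activity; the regularization keeps the solution stable when the past maps are nearly collinear.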