E-prints (http://hdl.handle.net/2117/28577, updated 2020-05-31)

Mixing driven by radiative and evaporative cooling at the stratocumulus top
http://hdl.handle.net/2117/189613 (2020-05-30)
de Lozar, Alberto; Mellado González, Juan Pedro
The stratocumulus-top mixing process is investigated using direct numerical simulations of a shear-free cloud-top mixing layer driven by evaporative and radiative cooling. An extension of previous linear formulations allows for quantifying radiative cooling, evaporative cooling, and the diffusive effects that artificially enhance mixing and evaporative cooling in high-viscosity direct numerical simulations (DNS) and many atmospheric simulations. The diffusive cooling accounts for 20% of the total evaporative cooling at the highest resolution (grid spacing ~14 cm), but it can be much larger (~100%) at the lower resolutions commonly used in large-eddy simulations (grid spacing ~5 m). This result implies that the k scaling for cloud cover might be strongly influenced by diffusive effects. Furthermore, defining the inversion point as the point of neutral buoyancy, ⟨b⟩(z_i) = 0, allows the derivation of two scaling laws. The in-cloud scaling law relates the velocity and buoyancy integral scales to a buoyancy flux defined by the inversion point. The entrainment-zone scaling law provides a relationship between the entrainment velocity and the liquid evaporation rate. Using this inversion point, it is shown that the radiative-cooling contribution to the entrainment velocity decouples from the evaporative-cooling contribution and behaves very similarly to that in the smoke cloud. Finally, evaporative and radiative cooling have similar strengths when strength is measured by the integrated buoyancy source. This result partially explains why current entrainment parameterizations are not accurate enough, given that most of them implicitly assume that only one of the two mechanisms rules the entrainment.
Analysis and prediction of COVID-19 for EU-EFTA-UK and other countries
http://hdl.handle.net/2117/189612 (2020-05-30)
Català Sabaté, Martí; Cardona Iglesias, Pere Joan; Prats Soler, Clara; Alonso Muñoz, Sergio; Álvarez Lacalle, Enrique; Marchena Angos, Miquel; Conesa, David; López Codina, Daniel
This report aims to provide a comprehensive picture of the COVID-19 pandemic situation in the EU countries, and to foresee the situation over the coming days.
We employ an empirical model, verified against the evolution of the number of confirmed cases in countries where the epidemic is close to its end, including all provinces of China. The model does not attempt to explain the causes of the evolution of the cases; rather, it allows evaluating the quality of the control measures taken in each state and making a short-term prediction of trends. Note, however, that the effects of control measures that start on a given day are not observed until approximately 7-10 days later.
The model and predictions are based on two parameters that are fitted daily to the available data:
a: the velocity at which the specific spreading rate slows down; the higher the value, the better the control.
K: the final number of expected cumulative cases, which cannot be evaluated at the initial stages because growth is still exponential.
We show an individual report with 8 graphs and a table with short-term predictions for different countries and regions. We fit the model to countries and regions with at least 4 days with more than 100 confirmed cases and a current load of over 200 cases. The prediction horizon for a country depends on the number of data points above this 100-case threshold, and is 5 days for those that have reported more than 100 cumulative cases for 10 or more consecutive days. For the short-term predictions, we assign higher weight to the last 3 points in the fits, so that changes are rapidly captured by the model. The full methodology is explained in the last pages of this document.
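A fit of this kind can be sketched as follows, assuming a Gompertz-type growth law (a common choice whose specific growth rate slows down exponentially at velocity a and saturates at K cases; the report does not state the exact functional form, and the shift parameter t0 and all numbers below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, K, t0):
    """Cumulative cases: specific growth rate decays at velocity a, saturating at K."""
    return K * np.exp(-np.exp(-a * (t - t0)))

# Synthetic cumulative-case series standing in for real daily data.
rng = np.random.default_rng(0)
t = np.arange(30.0)
data = gompertz(t, 0.15, 10_000, 12.0) * (1 + 0.02 * rng.standard_normal(t.size))

# Smaller sigma on the last 3 points gives them higher weight in the fit,
# so that recent changes are captured quickly, as described for the
# short-term predictions.
sigma = np.ones(t.size)
sigma[-3:] = 0.2

(a_fit, K_fit, t0_fit), _ = curve_fit(
    gompertz, t, data, p0=[0.1, 2 * data[-1], 10.0], sigma=sigma
)
```

Refitting daily with this weighting makes K track the latest trend rather than the whole history, which is the behavior the report describes for its short-term predictions.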
In addition to the individual reports, the reader will find an initial dashboard with a brief analysis of the situation in EU-EFTA-UK countries, summary figures and tables, and long-term predictions for some of them, when possible. These long-term predictions are computed without weighting data points differently. We also discuss a specific issue every day.
Experimentally well-constrained masses of 27P and 27S: implications for studies of explosive binary systems
http://hdl.handle.net/2117/189611 (2020-05-29)
Sun, Lijie; Xu, Xinxing; Hou, S.Q.; José Pont, Jordi
The mass of 27P is expected to impact the X-ray burst (XRB) model predictions of burst light curves and the composition of the burst ashes, but large uncertainties and inconsistencies still exist in the reported 27P masses. We have used β-decay spectroscopy of 27S to determine the most precise mass excess of 27P to date to be keV, which is 63 keV (2.3σ) higher and a factor of 3 more precise than the value recommended in the 2016 Atomic Mass Evaluation. Based on the new 27P mass, the P reaction rate and its uncertainty were recalculated using Monte Carlo techniques. We also estimated the previously unknown mass excess of 27S to be 17678(77) keV, based on the measured β-delayed two-proton energy and Coulomb displacement energy relations. The impact of these well-constrained masses and reaction rates on the modeling of explosive astrophysical scenarios has been investigated by post-processing XRB and hydrodynamic nova models. Compared with model calculations based on masses and rates from databases, the abundance of in the burst ashes is increased by a factor of 2.4, while no substantial change was found in the XRB energy generation rate or the light curve. Our calculations also suggest that 27S is not a significant waiting point in the rapid proton capture process, and that the change in the P reaction rate is not large enough to affect the conclusion previously drawn on the nova contribution to the synthesis of galactic 26Al.
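The Monte Carlo recalculation mentioned above propagates mass (and hence resonance-energy) uncertainties into the reaction rate. A minimal sketch of that technique, with hypothetical numbers (the paper's actual energies, uncertainties, and temperatures are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: a single narrow resonance whose energy Er (fixed by
# the nuclear mass excesses) carries a Gaussian uncertainty.
kT = 30.0                        # thermal energy in keV at the chosen temperature
Er_mean, Er_sigma = 300.0, 25.0  # resonance energy and 1-sigma uncertainty, keV

# Sample Er and evaluate the narrow-resonance rate, which depends on Er
# only through the Boltzmann factor (resonance strength and constants omitted).
Er = rng.normal(Er_mean, Er_sigma, 100_000)
rate = np.exp(-Er / kT)

lo, med, hi = np.percentile(rate, [16, 50, 84])
spread = hi / lo                 # multiplicative 1-sigma rate uncertainty
```

Because the rate is exponential in Er, even a modest improvement in a mass excess can shrink the rate uncertainty by a large multiplicative factor, which is why the more precise 27P mass matters for the burst-ash abundances.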
Learning by Back-Propagation: a systolic algorithm and its transputer implementation
http://hdl.handle.net/2117/189573 (2020-05-29)
Millan Ruiz, José del Rocio; Bofill, Pau
In this paper we present a systolic algorithm for back-propagation, a supervised, iterative, gradient-descent, connectionist learning rule. The algorithm works on feedforward networks where connections can skip layers, and it fully exploits the spatial and training parallelisms inherent to back-propagation. Spatial parallelism arises during the propagation of activity (forward) and error (backward) for a particular input-output pair. When this computation is carried out simultaneously for all input-output pairs, training parallelism is obtained. In the spatial dimension, a single systolic ring carries out sequentially the three main steps of the learning rule: the forward pass, the backward pass, and the weight-increment update. Furthermore, the same pattern of matrix delivery is used in both the forward and the backward passes; in this manner, the algorithm preserves the similarity of the forward and backward passes in the original model. The resulting systolic algorithm is dual with respect to the pattern of matrix delivery (either columns or rows). Finally, an implementation of the systolic algorithm for the spatial dimension is derived that uses a linear ring of Transputer processors.
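The three steps of the learning rule (forward pass, backward pass, weight-increment update) can be sketched for a tiny network with one connection that skips the hidden layer; processing all input-output pairs at once is the training parallelism the abstract refers to. This illustrates only the learning rule itself, not the systolic ring; the network sizes and data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net: the output layer sees both the hidden activity
# and the raw input (a connection that skips the hidden layer).
X = rng.standard_normal((8, 4))              # all input patterns at once
Y = rng.standard_normal((8, 2))              # target outputs
W1 = 0.1 * rng.standard_normal((4, 5))       # input -> hidden
W2 = 0.1 * rng.standard_normal((5 + 4, 2))   # [hidden, input] -> output

def forward(X):
    H = np.tanh(X @ W1)
    Z = np.concatenate([H, X], axis=1)       # skip connection
    return H, Z, Z @ W2

def loss(X, Y):
    return float(np.mean((forward(X)[2] - Y) ** 2))

loss_before = loss(X, Y)
lr = 0.05
for _ in range(200):
    H, Z, out = forward(X)                   # 1. forward pass
    err = out - Y                            # 2. backward pass: output error...
    dW2 = Z.T @ err / len(X)
    dH = (err @ W2[:5].T) * (1 - H ** 2)     # ...propagated back through tanh
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2                           # 3. weight-increment update
    W1 -= lr * dW1
loss_after = loss(X, Y)
```

In the systolic version these same three matrix products are pipelined around a ring of processors instead of being computed as monolithic array operations.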
Restriccions d'integritat en bases de dades deductives
http://hdl.handle.net/2117/189560 (2020-05-29)
Costal Costa, Dolors
This work describes the theory and implementation of a general theorem-proving technique for checking the integrity of deductive databases, called the "Consistency Method", and presents examples in order to study the applicability, advantages and disadvantages of executing this technique directly in Prolog.
Viabilidad de protección sísmica de edificios con disipadores de energía histerética
http://hdl.handle.net/2117/189557 (2020-05-29)
Domínguez Santos, David; López Almansa, Francisco
This paper explores the feasibility of using energy dissipators in low- and mid-rise buildings located in seismic zones. The study may be applied to new and existing buildings. To this end, three RC frames of 5, 10 and 15 stories are analyzed and compared; each frame is designed and analyzed using different traditional solutions, including hysteretic energy dissipators based on metal yielding. The behavior of these frames is analyzed in terms of modal parameters and capacity (pushover) curves. The purpose of this study is to determine the advantages and disadvantages of each proposed solution in structural terms.
Logarithmic advice classes
http://hdl.handle.net/2117/189554 (2020-05-29)
Balcázar Navarro, José Luis; Schöning, Uwe
Karp and Lipton [9] introduced the notion of non-uniform complexity classes where a certain amount of "side information", the advice, is given for free. The advice only depends on the length of the input. Karp and Lipton (and also later researchers [22,17,2,12]) concentrated on the study of classes of the form C/poly where C is P, NP, or PSPACE, and poly denotes a polynomial size advice. This paper starts a study of classes of the form C/log. As a main result it is shown that in the context of an NP/log computation a log-bounded advice is equivalent to a sparse oracle in NP. In contrast, it has been shown that a poly-bounded advice corresponds to an arbitrary sparse oracle set. Furthermore, a general theorem is presented that generalizes Karp and Lipton's "round-robin tournament" method.
Calibración con cálculos dinámicos no lineales del método de Newmark para análisis de estabilidad sísmica de laderas
http://hdl.handle.net/2117/189553 (2020-05-29)
Dong, Shan; López Almansa, Francisco; Ledesma Villalba, Alberto
The Newmark method is a simplified procedure for analyzing the stability of slopes under seismic excitation. The formulation considers an infinitely rigid block resting on an indefinite surface of constant inclination; to estimate its permanent slide, the critical acceleration that triggers the motion is determined, and the input accelerogram is integrated twice over the intervals in which that critical level is exceeded. The method is very popular, mainly because it is easy to implement; however, no comparisons with tests, observed landslides, or results from more advanced formulations have been reported. This paper presents a comparison with predictions of nonlinear dynamic calculations performed with Plaxis 2D; the plastic behavior of the soil is described with a Mohr-Coulomb model. The seismic input motions are selected to represent actual situations. The analyses are carried out on a series of case studies considering slopes of constant inclination and homogeneous soil.
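The double integration over above-critical intervals can be sketched for a one-way rigid sliding block, a simplified textbook form of the method; the synthetic pulse and the critical acceleration below are made up for illustration:

```python
import numpy as np

def newmark_displacement(accel, dt, a_crit):
    """Permanent sliding-block displacement: integrate the relative
    acceleration only while sliding, i.e. from the instant the input
    exceeds a_crit until the relative velocity returns to zero."""
    v = 0.0   # relative velocity of the block with respect to the slope
    d = 0.0   # accumulated permanent displacement
    for a in accel:
        if a > a_crit or v > 0.0:
            v = max(v + (a - a_crit) * dt, 0.0)  # block never slides upslope
        d += v * dt
    return d

# Synthetic input: a 3 m/s^2 pulse lasting 0.5 s against a_crit = 1 m/s^2.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
accel = np.where((t > 0.5) & (t < 1.0), 3.0, 0.0)
disp = newmark_displacement(accel, dt, 1.0)   # about 0.7 m for this pulse
```

Note that sliding continues after the pulse ends, while the accumulated relative velocity decays at a_crit; this is why the block's displacement cannot be read off the exceedance intervals alone.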
Identificación no lineal de movimientos sísmicos en el lecho rocoso ingenieril a partir de registros tomados en la superficie
http://hdl.handle.net/2117/189547 (2020-05-29)
Rodríguez Sánchez, Julio; López Almansa, Francisco; Ledesma Villalba, Alberto
In earthquake engineering, usually only ground-surface records are available, so the motions of the lower layers must be estimated from knowledge of the stratigraphic profile; these motions are relevant to the earthquake-resistant design of structures with underground parts, irregular soils, and soil-structure interaction. If the soil behavior is linear, this estimation is an equivalent linear deconvolution; in actual situations, however, especially for severe earthquakes, the behavior is nonlinear and more sophisticated procedures are required. This paper presents an algorithm to estimate the motion of the lower soil layers from surface seismic records. The nonlinear soil behavior is represented by a modified Masing model, in which stiffness and damping depend on the shear strain. The domain is discretized into layers; the resulting equations of motion are solved in discrete time by the Newmark method. Since the numerical problem is ill-conditioned, an Unscented Kalman Filter (UKF) is used to estimate the solution.
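The Newmark time-stepping scheme used above can be sketched in its average-acceleration form for a single degree of freedom; the oscillator and its parameters are illustrative only, not taken from the paper:

```python
import numpy as np

def newmark_beta(m, c, k, f, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark time integration of m*u'' + c*u' + k*u = f(t)
    (average-acceleration variant, unconditionally stable)."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n - 1):
        # effective load assembled from the known state at step i
        p = (f[i + 1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                    + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Free vibration of an undamped oscillator (m = 1, k = 4, so w = 2 rad/s):
# the exact solution for u0 = 1, v0 = 0 is u(t) = cos(2 t).
dt = 0.01
t = np.arange(0.0, 5.0, dt)
u, v, a = newmark_beta(1.0, 0.0, 4.0, np.zeros_like(t), dt, u0=1.0)
```

In the layered-soil setting the same scheme applies with matrix m, c, k assembled over the layers; the UKF then wraps this forward model to invert for the lower-layer motion.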
Enabling Ada and OpenMP runtimes interoperability through template-based execution
http://hdl.handle.net/2117/189546 (2020-05-29)
Royuela Alcázar, Sara; Pinho, Luís Miguel; Quiñones, Eduardo
The growing trend to support parallel computation, to enable the performance gains of recent hardware architectures, is increasingly present in more conservative domains such as safety-critical systems. Applications such as autonomous driving require levels of performance achievable only by fully exploiting the potential parallelism of these architectures. To address this requirement, the Ada language, designed for safety and robustness, is considering supporting parallel features in the next revision of the standard (Ada 202X). Recent works have motivated the use of OpenMP, a de facto standard in high-performance computing, to enable parallelism in Ada, showing the compatibility of the two models and proposing static analysis to enhance reliability. This paper summarizes these previous efforts towards the integration of OpenMP into Ada, to exploit its benefits in terms of portability, programmability and performance while providing the safety benefits of Ada in terms of correctness. The paper extends those works by proposing and evaluating an application transformation that enables the OpenMP and Ada runtimes to operate (under certain restrictions) as if they were integrated. The objective is to allow Ada programmers to naturally experiment with and evaluate the benefits of parallelizing concurrent Ada tasks with OpenMP, while ensuring compliance with both specifications.