Conference papers and communications (Ponències/Comunicacions de congressos) — http://hdl.handle.net/2117/80528

RosneT: A block tensor algebra library for out-of-core quantum computing simulation
http://hdl.handle.net/2117/364344 (2022-03-17)
Sánchez Ramírez, Sergio; Conejero Bañón, Francisco Javier; Lordan Gomis, Francesc; Queralt Calafat, Anna; Cortés, Toni; Badia Sala, Rosa Maria; García Sáez, Artur
With the advent of more powerful quantum computers, the need for larger quantum simulations has grown. Since the amount of resources grows exponentially with the size of the target system, tensor networks emerge as an optimal framework for representing quantum states as tensor factorizations. As the extent of a tensor network increases, so does the size of the intermediate tensors, requiring HPC tools for their manipulation. Simulations of medium-sized circuits cannot fit in local memory, and solutions for the distributed contraction of tensors are scarce. In this work we present RosneT, a library for distributed, out-of-core block tensor algebra. We use the PyCOMPSs programming model to transform tensor operations into a collection of tasks handled by the COMPSs runtime, targeting executions on existing and upcoming exascale supercomputers. We report results validating our approach, showing good scalability in simulations of quantum circuits of up to 53 qubits.
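The block decomposition at the heart of this kind of out-of-core tensor algebra can be illustrated with a plain blocked matrix product: the large contraction is split into many small block-level tasks, each touching only a few blocks at a time, which a runtime such as COMPSs could then schedule and spill to disk as needed. The sketch below is purely illustrative (pure Python, square matrices whose size divides evenly by the block size); it is not RosneT's API, and a real run would dispatch each `(i, j, k)` iteration as a task rather than a loop body.

```python
# Blocked C = A @ B, the core idea behind out-of-core block tensor algebra:
# every block-pair contraction is an independent unit of work.

def split_blocks(m, bs):
    """Partition a square matrix (list of lists) into bs x bs blocks."""
    n = len(m)
    return [[[row[j:j + bs] for row in m[i:i + bs]]
             for j in range(0, n, bs)] for i in range(0, n, bs)]

def block_matmul(A, B, bs):
    """Blocked matrix product; each (i, j, k) iteration is one 'task'."""
    n = len(A)
    Ab, Bb = split_blocks(A, bs), split_blocks(B, bs)
    C = [[0.0] * n for _ in range(n)]
    nb = n // bs
    for i in range(nb):
        for j in range(nb):
            for k in range(nb):          # contract one block pair
                a, b = Ab[i][k], Bb[k][j]
                for r in range(bs):
                    for c in range(bs):
                        C[i*bs + r][j*bs + c] += sum(
                            a[r][t] * b[t][c] for t in range(bs))
    return C
```

Because each block task reads only two small blocks and accumulates into one output block, the working set per task is bounded regardless of the full tensor size, which is what makes the out-of-core execution possible.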
Epicentral region estimation using convolutional neural networks
http://hdl.handle.net/2117/363035 (2022-02-24)
Cruz de la Cruz, Stalin Leonel; Tous Liesa, Rubén; Otero Calviño, Beatriz; Alvarado Bermúdez, Leonardo; Mus León, Sergi; Rojas Ulacio, Otilio Jose
Recent works have assessed the capability of deep neural networks to estimate the epicentral source region of a seismic event from a single-station, three-channel signal. In all these cases, the geographical partitioning is performed by automatic tessellation algorithms such as the Voronoi decomposition. This paper evaluates the hypothesis that source-region estimation accuracy increases significantly if the geographical partitioning takes regional geological characteristics, such as tectonic plate boundaries, into account. It also proposes transforming the training data to a Projected Coordinate Reference (PCR) System to increase the accuracy of the predictive model.
A deep convolutional neural network (CNN) is applied to the data recorded by the broadband stations of the Venezuelan Foundation of Seismological Research (FUNVISIS) in the region from 9.5 to 11.5ºN and 67.0 to 69.0ºW between April 2018 and April 2019. To estimate the epicentral source region of a detected event, several geographical tessellations provided by seismologists from the area are employed. These tessellations, with different numbers of partitions, consider the fault systems of the study region (the San Sebastián, La Victoria, and Morón fault systems). The results are compared with those obtained with automatic partitioning performed by the k-means algorithm.
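The automatic-partitioning baseline named above, k-means over epicentre coordinates, is easy to sketch from scratch: event locations are clustered into k source regions and each event is labelled with its region id. The coordinates and the deterministic first-k initialisation below are illustrative choices, not the study's pipeline (which uses the tessellation only to define the CNN's output classes).

```python
# Toy k-means over (lat, lon) pairs: the automatic tessellation baseline.

def kmeans(points, k, iters=50):
    cents = points[:k]                      # simple deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                    # assign to nearest centroid
            d = [(p[0]-c[0])**2 + (p[1]-c[1])**2 for c in cents]
            groups[d.index(min(d))].append(p)
        cents = [                           # recompute centroids
            (sum(q[0] for q in g)/len(g), sum(q[1] for q in g)/len(g))
            if g else cents[i]
            for i, g in enumerate(groups)]
    labels = []
    for p in points:
        d = [(p[0]-c[0])**2 + (p[1]-c[1])**2 for c in cents]
        labels.append(d.index(min(d)))
    return cents, labels
```

A fault-aware tessellation would instead assign events to polygons drawn along the fault systems; the paper's hypothesis is that such geologically informed regions are easier for the CNN to discriminate than these purely geometric clusters.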
A data-driven wall-shear stress model for LES using gradient boosted decision trees
http://hdl.handle.net/2117/358666 (2021-12-16)
Radhakrishnan, Sarath; Adu Gyamfi, Lawrence; Miró Jané, Arnau; Font García, Bernat; Calafell Sandiumenge, Joan; Lehmkuhl Barba, Oriol
With recent advances in machine learning, data-driven strategies can augment wall modeling in large eddy simulation (LES). In this work, a wall model based on gradient boosted decision trees is presented. The model is trained to learn the boundary layer of a turbulent channel flow so that it can make predictions for significantly different flows where the equilibrium assumptions are valid. The methodology for building the model is presented in detail, and the experiment conducted to choose the training data is described. The trained model is tested a posteriori on a turbulent channel flow and on the flow over a wall-mounted hump. The results are compared with those of an algebraic equilibrium wall model, and the performance is evaluated. The results show that the model has succeeded in learning the boundary layer, proving the effectiveness of our data-driven model development methodology, which is extendable to complex flows.
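The class of model used here, a gradient boosted ensemble of decision trees, can be demonstrated with a hand-rolled toy: each boosting stage fits a depth-1 tree (a stump) to the residuals of the current ensemble. This is schematic only; the paper trains on channel-flow boundary-layer quantities with a production GBDT library, not this one-feature regressor.

```python
# Minimal gradient boosting with regression stumps on 1-D data.

def fit_stump(xs, ys):
    """Best single-threshold regressor minimising squared error."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for s in range(1, len(xs)):
        thr = xs[order[s]]
        left = [ys[i] for i in order[:s]]
        right = [ys[i] for i in order[s:]]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x < thr else rm

def boost(xs, ys, rounds=20, lr=0.5):
    """Each stage fits a stump to the residuals of the ensemble so far."""
    stumps, resid = [], list(ys)
    for _ in range(rounds):
        st = fit_stump(xs, resid)
        stumps.append(st)
        resid = [r - lr * st(x) for x, r in zip(xs, resid)]
    return lambda x: sum(lr * st(x) for st in stumps)
```

The wall model works the same way in higher dimension: given near-wall flow inputs, the ensemble's summed stage predictions approximate the wall-shear stress that the algebraic equilibrium model would otherwise supply.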
Development of the conditional moment closure with a multi-code approach in the frame of Large Eddy Simulations
http://hdl.handle.net/2117/350175 (2021-07-27)
Pérez Sánchez, Eduardo Javier; Mira, Daniel; Lehmkuhl Barba, Oriol; Houzeaux, Guillaume
The Conditional Moment Closure (CMC), devised for turbulent combustion modelling, was implemented in the multiphysics code Alya, which is based on the Finite Element Method (FEM), in the frame of Large Eddy Simulations (LES) on unstructured meshes. A multi-code approach has been developed to run the transport equations for the non-reacting variables (CFD) and the conditioned quantities (CMC) separately on two different meshes. The fundamental aspects of the algorithm are discussed, and a new strategy for the interpolation between the CFD and CMC meshes is described. The Cambridge swirling burner is analysed and simulation results are compared to measurements.
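The two-mesh coupling described above can be caricatured in one dimension: the CFD field lives on a fine grid, the CMC field on a coarse one, and each exchange restricts (averages) fine values onto the coarse mesh and prolongs coarse values back. The grid ratio and piecewise-constant transfer below are illustrative stand-ins, not the paper's interpolation strategy.

```python
# 1-D cartoon of fine (CFD) <-> coarse (CMC) field transfer.

def restrict(fine, ratio):
    """Average blocks of `ratio` fine cells onto one coarse cell."""
    return [sum(fine[i:i + ratio]) / ratio
            for i in range(0, len(fine), ratio)]

def prolong(coarse, ratio):
    """Piecewise-constant injection of coarse values back to the fine grid."""
    out = []
    for v in coarse:
        out.extend([v] * ratio)
    return out
```

In the multi-code setting each transfer also crosses a process boundary, so the restriction/prolongation pair doubles as the data-exchange contract between the CFD and CMC solvers.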
Optimization of the progress variable definition using a genetic algorithm for the combustion of complex fuels
http://hdl.handle.net/2117/350010 (2021-07-23)
Both, Ambrus; Mira Martínez, Daniel; Lehmkuhl Barba, Oriol
In this work, counterflow diffusion flamelets of n-heptane and air at stable and unsteady extinguishing conditions are used to build a thermo-chemical database for Computational Fluid Dynamics (CFD) calculations. The injectivity of the progress variable definition is achieved through an optimization process using a genetic algorithm combined with an adequate objective function.
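The optimisation idea can be sketched with a toy genetic algorithm: a progress variable c = Σᵢ wᵢ·Yᵢ should increase monotonically (be injective) along each flamelet profile, and a GA searches the species-weight vector for maximal monotonicity. The fake profiles, the fitness function, and all GA parameters below are invented for illustration; the paper's objective function and flamelet data are different.

```python
import random

random.seed(0)

# Fake "flamelet" profiles of 3 species over 6 points along the flame.
PROFILES = [
    [0.0, 0.1, 0.3, 0.2, 0.1, 0.0],   # intermediate-like: non-monotone
    [0.0, 0.1, 0.2, 0.4, 0.7, 1.0],   # product-like: increasing
    [1.0, 0.8, 0.5, 0.3, 0.1, 0.0],   # fuel-like: decreasing
]

def fitness(w):
    """Fraction of consecutive points where c = sum(w_i * Y_i) increases."""
    c = [sum(wi * p[j] for wi, p in zip(w, PROFILES))
         for j in range(len(PROFILES[0]))]
    ok = sum(1 for a, b in zip(c, c[1:]) if b > a)
    return ok / (len(c) - 1)

def ga(pop_size=20, gens=30):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]           # keep the best half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(a, b)]   # crossover
            i = random.randrange(3)
            child[i] += random.gauss(0, 0.2)                # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

A perfectly injective weighting exists here (e.g. weighting products positively and fuel negatively), so the GA has a well-defined optimum to converge toward.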
High-fidelity simulations of the mixing and combustion of a technically premixed hydrogen flame
http://hdl.handle.net/2117/350008 (2021-07-23)
Mira Martínez, Daniel; Both, Ambrus; Lehmkuhl Barba, Oriol; Gomez Gonzalez, Samuel; Forck, Jonathan; Tanneberger, Tom; Stathopoulos, Panagiotis; Paschereit, Christian Oliver
Numerical simulations are used here to gain further understanding of the flashback mechanism of a technically premixed hydrogen flame operated at lean conditions. Recent work by the authors (Mira et al., 2020) showed that the hydrogen momentum strongly influences the flame dynamics and plays a fundamental role in the stability limits of the combustor. The axial injection influences the vortex breakdown position and, therefore, the propensity of the burner to produce flashback. This work extends our previous work with a detailed description of the mixing process and of the influence of equivalence-ratio fluctuations and heat loss on the flame dynamics.
Semi implicit solver for high fidelity LES/DNS solutions of reacting flows
http://hdl.handle.net/2117/350007 (2021-07-23)
Surapaneni, Anurag; Mira Martínez, Daniel
A semi-implicit/point-implicit stiff solver (ODEPIM) for integrating chemistry in the context of high-fidelity LES/DNS simulations is presented. A detailed overview of the algorithm and its numerical formulation is given. The solver is then compared against CVODE, a state-of-the-art multi-order implicit solver, in terms of accuracy and cost. For typical LES/DNS timestep sizes, ODEPIM was found to be about one order of magnitude faster than CVODE, which makes it a compelling alternative to purely implicit methods. As described in the literature, ODEPIM relies on a fixed sub-timestep size for its integration steps, which limits the achievable speedup. A modification to the ODEPIM algorithm that determines the sub-timestep size dynamically is proposed, enabling greater speedup. Solutions of a triple-flame problem obtained with the static and dynamic ODEPIM solvers are compared against reference solutions obtained with CVODE. The dynamic solver was found to use the maximum permissible sub-timestep size, which on average was 4 to 8 times larger than the fixed sub-timestep size of the static solver. Since the sub-timestep size correlates directly with the CPU cost, the dynamic ODEPIM solver is significantly faster than the static solver; this improvement comes at a negligible loss in accuracy.
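The fixed- versus dynamic-sub-timestep trade-off can be illustrated on a single stiff scalar ODE, dy/dt = −k·y, advanced across one CFD step in sub-steps. Explicit Euler stands in for ODEPIM's semi-implicit update and the local error estimator is the standard Euler truncation bound; both are illustrative simplifications, not the paper's scheme. The point survives the simplification: choosing the largest sub-step whose error estimate stays under a tolerance takes far fewer sub-steps than a conservatively fixed one.

```python
# Fixed vs adaptive sub-stepping of dy/dt = -k*y across one CFD step dt.

def substep_fixed(y, k, dt, h):
    """Advance with a constant sub-step h; returns (y, substep count)."""
    t, steps = 0.0, 0
    while t < dt - 1e-15:
        y += h * (-k * y)
        t += h
        steps += 1
    return y, steps

def substep_dynamic(y, k, dt, tol=1e-3, h0=1e-4):
    """Grow/shrink h so the local error estimate stays below tol."""
    t, h, steps = 0.0, h0, 0
    while t < dt - 1e-15:
        h = min(h, dt - t)                   # never overshoot the CFD step
        err = 0.5 * h * h * k * k * abs(y)   # Euler local-error estimate
        if err > tol:
            h *= 0.5                         # too big: halve and retry
            continue
        y += h * (-k * y)
        t += h
        steps += 1
        if err < 0.25 * tol:
            h *= 2.0                         # comfortably safe: grow
    return y, steps
```

With k = 100 and dt = 0.01, the adaptive version settles on sub-steps several times larger than a safe fixed choice, mirroring the 4- to 8-fold sub-step growth reported above.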
Subdivided linear and curved meshes preserving features of a linear mesh model
http://hdl.handle.net/2117/346860 (2021-06-08)
Jiménez Ramos, Albert; Gargallo Peiró, Abel; Roca Navarro, Francisco Javier
To provide straight-edged and curved piece-wise polynomial meshes that target a unique smooth geometry while preserving the sharp features and smooth regions of the model, we propose a new fast curving method based on hierarchical subdivision and blending. No underlying target geometry is needed: only a straight-edged mesh with boundary entities marked to characterize the geometry features, together with a list of features to recast. The method features a unique sharp-to-smooth modeling capability not fully available in standard CAD packages. The goal is to obtain a volume mesh that, under successive refinement, leads to smooth regions bounded by the corresponding sharp features. The examples show that it is possible to refine and obtain smooth curves and surfaces while preserving sharp features determined by vertices and polylines. We conclude that the method is well suited to curving large quadratic and quartic meshes in low-memory configurations.
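A one-dimensional analogue of "subdivision that converges to smooth regions bounded by sharp features" is corner-cutting (Chaikin) subdivision on a polyline where some vertices are flagged sharp: repeated refinement smooths every unflagged stretch while flagged corners survive verbatim. This is only an analogy; the paper operates on volume meshes with hierarchical subdivision and blending, and the flag-per-vertex representation here is an invented simplification.

```python
# Chaikin corner-cutting on an open polyline of (x, y, is_sharp) tuples.

def subdivide(pts):
    """One corner-cutting pass; endpoints and sharp vertices are kept."""
    new = []
    for i, p in enumerate(pts):
        if p[2] or i == 0 or i == len(pts) - 1:
            new.append(p)                    # feature vertices survive
        if i < len(pts) - 1:                 # cut each edge at 1/4 and 3/4
            q = pts[i + 1]
            new.append((0.75 * p[0] + 0.25 * q[0],
                        0.75 * p[1] + 0.25 * q[1], False))
            new.append((0.25 * p[0] + 0.75 * q[0],
                        0.25 * p[1] + 0.75 * q[1], False))
    return new

def refine(pts, rounds=3):
    for _ in range(rounds):
        pts = subdivide(pts)
    return pts
```

Marking the corner of an L-shaped polyline as sharp keeps it through every refinement round; leaving it unmarked lets the successive cuts round it away, which is the sharp-to-smooth control the method exposes per feature.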
Improving object detection in paintings based on time contexts
http://hdl.handle.net/2117/341517 (2021-03-11)
Marinescu, Maria Cristina; Reshetnikov, Artem; More López, Joaquim
This paper proposes a novel approach to object detection in the Cultural Heritage domain, which relies on combining deep learning with semantic metadata about candidate objects extracted from existing sources such as Wikidata, dictionaries, or Google NGram. Working with cultural heritage presents challenges not present in everyday images. In computer vision, object detection models are usually trained on datasets whose classes are not imaginary concepts and have neither symbolic nor time-specific dimensions. Apart from this conceptual problem, paintings are limited in number and may represent the same concept in very different styles. Finally, the metadata associated with the images is often poor or nonexistent, which makes it hard to train a model properly. Our approach can improve the precision of object detection by placing the classes detected by a neural network model in time, based on the dates of their first known use. By taking into account the time of inception of objects such as the TV, cell phone, or scissors, and the appearance of some objects in the geographical space that corresponds to a painting (e.g., bananas or broccoli in 15th-century Europe), we can correct and refine the detected objects based on their chronological probability.
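The time-context correction reduces, in its simplest form, to a filter: a candidate detection is kept only if the painting's date is consistent with the object's date of first known use. The inception dates, class names, and confidence threshold below are illustrative placeholders, not the paper's metadata or model.

```python
# Drop anachronistic or low-confidence detections for a dated painting.

FIRST_USE = {            # year of first known use (illustrative values)
    "cell phone": 1973,
    "tv": 1927,
    "scissors": -1500,
    "book": 600,
}

def filter_detections(detections, painting_year, min_score=0.5):
    """detections: [(label, score)] -> detections consistent with the date."""
    kept = []
    for label, score in detections:
        if score < min_score:
            continue                         # too uncertain
        inception = FIRST_USE.get(label)
        if inception is not None and inception > painting_year:
            continue                         # object did not exist yet
        kept.append((label, score))
    return kept
```

The paper goes further than a hard cut-off by weighting detections with a chronological probability, but the hard filter already shows why a "cell phone" can never survive in a 17th-century canvas.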
Measuring spatial subdivisions in urban mobility with mobile phone data
http://hdl.handle.net/2117/336719 (2021-02-02)
Graells Garrido, Eduardo; Meta, Irene; Serra Burriel, Feliu; Reyes Valenzuela, Patricio Alejandro; Cucchietti, Fernando
The urban population grows constantly: by 2050, two-thirds of the world's population will reside in urban areas. This growth is faster and more complex than cities' ability to measure and plan for their sustainability. To understand what makes a city inclusive for all, we define a methodology to identify and characterize spatial subdivisions: areas with over- and under-representation of specific population groups, named hot and cold spots, respectively. Using aggregated mobile phone data, we apply this methodology to the city of Barcelona to assess the mobility of three groups of people: women, elders, and tourists. We find that, for all three groups, cold spots have a lower diversity of amenities and services than hot spots. Also, the cold spots of women and tourists tend to have lower population income. These insights apply to the floating population of Barcelona, thus augmenting the scope of how inclusiveness can be analyzed in the city.
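The hot/cold-spot notion can be sketched as a share comparison: an area is a "hot spot" for a group when the group's local share exceeds its city-wide share by some factor, and a "cold spot" when it falls below it. The 1.5× threshold and the counts in the test are invented; the paper derives its spots from aggregated mobile phone records with a statistical criterion rather than a fixed ratio.

```python
# Label areas hot/cold/neutral for one group by over/under-representation.

def classify_spots(area_counts, group, factor=1.5):
    """area_counts: {area: {group: count}} -> {area: 'hot'|'cold'|'neutral'}"""
    total_group = sum(c.get(group, 0) for c in area_counts.values())
    total_all = sum(sum(c.values()) for c in area_counts.values())
    city_share = total_group / total_all
    labels = {}
    for area, counts in area_counts.items():
        share = counts.get(group, 0) / sum(counts.values())
        if share > factor * city_share:
            labels[area] = "hot"              # over-represented
        elif share < city_share / factor:
            labels[area] = "cold"             # under-represented
        else:
            labels[area] = "neutral"
    return labels
```

Once areas are labelled, the paper's comparisons (amenity diversity, income) become per-label aggregations over the hot and cold sets.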