Conference papers and presentations
http://hdl.handle.net/2117/80528
Thu, 18 Aug 2022 19:19:00 GMT
http://hdl.handle.net/2117/368981
Including in Situ Visualization and Analysis in PDI
Witzler, Christian; Zavala Aké, J. Miguel; Sierociński, Karol; Owen, Herbert
The goal of this work was to integrate in situ capabilities into the general-purpose code-coupling library PDI [1], using the simulation code Alya as an example. An open design was adopted so that the approach can later be extended to other simulation codes that use PDI.
An in-transit solution was chosen to separate the simulation as much as possible from the analysis and visualization. ADIOS2 is used for data transport. To avoid committing too strongly to one tool, SENSEI is interposed both between the simulation and ADIOS2 and, at the in-transit endpoint, between ADIOS2 and the visualization software; a user who prefers a different solution can thus implement it easily. For the time being, visualization with ParaView Catalyst is the default.
Wed, 22 Jun 2022 09:58:23 GMT
http://hdl.handle.net/2117/364344
RosneT: A block tensor algebra library for out-of-core quantum computing simulation
Sánchez Ramírez, Sergio; Conejero Bañón, Francisco Javier; Lordan Gomis, Francesc; Queralt Calafat, Anna; Cortés, Toni; Badia Sala, Rosa Maria; García Sáez, Artur
With the advent of more powerful quantum computers, the need for larger quantum simulations has grown. As the amount of resources grows exponentially with the size of the target system, tensor networks emerge as an optimal framework for representing quantum states as tensor factorizations. As the extent of a tensor network increases, so does the size of the intermediate tensors, requiring HPC tools for their manipulation. Simulations of medium-sized circuits cannot fit in local memory, and solutions for distributed contraction of tensors are scarce. In this work we present RosneT, a library for distributed, out-of-core block tensor algebra. We use the PyCOMPSs programming model to transform tensor operations into a collection of tasks handled by the COMPSs runtime, targeting executions on existing and upcoming exascale supercomputers. We report results validating our approach, showing good scalability in simulations of quantum circuits of up to 53 qubits.
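The core idea — splitting a large contraction into independent block-level operations — can be sketched as follows. The blocking helper and the plain Python "task" function below are illustrative stand-ins for PyCOMPSs `@task`-decorated operations, not RosneT's actual API; block size and shapes are arbitrary.

```python
# Minimal sketch of blocked tensor contraction (here, a blocked matrix
# product): each partial block product is an independent unit of work that a
# task-based runtime such as COMPSs could schedule out-of-core. Illustrative
# stand-in, not RosneT's API.
import numpy as np

BLOCK = 2  # block edge length (hypothetical)

def split_blocks(t, b):
    """Split a square matrix into a grid of b x b blocks."""
    n = t.shape[0] // b
    return [[t[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(n)] for i in range(n)]

def block_matmul(a_blocks, b_blocks):
    """Blocked C = A @ B; each partial product could run as a separate task."""
    n = len(a_blocks)
    c = [[np.zeros_like(a_blocks[0][0]) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a_blocks[i][k] @ b_blocks[k][j]  # one "task"
    return np.block(c)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
C = block_matmul(split_blocks(A, BLOCK), split_blocks(B, BLOCK))
assert np.allclose(C, A @ B)  # blocked result matches the dense contraction
```

Because each inner-loop product touches only two blocks, a runtime can keep blocks on disk or on remote nodes and stage them in on demand, which is what makes the out-of-core approach scale.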
Thu, 17 Mar 2022 10:39:49 GMT
http://hdl.handle.net/2117/363035
Epicentral region estimation using convolutional neural networks
Cruz de la Cruz, Stalin Leonel; Tous Liesa, Rubén; Otero Calviño, Beatriz; Alvarado Bermúdez, Leonardo; Mus León, Sergi; Rojas Ulacio, Otilio Jose
Recent works have assessed the capability of deep neural networks to estimate the epicentral source region of a seismic event from a single-station, three-channel signal. In all cases, the geographical partitioning is performed by automatic tessellation algorithms such as the Voronoi decomposition. This paper evaluates the hypothesis that source-region estimation accuracy increases significantly if the geographical partitioning takes regional geological characteristics, such as tectonic plate boundaries, into account. It also proposes transforming the training data, based on a Projected Coordinate Reference (PCR) System, to increase the accuracy of the predictive model.
A deep convolutional neural network (CNN) is applied to the data recorded by the broadband stations of the Venezuelan Foundation of Seismological Research (FUNVISIS) in the region from 9.5 to 11.5°N and 67.0 to 69.0°W between April 2018 and April 2019. To estimate the epicentral source region of a detected event, several geographical tessellations provided by seismologists from the area are employed. These tessellations, with different numbers of partitions, take into account the fault systems of the study region (the San Sebastián, La Victoria and Morón fault systems). The results are compared to those obtained with automatic partitioning performed by the k-means algorithm.
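The automatic-partitioning baseline can be sketched as follows: k-means over epicentre coordinates yields the candidate source regions that the classifier would predict. The epicentres below are synthetic points drawn inside the stated study box, not FUNVISIS data, and the small k-means implementation is purely illustrative.

```python
# Minimal sketch of k-means partitioning of epicentres into source regions,
# the baseline the paper compares against. Coordinates are synthetic.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centre, then recompute."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == c].mean(axis=0)
                            if np.any(labels == c) else centers[c]
                            for c in range(k)])
    return labels, centers

# synthetic epicentres inside the 9.5-11.5N / 67-69W study box
rng = np.random.default_rng(1)
lat = rng.uniform(9.5, 11.5, 200)
lon = rng.uniform(-69.0, -67.0, 200)
labels, centers = kmeans(np.column_stack([lat, lon]), k=4)
assert labels.shape == (200,) and centers.shape == (4, 2)
```

The geologically informed alternative would replace the k-means labels with membership in hand-drawn fault-system polygons; the classifier's target classes change, but the training pipeline stays the same.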
Thu, 24 Feb 2022 14:02:34 GMT
http://hdl.handle.net/2117/358666
A data-driven wall-shear stress model for LES using gradient boosted decision trees
Radhakrishnan, Sarath; Adu Gyamfi, Lawrence; Miró Jané, Arnau; Font García, Bernat; Calafell Sandiumenge, Joan; Lehmkuhl Barba, Oriol
With recent advances in machine learning, data-driven strategies can augment wall modeling in large-eddy simulation (LES). In this work, a wall model based on gradient-boosted decision trees is presented. The model is trained to learn the boundary layer of a turbulent channel flow so that it can make predictions for significantly different flows where the equilibrium assumptions are valid. The methodology for building the model is presented in detail, and the experiment conducted to choose the training data is described. The trained model is tested a posteriori on a turbulent channel flow and on the flow over a wall-mounted hump. The results are compared with those of an algebraic equilibrium wall model, and the performance is evaluated. The results show that the model succeeds in learning the boundary layer, proving the effectiveness of our data-driven model-development methodology, which is extendable to complex flows.
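The modeling idea can be sketched with off-the-shelf gradient boosting: a tree ensemble learns the mapping from off-wall flow quantities to the friction velocity (and hence the wall-shear stress). The synthetic log-law samples below stand in for the turbulent channel database, and scikit-learn's regressor stands in for the paper's model; all constants and ranges are illustrative.

```python
# Minimal sketch of a data-driven wall model: a gradient-boosted tree
# ensemble learns (velocity u at wall distance y) -> friction velocity u_tau,
# with training data generated from the equilibrium log law. Illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

kappa, B = 0.41, 5.2      # log-law constants
nu = 1.5e-5               # kinematic viscosity (air, approx.)
rng = np.random.default_rng(0)

u_tau = rng.uniform(0.03, 0.08, 2000)             # target: friction velocity
y = rng.uniform(0.05, 0.2, 2000)                  # wall-normal sampling height
u = u_tau * (np.log(y * u_tau / nu) / kappa + B)  # log-law velocity sample

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(np.column_stack([u, y]), u_tau)

# The learned map recovers u_tau (and tau_w = rho * u_tau**2) from (u, y).
pred = model.predict(np.column_stack([u[:50], y[:50]]))
assert np.mean(np.abs(pred - u_tau[:50]) / u_tau[:50]) < 0.05
```

In an LES, such a model would be queried at each wall face every timestep, so the cheap inference of tree ensembles is part of their appeal over iterative log-law solves.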
Thu, 16 Dec 2021 12:00:47 GMT
http://hdl.handle.net/2117/350175
Development of the conditional moment closure with a multi-code approach in the frame of Large Eddy Simulations
Pérez Sánchez, Eduardo Javier; Mira, Daniel; Lehmkuhl Barba, Oriol; Houzeaux, Guillaume
The Conditional Moment Closure (CMC), devised for turbulent combustion modelling, was implemented in the multiphysics code Alya, which is based on the Finite Element Method (FEM), in the frame of Large Eddy Simulations (LES) on unstructured meshes. A multi-code approach was developed to solve the transport equations for the non-reacting variables (CFD) and the conditioned quantities (CMC) separately on two different meshes. The fundamental aspects of the algorithm are discussed, and a new strategy for the interpolation between the CFD and CMC meshes is described. The Cambridge swirling burner is analysed, and simulation results are compared to measurements.
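The mesh-exchange idea can be sketched in one dimension: a field computed on a fine CFD grid is restricted to a coarser CMC grid, and conditioned quantities are prolonged back. The 1-D linear interpolation below is a generic stand-in and does not reproduce the paper's interpolation strategy; grids and fields are illustrative.

```python
# Minimal sketch of field exchange between a fine CFD mesh and a coarse CMC
# mesh, the two-grid pattern of the multi-code approach. Illustrative 1-D case.
import numpy as np

x_cfd = np.linspace(0.0, 1.0, 101)   # fine CFD mesh nodes
x_cmc = np.linspace(0.0, 1.0, 11)    # coarse CMC mesh nodes

z_cfd = np.sin(np.pi * x_cfd)        # e.g. a mixture-fraction field on the CFD mesh

# CFD -> CMC: restrict the field to the coarse mesh
z_cmc = np.interp(x_cmc, x_cfd, z_cfd)

# CMC -> CFD: prolong conditioned quantities back to the fine mesh
z_back = np.interp(x_cfd, x_cmc, z_cmc)

assert np.max(np.abs(z_back - z_cfd)) < 0.02  # the coarse round trip stays close
```

In the real coupled run the two codes advance concurrently and exchange these fields every coupling step, so the interpolation cost and accuracy directly affect the overall scheme.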
Tue, 27 Jul 2021 14:06:38 GMT
http://hdl.handle.net/2117/350010
Optimization of the progress variable definition using a genetic algorithm for the combustion of complex fuels
Both, Ambrus; Mira Martínez, Daniel; Lehmkuhl Barba, Oriol
In this work, counterflow diffusion flamelets of n-heptane and air at stable and unsteady extinguishing conditions are used to build a thermo-chemical database for Computational Fluid Dynamics (CFD) calculations. The injectivity of the progress variable definition is achieved through an optimization process using a genetic algorithm combined with an adequate objective function.
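The optimization idea can be sketched as follows: a genetic algorithm searches for species weights so that the progress variable C = Σₖ αₖYₖ is strictly monotone (hence injective) along a flamelet. The "species profiles" below are synthetic stand-ins for flamelet mass fractions, and the deliberately simple GA and objective are illustrative, not the paper's formulation.

```python
# Minimal sketch: a genetic algorithm tunes weights alpha so the weighted sum
# of (synthetic) species profiles is strictly increasing, i.e. injective.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
# Synthetic profiles along a flamelet: one monotone, two non-monotone.
Y = np.stack([x**2, np.sin(np.pi * x), x * (1.0 - x)])

def fitness(alpha):
    """Smallest increment of C = alpha . Y; positive means strictly monotone."""
    return np.diff(alpha @ Y).min()

pop = rng.uniform(0.0, 1.0, (40, 3))        # initial population of weight vectors
for _ in range(80):
    order = np.argsort([-fitness(a) for a in pop])
    parents = pop[order[:20]]               # elitist selection: keep the best half
    children = parents + rng.normal(0.0, 0.05, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
assert fitness(best) > 0.0   # the optimized progress variable is injective in x
```

A practical objective would combine this monotonicity requirement with terms rewarding good resolution of the database, which is where the choice of objective function matters.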
Fri, 23 Jul 2021 10:24:25 GMT
http://hdl.handle.net/2117/350008
High-fidelity simulations of the mixing and combustion of a technically premixed hydrogen flame
Mira Martínez, Daniel; Both, Ambrus; Lehmkuhl Barba, Oriol; Gomez Gonzalez, Samuel; Forck, Jonathan; Tanneberger, Tom; Stathopoulos, Panagiotis; Paschereit, Christian Oliver
Numerical simulations are used here to gain further understanding of the flashback mechanism of a technically premixed hydrogen flame operated under lean conditions. Recent work by the authors (Mira et al., 2020) showed that the hydrogen momentum strongly influences the flame dynamics and plays a fundamental role in the stability limits of the combustor. The axial injection influences the vortex breakdown position and, therefore, the propensity of the burner to flash back. This work extends our previous work with a detailed description of the mixing process and of the influence of equivalence-ratio fluctuations and heat loss on the flame dynamics.
Fri, 23 Jul 2021 10:16:58 GMT
http://hdl.handle.net/2117/350007
Semi-implicit solver for high-fidelity LES/DNS solutions of reacting flows
Surapaneni, Anurag; Mira Martínez, Daniel
A semi-implicit/point-implicit stiff solver (ODEPIM) for integrating chemistry in the context of high-fidelity LES/DNS simulations is presented. A detailed overview of the algorithm and its numerical formulation is given. The solver is then compared in accuracy and cost against CVODE, a state-of-the-art multi-order implicit solver. For typical LES/DNS timestep sizes, ODEPIM was found to be about one order of magnitude faster than CVODE, which makes it a compelling alternative to purely implicit methods. As described in the literature, ODEPIM depends on a fixed sub-timestep size for its integration steps, which limits the speedup the solver can achieve. A modification of the ODEPIM algorithm that determines the sub-timestep size dynamically is proposed, enabling greater speedup. Solutions of a triple-flame problem obtained with the static and dynamic ODEPIM solvers are compared against reference solutions obtained with CVODE. The dynamic ODEPIM solver was found to use the maximum permissible sub-timestep size, which on average was 4 to 8 times larger than the fixed sub-timestep size of the static solver. Since the sub-timestep size correlates directly with the CPU cost, the dynamic ODEPIM solver is significantly faster than the static one; this improvement comes at a negligible loss in accuracy.
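The static-versus-dynamic sub-stepping idea can be sketched on a toy problem: a stiff chemistry ODE is advanced over one CFD timestep in sub-steps, either with a fixed conservative sub-step or with the largest sub-step the local rate permits. The single-species decay equation, backward-Euler update, and step-size rule below are illustrative stand-ins for ODEPIM's semi-implicit formulation, not a reproduction of it.

```python
# Minimal sketch of sub-stepped stiff integration over one CFD timestep:
# dy/dt = -k*y with backward-Euler sub-steps, fixed vs rate-adapted step size.
def substep_integrate(y0, k, dt_cfd, dt_sub):
    """Advance over dt_cfd with a fixed sub-timestep dt_sub."""
    y, t, steps = y0, 0.0, 0
    while t < dt_cfd:
        h = min(dt_sub, dt_cfd - t)
        y = y / (1.0 + k * h)   # backward Euler (point-implicit for linear decay)
        t += h
        steps += 1
    return y, steps

def dynamic_substep_integrate(y0, k, dt_cfd, c=0.05):
    """Same scheme, but the sub-step adapts to the local rate: h ~ c / k."""
    y, t, steps = y0, 0.0, 0
    while t < dt_cfd:
        h = min(c / k, dt_cfd - t)  # largest permissible sub-step for rate k
        y = y / (1.0 + k * h)
        t += h
        steps += 1
    return y, steps

y_fix, n_fix = substep_integrate(1.0, 100.0, 0.01, 1e-4)
y_dyn, n_dyn = dynamic_substep_integrate(1.0, 100.0, 0.01)
assert n_dyn < n_fix   # fewer, larger sub-steps at comparable accuracy
```

Since cost scales with the number of sub-steps, taking the maximum permissible sub-step is exactly where the speedup of the dynamic variant comes from.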
Fri, 23 Jul 2021 10:08:52 GMT
http://hdl.handle.net/2117/346860
Subdivided linear and curved meshes preserving features of a linear mesh model
Jiménez Ramos, Albert; Gargallo Peiró, Abel; Roca Navarro, Francisco Javier
To provide straight-edged and curved piecewise polynomial meshes that target a unique smooth geometry while preserving the sharp features and smooth regions of the model, we propose a new fast curving method based on hierarchical subdivision and blending. No underlying target geometry is needed: only a straight-edged mesh with boundary entities marked to characterize the geometry features, and a list of features to recast. The method features a unique sharp-to-smooth modeling capability not fully available in standard CAD packages. The goal is to obtain a volume mesh that under successive refinement leads to smooth regions bounded by the corresponding sharp features. The examples show that it is possible to refine and obtain smooth curves and surfaces while preserving the sharp features determined by vertices and polylines. We conclude that the method is well suited to curving large quadratic and quartic meshes in low-memory configurations.
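The subdivide-while-preserving idea can be sketched in one dimension: a subdivision scheme smooths a polyline, while vertices marked as sharp features stay fixed. Chaikin corner-cutting below is a generic stand-in for the paper's hierarchical subdivision and blending; the polyline and marked vertex are illustrative.

```python
# Minimal sketch: Chaikin subdivision of a polyline, with marked sharp
# vertices preserved by subdividing each segment between them independently.
def chaikin_once(pts):
    """One round of Chaikin corner cutting; the two endpoints are kept exactly."""
    new = [pts[0]]
    for (ax, ay), (bx, by) in zip(pts, pts[1:]):
        new.append((0.75 * ax + 0.25 * bx, 0.75 * ay + 0.25 * by))
        new.append((0.25 * ax + 0.75 * bx, 0.25 * ay + 0.75 * by))
    new.append(pts[-1])
    return new

def subdivide_preserving(pts, sharp_idx, rounds=3):
    """Subdivide between sharp vertices so those feature points survive."""
    cuts = [0] + sorted(sharp_idx) + [len(pts) - 1]
    out = []
    for s, e in zip(cuts, cuts[1:]):
        seg = pts[s:e + 1]
        for _ in range(rounds):
            seg = chaikin_once(seg)
        out.extend(seg[:-1])
    out.append(pts[-1])
    return out

poly = [(0, 0), (1, 0), (1, 1), (2, 1)]       # corner at index 2 marked sharp
smooth = subdivide_preserving(poly, sharp_idx=[2])
assert (1, 1) in smooth                       # the sharp feature vertex survives
```

Under successive rounds each smooth segment converges to a smooth curve while the marked vertices remain exactly interpolated, which mirrors the sharp-to-smooth behaviour the method targets in 3-D.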
Tue, 08 Jun 2021 10:21:53 GMT
http://hdl.handle.net/2117/341517
Improving object detection in paintings based on time contexts
Marinescu, Maria Cristina; Reshetnikov, Artem; More López, Joaquim
This paper proposes a novel approach to object detection for the cultural heritage domain, which combines deep learning with semantic metadata about candidate objects extracted from existing sources such as Wikidata, dictionaries, or Google NGram. Working with cultural heritage presents challenges not found in everyday images. In computer vision, object detection models are usually trained on datasets whose classes are not imaginary concepts and have neither symbolic nor time-specific dimensions. Apart from this conceptual problem, paintings are limited in number and may represent the same concept in very different styles. Finally, the metadata associated with the images is often poor or nonexistent, which makes it hard to train a model properly. Our approach can improve the precision of object detection by placing the classes detected by a neural network model in time, based on the dates of their first known use. By taking into account the time of inception of objects such as the TV, the cell phone, or scissors, and the appearance of some objects in the geographical space that corresponds to a painting (e.g. bananas or broccoli in 15th-century Europe), we can correct and refine the detected objects based on their chronological probability.
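The time-context refinement can be sketched as a post-processing filter: detections whose class did not yet exist at the painting's date are discarded. The first-known-use years, class names, and detection tuples below are illustrative, not the paper's actual metadata or scoring.

```python
# Minimal sketch of chronological filtering of detector output: drop classes
# whose first known use postdates the painting. Years are illustrative.
FIRST_USE = {"tv": 1927, "cell phone": 1973, "scissors": -1500, "banana": None}

def refine(detections, painting_year):
    """Keep (label, score) pairs that are chronologically plausible."""
    kept = []
    for label, score in detections:
        first = FIRST_USE.get(label)
        if first is not None and first > painting_year:
            continue  # chronologically impossible for this painting: discard
        kept.append((label, score))
    return kept

dets = [("tv", 0.9), ("scissors", 0.8)]
assert refine(dets, 1650) == [("scissors", 0.8)]  # a TV cannot appear in 1650
```

A softer variant would down-weight rather than drop such detections, turning the hard cutoff into the chronological probability the abstract mentions.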
Thu, 11 Mar 2021 14:46:29 GMT