Conference presentations/papers
http://hdl.handle.net/2117/80528
2021-07-30T08:40:41Z
http://hdl.handle.net/2117/350175
Development of the conditional moment closure with a multi-code approach in the frame of Large Eddy Simulations
Pérez Sánchez, Eduardo Javier; Mira, Daniel; Lehmkuhl Barba, Oriol; Houzeaux, Guillaume
The Conditional Moment Closure (CMC), devised for turbulent combustion modelling, was implemented in the multiphysics code Alya, based on the Finite Element Method (FEM), in the frame of Large Eddy Simulations (LES) for unstructured meshes. A multi-code approach has been developed to solve the transport equations for the non-reacting variables (CFD) and the conditioned quantities (CMC) separately on two different meshes. The fundamental aspects of the algorithm are discussed, and a new strategy for the interpolation between the CFD and CMC meshes is described. The Cambridge swirling burner is analysed and simulation results are compared to measurements.
2021-07-27T14:06:38Z
http://hdl.handle.net/2117/350010
Optimization of the progress variable definition using a genetic algorithm for the combustion of complex fuels
Both, Ambrus; Mira Martínez, Daniel; Lehmkuhl Barba, Oriol
In this work, counterflow diffusion flamelets of n-heptane and air at stable and unsteady extinguishing conditions are used to build a thermo-chemical database for Computational Fluid Dynamics (CFD) calculations. The injectivity of the progress variable definition is achieved through an optimization process using a genetic algorithm combined with an adequate objective function.
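The abstract above does not give the optimization details; as a minimal sketch of the idea, the following toy genetic algorithm searches species weights w_k so that a progress variable c = Σ_k w_k Y_k becomes monotonic (and hence injective) along a synthetic flamelet profile. The species profiles, GA operators, and penalty function are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flamelet data: three species profiles along a normalized flamelet
# coordinate (hypothetical stand-ins for e.g. CO2, H2O and CO mass fractions).
x = np.linspace(0.0, 1.0, 50)
Y = np.stack([x**2, np.sqrt(x), np.sin(np.pi * x)])  # (n_species, n_points)

def objective(w):
    """Penalize non-monotonic progress variables c = sum_k w_k * Y_k.
    Zero penalty means c is non-decreasing, i.e. injective along the flamelet."""
    dc = np.diff(w @ Y)
    return float(np.sum(np.minimum(dc, 0.0) ** 2))

def genetic_search(pop_size=40, generations=60, sigma=0.05):
    pop = rng.random((pop_size, Y.shape[0]))  # random non-negative weights
    for _ in range(generations):
        scores = np.array([objective(w) for w in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]                # selection
        pairs = elite[rng.integers(0, len(elite), (pop_size - len(elite), 2))]
        children = pairs.mean(axis=1)                                   # blend crossover
        children += sigma * rng.normal(size=children.shape)             # mutation
        pop = np.vstack([elite, np.clip(children, 0.0, None)])
    scores = np.array([objective(w) for w in pop])
    return pop[np.argmin(scores)]

best = genetic_search()
```

Because the elite half of the population is carried over unchanged, the best penalty never increases across generations, and the search quickly suppresses the weight of the non-monotonic profile.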
2021-07-23T10:24:25Z
http://hdl.handle.net/2117/350008
High-fidelity simulations of the mixing and combustion of a technically premixed hydrogen flame
Mira Martínez, Daniel; Both, Ambrus; Lehmkuhl Barba, Oriol; Gomez Gonzalez, Samuel; Forck, Jonathan; Tanneberger, Tom; Stathopoulos, Panagiotis; Paschereit, Christian Oliver
Numerical simulations are used here to gain further understanding of the flashback mechanism of a technically premixed hydrogen flame operated under lean conditions. Recent work by the authors (Mira et al., 2020) showed that the hydrogen momentum strongly influences the flame dynamics and plays a fundamental role in the stability limits of the combustor. The axial injection influences the vortex breakdown position and, therefore, the propensity of the burner to produce flashback. This work extends our previous study with a detailed description of the mixing process and of the influence of equivalence ratio fluctuations and heat loss on the flame dynamics.
2021-07-23T10:16:58Z
http://hdl.handle.net/2117/350007
Semi implicit solver for high fidelity LES/DNS solutions of reacting flows
Surapaneni, Anurag; Mira Martínez, Daniel
A semi-implicit/point-implicit stiff solver (ODEPIM) for integrating chemistry in the context of high-fidelity LES/DNS simulations is presented. A detailed overview of the algorithm and its numerical formulation is given. The solver is then compared against CVODE, a state-of-the-art multi-order implicit solver, in terms of accuracy and cost. For typical LES/DNS timestep sizes, ODEPIM was found to be about one order of magnitude faster than CVODE, which makes it a compelling alternative to purely implicit methods. As described in the literature, ODEPIM relies on a fixed sub-timestep size for its integration steps, which limits the attainable speedup. A modification of the ODEPIM algorithm that determines the sub-timestep size dynamically is proposed, enabling greater speedup. Solutions of a triple flame problem obtained with the static and dynamic ODEPIM solvers are compared against reference solutions obtained with CVODE. The dynamic ODEPIM solver was found to use the maximum permissible sub-timestep size, which on average was 4 to 8 times larger than the fixed sub-timestep size of the static solver. The sub-timestep size correlates directly with the CPU cost, so the dynamic ODEPIM solver is significantly faster than the static one; this improvement comes at a negligible loss in accuracy.
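The static-versus-dynamic sub-stepping trade-off can be illustrated on a scalar stiff model problem. This is not ODEPIM itself: the point-implicit update, the fixed sub-timestep, and the hypothetical dynamic criterion (a fixed fraction of the local chemical timescale 1/k) are simplified stand-ins for the paper's scheme.

```python
import math

def substep(y, k, dt):
    # Point-implicit update for the stiff model problem dy/dt = -k*y:
    # treat the sink term implicitly, y_new = y / (1 + k*dt), stable for any dt.
    return y / (1.0 + k * dt)

def integrate_static(y0, k, t_end, dt_sub):
    """Fixed sub-timestep, as in the baseline solver."""
    y, t, steps = y0, 0.0, 0
    while t_end - t > 1e-12:
        dt = min(dt_sub, t_end - t)
        y, t, steps = substep(y, k, dt), t + dt, steps + 1
    return y, steps

def integrate_dynamic(y0, k, t_end, frac=0.5):
    """Sub-timestep chosen every step as a fraction of the local chemical
    timescale 1/k -- a hypothetical stand-in for the paper's dynamic rule."""
    y, t, steps = y0, 0.0, 0
    while t_end - t > 1e-12:
        dt = min(frac / k, t_end - t)
        y, t, steps = substep(y, k, dt), t + dt, steps + 1
    return y, steps

k, t_end = 50.0, 0.1
y_static, n_static = integrate_static(1.0, k, t_end, dt_sub=1e-3)
y_dynamic, n_dynamic = integrate_dynamic(1.0, k, t_end)
exact = math.exp(-k * t_end)
```

The dynamic variant takes roughly an order of magnitude fewer sub-steps here; since sub-step count is the dominant cost in chemistry integration, this mirrors the speedup mechanism described in the abstract.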
2021-07-23T10:08:52Z
http://hdl.handle.net/2117/346860
Subdivided linear and curved meshes preserving features of a linear mesh model
Jiménez Ramos, Albert; Gargallo Peiró, Abel; Roca Navarro, Francisco Javier
To provide straight-edged and curved piece-wise polynomial meshes that target a unique smooth geometry while preserving the sharp features and smooth regions of the model, we propose a new fast curving method based on hierarchical subdivision and blending. No underlying target geometry is needed: only a straight-edged mesh with boundary entities marked to characterize the geometry features, and a list of features to recast. The method features a unique sharp-to-smooth modeling capability not fully available in standard CAD packages. The goal is to obtain a volume mesh that under successive refinement leads to smooth regions bounded by the corresponding sharp features. The examples show that it is possible to refine and obtain smooth curves and surfaces while preserving the sharp features determined by vertices and polylines. We conclude that the method is well suited to curving large quadratic and quartic meshes in low-memory configurations.
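As a rough one-dimensional analogue of the subdivide-and-smooth idea (an illustration only, not the authors' volume scheme), the sketch below inserts edge midpoints along a polyline and relaxes vertices toward their neighbours, while vertices flagged as sharp features are kept fixed:

```python
def subdivide(points, sharp):
    """One round of a toy 1D feature-preserving subdivision: insert edge
    midpoints, then relax vertices not flagged as sharp toward the average
    of their neighbours (sharp vertices stay exactly in place)."""
    pts, flags = [], []
    for i, p in enumerate(points):
        pts.append(p)
        flags.append(sharp[i])
        if i + 1 < len(points):
            q = points[i + 1]
            pts.append(((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0))
            flags.append(False)  # inserted midpoints are always smooth
    out = list(pts)
    for i in range(1, len(pts) - 1):
        if not flags[i]:  # smoothing pass over non-sharp interior vertices
            a, b = pts[i - 1], pts[i + 1]
            out[i] = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return out, flags

# A corner marked smooth gets rounded off; marked sharp, it is preserved.
polyline = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
smooth_corner, _ = subdivide(polyline, [True, False, True])
sharp_corner, _ = subdivide(polyline, [True, True, True])
```

Repeating the round drives unflagged regions toward a smooth limit curve while flagged vertices remain as sharp features, which is the behaviour the abstract describes for volume meshes.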
2021-06-08T10:21:53Z
http://hdl.handle.net/2117/341517
Improving object detection in paintings based on time contexts
Marinescu, Maria Cristina; Reshetnikov, Artem; More López, Joaquim
This paper proposes a novel approach to object detection for the Cultural Heritage domain, which relies on combining Deep Learning with semantic metadata about candidate objects extracted from existing sources such as Wikidata, dictionaries, or Google NGram. Working with cultural heritage presents challenges not found in everyday images. In computer vision, object detection models are usually trained with datasets whose classes are not imaginary concepts and have neither symbolic nor time-specific dimensions. Apart from this conceptual problem, paintings are limited in number and may represent the same concept in very different styles. Finally, the metadata associated with the images is often poor or nonexistent, which makes it hard to properly train a model. Our approach can improve the precision of object detection by placing the classes detected by a neural network model in time, based on the dates of their first known use. By taking into account the time of inception of objects such as the TV, cell phone, or scissors, and the appearance of some objects in the geographical space that corresponds to a painting (e.g. bananas or broccoli in 15th-century Europe), we can correct and refine the detected objects based on their chronological probability.
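The chronological filtering step can be sketched as a simple post-processing rule over the detector's output; the class names, scores, and first-known-use years below are hypothetical placeholders, not the paper's actual data:

```python
# Hypothetical first-known-use years for a few detector classes
# (negative = BCE); real values would come from dictionaries or Wikidata.
FIRST_USE = {"tv": 1927, "cell phone": 1973, "scissors": -1500, "book": 600}

def filter_by_period(detections, painting_year):
    """Keep a detection only if its class plausibly existed when the painting
    was made; otherwise flag it as a likely misdetection."""
    kept, rejected = [], []
    for label, score in detections:
        ok = FIRST_USE.get(label, float("-inf")) <= painting_year
        (kept if ok else rejected).append((label, score))
    return kept, rejected

# A 17th-century painting should not contain a TV, however confident the model.
kept, rejected = filter_by_period([("tv", 0.81), ("scissors", 0.66)], 1650)
```

A softer variant could down-weight rather than discard scores, which is closer to the "chronological probability" refinement the abstract mentions.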
2021-03-11T14:46:29Z
http://hdl.handle.net/2117/336719
Measuring spatial subdivisions in urban mobility with mobile phone data
Graells Garrido, Eduardo; Meta, Irene; Serra Burriel, Feliu; Reyes Valenzuela, Patricio Alejandro; Cucchietti, Fernando
The urban population grows constantly; by 2050, two thirds of the world population will reside in urban areas. This growth is faster and more complex than the ability of cities to measure it and plan for their sustainability. To understand what makes a city inclusive for all, we define a methodology to identify and characterize spatial subdivisions: areas with over- and under-representation of specific population groups, named hot and cold spots, respectively. Using aggregated mobile phone data, we apply this methodology to the city of Barcelona to assess the mobility of three groups of people: women, elders, and tourists. We find that, for all three groups, cold spots have a lower diversity of amenities and services than hot spots. Also, cold spots of women and tourists tend to have lower population income. These insights apply to the floating population of Barcelona, thus broadening the scope of how inclusiveness can be analyzed in the city.
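A minimal sketch of how hot and cold spots could be labelled from aggregated counts, assuming a simple representation ratio (the group's local share over its city-wide share) with illustrative thresholds that are not taken from the paper:

```python
def spot_labels(group_counts, total_counts, hot=1.2, cold=0.8):
    """Label each area as a hot or cold spot for a population group by comparing
    the group's local share with its city-wide share (thresholds illustrative)."""
    global_share = sum(group_counts.values()) / sum(total_counts.values())
    labels = {}
    for area, total in total_counts.items():
        ratio = (group_counts.get(area, 0) / total) / global_share
        labels[area] = "hot" if ratio >= hot else "cold" if ratio <= cold else "neutral"
    return labels

# Toy counts: the group is over-represented in area A, under-represented in B, C.
labels = spot_labels({"A": 60, "B": 20, "C": 20}, {"A": 100, "B": 100, "C": 100})
```

With mobile phone data, the counts would come from the aggregated floating population per area rather than residential census figures.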
2021-02-02T14:19:37Z
http://hdl.handle.net/2117/335328
Random forest parameterization for earthquake catalog generation
Llácer Giner, David; Otero Calviño, Beatriz; Tous Liesa, Rubén; Monterrubio Velasco, Marisol; Carrasco Jiménez, José; Rojas Ulacio, Otilio
An earthquake is the vibration pattern of the Earth’s crust induced by the sliding of geological faults, and such events are usually recorded for later study. However, strong earthquakes are rare, small-magnitude events may pass unnoticed, and monitoring networks are limited in number and efficiency. Earthquake catalogs are therefore incomplete and scarce, and researchers have developed simulators of such catalogs. In this work, we start from synthetic catalogs generated with the TREMOL-3D software. TREMOL-3D is a stochastic method that produces earthquake catalogs with different statistical patterns depending on certain input parameters that mimic physical parameters. When an appropriate set of parameters is used, TREMOL-3D can generate synthetic catalogs with statistical properties similar to those observed in real catalogs. However, because of the size of the parameter space, a manual search becomes intractable. Therefore, aiming to increase the efficiency of the parameter search, we implement a Machine Learning approach based on Random Forest classification for automatic parameter screening. It has been implemented using the Python machine learning library scikit-learn.
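A minimal sketch of such a screening step with scikit-learn, using synthetic data in place of real TREMOL-3D runs (the parameter space, acceptance rule, and labels are all invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Each row is a hypothetical simulator parameter vector; the label says whether
# the resulting synthetic catalog "looked realistic" (a toy acceptance rule
# standing in for a comparison against real catalog statistics).
X = rng.random((400, 3))
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.6)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])  # held-out accuracy of the screen

# Rank new candidate parameter sets by predicted probability of acceptance,
# so only promising ones are handed to the expensive simulator.
candidates = rng.random((5, 3))
probs = clf.predict_proba(candidates)[:, 1]
```

The classifier acts as a cheap surrogate: instead of running the simulator over the whole parameter space, only candidates with a high predicted acceptance probability need full TREMOL-3D runs.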
2021-01-14T11:52:04Z
http://hdl.handle.net/2117/329700
Effect of the actuation on the boundary layer of an airfoil at moderate Reynolds number
Lehmkuhl, Oriol; Rodríguez Pérez, Ivette María; Borrell Pol, Ricard
Synthetic (zero-net-mass-flux) jets are an active flow control technique used to manipulate the flow field in wall-bounded and free-shear flows. The fluid necessary to actuate on the boundary layer is intermittently injected through an orifice and is driven by the motion of a diaphragm located in a sealed cavity below the surface.
2020-10-02T10:27:53Z
http://hdl.handle.net/2117/329622
Toward an interdisciplinary methodology to solve new (old) transportation problems
Graells-Garrido, Eduardo; Peñas-Araya, Vanessa
The rising availability of digital traces provides fertile ground for new solutions to both new and old problems in cities. Even though a massive data set analyzed with Data Science methods may provide a powerful solution to a problem, its adoption by relevant stakeholders is not guaranteed, due to adoption blockers such as a lack of interpretability and transparency. In this context, this paper proposes a preliminary methodology for bridging two disciplines, Data Science and Transportation, to solve urban problems with methods that are suitable for adoption. The methodology is defined by four steps in which people from both disciplines go from algorithm and model definition to the building of a potentially adoptable solution. As a case study, we describe how this methodology was applied to define a model that infers commuting trips, with mode of transportation, from mobile phone data.
2020-10-01T10:05:06Z