Journal articles
http://hdl.handle.net/2117/6145
2017-01-17T13:04:15Z
http://hdl.handle.net/2117/99311
Non-regularised inverse finite element analysis for 3D traction force microscopy
Muñoz Romero, José
Recovering the tractions that cells exert on a gel substrate from the observed displacements provides increasingly attractive and valuable information in biomedical experiments. Computing these tractions generally requires the solution of an inverse problem. Here, we resort to a finite element discretisation of the associated direct variational formulation and solve the inverse analysis using a least-squares approach. This strategy requires the minimisation of an error functional, which is usually regularised in order to obtain a stable system of equations with a unique solution. In this paper we show that for many common three-dimensional geometries, meshes and loading conditions, this regularisation is unnecessary. In these cases, the computational cost of the inverse problem becomes equivalent to that of a direct finite element problem. For the non-regularised functional, we deduce the necessary and sufficient conditions that the dimensions of the interpolated displacement and traction fields must satisfy in order to exactly fulfil, or yield a unique solution of, the discrete equilibrium equations. We apply the theoretical results to some illustrative examples and to real experimental data. Due to the relevance of the results for biologists and modellers, the article concludes with some practical rules that the finite element discretisation must satisfy.
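The least-squares inverse strategy described above can be sketched on a toy linear system. Everything here is illustrative: `A` is a random stand-in for the finite element operator relating traction unknowns to observed displacements, not the paper's actual discretisation, and the dimensions merely mimic the condition (more displacement DOFs than traction DOFs) under which regularisation becomes optional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the discretised direct problem: a linear operator A that
# maps nodal traction unknowns t to observable displacements u (u = A @ t).
n_disp, n_trac = 40, 10
A = rng.normal(size=(n_disp, n_trac))

t_true = rng.normal(size=n_trac)
u_obs = A @ t_true + 1e-6 * rng.normal(size=n_disp)  # noisy observed displacements

def recover_tractions(A, u_obs, lam=0.0):
    """Least-squares traction recovery; lam > 0 adds Tikhonov regularisation."""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ u_obs
    return np.linalg.solve(lhs, rhs)

t_noreg = recover_tractions(A, u_obs)          # no regularisation
t_reg = recover_tractions(A, u_obs, lam=1e-3)  # regularised variant

# With A of full column rank, the non-regularised normal equations are
# already uniquely solvable and accurate.
print(np.linalg.norm(t_noreg - t_true))
```

When `A` loses column rank (e.g. more traction unknowns than displacement data), the `lam=0` system becomes singular, which is the situation regularisation is normally there to fix.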
2017-01-16T12:38:07Z
http://hdl.handle.net/2117/98674
Charting molecular free-energy landscapes with an atlas of collective variables
Hashemian, B.; Millán, Raúl Daniel; Arroyo Balaguer, Marino
Collective variables (CVs) are a fundamental tool to understand molecular flexibility, to compute free energy landscapes, and to enhance sampling in molecular dynamics simulations. However, identifying suitable CVs is challenging, and is increasingly addressed with systematic data-driven manifold learning techniques. Here, we provide a flexible framework to model molecular systems in terms of a collection of locally valid and partially overlapping CVs: an atlas of CVs. The specific motivation for such a framework is to enhance the applicability and robustness of CVs based on manifold learning methods, which fail in the presence of periodicities in the underlying conformational manifold. More generally, using an atlas of CVs rather than a single chart may help us better describe different regions of conformational space. We develop the statistical mechanics foundation for our multi-chart description and propose an algorithmic implementation. The resulting atlas of data-based CVs are then used to enhance sampling and compute free energy surfaces in two model systems, alanine dipeptide and β-D-glucopyranose, whose conformational manifolds have toroidal and spherical topologies.
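As a minimal illustration of what a single-chart CV buys you, the sketch below estimates a free-energy profile F(s) = -kT ln p(s) from synthetic samples of one periodic CV (a dihedral-like angle). The bimodal von Mises data are invented for the example and unrelated to the paper's systems; the atlas framework addresses precisely the cases where one such chart is not enough.

```python
import numpy as np

kT = 1.0
rng = np.random.default_rng(1)

# Synthetic samples from a bimodal periodic density on [-pi, pi),
# mimicking two metastable conformational basins along one dihedral.
samples = np.concatenate([
    rng.vonmises(mu=-2.0, kappa=4.0, size=50000),
    rng.vonmises(mu=1.0, kappa=4.0, size=50000),
])

# Histogram estimate of the marginal density p(s) of the CV.
hist, edges = np.histogram(samples, bins=72, range=(-np.pi, np.pi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Free energy up to an additive constant: F(s) = -kT ln p(s).
F = -kT * np.log(hist + 1e-12)
F -= F.min()

# The minima of F sit near the two metastable basins.
print(centers[np.argmin(F)])
```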
2016-12-20T19:26:33Z
http://hdl.handle.net/2117/98648
Un estimador de error residual semiexplícito en cantidades de interés para un problema mecánico lineal
Rosales, R.; Díez, Pedro
We aim at defining a semi-explicit approach to estimate the error in quantities of interest associated with the finite element solution of a linear elasticity problem. The advocated procedure is split into two parts: an implicit error estimate for the adjoint problem and an explicit estimate assessing the error in the direct (primal) problem. The implicit part of the estimate (on the adjoint problem) comprises two phases, each consisting of projecting the error on
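The adjoint-based mechanics behind goal-oriented error estimation can be shown exactly at the linear-algebra level: the error in a quantity of interest g·u equals the adjoint-weighted residual of the approximate solution, since g·(u − u_h) = z·A(u − u_h) = z·(f − A u_h) when Aᵀz = g. The matrices below are random placeholders, not an elasticity discretisation.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))  # well-conditioned "stiffness"
f = rng.normal(size=n)
g = rng.normal(size=n)                          # QoI extractor: q(u) = g @ u

u = np.linalg.solve(A, f)            # exact (reference) solution
u_h = u + 0.01 * rng.normal(size=n)  # stand-in for a coarse FE solution

z = np.linalg.solve(A.T, g)          # adjoint (dual) solution

residual = f - A @ u_h
estimate = z @ residual              # adjoint-weighted residual
true_err = g @ (u - u_h)

print(estimate, true_err)            # identical for a linear problem
```

In practice the adjoint is only known approximately, which is where the implicit/explicit estimation machinery of the paper comes in.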
2016-12-20T16:27:14Z
http://hdl.handle.net/2117/98640
Vademecum-based GFEM (V-GFEM): optimal enrichment for transient problems
Canales, Diego; Leygue, Adrien; Chinesta Soria, Francisco; González Ibáñez, David; Cueto Prendes, Elias; Feulvarch, Eric; Bergheau, Jean-Michel; Huerta, Antonio
This paper proposes a generalized finite element method based on the use of parametric solutions as enrichment functions. These parametric solutions are precomputed off-line and stored in memory in the form of a computational vademecum, so that they can be used on-line at negligible cost. This yields a computational method that is more efficient than traditional finite element methods for simulating processes. One key issue of the proposed method is the efficient computation of the parametric enrichments. These are computed and efficiently stored in memory by employing proper generalized decompositions. Although the presented method can be broadly applied, it is particularly well suited to manufacturing processes involving localized physics that depend on many parameters, such as welding. After introducing the vademecum-based generalized finite element method formulation, we present some numerical examples related to the simulation of thermal models encountered in welding processes.
This is the accepted version of the following article: [Canales, D., Leygue, A., Chinesta, F., González, D., Cueto, E., Feulvarch, E., Bergheau, J. -M., and Huerta, A. (2016) Vademecum-based GFEM (V-GFEM): optimal enrichment for transient problems. Int. J. Numer. Meth. Engng, 108: 971–989. doi: 10.1002/nme.5240.], which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/nme.5240/full
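The offline/online split behind a computational vademecum can be sketched as follows. The closed-form "solver" and the parameter range are hypothetical stand-ins for an expensive parametric simulation; the point is only that the online stage does no solving at all.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)

def solve_offline(mu):
    # Stand-in for an expensive solver: closed-form u(x; mu).
    return np.exp(-mu * x)

# Offline stage: precompute and store the parametric solutions once.
mus = np.linspace(0.5, 5.0, 50)
vademecum = np.array([solve_offline(mu) for mu in mus])

def solve_online(mu):
    # Online stage: interpolate in the parameter; negligible cost, no solve.
    return np.array([np.interp(mu, mus, vademecum[:, j]) for j in range(len(x))])

u_fast = solve_online(2.13)
u_ref = solve_offline(2.13)
print(np.max(np.abs(u_fast - u_ref)))  # small interpolation error
```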
2016-12-20T15:41:21Z
http://hdl.handle.net/2117/97463
Real-time simulation techniques for augmented learning in science and engineering
Quesada, C.; González, D.; Alfaro, Icíar; Cueto Prendes, Elias; Huerta, Antonio; Chinesta, Francisco
In this paper we present the basics of a novel methodology for the development of simulation-based and augmented learning tools in the context of applied science and engineering. It is based on the extensive use of model order reduction and, particularly, of the so-called Proper Generalized Decomposition (PGD) method. This method provides a sort of meta-modeling tool, without the need for prior computer experiments, that allows the user to obtain a real-time response in the solution of complex engineering or physical problems. This real-time capability also allows for implementation on deployed, touch-screen, handheld devices, or even for embedding into electronic textbooks. We explore here the basics of the proposed methodology and give examples of a few challenging applications that, to our knowledge, have never been explored before.
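PGD rests on separated representations of the form u(x, μ) ≈ Σᵢ Fᵢ(x) Gᵢ(μ). As a stand-in for the greedy PGD construction (not reproduced here), a truncated SVD of a snapshot matrix of an invented parametric field illustrates how few separated modes a smooth field needs, which is what makes real-time evaluation feasible.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
mu = np.linspace(1.0, 3.0, 100)

# Hypothetical parametric field sampled on a grid (rows: x, columns: mu).
U = np.exp(-np.outer(x, mu))

# SVD gives the best separated representation U ~ sum_i s_i F_i(x) G_i(mu).
Fm, s, Gm = np.linalg.svd(U, full_matrices=False)
rank = 5
U5 = (Fm[:, :rank] * s[:rank]) @ Gm[:rank, :]  # 5-mode separated approximation

rel_err = np.linalg.norm(U - U5) / np.linalg.norm(U)
print(rel_err)  # a handful of modes already capture the field
```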
2016-11-29T17:42:46Z
http://hdl.handle.net/2117/90814
Optimizing mesh distortion by hierarchical iteration relocation of the nodes on the CAD entities
Ruiz Gironés, Eloi; Roca Navarro, Francisco Javier; Sarrate Ramos, Josep
Mesh untangling and smoothing is an important part of the meshing process for obtaining high-quality discretizations. The usual approach consists of moving the interior nodes while keeping the boundary nodes fixed. However, the boundary nodes may constrain the quality of the whole mesh, and high-quality elements may not be generated. Specifically, thin regions in the geometry or special configurations of the boundary edges may induce low-quality elements. To overcome this drawback, we present a smoothing and untangling procedure that moves the interior nodes as well as the boundary ones, via an optimization process. The objective function is defined as a regularized distortion of the elements, and takes the nodal Cartesian coordinates as input arguments. When dealing with surface and edge nodes, the objective function uses the nodal parametric coordinates in order to avoid projecting them to the boundary. The novelty of the approach is that we consider a single target objective function (mesh distortion) in which all the nodes, except the vertex nodes, are free to move on the corresponding CAD entity. Although the objective function is defined globally, for implementation purposes we propose to perform a node-by-node process. To minimize the objective function, we use a block-iterated non-linear Gauss-Seidel method with a hierarchical approach: we first smooth the edge nodes, then the face nodes, and finally the inner nodes. This process is iterated in a node-by-node Gauss-Seidel fashion until convergence is achieved.
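A stripped-down version of node-by-node Gauss-Seidel relocation: plain Laplacian smoothing on a perturbed structured quad mesh stands in for the paper's distortion-based objective, and, unlike the paper, boundary nodes are kept fixed here for brevity. The Gauss-Seidel character is that each node's update immediately uses its neighbours' already-updated positions.

```python
import numpy as np

n = 9  # nodes per side of a structured quad mesh on the unit square
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
rng = np.random.default_rng(3)

# Perturb the interior to create a low-quality (but untangled) mesh.
xx[1:-1, 1:-1] += 0.04 * rng.normal(size=(n - 2, n - 2))
yy[1:-1, 1:-1] += 0.04 * rng.normal(size=(n - 2, n - 2))

for _ in range(50):  # Gauss-Seidel sweeps: updated positions used immediately
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            xx[i, j] = 0.25 * (xx[i-1, j] + xx[i+1, j] + xx[i, j-1] + xx[i, j+1])
            yy[i, j] = 0.25 * (yy[i-1, j] + yy[i+1, j] + yy[i, j-1] + yy[i, j+1])

# With this boundary, the smoothed interior returns to the uniform grid.
print(np.max(np.abs(xx - np.linspace(0, 1, n)[None, :])))
```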
2016-10-18T07:55:09Z
http://hdl.handle.net/2117/90741
Adaptividade e estimativas de erro orientadas por metas aplicadas a um benchmark test de propagação de onda
Steffens, Lindaura; Díez, Pedro; Parés Mariné, Núria; Alves, Marcelo Krajnc
The aim of this article is to study the efficiency and robustness of goal-oriented adaptive techniques and error estimates for a benchmark test. The techniques used here are based on a simple post-processing of the finite element approximations. The goal-oriented error estimates are obtained by analysing the direct problem and an auxiliary problem, which is related to the specific quantity of interest. The proposed procedure is valid for linear and non-linear quantities. In addition, different representations of the error are discussed and the influence of the dispersion error is analysed. The numerical results show that the error estimates provide good approximations to the true error and that the proposed adaptive refinement technique leads to a faster reduction of the error.
2016-10-13T12:42:25Z
http://hdl.handle.net/2117/90730
Un método de captura de choques basado en las funciones de forma para Galerkin discontinuo de alto orden
Casoni Rero, Eva; Peraire Guitart, Jaume; Huerta, Antonio
This article presents a high-order discontinuous Galerkin method for compressible flow problems, in which shocks frequently appear. Stabilisation is introduced through a new basis of functions. This basis has the flexibility to vary locally (in each element) between a space of continuous polynomial functions and a space of piecewise polynomial functions. The proposed method thus provides a bridge between standard high-order discontinuous Galerkin methods and classical finite volume methods, while preserving the locality and compactness of the scheme. The variation of the basis functions is defined automatically according to the regularity of the solution, and stabilisation is introduced by means of the jump operator, standard in discontinuous Galerkin methods. Unlike classical slope-limiter methods, the proposed strategy is highly local and robust, and is applicable to any order of approximation. Moreover, the method requires neither adaptive mesh refinement nor restrictions on the time-integration scheme. Several applications to the Euler equations are considered that demonstrate the validity and effectiveness of the method, especially for high orders of approximation.
2016-10-13T11:33:45Z
http://hdl.handle.net/2117/90728
Melt fracturing and healing: a mechanism for rhyolite magma degassing and origin of obsidian
Cabrera, Agustín; Weinberg, Roberto F.; Wright, Heather M. N.; Zlotnik, Sergio; Cas, Ray A.F.
We present water content transects across a healed fault in pyroclastic obsidian from Lami pumice cone, Lipari, Italy, using synchrotron Fourier transform infrared spectroscopy. Results indicate that rhyolite melt degassed through the fault surface. Transects define a trough of low water content coincident with the fault trace, surrounded on either side by high-water-content plateaus. Plateaus indicate that obsidian on either side of the fault equilibrated at different pressure-temperature (P-T) conditions before being juxtaposed. The curves into the troughs indicate disequilibrium and water loss through diffusion. If we assume constant T, melt equilibrated at pressures differing by 0.74 MPa before juxtaposition, and the fault acted as a low-P permeable path for H2O that diffused from the glass within time scales of 10 and 30 min. Assuming constant P instead, melt on either side could have equilibrated at temperatures differing by as much as 100 °C, before being brought together. Water content on the fault trace is particularly sensitive to post-healing diffusion. Its preserved value indicates either higher temperature or lower pressure than the surroundings, indicative of shear heating and dynamic decompression. Our results reveal that water contents of obsidian on either side of the faults equilibrated under different P-T conditions and were out of equilibrium with each other when they were juxtaposed due to faulting immediately before the system was quenched. Degassing due to faulting could be linked to cyclical seismic activity and general degassing during silicic volcanic activity, and could be an efficient mechanism of producing low-water-content obsidian.
2016-10-13T11:13:13Z
http://hdl.handle.net/2117/89392
The solution of linear mechanical systems in terms of path superposition
Magrans Fontrodona, Francesc Xavier; Poblet-Puig, Jordi; Rodríguez Ferran, Antonio
We prove that the solution of any linear mechanical system can be expressed as a linear combination of signal transmission paths. This is done in the framework of the Global Transfer Direct Transfer (GTDT) formulation for vibroacoustic problems. Transmission paths are expressed as powers of the transfer matrix. The key idea of the proof is to generalise the Neumann series of the transfer matrix (which is convergent only if its spectral radius is smaller than one) into a modified Neumann series that is convergent regardless of the eigenvalues of the transfer matrix. The modification consists in choosing the appropriate combination coefficients for the powers of the transfer matrix in the series. A recursive formula for the computation of these factors is derived. The theoretical results are illustrated by means of numerical examples. Finally, we show that the generalised Neumann series can be understood as an acceleration (i.e. convergence speedup) of the Jacobi iterative method.
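The classical half of the argument is easy to demonstrate: the partial sums of the Neumann series Σ Tᵏ b are exactly the fixed-point (Jacobi-style) iterates for x = T x + b, with each power Tᵏ b playing the role of the k-step transmission paths, and they converge when ρ(T) < 1. The paper's modified series for the case ρ(T) ≥ 1 is not reproduced here; the matrix below is a random stand-in for a transfer matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 6
T = 0.15 * rng.normal(size=(n, n))  # transfer-matrix stand-in, rho(T) < 1
b = rng.normal(size=n)

# Direct solution of x = T x + b, i.e. (I - T) x = b.
x_exact = np.linalg.solve(np.eye(n) - T, b)

# Neumann series: x = sum_k T^k b, accumulated term by term.
x = b.copy()      # k = 0 term (the direct path)
term = b.copy()
for k in range(1, 200):
    term = T @ term   # T^k b: contribution of all k-step paths
    x += term

print(np.linalg.norm(x - x_exact))  # series converges to the solution
```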
2016-08-29T10:40:45Z