Departament de Matemàtica Aplicada III (until October 2015)
http://hdl.handle.net/2117/95906

On the consistency of hysteresis models
Fuad Mohammad Naser, Mohammad
http://hdl.handle.net/2117/96086 (2016-11-09)
Hysteresis is a nonlinear behavior encountered in a wide variety of areas, including biology, optics, electronics, ferroelectricity, magnetism, mechanics, and structures. One of the main features of hysteresis processes is the property of consistency, formalized in [52]. The class of operators considered in [52] consists of the causal ones, with the additional condition that a constant input leads to a constant output. For this class of systems, consistency has been defined formally. This property is useful in system modeling and identification, as it limits the search for the system's parameters to those regions where consistency holds.
The thesis applies the concepts introduced in [52] to two hysteresis models, namely the LuGre model and the Duhem model. The aim of the thesis is to derive necessary conditions and sufficient conditions for consistency (and/or strong consistency) to hold.
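For reference, the LuGre model studied here is commonly written as follows (the standard formulation of Canudas de Wit et al.; the thesis may use different notation), where z is the internal friction state, v the velocity input, and F the friction output:

```latex
\begin{aligned}
\dot{z} &= v - \frac{\sigma_0 \lvert v \rvert}{g(v)}\, z, \\
F &= \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 v, \\
g(v) &= F_c + (F_s - F_c)\, e^{-(v/v_s)^2},
\end{aligned}
```

with σ0, σ1, σ2 the stiffness, damping, and viscous coefficients, and Fc, Fs, vs the Coulomb force, stiction force, and Stribeck velocity.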
For the LuGre model, consistency and strong consistency are studied under minimal conditions in Chapter 2. As a by-product of this study, explicit expressions are derived for the hysteresis loop. Such expressions may be useful for identification purposes, as shown in [53].
A classification of the possible Duhem models in terms of their consistency is carried out in Chapter 3. This study shows that a certain parameter must equal one for the Duhem model to be compatible with a hysteresis behavior.
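For context, Duhem models are often analyzed in the generalized form below (a standard formulation from the literature, not necessarily the thesis's exact parametrization), where classifying the possible models amounts to imposing conditions on the functions f and g:

```latex
\dot{x}(t) = f\bigl(x(t), u(t)\bigr)\, g\bigl(\dot{u}(t)\bigr).
```

Under this reading, the parameter that must equal one plausibly corresponds to an exponent governing the behavior of g near zero; this identification is an assumption, not a statement from the abstract itself.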

Engineering patterns of wrinkles and bubbles in supported graphene through modeling and simulation
Zhang, Kuang
http://hdl.handle.net/2117/95739 (2015-10-05)
Graphene deposited on a substrate often exhibits out-of-plane deformations with different features and origins. Networks of localized wrinkles have been observed in graphene synthesized through CVD, as a result of compressive stresses transmitted by the substrate. Graphene blisters have been reported with various sizes and shapes, and have been shown to be caused by gas trapped between graphene and substrate. Such wrinkles or bubbles locally modify the electronic properties and are often seen as defects. It has also been suggested that the strong coupling between localized deformation and electronic structure could be harnessed technologically through strain engineering, although it has not been possible to precisely control the geometry of out-of-plane deformations, partly due to an insufficient theoretical understanding of the underlying mechanism, particularly under biaxial strains.
The specific contributions of the thesis are outlined next. Firstly, we study the emergence of spontaneous wrinkling in supported, laterally strained graphene with high-fidelity simulations based on an atomistically informed continuum model. With a simpler theoretical model, we characterize the onset of buckling and the post-instability nonlinear behavior in terms of the adhesion and friction parameters of the graphene-substrate interface. We find that a distributed rippling, produced by a linear instability, transitions into localized wrinkles due to the nonlinearity of the van der Waals graphene-substrate interactions. We identify friction as a selection mechanism for the separation between wrinkles, because the formation of far-apart wrinkles is penalized by the work of friction.
Secondly, we examine the mechanics of wrinkling in supported graphene upon biaxial strains. With realistic simulations and an energetic analysis, we understand how strain anisotropy, adhesion and friction govern spontaneous wrinkling. We then propose a strategy to control the location of wrinkles through patterns of weaker adhesion. These mechanically self-assembled networks are stable under the pressure produced by an enclosed fluid and form continuous channels, opening the door to nano-fluidic applications.
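As a caricature of the friction-based selection mechanism (a toy model for illustration only, not the thesis's formulation): if each wrinkle costs a fixed energy E_w per unit length, while sliding material toward wrinkles spaced a distance d apart dissipates frictional work growing linearly with d, the energy per unit area e(d) = E_w/d + tau*d/8 is minimized at an intermediate spacing d* = sqrt(8 E_w / tau):

```python
import math

E_w = 1.0   # hypothetical energy cost of one wrinkle, per unit length
tau = 0.5   # hypothetical frictional shear resistance of the interface

def energy_per_area(d):
    # cost of wrinkles spread over spacing d, plus frictional work ~ tau * d / 8
    return E_w / d + tau * d / 8.0

# brute-force minimization over a grid of candidate spacings
spacings = [0.01 * k for k in range(1, 2001)]
d_star = min(spacings, key=energy_per_area)

analytic = math.sqrt(8.0 * E_w / tau)   # closed-form minimizer, = 4.0 here
```

The competition between the two terms is what selects a finite wrinkle separation: too-frequent wrinkles waste wrinkle energy, too-sparse ones waste frictional work.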
Finally, we examine the coexistence of wrinkles and blisters in supported graphene. By changing the applied strain and gas mass trapped beneath the graphene sample, we build a morphological diagram determining the size and shape of graphene bubbles, and their coexistence with wrinkles. As a whole, the research described above depicts a systematic and broad understanding of out-of-plane deformations in monolayer graphene on a substrate, and could be a theoretical foundation towards strain engineering in supported graphene.

Machine learning in multiscale modeling and simulations of molecular systems
Hashemian, Behrooz
http://hdl.handle.net/2117/95711 (2015-07-16)
Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. However, identifying a representative set of CVs for a given system is far from obvious, and most often relies on physical intuition or partial knowledge about the systems. An inappropriate choice of CVs is misleading and can lead to inefficient sampling. Thus, there is a need for systematic approaches to effectively identify CVs.
In recent years, machine learning techniques, especially nonlinear dimensionality reduction (NLDR), have shown their ability to automatically identify the most important collective behavior of molecular systems. These methods have been widely used to visualize molecular trajectories. However, in general they do not provide a differentiable mapping from high-dimensional configuration space to their low-dimensional representation, as required in enhanced sampling methods, and they cannot deal with systems with inherently nontrivial conformational manifolds.
In the first part of this dissertation, we introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. SandCV identifies the system's conformational manifold, handles out-of-manifold conformations by a closest point projection, and exactly computes the Jacobian of the resulting CVs. We also illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble.
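As a toy illustration of the closest-point projection idea (not the SandCV implementation itself): for the simplest possible "manifold", a unit circle in the plane, an off-manifold point is projected radially, and the Jacobian of the angular CV theta = atan2(y, x) is available in closed form:

```python
import math

def project_to_circle(x, y):
    """Closest-point projection of an off-manifold point (x, y) onto the unit circle."""
    r = math.hypot(x, y)
    if r == 0.0:
        raise ValueError("projection undefined at the origin")
    return x / r, y / r

def angle_cv_gradient(x, y):
    """Exact gradient of the angular CV theta = atan2(y, x)."""
    r2 = x * x + y * y
    return -y / r2, x / r2

px, py = project_to_circle(3.0, 4.0)   # projects radially to (0.6, 0.8)
```

For a real molecular manifold the projection must be computed numerically, but the structure is the same: project, then evaluate the CV and its exact Jacobian at the projected point.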
We then demonstrate that NLDR methods face serious obstacles when the underlying CVs present periodicities, e.g. those arising from proper dihedral angles. As a result, NLDR methods collapse very distant configurations, leading to misinterpretations and inefficiencies in enhanced sampling. Here, we identify this largely overlooked problem and discuss possible approaches to overcome it. Additionally, we characterize the flexibility of the alanine dipeptide molecule and show that it evolves around a flat torus in four-dimensional space.
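The periodicity problem can be seen in a minimal example: two dihedral values just on either side of the branch cut are physically almost identical, yet far apart as raw numbers. Mapping each angle to (cos, sin) respects periodicity (two dihedrals then live on a product of circles, i.e. a flat torus in four-dimensional space):

```python
import math

def embed_angle(theta):
    """Map a periodic angle to the plane: theta -> (cos theta, sin theta).
    This embedding respects periodicity, unlike the raw angle value."""
    return math.cos(theta), math.sin(theta)

t1, t2 = 0.05, 2 * math.pi - 0.05   # nearly identical conformations
raw_gap = abs(t1 - t2)              # looks huge on the real line (~6.18)
p1, p2 = embed_angle(t1), embed_angle(t2)
embedded_gap = math.dist(p1, p2)    # small, as it should be (~0.1)
```

An NLDR method fed the raw angles would treat t1 and t2 as distant and may collapse genuinely distant configurations instead; the thesis discusses how to handle such topologies systematically.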
In the final part of this thesis, we propose a novel method, atlas of collective variables, that systematically overcomes topological obstacles, ameliorates the geometrical distortions and thus allows NLDR techniques to perform optimally in molecular simulations. This method automatically partitions the configuration space and treats each partition separately. Then, it connects these partitions from the statistical mechanics standpoint.

Estimación bayesiana de cópulas extremales en procesos de Poisson (Bayesian estimation of extremal copulas in Poisson processes)
Ortego Martínez, María Isabel
http://hdl.handle.net/2117/95645 (2015-03-17)
The estimation of occurrence probabilities of extremal quantities is essential in the study of hazards associated with natural phenomena. The extremal quantities of interest usually correspond to phenomena characterized by two or more magnitudes, often showing dependence among them. In order to better characterize situations that could be dangerous, the magnitudes that describe the phenomenon should be jointly described.
A Poisson-GPD model, which describes the occurrence of extremal events and their marginal sizes, has been established: the occurrence of the extremal events is represented by means of a Poisson process, and each event is characterized by a size modelled by a Generalized Pareto Distribution (GPD). The dependence between events is modelled through copula functions: a family of Gumbel copulas, suitable for the type of data treated, and a newly introduced type of copula, the CrEnC copula. The CrEnC copula minimizes the mutual information in situations in which only partial information, in the form of restrictions such as marginal models or joint moments of the variables, is available.
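As a concrete, deliberately simplified illustration of this model structure, the sketch below simulates Poisson occurrences with GPD excess sizes and evaluates a Gumbel copula; all parameter values are hypothetical stand-ins, not the thesis' fitted ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Occurrence: homogeneous Poisson process with rate lam events/year
lam, years = 2.5, 50.0
n_events = rng.poisson(lam * years)

# Sizes: threshold u plus Generalized Pareto excesses (shape xi, scale sigma)
xi, sigma, u = -0.1, 1.5, 10.0
sizes = u + stats.genpareto.rvs(c=xi, scale=sigma, size=n_events, random_state=rng)

# Dependence between two magnitudes via a Gumbel copula (theta >= 1):
# C(u, v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta))
def gumbel_copula_cdf(uu, vv, theta):
    return np.exp(-((-np.log(uu)) ** theta + (-np.log(vv)) ** theta) ** (1.0 / theta))

# Joint exceedance probability of two magnitudes with marginal CDF values Fx, Fy
Fx, Fy, theta = 0.9, 0.8, 2.0
p_joint = 1.0 - Fx - Fy + gumbel_copula_cdf(Fx, Fy, theta)
```

With theta = 1 the Gumbel copula reduces to the independence copula, which provides a quick sanity check on the formula.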
In this context, data are often scarce, and the uncertainty in the estimation of the model will be large. A Bayesian estimation process that takes this uncertainty into account has been established. Goodness-of-fit of several aspects of the model (GPD goodness-of-fit, the GPD-Weibull hypothesis and global goodness-of-fit) has been checked using a selection of Bayesian p-values, which incorporate the uncertainty of the parameter estimation. Once the model has been estimated, the information is post-processed to obtain a posteriori quantities of interest, such as exceedance probabilities of reference values or return periods of events of a certain size.
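The post-processing step can be sketched as follows; the posterior draws below are synthetic stand-ins for real MCMC output, used only to show how a posterior exceedance probability and a return period are derived from the fitted Poisson-GPD model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical posterior draws of the GPD parameters and of the Poisson
# occurrence rate (events/year); in practice these come from the sampler.
xi_draws = rng.normal(-0.1, 0.02, size=1000)
sigma_draws = rng.normal(1.5, 0.1, size=1000)
lam_draws = rng.gamma(50, 0.05, size=1000)

u, x_ref = 10.0, 14.0  # threshold and reference level

# Posterior predictive exceedance probability P(X > x_ref | X > u):
# average the GPD survival function over the posterior draws
p_exc = np.mean(stats.genpareto.sf(x_ref - u, c=xi_draws, scale=sigma_draws))

# Return period (years) of an event exceeding x_ref
T_ret = 1.0 / (np.mean(lam_draws) * p_exc)
```

Averaging over draws rather than plugging in point estimates is what carries the parameter uncertainty through to the final quantities.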
The representation of these copulas in R^2 improves both their estimation and the assessment of their goodness of fit to the data. An estimation algorithm for CrEnC copulas is provided, which includes a Monte Carlo approximation of the normalizing functions. The proposed model is applied to three datasets with different characteristics. The results obtained are good: the introduced CrEnC copulas correctly represent the dependence in situations in which only partial information is available, and the Bayesian estimation of the model parameters adds value to the results, because it allows the uncertainty of the posterior estimates, such as hazard and dependence parameters, to be evaluated.
Premi Extraordinari de Doctorat (Extraordinary Doctoral Award), 2014-2015 class, Sciences area
Decomposition techniques for computational limit analysis
http://hdl.handle.net/2117/95522
2014-11-26
Rabiei, Nima
Limit analysis is relevant in many practical engineering areas, such as the design of mechanical structures or the analysis of soil mechanics. The theory of limit analysis assumes a rigid, perfectly plastic material to model the collapse of a solid subjected to a static load distribution.
Within this context, the problem of limit analysis considers a continuum subjected to a fixed force distribution consisting of both volume and surface loads. The objective is then to obtain the maximum multiple of this force distribution that causes the collapse of the body. This multiple is usually called the collapse multiplier, and it can be characterized analytically as the solution of an infinite-dimensional nonlinear optimisation problem. The computation of the multiplier therefore requires two steps: the first is to discretise the corresponding analytical problem by introducing finite-dimensional spaces, and the second is to solve a nonlinear optimisation problem, which represents the major difficulty and challenge in the numerical solution process.
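To make this two-step structure concrete, here is a minimal toy instance (ours, not the thesis' formulation): after discretisation, a lower-bound limit analysis with a linearised yield condition reduces to a linear program in the stresses and the collapse multiplier.

```python
import numpy as np
from scipy.optimize import linprog

# Toy lower-bound limit analysis: two bars at +-45 degrees carry a unit
# vertical load scaled by lam; linearised yield condition |sigma_i| <= 1.
# Variables: x = [sigma1, sigma2, lam]; maximize lam subject to equilibrium.
c = [0.0, 0.0, -1.0]                   # linprog minimizes, so use -lam
s = 1.0 / np.sqrt(2.0)
A_eq = [[s, -s, 0.0],                  # horizontal equilibrium at the node
        [s,  s, -1.0]]                 # vertical equilibrium: resultant = lam
b_eq = [0.0, 0.0]
bounds = [(-1.0, 1.0), (-1.0, 1.0), (None, None)]  # yield set, lam free

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
collapse_multiplier = res.x[2]         # analytic answer for this toy: sqrt(2)
```

In realistic three-dimensional meshes the analogous program has millions of stress variables, which is exactly the size problem the decomposition addresses.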
Solving this optimisation problem, which may become very large and computationally expensive in three-dimensional problems, is the second important step. Recent techniques have allowed scientists to determine upper and lower bounds of the load factor under which the structure will collapse. Despite the attractiveness of these results, their application to practical examples is still hampered by the size of the resulting optimisation problem. A remedy is to use decomposition methods and to parallelise the corresponding optimisation problem.
The aim of this work is to present a decomposition technique that can reduce the memory requirements and computational cost of this type of problem. For this purpose, we exploit an important feature of the underlying optimisation problem: the objective function contains a single scalar variable (the load factor). The main contributions of the thesis are rewriting the constraints of the problem as the intersection of appropriate sets, and proposing efficient algorithmic strategies to iteratively solve the resulting decomposed problem.
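A minimal sketch of the set-intersection viewpoint, using toy convex sets rather than the actual limit-analysis constraints: a point satisfying both constraint sets can be found by iterating projections onto each set in turn.

```python
import numpy as np

# Find a point in C1 (unit ball) intersect C2 (halfspace a.x <= b)
# by alternating projections; both sets here are illustrative toys.
def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def proj_halfspace(x, a, b):
    # projection onto {y : a.y <= b}
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

a, b = np.array([1.0, 1.0]), 1.0
x = np.array([3.0, -2.0])              # infeasible starting point
for _ in range(100):
    x = proj_halfspace(proj_ball(x), a, b)
# x now lies (approximately) in both sets
```

The appeal for decomposition is that each projection touches only one constraint block, so the blocks can be handled separately and in parallel.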
The inverse problem on finite networks
http://hdl.handle.net/2117/95431
2014-09-23
Arauz Lombardía, Cristina
The aim of this thesis is to contribute to the field of discrete boundary value problems on finite networks. Boundary value problems have been considered both in the continuous and in the discrete settings. Despite working in the discrete setting, we use the notation of the continuous setting for elliptic operators and boundary value problems. The reason is the importance of the symbiosis between both fields, since solving a problem in the discrete setting can sometimes lead to the solution of its continuous version by a limit process. However, the relation between the discrete and continuous settings does not in general work out so easily. Although the discrete setting naturally enjoys smoothness and regularity conditions on all its manifolds, functions and operators, some difficulties arise that the continuous setting avoids.
Specifically, this thesis pursues two objectives. First, we wish to deduce functional, structural or resistive data of a network by taking advantage of its conductivity information. The actual goal is to gather functional, structural and resistive information about a large network when the same data are known for the subnetworks that form it. The reason is that large networks are difficult to work with because of their size: the smaller a network, the easier it is to work with, and hence we try to break networks into smaller parts that may allow us to solve easier problems on them. We seek the expressions of certain operators that characterize the solutions of boundary value problems on the original networks. These problems are called direct boundary value problems, on account of the direct use of the conductivity information.
The second purpose is to recover the internal conductivity of a network using only boundary measurements and global equilibrium conditions. Since this problem is ill-posed, being highly sensitive to changes in the boundary data, at times we only target a partial reconstruction of the conductivity data, or we introduce additional conditions on the network in order to be able to perform a full internal reconstruction. This variety of problems is labelled inverse boundary value problems, since boundary information is exploited to gain knowledge about the inside of the network. Our work tries to find situations where the recovery is feasible, partially or totally.
One of our ambitions regarding inverse boundary value problems is to recover the structure of the networks that allow the well-known Serrin's problem to have a solution in the discrete setting. Surprisingly, the answer is similar to the continuous case. We also aim to achieve a characterization of a network from a boundary operator on it. To this end we define a new class of boundary value problems, which we call overdetermined partial boundary value problems. We describe how the solutions of this family of problems that satisfy an alternating property on a part of the boundary spread through the network preserving this alternation. If we focus on a family of networks, we see that the above-mentioned boundary operator can be the response matrix of an infinite family of networks associated with different conductivity functions. By choosing a specific extension, we obtain a unique network whose response matrix is equal to a previously given matrix.
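For a concrete sense of the boundary operator involved, the response matrix of a network can be computed from its weighted Laplacian as a Schur complement over the interior nodes (a standard construction; the three-node example below is ours).

```python
import numpy as np

# Response (Dirichlet-to-Neumann) matrix of a network via the Schur
# complement of the Laplacian on the interior nodes.
# Toy network: path b0 - i - b1 with unit conductances; boundary {b0, b1}.
L = np.array([[ 1.0,  0.0, -1.0],   # boundary node b0
              [ 0.0,  1.0, -1.0],   # boundary node b1
              [-1.0, -1.0,  2.0]])  # interior node i

nb = 2  # the first nb rows/columns index the boundary nodes
L_BB, L_BI = L[:nb, :nb], L[:nb, nb:]
L_IB, L_II = L[nb:, :nb], L[nb:, nb:]
response = L_BB - L_BI @ np.linalg.solve(L_II, L_IB)
# two unit conductances in series give effective conductance 1/2
```

The inverse problem of the thesis runs this construction backwards: given `response`, recover the conductances inside `L`.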
Once we have characterized those matrices that are the response matrices of certain networks, we try to recover the conductances of these networks. To this end, we characterize any solution of an overdetermined partial boundary value problem and describe its resolvent kernels. Then, we analyze two large classes of networks with remarkable boundary properties, which lead to the recovery of the conductances of certain edges near the boundary. We aim to give explicit formulae for obtaining these conductances. Using these formulae, we can perform a full conductivity recovery under certain circumstances.
Validation and generation of curved meshes for high-order unstructured methods
http://hdl.handle.net/2117/95376
2014-07-30
Gargallo Peiró, Abel
In this thesis, a new framework to validate and generate curved high-order meshes for complex models is proposed. The main application of the proposed framework is to generate curved meshes that are suitable for finite element analysis with unstructured high-order methods. Note that the lack of a robust and automatic curved mesh generator is one of the main issues that has hampered the adoption of high-order methods in industry. Specifically, without curved high-order meshes composed of valid elements that match the domain boundary, the convergence rates and accuracy of high-order methods cannot be realized. The main motivation of this work is to propose a framework to address this issue.
First, we propose a definition of distortion (quality) measure for curved meshes of any polynomial degree. The presented measures make it possible to validate whether a high-order mesh is suitable for finite element analysis with an unstructured high-order method. In particular, given a high-order element, the measures assign zero quality if the element is invalid, and one if the element corresponds to the selected ideal configuration (desired shape and nodal distribution). Moreover, we prove that if the quality of an element is not zero, the region where the determinant of the Jacobian is not positive has measure zero. We present several examples to illustrate that the proposed measures can be used to validate high-order isotropic and boundary layer meshes.
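A linear-triangle analogue of such a measure (illustrative only; the thesis defines the measures for curved elements of any degree) compares the element's Jacobian against an ideal equilateral element and returns zero for inverted elements and one for the ideal shape:

```python
import numpy as np

def shape_quality(tri):
    """Shape quality of a linear triangle: 1 for an equilateral triangle,
    0 for a degenerate or inverted one (a linear-element stand-in for
    the high-order measures described above)."""
    p0, p1, p2 = (np.asarray(p, float) for p in tri)
    A = np.column_stack([p1 - p0, p2 - p0])                 # physical Jacobian
    W = np.column_stack([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # ideal (equilateral)
    S = A @ np.linalg.inv(W)
    det = np.linalg.det(S)
    if det <= 0:                       # invalid element -> quality 0
        return 0.0
    eta = np.sum(S * S) / (2.0 * det)  # distortion, always >= 1
    return 1.0 / eta                   # quality in (0, 1]

q_eq = shape_quality([(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)])  # equilateral -> 1
q_bad = shape_quality([(0, 0), (1, 0), (0.5, -0.1)])           # inverted -> 0
```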
Second, we develop a smoothing and untangling procedure to improve the quality of curved high-order meshes. Specifically, we propose a global non-linear least-squares minimization of the defined distortion measures. The distortion is regularized to allow untangling invalid meshes, and the regularization ensures that if the initial configuration is valid, it never becomes invalid. Moreover, the optimization procedure preserves, whenever possible, some geometrical features of the linear mesh such as the shape, stretching, straight-sided edges, and element size. We demonstrate through examples that the implementation of the optimization problem is robust and capable of handling situations in which the mesh before optimization contains a large number of invalid elements. We consider cases with polynomial approximations up to degree ten, large deformations of the curved boundaries, concave boundaries, and highly stretched boundary layer elements.
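The abstract does not say which regularisation is used; one common choice in the untangling literature (shown here only as an assumed illustration) replaces the Jacobian determinant by a smooth, strictly positive surrogate, so the distortion stays finite on tangled elements and the optimizer can drive them back to validity:

```python
import numpy as np

def regularised_det(det, delta=1e-3):
    """Smooth, strictly positive surrogate for the Jacobian determinant
    (a standard regularisation from the untangling literature, used here
    as an illustrative stand-in for the thesis's regularisation)."""
    return 0.5 * (det + np.sqrt(det * det + 4.0 * delta * delta))

# Strictly positive even for inverted elements (det < 0), so a distortion
# built on it stays finite on tangled meshes; for clearly valid elements
# (det >> delta) the surrogate is close to det itself.
vals = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
reg = regularised_det(vals)
```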
Third, we extend the definition of distortion and quality measures to curved high-order meshes with the nodes on parameterized surfaces. Using this definition, we also propose a smoothing and untangling procedure for meshes on CAD surfaces. This procedure is posed in terms of the parametric coordinates of the mesh nodes to enforce that the nodes are on the CAD geometry. In addition, we prove that the procedure is independent of the surface parameterization. Thus, it can optimize meshes on CAD surfaces defined by low-quality parameterizations.
Finally, we propose a new mesh generation procedure by means of an a posteriori approach. The approach consists of modifying an initial linear mesh by first, introducing high-order nodes, second, displacing the boundary nodes to ensure that they are on the CAD surface, and third, smoothing and untangling the resulting mesh to produce a valid curved high-order mesh. To conclude, we include several examples to demonstrate that the generated meshes are suitable to perform finite element analysis with unstructured high-order methods.
Continuous-discontinuous modelling for quasi-brittle failure: propagating cracks in a regularised bulk
http://hdl.handle.net/2117/95254
2023-10-12T18:36:59Z 2014-05-22T12:03:37Z
Tamayo Mas, Elena
A new strategy to describe the failure of quasi-brittle materials (concrete, for example) is presented. Traditionally, the numerical simulation of quasi-brittle failure has been tackled from two different points of view: damage mechanics and fracture mechanics. The former, which belongs to the family of continuous models, describes fracture as a process of strain localisation and damage growth. The latter, which falls in the family of discontinuous models, explicitly introduces displacement discontinuities. Recently, some new approaches that merge these two classical theories have been devised. Although these combined approaches allow a better characterisation of the whole failure process, there are still some issues that need to be addressed, especially regarding the model switching from the continuous to the continuous-discontinuous strategy.
The goal of this thesis is to present a new contribution in this direction. Our main concern is to properly account for the three main difficulties that emerge when dealing with combined strategies: (1) the pathological mesh-dependence exhibited by local softening models needs to be corrected; (2) the crack-path location has to be determined and (3) the switching from the continuous to the continuous-discontinuous strategy should be done in such a way that the two approaches are energetically equivalent.
First, we extend to a two- and three-dimensional setting the applicability of an alternative approach to regularise strain softening, in which non-locality is introduced at the level of the displacements rather than of some internal variable. To this end, we propose new combined boundary conditions for the regularisation equation (for the smoothed displacement field). As illustrated with different two- and three-dimensional examples, these boundary conditions allow physically realistic results to be obtained for the first stages of the failure process.
Second, we present a new combined formulation that allows the propagation of cracks through a regularised bulk. To define the crack-path, instead of the classical mechanical criteria, we propose to use a geometrical criterion. More specifically, given a regularised damage field D(x), the discontinuity propagates following the direction dictated by the medial axis of the isoline (or isosurface in 3D) D(x) = D*. That is, a geometric tool widely used for image analysis, computer vision applications or mesh generation purposes is used here to locate cracks. We illustrate the capabilities of this new approach by carrying out different two- and three-dimensional numerical tests.
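A toy version of this geometric criterion (a discrete stand-in written for illustration; real implementations use proper medial-axis or skeletonisation algorithms): for a synthetic damage field whose isoline D(x) = D* bounds a vertical band, the in-region point of each grid row farthest from the region's complement recovers the band's centre line, i.e. the crack path:

```python
import numpy as np

def medial_axis_of_isoline(D, Dstar):
    """Crude medial-axis estimate of the region {D >= Dstar} on a grid:
    for each row, the in-region point farthest from the region's
    complement (assumes the region does not touch the grid border rows
    in a way that empties the complement)."""
    inside = D >= Dstar
    out_pts = np.argwhere(~inside)
    axis = []
    for i, row in enumerate(inside):
        cols = np.flatnonzero(row)
        if cols.size == 0:
            continue
        pts = np.array([(i, j) for j in cols])
        # distance from each in-region point of this row to the complement
        d = np.sqrt(((pts[:, None, :] - out_pts[None, :, :]) ** 2).sum(-1)).min(1)
        axis.append((i, cols[np.argmax(d)]))
    return axis

# Synthetic damage field: a vertical band of high damage centred at x = 0.5.
x = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(x, x, indexing="xy")
D = np.exp(-((X - 0.5) ** 2) / 0.02)
axis = medial_axis_of_isoline(D, Dstar=0.5)
# The recovered axis runs down the centre column of the band (x = 0.5).
```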
Last, we propose a new criterion to estimate the energy not yet dissipated by the bulk when switching models, so it can be transferred to the cohesive crack. This ensures that the continuous and the continuous-discontinuous strategies are energetically equivalent. Compared to other existing techniques, we present a strategy that accounts for the different unloading branches of damage models, thus better estimating the energy that has to be transferred. We illustrate the performance of this technique with one- and two-dimensional examples.
Efficient models for building acoustics : combining deterministic and statistical methods
http://hdl.handle.net/2117/95215
2023-10-12T18:38:46Z 2014-05-15T10:13:36Z
Díaz Cereceda, Cristina
Modelling vibroacoustic problems in the field of building design is a challenging problem due to the large size of the domains and the wide frequency range required by regulations. Standard numerical techniques, for instance finite element methods (FEM), fail when trying to reach the highest frequencies. The required element size is too small compared to the problem dimensions and the computational cost becomes unaffordable for such an everyday calculation.
Statistical energy analysis (SEA) is a framework of analysis for vibroacoustic problems, based on the wave behaviour at high frequencies. It works directly with averaged magnitudes, which is in fact what regulations require, and its computational cost is very low. However, this simplified approach presents several limitations when dealing with real-life structures. Experiments or other complementary data are often required to complete the definition of the SEA model.
This thesis deals with the modelling of building acoustic problems with a reasonable computational cost. In this sense, two main research lines have been followed. In the first part of the thesis, the potential of numerical simulations for extending the SEA applicability is analysed. In particular, three main points are addressed: first, a systematic methodology for the estimation of coupling loss factors from numerical simulations is developed. These factors are estimated from small deterministic simulations, and then applied for solving larger problems with SEA. Then, an SEA-like model for non-conservative couplings is presented, and a strategy for obtaining conservative and non-conservative coupling loss factors from numerical simulations is developed. Finally, a methodology for identifying SEA subsystems with modal analysis is proposed. This technique consists in performing a cluster analysis based on the problem eigenmodes. It allows detecting optimal SEA subdivisions for complex domains, even when two subsystems coexist in the same region of the geometry.
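In the SEA framework, the steady-state power balance is a small linear system in the subsystem energies: P_i = omega * (eta_i * E_i + sum_j (eta_ij * E_i - eta_ji * E_j)). Once the coupling loss factors eta_ij are known (e.g. estimated from small deterministic simulations, as above), solving it is cheap. A sketch for two subsystems with made-up loss factors, not values from the thesis:

```python
import numpy as np

# Steady-state SEA power balance for two coupled subsystems (textbook
# form; all numerical values below are illustrative assumptions):
#   P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j)
omega = 2 * np.pi * 1000.0          # band centre frequency [rad/s]
eta1, eta2 = 0.01, 0.02             # internal loss factors
eta12, eta21 = 0.005, 0.003         # coupling loss factors
P = np.array([1.0, 0.0])            # power injected in subsystem 1 only [W]

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12,       eta2 + eta21]])
E = np.linalg.solve(A, P)           # band-averaged subsystem energies [J]
# Energy flows from the driven subsystem to the undriven one, so E[0] > E[1].
```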
In the second part of the thesis, the sound transmission through double walls is analysed from different points of view, as a representative example of the complexities of vibroacoustic simulations. First, a compilation of classical approaches to this problem is presented. Then, the finite layer method is proposed as a new way of discretising the pressure field in the cavity inside double walls, especially when it is partially filled with an absorbing material. This method combines a FEM-like discretisation in the direction perpendicular to the wall with trigonometric functions in the two in-plane directions. This approach has less computational cost than FEM but allows the enforcement of continuity and equilibrium between fluid layers. It is compared with experimental data and also with other prediction models in order to check the influence of commonly assumed simplifications.
Finally, a combination of deterministic and statistical methods is presented as a possible solution for dealing with vibroacoustic problems consisting of double walls and other elements. The global analysis is performed with SEA, and numerical simulations of small parts of the problem are used to obtain the required parameters. Combining these techniques, a realistic simulation of the vibroacoustic problem can be performed with a reasonable computational cost.
On the structure of graphs without short cycles
http://hdl.handle.net/2117/94975
2023-10-12T18:35:14Z 2013-10-25T08:47:14Z
Salas Piñón, Julián
The objective of this thesis is to study cages, their constructions and their properties. For this, the study of graphs without short cycles plays a fundamental role in developing knowledge of their structure, so that we can later deal with problems on cages. Cages were introduced by Tutte in 1947. In 1963, Erdős and Sachs proved that (k, g)-cages exist for any given values of k and g. Since then, a large amount of research on cages has been devoted to their construction.
In this work we study structural properties such as the connectivity, diameter, and degree regularity of graphs without short cycles.
In some sense, connectivity is a measure of the reliability of a network. Two graphs with the same edge-connectivity may nevertheless have different reliabilities; as a more refined index than the edge-connectivity, the edge-superconnectivity is proposed, together with some other parameters called restricted connectivities.
By relaxing the conditions imposed for graphs to be cages, we can achieve more refined connectivity properties for these families, and we also obtain an approach to the structural properties of the family of graphs with more restrictions (i.e., the cages).
Our aim, by studying such structural properties of cages, is to gain a deeper insight into their structure so that we can attack the problem of their construction.
By way of example, we studied a condition on the diameter in relation to the girth pair of a graph, and as a corollary we obtained a result guaranteeing restricted connectivity of a special family of graphs arising from geometry, such as polarity graphs.
Also, we obtained a result proving the edge-superconnectivity of semiregular cages. Based on these studies, it was possible to advance the study of cages.
We therefore obtained a relevant result with respect to the connectivity of cages, namely that cages are k/2-connected. Also, arising from the previous work on girth pairs, we obtained constructions for girth pair cages that prove a bound conjectured by Harary and Kovács relating the order of girth pair cages to that of cages. Concerning the degree and the diameter, there is the concept of a Moore graph, introduced by Hoffman and Singleton after Edward F. Moore, who posed the question of describing and classifying these graphs.
As well as having the maximum possible number of vertices for a given combination of degree and diameter, Moore graphs have the minimum possible number of vertices for a regular graph with given degree and girth. That is, any Moore graph is a cage. The formula for the number of vertices in a Moore graph can be generalized to allow a definition of Moore graphs with even girth (bipartite Moore graphs) as well as odd girth, and again these graphs are cages. Thus, Moore graphs give a lower bound for the order of cages, but they are known to exist only for very specific values of k. It is therefore interesting to study how far a cage is from this bound; this value is called the excess of the cage.
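The Moore bound itself is easy to compute, which makes the notion of excess concrete. A short sketch (the cage orders quoted in the comments are the well-known values):

```python
def moore_bound(k, g):
    """Lower bound on the order of a k-regular graph of girth g
    (the Moore bound); a cage attaining it is a Moore graph."""
    d = g // 2
    if g % 2:  # odd girth: a root vertex plus k trees of depth d - 1
        return 1 + k * sum((k - 1) ** i for i in range(d))
    else:      # even girth: two adjacent roots plus their trees
        return 2 * sum((k - 1) ** i for i in range(d))

# Petersen graph: the (3, 5)-cage on 10 vertices, a Moore graph (excess 0).
assert moore_bound(3, 5) == 10
# Heawood graph: the (3, 6)-cage on 14 vertices, a bipartite Moore graph.
assert moore_bound(3, 6) == 14
# The (3, 7)-cage (the McGee graph) has 24 vertices, so its excess is:
excess = 24 - moore_bound(3, 7)   # 24 - 22 = 2
```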
We studied the excess of graphs and made a contribution, in the spirit of the work of Biggs and Ito, relating the bipartition of girth 6 cages to their orders. Entire families of cages can be obtained from finite geometries: for example, the incidence graphs of projective planes of order q, with q a prime power, are (q+1, 6)-cages. Also, by using other incidence structures such as generalized quadrangles or generalized hexagons, families of cages of girths 8 and 12 can be obtained.
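As an illustration of these finite-geometry constructions (our own check, not code from the thesis): the incidence graph of the projective plane of order q = 2 (the Fano plane) is the Heawood graph, the (3, 6)-cage on 2(q² + q + 1) = 14 vertices. Its girth can be verified with a BFS-based shortest-cycle computation:

```python
from collections import deque

def girth(adj):
    """Length of a shortest cycle, via a BFS from every vertex."""
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Incidence graph of the Fano plane PG(2, 2): points p0..p6, lines l0..l6.
lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
         (1, 4, 6), (2, 3, 6), (2, 4, 5)]
adj = {**{f"p{i}": set() for i in range(7)},
       **{f"l{j}": set() for j in range(7)}}
for j, line in enumerate(lines):
    for p in line:
        adj[f"p{p}"].add(f"l{j}")
        adj[f"l{j}"].add(f"p{p}")
# 14 vertices, 3-regular, girth 6: the Heawood graph, i.e. the (3, 6)-cage.
```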
In this thesis, we present a construction of an entire family of girth 7 cages that arises from some combinatorial properties of the incidence graphs of generalized quadrangles of order (q,q).
2013-10-25T08:47:14ZSalas Piñón, JuliánThe objective of this thesis is to study cages, constructions and properties of such families of graphs. For this, the study of graphs without short cycles plays a fundamental role in order to develop some knowledge on their structure, so we can later deal with the problems on cages. Cages were introduced by Tutte in 1947. In 1963, Erdös and Sachs proved that (k, g) -cages exist for any given values of k and g. Since then, large amount of research in cages has been devoted to their construction.
In this work we study structural properties such as the connectivity, diameter, and degree regularity of graphs without short cycles.
In some sense, connectivity is a measure of the reliability of a network. Two graphs with the same edge-connectivity, may be considered to have different reliabilities, as a more refined index than the edge-connectivity, edge-superconnectivity is proposed together with some other parameters called restricted connectivities.
By relaxing the conditions imposed for graphs to be cages, we can establish more refined connectivity properties for these families, and we also gain an approach to the structural properties of the family of graphs with more restrictions (i.e., the cages).
Our aim in studying such structural properties of cages is to gain a deeper insight into their structure, so that we can attack the problem of their construction.
By way of example, we studied a condition on the diameter in relation to the girth pair of a graph, and as a corollary we obtained a result guaranteeing the restricted connectivity of a special family of graphs arising from geometry, the polarity graphs.
We also obtained a result proving the edge-superconnectivity of semiregular cages. Building on these studies, it was possible to obtain a relevant result on the connectivity of cages, namely that cages are k/2-connected. Moreover, arising from the previous work on girth pairs, we obtained constructions of girth pair cages that prove a bound, conjectured by Harary and Kovács, relating the order of girth pair cages to that of cages. Concerning the degree and the diameter, there is the concept of a Moore graph, introduced by Hoffman and Singleton and named after Edward F. Moore, who posed the question of describing and classifying these graphs.
As well as having the maximum possible number of vertices for a given combination of degree and diameter, Moore graphs have the minimum possible number of vertices for a regular graph with given degree and girth; that is, any Moore graph is a cage. The formula for the number of vertices in a Moore graph can be generalized to allow a definition of Moore graphs with even girth (bipartite Moore graphs) as well as odd girth, and again these graphs are cages. Thus, Moore graphs give a lower bound for the order of cages, but they are known to exist only for very specific values of k; it is therefore interesting to study how far a cage is from this bound. This difference is called the excess of a cage.
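The Moore bound and the excess can be sketched explicitly (an illustration, not code from the thesis), using the standard vertex-counting formula for odd and even girth. The examples use well-known cages: the Petersen, Hoffman-Singleton, and Heawood graphs attain the bound, while the (3, 7)-cage, the McGee graph on 24 vertices, has excess 2:

```python
def moore_bound(k, g):
    """Moore lower bound on the order of a k-regular graph of girth g.
    Odd girth g = 2d+1: 1 + k + k(k-1) + ... + k(k-1)^(d-1) (count a BFS
    tree of depth d from one vertex). Even girth g = 2d: the bipartite
    bound 2(1 + (k-1) + ... + (k-1)^(d-1)) (count from one edge)."""
    if g % 2 == 1:
        d = (g - 1) // 2
        return 1 + k * sum((k - 1) ** i for i in range(d))
    return 2 * sum((k - 1) ** i for i in range(g // 2))

print(moore_bound(3, 5))       # 10 -- attained by the Petersen graph
print(moore_bound(7, 5))       # 50 -- attained by the Hoffman-Singleton graph
print(moore_bound(3, 6))       # 14 -- attained by the Heawood graph
print(moore_bound(3, 7))       # 22 -- bound only; the McGee graph has 24 vertices
print(24 - moore_bound(3, 7))  # 2  -- the excess of the (3,7)-cage
```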
We studied the excess of graphs and made a contribution, in the spirit of the work of Biggs and Ito, relating the bipartition of girth 6 cages to their orders. Entire families of cages can be obtained from finite geometries: for example, the incidence graphs of projective planes of order q, with q a prime power, are (q+1, 6)-cages. Also, by using other incidence structures, such as generalized quadrangles or generalized hexagons, families of cages of girths 8 and 12 can be obtained.
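The smallest instance of the projective-plane construction can be sketched directly (an illustration under standard definitions, not code from the thesis): for q = 2, the incidence graph of PG(2, 2), the Fano plane, is the Heawood graph, the (3, 6)-cage on 2(q² + q + 1) = 14 vertices. Points and lines are both indexed by the nonzero vectors of GF(2)³, with incidence given by a zero dot product:

```python
def fano_incidence_graph():
    """Incidence graph of PG(2,2), the Fano plane: points and lines are both
    indexed by the 7 nonzero vectors of GF(2)^3; point p lies on line l
    iff p . l = 0 (mod 2)."""
    vecs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)][1:]
    n = len(vecs)                      # 7 points and 7 lines
    adj = [[] for _ in range(2 * n)]
    for i, p in enumerate(vecs):
        for j, l in enumerate(vecs):
            if sum(x * y for x, y in zip(p, l)) % 2 == 0:
                adj[i].append(n + j)   # vertex i = point, vertex n+j = line
                adj[n + j].append(i)
    return adj

adj = fano_incidence_graph()
print(len(adj), all(len(nb) == 3 for nb in adj))  # 14 True: 3-regular, order 14
# Any two distinct points lie on exactly one common line, which rules out
# 4-cycles in the bipartite incidence graph and forces girth 6:
print(all(len(set(adj[i]) & set(adj[j])) == 1
          for i in range(7) for j in range(7) if i < j))  # True
```

The same recipe with GF(q) for a prime power q yields the (q+1, 6)-cages mentioned above; the generalized quadrangle and hexagon constructions for girths 8 and 12 are analogous but combinatorially richer.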
In this thesis, we present a construction of an entire family of girth 7 cages that arises from some combinatorial properties of the incidence graphs of generalized quadrangles of order (q,q).