Reports de recerca (http://hdl.handle.net/2117/3943)

5È. Audit clínic de l'ictus. Catalunya 2018/19 (http://hdl.handle.net/2117/364727)
Salvat Plana, Mercè; Pérez de la Ossa, Natalia; Cortés Martínez, Jordi; Ayesta, Mercè; Gallofré, Guillem
A total of 4,008 cases admitted for acute stroke between 2018 and 2019 were audited. The study period was extended to 6 months, in periods of a month and a half spread over 12 months, similar for all centres. Prospective data collection was carried out mostly by nurses at each hospital. The median time between symptom onset and arrival at the emergency department was 2.1 hours, and 72% of cases arrived at the hospital within the first four and a half hours. Compared with the 4th Audit: * Admissions to acute Stroke Units increased (44.2% to 61.3%), as did Stroke Code activations (42.9% to 61.4%; those performed by the SEM, from 43.4% to 67.8%) and reperfusion treatments (16% to 30% of ischaemic strokes). * The number of patients diagnosed during admission with previously unknown atrial fibrillation increased (7% to 18.8%). * There was a slight increase in pneumonias (6% to 8%) and a decrease in in-hospital mortality (12% to 9%). * Six quality indicators improved significantly, three remained stable and three worsened. Notable improvements were seen in several relevant quality indicators, such as performance of the dysphagia test, assessment of the lipid profile, health education for patients and relatives, recording of stroke aetiology and use of neurological scales. Improvement actions are needed for the following indicators: antithrombotic prescription within 48 hours, early mobilisation and mood assessment. Mood is assessed in a low percentage of cases, and a wide variety of measurement tools is used.
Improving and continuously maintaining the quality of care for patients with acute stroke requires periodic evaluation of clinical practice. The Stroke Audits are the evaluation instrument of the PDMVC. Improving their results is intended to guarantee better outcomes for patients.
On solving large-scale multistage stochastic problems with a new specialized interior-point approach (http://hdl.handle.net/2117/359313)
Castro Pérez, Jordi; Escudero Bueno, Laureano F.; Monge Ivars, Juan Francisco
A novel approach based on a specialized interior-point method (IPM) is presented for solving large-scale stochastic multistage continuous optimization problems, which represent the uncertainty in strategic multistage and operational two-stage scenario trees, the latter being rooted at the strategic nodes. This new solution approach considers a split-variable formulation of the strategic and operational structures, for which copies are made of the strategic nodes and the structures are rooted in the form of nested strategic-operational two-stage trees. The specialized IPM solves the normal equations of the problem’s Newton system by combining Cholesky factorizations with preconditioned conjugate gradients, doing so for, respectively, the constraints of the stochastic formulation and those that equate the split-variables. We show that, for multistage stochastic problems, the preconditioner (i) is a block-diagonal matrix composed of as many shifted tridiagonal matrices as the number of nested strategic-operational two-stage trees, thus allowing the efficient solution of systems of equations; and (ii) has, in a multistage stochastic problem, a complexity equivalent to that of a very large-scale two-stage problem. Broad computational experience is reported for large multistage stochastic supply network design (SND) and revenue management (RM) problems; the mathematical structures vary greatly between those two application types. Some of the most difficult instances of SND had 5 stages, 839 million variables, 13 million quadratic variables, 21 million constraints, and 3750 scenario tree nodes; while those of RM had 8 stages, 278 million variables, 100 million constraints, and 100,000 scenario tree nodes.
For those problems, the proposed approach obtained the solution in 2.3 days using 167 gigabytes of memory for SND, and in 1.7 days using 83 gigabytes for RM; whereas the state-of-the-art solver CPLEX v20.1 required more than 24 days and 526 gigabytes for SND, and more than 19 days and 410 gigabytes for RM.
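The abstract's key computational ingredient is solving the normal equations with conjugate gradients preconditioned by a block-diagonal matrix of shifted tridiagonal blocks. A minimal sketch of that mechanism (not the authors' implementation — the matrices, sizes, and shifts here are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def tridiag(n, shift):
    """Shifted tridiagonal SPD block (2 + shift on the diagonal, -1 off it)."""
    return sp.diags([-np.ones(n - 1), (2.0 + shift) * np.ones(n), -np.ones(n - 1)],
                    [-1, 0, 1], format="csc")

def make_preconditioner(blocks):
    """M^{-1}: independent direct solves with each small tridiagonal block."""
    factors = [spla.splu(b) for b in blocks]
    sizes = [b.shape[0] for b in blocks]
    def solve(r):
        out, i = np.empty_like(r), 0
        for f, n in zip(factors, sizes):
            out[i:i + n] = f.solve(r[i:i + n])
            i += n
        return out
    return solve

def pcg(A, b, M_solve, tol=1e-10, maxiter=500):
    """Plain preconditioned conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# Three "two-stage trees" -> three shifted tridiagonal preconditioner blocks.
blocks = [tridiag(50, s) for s in (0.1, 0.5, 1.0)]
A = sp.block_diag(blocks, format="csc") + 0.01 * sp.eye(150, format="csc")
b = np.ones(150)
x = pcg(A, b, make_preconditioner(blocks))
assert np.linalg.norm(A @ x - b) <= 1e-8 * np.linalg.norm(b)
```

Because each preconditioner block is tridiagonal, its factorization and solve cost is linear in the block size, which is what makes applying M^{-1} cheap at every CG iteration.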
New interior-point approach for one- and two-class linear support vector machines using multiple variable splitting (http://hdl.handle.net/2117/359311)
Castro Pérez, Jordi
Multiple variable splitting is a general technique for decomposing problems by using copies of variables and additional linking constraints that equate their values. The resulting large optimization problem can be solved with a specialized interior-point method that exploits the problem structure and computes the Newton direction with a combination of direct and iterative solvers (i.e., Cholesky factorizations and preconditioned conjugate gradients for linear systems related to, respectively, subproblems and new linking constraints). The present work applies this method to solving real-world
binary classification and novelty (or outlier) detection problems by means of, respectively, two-class and one-class linear support vector machines (SVMs). Unlike previous interior-point approaches for SVMs, which were practical only with low-dimensional points, the new proposal can also deal with high-dimensional data. The new method is compared with state-of-the-art solvers for SVMs that are based on either interior-point algorithms (such as SVM-OOPS) or specific algorithms developed by the machine learning community (such as LIBSVM and LIBLINEAR). The computational results show that, for two-class SVMs, the new proposal is competitive not only against previous interior-point methods—and much more efficient than they are with high-dimensional data—but also against LIBSVM; whereas LIBLINEAR generally outperformed the proposal. For one-class SVMs, the new method consistently outperformed all other approaches, in terms of either solution time or solution quality.
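The core idea named in this abstract — multiple variable splitting — can be shown on a toy problem (this is not the paper's specialized IPM; the functions and values are made up for illustration): each subproblem gets its own copy of a shared variable, and a linking constraint equates the copies.

```python
import numpy as np
from scipy.optimize import minimize

# Two convex subproblems coupled only through a shared variable x:
#   f1(x) = (x - 1)^2   and   f2(x) = (x - 3)^2,   joint optimum x* = 2.
# Split form: give each subproblem its own copy (x1, x2) and add the
# linking constraint x1 - x2 = 0 that equates the copies.
obj = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 3.0) ** 2
linking = {"type": "eq", "fun": lambda v: v[0] - v[1]}
res = minimize(obj, x0=np.zeros(2), method="SLSQP", constraints=[linking])
assert abs(res.x[0] - 2.0) < 1e-4 and abs(res.x[0] - res.x[1]) < 1e-6
```

The payoff of splitting is structural: without the linking constraints the problem separates into independent subproblems, and in the paper's setting that structure is what the specialized IPM exploits (direct solves for the subproblems, iterative solves for the linking constraints).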
Transport analytics approaches to the dynamic origin-destination estimation problem (http://hdl.handle.net/2117/344273)
Ros Roca, Xavier; Montero Mercadé, Lídia; Barceló Bugeda, Jaime; Noëkel, Klaus; Gentile, Guido
Dynamic traffic models require dynamic inputs, and one of the main inputs is the set of Dynamic Origin-Destination (OD) matrices describing the variability over time of the trip patterns across the network. The Dynamic OD Matrix Estimation (DODME) problem is hard since no direct full observations are available, and therefore one should resort to indirect estimation approaches. Among the most efficient approaches, the one that formulates the problem as a bilevel optimization problem has been widely used. This formulation solves at the upper level a nonlinear optimization that minimizes some distance measure between observed and estimated link flow counts at counting stations located in a subset of links in the network, and at the lower level a traffic assignment that estimates these link flow counts by assigning the current estimated matrix. The variants of this formulation differ in the analytical approaches that estimate the link flows in terms of the assignment and their time dependencies. Since these estimations are based on a traffic assignment at the lower level, these analytical approaches, although numerically efficient, imply a high computational cost. The advent of ICT applications has made available new sets of traffic-related measurements, enabling new approaches; under certain conditions, the data collected on used paths can be interpreted, de facto, as an observed estimate of the assignment. This allows extracting empirically the same information provided by the assignment used in the analytical approaches. This research report explores how to extract such information from the recorded data.
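A drastically simplified sketch of the upper-level problem (the network, assignment proportions, and flows below are invented; in the real bilevel scheme the assignment map comes from a dynamic traffic assignment at the lower level, not a fixed matrix): given a linear assignment proxy mapping OD flows to link counts, the OD matrix is estimated by least squares under nonnegativity.

```python
import numpy as np
from scipy.optimize import nnls

# Toy setting: 3 OD pairs, 4 counting stations. Entry A[i, j] is the (fixed,
# linear) proportion of OD flow j observed at station i -- a stand-in for
# the lower-level traffic assignment.
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
true_od = np.array([100.0, 50.0, 80.0])
counts = A @ true_od                      # observed link flow counts

# Upper level: min ||A g - counts||^2  s.t.  g >= 0 (nonnegative OD flows).
g, residual = nnls(A, counts)
assert residual < 1e-8
assert np.allclose(g, true_od)
```

Note the recovery is exact here only because the toy data are consistent and the assignment matrix has full column rank; in practice the counts are noisy, the assignment depends on the OD matrix itself, and that feedback is what makes the problem bilevel.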
An algorithm for the microaggregation problem using column generation (http://hdl.handle.net/2117/335179)
Gentile, Claudio; Spagnolo Arrizabalaga, Enric; Castro Pérez, Jordi
The field of Statistical Disclosure Control aims at reducing the risk of re-identification of an individual when disseminating data, and it is one of the main concerns of national statistical agencies. Operations Research (OR) techniques were widely used in the past for the protection of tabular data, but not for microdata (i.e., files of individuals and attributes). This work presents (as far as we know, for the first time) an application of OR techniques to the microaggregation problem, which is considered one of the best methods for microdata protection and is known to be NP-hard. The new heuristic approach is based on a column generation scheme and, unlike previous (primal) heuristics for microaggregation, it also provides a lower bound on the optimal microaggregation. Computational results on real data typically used in the literature show that solutions with small gaps are often achieved and that dramatic improvements are obtained with respect to the most popular heuristics in the literature.
KLASS: estudi d'un sistema d'ajuda al tractament estadístic de grans bases de dades (Master Thesis) (http://hdl.handle.net/2117/329557)
Gibert, Karina
Introducing spaced mosaic plots (http://hdl.handle.net/2117/328742)
Fernández Martínez, Daniel; Arnold, Richard; Pledger, Shirley
Recent research has developed a group of likelihood-based finite mixture models for a data matrix with ordinal data, establishing likelihood-based multivariate methods that apply fuzzy clustering via finite mixtures to the ordered stereotype model. There are many visualisation tools that depict dimensionality reduction in matrices of ordinal data. This technical report introduces the spaced mosaic plot, a new graphical tool for ordinal data when the ordered stereotype model is used. It takes advantage of the fitted score parameters to determine the spacing between two adjacent ordinal categories. We develop a function in R and present its documentation. Finally, a description of a spaced mosaic plot is shown.
Report on how to make spaced mosaic plots in R
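The report's tool is an R function, but the spacing rule it describes is easy to sketch (in Python here, with made-up fitted scores): the gap drawn between two adjacent ordinal categories is taken proportional to the gap between their fitted score parameters from the ordered stereotype model.

```python
import numpy as np

# Hypothetical fitted score parameters phi_1 <= ... <= phi_q from an
# ordered stereotype model, with the usual convention phi_1 = 0, phi_q = 1.
phi = np.array([0.0, 0.15, 0.2, 0.7, 1.0])

# Spacing between adjacent categories proportional to the score gaps:
gaps = np.diff(phi)
spacing = gaps / gaps.sum()          # normalised inter-category spacing
assert np.isclose(spacing.sum(), 1.0)
# Categories 2 and 3 sit close together (small fitted-score gap):
assert spacing[1] == spacing.min()
```

Categories whose fitted scores nearly coincide are thus drawn almost touching, which is the visual cue the spaced mosaic plot adds over an ordinary mosaic plot.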
Un compilador per l'UPCLISP (http://hdl.handle.net/2117/191569)
Riaño Ramos, David; Gibert, Karina; Torra Reventos, Vicenc; Cortés García, Claudio Ulises
This report describes the development of a compiler for UPCLISP, a LISP dialect of the UPC. The compilation process takes the functions one by one and, for each of them, performs a two-part type analysis and generates the code. The type analysis checks that the composition of the expressions within a function is correct and also returns the structure needed so that the newly defined function can be added to the compiler's environment. This document explains these three phases.
A new interior-point approach for large two-stage stochastic problems (http://hdl.handle.net/2117/184307)
Castro Pérez, Jordi; Lama Zubirán, Paula de la
Two-stage stochastic models give rise to very large optimization problems. Several approaches have been devised for efficiently solving them, including interior-point methods (IPMs). However, using IPMs, the linking columns associated to first-stage decisions cause excessive fill-in for the solution of the normal equations. This downside is usually alleviated if variable splitting is applied to first-stage variables. This work presents a specialized IPM that applies variable splitting and exploits the structure of the deterministic equivalent of the stochastic problem. The specialized IPM combines Cholesky factorizations and preconditioned conjugate gradients for solving the normal equations. This specialized IPM outperforms other approaches when the number of first-stage variables is large enough. This paper provides computational results for two stochastic problems: (1) a supply chain system and (2) capacity expansion in an electric system. Both linear and convex quadratic formulations were used, obtaining instances of up to 38 million variables and six million constraints. The computational results show that our procedure is more efficient than alternative state-of-the-art IPM implementations (e.g., CPLEX) and other specialized solvers for stochastic optimization.
Using interior point solvers for optimizing progressive lens models with spherical coordinates (http://hdl.handle.net/2117/184218)
Casanellas Peñalver, Glòria; Castro Pérez, Jordi
Designing progressive lenses is a complex problem that has been previously solved by formulating an optimization model based on Cartesian coordinates. In this work a new progressive lens model using spherical coordinates is presented, and interior point solvers are used to solve this new optimization model. Although this results in a highly nonlinear, nonconvex, continuous optimization problem, the new spherical coordinates model exhibits better convexity properties compared to previous ones based on Cartesian coordinates. The real-world instances considered gave rise to nonlinear optimization problems of about 900 variables and 15000 constraints. Each constraint corresponds to a point of the grid used to define the lens surface. The number of variables depends on the precision of a B-spline basis used for the representation of the surface, and the number of constraints depends on the shape and quality of the design. We present results of progressive lenses obtained using the AMPL modeling language and the nonlinear interior point solvers IPOPT, LOQO and KNITRO. Computational results are reported, as well as some examples of real-world progressive lenses calculated using this new model. Progressive lenses obtained are competitive in terms of quality with those resulting from previous models that are used in commercial glasses.
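The variables/constraints counting in this abstract can be illustrated on a 1-D analogue (the knot vector and coefficients below are invented; the real model is a 2-D surface): the optimization variables are the B-spline basis coefficients, and every grid point where the surface is evaluated contributes one constraint.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical 1-D analogue of a lens profile in a clamped cubic B-spline basis.
degree = 3
knots = np.concatenate([[0.0] * degree, np.linspace(0.0, 1.0, 8), [1.0] * degree])
n_coef = len(knots) - degree - 1          # number of optimization variables
coef = np.linspace(0.0, 1.0, n_coef)      # illustrative coefficient values
profile = BSpline(knots, coef, degree)

grid = np.linspace(0.0, 1.0, 100)          # one constraint per grid point
values = profile(grid)
assert values.shape == (100,)
# Clamped ends: the spline interpolates the first and last coefficients.
assert np.isclose(values[0], coef[0]) and np.isclose(values[-1], coef[-1])
```

Refining the basis (more knots) adds variables without changing the number of constraints, while refining the evaluation grid adds constraints without changing the number of variables — matching the abstract's remark that the two counts are controlled independently.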
Research Report UPC-DEIO DR 2019