2009, Vol. 33, Núm. 2
http://hdl.handle.net/2099/8909
http://hdl.handle.net/2099/8952
Some improved two-stage shrinkage testimators for the mean of normal distribution
Al-Hemyari, Zuhair
In this paper, we introduce some two-stage shrinkage testimators (TSST) for the mean μ when a prior estimate μ0 of μ is available from past experience, by considering a feasible form of the shrinkage weight function, which is used in both estimation stages with different quantities.
The expressions for the bias, mean squared error, expected sample size and relative efficiency are derived and studied for both cases, σ² known and σ² unknown. The usefulness of these testimators under different situations is discussed, drawing conclusions from various numerical tables obtained from simulation results.
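The two-stage idea can be sketched in a few lines. This is a minimal illustration of the general scheme only: the constant `weight` and the z-test acceptance rule stand in for the paper's shrinkage weight function and test region, which are not given here.

```python
import math
import random

def two_stage_shrinkage_mean(draw, mu0, n1, n2, sigma, z_crit=1.96, weight=0.5):
    """Sketch of a two-stage shrinkage testimator for a normal mean.

    draw(n) returns a list of n observations; mu0 is the prior guess.
    Stage 1: test H0: mu = mu0 using n1 observations.  If H0 is accepted,
    shrink the stage-1 mean toward mu0; otherwise take n2 more observations
    and use the pooled mean (left unshrunk here for simplicity).
    Returns (estimate, total sample size used).
    """
    x1 = draw(n1)
    xbar1 = sum(x1) / n1
    z = (xbar1 - mu0) / (sigma / math.sqrt(n1))
    if abs(z) <= z_crit:                       # H0 accepted: shrink toward mu0
        return weight * xbar1 + (1 - weight) * mu0, n1
    x2 = draw(n2)                              # H0 rejected: second stage
    pooled = (sum(x1) + sum(x2)) / (n1 + n2)
    return pooled, n1 + n2
```

The returned sample size (n1 or n1 + n2) is what the paper's "expected sample size" averages over repeated use of the rule.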
http://hdl.handle.net/2099/8951
How much Fisher information is contained in record values and their concomitants in the presence of inter-record times?
Amini, Morteza; Ahmadi, Jafar
It is shown that, although the distribution of inter-record times does not depend on the parent distribution, Fisher information increases when inter-record times are included. The general results concern different classes of bivariate distributions and propose a comparative study of the Fisher information. This study covers situations in which the univariate counterpart of the underlying bivariate family belongs to a general continuous parametric family and its well-known subclasses, such as location-scale and shape families, the exponential family and the proportional (reversed) hazard model. We derive explicit formulas for the additional information of record times given records and their concomitants (bivariate records) for some classes of bivariate distributions. Some common distributions are considered as illustrative examples and are classified according to this criterion. A simulation study and a real data example from the bivariate normal distribution are used to study the relative efficiencies of the estimator based on bivariate record values and inter-record times with respect to the corresponding estimators based on an i.i.d. sample of the same size and on bivariate records only.
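To fix terminology, here is a small sketch of how upper record values and inter-record times are extracted from an observed sequence (a generic illustration, not part of the paper's inferential machinery):

```python
def upper_records(seq):
    """Return the upper record values of a sequence together with the
    inter-record times (number of observations between successive records).
    The first observation is always a record by convention."""
    records, times = [], []
    last_idx = None
    current_max = float("-inf")
    for i, x in enumerate(seq):
        if x > current_max:          # a new upper record
            current_max = x
            records.append(x)
            if last_idx is not None:
                times.append(i - last_idx)   # inter-record time
            last_idx = i
    return records, times
```

For the sequence 3, 1, 4, 1, 5, 9, 2, 6 the records are 3, 4, 5, 9 with inter-record times 2, 2, 1; the paper asks how much Fisher information those times add to the records themselves.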
http://hdl.handle.net/2099/8950
Eliciting expert opinion for cost-effectiveness analysis: a flexible family of prior distributions
Martel, María; Negrín, Miguel Angel; Vázquez Polo, Francisco J.
The Bayesian approach to statistics has been growing rapidly in popularity as an alternative to the classical approach in the economic evaluation of health technologies, due to the significant benefits it affords. One of the most important advantages of Bayesian methods is their incorporation of prior information: a greater amount of information is used, and so stronger results are obtained than with frequentist methods. However, although Stevens and O'Hagan (2002) showed that the elicitation of a prior distribution on the parameters of interest plays a crucial role in a Bayesian cost-effectiveness analysis, relatively few papers have since addressed this issue. In a cost-effectiveness analysis, the parameters of interest are the mean efficacy and mean cost of each treatment. The most common prior structure for these two parameters is the bivariate normal. In this paper, we study the use of a more general (and flexible) family of prior distributions for the parameters; in particular, we assume that the conditional densities of the parameters are all normal.
The model is validated using data from a real clinical trial. The posterior distributions are simulated using Markov chain Monte Carlo techniques.
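A prior specified through normal conditionals lends itself naturally to Gibbs sampling. The following is a generic sketch only: the conditional-mean functions passed in are hypothetical placeholders, not the elicited prior of the paper.

```python
import random

def gibbs_conditional_normals(n_iter, mean_e_given_c, mean_c_given_e,
                              sd_e=1.0, sd_c=1.0, init=(0.0, 0.0), seed=1):
    """Gibbs sampler for a pair (theta_e, theta_c) -- say, mean efficacy and
    mean cost -- whose full conditionals are both normal.

    mean_e_given_c(c) and mean_c_given_e(e) give the conditional means;
    sd_e and sd_c the conditional standard deviations.  All are assumptions
    supplied by the caller."""
    rng = random.Random(seed)
    e, c = init
    draws = []
    for _ in range(n_iter):
        e = rng.gauss(mean_e_given_c(c), sd_e)   # update efficacy | cost
        c = rng.gauss(mean_c_given_e(e), sd_c)   # update cost | efficacy
        draws.append((e, c))
    return draws
```

With linear conditional means (e.g. `lambda c: 0.5 * c`) this reduces to the familiar bivariate-normal Gibbs scheme; the point of the paper's family is that normal conditionals allow joint distributions beyond the bivariate normal.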
http://hdl.handle.net/2099/8949
Estimation in the Birnbaum-Saunders distribution based on scale-mixture of normals and the EM-algorithm
Balakrishnan, N.; Leiva, Víctor; Sanhueza, Antonio; Vilca, Filidor
Scale mixtures of normal (SMN) distributions are used for modelling symmetric data. Members of this family have appealing properties such as robust estimation, easy random number generation, and efficient computation of the ML estimates via the EM-algorithm. The Birnbaum-Saunders (BS) distribution is a positively skewed model that is related to the normal distribution and has received considerable attention. We introduce a type of BS distribution based on SMN models, carry out a lifetime analysis, develop the EM-algorithm for ML estimation of the parameters, and illustrate the results with real data, showing the robustness of the estimation procedure.
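The normal relationship mentioned above is the standard stochastic representation of the BS distribution: if Z ~ N(0, 1), then T = β(αZ/2 + √((αZ/2)² + 1))² follows BS(α, β). A minimal generator based on that representation (the paper's SMN extension would replace the plain normal Z by a scale mixture):

```python
import math
import random

def rbirnbaum_saunders(n, alpha, beta, seed=0):
    """Generate n Birnbaum-Saunders(alpha, beta) variates via the normal
    representation T = beta * (a/2 + sqrt((a/2)**2 + 1))**2 with a = alpha*Z,
    Z standard normal.  beta is the scale parameter and the median of T."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a = alpha * rng.gauss(0.0, 1.0)
        out.append(beta * (a / 2 + math.sqrt((a / 2) ** 2 + 1)) ** 2)
    return out
```

Since the transformation is monotone in Z and Z has median 0, the median of T is exactly β, which gives a quick sanity check on the generator.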
http://hdl.handle.net/2099/8948
Nonparametric estimation of the expected accumulated reward for semi-Markov chains
D'Amico, Guglielmo
In this paper a nonparametric estimator of the expected value of a discounted semi-Markov reward chain is proposed. Its asymptotic properties are established and, as a consequence of the asymptotic normality, confidence sets are obtained. An application to quality-of-life modelling is described.
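The quantity being estimated can be made concrete by plain Monte Carlo simulation of a semi-Markov reward chain. This is an illustrative simulator of the target quantity only, not the paper's nonparametric estimator; all inputs (transition rows, sojourn-time samplers, per-unit-time rewards) are caller-supplied assumptions.

```python
import random

def mc_discounted_reward(P, sojourn, reward, start, horizon, discount,
                         n_paths=2000, seed=0):
    """Monte Carlo estimate of the expected discounted accumulated reward of a
    semi-Markov reward chain up to a finite horizon.

    P[i]        -- transition probability row of state i
    sojourn[i]  -- function of an rng drawing one holding time in state i
    reward[i]   -- reward accrued per unit time in state i
    discount    -- per-unit-time discount factor (rewards discounted at the
                   entry time of each sojourn, a simplifying convention)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        t, state, acc = 0.0, start, 0.0
        while t < horizon:
            h = min(sojourn[state](rng), horizon - t)
            acc += reward[state] * h * discount ** t
            # draw the next state from the transition row of the current state
            u, cum, nxt = rng.random(), 0.0, 0
            for j, p in enumerate(P[state]):
                cum += p
                if u <= cum:
                    nxt = j
                    break
            t += h
            state = nxt
        total += acc
    return total / n_paths
```

In a quality-of-life application, states would be health states, sojourn times the durations spent in them, and rewards the per-period utility of each state.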
http://hdl.handle.net/2099/8947
Testing for the existence of clusters
Fuentes, Claudio; Casella, George
Detecting and determining the clusters present in a given sample has long been an important concern for researchers from different fields. In particular, assessing whether the clusters are statistically significant is a question that has been asked by a number of experimenters. Recently, this question arose again in a study in maize genetics, where determining the significance of clusters is a crucial first step in the identification of a genome-wide collection of mutants that may affect kernel composition. Although several efforts have been made in this direction, little has been done towards developing an actual hypothesis test to assess the significance of clusters. In this paper, we propose a new methodology that allows the examination of the hypothesis test H0 : κ = 1 vs. H1 : κ = k, where κ denotes the number of clusters present in a certain population. Our procedure, based on Bayesian tools, permits us to obtain closed-form expressions for the posterior probabilities corresponding to the null hypothesis. From these, we calibrate our results by estimating the frequentist null distribution of the posterior probabilities, in order to obtain the p-values associated with the observed posterior probabilities. In most cases, the actual evaluation of the posterior probabilities is computationally intensive, and several algorithms have been discussed in the literature. Here, we propose a simple estimation procedure, based on MCMC techniques, that permits an efficient and easily implementable evaluation of the test. Finally, we present simulation studies
that support our conclusions, and we apply our method to the analysis of NIR spectroscopy data coming from the genetic study that motivated this work.
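The frequentist calibration step described above has a simple generic shape: simulate the statistic (here, a posterior probability of H0) under the null, then report the fraction of simulated values at least as extreme as the observed one. The sketch below shows only that generic step, with a caller-supplied null simulator; it is not the paper's clustering test.

```python
import random

def calibrated_p_value(observed_stat, simulate_null_stat, n_sim=1000, seed=0):
    """Calibrate a statistic against its estimated frequentist null
    distribution.  Small values of the statistic are treated as evidence
    against H0, so the p-value is the fraction of simulated null statistics
    less than or equal to the observed one.

    simulate_null_stat(rng) must draw one statistic under H0 (an assumption
    supplied by the caller)."""
    rng = random.Random(seed)
    null_stats = [simulate_null_stat(rng) for _ in range(n_sim)]
    return sum(s <= observed_stat for s in null_stats) / n_sim
```

If the statistic were exactly uniform under H0, the calibrated p-value would simply reproduce the observed value; in practice the simulated null distribution supplies the correction.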