2004, Vol. 28, No. 1
http://hdl.handle.net/2099/3721
http://hdl.handle.net/2099/3748
On-line nonparametric estimation
Khasminskii, Rafail
A survey of some recent results on nonparametric on-line estimation is presented. The first result concerns on-line estimation of a smooth signal S(t) in the classic ‘signal plus Gaussian white noise’ model. Then an analogous on-line estimator for the regression estimation problem with equidistant design is described and justified. Finally, some preliminary results on on-line estimation for an observed diffusion process are described.
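As a rough illustration of the on-line idea, the sketch below recovers a constant signal from 'signal plus Gaussian white noise' observations with a recursive update that never stores past data. This is a toy case of my own construction, not the paper's estimator, which handles smooth time-varying signals S(t).

```python
import random

# Toy on-line estimation: recover a constant signal S from noisy
# observations y_k = S + noise, updating after each sample without
# storing the past. (Sketch only; the paper treats smooth S(t).)
random.seed(0)
S = 2.5                  # "unknown" signal level, assumed for simulation
n = 10_000
estimate = 0.0
ys = []
for k in range(1, n + 1):
    y = S + random.gauss(0.0, 1.0)   # signal plus Gaussian white noise
    ys.append(y)
    estimate += (y - estimate) / k   # recursive (on-line) mean update

batch_mean = sum(ys) / n             # what an off-line estimator computes
```

The recursive update reproduces the batch mean exactly, which is the appeal of on-line schemes: the same accuracy at constant memory cost.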
http://hdl.handle.net/2099/3747
Asymptotic normality of the integrated square error of a density estimator in the convolution model
Butucea, Cristina
In this paper we consider a kernel estimator of a density in a convolution model and give a central limit theorem for its integrated square error (ISE). The kernel estimator is rather classical in minimax theory, where the underlying density is recovered from noisy observations. The kernel is fixed and depends heavily on the distribution of the noise, which is assumed to be entirely known. The bandwidth is not fixed; the results hold for any sequence of bandwidths decreasing to 0. In particular, the central limit theorem holds for the bandwidth minimizing the mean integrated square error (MISE). Rates of convergence are noticeably different in the cases of regular noise and of super-regular noise. The smoothness of the underlying unknown density is relevant to the evaluation of the MISE.
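A minimal numerical sketch of a deconvolution kernel estimator is given below, under assumptions of my own choosing: Laplace measurement error with known scale (a "regular" noise in the abstract's terminology) and a sinc kernel whose Fourier transform is the indicator of [-1, 1]. The paper's framework is more general.

```python
import numpy as np

# Deconvolution kernel density estimate: divide the empirical
# characteristic function of the noisy data by the (known) noise
# characteristic function, then invert the Fourier transform.
rng = np.random.default_rng(0)
n, b, h = 5000, 0.3, 0.25
x_clean = rng.standard_normal(n)              # unobserved X ~ N(0, 1)
y = x_clean + rng.laplace(0.0, b, size=n)     # observed Y = X + noise

t = np.linspace(-1.0 / h, 1.0 / h, 801)       # frequencies where phi_K(t*h) = 1
dt = t[1] - t[0]
ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical char. function of Y
phi_eps = 1.0 / (1.0 + (b * t) ** 2)             # Laplace char. function (known)

xs = np.linspace(-4.0, 4.0, 161)
# Fourier inversion: f_hat(x) = (1/2pi) * int e^{-itx} ecf(t)/phi_eps(t) dt
f_hat = ((np.exp(-1j * np.outer(xs, t)) * (ecf / phi_eps)).sum(axis=1) * dt).real / (2 * np.pi)
```

With super-regular (e.g. Gaussian) noise, phi_eps decays much faster, the division amplifies noise far more, and convergence rates degrade to logarithmic, which is the regular versus super-regular contrast the abstract refers to.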
http://hdl.handle.net/2099/3746
On best affine unbiased covariance-preserving prediction of factor scores
Neudecker, Heinz
This paper generalizes results presented by ten Berge, Krijnen, Wansbeek & Shapiro, who examined procedures and results proposed by Anderson & Rubin, McDonald, Green, and Krijnen, Wansbeek & ten Berge. We consider the same problem under weaker rank assumptions: we allow some moments, namely the variance of the observable scores vector and that of the unique factors, to be singular. We require T′T > 0, where TT′ is a Schur decomposition of the unique-factor covariance matrix. As usual, the variance of the common factors and the loadings matrix A are assumed to have full column rank.
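A one-factor numerical sketch of the covariance-preserving idea follows, with illustrative numbers of my own (not taken from the paper): the usual regression predictor of the factor score shrinks its variance below Var(f), and a rescaling restores it exactly.

```python
import numpy as np

# Factor model x = A f + u with Var(f) = 1 and Var(u) = Psi (diagonal),
# so Var(x) = Sigma = A A' + Psi.  Illustrative one-factor numbers.
A = np.array([0.8, 0.7, 0.6, 0.5])           # loadings, full column rank
Psi = np.diag([0.36, 0.51, 0.64, 0.75])
Sigma = np.outer(A, A) + Psi

# Regression ("Thomson") predictor f_hat = P x has variance c < 1:
P = A @ np.linalg.inv(Sigma)                 # Phi A' Sigma^{-1} with Phi = 1
c = P @ Sigma @ P                            # Var(P x) = A' Sigma^{-1} A < 1

# Rescale so that the predictor's variance matches Var(f) exactly:
B = P / np.sqrt(c)
preserved = B @ Sigma @ B                    # equals 1 up to rounding
```

This illustrates only the covariance-preservation constraint; the paper's contribution is characterizing the *best* affine unbiased predictor subject to it, under the weaker rank conditions stated above.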
http://hdl.handle.net/2099/3745
Local superefficiency of data-driven projection density estimators in continuous time
Bosq, Denis; Blanke, Delphine
We construct a data-driven projection density estimator for continuous time processes. This estimator reaches superoptimal rates over a class F0 of densities that is dense in the family of all possible densities, and a «reasonable» rate elsewhere. The class F0 may be chosen in advance by the analyst. Results apply to Rd-valued processes and to N-valued processes. In the particular case where a square-integrable local time exists, it is shown that our estimator is strictly better than the local time estimator over F0.
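The sketch below shows a projection density estimator on [0, 1] with a data-driven truncation rule, using discrete i.i.d. sampling and a threshold of my own choosing; the paper's continuous-time setting and superefficiency analysis are not reproduced here.

```python
import math
import random

# Projection density estimator: expand the density in a cosine basis,
# estimate coefficients by sample means, keep only coefficients that
# exceed a data-driven threshold ~ sqrt(log n / n).
random.seed(1)
n = 4000
sample = [random.betavariate(2, 2) for _ in range(n)]   # assumed true density

def basis(j, x):
    # e_0 = 1, e_j(x) = sqrt(2) cos(j pi x): orthonormal on [0, 1]
    return 1.0 if j == 0 else math.sqrt(2) * math.cos(j * math.pi * x)

def coef(j):
    # a_j estimates E[e_j(X)] = integral of e_j * f
    return sum(basis(j, x) for x in sample) / n

thresh = math.sqrt(math.log(n) / n)
J = 20
coefs = [coef(j) for j in range(J)]
kept = [a if (j == 0 or abs(a) > thresh) else 0.0 for j, a in enumerate(coefs)]

def f_hat(x):
    return sum(a * basis(j, x) for j, a in enumerate(kept))
```

Because a_0 = 1 exactly and the higher basis functions integrate to zero, the estimate always integrates to one; the thresholding is what lets such estimators adapt, achieving fast rates when the density has few active coefficients.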
http://hdl.handle.net/2099/3744
Modelling stock returns with AR-GARCH processes
Ferenstein, Elzbieta; Gasowski, Miroslaw
Financial returns are often modelled as autoregressive time series with random disturbances having conditionally heteroscedastic variances, especially by GARCH-type processes. GARCH processes have been intensively studied in the financial and econometric literature as risk models for many financial time series. Analyzing two data sets of stock prices, we fit AR(1) processes with GARCH or EGARCH errors to the log returns. Moreover, hyperbolic or generalized error distributions turn out to be good models for the white noise distributions.
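A simulation sketch of the AR(1)-GARCH(1,1) structure is given below, with illustrative parameter values of my own; fitting such models to real stock returns, as done in the paper, would proceed by maximum likelihood.

```python
import math
import random

# AR(1) returns with GARCH(1,1) errors:
#   r_t = phi * r_{t-1} + e_t,  e_t = sigma_t * z_t,
#   sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2.
random.seed(2)
phi = 0.05                            # AR(1) coefficient (illustrative)
omega, alpha, beta = 0.1, 0.1, 0.85   # alpha + beta < 1 => covariance stationary
n = 10_000
r = [0.0]                             # simulated log returns
sig2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
e_prev = 0.0
for _ in range(n):
    sig2 = omega + alpha * e_prev ** 2 + beta * sig2   # conditional variance
    e = math.sqrt(sig2) * random.gauss(0.0, 1.0)       # heteroscedastic shock
    r.append(phi * r[-1] + e)
    e_prev = e

mean_r = sum(r) / len(r)
var_r = sum((x - mean_r) ** 2 for x in r) / len(r)     # near omega/(1-alpha-beta)
```

Replacing `random.gauss` with a heavier-tailed innovation distribution is exactly the modelling choice the abstract points to: hyperbolic or generalized error innovations often match the white noise better than the Gaussian.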
http://hdl.handle.net/2099/3741
Improving both domain and total area estimation by composition
Costa, Àlex; Satorra, A.; Ventura, Eva
In this article we propose small area estimators for both the small and large area parameters. When the objective is to estimate parameters at both levels, optimality is achieved by a sample design that combines fixed and proportional allocation. In such a design, one fraction of the sample is distributed proportionally among the small areas and the rest is evenly distributed. Simulation is used to assess
the performance of the direct estimator and two composite small area estimators, for a range of sample sizes and different sample distributions. Performance is measured in terms of mean squared errors for both small and large area parameters. Small area composite estimators open the possibility of reducing the sample size when the desired precision is given, or improving precision for a given sample size.
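A small numerical sketch of a composite small-area estimator follows, using a convex combination of the direct area mean and the overall (synthetic) mean with a sample-size-dependent weight. The data and shrinkage constant are hypothetical; the paper compares specific composite estimators via simulation.

```python
# Composite small-area estimation: blend each area's direct mean with
# the overall mean, trusting the direct estimate more as the area's
# sample grows.  (Illustrative data and weighting, not the paper's.)
areas = {
    "A": [10.2, 11.1, 9.8, 10.5],
    "B": [14.0, 13.2],
    "C": [8.1, 8.4, 7.9, 8.3, 8.0],
}
all_obs = [y for ys in areas.values() for y in ys]
overall = sum(all_obs) / len(all_obs)        # synthetic (large-area) estimate

k = 3.0                                      # shrinkage constant (assumed)
composite = {}
for name, ys in areas.items():
    direct = sum(ys) / len(ys)               # direct (small-area) estimate
    gamma = len(ys) / (len(ys) + k)          # more data => weight toward direct
    composite[name] = gamma * direct + (1 - gamma) * overall
```

Each composite value lies between its area's direct mean and the overall mean, which is how such estimators trade a little bias for a large variance reduction in areas with few observations.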
http://hdl.handle.net/2099/3740
Incorporating patients' characteristics in cost-effectiveness studies with clinical trial data: a flexible Bayesian approach
Vázquez Polo, Francisco J.; Negrín Hernández, Miguel Angel
Most published research comparing medical treatment options merely compares the results (effectiveness and cost) obtained in each treatment group. The present work proposes incorporating other patient characteristics into the analysis. Most studies in this context assume normality of both costs and effectiveness; in practice, however, the data are not always distributed according to this assumption, and alternative models have to be developed. In this paper, we present a general model of cost-effectiveness incorporating both binary effectiveness and skewed cost. In a practical application, we compare two highly active antiretroviral treatments applied to asymptomatic HIV patients. We propose a logit model when effectiveness is measured by whether an initial goal is achieved; for this model, the measure used to compare treatments is the difference in the probability of success. Moreover, the cost data usually present right skewness, so we propose a log-transformation for the cost regression model. The three models are fitted, demonstrating the advantages of this modelling. The cost-effectiveness acceptability curve is used as a measure for decision-making.
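The decision measure mentioned last, the cost-effectiveness acceptability curve (CEAC), can be sketched as follows: for each willingness-to-pay threshold lambda, it is the probability that the incremental net benefit lambda·dE − dC is positive. The simulated draws below stand in for posterior draws from a Bayesian model like the paper's; the distributions are assumptions of mine.

```python
import random

# CEAC sketch: fraction of (posterior) draws in which the new treatment
# is cost-effective at willingness-to-pay lambda, i.e. lambda*dE - dC > 0.
random.seed(3)
draws = 5000
d_eff = [random.gauss(0.05, 0.02) for _ in range(draws)]    # incr. effectiveness
d_cost = [random.gauss(300.0, 150.0) for _ in range(draws)] # incremental cost

def ceac(lam):
    return sum(lam * de - dc > 0 for de, dc in zip(d_eff, d_cost)) / draws

curve = {lam: ceac(lam) for lam in (0, 2000, 6000, 12000, 20000)}
```

Plotting `curve` against lambda gives the acceptability curve: near zero when the decision-maker will not pay for effectiveness, rising toward one as the threshold grows, with the crossing region summarizing the decision uncertainty.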