2003, Vol. 27, Núm. 1
http://hdl.handle.net/2099/3719
http://hdl.handle.net/2099/3735
Likelihood for interval-censored observations from multi-state models
Commenges, Daniel
We consider the mixed discrete-continuous pattern of observation in a multi-state model. This pattern is classical: very often, clinical status is assessed at discrete visit times while the time of death is observed exactly. The likelihood can easily be written heuristically for such models, but a formal proof is not easy under such observational patterns. We give a rigorous derivation of the likelihood for the illness-death model based on applying Jacod's formula to an observed bivariate counting process.
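The heuristic likelihood the abstract refers to can be illustrated with a small sketch. This is not the paper's counting-process derivation, only the familiar Markov illness-death model with hypothetical constant intensities (named a01, a02, a12 here), for a subject seen healthy at one visit, ill at the next, and observed to die exactly at a later time:

```python
import math

# Hypothetical constant transition intensities for an illness-death model:
# a01 health -> illness, a02 health -> death, a12 illness -> death.
# (Assumes a01 + a02 != a12 so the closed form below is well defined.)

def p00(u, a01, a02):
    """P(still healthy after an elapsed time u)."""
    return math.exp(-(a01 + a02) * u)

def p01(u, a01, a02, a12):
    """P(healthy at the start, ill after time u), constant intensities."""
    return a01 / (a01 + a02 - a12) * (math.exp(-a12 * u) - math.exp(-(a01 + a02) * u))

def p11(u, a12):
    """P(still alive in the illness state after time u)."""
    return math.exp(-a12 * u)

def likelihood_contribution(v1, v2, t, a01, a02, a12):
    """Heuristic likelihood for one subject seen healthy at visit v1,
    ill at visit v2, and dying exactly at t (v1 < v2 < t): stay healthy
    to v1, become ill somewhere in (v1, v2], survive in illness to t,
    then a density factor a12 for the exactly observed death."""
    return p00(v1, a01, a02) * p01(v2 - v1, a01, a02, a12) * p11(t - v2, a12) * a12
```

For this Markov model the product form is exact by the Chapman-Kolmogorov equations; the point of the paper is precisely that justifying such expressions rigorously under mixed discrete-continuous observation is nontrivial.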
Cumulative processes related to event histories
http://hdl.handle.net/2099/3734
Cumulative processes related to event histories
Cook, Richard J.; Lawless, Jerald F.; Lee, Ker-Ai
Costs or benefits which accumulate for individuals over time are of interest in many life history processes. Familiar examples include costs of health care for persons with chronic medical conditions, the payments to insured persons during periods of disability, and quality of life, which is sometimes used in the evaluation of treatments in terminally ill patients. For convenience, here we use the term costs to refer to cost or other cumulative measures. Two important scenarios are (i) where costs are associated with the occurrence of certain events, so that total cost accumulates as a step function, and (ii) where individuals may move between various states over time, with cost accumulating at a constant rate determined by the state occupied. In both cases, there is frequently a random variable T that represents the duration of the process generating the costs. Here we consider estimation of the mean cumulative cost over a period of interest using methods based upon marginal features of the cost process and intensity-based models. Robustness to adaptive censoring is discussed in the context of the multi-state methods. Data from a quality of life study of breast cancer patients are used to illustrate the methods.
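Scenario (i) above, cost accumulating as a step function up to a duration T, can be sketched in a few lines. This toy version assumes complete follow-up to the time of interest (no censoring adjustment, which is a central concern of the paper); the data layout is hypothetical:

```python
# Toy illustration, not the authors' estimator: with complete follow-up,
# the mean cumulative cost at time t is the average, over subjects, of
# the costs accrued by min(t, T_i), where T_i ends subject i's process.

def cumulative_cost(events, t, duration):
    """Total cost accrued by time t for one subject.
    `events` is a list of (time, cost) pairs; accrual stops at `duration`."""
    horizon = min(t, duration)
    return sum(c for (s, c) in events if s <= horizon)

def mean_cumulative_cost(subjects, t):
    """Average cumulative cost at time t over subjects given as
    (events, duration) pairs; assumes no one is censored before t."""
    return sum(cumulative_cost(ev, d_i, duration=d_i) if False else
               cumulative_cost(ev, t, d_i) for (ev, d_i) in subjects) / len(subjects)
```

Under censoring, this naive average is biased; the marginal and intensity-based estimators the paper studies are designed to correct for exactly that.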
A sensitivity analysis for causal parameters in structural proportional hazards models
http://hdl.handle.net/2099/3733
A sensitivity analysis for causal parameters in structural proportional hazards models
Goetghebeur, E.; Loeys, T.
Deviations from assigned treatment occur often in clinical trials. In such a setting, the traditional intent-to-treat analysis does not measure biological efficacy but rather programmatic effectiveness. For the all-or-nothing compliance situation, Loeys and Goetghebeur (2003) recently proposed a Structural Proportional Hazards method. It allows for causal estimation in the complier subpopulation provided the exclusion restriction holds: randomization per se has no effect unless exposure has changed. This assumption is typically made with structural models for noncompliance but questioned when the trial is not blinded. In this paper we extend the structural PH model to allow for an effect of randomization per se. This enables analyzing sensitivity of conclusions to deviations from the exclusion restriction. In a colorectal cancer trial we find the causal estimator of the effect of an arterial device implantation to be remarkably insensitive to such deviations.
Survival analysis with coarsely observed covariates
http://hdl.handle.net/2099/3732
Survival analysis with coarsely observed covariates
Nielsen, Søren Feodor
In this paper we consider analysis of survival data with incomplete covariate information. We model the incomplete covariates as a random coarsening of the complete covariate, and an overview of the theory of coarsening at random is given. Various ways of estimating the parameters of the model for the survival data given the covariates are discussed and compared.
Aspects of the analysis of multivariate failure time data
http://hdl.handle.net/2099/3731
Aspects of the analysis of multivariate failure time data
Prentice, Ross L.; Kalbfleisch, J. D.
Multivariate failure time data arise in various forms: recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event; correlated failure times, when an individual is followed for the occurrence of two or more types of events for which the individual is simultaneously at risk, or when distinct individuals have dependent event times; and more complicated multistate processes, when individuals may move among a number of discrete states over the course of a follow-up study and the states and associated sojourn times are recorded. Here we provide a critical review of statistical models and data analysis methods for the analysis of recurrent event data and correlated failure time data. This review suggests a valuable role for partially marginalized intensity models in the analysis of recurrent event data, and points to the usefulness of marginal hazard rate models and nonparametric estimates of pairwise dependencies in the analysis of correlated failure times. Areas in need of further methodology development are indicated.
Indirect inference for survival data
http://hdl.handle.net/2099/3730
Indirect inference for survival data
Turnbull, Bruce W.; Jiang, Wenxin
In this paper we describe the so-called "indirect" method of inference, originally developed in the econometrics literature, and apply it to survival analyses of two data sets with repeated events. This method is often more convenient computationally than maximum likelihood estimation when handling such model complexities as random effects and measurement error, for example; and it can also serve as a basis for robust inference with less stringent assumptions on the data-generating mechanism. The first data set concerns recurrence times of mammary tumors in rats and is modeled using a Poisson process model with covariates and frailties. The second data set involves times of recurrences of skin tumors in individual patients in a clinical trial. The methodology is applied in both parametric and semi-parametric regression analyses to accommodate random effects and covariate measurement error.
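The core indirect-inference idea, fitting an auxiliary statistic to observed data and then choosing the structural parameter whose simulated data best reproduce it, can be shown with a deliberately tiny example. This is only a sketch of the matching step (a grid search on a hypothetical exponential lifetime model), not the paper's Poisson-process or frailty analyses:

```python
import random

# Toy indirect inference: estimate the rate of an exponential lifetime
# model by matching an auxiliary statistic (the sample mean) between
# observed and simulated data. All names here are illustrative.

def auxiliary_stat(data):
    return sum(data) / len(data)

def simulate(rate, n, rng):
    return [rng.expovariate(rate) for _ in range(n)]

def indirect_estimate(observed, candidate_rates, n_sim=5000, seed=0):
    """Pick the candidate rate whose simulated auxiliary statistic lies
    closest to the observed one (a crude grid-search matching step)."""
    target = auxiliary_stat(observed)
    rng = random.Random(seed)
    return min(candidate_rates,
               key=lambda r: abs(auxiliary_stat(simulate(r, n_sim, rng)) - target))
```

In practice the auxiliary model is richer (for example a misspecified but easily fitted regression), and the match is done by minimizing a quadratic form rather than a grid search, but the simulate-and-match structure is the same.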
Optimization of touristic distribution networks using genetic algorithms
http://hdl.handle.net/2099/3729
Optimization of touristic distribution networks using genetic algorithms
Medina, Josep R.; Yepes, Víctor
The eight basic elements of genetic algorithm (GA) design are described and applied to solve a low-demand passenger distribution problem for a hub airport in Alicante and 30 touristic destinations in Northern Africa and Western Europe. The flexibility of GA, the possibility of creating mutually beneficial feedback processes with human intelligence to solve complex problems, and the difficulties in detecting erroneous code embedded in the software are described. A new three-parent edge mapped recombination operator is used to solve the capacitated vehicle routing problem required for estimating the costs associated with touristic distribution networks of low demand. GA proved to be very flexible, especially in changing business environments, and well suited to decision-making problems involving ambiguous and sometimes contradictory constraints.
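The basic GA loop for a permutation-encoded routing problem can be sketched compactly. This stand-in solves a single-vehicle tour (not the capacitated problem) and uses a standard two-parent order crossover rather than the authors' three-parent edge mapped operator; the distance matrix and parameters are hypothetical:

```python
import random

# Minimal GA sketch for a closed tour from depot 0 through all cities,
# with elitist selection, order crossover, and swap mutation.

def tour_length(tour, dist):
    path = [0] + tour + [0]                  # closed tour from the depot
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

def order_crossover(p1, p2, rng):
    """Copy a slice of parent 1, fill the rest in parent 2's order."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def evolve(dist, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    cities = list(range(1, len(dist)))
    pop = [rng.sample(cities, len(cities)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[:pop_size // 2]      # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:           # swap mutation
                i, j = rng.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

Capacity constraints and multiple routes would enter through the fitness function and the encoding, which is where the paper's design choices, including the recombination operator, matter.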
An empirical evaluation of five small area estimators
http://hdl.handle.net/2099/3728
An empirical evaluation of five small area estimators
Costa, Àlex; Satorra, A.; Ventura, Eva
This paper compares five small area estimators. We use Monte Carlo simulation in the context of both artificial and real populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and one that uses area-specific estimates of variance and squared bias. In the study with the real population, we found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
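The composite estimators compared above share one generic form, which can be written down directly. This is a schematic version, not the paper's exact formulation: a weighted combination of a direct (unbiased but noisy) estimator and an indirect/synthetic (stable but biased) one, with the weight driven by the variance of the former and the squared bias of the latter:

```python
# Illustrative composite small area estimator: the weight minimizing the
# mean squared error of  w * direct + (1 - w) * synthetic, when the direct
# estimator is unbiased with variance V and the synthetic estimator has
# squared bias B (its own variance ignored), is  w = B / (B + V).

def composite(direct, synthetic, var_direct, sq_bias_synthetic):
    """The noisier the direct estimator, the smaller w becomes and the
    more weight the synthetic estimator receives."""
    w = sq_bias_synthetic / (sq_bias_synthetic + var_direct)
    return w * direct + (1 - w) * synthetic
```

The two estimated-weight variants in the paper differ in how V and B are obtained: either pooled across areas under a homogeneity assumption, or estimated separately for each area.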