Similar Literature
20 similar documents found.
1.
In practice, survival data are often collected over geographical regions. Shared spatial frailty models have been used to model spatial variation in survival times and are commonly implemented with Bayesian Markov chain Monte Carlo methods. However, this approach comes at the price of slow mixing and heavy computational cost, which may render it impractical for data-intensive applications. Alternatively, a frailty model assuming an independent and identically distributed (iid) random effect can be implemented easily and efficiently. We therefore used simulations to assess the bias and efficiency loss in the estimated parameters when residual spatial correlation is present but an iid random effect is used. Our simulations indicate that a shared frailty model with an iid random effect can estimate the regression coefficients reasonably well, even in the presence of residual spatial correlation, provided that the percentage of censoring is not too high and the number of clusters and the cluster size are not too small. Therefore, if the primary goal is to assess covariate effects, one may choose the frailty model with an iid random effect; whereas if the goal is to predict the hazard, additional care is needed because of the efficiency loss in the parameter(s) of the baseline hazard.
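For readers who want to reproduce this kind of assessment, the sketch below simulates clustered survival times under a shared gamma frailty with an exponential baseline hazard. The cluster count, cluster size, frailty variance, covariate effect, and censoring mechanism are illustrative assumptions, not the settings used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n_clusters, cluster_size = 50, 20          # assumed design, not the paper's
    theta, lam, beta = 0.5, 0.1, np.log(2.0)   # frailty variance, baseline rate, covariate effect

    # shared gamma frailty with mean 1 and variance theta, one value per cluster
    w = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
    w = np.repeat(w, cluster_size)

    x = rng.binomial(1, 0.5, size=w.size)      # a single binary covariate
    # exponential baseline: T = -log(U) / (w * lam * exp(beta * x))
    t_event = rng.exponential(1.0, size=w.size) / (w * lam * np.exp(beta * x))
    t_cens = rng.exponential(1.0 / lam, size=w.size)   # independent censoring, assumed
    time = np.minimum(t_event, t_cens)
    event = (t_event <= t_cens).astype(int)
    print(f"censoring rate: {1 - event.mean():.2f}")

Fitting a frailty model with an iid random effect to data generated this way, and comparing the estimated coefficients against beta, is the kind of bias assessment the abstract describes.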

2.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure, or relapse may be indicated by a relatively large number of individuals with long censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model separately estimates the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is carried out via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed, and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model, and a residual analysis is performed in order to select an appropriate model. The authors would like to thank the editor and referees for their helpful comments. This work was supported by CNPq, Brazil.
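As a point of reference, the standard mixture formulation behind such long-term survivor models writes the population survival function as a combination of a cured and a susceptible component, with the cured proportion modelled through a logistic link (generic notation, not necessarily the paper's):

    \[
    S_{\mathrm{pop}}(t \mid x, z) = \pi(z) + \{1 - \pi(z)\}\, S_u(t \mid x),
    \qquad
    \pi(z) = \frac{1}{1 + \exp(-z^\top \gamma)},
    \]

so that \(S_{\mathrm{pop}}(t \mid x, z) \to \pi(z)\) as \(t \to \infty\): \(\pi(z)\) is the surviving fraction and \(S_u\) is the survival function of the susceptible individuals, here taken to be generalized log-gamma.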

3.
Foxhound training enclosures are facilities where wild-trapped foxes are placed into large fenced areas for dog training purposes. Although the purpose of these facilities is to train dogs without harming foxes, dog-related mortality has been reported to be an issue in some enclosures. Using data from a fox enclosure in Virginia, we investigate factors that influence fox survival in these dog training facilities and propose a set of policies to improve fox survival. In particular, a Bayesian hierarchical model is formulated to compute fox survival probabilities based on a fox's time in the enclosure and the number of dogs allowed in the enclosure at one time. These calculations are complicated by missing information on the number of dogs in the enclosure for many days during the study. We elicit expert knowledge for a prior on the number of dogs to account for the uncertainty in the missing data. Reversible jump Markov chain Monte Carlo is used for model selection in the presence of missing covariates. We then use our model to examine possible changes to foxhound training enclosure policy and what effect those changes may have on fox survival.

4.
Variable screening for censored survival data is most challenging when both survival and censoring times are correlated with an ultrahigh-dimensional vector of covariates. Existing approaches to handling censoring often make use of inverse probability weighting by assuming that censoring is independent of both the survival time and the covariates. This is a convenient but rather restrictive assumption which may not hold in real applications, especially when the censoring mechanism is complex and the number of covariates is large. To accommodate heterogeneous (covariate-dependent) censoring that is often present in high-dimensional survival data, we propose a Gehan-type rank screening method to select features that are relevant to the survival time. The method is invariant to monotone transformations of the response and of the predictors, and works robustly for a general class of survival models. We establish the sure screening property of the proposed methodology. Simulation studies and a lymphoma data analysis demonstrate its favorable performance and practical utility.
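A Gehan-type marginal screening utility can be sketched as the absolute value of a Gehan estimating function evaluated covariate by covariate at zero. The implementation and threshold rule below are illustrative assumptions, not the authors' exact statistic.

    import numpy as np

    def gehan_screen(X, time, delta):
        """|Gehan-type estimating function| per covariate, evaluated at beta = 0.

        X: (n, p) covariates, time: (n,) observed times, delta: (n,) event indicators (0/1).
        """
        n = len(time)
        # comparable[i, k] = 1 if subject i had an event and subject k was still at risk at Y_i
        comparable = delta[:, None] * (time[None, :] >= time[:, None]).astype(float)
        # sum over pairs of c_ik * (X_i - X_k), one column per covariate
        pair_sum = comparable.sum(axis=1)[:, None] * X - comparable @ X
        return np.abs(pair_sum.sum(axis=0)) / n**2

    # keep the covariates with the largest utilities; the n/log(n) cutoff is a common
    # but arbitrary choice, not necessarily the one used in the paper
    # utilities = gehan_screen(X, time, delta)
    # selected = np.argsort(utilities)[::-1][: int(len(time) / np.log(len(time)))]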

5.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type-I error rate due to sample size re-estimation, the method for calculating the sample size at an interim analysis should be chosen carefully, because the data in each stage are mutually dependent in trials with survival data. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and an actual clinical trial used as an example. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is also presented.
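For orientation, the number of events needed for a two-arm log-rank comparison is often taken from Schoenfeld's approximation, into which an interim hazard-ratio estimate can be plugged to recompute the target. The sketch below assumes equal allocation and a two-sided alpha; it illustrates the general formula, not the paper's specific re-estimation procedure.

    import math
    from scipy.stats import norm

    def required_events(hazard_ratio, alpha=0.05, power=0.8, alloc=0.5):
        """Schoenfeld's approximate number of events for a two-arm log-rank test."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(z**2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2))

    # e.g. compare the target under the planned hazard ratio 0.70 with a re-estimate at 0.75
    # print(required_events(0.70), required_events(0.75))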

6.
For many years, detection of clusters has been of great public health interest and widely studied. Several methods have been developed to detect clusters, and their performance has been evaluated in various contexts. Spatial scan statistics are widely used for geographical cluster detection and inference. Different types of discrete or continuous data can be analyzed using spatial scan statistics for Bernoulli, Poisson, ordinal, exponential, and normal models. In this paper, we propose a scan statistic for survival data based on a generalized life distribution model that includes three important life distributions, namely the Weibull, exponential, and Rayleigh. The proposed method is applied to the survival data of tuberculosis patients in Nainital district of Uttarakhand, India, for the year 2004–05. Monte Carlo simulation studies show that the proposed method performs well for different survival distributions.
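To make the scan idea concrete, the sketch below computes a likelihood-ratio scan value for one candidate zone under the exponential special case, using the censored-exponential likelihood; the zone construction and the exponential choice are illustrative assumptions, and the generalized life distribution in the paper subsumes this case.

    import numpy as np

    def exp_loglik(times, events):
        """Censored-exponential log-likelihood evaluated at its MLE (mean = total time / events)."""
        d, t = events.sum(), times.sum()
        return -np.inf if d == 0 else -d * np.log(t / d) - d

    def scan_llr(times, events, in_zone):
        """Likelihood-ratio scan value for one candidate zone given as a boolean mask."""
        full = exp_loglik(times, events)
        inside = exp_loglik(times[in_zone], events[in_zone])
        outside = exp_loglik(times[~in_zone], events[~in_zone])
        return inside + outside - full

    # the zone maximizing scan_llr over all candidate zones is the most likely cluster;
    # its significance is assessed by Monte Carlo replication, as in the paper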

7.
This article considers nonparametric comparison of survival functions, one of the most commonly required tasks in survival studies. Several test procedures have been proposed for interval-censored failure time data in which the distributions of the censoring intervals are identical across treatment groups. Sometimes these distributions depend on the treatments and are therefore not the same. A class of test statistics is proposed for situations where the distributions may differ across treatment groups. The asymptotic normality of the test statistics is established, and the test procedure is evaluated by simulations, which suggest that it works well in practical situations. An illustrative example is provided.

8.
This article extends the spatial panel data regression with fixed effects to the case where the regression function is partially linear and some regressors may be endogenous or predetermined. Under the assumption that the spatial weighting matrix is strictly exogenous, we propose a sieve two-stage least squares (S2SLS) regression. Under some sufficient conditions, we show that the proposed estimator for the finite-dimensional parameter is root-N consistent and asymptotically normally distributed, and that the proposed estimator for the unknown function is consistent and also asymptotically normally distributed, but at a rate slower than root-N. Consistent estimators for the asymptotic variances of the proposed estimators are provided. A small-scale simulation study is conducted, and the results show that the proposed procedure has good finite-sample performance.

9.
Frailty models are often used to model heterogeneity in survival analysis. The distribution of the frailty is generally assumed to be continuous, but in some circumstances it is appropriate to consider discrete frailty distributions. Zero frailty can be interpreted as immunity, so population heterogeneity and immune fractions may be analysed using discrete frailty models. In this paper, survival functions are derived for frailty models based on the discrete compound Poisson process. Maximum likelihood estimation procedures for the parameters are studied. We examine the fit of the models to earthquake and traffic accident data sets from Turkey.
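The link between a frailty with positive mass at zero and immunity follows from the usual frailty identity, stated here in generic notation; the paper's discrete compound Poisson process is one concrete choice of the frailty Z:

    \[
    S(t \mid x) = E\!\left[\exp\{-Z\, H_0(t)\, e^{x^\top\beta}\}\right]
                = \mathcal{L}_Z\!\left(H_0(t)\, e^{x^\top\beta}\right),
    \]

where \(\mathcal{L}_Z\) is the Laplace transform of the frailty and \(H_0\) the cumulative baseline hazard. If \(P(Z=0) > 0\), then \(S(t \mid x) \to P(Z=0)\) as \(H_0(t) \to \infty\), so the zero-frailty mass is exactly the immune proportion.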

10.
The raw materials used in the manufacture of cement consist mainly of lime, silica, alumina, and iron oxide. Spatial evaluation of these main chemical constituents of cement is of crucial importance for efficient production. Because these components come from raw materials such as limestone and marl, the spatial relationships in a calcareous marl stone pit were taken into consideration. In practice, spatial field data taken from a cement quarry may include variation and trends. Median polish kriging was used to model and remove the spatial trend in a cement raw material quarry and to provide unbiased estimates. Using the variation in the data itself, approximations and interpolations were carried out. The method provided outlier-resistant estimation of the spatial trend without requiring an external explanatory variable. In addition, it produced effective estimates and additional information for analysing spatially non-stationary data.
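Median polish itself is a simple, resistant decomposition of a gridded surface into overall, row, and column effects plus residuals; in median polish kriging it is those residuals that are subsequently kriged. A minimal sketch, in which the grid layout, tolerance, and iteration cap are assumptions:

    import numpy as np

    def median_polish(grid, max_iter=10, tol=1e-6):
        """Tukey's median polish: grid = overall + row effects + column effects + residuals."""
        resid = np.array(grid, dtype=float)
        overall = 0.0
        row, col = np.zeros(resid.shape[0]), np.zeros(resid.shape[1])
        for _ in range(max_iter):
            rmed = np.median(resid, axis=1)        # sweep row medians into row effects
            row += rmed
            resid -= rmed[:, None]
            shift = np.median(row)                 # keep row effects centred
            overall += shift
            row -= shift

            cmed = np.median(resid, axis=0)        # sweep column medians into column effects
            col += cmed
            resid -= cmed[None, :]
            shift = np.median(col)
            overall += shift
            col -= shift
            if max(np.abs(rmed).max(), np.abs(cmed).max()) < tol:
                break
        return overall, row, col, resid

    # `resid` is the detrended, outlier-resistant surface that would be kriged in this approach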

11.
Positron emission tomography (PET) imaging can be used to study the effects of pharmacologic intervention on brain function. Partial least squares (PLS) regression is a standard tool that can be applied to characterize such effects throughout the brain volume and across time. We have extended the PLS regression methodology to adjust for covariate effects that may influence spatial and temporal aspects of the functional image data over the brain volume. The extension involves multi-dimensional latent variables, experimental design variables based upon sequential PET scanning, and covariates. An illustration is provided using a sequential PET data set acquired to study the effect of d-amphetamine on cerebral blood flow in baboons. An iterative algorithm is developed and implemented, and validation results are provided through computer simulation studies.

12.
In survival analysis, treatment effects are commonly evaluated through survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one approach to adjusting for confounding between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare group differences of IPTW Kaplan–Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. The hazard ratio, as a causal treatment effect, can also be estimated using the IPTW approach. If the treatments correspond to ordered levels, the proposed method can easily be extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results suggest that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD.
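To make the weighting step concrete, the sketch below computes an inverse-probability-of-treatment weighted Kaplan–Meier curve for one treatment group from generic weights; it illustrates the IPTW idea rather than the authors' exact formula, and the variable names are assumptions.

    import numpy as np

    def iptw_kaplan_meier(time, event, weight):
        """Weighted Kaplan-Meier curve; weight_i = 1 / P(received own treatment | covariates)."""
        order = np.argsort(time)
        time, event, weight = time[order], event[order], weight[order]
        surv, s = [], 1.0
        for t in np.unique(time[event == 1]):
            at_risk = weight[time >= t].sum()                   # weighted number still at risk
            deaths = weight[(time == t) & (event == 1)].sum()   # weighted number of events at t
            s *= 1.0 - deaths / at_risk
            surv.append((t, s))
        return np.array(surv)

    # for multi-valued treatments, the weight is one over the generalized propensity score of
    # the subject's own treatment level, and one weighted curve is drawn per level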

13.
Cost assessment is an essential part of the economic evaluation of medical interventions. In many studies, costs as well as survival times are censored, and standard survival analysis techniques are often invalid for censored costs due to the induced dependent censoring problem. Because many cost data are highly skewed, it is desirable to estimate median costs, which can be obtained from an estimated survival function of costs. We propose a kernel-based survival estimator for costs that is monotone, consistent, and more efficient than several existing estimators. We conduct numerical studies to examine the finite-sample performance of the proposed estimator.

14.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data are a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data are censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required for calculating importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inference when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or when the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
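The core step is to replace the uniform resampling probabilities 1/n with exponentially tilted probabilities driven by each record's influence value, and to carry importance weights that undo the tilting when the confidence interval is formed. A minimal sketch of that step, taking the influence values and tilting constant as given inputs rather than as the paper's specific choices:

    import numpy as np

    def tilted_resample(influence, n_boot, lam, rng=None):
        """Draw bootstrap index sets with probabilities exponentially tilted by influence values."""
        rng = rng or np.random.default_rng()
        p = np.exp(lam * influence)
        p /= p.sum()                          # tilted resampling probabilities, not uniform 1/n
        n = len(influence)
        idx = rng.choice(n, size=(n_boot, n), replace=True, p=p)
        # log importance weight of each resample: sum of log[(1/n) / p_i] over its indices
        log_w = -np.log(n * p[idx]).sum(axis=1)
        return idx, log_w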

15.
In this paper we propose a quantile survival model to analyze censored data. This approach provides a very effective way to construct a proper model for the survival time conditional on covariates. Once a quantile survival model for the censored data is established, the survival density, survival function, or hazard function of the survival time can be obtained easily. For illustration, we focus on a model based on the generalized lambda distribution (GLD). The GLD and many other quantile-function models are defined only through their quantile functions; no closed-form expressions are available for the other equivalent functions. We also develop a Bayesian Markov chain Monte Carlo (MCMC) method for parameter estimation, and extensive simulation studies have been conducted. Both the simulation study and the application results show that the proposed quantile survival models can be very useful in practice.
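For concreteness, one common GLD parameterization is defined purely through its quantile function, and the density needed for an uncensored observation's likelihood contribution is the reciprocal of the quantile derivative. The sketch below uses the Ramberg–Schmeiser form, which may differ from the exact parameterization in the paper.

    import numpy as np

    def gld_quantile(u, lam1, lam2, lam3, lam4):
        """Ramberg-Schmeiser GLD quantile function Q(u) for u in (0, 1)."""
        return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

    def gld_density_at_quantile(u, lam1, lam2, lam3, lam4):
        """f(Q(u)) = 1 / Q'(u); used for uncensored observations in the likelihood."""
        dq = (lam3 * u**(lam3 - 1.0) + lam4 * (1.0 - u)**(lam4 - 1.0)) / lam2
        return 1.0 / dq

    # a right-censored time c with F(c) = u contributes the survival value 1 - u, so the
    # likelihood only needs u = F(c), obtained by numerically inverting the quantile function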

16.
Modelling count data with overdispersion and spatial effects
In this paper we consider regression models for count data allowing for overdispersion in a Bayesian framework. We account for unobserved heterogeneity in the data in two ways. On the one hand, we consider models more flexible than the common Poisson model, allowing for overdispersion in different ways. In particular, the negative binomial and the generalized Poisson (GP) distribution are addressed, where overdispersion is modelled by an additional model parameter. Further, zero-inflated models, in which overdispersion is assumed to be caused by an excessive number of zeros, are discussed. On the other hand, extra spatial variability in the data is taken into account by adding correlated spatial random effects to the models. This approach allows for an underlying spatial dependency structure which is modelled using a conditional autoregressive prior based on Pettitt et al. (Stat Comput 12(4):353–367, 2002). In an application, the presented models are used to analyse the number of invasive meningococcal disease cases in Germany in the year 2004. Models are compared according to the deviance information criterion (DIC) suggested by Spiegelhalter et al. (J R Stat Soc B 64(4):583–640, 2002) and using proper scoring rules, see for example Gneiting and Raftery (Technical Report no. 463, University of Washington, 2004). We observe a rather high degree of overdispersion in the data, which is captured best by the GP model when spatial effects are neglected. While the addition of spatial effects to the models allowing for overdispersion gives no or only little improvement, spatial Poisson models with spatially correlated or uncorrelated random effects are preferred over all other models according to the considered criteria.
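The two overdispersion mechanisms can be summarized by their mean–variance relationships, in standard parameterizations that may differ notationally from the paper:

    \[
    \text{NB: } \operatorname{Var}(Y) = \mu + \alpha\mu^2,
    \qquad
    \text{GP: } E(Y) = \frac{\theta}{1-\lambda}, \;
    \operatorname{Var}(Y) = \frac{\theta}{(1-\lambda)^3} = \frac{E(Y)}{(1-\lambda)^2},
    \]

so both reduce to the Poisson when the extra parameter is zero (\(\alpha = 0\) or \(\lambda = 0\)), and overdispersion is carried entirely by that parameter.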

17.
Survival models for continuous-time data remain the dominant methods of survival analysis. However, when the survival data are discrete, treating them as continuous leads to incorrect results and interpretations. The discrete-time survival model has several advantages in applications: it can handle non-proportional hazards, time-varying covariates, and tied observations. It has drawbacks, however, concerning the reconstruction of the survival data and working with big data sets. Actuaries often rely on complex and large data sets, yet they must be quick and efficient in short-period analyses. Using the full data set creates inefficient processes and consumes time, so sampling design becomes increasingly important for obtaining reliable results. In this study, we incorporate sampling methods into the discrete-time survival model using a real data set on motor insurance. To assess the efficiency of the proposed methodology, we conducted a simulation study.
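The data reconstruction referred to here is the person-period expansion: each subject contributes one row per discrete period at risk, with a binary outcome marking the period of the event, after which an ordinary logistic regression can be fitted. A minimal sketch, in which the column names are assumptions:

    import pandas as pd

    def person_period(df, time_col="duration", event_col="event"):
        """Expand one row per subject into one row per discrete period at risk."""
        rows = []
        for _, r in df.iterrows():
            for t in range(1, int(r[time_col]) + 1):
                out = r.to_dict()
                out["period"] = t
                # outcome is 1 only in the final period, and only if the event occurred
                out["y"] = int(t == int(r[time_col]) and int(r[event_col]) == 1)
                rows.append(out)
        return pd.DataFrame(rows)

    # sampling, as discussed in the entry, amounts to keeping all event rows and only a
    # fraction of the non-event rows before fitting the logistic model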

18.
Prostate cancer is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. Large geographical variation and racial disparities exist in prostate cancer survival rates. Much work on spatial survival models is based on the proportional hazards model, but few studies have focused on the accelerated failure time model. In this paper, we investigate the prostate cancer data of Louisiana from the SEER program; violation of the proportional hazards assumption suggests that a spatial survival model based on the accelerated failure time model is more appropriate for this data set. To account for possible extra-variation, we consider spatially referenced independent or dependent spatial structures. The deviance information criterion (DIC) is used to select the best-fitting model within the Bayesian framework. The results of our study indicate that age, race, stage, and geographical distribution are significant in evaluating prostate cancer survival.

19.
A family of partial likelihood logistic models is proposed for clustered survival data that are reported in discrete time and that may be censored. The possible dependence of individual survival times within clusters is modeled, while distinct clusters are assumed to be independent. Two types of clusters are considered. First, all clusters have the same size and are identically distributed. Second, the clusters may vary in size. In both cases our asymptotic results apply to a large number of small independent clusters.

20.
This article discusses regression analysis of multivariate current status failure time data for which the observation time may be related to the underlying survival time. A local partial likelihood technique is used to estimate the varying-coefficient covariate effect functions under the additive hazards frailty model. The asymptotic properties of the proposed estimators are established. An extensive simulation study is conducted to evaluate the proposed procedure, and the results indicate that the proposed method works well in practice. A real data study is also provided to illustrate the performance of the proposed method.
