Similar articles
20 similar articles found (search time: 46 ms)
1.
An individual measure of relative survival   (total citations: 2; self-citations: 0; citations by others: 2)
Summary.  Relative survival techniques are used to compare survival experience in a study cohort with that expected if background population rates apply. The techniques are especially useful when cause-specific death information is not accurate or not available as they provide a measure of excess mortality in a group of patients with a certain disease. Whereas these methods are based on group comparisons, we present here a transformation approach which instead gives for each individual an outcome measure relative to the appropriate background population. The new outcome measure is easily interpreted and can be analysed by using standard survival analysis techniques. It provides additional information on relative survival and gives new options in regression analysis. For example, one can estimate the proportion of patients who survived longer than a given percentile of the respective general population or compare survival experience of individuals while accounting for the population differences. The regression models for the new outcome measure are different from existing models, thus providing new possibilities in analysing relative survival data. One distinctive feature of our approach is that we adjust for expected survival before modelling. The paper is motivated by a study into the survival of patients after acute myocardial infarction.
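The transformation can be sketched for a simple special case. Assuming (purely for illustration, not the paper's actual population tables) an exponential background population with a known hazard rate, each patient's observed time maps to the probability that a comparable member of the general population would outlive it; the rate and times below are invented:

```python
import numpy as np

# Assumed setting: exponential background population with known hazard.
# Each observed time t_i is transformed to S_pop(t_i), the probability
# that a comparable member of the background population outlives t_i.
pop_rate = 0.05                            # assumed background hazard per year
times = np.array([2.0, 5.0, 10.0, 20.0])   # observed survival times in years

def pop_survival(t):
    """Background-population survival function S_pop(t) = exp(-rate * t)."""
    return np.exp(-pop_rate * t)

u = pop_survival(times)                    # individual relative-survival outcome
pop_median = np.log(2.0) / pop_rate        # population median survival (~13.9 y)

# e.g. the proportion of patients who outlived the population median
prop_beyond_median = float(np.mean(times > pop_median))
```

The transformed outcome `u` lives on (0, 1] for each individual and can itself be analysed with standard survival tools, which is the point of the transformation approach.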

2.
The tutorial is concerned with two types of test for the general lack of fit of a linear regression model, as found in the Minitab software package, and which do not require replicated observations. They aim to identify non-specified curvature and interaction in predictors, by comparing fits over the predictor region divided into two parts. Minitab's regression subcommand XLOF, which gives the tests, is only briefly documented in the manual and, unlike virtually all other statistical procedures in the software, it is not standard and cannot be readily found in textbooks. The two types of test are described here; they concern the predictors one-at-a-time and the predictors all-at-once. An example of their use is given. A suite of macros is available which reproduces the results of the XLOF tests in much more detail than is given by the XLOF subcommand.
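The split-region idea can be illustrated outside Minitab. This sketch (an analogous test, not the XLOF implementation itself) splits the data at the median of the predictor, fits a separate line in each half, and compares the pooled two-line fit with the single-line fit via an F-ratio; large values flag unmodelled curvature. All data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 1.0, x.size)  # hidden curvature

def rss(X, resp):
    """Residual sum of squares of a least-squares fit."""
    beta = np.linalg.lstsq(X, resp, rcond=None)[0]
    return float(np.sum((resp - X @ beta) ** 2))

X = np.column_stack([np.ones_like(x), x])
rss_one_line = rss(X, y)

lo = x <= np.median(x)                     # divide the predictor region in two
rss_two_lines = rss(X[lo], y[lo]) + rss(X[~lo], y[~lo])

# the split fit spends 2 extra parameters; n - 4 error df remain
F = ((rss_one_line - rss_two_lines) / 2) / (rss_two_lines / (x.size - 4))
```

With the quadratic term present, the two half-region lines fit far better than one line and the F-ratio is large; for truly linear data it hovers around 1.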

3.
Two useful statistical methods for generating a latent variable are described and extended to incorporate polytomous data and additional covariates. Item response analysis is not well-known outside its area of application, mainly because the procedures to fit the models are computer intensive and not routinely available within general statistical software packages. The linear score technique is less computer intensive, straightforward to implement and has been proposed as a good approximation to item response analysis. Both methods have been implemented in the standard statistical software package GLIM 4.0, and are compared to determine their effectiveness.

4.
Using logarithmic and integral transformations, we transform a continuous covariate frailty model into a polynomial regression model with a random effect. The responses of this mixed model can be ‘estimated’ via conditional hazard function estimation. The random error in this model does not have zero mean and its variance is not constant along the covariate; consequently, these two quantities have to be estimated. Since the asymptotic expression for the bias is complicated, the two-large-bandwidth trick is proposed to estimate the bias. The proposed transformation is very useful for clustered incomplete data subject to left truncation and right censoring (and for complex clustered data in general). Indeed, in this case no standard software is available to fit the frailty model, whereas for the transformed model standard software for mixed models can be used to estimate the unknown parameters in the original frailty model. A small simulation study illustrates the good behavior of the proposed method. This method is applied to a bladder cancer data set.

5.
Current survival techniques do not provide a good method for handling clinical trials with a large percentage of censored observations. This research proposes using time-dependent surrogates of survival as outcome variables, in conjunction with observed survival time, to improve the precision in comparing the relative effects of two treatments on the distribution of survival time. This is in contrast to the standard method used today, which uses the marginal density of survival time, T, only, or the marginal density of a surrogate, X, only, thereby ignoring some available information. The surrogate measure, X, may be a fixed value or a time-dependent variable, X(t). X is a summary measure of some of the covariates measured throughout the trial that provide additional information on a subject's survival time. It is possible to model these time-dependent covariate values and relate the parameters in the model to the parameters in the distribution of T given X. The result is that three new models are available for the analysis of clinical trials. All three models use the joint density of survival time and a surrogate measure. Given one of three different assumed mechanisms of the potential treatment effect, each of the three methods improves the precision of the treatment estimate.

6.
For clinical trials with interim analyses, methodologies and software exist to calculate boundaries for comparing binomial, normal, and survival data from two treatment groups. Jennison & Turnbull (1991) extended the Pocock (1977) and O'Brien–Fleming (1979) boundaries to t-tests, chi-squared tests and F-tests for comparing normal data from several treatment groups. This paper demonstrates that the above boundaries can be applied to a wide variety of test statistics based on general parametric settings. We show that asymptotically the chi-squared boundaries, as well as the corresponding nominal significance levels calculated by Jennison & Turnbull, can be applied to interim analyses based on the score test, the Wald test, and the likelihood ratio test for general parametric models. Based on the results of this paper, currently available software in group sequential testing can be used to calculate the nominal significance levels (or boundaries) for group sequential testing based on logistic regression, ANOVA, and other parametric methods.

7.
8.
A method is given for generating different data sets for homework exercises in simple regression, such that not only are the estimated slopes, intercepts and correlations the same over all data sets, but so also are the analysis of variance tables, with error mean square that is a perfect square. An example is given. A computer program REGDATA is available for generating data sets of this nature.
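REGDATA itself is not reproduced here, but one construction that achieves the same goal can be sketched: with a fixed design, any residual vector orthogonal to the design columns and of fixed length yields identical slope, intercept, correlation, and ANOVA table. A minimal sketch, with targets chosen so the error mean square 90/(12 − 2) = 9 is a perfect square:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(1.0, 13.0)                  # fixed design, n = 12
X = np.column_stack([np.ones_like(x), x])
b0, b1, sse = 3.0, 2.0, 90.0              # target intercept, slope, error SS

def make_dataset():
    # project a random vector onto the orthogonal complement of col(X),
    # then rescale so the residual sum of squares is exactly `sse`
    z = rng.normal(size=x.size)
    e = z - X @ np.linalg.lstsq(X, z, rcond=None)[0]
    e *= np.sqrt(sse) / np.linalg.norm(e)
    return b0 + b1 * x + e

y1, y2 = make_dataset(), make_dataset()   # different data, identical summaries
```

Because every residual vector is orthogonal to the design and has the same length, each generated response reproduces intercept 3, slope 2 and SSE 90 exactly, and hence the same correlation and ANOVA table.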

9.
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models, which includes the proportional hazards model and the proportional odds model as special cases, is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances of the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.
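The conditional-likelihood idea is easiest to see in a fully parametric special case rather than the paper's semiparametric class. With exponential survival (an illustrative assumption), conditioning on survival past the entry time reduces the conditional MLE to events divided by post-entry exposure; all rates and names below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, n = 0.5, 4000
entry = rng.uniform(0.0, 2.0, n)          # left-truncation (delayed entry) times
t = rng.exponential(1.0 / lam, n)         # latent survival times
keep = t > entry                          # shorter-lived subjects never sampled
a, t = entry[keep], t[keep]
c = a + rng.exponential(4.0, a.size)      # independent right-censoring after entry
time = np.minimum(t, c)
delta = (t <= c).astype(float)

# conditional likelihood  prod f(t)^delta * S(t)^(1-delta) / S(a);
# for the exponential model this maximizes at events / post-entry exposure
lam_naive = delta.sum() / time.sum()      # ignores truncation: exposure inflated,
                                          # hazard understated
lam_cond = delta.sum() / (time - a).sum() # conditional MLE, approx 0.5
```

The naive estimator counts pre-entry person-time the subject was never actually at risk of being observed for, so it is biased toward a smaller hazard; the conditional MLE recovers the true rate.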

10.
Survival studies often generate not only a survival time for each patient but also a sequence of health measurements at annual or semi-annual check-ups while the patient remains alive. Such a sequence of random length accompanied by a survival time is called a survival process. Robust health is ordinarily associated with longer survival, so the two parts of a survival process cannot be assumed independent. This paper is concerned with a general technique—reverse alignment—for constructing statistical models for survival processes, here termed revival models. A revival model is a regression model in the sense that it incorporates covariate and treatment effects into both the distribution of survival times and the joint distribution of health outcomes. The revival model also determines a conditional survival distribution given the observed history, which describes how the subsequent survival distribution is determined by the observed progression of health outcomes.

11.

12.
Gu MG, Sun L, Zuo G. Lifetime Data Analysis 2005, 11(4): 473-488
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

13.
In this paper we investigate the asymptotic properties of estimators obtained for the semiparametric additive accelerated life model proposed by Bagdonavicius & Nikulin (1995). This model generalizes the well known additive hazards model of survival analysis and is close to the general transformation model (see Dabrowska & Doksum, 1988). Asymptotic properties of the estimator of the regression parameter and the estimator of the reliability function are given in the case of right censoring for discretized data and a numerical example illustrates these results.

14.
We propose a class of general partially linear additive transformation models (GPLATM) for right-censored survival data. The class of models is flexible enough to cover many commonly used parametric and nonparametric survival analysis models as special cases. Based on the B-spline interpolation technique, we estimate the unknown regression parameters and functions by the maximum marginal likelihood estimation method. One important feature of the estimation procedure is that it does not need the baseline and censoring cumulative distribution functions. Some numerical studies illustrate that this procedure can work very well for moderate sample sizes.

15.
This note discusses a problem that might occur when forward stepwise regression is used for variable selection and among the candidate variables is a categorical variable with more than two categories. Most software packages (such as SAS, SPSSx, BMDP) include special programs for performing stepwise regression. The user of these programs has to code categorical variables with dummy variables. In this case the forward selection might wrongly indicate that a categorical variable with more than two categories is nonsignificant. This is a disadvantage of the forward selection compared with the backward elimination method. A way to avoid the problem would be to test in a single step all dummy variables corresponding to the same categorical variable rather than one dummy variable at a time, such as in the analysis of covariance. This option, however, is not available in forward stepwise procedures, except for stepwise logistic regression in BMDP. A practical possibility is to repeat the forward stepwise regression and change the reference categories each time.
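The recommended single-step test can be sketched as a partial F-test on all dummy columns of one categorical variable at once, alongside the one-at-a-time statistics a forward step would compute. The three-level factor and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 90
group = np.repeat([0, 1, 2], n // 3)       # categorical predictor, 3 levels
d1 = (group == 1).astype(float)            # dummy coding, level 0 = reference
d2 = (group == 2).astype(float)
y = 1.0 + 0.8 * d1 - 0.8 * d2 + rng.normal(0.0, 1.0, n)

def rss(X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ beta) ** 2))

ones = np.ones((n, 1))
rss0 = rss(ones)                           # intercept-only model
rss1 = rss(np.column_stack([ones, d1]))    # one dummy at a time, as in a
rss2 = rss(np.column_stack([ones, d2]))    # forward step
rss12 = rss(np.column_stack([ones, d1, d2]))

# single-step test of the whole categorical variable (2 numerator df),
# analogous to the analysis-of-covariance treatment of a factor
F_joint = ((rss0 - rss12) / 2) / (rss12 / (n - 3))
F_d1 = (rss0 - rss1) / (rss1 / (n - 2))
F_d2 = (rss0 - rss2) / (rss2 / (n - 2))
```

The joint statistic judges the factor as a unit; the individual statistics depend on which level serves as reference, which is exactly why the note suggests repeating the forward run with different reference categories.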

16.
In this paper we illustrate the properties of the epsilon-skew-normal (ESN) distribution with respect to developing more flexible regression models. The ESN model is a simple one-parameter extension of the standard normal model. The additional parameter ε corresponds to the degree of skewness in the model. In the fitting process we take advantage of relatively new powerful routines that are now available in standard software packages such as SAS. It is illustrated that even if the true underlying error distribution is exactly normal there is no practical loss in power with respect to testing for non-zero regression coefficients. If the true underlying error distribution is slightly skewed, the ESN model is superior in terms of statistical power for tests about the regression coefficient. This model has good asymptotic properties for samples of size n > 50.
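The ESN density is simple to write down (Mudholkar–Hutson form): below the mode the standard normal is stretched by 1 + ε and above it compressed by 1 − ε, so the two half-masses are (1 + ε)/2 and (1 − ε)/2 and the total stays 1 for |ε| < 1. A small numerical check of that normalization:

```python
import numpy as np

def esn_pdf(x, eps):
    """Epsilon-skew-normal density: N(0,1) rescaled by (1 + eps) left of
    the mode and by (1 - eps) right of it; eps = 0 recovers N(0, 1)."""
    scale = np.where(x < 0.0, 1.0 + eps, 1.0 - eps)
    return np.exp(-0.5 * (x / scale) ** 2) / np.sqrt(2.0 * np.pi)

# numerical integration over a wide grid: total mass is 1 for each eps
grid = np.linspace(-12.0, 12.0, 200001)
dx = grid[1] - grid[0]
areas = {eps: float(np.sum(esn_pdf(grid, eps)) * dx) for eps in (0.0, 0.3, -0.6)}
```

Positive ε shifts mass to the left of the mode (the left half-mass is (1 + ε)/2), giving the one-parameter skewness the abstract describes.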

17.
This paper deals with a general class of transformation models that contains many important semiparametric regression models as special cases. It develops a self-induced smoothing for the maximum rank correlation estimator, resulting in simultaneous point and variance estimation. The self-induced smoothing does not require bandwidth selection, yet provides the right amount of smoothness so that the estimator is asymptotically normal with mean zero (unbiased) and variance–covariance matrix consistently estimated by the usual sandwich-type estimator. An iterative algorithm is given for the variance estimation and shown to numerically converge to a consistent limiting variance estimator. The approach is applied to a data set involving survival times of primary biliary cirrhosis patients. Simulation results are reported, showing that the new method performs well under a variety of scenarios.
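The maximum rank correlation estimator that the smoothing is built on can be sketched directly (the self-induced smoothing step itself is omitted). It maximizes the fraction of pairs on which the outcome order agrees with the order of the linear index; since β is identified only up to scale, a two-covariate example can search over directions on the unit circle. Data and coefficients below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 2))
beta_true = np.array([1.0, 2.0])
# MRC uses only ranks, so any increasing transform of the index-plus-error
# (here exp) leaves the estimator unchanged
y = np.exp(X @ beta_true + rng.normal(0.0, 0.5, n))

def rank_corr(beta):
    """Fraction of ordered pairs concordant in both y and the index."""
    idx = X @ beta
    gt_y = y[:, None] > y[None, :]
    gt_i = idx[:, None] > idx[None, :]
    return float(np.mean(gt_y & gt_i))

# beta identified up to scale: grid-search directions (cos t, sin t)
thetas = np.linspace(0.0, np.pi, 361)
scores = [rank_corr(np.array([np.cos(t), np.sin(t)])) for t in thetas]
best = float(thetas[int(np.argmax(scores))])
beta_hat = np.array([np.cos(best), np.sin(best)])
```

The discontinuous objective is what motivates smoothing in the paper: the grid search above gives a point estimate but no usable derivative-based variance, which the self-induced smoothing supplies.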

18.
In this paper we develop nonparametric methods for regression analysis when the response variable is subject to censoring and/or truncation. The development is based on a data completion principle that enables us to apply, via an iterative scheme, nonparametric regression techniques to iteratively completed data from a given sample with censored and/or truncated observations. In particular, locally weighted regression smoothers and additive regression models are extended to left-truncated and right-censored data. Nonparametric regression analysis is applied to the Stanford heart transplant data, which have been analyzed by previous authors using semiparametric regression methods, and provides new insights into the relationship between expected survival time after a heart transplant and explanatory variables.

19.
A new general class of exponentiated sinh Cauchy regression models for location, scale, and shape parameters is introduced and studied. It may be applied to censored data and used more effectively in survival analysis when compared with the usual models. For censored data, we employ a frequentist analysis for the parameters of the proposed model. Further, for different parameter settings, sample sizes, and censoring percentages, various simulations are performed. The extended regression model is very useful for the analysis of real data and could give more adequate fits than other special regression models.

20.
The hazard function describes the instantaneous rate of failure at a time t, given that the individual survives up to t. In applications, the effect of covariates produces changes in the hazard function. When dealing with survival analysis, it is of interest to identify when a change-point in time has occurred. In this work, covariates and censored variables are considered in order to estimate a change-point in the Weibull regression hazard model, which is a generalization of the exponential model. For this more general model, it is possible to obtain maximum likelihood estimators for the change-point and for the parameters involved. A Monte Carlo simulation study shows that it is indeed possible to implement this model in practice. An application with clinical trial data from a treatment of chronic granulomatous disease is also included.
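A minimal version of the change-point idea, using the exponential special case rather than the full Weibull regression model and invented parameters, profiles the likelihood over a grid of candidate change-points; within each segment the hazard MLE is events over exposure, even with right censoring:

```python
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2, tau_true = 0.2, 1.0, 3.0       # hazard jumps at the change-point

# simulate piecewise-exponential event times by inverting S(t)
u = rng.uniform(size=500)
t = np.where(u > np.exp(-lam1 * tau_true),
             -np.log(u) / lam1,
             tau_true + (-np.log(u) - lam1 * tau_true) / lam2)
c = rng.uniform(0.0, 10.0, 500)            # independent censoring times
time = np.minimum(t, c)
event = (t <= c).astype(float)

def profile_loglik(tau):
    """Log-likelihood at tau with segment hazards profiled out
    (MLE = events / exposure in each segment)."""
    e1 = np.minimum(time, tau).sum()       # exposure before tau
    e2 = np.maximum(time - tau, 0.0).sum() # exposure after tau
    d1 = (event * (time <= tau)).sum()     # events before tau
    d2 = (event * (time > tau)).sum()      # events after tau
    ll = 0.0
    for d, e in ((d1, e1), (d2, e2)):
        if d > 0:
            ll += d * np.log(d / e) - d
    return ll

grid = np.linspace(0.5, 8.0, 151)
tau_hat = float(grid[int(np.argmax([profile_loglik(g) for g in grid]))])
```

The profile likelihood peaks sharply near the true change-point; in the Weibull case the same grid-plus-profiling strategy applies with shape parameters estimated numerically in each segment.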


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号