Similar Articles
20 similar articles found.
1.
Variable selection is an effective methodology for dealing with models with numerous covariates. We consider methods of variable selection for the semiparametric Cox proportional hazards model under the progressive Type-II censoring scheme. The Cox proportional hazards model is used to model the influence of the environmental covariates. By applying Breslow's "least information" idea, we obtain a profile likelihood function to estimate the coefficients. Lasso-type penalized profile likelihood estimation as well as stepwise variable selection are explored as means of finding the important covariates. Numerical simulations are conducted, and the Veterans' Administration lung cancer data are used to evaluate the performance of the proposed method.
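As a hedged illustration of the lasso-type penalized estimation described above — a minimal sketch assuming the Python `lifelines` package and ordinary right-censored data, not the paper's progressive Type-II censoring scheme:

```python
# Minimal sketch: lasso-penalized Cox regression for variable selection.
# Synthetic data; only the first two covariates truly affect the hazard.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.8] + [0.0] * (p - 2))
T = rng.exponential(scale=np.exp(-X @ beta))   # PH model, exponential baseline
C = rng.exponential(scale=2.0, size=n)         # independent censoring times

df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
df["time"] = np.minimum(T, C)
df["event"] = (T <= C).astype(int)

# l1_ratio=1.0 makes the penalty pure lasso; weak effects shrink toward zero
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
print(cph.params_.round(3))   # non-zero entries flag the selected covariates
```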

2.
The proportional hazards regression model of Cox (1972) is widely used in analyzing survival data. We examine several goodness-of-fit tests for checking the proportionality of hazards in the Cox model with two-sample censored data, and compare the performance of these tests by a simulation study. The strengths and weaknesses of the tests are pointed out. The effects of the extent of random censoring on the size and power are also examined. Results of a simulation study demonstrate that Gill and Schumacher's test is most powerful against a broad range of monotone departures from the proportional hazards assumption, but it may fail for alternatives with a nonmonotone hazard ratio. For the latter kind of alternatives, Andersen's test may detect patterns of irregular changes in hazards.
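A minimal sketch of checking the proportional hazards assumption on two-sample censored data, assuming the `lifelines` package; its Schoenfeld-residual-based score test is a stand-in for the Gill-Schumacher and Andersen tests discussed above, which are not implemented here:

```python
# Minimal sketch: test proportionality of hazards in a two-sample setting.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)
n = 300
group = rng.integers(0, 2, size=n)
# Different Weibull shapes => the hazard ratio varies over time (non-PH)
T = np.where(group == 1,
             rng.weibull(0.7, size=n),
             rng.weibull(1.5, size=n))
C = rng.exponential(scale=2.0, size=n)
df = pd.DataFrame({"time": np.minimum(T, C),
                   "event": (T <= C).astype(int),
                   "group": group})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()   # a small p-value casts doubt on proportionality
```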

3.
In this paper, we develop Bayesian methodology and computational algorithms for variable subset selection in Cox proportional hazards models with missing covariate data. A new joint semi-conjugate prior for the piecewise exponential model is proposed in the presence of missing covariates and its properties are examined. The covariates are assumed to be missing at random (MAR). Under this new prior, a version of the Deviance Information Criterion (DIC) is proposed for Bayesian variable subset selection in the presence of missing covariates. Monte Carlo methods are developed for computing the DICs for all possible subset models in the model space. A Bone Marrow Transplant (BMT) dataset is used to illustrate the proposed methodology.

4.
In the analysis of censored survival data, the Cox (1972) proportional hazards model is extremely popular among practitioners. However, in many real-life situations the proportionality of the hazard ratios does not seem to be an appropriate assumption. To overcome such a problem, we consider a class of nonproportional hazards models known as the generalized odds-rate class of regression models. The class is general enough to include several commonly used models, such as the proportional hazards model, the proportional odds model, and the accelerated lifetime model. The theoretical and computational properties of these models are re-examined. The propriety of the posterior is established under some mild conditions. A simulation study is conducted, and a detailed analysis of data from a prostate cancer study is presented to further illustrate the proposed methodology.
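For orientation, one common parametrization of the generalized odds-rate class — an assumption here, since the abstract does not spell out the form — writes the survival function as

```latex
S(t \mid x) = \left[1 + \rho\, e^{x^\top \beta}\, \Lambda_0(t)\right]^{-1/\rho}, \qquad \rho > 0,
```

so that setting the odds-rate parameter to 1 yields the proportional odds model, while letting it tend to 0 recovers the proportional hazards model.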

5.
Traditional credit risk assessment models do not consider the time factor; they predict only whether a customer will default, not when. Such results cannot help a manager make a profit-maximizing decision. In fact, even if a customer defaults, the financial institution can still profit under some conditions. Nowadays, most research applies the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default, in order to solve the credit risk assessment problem. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which involve more advanced data collection. Since short-term defaults are the cases that inflict heavy losses on a financial institution, rather than predicting when a loan will default, a loan manager is more interested in identifying, when approving loan applications, those applications that may default within a short period of time. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish default lending. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to further illustrate the proposed approach. After comparing the results with logistic regression and Cox proportional hazards models, the recall and precision rates of the proposed model were found to be clearly superior.
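A minimal sketch of the bagging-plus-SMOTE idea, assuming the `scikit-learn` and `imbalanced-learn` packages and a synthetic imbalanced dataset in place of the Taiwanese SME loan data; the rare positive class stands in for short-term defaults:

```python
# Minimal sketch: bagged decision trees trained on SMOTE-balanced data.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 5% positive class plays the role of short-term defaults
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training data so the test set stays untouched
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=5),
                        n_estimators=100, random_state=0)
clf.fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))  # recall/precision on defaults
```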

6.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such in order for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.
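A minimal sketch of the two-stage idea — logistic regression for selection probabilities, then an inverse-probability-weighted Cox fit — assuming `lifelines` and `scikit-learn`. The `robust=True` option requests a sandwich variance; it does not implement the authors' correction for the randomness of the estimated weights:

```python
# Minimal sketch: two-stage weighted Cox model for a biased sample.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                              # covariate driving selection
selected = rng.random(n) < 1 / (1 + np.exp(-z))     # biased inclusion indicator
T = rng.exponential(scale=np.exp(-0.5 * z))
C = rng.exponential(scale=2.0, size=n)

# Stage 1: selection probabilities from the representative sample
ps = LogisticRegression().fit(z.reshape(-1, 1), selected)\
                         .predict_proba(z.reshape(-1, 1))[:, 1]

# Stage 2: inverse-probability-weighted Cox fit on the biased subsample
df = pd.DataFrame({"time": np.minimum(T, C), "event": (T <= C).astype(int),
                   "z": z, "w": 1.0 / ps})[selected]
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        weights_col="w", robust=True)
cph.print_summary()
```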

7.
A multicollinearity diagnostic is discussed for parametric models fit to censored data. The models considered include the Weibull, exponential, and lognormal models as well as the Cox proportional hazards model. This diagnostic is an extension of the diagnostic proposed by Belsley, Kuh, and Welsch (1980). The diagnostic is based on the condition indices and variance proportions of the variance-covariance matrix. Its use and properties are studied through a series of examples. The effect of centering variables included in the model is also discussed.
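A minimal sketch of the underlying Belsley-Kuh-Welsch computation — condition indices and variance-decomposition proportions from the singular value decomposition of a column-scaled design matrix — in plain `numpy`; the paper's censored-data extension (applying this machinery to the fitted model's covariance structure) is not reproduced here:

```python
# Minimal sketch: Belsley-Kuh-Welsch collinearity diagnostics.
import numpy as np

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)       # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

Xs = X / np.linalg.norm(X, axis=0)        # scale columns to unit length
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)

cond_idx = s.max() / s                    # condition indices; > 30 suggests trouble
phi = (Vt.T ** 2) / s**2                  # phi[j, k]: covariate j, component k
var_prop = phi / phi.sum(axis=1, keepdims=True)
print(np.round(cond_idx, 1))
print(np.round(var_prop, 2))  # two covariates loading on one large index => collinearity
```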

8.
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

9.
Multivariate failure time data are common in medical research; commonly used statistical models for such correlated failure-time data include frailty and marginal models. Both types of models most often assume proportional hazards (Cox, 1972), but the Cox model may not fit the data well. This article presents a class of linear transformation frailty models that includes, as a special case, the proportional hazards model with frailty. We then propose approximate procedures to derive the best linear unbiased estimates and predictors of the regression parameters and frailties. We apply the proposed methods to analyze results of a clinical trial of different dose levels of didanosine (ddI) among HIV-infected patients who were intolerant of zidovudine (ZDV). These methods yield estimates of treatment effects and of frailties corresponding to patient groups defined by clinical history prior to entry into the trial.

10.
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right censored data. They propose a semi-parametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semi-automatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.

11.
12.
Frailty models can be fit as mixed-effects Poisson models after transforming time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties under misspecified frailty distribution. The log-gamma and Laplace distributions were used as true distributions for frailties on a natural log scale. Factors such as the magnitude of heterogeneity, censoring rate, number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of variance components was observed in the situations of substantial heterogeneity, large event rates, or high data dimensions.
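A minimal sketch of the survival-to-Poisson transformation underlying this approach, assuming `statsmodels` and an exponential hazard: the event indicator is treated as Poisson with offset log(t), so the Poisson GLM reproduces the survival fixed effects. Adding the log-normal frailty would require a mixed Poisson model; only the fixed-effects core of the trick is shown:

```python
# Minimal sketch: fit an exponential hazard model via a Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
T = rng.exponential(scale=np.exp(-0.7 * x))   # hazard = exp(0.7 * x)
C = rng.exponential(scale=2.0, size=n)
t, d = np.minimum(T, C), (T <= C).astype(int)

X = sm.add_constant(x)
# d ~ Poisson(mu), log(mu) = log(t) + X @ beta  <=>  exponential hazard model
fit = sm.GLM(d, X, family=sm.families.Poisson(), offset=np.log(t)).fit()
print(fit.params.round(3))   # slope estimate should land near 0.7
```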

13.
We give chi-squared goodness-of-fit tests for parametric regression models such as accelerated failure time, proportional hazards, generalized proportional hazards, frailty models, transformation models, and models with cross-effects of survival functions. Random right-censored data are used. The choice of random grouping intervals as data functions is considered.

14.
In the analysis of survival times, the logrank test and the Cox model have been established as key tools, which do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switchers or the presence of subgroups with differential treatment effects may challenge the assumption of proportional hazards. In practice, weighted logrank tests emphasizing either early, intermediate or late event times via an appropriate weighting function may be used to accommodate for an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazard. The model is then applied to study the power of unweighted and weighted log-rank tests, as well as maximum tests allowing different time dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared to the most powerful among the considered weighting schemes and huge power gain compared to unfavorable weights. The actual sources of non-proportional hazards are not obvious from resulting populationwise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when assuming non-proportional hazards.We provide the required tools in a software package, allowing to model data generating processes under complex non-proportional hazard scenarios, to simulate data from these models and to perform the weighted logrank tests.  相似文献   
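A minimal sketch of a weighted logrank statistic with Fleming-Harrington G(p, q) weights w(t) = S(t-)^p (1 - S(t-))^q, written out in `numpy` so the weighting is explicit; p > 0 stresses early differences and q > 0 late ones. This illustrates the weighting idea only, not the paper's software package or its maximum tests:

```python
# Minimal sketch: two-sample Fleming-Harrington weighted logrank statistic.
import numpy as np

def weighted_logrank(time, event, group, p=0.0, q=1.0):
    """Return the standardized weighted logrank statistic Z."""
    U, V = 0.0, 0.0          # weighted score and its hypergeometric variance
    S_left = 1.0             # pooled Kaplan-Meier estimate of S(t-)
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n_all = at_risk.sum()
        n_1 = (at_risk & (group == 1)).sum()
        d_all = ((time == t) & (event == 1)).sum()
        d_1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S_left**p * (1.0 - S_left)**q       # early (p>0) / late (q>0) emphasis
        U += w * (d_1 - d_all * n_1 / n_all)
        if n_all > 1:
            V += (w**2 * d_all * (n_1 / n_all) * (1 - n_1 / n_all)
                  * (n_all - d_all) / (n_all - 1))
        S_left *= 1.0 - d_all / n_all           # update pooled KM after time t
    return U / np.sqrt(V)

rng = np.random.default_rng(5)
group = np.repeat([0, 1], 150)
time = np.where(group == 1, rng.exponential(1.3, 300), rng.exponential(1.0, 300))
event = np.ones(300, dtype=int)                 # no censoring in this toy example
print(weighted_logrank(time, event, group, p=0.0, q=1.0))  # late-emphasis G(0,1)
```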

15.
We apply the univariate sliced inverse regression to survival data. Our approach is different from the other papers on this subject. The right-censored observations are taken into account during the slicing of the survival times by assigning each of them with equal weight to all of the slices with longer survival. We test this method with different distributions for the two main survival data models, the accelerated lifetime model and Cox's proportional hazards model. In both cases and under different conditions of sparsity, sample size and dimension of parameters, this non-parametric approach finds the data structure and can be viewed as a variable selector.
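A minimal sketch of basic sliced inverse regression on uncensored responses: slice the response, average the standardized covariates within slices, and take the top eigenvectors of the covariance of the slice means. The paper's handling of censoring (spreading a censored point with equal weight over all later slices) is omitted here for brevity:

```python
# Minimal sketch: sliced inverse regression finding an e.d.r. direction.
import numpy as np

def sir(X, y, n_slices=10, n_dir=1):
    """Return unit-norm effective dimension reduction directions (up to sign)."""
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))    # Sigma = L @ L.T
    Z = np.linalg.solve(L, Xc.T).T                     # standardized covariates
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)                        # slice mean of Z
        M += (len(idx) / len(y)) * np.outer(m, m)      # weighted cov of slice means
    _, vecs = np.linalg.eigh(M)                        # eigenvalues ascending
    B = np.linalg.solve(L.T, vecs[:, ::-1][:, :n_dir]) # map back to X scale
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 6))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])
logT = X @ beta + 0.3 * rng.normal(size=500)           # accelerated-lifetime style
print(sir(X, logT).ravel().round(2))                   # ~ +/-(0.71, -0.71, 0, ...)
```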

16.
Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.

17.
Sensitivity analysis for unmeasured confounding should be reported more often, especially in observational studies. In the standard Cox proportional hazards model, this requires substantial assumptions and can be computationally difficult. The marginal structural Cox proportional hazards model (Cox proportional hazards MSM) with inverse probability weighting has several advantages compared to the standard Cox model, including situations with only one assessment of exposure (point exposure) and time-independent confounders. We describe how simple computations provide sensitivity for unmeasured confounding in a Cox proportional hazards MSM with point exposure. This is achieved by translating the general framework for sensitivity analysis for MSMs by Robins and colleagues to survival time data. Instead of bias-corrected observations, we correct the hazard rate to adjust for a specified amount of unmeasured confounding. As an additional bonus, the Cox proportional hazards MSM is robust against bias from differential loss to follow-up. As an illustration, the Cox proportional hazards MSM was applied in a reanalysis of the association between smoking and depression in a population-based cohort of Norwegian adults. The association was moderately sensitive to unmeasured confounding.
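A schematic sketch, not the paper's method: fit an inverse-probability-weighted (marginal structural) Cox model for a point exposure using stabilized weights, then display how the estimated log hazard ratio would shift under a grid of hypothesized amounts of unmeasured confounding on the log-hazard scale. The bias-factor grid is an illustrative assumption standing in for the authors' hazard-rate correction:

```python
# Schematic sketch: IPW Cox fit plus a crude confounding-sensitivity display.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 800
c = rng.normal(size=n)                                    # measured confounder
a = (rng.random(n) < 1 / (1 + np.exp(-c))).astype(int)    # point exposure
T = rng.exponential(scale=np.exp(-(0.5 * a + 0.5 * c)))
C = rng.exponential(scale=2.0, size=n)

ps = LogisticRegression().fit(c.reshape(-1, 1), a)\
                         .predict_proba(c.reshape(-1, 1))[:, 1]
p_a = a.mean()
sw = np.where(a == 1, p_a / ps, (1 - p_a) / (1 - ps))     # stabilized weights

df = pd.DataFrame({"time": np.minimum(T, C), "event": (T <= C).astype(int),
                   "a": a, "sw": sw})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        weights_col="sw", robust=True)
b = cph.params_["a"]
for bias in [1.0, 1.2, 1.5]:          # hypothesized unmeasured-confounding factor
    print(f"bias factor {bias}: corrected log HR ~ {b - np.log(bias):.3f}")
```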

18.
We discuss a method of weighting the likelihood equations with the aim of obtaining fully efficient and robust estimators. We discuss the case of discrete probability models using several weighting functions. If the weight functions generate increasing residual adjustment functions, then the method provides a link between the maximum likelihood score equations and minimum disparity estimation, as well as a set of diagnostic weights and a goodness of fit criterion. However, when the weights do not generate increasing residual adjustment functions, a selection criterion is needed to obtain the robust root. The weight functions discussed in this paper do not automatically downweight a proportion of the data; an observation is significantly downweighted only if it is inconsistent with the assumed model. At the true model, therefore, the proposed estimating equations behave like the ordinary likelihood equations. We apply our results to several discrete models; in addition, a toxicology experiment illustrates the method in the context of logistic regression.

19.
The Cox proportional hazards model has become the standard model for survival analysis. It is often seen as the null model in that "... explicit excuses are now needed to use different models" (Keiding, Proceedings of the XIXth International Biometric Conference, Cape Town, 1998). However, converging hazards also occur frequently in survival analysis. The Burr model, which may be derived as the marginal from a gamma frailty model, is one commonly used tool to model converging hazards. We outline this approach and introduce a mixed model which extends the Burr model and allows for both proportional and converging hazards. Although a semi-parametric model in its own right, we demonstrate how the mixed model can be derived via a gamma frailty interpretation, suggesting an E-M fitting procedure. We illustrate the modelling techniques using data on survival of hospice patients.
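A minimal numeric sketch of how gamma frailty induces converging hazards: marginalizing S(t|Z) = exp(-Z exp(xb) L(t)) over a mean-one gamma frailty Z with variance theta gives the Burr form S(t) = (1 + theta exp(xb) L(t))^(-1/theta), whose two-group hazard ratio drifts toward 1 as t grows. The parameter values below are illustrative assumptions:

```python
# Minimal sketch: marginal hazard ratio under gamma frailty converges to 1.
import numpy as np

theta, beta = 1.0, np.log(2.0)        # frailty variance; conditional log HR
Lam = np.linspace(0.01, 10, 5)        # cumulative baseline hazard values L(t)

def marginal_hazard(Lam, xb, theta):
    # h(t) = l(t) * exp(xb) / (1 + theta * exp(xb) * L(t)); l(t) cancels in ratios
    return np.exp(xb) / (1 + theta * np.exp(xb) * Lam)

hr = marginal_hazard(Lam, beta, theta) / marginal_hazard(Lam, 0.0, theta)
print(hr.round(3))                    # starts near 2, converges toward 1
```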

20.
The maximum likelihood and maximum partial likelihood approaches to the proportional hazards model are unified. The purpose is to give a general approach to the analysis of the proportional hazards model, whether the baseline distribution is absolutely continuous, discrete, or a mixture. The advantage is that heavily tied data will be analyzed with a discrete time model, while data with no ties are analyzed with ordinary Cox regression. Data sets in between are treated by a compromise between the discrete time model and Efron's approach to tied data in survival analysis, and the transitions between modes are automatic. A simulation study is conducted comparing the proposed approach to standard methods of handling ties. A recent suggestion, which revives Breslow's approach to tied data, is finally discussed.
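A minimal sketch comparing the standard Breslow and Efron treatments of tied event times, assuming `statsmodels`; the paper's proposal, which interpolates between a discrete-time model and Efron's method, is not implemented by either option shown here:

```python
# Minimal sketch: Cox fits under different ties-handling methods.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
x = rng.normal(size=n)
raw = rng.exponential(scale=np.exp(-0.5 * x))
time = np.ceil(raw * 4) / 4                 # coarsen times -> heavily tied data
status = (rng.random(n) < 0.8).astype(int)  # roughly 20% treated as censored

for ties in ["breslow", "efron"]:
    fit = sm.PHReg(time, x.reshape(-1, 1), status=status, ties=ties).fit()
    print(ties, np.round(fit.params, 4))    # estimates diverge when ties are heavy
```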
