Similar Documents
20 similar documents retrieved.
1.
Aalen's nonparametric additive model, in which the regression coefficients are assumed to be unspecified functions of time, is a flexible alternative to Cox's proportional hazards model when the proportionality assumption is in doubt. In this paper, we incorporate a general linear hypothesis into the estimation of the time-varying regression coefficients. We combine unrestricted least squares estimators with estimators that are restricted by the linear hypothesis to produce James-Stein-type shrinkage estimators of the regression coefficients. We develop the asymptotic joint distribution of the restricted and unrestricted estimators and use it to study the relative performance of the proposed estimators via their integrated asymptotic distributional risks. We conduct Monte Carlo simulations to examine the relative performance of the estimators in terms of their integrated mean square errors. We also compare the performance of the proposed estimators with a recently devised LASSO estimator and with ridge-type estimators, both via simulations and via data on the survival of primary biliary cirrhosis patients.
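To make the shrinkage idea concrete, here is a minimal numpy sketch of a positive-part James-Stein-type combination of unrestricted and restricted estimates. It illustrates the general construction only, not the authors' estimator; the function name, the distance statistic `T_n`, and the shrinkage constant `c` are hypothetical placeholders.

```python
import numpy as np

def shrinkage_estimate(beta_unres, beta_res, T_n, c):
    """Positive-part James-Stein-type shrinkage of the unrestricted estimate
    toward the estimate obeying the linear hypothesis.

    beta_unres : unrestricted coefficient estimates
    beta_res   : estimates restricted by the linear hypothesis
    T_n        : statistic measuring departure from the hypothesis
    c          : shrinkage constant (often the number of restrictions minus 2)
    """
    beta_unres = np.asarray(beta_unres, dtype=float)
    beta_res = np.asarray(beta_res, dtype=float)
    weight = max(0.0, 1.0 - c / T_n)   # positive part guards against over-shrinkage
    return beta_res + weight * (beta_unres - beta_res)

# Hypothetical illustration with made-up numbers.
beta_full = np.array([0.80, -0.30, 0.05, 0.02])
beta_restricted = np.array([0.82, -0.28, 0.00, 0.00])
print(shrinkage_estimate(beta_full, beta_restricted, T_n=6.0, c=2.0))
```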

2.
Extensions of Cox's proportional hazards regression model (Cox, 1972) for the analysis of survival data are considered in a more general multistate framework. This framework allows several transient disease states between the initial entry state and death, as well as possible competing causes of death. Methods for parameter and function estimation within this extension are presented and applied to the analysis of data from the Stanford Heart Transplantation Program (Crowley and Hu, 1977).

3.
This article considers a shrinkage estimation procedure for Cox's proportional hazards regression model when it is suspected that some of the parameters may be restricted to a subspace. We develop the statistical properties of the shrinkage estimators, including their asymptotic distributional biases and risks. The shrinkage estimators have much higher relative efficiency than the classical estimator. Furthermore, we consider two penalty estimators, the LASSO and the adaptive LASSO, and compare their performance with that of the shrinkage estimators numerically. A Monte Carlo simulation experiment is conducted for different combinations of irrelevant predictors, and the performance of each estimator is evaluated in terms of simulated mean squared error. The simulation study shows that the shrinkage estimators are comparable to the penalty estimators when the number of irrelevant predictors in the model is relatively large. The shrinkage and penalty methods are applied to two real data sets to illustrate the usefulness of the procedures in practice.
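For readers who want to try the penalty estimators discussed here, the sketch below fits a LASSO-type penalized Cox model with the Python `lifelines` package. The `penalizer`/`l1_ratio` arguments and the bundled Rossi recidivism data set reflect my understanding of recent lifelines releases and are not taken from the article.

```python
# LASSO-type penalized Cox regression; assumes a recent lifelines release
# in which CoxPHFitter accepts penalizer and l1_ratio arguments.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # recidivism data: duration column 'week', event column 'arrest'

cph_lasso = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # l1_ratio=1.0 gives a pure L1 penalty
cph_lasso.fit(df, duration_col="week", event_col="arrest")
print(cph_lasso.summary[["coef", "exp(coef)"]])  # coefficients shrunk toward zero
```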

4.
We propose a penalized empirical likelihood method via the bridge estimator in Cox's proportional hazards model for parameter estimation and variable selection. Under reasonable conditions, we show that penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is shown to be chi-squared. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated in hypothesis testing and in constructing confidence sets. The method is further illustrated by extensive simulation studies and a real example.
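As a rough sketch of the objective (my own notation and an assumed form of the bridge penalty, not necessarily the authors' exact definition), the penalized empirical log-likelihood can be written as

\[
\ell_P(\beta) \;=\; -\sum_{i=1}^{n}\log\bigl\{1+\lambda(\beta)^{\top} g_i(\beta)\bigr\}\;-\;n\sum_{j=1}^{p}\tau\,|\beta_j|^{\gamma},\qquad 0<\gamma<1,
\]

where the g_i(β) are estimating functions built from the partial-likelihood score and λ(β) is the Lagrange multiplier enforcing the empirical likelihood constraints.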

5.
Left-truncated and right-censored (LTRC) data are encountered frequently because of prevalent cohort sampling in follow-up studies. Because the distribution of survival time is typically skewed, quantile regression is a useful alternative to Cox's proportional hazards model and the accelerated failure time model for survival analysis. In this paper, we apply the quantile regression model to LTRC data and develop an unbiased estimating equation for the regression coefficients. The proposed estimation method uses inverse-probability-of-truncation-and-censoring weighting. The resulting estimator is uniformly consistent and asymptotically normal. The finite-sample performance of the proposed estimation method is evaluated using extensive simulation studies. Finally, an analysis of real data is presented to illustrate the method.
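The weighting idea can be sketched numerically as a generic inverse-probability-weighted check-function fit (an illustration only, not the authors' estimating equation; in the LTRC setting the weights w_i would be built from the estimated truncation and censoring probabilities).

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def weighted_quantile_fit(X, y, weights, tau=0.5):
    """Minimize sum_i w_i * rho_tau(y_i - x_i' beta) over beta (intercept included)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    objective = lambda beta: np.sum(weights * check_loss(y - Xd @ beta, tau))
    return minimize(objective, np.zeros(Xd.shape[1]), method="Nelder-Mead").x

# Hypothetical data; real weights would be 1/(G_hat * T_hat) for observed failures, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.standard_t(df=3, size=200)
w = np.ones(200)                      # placeholder weights
print(weighted_quantile_fit(X, y, w, tau=0.5))
```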

6.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, they may be fitted easily as a weighted Poisson regression. Point processes, however, lack the notion of sample size. This is problematic for model selection, because various classical criteria, such as the Bayesian information criterion (BIC), are functions of the sample size n and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. From this result, we derive a version of BIC for point process models and, when the model is fitted via penalised likelihood, conditions on the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss the challenges of extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
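In the abstract's notation, the resulting criterion takes the familiar BIC form with the observed point count m in place of the sample size (a paraphrase of the idea, not the paper's exact statement):

\[
\mathrm{BIC}_m \;=\; -2\log L(\hat\theta) \;+\; k\,\log m,
\]

where k is the number of fitted parameters and m the number of observed point events.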

7.
Cox's widely used semi-parametric proportional hazards (PH) regression model places restrictions on the possible shapes of the hazard function. Models based on the first hitting time (FHT) of a stochastic process are among the alternatives and have the attractive feature of being based on a model of the underlying process. We review and compare the PH model and an FHT model based on a Wiener process, which leads to an inverse Gaussian (IG) regression model. This particular model can also represent a "cured fraction" of long-term survivors. A case study of survival after coronary artery bypass grafting is used to examine the interpretation of the IG model, especially in relation to covariates that affect both of its parameters.
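For orientation, the standard first-hitting-time facts behind such IG models (background material in my own notation, which may differ from the paper's): for a Wiener process started at y_0 > 0 with drift μ and variance σ², the time to first reach zero is inverse Gaussian when μ < 0, while for μ > 0 the process escapes with positive probability, which is what produces a cured fraction:

\[
T \sim \mathrm{IG}\!\left(\frac{y_0}{|\mu|},\,\frac{y_0^2}{\sigma^2}\right)\ \ (\mu<0),
\qquad
P(T=\infty) \;=\; 1-\exp\!\left(-\frac{2 y_0 \mu}{\sigma^2}\right)\ \ (\mu>0).
\]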

8.
As a useful supplement to mean regression, quantile regression is a completely distribution-free approach and is more robust to heavy-tailed random errors. In this paper, a variable selection procedure for quantile varying-coefficient models is proposed by combining local polynomial smoothing with the adaptive group LASSO. With an appropriate selection of the tuning parameters by BIC, the theoretical properties of the new procedure, including consistency in variable selection and the oracle property in estimation, are established. The finite-sample performance of the newly proposed method is investigated through simulation studies and an analysis of the Boston house price data. The numerical studies confirm that the proposed procedure (QKLASSO) is both robust and efficient for varying-coefficient models irrespective of the error distribution, making it a good alternative and a necessary supplement to the KLASSO method.

9.
This paper focuses on robust estimation and variable selection for partially linear models. We combine weighted least absolute deviation (WLAD) regression with the adaptive least absolute shrinkage and selection operator (LASSO) to achieve simultaneous robust estimation and variable selection for partially linear models. Compared with the LAD-LASSO method, the WLAD-LASSO method is resistant to heavy-tailed errors and outliers in the parametric components. In addition, we estimate the unknown smooth function by robust local linear regression. Under regularity conditions, the theoretical properties of the proposed estimators are established. We further examine the finite-sample performance of the proposed procedure through simulation studies and a real data example.
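Schematically, a WLAD plus adaptive-LASSO criterion of this kind has the form below (my paraphrase; the exact weights and penalty levels are as defined in the paper):

\[
\hat\beta \;=\; \arg\min_{\beta}\;\sum_{i=1}^{n} w_i\,\bigl|y_i - x_i^{\top}\beta - \hat g(t_i)\bigr| \;+\; n\sum_{j=1}^{p}\lambda_j|\beta_j|,
\]

where the weights w_i downweight high-leverage covariate values and the adaptive penalties are typically λ_j ∝ 1/|\tilde\beta_j| for some initial estimate \tilde\beta.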

10.
In this paper we address the problem of estimating a vector of regression parameters in the Weibull censored regression model. Our main objective is to provide natural adaptive estimators that significantly improve upon the classical procedures when some of the predictors may or may not be associated with the response. In the context of two competing Weibull censored regression models (the full model and a candidate submodel), we consider an adaptive shrinkage estimation strategy that shrinks the full-model maximum likelihood estimate in the direction of the submodel maximum likelihood estimate. We develop the properties of these estimators using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. Further, we consider a LASSO-type estimation strategy and compare its performance with that of the shrinkage estimators. Monte Carlo simulations reveal that when the true model is close to the candidate submodel, the shrinkage strategy performs better than the LASSO strategy when, and only when, there are many inactive predictors in the model. The shrinkage and LASSO strategies are applied to a real data set from the Veterans' Administration (VA) lung cancer study to illustrate the usefulness of the procedures in practice.

11.
As the number of random variables in categorical data increases, the number of possible log-linear models that can be fitted to the data grows rapidly, and various model selection methods have been developed. However, models chosen by different selection criteria often do not coincide. In this paper, we propose a comparison method for testing final models that are non-nested. The statistic of Cox (1961, 1962) is applied to log-linear models for testing non-nested models, and the Kullback-Leibler measure of closeness (Pesaran, 1987) is explored. For log-linear models, pseudo-estimators for the expectation and variance of Cox's statistic are derived and shown to be consistent.

12.
In this article we present a robust and efficient variable selection procedure based on modal regression for varying-coefficient models with longitudinal data. The new method is based on basis function approximations and a group version of the adaptive LASSO penalty, which can select significant variables and estimate the non-zero smooth coefficient functions simultaneously. Under suitable conditions, we establish consistency in variable selection and the oracle property in estimation. A simulation study and two real data examples are undertaken to assess the finite-sample performance of the proposed variable selection procedure.

13.
Several approaches have been suggested for fitting linear regression models to censored data. These include Cox's proportional hazards models based on quasi-likelihoods. Methods of fitting based on least squares and maximum likelihood have also been proposed. The methods proposed so far all require special-purpose optimization routines. We describe an approach here that requires only a modified standard least squares routine.

We present methods for fitting a linear regression model to censored data by least squares and by maximum likelihood. In the least squares method, the censored values are replaced by their expectations, and the residual sum of squares is minimized. Several variants are suggested in the way the expectation is calculated: a parametric approach (assuming a normal error model) and two non-parametric approaches are described. We also present a method for solving the maximum likelihood equations for the regression parameters in the censored regression situation. It is shown that the solutions can be obtained by a recursive algorithm that needs only a least squares routine for optimization. The suggested procedures gain considerably in computational efficiency. The Stanford Heart Transplant data are used to illustrate the various methods.
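The parametric (normal-error) least squares variant can be sketched in a few lines: iterate between imputing each right-censored response by its conditional expectation under the current normal fit and re-running ordinary least squares. This is a simplified illustration in the spirit of the description above, not the paper's exact algorithm; in particular the error-variance update is deliberately crude.

```python
import numpy as np
from scipy.stats import norm

def censored_ls_normal(X, y, censored, n_iter=50):
    """Iterative least squares for right-censored data with normal errors.

    X        : (n, p) design matrix including an intercept column
    y        : observed responses (censoring times where censored is True)
    censored : boolean array, True if the response is right-censored
    """
    y_work = y.astype(float).copy()
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, y_work, rcond=None)
        resid = y_work[~censored] - X[~censored] @ beta
        sigma = resid.std(ddof=X.shape[1]) if resid.size > X.shape[1] else 1.0
        # Replace censored values by E[Y | Y > c] = mu + sigma * phi(z) / (1 - Phi(z)),
        # the mean of a normal distribution truncated at the censoring time c.
        mu_c = X[censored] @ beta
        z = (y[censored] - mu_c) / sigma
        y_work[censored] = mu_c + sigma * norm.pdf(z) / norm.sf(z)
    return beta, sigma
```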

14.
Many commonly used statistical methods for data analysis or clinical trial design rely on incorrect assumptions or assume an over-simplified framework that ignores important information. Such statistical practices may lead to incorrect conclusions about treatment effects or to clinical trial designs that are impractical or do not accurately reflect the investigator's goals. Bayesian nonparametric (BNP) models and methods are a very flexible new class of statistical tools that can overcome such limitations, because BNP models can accurately approximate any distribution or function and can accommodate a broad range of statistical problems, including density estimation, regression, survival analysis, graphical modeling, neural networks, classification, clustering, population models, forecasting and prediction, spatiotemporal models, and causal inference. This paper describes three illustrative applications of BNP methods: a randomized clinical trial comparing treatments for intraoperative air leaks after pulmonary resection, estimation of survival time under different multi-stage chemotherapy regimes for acute leukemia, and evaluation of the joint effects of a targeted treatment and an intermediate biological outcome on progression-free survival time in prostate cancer.

15.
The use of general linear modeling (GLM) procedures based on log-rank scores is proposed for the analysis of survival data and compared with standard survival analysis procedures. For the comparison of two groups, this approach performs similarly to the traditional log-rank test. For more complicated designs without ties in the survival times, the approach is only marginally less powerful than tests from proportional hazards models, and clearly less powerful than a likelihood ratio test for a fully parametric model; with ties in the survival times, however, the approach proves more powerful than tests from Cox's semi-parametric proportional hazards procedure. The method appears to provide a reasonably powerful alternative for the analysis of survival data, is easily used in complicated study designs, avoids (semi-)parametric assumptions, and is computationally simple and inexpensive to employ.
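One common construction of such scores (offered here as an assumption about what "log-rank scores" means, not the paper's definition) is the martingale-type residual δ_i − Λ̂(t_i), with Λ̂ the Nelson-Aalen cumulative hazard; the scores can then be analysed with ordinary linear-model software.

```python
import numpy as np

def logrank_scores(time, event):
    """Log-rank (Savage-type) scores delta_i - NelsonAalen(t_i); ties are ignored."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    order = np.argsort(time)
    n = len(time)
    at_risk = n - np.arange(n)                # risk-set sizes along the sorted times
    increments = event[order] / at_risk       # Nelson-Aalen increments
    cumhaz = np.empty(n)
    cumhaz[order] = np.cumsum(increments)     # cumulative hazard at each subject's time
    return event - cumhaz

# The scores can then be fed to any linear-model routine, e.g.
# np.linalg.lstsq(design_matrix, logrank_scores(time, event), rcond=None)
```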

16.
Normally distributed residuals are one of the usual assumptions in autoregressive models, but in practice we are sometimes faced with non-negative residuals. In this paper, we derive modified maximum likelihood estimators of the parameters of the residual distribution and of the autoregressive coefficient. The asymptotic distributions of the modified maximum likelihood estimators are obtained for both stationary and non-stationary models, so that the asymptotic distributions of the unit root, Vuong's, and Cox's test statistics can be derived in the stationary case. Simulation shows that the Akaike information criterion and Vuong's test succeed in selecting the optimal autoregressive model with non-negative residuals. Sometimes Vuong's test selects two competing models as equivalent; these equivalent models may be suitable or unsuitable, so we consider Cox's test for inference after model selection. A Kolmogorov-Smirnov test confirms our results. We also compute a tracking interval for competing models to choose between two close competing models when Vuong's test and Cox's test cannot detect the differences.
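For reference, Vuong's statistic for two non-nested models with fitted densities f(·; θ̂) and g(·; γ̂) has the standard form (not specific to this paper)

\[
V_n \;=\; \frac{1}{\sqrt{n}\,\hat\omega_n}\sum_{i=1}^{n}\log\frac{f(y_i;\hat\theta)}{g(y_i;\hat\gamma)} \;\xrightarrow{d}\; N(0,1)
\quad\text{under the null that the models are equivalent},
\]

where ω̂_n² is the sample variance of the individual log-likelihood ratios.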

17.
Feature selection arises in many areas of modern science. For example, in genomic research, we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal). One approach is to fit regression or classification models with a suitable penalty. In the past decade, hyper-LASSO penalization (priors) has received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) for regression or classification with hyper-LASSO priors are still underdeveloped. In this paper, we introduce an MCMC method for learning multinomial logistic regression with hyper-LASSO priors. Our MCMC algorithm uses Hamiltonian Monte Carlo in a restricted Gibbs sampling framework. We use simulation studies and real data to demonstrate the superior performance of hyper-LASSO priors compared with the LASSO, and to investigate the issues of choosing the heaviness and scale of the hyper-LASSO priors.

18.
Censored median regression has proved useful for analyzing survival data in complicated situations, for example when the variance is heteroscedastic or the data contain outliers. In this paper, we study sparse estimation for censored median regression models, an important problem in high-dimensional survival data analysis. In particular, a new procedure is proposed that minimizes an inverse-censoring-probability weighted least absolute deviation loss subject to the adaptive LASSO penalty, resulting in a sparse and robust median estimator. We show that, with a proper choice of the tuning parameter, the procedure identifies the underlying sparse model consistently and has the desired large-sample properties, including root-n consistency and asymptotic normality. The procedure also enjoys great advantages in computation, since its entire solution path can be obtained efficiently. Furthermore, we propose a resampling method to estimate the variance of the estimator. The performance of the procedure is illustrated by extensive simulations and two real data applications, including a microarray gene expression survival data set.
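Schematically, the penalized criterion has the form below (my paraphrase; Ĝ is the Kaplan-Meier estimate of the censoring survival function, δ_i the event indicator, and \tilde\beta an initial estimator):

\[
\hat\beta \;=\; \arg\min_{\beta}\;\sum_{i=1}^{n}\frac{\delta_i}{\hat G(Y_i)}\bigl|Y_i - X_i^{\top}\beta\bigr| \;+\; \lambda_n\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde\beta_j|}.
\]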

19.
Liu X, Wang L, Liang H. Statistica Sinica 2011, 21(3): 1225-1248.
Semiparametric additive partial linear models, containing both linear and nonlinear additive components, are more flexible than linear models and more efficient than general nonparametric regression models because they reduce the "curse of dimensionality". In this paper, we propose a new estimation approach for these models in which we use polynomial splines to approximate the additive nonparametric components, and we derive the asymptotic normality of the resulting estimators of the parameters. We also develop a variable selection procedure to identify significant linear components using the smoothly clipped absolute deviation (SCAD) penalty, and we show that the SCAD-based estimators of the non-zero linear components have an oracle property. Simulations are performed to compare the performance of our approach with several other variable selection methods, such as the Bayesian information criterion and the least absolute shrinkage and selection operator (LASSO). The proposed approach is also applied to real data from a nutritional epidemiology study, in which we explore the relationship between plasma beta-carotene levels and personal characteristics (e.g., age, gender, body mass index (BMI)) as well as dietary factors (e.g., alcohol consumption, smoking status, intake of cholesterol).
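For completeness, the SCAD penalty used for the linear components is usually defined through its derivative (Fan and Li's standard form, with a = 3.7):

\[
p_\lambda'(\theta) \;=\; \lambda\left\{ I(\theta\le\lambda) + \frac{(a\lambda-\theta)_+}{(a-1)\lambda}\,I(\theta>\lambda)\right\},\qquad \theta>0,\ a=3.7.
\]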

20.
Coefficient estimation in linear regression models with missing data is routinely carried out in the mean regression framework. However, mean regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data and to use the parametrically estimated propensity scores to weight the check functions that define the regression parameters. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate the unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, for selecting the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.
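The composite quantile regression building block combines several quantile levels in a single fit; in its standard form (the formulation the weighted and imputed versions build on),

\[
(\hat b_1,\dots,\hat b_K,\hat\beta) \;=\; \arg\min \sum_{k=1}^{K}\sum_{i=1}^{n}\rho_{\tau_k}\!\bigl(y_i - b_k - x_i^{\top}\beta\bigr),
\qquad \rho_\tau(u)=u\bigl(\tau-I(u<0)\bigr),
\]

with equally spaced quantile levels τ_k = k/(K+1).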

