Similar Literature
20 similar articles found.
1.
Quantile regression is a flexible approach to assessing covariate effects on failure time that has attracted considerable interest in survival analysis. When the dimension of the covariates is much larger than the sample size, feature screening and variable selection become extremely important and indispensable. In this article, we introduce a new feature screening method for ultrahigh-dimensional censored quantile regression. The proposed method works for a general class of survival models, allows for heterogeneity of the data, and enjoys desirable properties, including the sure screening property and the ranking consistency property. Moreover, an iterative version of the screening algorithm is proposed to accommodate more complex situations. Monte Carlo simulation studies are designed to evaluate the finite sample performance under different model settings. We also illustrate the proposed methods through an empirical analysis.
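As a hedged illustration of the marginal-utility idea behind such screens, the sketch below ranks covariates by the absolute slope of a marginal quantile regression fitted with statsmodels' QuantReg. It ignores censoring entirely, so it is only a stand-in for the paper's censored procedure; the function name, `tau`, and `top_k` are names chosen here.

```python
import numpy as np
import statsmodels.api as sm

def marginal_quantile_screen(X, y, tau=0.5, top_k=10):
    """Rank covariates by the absolute slope of a marginal quantile
    regression at level tau.  Censoring is ignored, so this only
    illustrates the marginal-utility idea, not the paper's method.
    Standardize the columns of X first so slopes are comparable."""
    n, p = X.shape
    slopes = np.zeros(p)
    for j in range(p):
        xj = sm.add_constant(X[:, j])          # intercept + one covariate
        fit = sm.QuantReg(y, xj).fit(q=tau)    # marginal quantile fit
        slopes[j] = abs(fit.params[1])
    order = np.argsort(slopes)[::-1]
    return order[:top_k], slopes
```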

2.
For ultrahigh-dimensional data, independent feature screening has been demonstrated both theoretically and empirically to be an effective dimension reduction method with low computational demanding. Motivated by the Buckley–James method to accommodate censoring, we propose a fused Kolmogorov–Smirnov filter to screen out the irrelevant dependent variables for ultrahigh-dimensional survival data. The proposed model-free screening method can work with many types of covariates (e.g. continuous, discrete and categorical variables) and is shown to enjoy the sure independent screening property under mild regularity conditions without requiring any moment conditions on covariates. In particular, the proposed procedure can still be powerful when covariates are strongly dependent on each other. We further develop an iterative algorithm to enhance the performance of our method while dealing with the practical situations where some covariates may be marginally unrelated but jointly related to the response. We conduct extensive simulations to evaluate the finite-sample performance of the proposed method, showing that it has favourable exhibition over the existing typical methods. As an illustration, we apply the proposed method to the diffuse large-B-cell lymphoma study.  相似文献   
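As a rough illustration of Kolmogorov–Smirnov-based marginal screening (without the Buckley–James step for censoring that the paper's fused filter includes), the sketch below splits the sample at each covariate's median and compares the two conditional response distributions; all names are chosen here for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_screen(X, y, top_k=10):
    """Rank covariates by a marginal two-sample Kolmogorov-Smirnov
    statistic: split the sample at the covariate's median and compare
    the conditional distributions of the response in the two halves.
    Larger KS distances suggest stronger marginal dependence."""
    n, p = X.shape
    stats = np.zeros(p)
    for j in range(p):
        cut = np.median(X[:, j])
        lo, hi = y[X[:, j] <= cut], y[X[:, j] > cut]
        if len(lo) and len(hi):                # skip degenerate splits
            stats[j] = ks_2samp(lo, hi).statistic
    order = np.argsort(stats)[::-1]
    return order[:top_k], stats
```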

3.
In this paper, we consider ultrahigh-dimensional sufficient dimension reduction (SDR) for censored data with covariates subject to measurement error. We first propose a feature screening procedure based on the censored data and the error-prone covariates. With suitable correction for the mismeasurement, the error-contaminated variables detected by the proposed feature screening procedure coincide with the truly important variables. Based on the selected active variables, we develop an SDR method to estimate the central subspace and the structural dimension with both censoring and measurement error incorporated. Theoretical results for the proposed method are established. Simulation studies are reported to assess the performance of the proposed method, and the method is applied to the NKI breast cancer data.

4.
An estimation procedure is proposed for the Cox model in cohort studies with validation sampling, where crude covariate information is observed for the full cohort and true covariate information is collected on a validation set sampled randomly from the full cohort. The proposed method makes use of the partial information available on the entire cohort by fitting a working Cox model relating the crude covariates to the failure time. The resulting estimator is consistent regardless of the specification of the working model and is asymptotically more efficient than the validation-set-only estimator. Approximate asymptotic relative efficiencies with respect to some alternative methods are derived under a simple scenario and further studied numerically. The finite sample performance is investigated and compared with alternative methods via simulation studies. A similar procedure also works for the case where the validation set is a stratified random sample from the cohort.

5.
Conditional power calculations are frequently used to guide the decision whether to stop a trial for futility or to modify the planned sample size. These calculations ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem, which we address with specific prediction models for the long-term endpoint that enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, so that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal p-value combination method to control the Type I error. We support the proposal with Monte Carlo simulations based on data from a real clinical trial.
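For reference, the classical covariate-free conditional power calculation that such procedures build on follows directly from the Brownian-motion representation of the test statistic. The sketch below is that standard textbook calculation, not the paper's covariate-adjusted procedure; the function name and defaults are chosen here.

```python
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025, drift=None):
    """Conditional power of a one-sided z-test at information fraction t.

    Uses the Brownian-motion representation B(t) = z_t * sqrt(t), where
    B(1) - B(t) ~ N(drift * (1 - t), 1 - t) and drift = E[Z(1)].  By
    default, drift is the current-trend estimate z_t / sqrt(t)."""
    t, z_t = info_frac, z_interim
    if drift is None:
        drift = z_t / t ** 0.5                 # current-trend drift
    z_alpha = norm.ppf(1 - alpha)
    num = z_t * t ** 0.5 + drift * (1 - t) - z_alpha
    return norm.cdf(num / (1 - t) ** 0.5)

# Example: interim z = 1.4 at half the information, one-sided alpha = 0.025
print(conditional_power(1.4, 0.5))
```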

6.
The case-cohort study design is widely used to reduce cost when collecting expensive covariates in large cohort studies with survival or competing risks outcomes. A case-cohort study dataset consists of two parts: (a) a random sample and (b) all cases or failures from a specific cause of interest. Clinicians often assess covariate effects on competing risks outcomes. The proportional subdistribution hazards model directly evaluates the effect of a covariate on the cumulative incidence function under the non-covariate-dependent censoring assumption for the full cohort study. However, the non-covariate-dependent censoring assumption is often violated in many biomedical studies. In this article, we propose a proportional subdistribution hazards model for case-cohort studies with stratified data with covariate-adjusted censoring weight. We further propose an efficient estimator when extra information from the other causes is available under case-cohort studies. The proposed estimators are shown to be consistent and asymptotically normal. Simulation studies show (a) the proposed estimator is unbiased when the censoring distribution depends on covariates and (b) the proposed efficient estimator gains estimation efficiency when using extra information from the other causes. We analyze a bone marrow transplant dataset and a coronary heart disease dataset using the proposed method.

7.
Generalized case–cohort designs have proved to be a cost-effective way to enhance study efficiency in large epidemiological cohorts. In a generalized case–cohort design, we first select a subcohort from the underlying cohort by simple random sampling, and then sample a subset of the failures among the remaining subjects. In this article, we propose an inference procedure for the unknown regression parameters in the additive hazards model and develop an optimal sample size allocation to achieve maximum power at a given budget in the generalized case–cohort design. The finite sample performance of the proposed method is evaluated through simulation studies. The proposed method is applied to a real data set from the National Wilm's Tumor Study Group.

8.
Most feature screening methods for ultrahigh-dimensional classification explicitly or implicitly assume that the covariates are continuous. However, in practice it is quite common for both categorical and continuous covariates to appear in the data, and applicable feature screening methods are very limited. To handle this non-trivial situation, we propose an entropy-based feature screening method, which is model-free and provides a unified screening procedure for both categorical and continuous covariates. We establish the sure screening and ranking consistency properties of the proposed procedure. We investigate its finite sample performance by simulation studies and illustrate the method with a real data analysis.
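As a hedged stand-in for the paper's entropy-based statistic, the sketch below ranks mixed continuous/categorical covariates for a categorical response by estimated mutual information, using scikit-learn's estimator; the function name and arguments are chosen here for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def entropy_screen(X, y, discrete_mask, top_k=10):
    """Rank covariates for a categorical response y by estimated mutual
    information, which handles continuous and discrete columns in one
    pass; discrete_mask is a boolean array flagging discrete columns.
    This is a stand-in for the paper's entropy-based statistic."""
    mi = mutual_info_classif(X, y, discrete_features=discrete_mask,
                             random_state=0)
    order = np.argsort(mi)[::-1]
    return order[:top_k], mi
```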

9.
Matched case–control designs are commonly used in epidemiological studies for estimating the effect of exposure variables on the risk of a disease while controlling for confounding variables. Due to the retrospective nature of such studies, information on a covariate may be missing for some subjects. A straightforward application of the conditional logistic likelihood for analyzing matched case–control data with a partially missing covariate may yield inefficient estimators of the parameters. A robust method is proposed to handle this problem using an estimated conditional score approach when the missingness mechanism does not depend on the disease status. Within the conditional logistic likelihood framework, an empirical procedure is used to estimate the odds of the disease for the subjects with missing covariate values. The asymptotic distribution and the asymptotic variance of the estimator are derived when the matching variables and the completely observed covariates are categorical. The finite sample performance of the proposed estimator is assessed through a simulation study. Finally, the proposed method is applied to analyze two matched case–control studies. The Canadian Journal of Statistics 38: 680–697; 2010 © 2010 Statistical Society of Canada

10.
We consider hypothesis testing problems for low-dimensional coefficients in a high-dimensional additive hazards model. A variance reduced partial profiling estimator (VRPPE) is proposed and its asymptotic normality is established, which enables us to test the significance of each single coefficient when the data dimension is much larger than the sample size. Based on the p-values obtained from the proposed test statistics, we then apply a multiple testing procedure to identify significant coefficients and show that the false discovery rate can be controlled at the desired level. The proposed method is also extended to testing a low-dimensional sub-vector of coefficients. The finite sample performance of the proposed testing procedure is evaluated by simulation studies. We also apply it to two real data sets, with one focusing on testing low-dimensional coefficients and the other focusing on identifying significant coefficients through the proposed multiple testing procedure.
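The multiple testing step can be illustrated with the Benjamini–Hochberg procedure, the standard step-up rule for FDR control. The abstract does not name the exact procedure used, so the sketch below is the canonical choice rather than necessarily the authors' method.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: given marginal p-values
    (e.g. one per coefficient), return a boolean mask of discoveries
    with the false discovery rate controlled at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # BH critical values
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                   # reject the k smallest
    return reject
```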

11.
A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultrahigh-dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under general nonparametric models and some mild technical conditions, the proposed independence screening methods are shown to enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, data-driven thresholding and an iterative nonparametric independence screening (INIS) procedure are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension, and performs better than competing methods.
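A minimal sketch of the NIS idea, substituting a low-degree polynomial fit for the B-spline smoothers used in the paper: each covariate is scored by how much response variance its marginal nonparametric fit explains. Names and the degree default are chosen here.

```python
import numpy as np

def nis_screen(X, y, degree=3, top_k=10):
    """Rank covariates by the R^2 of a marginal polynomial regression,
    a crude stand-in for the marginal B-spline fits in NIS.  Covariates
    whose marginal nonparametric fit explains more variance rank higher."""
    n, p = X.shape
    score = np.zeros(p)
    for j in range(p):
        coef = np.polyfit(X[:, j], y, degree)  # marginal smoother
        resid = y - np.polyval(coef, X[:, j])
        score[j] = 1.0 - resid.var() / y.var() # marginal R^2
    order = np.argsort(score)[::-1]
    return order[:top_k], score
```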

12.
Stochastic gradient descent (SGD) provides a scalable way to compute parameter estimates in applications involving large-scale or streaming data. As an alternative version, averaged implicit SGD (AI-SGD) has been shown to be more stable and more efficient. Although the asymptotic properties of AI-SGD are well established, statistical inference based on it, such as interval estimation, remains unexplored. The bootstrap method is not computationally feasible because it requires repeated resampling of the entire data set. In addition, the plug-in method is not applicable when there is no explicit covariance matrix formula. In this paper, we propose a scalable statistical inference procedure, which can be used for conducting inference based on the AI-SGD estimator. The proposed procedure updates the AI-SGD estimate as well as many randomly perturbed AI-SGD estimates upon the arrival of each observation. We derive some large-sample theoretical properties of the proposed procedure and examine its performance via simulation studies.
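The random-perturbation idea can be sketched for least squares as follows: alongside the point estimate, run B replicates whose per-observation gradients are multiplied by i.i.d. mean-one random weights, and read interval estimates off the spread of the perturbed averaged iterates. The sketch uses plain explicit SGD rather than the implicit update of AI-SGD, and all names and the step-size schedule are assumptions.

```python
import numpy as np

def perturbed_sgd(X, y, n_boot=50, lr=0.5, seed=0):
    """Averaged SGD for least squares plus randomly perturbed replicates.

    Row 0 of the state is the unperturbed estimate; each other row
    multiplies every observation's gradient by an i.i.d. Exp(1) weight.
    The standard deviation across the perturbed averaged iterates serves
    as a standard-error estimate.  Explicit SGD stands in here for the
    implicit update of AI-SGD."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros((n_boot + 1, p))
    avg = np.zeros_like(theta)
    for t in range(n):                          # one pass, streaming
        x, r = X[t], y[t]
        step = lr / (1 + t) ** 0.51             # Robbins-Monro schedule
        w = np.concatenate(([1.0], rng.exponential(1.0, n_boot)))
        grad = (theta @ x - r)[:, None] * x[None, :]
        theta -= step * w[:, None] * grad
        avg += (theta - avg) / (t + 1)          # Polyak averaging
    return avg[0], avg[1:].std(axis=0)          # estimate, std. errors
```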

13.
Varying-coefficient models have been widely used to investigate the possible time-dependent effects of covariates when the response variable comes from a normal distribution. Much progress has been made on inference and variable selection in the framework of such models. However, the identification of model structure, that is, identifying which covariates have time-varying effects and which have fixed effects, remains a challenging and unsolved problem, especially when the dimension of the covariates is much larger than the sample size. In this article, we consider the structure identification and variable selection problems in varying-coefficient models for high-dimensional data. Using a modified basis expansion approach and group variable selection methods, we propose a unified procedure to simultaneously identify the model structure, select important variables and estimate the coefficient curves. The unique feature of the proposed approach is that the model structure need not be specified in advance, so it is more realistic and appropriate for real data analysis. Asymptotic properties of the proposed estimators are derived under regularity conditions. Furthermore, we evaluate the finite sample performance of the proposed methods with Monte Carlo simulation studies and a real data analysis.

14.
The author considers time-to-event data from case-cohort designs. As existing methods are either inefficient or based on restrictive assumptions concerning the censoring mechanism, he proposes a semi-parametrically efficient estimator under the usual assumptions for Cox regression models. The estimator in question is obtained by a one-step Newton-Raphson approximation that solves the efficient score equations with initial value obtained from an existing method. The author proves that the estimator is consistent, asymptotically efficient and normally distributed in the limit. He also resorts to simulations to show that the proposed estimator performs well in finite samples and that it considerably improves the efficiency of existing pseudo-likelihood estimators when a correlate of the missing covariate is available. Although he focuses on the situation where covariates are discrete, the author also explores how the method can be applied to models with continuous covariates.

15.
Goodness-of-fit tests are proposed for the skew-normal law in arbitrary dimension. In the bivariate case, the proposed tests utilize the fact that the moment-generating function of the skew-normal variable is quite simple and satisfies a first-order partial differential equation. This differential equation is estimated from the sample, and the test statistic is constructed as an L2-type distance measure incorporating this estimate. An extension of the procedure to dimension greater than two is suggested, and an effective bootstrap procedure is used to study the behaviour of the new method with real and simulated data.

16.
This paper discusses regression analysis of current status or case I interval-censored failure time data arising from the additive hazards model. In this situation, some covariates could be missing for various reasons, but there may exist some auxiliary information about the missing covariates. To address the problem, we propose an estimated partial likelihood approach for estimation of the regression parameters, which makes use of the available auxiliary information. The method can be easily implemented, and the asymptotic properties of the resulting estimates are established. To assess the finite sample performance of the proposed method, an extensive simulation study is conducted and indicates that the method works well.

17.
Sample covariance matrices play a central role in numerous popular statistical methodologies, for example principal components analysis, Kalman filtering and independent component analysis. However, modern random matrix theory indicates that, when the dimension of a random vector is not negligible with respect to the sample size, the sample covariance matrix demonstrates significant deviations from the underlying population covariance matrix. There is an urgent need to develop new estimation tools in such cases with high-dimensional data to recover the characteristics of the population covariance matrix from the observed sample covariance matrix. We propose a novel solution to this problem based on the method of moments. When the parametric dimension of the population spectrum is finite and known, we prove that the proposed estimator is strongly consistent and asymptotically Gaussian. Otherwise, we combine the first estimation method with a cross-validation procedure to select the unknown model dimension. Simulation experiments demonstrate the consistency of the proposed procedure. We also indicate possible extensions of the proposed estimator to the case where the population spectrum has a density.
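The moment-matching idea can be sketched for the first two spectral moments using the standard Marchenko–Pastur relations b1 = a1 and b2 = a2 + c·a1² between sample (b) and population (a) spectral moments with aspect ratio c = p/n. This two-moment sketch is far simpler than the paper's full estimator and only illustrates the mechanism.

```python
import numpy as np

def population_spectral_moments(S, n):
    """Recover the first two moments of the population spectrum from a
    p x p sample covariance matrix S built from n observations, using
    the Marchenko-Pastur moment relations b1 = a1, b2 = a2 + c * a1**2
    with c = p / n."""
    p = S.shape[0]
    c = p / n
    eig = np.linalg.eigvalsh(S)
    b1, b2 = eig.mean(), np.mean(eig ** 2)     # sample spectral moments
    a1 = b1
    a2 = b2 - c * a1 ** 2                      # bias-corrected 2nd moment
    return a1, a2
```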

18.
Screening procedures play an important role in data analysis, especially in high-throughput biological studies where the datasets consist of more covariates than independent subjects. In this article, a Bayesian screening procedure is introduced for the binary response models with logit and probit links. In contrast to many screening rules based on marginal information involving one or a few covariates, the proposed Bayesian procedure simultaneously models all covariates and uses closed-form screening statistics. Specifically, we use the posterior means of the regression coefficients as screening statistics; by imposing a generalized g-prior on the regression coefficients, we derive the analytical form of their posterior means and compute the screening statistics without Markov chain Monte Carlo implementation. We evaluate the utility of the proposed Bayesian screening method using simulations and real data analysis. When the sample size is small, the simulation results suggest improved performance with comparable computational cost.
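The closed-form idea can be sketched in a linear-model surrogate: under a Zellner g-prior, the posterior mean of the coefficients is a known shrinkage of the least-squares estimate, so screening statistics come out without MCMC. The paper works with logit/probit links and a generalized g-prior, so the sketch below only illustrates the mechanism; names and the default g are chosen here.

```python
import numpy as np

def g_prior_screen(X, y, g=None, top_k=10):
    """Screening statistics from posterior means under a Zellner g-prior
    in a linear-model surrogate: E[beta | y] = g / (1 + g) * beta_ols.
    With p > n, lstsq returns the minimum-norm solution, which keeps the
    sketch runnable but departs from the paper's setting."""
    n, p = X.shape
    if g is None:
        g = float(n)                           # unit-information prior
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    post_mean = g / (1 + g) * beta_ols         # closed-form posterior mean
    order = np.argsort(np.abs(post_mean))[::-1]
    return order[:top_k], post_mean
```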

19.
Ultrahigh-dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for a categorical response with ultrahigh-dimensional categorical covariates. The proposed procedure can be directly applied to detect important interaction effects. We further show that the proposed procedure possesses the screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies and illustrate the proposed method with two empirical datasets.
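A minimal sketch of chi-square screening for categorical data: score each covariate by the Pearson chi-square statistic of its contingency table with the response, and screen an interaction by first crossing two columns into a single categorical covariate. Function and argument names are chosen here.

```python
import numpy as np
from scipy.stats import chi2_contingency

def chisq_screen(X, y, top_k=10):
    """Rank categorical covariates by the Pearson chi-square statistic
    of their contingency table with the categorical response y.  An
    interaction of columns j and k can be screened by crossing them
    first, e.g. Xjk = [f"{a}:{b}" for a, b in zip(X[:, j], X[:, k])]."""
    p = X.shape[1]
    stats = np.zeros(p)
    for j in range(p):
        table = np.array([[np.sum((X[:, j] == a) & (y == b))
                           for b in np.unique(y)]
                          for a in np.unique(X[:, j])])
        stats[j] = chi2_contingency(table)[0]  # Pearson chi-square
    order = np.argsort(stats)[::-1]
    return order[:top_k], stats
```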

20.
Right-censored and length-biased failure time data arise in many fields, including cross-sectional prevalent cohort studies, and their analysis has recently attracted a great deal of attention. It is well known that for regression analysis of failure time data, two commonly used approaches are hazard-based and quantile-based procedures, and most existing methods are hazard-based. In this paper, we consider quantile regression analysis of right-censored and length-biased data and present a semiparametric varying-coefficient partially linear model. For estimation of the regression parameters, a three-stage procedure that makes use of the inverse probability weighting technique is developed, and the asymptotic properties of the resulting estimators are established. In addition, the approach allows the censoring variable to depend on covariates, while most existing methods assume independence between the censoring variable and the covariates. A simulation study is conducted and suggests that the proposed approach works well in practical situations. An illustrative example is also provided.
