Similar Documents

20 similar documents found.
1.
ABSTRACT

We present an extension of Pan's multiple imputation approach to Cox regression in the setting of interval-censored competing risks data. The idea is to convert interval-censored data into multiple sets of complete or right-censored data and to use partial likelihood methods to analyse them. The process is iterated, and at each step the coefficient of interest, its variance–covariance matrix, and the baseline cumulative incidence function are updated from multiple posterior estimates derived from the Fine and Gray sub-distribution hazards regression given the augmented data. Through simulations of patients at risk of failure from two causes, observed under a prescheduled visit programme that allows for informative interval-censoring mechanisms, we show that the proposed method yields more accurate coefficient estimates than the simple imputation approach. We have implemented the method in the MIICD R package, available on CRAN.
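A minimal Python sketch of the imputation-and-pooling skeleton (the package itself is in R). The Fine–Gray fit inside the loop is replaced by a deliberately crude stand-in, OLS of log event time on the covariate among cause-1 events, since the point here is the within-interval imputation and Rubin's rules; the data, visit spacing, and stand-in fit are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interval-censored competing risks data: each subject has an
# interval (L, R] known to contain the event time, a cause in {1, 2}, and a
# scalar covariate x. Visits are 0.25 time units apart.
n = 200
x = rng.normal(size=n)
true_t = rng.exponential(scale=np.exp(-0.5 * x))
cause = rng.integers(1, 3, size=n)
L = np.floor(true_t * 4) / 4          # last visit before the event
R = L + 0.25                          # first visit after the event

def fit_stand_in(t, cause, x):
    """Crude stand-in for the Fine-Gray fit: OLS of log event time on x
    among cause-1 events. Returns (coef, var)."""
    m = cause == 1
    X = np.column_stack([np.ones(m.sum()), x[m]])
    beta, *_ = np.linalg.lstsq(X, np.log(t[m]), rcond=None)
    resid = np.log(t[m]) - X @ beta
    var = resid.var() * np.linalg.inv(X.T @ X)[1, 1]
    return beta[1], var

# Multiple imputation: draw exact times uniformly within (L, R], fit, pool.
M = 20
coefs, vars_ = [], []
for _ in range(M):
    t_imp = rng.uniform(L, R)         # impute an exact time in the interval
    b, v = fit_stand_in(t_imp, cause, x)
    coefs.append(b)
    vars_.append(v)

# Rubin's rules: total variance = within + (1 + 1/M) * between.
coefs, vars_ = np.array(coefs), np.array(vars_)
pooled = coefs.mean()
total_var = vars_.mean() + (1 + 1 / M) * coefs.var(ddof=1)
print(pooled, total_var**0.5)
```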

2.
Abstract. Conventional bootstrap-t intervals for density functions based on kernel density estimators exhibit poor coverage because the bootstrap fails to estimate the bias correctly. The problem can be resolved either by estimating the bias explicitly or by undersmoothing the kernel density estimate so that its bias becomes asymptotically negligible. The resulting bias-corrected intervals have an optimal coverage error of order arbitrarily close to second order for a sufficiently smooth density function. We investigate the effects on coverage error of both types of bias-corrected interval when the nominal coverage level is calibrated by the iterated bootstrap. In either case, an asymptotic reduction of coverage error is possible provided the bias terms are handled using an extra round of smoothed bootstrapping. Under appropriate smoothness conditions, the optimal coverage error of the iterated bootstrap-t intervals has order arbitrarily close to third order. Both simulated and real data examples illustrate the iterated bootstrap procedures.
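A sketch of an undersmoothed bootstrap-t interval for a density at a point, assuming a Gaussian kernel and the plug-in variance formula Var f̂(x) ≈ f(x)R(K)/(nh); the undersmoothing rate n^(-1/3) and all constants are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=300)              # sample from an "unknown" density
x0 = 0.0                              # point at which to estimate f(x0)
n = len(x)

def kde(data, pts, h):
    """Gaussian kernel density estimate at pts."""
    u = (pts[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def se_hat(data, pts, h):
    """Plug-in standard error: sqrt(f_hat * R(K) / (n h)), R(K) = 1/(2 sqrt(pi))."""
    f = kde(data, pts, h)
    return np.sqrt(f / (2 * np.sqrt(np.pi)) / (len(data) * h))

# Undersmoothing: h ~ n^(-1/3), smaller than the MSE-optimal n^(-1/5) rate,
# so the bias of f_hat is negligible relative to its standard error.
h = x.std() * n ** (-1 / 3)

pt = np.array([x0])
f0, s0 = kde(x, pt, h)[0], se_hat(x, pt, h)[0]

# Bootstrap-t: studentize each resampled estimate.
B = 999
tstats = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    tstats[b] = (kde(xb, pt, h)[0] - f0) / se_hat(xb, pt, h)[0]

lo, hi = np.quantile(tstats, [0.025, 0.975])
print("95% bootstrap-t interval:", (f0 - hi * s0, f0 - lo * s0))
```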

3.
In competing risks analysis, most inferential procedures have been developed for continuous failure time data. However, failure times are sometimes observed only discretely. We propose nonparametric inferences for the cumulative incidence function for purely discrete data with competing risks. When covariate information is available, we propose semiparametric inferences for direct regression modelling of the cumulative incidence function for grouped discrete failure time data with competing risks. Simulation studies show that the procedures perform well. The proposed methods are illustrated with a study of contraceptive use in Indonesia.
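For discrete failure times the cumulative incidence function has a closed-form plug-in estimate, F̂_k(t) = Σ_{s ≤ t} Ŝ(s−) ĥ_k(s), where ĥ_k is the cause-k discrete hazard. A toy computation with invented risk-set counts:

```python
import numpy as np

# Discrete failure times with two competing causes: at each time point s,
# the cause-k hazard is h_k(s) = P(T = s, cause = k | T >= s), and the
# cumulative incidence is F_k(t) = sum_{s <= t} S(s-) * h_k(s).
times = np.array([1, 2, 3, 4, 5])
n_risk = np.array([100, 80, 55, 30, 12])     # number at risk at each time
d1 = np.array([8, 10, 9, 6, 3])              # cause-1 events
d2 = np.array([5, 7, 8, 5, 2])               # cause-2 events

h1, h2 = d1 / n_risk, d2 / n_risk
S = np.cumprod(1 - h1 - h2)                  # overall survival S(t)
S_lag = np.concatenate([[1.0], S[:-1]])      # S(s-), survival just before s

cif1 = np.cumsum(S_lag * h1)
cif2 = np.cumsum(S_lag * h2)
for t, a, b in zip(times, cif1, cif2):
    print(f"t={t}: F1={a:.3f}  F2={b:.3f}")
```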

4.
Variable selection in the presence of grouped variables is troublesome for competing risks data: while some recent methods deal with group selection only, simultaneous selection of both groups and within-group variables remains largely unexplored. In this context, we propose an adaptive group bridge method for competing risks data that enables simultaneous selection both within and between groups. The adaptive group bridge is applicable to independent and clustered data, and it allows the number of variables to diverge as the sample size increases. We show that the new method possesses excellent asymptotic properties, including variable selection consistency at the group and within-group levels. We also show superior performance on simulated and real data sets over several competing approaches, including the group bridge, the adaptive group lasso, and AIC/BIC-based methods.
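For reference, the generic form of an adaptive group bridge criterion; the notation (groups G_j, adaptive weights w_j, bridge exponent γ) is assumed from the group bridge literature rather than taken from the paper:

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\;
\ell_n(\beta) \;+\; \lambda_n \sum_{j=1}^{J} w_j
\Bigl( \sum_{k \in G_j} |\beta_k| \Bigr)^{\gamma},
\qquad 0 < \gamma < 1 .
```

The outer exponent γ < 1 on the group-wise ℓ1 norms sets whole groups exactly to zero, while the inner ℓ1 norm zeroes individual coefficients within retained groups; the weights w_j, built from an initial estimate, make the selection adaptive.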

5.

We consider the problem of estimating Weibull parameters from grouped data when competing risks are present. We propose two simple methods of estimation and derive their asymptotic properties. A Monte Carlo study was carried out to evaluate the performance of the two methods.
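One standard way to estimate Weibull parameters from grouped data is to maximize the multinomial likelihood with cell probabilities F(b_j) − F(b_{j−1}). The single-risk sketch below illustrates that setting only; the paper's two estimators and the competing-risks extension are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Simulate Weibull(shape=1.5, scale=2.0) lifetimes, then group them into
# bins as they would be recorded in grouped data.
t = weibull_min.rvs(1.5, scale=2.0, size=500, random_state=rng)
edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, np.inf])
counts, _ = np.histogram(t, bins=edges)

def neg_loglik(params):
    """Multinomial log-likelihood: cell probability = F(b_j) - F(b_{j-1})."""
    shape, scale = np.exp(params)             # optimize on the log scale
    cdf = weibull_min.cdf(edges, shape, scale=scale)
    p = np.clip(np.diff(cdf), 1e-12, None)
    return -(counts * np.log(p)).sum()

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("shape, scale:", np.exp(res.x))
```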

6.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that replaces the inner level of resampling by an analytical approximation based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
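The brute-force double bootstrap being sped up looks like this: an inner bootstrap level nested inside every outer resample, here used to estimate the true coverage of a nominal 90% percentile interval for a mean. The B1·B2 resamples needed per candidate level are exactly the cost the saddlepoint approximation removes; all data and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(size=50)            # skewed sample; target: the mean
B1, B2 = 500, 200                     # outer and inner resample counts

def pct_interval(sample, level, B):
    """Percentile interval for the mean from B bootstrap resamples."""
    stats = np.array([rng.choice(sample, len(sample)).mean() for _ in range(B)])
    return np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])

# Inner level: for each outer resample, check whether a percentile interval
# built from it covers the original estimate. The empirical coverage tells
# us how to adjust the nominal level (calibration).
theta_hat = x.mean()
nominal = 0.90
cover = 0
for _ in range(B1):
    xb = rng.choice(x, len(x))
    lo, hi = pct_interval(xb, nominal, B2)
    cover += (lo <= theta_hat <= hi)
print("estimated true coverage of the 90% interval:", cover / B1)
# A full calibration would search over nominal levels until the estimated
# coverage hits the target -- B1 * B2 resamples per candidate level.
```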

7.
In the analysis of competing risks data, the cumulative incidence function is a useful summary of the overall crude risk for a failure type of interest. Mixture regression modelling has served as a natural approach to covariate analysis based on this quantity. However, existing mixture regression methods for competing risks data either impose parametric assumptions on the conditional risks or require stringent censoring assumptions. In this article, we propose a new semiparametric regression approach for competing risks data under the usual conditionally independent censoring mechanism. We establish the consistency and asymptotic normality of the resulting estimators. A simple resampling method is proposed to approximate the distribution of the estimated parameters and that of the predicted cumulative incidence functions. Simulation studies and an analysis of a breast cancer dataset demonstrate that the method performs well with realistic sample sizes and is appropriate for practical use.

8.
We propose a unified approach, flexibly applicable to various types of grouped data, for estimating and testing parametric income distributions. To simplify its use, we also provide a parametric bootstrap method and show its asymptotic validity. We compare this approach with existing methods for grouped income data and assess their finite-sample performance in a Monte Carlo simulation. For empirical demonstration, we apply the approach to recovering China's income/consumption distributions from a sequence of income/consumption share tables, and the U.S. income distributions from a combination of income shares and sample quantiles. Supplementary materials for this article are available online.
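As a small illustration of fitting a parametric income distribution to share tables (not the authors' unified method): for a lognormal, the Lorenz curve is L(p) = Φ(Φ⁻¹(p) − σ), so σ can be fitted by matching model-implied decile income shares to observed ones. The shares below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Grouped data: decile population shares and their observed income shares.
p = np.linspace(0.1, 1.0, 10)                 # cumulative population shares
obs_shares = np.array([0.02, 0.03, 0.04, 0.05, 0.06,
                       0.08, 0.10, 0.13, 0.18, 0.31])

def lorenz_lognormal(p, sigma):
    """Lorenz curve of a lognormal: L(p) = Phi(Phi^{-1}(p) - sigma)."""
    return norm.cdf(norm.ppf(np.clip(p, 1e-12, 1 - 1e-12)) - sigma)

def loss(sigma):
    L = lorenz_lognormal(p, sigma)
    model_shares = np.diff(np.concatenate([[0.0], L]))
    return ((model_shares - obs_shares) ** 2).sum()

res = minimize_scalar(loss, bounds=(0.01, 3.0), method="bounded")
print("fitted sigma:", res.x)
# mu is then pinned down by the observed mean: mean = exp(mu + sigma^2 / 2).
```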

9.
Interval-grouped data arise, in general, when the event of interest cannot be observed directly and is known only to have occurred within an interval. In this framework, a nonparametric kernel density estimator is proposed and studied. The approach is based on the classical Parzen–Rosenblatt estimator and on a generalisation of the binned kernel density estimator. The asymptotic bias and variance of the proposed estimator are derived under the usual assumptions, and the effect of using non-equally spaced grouped data is analysed. Additionally, a plug-in bandwidth selector is proposed. A comprehensive simulation study examines the behaviour of both the estimator and the plug-in bandwidth selector under different data-grouping scenarios. An application to real data confirms the simulation results, revealing the good performance of the estimator whenever the data are not heavily grouped.
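A minimal binned kernel density estimator of the kind the proposal generalises, in which each bin contributes its count at the bin midpoint; grid and bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Interval-grouped data: only bin edges and counts are observed.
raw = rng.normal(size=1000)
edges = np.linspace(-4, 4, 17)                 # could be non-equally spaced
counts, _ = np.histogram(raw, bins=edges)
mids = 0.5 * (edges[:-1] + edges[1:])
n = counts.sum()

def binned_kde(x, h):
    """Binned kernel density estimate: f_hat(x) = (1/n) sum_j n_j K_h(x - t_j),
    where t_j is the midpoint and n_j the count of bin j."""
    u = (x[:, None] - mids[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (counts * K).sum(axis=1) / (n * h)

grid = np.linspace(-4, 4, 200)
f_hat = binned_kde(grid, h=0.4)
print(f_hat.max())
```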

10.
This article considers the statistical analysis of a dependent competing risks model with incomplete data under Type-I progressive hybrid censoring, using a Marshall–Olkin bivariate Weibull distribution. Maximum likelihood estimators for the unknown parameters are obtained via the expectation–maximization (EM) algorithm, and the missing information principle is used to obtain the observed information matrix. As the maximum likelihood approach may fail when the available information is insufficient, a Bayesian approach incorporating auxiliary variables is developed for estimating the parameters of the model, and a Monte Carlo method is employed to construct the highest posterior density credible intervals. The proposed method is illustrated through a numerical example under different progressive censoring schemes and masking probabilities. Finally, a real data set is analyzed for illustrative purposes.
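A sketch of simulating dependent competing risks from a Marshall–Olkin-type bivariate Weibull, assuming the usual construction (three independent exponential shocks plus a common power transform); parameter values are arbitrary, and the censoring scheme and masking are omitted. Note the common shock produces simultaneous failures, which is what motivates masked causes.

```python
import numpy as np

rng = np.random.default_rng(5)

def mo_bivariate_weibull(n, lam1, lam2, lam12, alpha):
    """Marshall-Olkin construction: three independent exponential shocks;
    component k fails at the first of its own shock or the common shock.
    A power transform turns the MO bivariate exponential into a bivariate
    Weibull with common shape alpha."""
    z1 = rng.exponential(1 / lam1, n)
    z2 = rng.exponential(1 / lam2, n)
    z12 = rng.exponential(1 / lam12, n)
    t1 = np.minimum(z1, z12) ** (1 / alpha)
    t2 = np.minimum(z2, z12) ** (1 / alpha)
    return t1, t2

# Dependent competing risks: observe the first failure time and its cause.
t1, t2 = mo_bivariate_weibull(10_000, lam1=0.6, lam2=0.9, lam12=0.4, alpha=1.5)
obs = np.minimum(t1, t2)
cause = np.where(t1 < t2, 1, np.where(t2 < t1, 2, 0))  # 0: common shock
print("P(cause 1), P(cause 2), P(simultaneous):",
      [(cause == c).mean() for c in (1, 2, 0)])
```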

11.
Grouped data are commonly encountered in applications; indeed, all data from a continuous population are grouped, owing to rounding of the individual observations. The Bernstein polynomial model is proposed in this paper as an approximate model for estimating a univariate density function from grouped data. The coefficients of the Bernstein polynomial, being the mixture proportions of beta distributions, can be estimated using an EM algorithm. The optimal degree of the Bernstein polynomial can be determined using a change-point estimation method. The rate of convergence of the proposed density estimate to the true density is proved to be nearly parametric, by an acceptance–rejection argument of the kind used for generating random numbers. The proposed method is compared with some existing methods in a simulation study and is applied to the Chicken Embryo Data.
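Because the Bernstein components are fixed beta densities Beta(k, m−k+1), only the mixture weights need estimating, and the EM step for grouped data works entirely on bin probabilities. A sketch with invented grouping; the degree m is fixed here rather than chosen by change-point estimation.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)

# Grouped observations on [0, 1]: only bin edges and counts are kept.
raw = rng.beta(2, 5, size=2000)
edges = np.linspace(0, 1, 11)
counts, _ = np.histogram(raw, bins=edges)

m = 8                                   # degree of the Bernstein polynomial
# Component k is Beta(k, m - k + 1); precompute each component's bin mass.
comp_bin_prob = np.array([
    np.diff(beta.cdf(edges, k, m - k + 1)) for k in range(1, m + 1)
])                                      # shape (m, n_bins)

w = np.full(m, 1 / m)
for _ in range(200):
    # E-step: responsibility of component k for each bin.
    num = w[:, None] * comp_bin_prob
    resp = num / num.sum(axis=0, keepdims=True)
    # M-step: weights proportional to expected counts per component.
    w = (resp * counts).sum(axis=1) / counts.sum()

grid = np.linspace(0, 1, 101)
f_hat = sum(w[k - 1] * beta.pdf(grid, k, m - k + 1) for k in range(1, m + 1))
print(w.round(3))
```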

12.
Semiparametric regression models with multiple covariates are commonly encountered. When some covariates are not associated with the response variable, variable selection can lead to sparser models, more lucid interpretations, and more accurate estimation. In this study, we adopt a sieve approach for estimating the nonparametric covariate effects in semiparametric regression models and a two-step iterated penalization approach for variable selection. In the first step, a mixture of the Lasso and group Lasso penalties is employed to conduct the first-round variable selection and obtain the initial estimate. In the second step, a mixture of the weighted Lasso and weighted group Lasso penalties, with weights constructed from the initial estimate, is employed for variable selection. We show that the proposed iterated approach has the variable selection consistency property even when the number of unknown parameters diverges with the sample size. Numerical studies, including simulation and the analysis of a diabetes dataset, show the satisfactory performance of the proposed approach.
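A sketch of the two-step idea for individual variables only (the group Lasso component is omitted): fit a Lasso, then a weighted Lasso whose weights come from the initial estimate, implemented via the standard column-rescaling trick. Tuning parameters and data are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(size=n)

# Step 1: ordinary Lasso gives the initial estimate.
init = Lasso(alpha=0.1).fit(X, y).coef_

# Step 2: weighted Lasso with weights 1/|initial estimate|. Rescaling the
# columns by |b_j| and solving an ordinary Lasso in the rescaled design is
# equivalent to penalizing |beta_j| / |b_j| in the original design.
scale = np.abs(init) + 1e-8            # avoid division by zero
gamma = Lasso(alpha=0.1).fit(X * scale, y).coef_
beta_hat = gamma * scale               # map back to the original scale
print(np.round(beta_hat, 2))
```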

13.
In recent years, different approaches for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event, have been introduced. These approaches focus either on cause-specific or on subdistribution hazard rates. Many of the newer approaches use complicated weighting techniques or resampling methods that do not permit an analytical evaluation. Simulation studies therefore often replace analytical comparisons, since they can be performed more easily and allow the investigation of non-standard scenarios. For adequate simulation studies, the generation of appropriate random numbers is essential. We present an approach to generating competing risks data that follow flexible, prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen so that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
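A common concrete instance of this kind of generator, following Fine and Gray's original simulation design (which may differ in detail from the paper's): the cause-1 subdistribution F1(t|x) = 1 − (1 − p(1 − e^(−t)))^exp(xβ1) is specified directly and inverted, and cause-2 times are drawn from a convenient conditional distribution.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_subdist(n, beta1, beta2, p=0.6):
    """Simulate competing risks data whose cause-1 subdistribution follows a
    proportional subdistribution hazards model (Fine & Gray, 1999 design)."""
    x = rng.normal(size=n)
    eta = np.exp(x * beta1)
    p1 = 1 - (1 - p) ** eta                 # P(cause = 1 | x) = F1(inf | x)
    cause = np.where(rng.uniform(size=n) < p1, 1, 2)

    t = np.empty(n)
    u = rng.uniform(size=n)
    # Cause 1: invert the conditional subdistribution F1(t | x) / p1.
    i1 = cause == 1
    t[i1] = -np.log(1 - (1 - (1 - u[i1] * p1[i1]) ** (1 / eta[i1])) / p)
    # Cause 2: any convenient conditional distribution, e.g. exponential
    # with its own covariate effect.
    i2 = ~i1
    t[i2] = rng.exponential(1 / np.exp(x[i2] * beta2))
    return t, cause, x

t, cause, x = simulate_subdist(5000, beta1=0.5, beta2=-0.5)
print("observed cause-1 fraction:", (cause == 1).mean())
```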

14.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustment for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting a Cox model for the censoring distribution and using the predicted probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is essentially unbiased when the censoring time depends on the covariates, and that the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate the methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research, where cancer relapse and death in complete remission are the two competing risks.
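The underlying idea is inverse-probability-of-censoring weighting with a covariate-dependent censoring survival G(t | x). The paper fits a Cox model for the censoring distribution; the sketch below uses the simpler stratified Kaplan–Meier version for a binary covariate, which conveys the same weighting idea on simulated data.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 600
z = rng.integers(0, 2, size=n)                 # binary covariate
event_t = rng.exponential(np.exp(-0.5 * z))
cens_t = rng.exponential(np.exp(0.7 * z))      # censoring depends on z
time = np.minimum(event_t, cens_t)
event = event_t <= cens_t                      # True: failure observed

def km_censoring(t, is_event):
    """Kaplan-Meier estimate of the censoring survival G: censorings are
    the 'events' here, failures are treated as censored."""
    order = np.argsort(t)
    t, is_event = t[order], is_event[order]
    at_risk = np.arange(len(t), 0, -1)
    surv = np.cumprod(1 - (~is_event) / at_risk)
    return t, surv

def G_at(grid, surv, t):
    """Left-continuous lookup G(t-)."""
    idx = np.searchsorted(grid, t, side="left") - 1
    return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

# Covariate-adjusted weights: estimate G separately within each covariate
# level; a failure at time T_i gets weight 1 / G(T_i- | z_i).
w = np.empty(n)
for g in (0, 1):
    m = z == g
    tg, sg = km_censoring(time[m], event[m])
    w[m] = np.where(event[m],
                    1.0 / np.maximum(G_at(tg, sg, time[m]), 1e-8), 0.0)
print("mean weight among observed failures:", w[event].mean())
```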

15.
In this article, an importance sampling (IS) method for the posterior expectation of a nonlinear function in a Bayesian vector autoregressive (VAR) model is developed. Most Bayesian inference problems involve evaluating the expectation of a function of interest, usually a nonlinear function of the model parameters, under the posterior distribution. Nonlinear functions in the Bayesian VAR setting are difficult to estimate and usually require numerical methods for their evaluation. A weighted IS estimator is used to evaluate the posterior expectation. With the cross-entropy (CE) approach, the IS density is chosen from a specified family of densities so that the CE distance, or Kullback–Leibler divergence, between the optimal IS density and the importance density is minimized. The performance of the proposed algorithm is assessed in iterated multistep forecasting of US macroeconomic time series.
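The weighted (self-normalized) IS estimator itself is a few lines. The sketch below uses a toy one-dimensional "posterior" and a hand-picked Student-t importance density; the CE approach would instead tune the importance-density parameters to minimize the Kullback–Leibler divergence to the optimal IS density proportional to |g| times the posterior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Toy posterior, known only up to a constant. Target: the posterior
# expectation of the nonlinear function g(theta) = exp(theta).
def log_post(theta):
    return stats.norm.logpdf(theta, loc=1.0, scale=0.5)

g = np.exp

# Importance density: a heavier-tailed Student-t with fixed parameters.
loc, scale, dof = 1.0, 1.0, 5
theta = stats.t.rvs(dof, loc=loc, scale=scale, size=20_000, random_state=rng)

log_w = log_post(theta) - stats.t.logpdf(theta, dof, loc=loc, scale=scale)
w = np.exp(log_w - log_w.max())         # stabilize before normalizing

estimate = (w * g(theta)).sum() / w.sum()   # self-normalized IS estimator
print(estimate, "exact:", np.exp(1.0 + 0.5**2 / 2))
```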

16.
It is shown that data sharpening can be used to produce density estimators that enjoy arbitrarily high orders of bias reduction. Practical advantages of this approach relative to competing methods are demonstrated. They include the sheer simplicity of the estimators, which makes code for computing them particularly easy to write, very good mean-squared error performance, reduced 'wiggliness' of estimates, and greater robustness against undersmoothing.
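A sketch of the simplest sharpening transform from this literature, assumed here to take the form x_i → x_i + (h²/2) f̂′(x_i)/f̂(x_i) (move each point up the estimated density gradient), after which the ordinary kernel estimator is applied to the sharpened points; this cancels the leading O(h²) bias term while keeping the estimate nonnegative.

```python
import numpy as np

rng = np.random.default_rng(11)
x = rng.normal(size=500)
h = 0.4

def kde(data, pts, h):
    u = (pts[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_deriv(data, pts, h):
    u = (pts[:, None] - data[None, :]) / h
    return (-u * np.exp(-0.5 * u**2)).sum(axis=1) / (
        len(data) * h**2 * np.sqrt(2 * np.pi))

# Sharpen the data, then apply the ordinary kernel estimator.
f = kde(x, x, h)
fp = kde_deriv(x, x, h)
x_sharp = x + 0.5 * h**2 * fp / np.maximum(f, 1e-12)

grid = np.linspace(-3, 3, 121)
f_sharpened = kde(x_sharp, grid, h)
print(f_sharpened[60])  # estimate near 0; the true N(0,1) density is ~0.399
```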

17.
The reversible jump Markov chain Monte Carlo (MCMC) sampler (Green, Biometrika 82:711–732, 1995) has become an invaluable device for Bayesian practitioners. However, the primary difficulty with the sampler lies in the efficient construction of transitions between competing models of possibly differing dimensionality and interpretation. We propose the use of a marginal density estimator to construct between-model proposal distributions. This provides both a step towards black-box simulation for reversible jump samplers and a tool for examining the utility of common between-model mapping strategies. We compare the performance of our approach to well-established alternatives in both time series and mixture model examples.

18.
Bandwidth selection was an active research topic in the 1980s and 1990s; recently, however, there has been little work in the area. We re-opened this investigation and found a new method for estimating the mean integrated squared error (MISE) of kernel density estimators. We provide an overview of other methods for obtaining optimal bandwidths and compare the methods in a simulation study. In certain situations, our method of estimating an optimal bandwidth yields a smaller MISE than competing bandwidth-selection methods. The procedure is illustrated by an application to two data sets.
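For context, one of the classical competing selectors (not the authors' new method): least-squares cross-validation, which minimizes an unbiased estimate of MISE up to a constant; both terms have closed forms for the Gaussian kernel.

```python
import numpy as np

rng = np.random.default_rng(12)
x = rng.normal(size=200)
n = len(x)

def lscv(h):
    """LSCV score: integral(f_hat^2) - (2/n) * sum_i f_hat_{-i}(x_i)."""
    d = x[:, None] - x[None, :]
    # integral of f_hat^2: Gaussian kernels convolve to a N(0, 2h^2) density.
    term1 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * 2 * h * np.sqrt(np.pi))
    # leave-one-out sum: exclude the diagonal (i == j) terms.
    K = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    loo = (K.sum() - np.trace(K)) / (n * (n - 1))
    return term1 - 2 * loo

hs = np.linspace(0.1, 1.0, 46)
scores = [lscv(h) for h in hs]
print("LSCV-optimal bandwidth:", hs[int(np.argmin(scores))])
```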

19.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems, but unfortunately most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), in which DP is used to solve each of a sequence of one-dimensional problems in turn, recursively, to find a local optimum. The second algorithm is an empirical, stochastic optimiser, implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive forward–backward Gibbs sampler and uses a simulated annealing cooling schedule. Results are compared with the existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: restoring a synthetic aperture radar (SAR) image, and warping a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the other algorithms.
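The inner step of IDP is an exact one-dimensional dynamic program. A sketch for restoring a noisy 1-D signal with quantized labels; full IDP would alternate such exact solves over the rows and columns of an image, conditioning each line on its current neighbours, until the labelling stabilizes. The energy and label set are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)

def dp_chain(y, labels, beta):
    """Exact 1-D optimisation by DP (Viterbi): minimize
    sum_i (y_i - l_i)^2 + beta * sum_i |l_i - l_{i-1}| over label paths."""
    n, m = len(y), len(labels)
    cost = (y[0] - labels) ** 2
    back = np.zeros((n, m), dtype=int)
    pair = np.abs(labels[:, None] - labels[None, :])   # transition penalty
    for i in range(1, n):
        total = cost[:, None] + beta * pair            # (prev, cur)
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + (y[i] - labels) ** 2
    path = np.empty(n, dtype=int)
    path[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return labels[path]

truth = np.repeat([0.0, 1.0, 0.0], 30)                 # piecewise signal
y = truth + rng.normal(scale=0.4, size=truth.size)
labels = np.linspace(0, 1, 11)
x_hat = dp_chain(y, labels, beta=0.3)
print("mean absolute error:", np.abs(x_hat - truth).mean())
```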

20.
We propose a simple but effective estimation procedure for extracting the level and volatility dynamics of a latent macroeconomic factor from a panel of observable indicators. Our approach is based on a multivariate conditionally heteroskedastic exact factor model that can accommodate the heteroskedasticity exhibited by most macroeconomic variables, and it relies on an iterated Kalman filter procedure. In simulations we show the unbiasedness of the proposed estimator and its superiority over alternative approaches from the literature. The simulation results are confirmed in applications to real inflation data, with the goal of forecasting long-term bond risk premia. Moreover, we find that the extracted level and conditional variance of the latent factor for inflation are strongly related to NBER business cycles.
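A sketch of the Kalman filter for a single latent level factor observed through a panel of indicators, in the homoskedastic special case; the article's model makes the variances time-varying and iterates the filter to update them. The loadings and variances below are invented.

```python
import numpy as np

rng = np.random.default_rng(14)

# Latent level factor observed through k noisy indicators:
#   f_t = f_{t-1} + w_t,   w_t ~ N(0, q)
#   y_t = Z f_t + v_t,     v_t ~ N(0, H),  y_t in R^k
T, k = 300, 4
Z = np.array([1.0, 0.8, 1.2, 0.9])
q, H = 0.05, np.diag([0.2, 0.3, 0.25, 0.4])

f = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
Y = f[:, None] * Z + rng.multivariate_normal(np.zeros(k), H, size=T)

# Kalman filter for the scalar state.
a, P = 0.0, 1.0
f_hat = np.empty(T)
for t in range(T):
    P_pred = P + q                       # prediction step (random walk)
    S = np.outer(Z, Z) * P_pred + H      # innovation covariance
    Kt = P_pred * np.linalg.solve(S, Z)  # Kalman gain (k-vector)
    innov = Y[t] - Z * a
    a = a + Kt @ innov                   # update step
    P = P_pred - P_pred * (Kt @ Z)
    f_hat[t] = a
print("corr(f, f_hat):", np.corrcoef(f, f_hat)[0, 1].round(3))
```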
