Similar articles
20 similar articles found (search time: 15 ms)
1.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model was used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters were shown to depend on a latent variable. Hence it becomes impractical to directly estimate the unknown quantities in the moments of the estimators, which are needed for the bandwidth selection of a smoothing estimator and for the construction of confidence intervals, based respectively on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. From a practical point of view, our method is simple and fast to implement, and it is at least as accurate as other bootstrap methods. A Monte Carlo simulation shows that the proposed method performs well. An application of the procedure is illustrated with a recurrent event sample of intravenous drug users receiving inpatient care over time.
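The random weighted bootstrap at the heart of such a procedure can be sketched in its simplest form: draw i.i.d. Exp(1) weights, normalise them to sum to one, and recompute a weighted statistic. The sketch below (plain Python, with a toy weighted mean standing in for the paper's far more involved rate-function estimators, and illustrative data) shows only the mechanics:

```python
import random

def random_weighted_bootstrap(data, stat, n_boot=500, seed=0):
    """Random-weighted (Bayesian) bootstrap: instead of resampling
    observations, draw i.i.d. Exp(1) weights, normalise them to sum
    to one, and recompute the weighted statistic."""
    rng = random.Random(seed)
    n, reps = len(data), []
    for _ in range(n_boot):
        w = [rng.expovariate(1.0) for _ in range(n)]
        s = sum(w)
        reps.append(stat(data, [wi / s for wi in w]))
    return reps

def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

data = [1.2, 0.7, 2.3, 1.9, 0.4, 1.1, 3.0, 0.9]   # toy sample
reps = sorted(random_weighted_bootstrap(data, weighted_mean))
lo, hi = reps[int(0.025 * len(reps))], reps[int(0.975 * len(reps))]  # 95% CI
```

Because no resampling with replacement is involved, every replicate uses all observations, which is one reason such weighting schemes are fast and numerically stable to implement.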

2.
This work concerns the estimation of a smooth survival function based on doubly censored data. We establish strong consistency and asymptotic normality for a kernel estimator. Moreover, we also obtain an asymptotic expression for the mean integrated squared error, which yields an optimum bandwidth in terms of readily estimable quantities.

3.
A new kernel distribution function (df) estimator based on a non-parametric transformation of the data is proposed. It is shown that the asymptotic bias and mean squared error of the estimator are considerably smaller than those of the standard kernel df estimator. For the practical implementation of the new estimator, a data-based choice of the bandwidth is proposed. Two possible areas of application are the non-parametric smoothed bootstrap and survival analysis; in the latter case, new estimators for the survival function and the mean residual life function are derived.

4.
The problem of bandwidth selection for kernel-based estimation of the distribution function (cdf) at a given point is considered. With an appropriate bandwidth, a kernel-based estimator (kdf) is known to outperform the empirical distribution function. However, such a bandwidth is unknown in practice. In pointwise estimation, the appropriate bandwidth depends on the point where the function is estimated, yet the existing smoothing methods use one common bandwidth to estimate the cdf, so the accuracy of the resulting estimates varies substantially depending on the cdf and on the point where it is estimated. We propose to select the bandwidth by minimizing a bootstrap estimator of the MSE of the kdf. The resulting estimator performs reliably, irrespective of where the cdf is estimated, and is shown to be consistent under i.i.d. as well as strongly mixing dependence assumptions. Two applications of the proposed estimator are given, in finance and in seismology, and a dataset of S&P Nifty index values is reported.
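A minimal plug-in version of this idea can be sketched as follows: estimate the MSE of the kernel cdf estimator at the point of interest by resampling, with the empirical cdf standing in for the unknown truth, and minimise over a bandwidth grid. All names, the grid, and the data below are illustrative, not the article's exact algorithm:

```python
import math
import random

def kcdf(x, data, h):
    """Kernel (Gaussian) estimate of the cdf at the point x."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(Phi((x - xi) / h) for xi in data) / len(data)

def boot_mse_bandwidth(data, x, grid, n_boot=100, seed=1):
    """Pick the h in `grid` minimising a bootstrap estimate of the MSE
    of kcdf at x; the empirical cdf plays the role of the 'truth'."""
    rng = random.Random(seed)
    target = sum(xi <= x for xi in data) / len(data)   # ecdf at x
    best_h, best_mse = None, float("inf")
    for h in grid:
        mse = 0.0
        for _ in range(n_boot):
            boot = [rng.choice(data) for _ in data]    # resample n points
            mse += (kcdf(x, boot, h) - target) ** 2
        if mse / n_boot < best_mse:
            best_h, best_mse = h, mse / n_boot
    return best_h

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) for _ in range(40)]
grid = [0.05, 0.1, 0.2, 0.4, 0.8]
h = boot_mse_bandwidth(data, 0.0, grid)
```

The point-dependence is visible here: rerunning the selection at a different x can return a different h from the grid, which is exactly what a single global bandwidth cannot provide.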

5.
Data Sharpening for Hazard Rate Estimation
Data sharpening is a general tool for enhancing the performance of statistical estimators, by altering the data before substituting them into conventional methods. In one of the simplest forms of data sharpening, available for curve estimation, an explicit empirical transformation is used to alter the data. The attraction of this approach is diminished, however, if the formula has to be altered for each different application. For example, one could expect the formula for use in hazard rate estimation to differ from that for straight density estimation, since a hazard rate is a ratio-type functional of a density. This paper shows that, in fact, identical data transformations can be used in each case, regardless of whether the data involve censoring. This dramatically simplifies the application of data sharpening to problems involving hazard rate estimation, and makes data sharpening attractive.
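For illustration only, a mean-shift style perturbation (a generic stand-in, not the paper's exact transformation) conveys the "alter the data, then use a conventional estimator" pattern: each point is moved toward its kernel-weighted local mean before the sample enters an ordinary density estimator:

```python
import math
import random

def sharpen(data, h):
    """One mean-shift style sharpening pass: replace each point by the
    kernel-weighted mean of its neighbours, pulling mass toward modes."""
    K = lambda u: math.exp(-0.5 * u * u)
    out = []
    for x in data:
        w = [K((x - y) / h) for y in data]
        out.append(sum(wi * yi for wi, yi in zip(w, data)) / sum(w))
    return out

def kde(x, data, h):
    """Ordinary Gaussian kernel density estimate at x."""
    K = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(K((x - xi) / h) for xi in data) / (len(data) * h)

rng = random.Random(5)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]
sharp = sharpen(data, 0.5)
# the sharpened sample feeds into the *same* conventional estimator
f_raw, f_sharp = kde(0.0, data, 0.3), kde(0.0, sharp, 0.3)
```

The point the abstract makes is that one such transformation can serve density and hazard estimation alike; the conventional estimator downstream is what changes, not the sharpening step.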

6.
In this article, we formulate a semiparametric model for counting processes in which the effect of covariates is to transform the time scale for a baseline rate function. We assume an arbitrary dependence structure for the counting process and propose a class of estimating equations for the regression parameters. Asymptotic results for these estimators are derived. In addition, goodness of fit methods for assessing the adequacy of the accelerated rates model are proposed. The finite-sample behavior of the proposed methods is examined in simulation studies, and data from a chronic granulomatous disease study are used to illustrate the methodology.

7.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some other competing estimators. A simulation study comparing them with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the estimators have comparable finite-sample global as well as local behavior.

8.
Marginal Means/Rates Models for Multiple Type Recurrent Event Data
Recurrent events are frequently observed in biomedical studies, and often more than one type of event is of interest. Follow-up time may be censored due to loss to follow-up or administrative censoring. We propose a class of semi-parametric marginal means/rates models, with a general relative risk form, for assessing the effect of covariates on the censored event processes of interest. We formulate estimating equations for the model parameters, and examine asymptotic properties of the parameter estimators. Finite sample properties of the regression coefficients are examined through simulations. The proposed methods are applied to a retrospective cohort study of risk factors for preschool asthma.

9.
In this article, we propose a nonparametric approach for estimating the intensity function of temporal point processes based on kernel estimators. In particular, we use asymmetric kernel estimators characterized by the gamma distribution, in order to describe features of observed point patterns adequately. Some characteristics of these estimators are analyzed and discussed both through simulated results and applications to real data from different seismic catalogs.
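The gamma-kernel idea can be sketched for a homogeneous Poisson process: let the estimate at time x be a sum of Gamma(x/b + 1, b) densities evaluated at the event times, so that no kernel mass spills to negative times (a Chen-style asymmetric kernel; the bandwidth and simulated data below are illustrative, not the article's choices):

```python
import math
import random

def gamma_pdf(t, shape, scale):
    """Density of the Gamma(shape, scale) distribution at t."""
    if t <= 0.0:
        return 0.0
    return math.exp((shape - 1.0) * math.log(t) - t / scale
                    - math.lgamma(shape) - shape * math.log(scale))

def gamma_kernel_intensity(x, event_times, b):
    """Asymmetric gamma-kernel estimate of the intensity at time x:
    each event contributes a Gamma(x/b + 1, b) density value."""
    return sum(gamma_pdf(t, x / b + 1.0, b) for t in event_times)

# simulate a homogeneous Poisson process of rate 2 on [0, 50]
rng = random.Random(4)
times, t = [], 0.0
while True:
    t += rng.expovariate(2.0)
    if t > 50.0:
        break
    times.append(t)
lam25 = gamma_kernel_intensity(25.0, times, b=1.0)   # true intensity is 2
```

The kernel's shape varies with x, which is what makes the estimator adapt near the time origin where a symmetric kernel would leak probability into negative times.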

10.

In survival or reliability studies, it is common to have data that are not only incomplete but also weakly dependent. Random truncation and censoring are two common forms of such data when the observations are neither independent nor strongly mixing but rather associated. The focus of this paper is on estimating conditional distribution and conditional quantile functions for randomly left-truncated data satisfying an association condition. We derive strong uniform consistency rates and asymptotic normality for the estimators, and thereby extend to the associated case some results stated under i.i.d. and α-mixing hypotheses. The performance of the quantile function estimator is evaluated on simulated data sets.

11.
Recurrent events data with a terminal event often arise in longitudinal studies. Most existing models assume multiplicative covariate effects and model the conditional recurrent event rate given survival. In this article, we propose a marginal additive rates model for recurrent events with a terminal event, and we develop two procedures for estimating the model parameters. The asymptotic properties of the resulting estimators are established. In addition, numerical procedures for model checking are presented. The finite-sample behavior of the proposed methods is examined through simulation studies, and the methods are illustrated with an application to a bladder cancer study.

12.
13.
The Andersen-Gill multiplicative intensity (MI) model is well-suited to the analysis of recurrent failure-time data. The fundamental assumption of the MI model is that the process M_i(t), for subjects i = 1, …, n, defined as the difference between a subject's counting process and compensator, i.e., M_i(t) = N_i(t) − A_i(t), t ≥ 0, is a martingale with respect to some filtration. We propose omnibus procedures for testing this assumption. The methods are based on transformations of the estimated martingale residual process M̂_i(t), a function of consistent estimates of the log-intensity ratios and the baseline cumulative hazard. Under a correctly specified model, the expected value of M̂_i(t) is approximately equal to zero, with approximately uncorrelated increments. These properties are exploited in the proposed testing procedures. We examine the effects of censoring and covariate effects on the operating characteristics of the proposed methods via simulation. The procedures are most sensitive to the omission of a time-varying continuous covariate. We illustrate use of the methods in an analysis of data from a clinical trial involving patients with chronic granulomatous disease.
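In the special case of single-event survival data with no covariates, the residual at subject i's end of follow-up reduces to N_i(T_i) − Λ̂(T_i), with Λ̂ the Nelson-Aalen estimator, and the residuals sum to zero by construction. A sketch on illustrative data:

```python
def nelson_aalen_at(t, times, events):
    """Nelson-Aalen cumulative hazard estimate evaluated at time t."""
    haz = 0.0
    for tj in sorted(set(tt for tt, d in zip(times, events) if d)):
        if tj > t:
            break
        d = sum(1 for tt, dd in zip(times, events) if tt == tj and dd)
        y = sum(1 for tt in times if tt >= tj)   # number still at risk
        haz += d / y
    return haz

def martingale_residuals(times, events):
    """m_i = N_i(T_i) - Lambda_hat(T_i): events observed for subject i
    minus the cumulative hazard accrued over i's follow-up."""
    return [d - nelson_aalen_at(t, times, events)
            for t, d in zip(times, events)]

# illustrative right-censored sample: (follow-up time, event indicator)
times  = [2, 3, 3, 5, 7, 8, 9, 11]
events = [1, 1, 0, 1, 0, 1, 1, 0]
res = martingale_residuals(times, events)
```

The zero-sum and bounded-above-by-one properties visible here are the finite-sample shadows of the mean-zero martingale structure the proposed tests exploit.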

14.
We consider a recurrent event wherein the inter‐event times are independent and identically distributed with a common absolutely continuous distribution function F. In this article, interest is in the problem of testing the null hypothesis that F belongs to some parametric family where the q‐dimensional parameter is unknown. We propose a general Chi‐squared test in which cell boundaries are data dependent. An estimator of the parameter obtained by minimizing a quadratic form resulting from a properly scaled vector of differences between Observed and Expected frequencies is used to construct the test. This estimator is known as the minimum chi‐square estimator. Large sample properties of the proposed test statistic are established using empirical processes tools. A simulation study is conducted to assess the performance of the test under parameter misspecification, and our procedures are applied to a fleet of Boeing 720 jet planes' air conditioning system failures.
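The data-dependent equiprobable-cell construction can be sketched as follows, using the ordinary MLE of an exponential rate where the article uses the minimum chi-square estimator, and simulated inter-event times (everything below is an illustrative simplification):

```python
import math
import random

def chisq_equiprob(data, cdf, k=5):
    """Chi-squared GOF statistic with k equiprobable, data-dependent
    cells: the boundaries are fitted quantiles, so each expected count
    is n / k.  Implemented via the probability integral transform."""
    n = len(data)
    counts = [0] * k
    for x in data:
        counts[min(int(cdf(x) * k), k - 1)] += 1
    e = n / k
    return sum((c - e) ** 2 / e for c in counts)

rng = random.Random(2)
data = [rng.expovariate(1.3) for _ in range(200)]   # i.i.d. inter-event times
lam_hat = len(data) / sum(data)                     # plug-in MLE of the rate
stat = chisq_equiprob(data, lambda x: 1.0 - math.exp(-lam_hat * x), k=5)
```

Because the cell boundaries move with the fitted parameter, the expected frequencies are equal by construction; the price, as the abstract notes, is that the limiting distribution must be established carefully when the parameter is estimated.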

15.
The recurrent-event setting, in which subjects experience multiple occurrences of the event of interest, is encountered in many biomedical applications. In analyzing recurrent event data, noninformative censoring is often assumed for the implementation of statistical methods. However, when a terminating event such as death serves as part of the censoring mechanism, the validity of this censoring assumption may be violated, because recurrence can be a powerful risk factor for death. We consider joint modeling of the recurrent event process and the terminating event under a Bayesian framework, in which a shared frailty models the association between the intensity of the recurrent event process and the hazard of the terminating event. The proposed model is applied to data from a well-known cancer study.

16.
We propose kernel density estimators based on prebinned data. We use generalized binning schemes whose bin boundaries are the quantile points of an auxiliary distribution function; the uniform distribution corresponds to the usual binning. The statistical accuracy of the resulting kernel estimators is studied, i.e., we derive mean squared error results for the closeness of these estimators both to the true function and to the kernel estimator based on the original data set. Our results show the influence of the choice of the auxiliary density on the binned kernel estimators, and they reveal that non-uniform binning can be worthwhile.
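A sketch of non-uniform binning: collapse the data into equal-count bins whose edges are empirical quantiles (here the sample itself plays the role of the auxiliary distribution; the bin count and bandwidth are illustrative), then run a kernel estimator on the bin centres with the counts as weights:

```python
import math
import random

def binned_kde(n, bin_edges, counts, h, x):
    """Kernel density estimate at x from binned data: each bin is
    collapsed to its centre with weight count / n."""
    centres = [(a + b) / 2.0 for a, b in zip(bin_edges, bin_edges[1:])]
    K = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(c * K((x - m) / h) for c, m in zip(counts, centres)) / (n * h)

rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(500)]
# non-uniform binning: edges at empirical quantiles -> equal-count bins,
# so bins are narrow where the data are dense and wide in the tails
srt = sorted(data)
edges = ([srt[0] - 1e-9]
         + [srt[int(i / 20.0 * len(srt))] for i in range(1, 20)]
         + [srt[-1] + 1e-9])
counts = [sum(a <= v < b for v in data) for a, b in zip(edges, edges[1:])]
f0 = binned_kde(len(data), edges, counts, 0.3, 0.0)   # N(0,1) density at 0 is ~0.399
```

The gain of non-uniform binning is visible in the edge spacing: the bins are narrowest exactly where the density (and hence the binning bias of uniform schemes) is largest.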

17.
Most current false discovery rate (FDR) procedures in microarray experiments assume restrictive dependence structures, which makes them less reliable. We present an FDR controlling procedure that is valid under suitable dependence structures, based on a Poisson distributional approximation. Unlike other procedures, the distribution of false null hypotheses is estimated using kernel density estimation, allowing for dependence structures among the genes. Furthermore, we develop an FDR framework that minimizes the false nondiscovery rate (FNR) subject to a constraint on the controlled level of the FDR. The performance of the proposed FDR procedure is compared with that of other existing FDR controlling procedures, with an application to a simulated microarray study.
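For contrast with such dependence-adjusted procedures, the classical Benjamini-Hochberg step-up (which assumes independence or positive regression dependence) fits in a few lines; the p-values below are illustrative:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Classical BH step-up: find the largest rank k with
    p_(k) <= k * q / m and reject the k smallest p-values.
    Returns the indices of the rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205,
         0.212, 0.216, 0.222, 0.251, 0.269, 0.275, 0.340, 0.341,
         0.384, 0.569, 0.594, 0.696]
rejected = benjamini_hochberg(pvals, q=0.05)
```

Procedures like the one in the abstract aim to keep this kind of step-up calibrated when the genes, and hence the p-values, are dependent.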

18.
In biomedical studies, correlated failure time data arise often. Although point and confidence interval estimation for quantiles with independent censored failure time data have been extensively studied, estimation for quantiles with correlated failure time data has not been developed. In this article, we propose a nonparametric estimation method for quantiles with correlated failure time data. We derive the asymptotic properties of the quantile estimator and propose confidence interval estimators based on the bootstrap and kernel smoothing methods. Simulation studies are carried out to investigate the finite sample properties of the proposed estimators. Finally, we illustrate the proposed method with a data set from a study of patients with otitis media.
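The bootstrap confidence-interval idea for a quantile under right censoring can be sketched in the simpler independent-data setting (for correlated failure times one would resample clusters rather than individual subjects; the simulation and bootstrap size below are illustrative):

```python
import random

def kaplan_meier(times, events):
    """Sorted (t, S(t)) steps of the Kaplan-Meier estimator."""
    S, steps = 1.0, []
    for t in sorted(set(tt for tt, d in zip(times, events) if d)):
        d = sum(1 for tt, dd in zip(times, events) if tt == t and dd)
        y = sum(1 for tt in times if tt >= t)    # number at risk
        S *= 1.0 - d / y
        steps.append((t, S))
    return steps

def km_quantile(times, events, p=0.5):
    """Smallest event time t with S(t) <= 1 - p; None if not reached."""
    for t, s in kaplan_meier(times, events):
        if s <= 1.0 - p:
            return t
    return None

rng = random.Random(11)
true_t = [rng.expovariate(1.0) for _ in range(100)]   # Exp(1) lifetimes
cens   = [rng.expovariate(0.3) for _ in range(100)]   # independent censoring
obs    = [min(t, c) for t, c in zip(true_t, cens)]
events = [int(t <= c) for t, c in zip(true_t, cens)]

med = km_quantile(obs, events)                        # point estimate of the median
pairs = list(zip(obs, events))
reps = []
for _ in range(200):                                  # percentile bootstrap
    samp = [rng.choice(pairs) for _ in pairs]
    q = km_quantile([t for t, _ in samp], [d for _, d in samp])
    if q is not None:
        reps.append(q)
reps.sort()
ci = (reps[int(0.025 * len(reps))], reps[int(0.975 * len(reps))])
```

Resampling (time, indicator) pairs keeps the censoring pattern intact within each replicate, which is why the percentile interval needs no explicit variance formula.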

19.
One particular recurrent events data scenario involves patients experiencing events according to a common intensity rate, and then a treatment may be applied. The treatment might be effective for a limited amount of time, so that the intensity rate would be expected to change abruptly when the effect of the treatment wears out. In particular, we allow models for the intensity rate, post-treatment, to be at first decreasing and then change to increasing (and vice versa). Two estimators of the location of this change are proposed.
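A crude profile-likelihood sketch of locating an abrupt intensity change, for a Poisson-type process with a single constant rate before and after the break (a toy two-segment version; it does not capture the article's decreasing-then-increasing post-treatment shapes, and the simulation is deliberately simple):

```python
import math
import random

def changepoint_rate(event_times, span):
    """Profile log-likelihood split of a Poisson process on [0, span]
    into two constant-rate segments; returns the break time (taken at
    an event time) maximising the likelihood."""
    n = len(event_times)
    best_tau, best_ll = None, -float("inf")
    for k in range(1, n):                  # break just after the k-th event
        tau = event_times[k - 1]
        if tau <= 0.0 or tau >= span:
            continue
        r1, r2 = k / tau, (n - k) / (span - tau)   # segment MLE rates
        ll = (k * math.log(r1) - r1 * tau
              + (n - k) * math.log(r2) - r2 * (span - tau))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# crude simulation: rate 1 before t = 10, rate 4 after (the increment
# that crosses the break is drawn at the pre-break rate)
rng = random.Random(9)
times, t = [], 0.0
while True:
    t += rng.expovariate(1.0 if t < 10.0 else 4.0)
    if t > 20.0:
        break
    times.append(t)
tau_hat = changepoint_rate(times, 20.0)
```

Scanning candidate breaks at the event times is enough here because, between events, the profile likelihood is monotone in the break location.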

20.
The two-stage design is popular in epidemiology studies and clinical trials due to its cost effectiveness. Typically, the first stage sample contains cheaper and possibly biased information, while the second stage validation sample consists of a subset of subjects with accurate and complete information. In this paper, we study estimation of a survival function with right-censored survival data from a two-stage design. A non-parametric estimator is derived by combining data from both stages. We also study its large sample properties and derive pointwise and simultaneous confidence intervals for the survival function. The proposed estimator effectively reduces the variance and finite-sample bias of the Kaplan–Meier estimator based solely on the second stage validation sample. Finally, we apply our method to a real data set from a medical device postmarketing surveillance study.
