Similar literature
20 similar documents found (search time: 31 ms)
1.
Kaplan and Meier (1958) give a maximum likelihood estimator of the distribution function based on a univariate right censored sample. Here we investigate the extension of their results to the case of bivariate right censored samples. Following Efron (1967), we provide "self-consistent" estimators for the bivariate distribution function.
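As a point of reference for the univariate case this abstract starts from, the Kaplan-Meier product-limit estimator can be sketched in a few lines of plain Python (a minimal illustration of the univariate estimator only, not the bivariate self-consistent estimator the paper develops):

```python
def kaplan_meier(times, events):
    """Product-limit estimator of the survival function.

    times  : observed times (event or censoring)
    events : 1 if the time is an observed event, 0 if right censored
    Returns a list of (t, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = leaving = 0
        # collect all subjects tied at time t
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            leaving += 1
            i += 1
        if d > 0:  # survival only drops at observed event times
            surv *= 1.0 - d / n_at_risk
            out.append((t, surv))
        n_at_risk -= leaving
    return out

# 5 subjects; events at t = 1, 2, 3 and censored observations at t = 2, 4
est = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Censored subjects leave the risk set without contributing a drop, which is why the estimate steps down only at observed event times.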

2.
The paper considers goodness-of-fit tests with right censored or doubly censored data. The Fredholm Integral Equation (FIE) method proposed by Ren (1993) is implemented in simulation studies to estimate the null distribution of the Cramér-von Mises test statistics and the asymptotic covariance function of the self-consistent estimator for the lifetime distribution with right censored or doubly censored data. We show that for fixed alternatives, the bootstrap method does not estimate the null distribution consistently for doubly censored data. For the right censored case, a comparison between the performance of FIE and the n out of n bootstrap shows that FIE gives better estimation of the null distribution. The application of FIE to a set of right censored Channing House data and to a set of doubly censored breast cancer data is presented.
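For orientation, the classical one-sample Cramér-von Mises statistic for complete (uncensored) data has a simple closed form; the FIE machinery above is needed precisely because this simple form no longer applies under censoring. A minimal complete-data sketch:

```python
import math

def cramer_von_mises(sample, cdf):
    """One-sample Cramér-von Mises statistic for complete data:
    W^2 = 1/(12n) + sum_i (F(x_(i)) - (2i-1)/(2n))^2
    """
    xs = sorted(sample)
    n = len(xs)
    w2 = 1.0 / (12 * n)
    for i, x in enumerate(xs, start=1):
        w2 += (cdf(x) - (2 * i - 1) / (2 * n)) ** 2
    return w2

# Standard normal CDF via the error function
norm_cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
w2 = cramer_von_mises([-1.2, -0.4, 0.1, 0.8, 1.5], norm_cdf)
```

When the hypothesized CDF evaluated at the order statistics equals (2i-1)/(2n) exactly, the statistic attains its minimum value 1/(12n).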

3.
We propose a bivariate hurdle negative binomial (BHNB) regression model with right censoring to model correlated bivariate count data with excess zeros and few extreme observations. The parameters of the BHNB regression model are obtained using maximum likelihood with conjugate gradient optimization. The proposed model is applied to actual survey data where the bivariate outcome is number of days missed from primary activities and number of days spent in bed due to illness during the 4-week period preceding the inquiry date. We compared the right censored BHNB model to the right censored bivariate negative binomial (BNB) model. A simulation study is conducted to discuss some properties of the BHNB model. Our proposed model demonstrated superior performance in goodness of fit of estimated frequencies.
Keywords: zero inflation, over-dispersion, parameter estimation, model selection, right censoring

4.
Motivated by a recent tuberculosis (TB) study, this paper is concerned with covariates missing not at random (MNAR) and models the potential intracluster correlation by a frailty. We consider the regression analysis of right-censored event times from clustered subjects under a Cox proportional hazards frailty model and present the semiparametric maximum likelihood estimator (SPMLE) of the model parameters. An easy-to-implement pseudo-SPMLE is then proposed to accommodate more realistic situations using readily available supplementary information on the missing covariates. Algorithms are provided to compute the estimators and their consistent variance estimators. We demonstrate that both the SPMLE and the pseudo-SPMLE are consistent and asymptotically normal by arguments based on the theory of modern empirical processes. The proposed approach is examined numerically via simulation and illustrated with an analysis of the motivating TB study data.

5.
A mixture model is proposed to analyze bivariate interval censored data with cure rates. Two types of association arise, relating to the bivariate failure times and the bivariate cure rates, respectively. A correlation coefficient is adopted for the association of the bivariate cure rates and a copula function is applied to the bivariate survival times. The conditional expectations of the unknown quantities attributable to interval censoring and cure rates are calculated in the E-step of the ES (Expectation-Solving) algorithm, and the marginal estimates and association measures are obtained in the S-step through a two-stage procedure. A simulation study is performed to evaluate the suggested method, and data from HIV patients are analyzed as a real-data example.

6.
In this paper, we are concerned with nonparametric estimation of the density and the failure rate functions of a random variable X which is at risk of being censored. First, we establish the asymptotic normality of a kernel density estimator in a general censoring setup. Then, we apply our result in order to derive the asymptotic normality of both the density and the failure rate estimators in the cases of right, twice and doubly censored data. Finally, the performance and the asymptotic Gaussian behaviour of the studied estimators, based on either doubly or twice censored data, are illustrated through a simulation study.
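The complete-data kernel density estimator underlying such results is simple to state; accommodating censoring (e.g. through survival-based weighting) is the paper's contribution and is omitted here. A minimal Gaussian-kernel sketch:

```python
import math

def gaussian_kde(sample, x, bandwidth):
    """Kernel density estimate at x:
    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h), with Gaussian kernel K.
    """
    n = len(sample)
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / bandwidth) for xi in sample) / (n * bandwidth)

data = [0.1, 0.3, 0.35, 0.8, 1.1]
fhat = gaussian_kde(data, 0.3, bandwidth=0.2)
```

The estimate integrates to one and concentrates mass near the observations; the bandwidth h controls the bias-variance trade-off that drives the asymptotic normality results.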

7.
In this note we provide a general framework for describing interval-censored samples including estimation of the magnitude and rank positions of data that have been interval-censored so as to counteract the effect of censoring. This process of sample adjustment, or renovation, allows samples to be compared graphically, using diagrams (such as boxplots) which are based on ranks. The renovation process is based on Buckley-James regression estimators for linear regression with censored data.

8.
Nonparametric maximum likelihood estimation of bivariate survival probabilities is developed for interval censored survival data. We restrict our attention to the situation where response times within pairs are not distinguishable, and the univariate survival distribution is the same for any individual within any pair. Campbell's (1981) model is modified to incorporate this restriction. Existence and uniqueness of maximum likelihood estimators are discussed. This methodology is illustrated with a bivariate life table analysis of an angioplasty study where each patient undergoes two procedures.

9.
We consider the estimation problem under the Lehmann model (also called the Cox model) with interval-censored data, focusing on computational issues. There are two methods for computing the semi-parametric maximum likelihood estimator (SMLE) under this model: the Newton-Raphson (NR) method and the profile likelihood (PL) method. We show that they often do not get close to the SMLE. We propose several approaches to overcome the computational difficulty and apply our method to a breast cancer research data set.
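As background, Newton-Raphson on the score equation works well for smooth, low-dimensional likelihoods, which is exactly where the caution above applies: the same iteration can fail to get close to the semiparametric SMLE. A minimal sketch on a fully parametric example (exponential rate, where the MLE is known in closed form):

```python
def newton_raphson_mle(score, score_prime, theta0, tol=1e-10, max_iter=100):
    """Generic one-parameter Newton-Raphson on the score equation score(theta) = 0."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / score_prime(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Exponential(lam) sample: score = n/lam - sum(x), so the MLE is n / sum(x)
data = [0.5, 1.2, 0.7, 2.0]
n, s = len(data), sum(data)
lam_hat = newton_raphson_mle(lambda l: n / l - s,      # score
                             lambda l: -n / l ** 2,    # score derivative
                             theta0=1.0)
```

Here convergence to the closed-form MLE is immediate; the SMLE for interval-censored data involves a high-dimensional, non-smooth profile surface where such guarantees break down.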

10.
The article focuses mainly on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, imputed failure times, which are conditional quantiles, are obtained within the censoring intervals that contain exact failure times. The algorithm is viable and feasible for parameter estimation under general distributions; for instance, the Weibull distribution admits a closed-form moment estimator after log-transformation. Furthermore, an interval-censored sample is a special case of the new censored sample, so the conditional imputation algorithm can also be applied to interval-censored failure data. Comparing the interval-censored data with the new censored data under the imputation algorithm, in terms of estimation bias, we find that the new censored data perform better than the interval-censored data.
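The core idea of imputing a conditional quantile inside a censoring interval can be sketched for any distribution with a closed-form CDF. The exponential case below is an illustrative assumption for the sketch (a Weibull with shape 1), not the article's exact setup:

```python
import math

def conditional_quantile_impute(left, right, lam, p=0.5):
    """Impute an interval-censored Exponential(lam) time by its conditional p-quantile.

    Solves F(t) = F(left) + p * (F(right) - F(left)) with F(t) = 1 - exp(-lam*t),
    which gives t = -log((1-p)*exp(-lam*left) + p*exp(-lam*right)) / lam.
    """
    sl, sr = math.exp(-lam * left), math.exp(-lam * right)
    return -math.log((1 - p) * sl + p * sr) / lam

# Conditional median of an exponential lifetime known to lie in [1, 2]
t = conditional_quantile_impute(1.0, 2.0, lam=1.0)
```

By construction the imputed value always lies strictly inside the censoring interval, and p = 0 or p = 1 recovers the interval endpoints.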

11.
Sample entropy based tests, methods of sieves and Grenander estimation type procedures are known to be very efficient tools for assessing normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss difficulties related to constructing density based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to accord with known results on bivariate histogram density estimation. Monte Carlo results show that the new density based EL ratio tests for bivariate normality behave very well for finite sample sizes. To exemplify the applicability of the proposed approach, we present a real-data example.

12.
We study the performance of six proposed bivariate survival curve estimators on simulated right censored data. The performance of the estimators is compared for data generated by three bivariate models with exponential marginal distributions. The estimators are compared in their ability to estimate correlations and survival function probabilities. Simulated-data results are presented so that the proposed estimators in this relatively new area of analysis can be explicitly compared against the known distribution of the data and the parameters of the underlying model. The results show clear differences in the performance of the estimators.

13.
In this article, we propose a new empirical likelihood method for linear regression analysis with a right censored response variable. The method is based on the synthetic data approach for censored linear regression analysis. A log-empirical likelihood ratio test statistic for the entire regression coefficient vector is developed, and we show that it converges to a standard chi-squared distribution. The proposed method can also be used to make inferences about linear combinations of the regression coefficients. Moreover, the proposed empirical likelihood ratio provides a way to combine different normal equations derived from various synthetic response variables. Maximizing this empirical likelihood ratio yields a maximum empirical likelihood estimator which is asymptotically equivalent to the solution of the estimating equation that is the optimal linear combination of the original normal equations, improving estimation efficiency. The method is illustrated by Monte Carlo simulation studies as well as a real example.
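The synthetic data approach the abstract builds on can be illustrated with the Koul-Susarla-Van Ryzin style transform (one common construction; the paper considers several variants, and in practice the censoring survival function G would be estimated by Kaplan-Meier rather than assumed known as in this sketch):

```python
def synthetic_responses(obs, delta, G_surv):
    """Koul-Susarla-Van Ryzin style synthetic responses for censored regression.

    obs    : observed responses Z_i = min(Y_i, C_i)
    delta  : 1 if Z_i is uncensored, 0 if right censored
    G_surv : survival function of the censoring time, P(C > t)
             (known here for illustration; estimated in practice)
    The transform Y*_i = delta_i * Z_i / G_surv(Z_i) restores the conditional
    mean: E[Y*_i | X_i] = E[Y_i | X_i] when censoring is independent.
    """
    return [d * z / G_surv(z) if d else 0.0 for z, d in zip(obs, delta)]

# With no censoring (G_surv = 1 everywhere) the data pass through unchanged
ys = synthetic_responses([1.0, 2.0, 3.0], [1, 1, 1], lambda t: 1.0)
```

Censored observations are zeroed out while uncensored ones are inflated by the inverse censoring probability, so ordinary least squares applied to the synthetic responses remains consistent.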

14.
Recently, a least absolute deviations (LAD) estimator for median regression models with doubly censored data was proposed and its asymptotic normality established. However, inference on the regression parameter vector based on that result is impractical, because the asymptotic covariance matrices involve conditional densities of the error terms and are difficult to estimate reliably. In this article, three methods, based respectively on the bootstrap, random weighting, and empirical likelihood, none of which requires density estimation, are proposed for making inference in doubly censored median regression models. Simulations are conducted to assess the performance of the proposed methods.

15.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, these observed BQL data are nevertheless informative and generally known to be lower than the lower limit of quantification (LLQ). Setting BQLs as missing data violates the usual missing at random (MAR) assumption underlying the statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ], and can be considered as censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each of the simulated data sets to generate data sets with certain amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared. Copyright © 2009 John Wiley & Sons, Ltd.
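The censored-likelihood treatment of BQL data replaces each BQL record's density contribution with the probability of falling below the LLQ (the idea behind Beal's "M3" method in the NONMEM literature). A minimal sketch for a plain normal observation model (an illustrative assumption only; the population PK models above are non-linear mixed effects models):

```python
import math

def censored_normal_loglik(mu, sigma, values, llq):
    """Log-likelihood treating observations below `llq` as left-censored.

    Quantified values contribute the normal log-density; BQL values
    (recorded here as None) contribute log P(X < llq) = log Phi((llq-mu)/sigma).
    """
    phi_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    ll = 0.0
    for v in values:
        if v is None:  # below the quantification limit
            ll += math.log(phi_cdf((llq - mu) / sigma))
        else:
            z = (v - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

# Three quantified concentrations and one BQL record, LLQ = 0.4
ll = censored_normal_loglik(1.0, 0.5, [1.2, 0.9, None, 1.4], llq=0.4)
```

Discarding the BQL record (treating it as MAR-missing) would drop the Phi term entirely, which is precisely the source of the bias the study quantifies.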

16.
In this paper, we study the performance of a soccer player by analysing an incomplete data set. To this end, we fit the bivariate Rayleigh distribution to the soccer data set by the maximum likelihood method, thereby accounting for the missing-data and right-censoring problems that usually arise in such studies. Our aim is to make inferences about the performance of a soccer player by considering stress and strength components: the first goal scored by the player of interest in a match is taken as the stress component and the second goal of the match as the strength component. We propose some methods to overcome the incomplete-data problem and use them to make inferences about the player's performance.

17.
In an attempt to provide a statistical tool for disease screening and prediction, we propose a semiparametric approach to analysis of the Cox proportional hazards cure model in situations where the observations on the event time are subject to right censoring and some covariates are missing not at random. To facilitate the methodological development, we begin with semiparametric maximum likelihood estimation (SPMLE) assuming that the (conditional) distribution of the missing covariates is known. A variant of the EM algorithm is used to compute the estimator. We then adapt the SPMLE to a more practical situation where the distribution is unknown and there is a consistent estimator based on available information. We establish the consistency and weak convergence of the resulting pseudo-SPMLE, and identify a suitable variance estimator. The application of our inference procedure to disease screening and prediction is illustrated via empirical studies. The proposed approach is used to analyze the tuberculosis screening study data that motivated this research. Its finite-sample performance is examined by simulation.

18.
We develop a simple approach to finding the Fisher information matrix (FIM) for a single pair of order statistic and its concomitant, and Type II right, left, and doubly censored samples from an arbitrary bivariate distribution. We use it to determine explicit expressions for the FIM for the three parameters of Downton's bivariate exponential distribution for single pairs and Type II censored samples. We evaluate the FIM in censored samples for finite sample sizes and determine its limiting form as the sample size increases. We discuss implications of our findings to inference and experimental design using small and large censored samples and for ranked-set samples from this distribution.

19.
In this paper, we introduce classical and Bayesian approaches for the Basu-Dhar bivariate geometric distribution in the presence of covariates and censored data. This distribution is considered for the analysis of bivariate lifetimes as an alternative to some existing bivariate lifetime distributions assuming continuous lifetimes, such as the Block-Basu or Marshall-Olkin bivariate distributions. Maximum likelihood and Bayesian estimators are presented. Two examples are considered to illustrate the proposed methodology: an example with simulated data and an example with medical bivariate lifetime data.

20.
The linear regression model for right censored data, also known as the accelerated failure time model using the logarithm of survival time as the response variable, is a useful alternative to the Cox proportional hazards model. Empirical likelihood as a non-parametric approach has been demonstrated to have many desirable merits thanks to its robustness against model misspecification. However, the linear regression model with right censored data cannot directly benefit from the empirical likelihood for inferences mainly because of dependent elements in estimating equations of the conventional approach. In this paper, we propose an empirical likelihood approach with a new estimating equation for linear regression with right censored data. A nested coordinate algorithm with majorization is used for solving the optimization problems with non-differentiable objective function. We show that the Wilks' theorem holds for the new empirical likelihood. We also consider the variable selection problem with empirical likelihood when the number of predictors can be large. Because the new estimating equation is non-differentiable, a quadratic approximation is applied to study the asymptotic properties of penalized empirical likelihood. We prove the oracle properties and evaluate the properties with simulated data. We apply our method to a Surveillance, Epidemiology, and End Results small intestine cancer dataset.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号