Similar Literature
 20 similar documents found (search time: 953 ms)
1.
Conditions on the hazard functions under which the usual log-rank test remains locally optimal for the Cox regression model under random censoring (withdrawal) are examined. In light of these conditions, the asymptotic efficiency results pertaining to the Cox partial likelihood statistic and the log-rank statistic are studied.
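The two-sample log-rank statistic discussed in this abstract can be sketched in a few lines. This is an illustrative implementation, not the paper's: the function name and toy data are assumptions, the hypergeometric variance is used at each distinct failure time, and no special tie corrections are applied.

```python
import numpy as np

def logrank_statistic(time, event, group):
    """Two-sample log-rank statistic: observed minus expected failures in
    group 1, standardized by the hypergeometric variance.
    time: follow-up times, event: 1=failure 0=censored, group: 0/1 labels."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    group = np.asarray(group, int)
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct failure times
        at_risk = time >= t
        n = at_risk.sum()                          # total at risk at t
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # failures at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n               # observed minus expected
        if n > 1:                                  # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)                # ~ N(0,1) under H0
```

Under the null hypothesis the returned value is approximately standard normal; the abstract's conditions characterize when this test stays locally optimal under the Cox model with random censoring.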

2.
Since the publication of the seminal paper by Cox (1972), the proportional hazards model has become very popular in regression analysis for right-censored data. In observational studies, treatment assignment may depend on observed covariates. If these confounding variables are not accounted for properly, inference based on the Cox proportional hazards model may perform poorly. As shown in Rosenbaum and Rubin (1983), under the strongly ignorable treatment assignment assumption, conditioning on the propensity score yields valid causal effect estimates. We therefore incorporate the propensity score into the Cox model for causal inference with survival data. We derive the asymptotic properties of the maximum partial likelihood estimator when the model is correctly specified. Simulation results show that our method performs quite well for observational data. The approach is applied to a real dataset on the time to readmission of trauma patients. We also derive the asymptotic properties of the maximum partial likelihood estimator, with a robust variance estimator, when the model is incorrectly specified.
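A minimal simulation sketch of the idea of conditioning on the propensity score inside a Cox model. All data-generating values, variable names, and the logistic propensity model are illustrative assumptions, not the paper's design; ties in the partial likelihood are ignored (Breslow form).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                                # confounder
a = rng.binomial(1, expit(0.8 * x))                   # treatment depends on x
t = rng.exponential(np.exp(-(0.5 * a + 0.7 * x)))     # hazard prop. to exp(0.5a + 0.7x)
c = rng.exponential(2.0, size=n)                      # independent censoring
time, event = np.minimum(t, c), (t <= c).astype(int)

# Step 1: estimate the propensity score by logistic regression (Newton steps).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = expit(X @ beta)
    beta += np.linalg.solve(X.T @ ((mu * (1 - mu))[:, None] * X), X.T @ (a - mu))
ps = expit(X @ beta)

# Step 2: maximize the Cox partial likelihood with treatment and the
# estimated propensity score as covariates.
Z = np.column_stack([a, ps])
order = np.argsort(-time)            # decreasing time: risk sets are prefixes
Zs, es = Z[order], event[order]

def neg_partial_loglik(b):
    eta = Zs @ b
    log_risk = np.logaddexp.accumulate(eta)  # log sum over the risk set of exp(eta)
    return -(eta[es == 1] - log_risk[es == 1]).sum()

fit = minimize(neg_partial_loglik, np.zeros(2), method="BFGS")
treatment_loghr = fit.x[0]           # estimated treatment log-hazard ratio
```

With this seed the fitted treatment coefficient lands near the positive true effect; a robust (sandwich) variance, as studied in the abstract for the misspecified case, is omitted here.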

3.
We examine robust estimators and tests using the family of generalized negative exponential disparities, which contains Pearson's chi-square and the ordinary negative exponential disparity as special cases. The influence function and α-influence function of the proposed estimators are discussed and their breakdown points derived. Under the model, the estimators are asymptotically efficient and are shown to have an asymptotic breakdown point of 50%. The proposed tests are shown to be equivalent to the likelihood ratio test under the null hypothesis, and their breakdown points are obtained. The competitive performance of the proposed estimators and tests relative to those based on the Hellinger distance is illustrated through examples and simulation results. Unlike the Hellinger distance, several members of this family of generalized negative exponential disparities generate estimators which also possess excellent inlier-controlling capability. The corresponding tests of hypothesis are shown to have better power breakdown than the Hellinger deviance test in the cases examined.

4.
Robust tests for the common principal components model
When dealing with several populations, the common principal components (CPC) model assumes equal principal axes but different variances along them. In this paper, a robust log-likelihood ratio statistic is introduced for testing the null hypothesis of a CPC model versus no restrictions on the scatter matrices. The proposal plugs robust scatter estimators into the classical log-likelihood ratio statistic. Using the same idea, a robust log-likelihood ratio statistic and a robust Wald-type statistic for testing proportionality against a CPC model are considered. Their asymptotic distributions under the null hypothesis and their partial influence functions are derived. A small simulation study compares the behavior of the classical and robust tests under normal and contaminated data.

5.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the potential censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. Robins (1995) specified two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. The goal of this paper is to provide a class of consistent and reasonably efficient semiparametric tests and estimators for the treatment effect under these assumptions. The tests in our class, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also study the estimation, in the presence of informative censoring, of the effect of treatment on the evolution over time of the mean of a repeated measures outcome such as CD4 count.

6.
When observational data are used to compare treatment-specific survivals, regular two-sample tests, such as the log-rank test, need to be adjusted for the imbalance between treatments with respect to baseline covariate distributions. In addition, the standard assumption that survival time and censoring time are conditionally independent given the treatment, required for the regular two-sample tests, may not be realistic in observational studies. Moreover, treatment-specific hazards are often non-proportional, resulting in small power for the log-rank test. In this paper, we propose a set of adjusted weighted log-rank tests and their supremum versions, based on inverse probability of treatment and censoring weighting, to compare treatment-specific survivals using data from observational studies. These tests are proven to be asymptotically correct. Simulation studies show that with realistic sample sizes and censoring rates, the proposed tests have the desired Type I error probabilities and are more powerful than the adjusted log-rank test when the treatment-specific hazards differ in non-proportional ways. A real data example illustrates the practical utility of the new methods.
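The inverse-probability-of-treatment weighting behind such adjusted tests can be sketched as follows. This toy version uses simulated data and illustrative names, computes only the weighted observed-minus-expected numerator under a null of no treatment effect, and omits the censoring weights and the robust variance estimate that a proper test requires.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                               # baseline covariate
a = rng.binomial(1, expit(x))                        # treatment depends on x
t = rng.exponential(np.exp(-0.7 * x))                # survival depends on x only (H0)
c = rng.exponential(3.0, size=n)                     # censoring
time, event = np.minimum(t, c), (t <= c).astype(int)

# Inverse-probability-of-treatment weights from a logistic working model.
X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):                                  # Newton-Raphson for the logistic MLE
    mu = expit(X @ b)
    b += np.linalg.solve(X.T @ ((mu * (1 - mu))[:, None] * X), X.T @ (a - mu))
ps = expit(X @ b)
w = a / ps + (1 - a) / (1 - ps)                      # IPT weights, mean about 2

# Weighted log-rank numerator: weighted observed-minus-expected deaths in arm 1.
num = 0.0
for s in np.unique(time[event == 1]):
    risk = time >= s
    W, W1 = w[risk].sum(), w[risk & (a == 1)].sum()
    dead = (time == s) & (event == 1)
    D, D1 = w[dead].sum(), w[dead & (a == 1)].sum()
    num += D1 - D * W1 / W
```

Standardizing `num` requires the robust variance developed in the paper; supremum versions scan the weighted process over time rather than summing it.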

7.
In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator does not make sense if the proportional hazards assumption is violated. Xu and O'Quigley (Biostatistics 1:423-439, 2000) proposed an estimating equation which provides an interpretable estimator for the treatment effect under model misspecification. Namely, it provides a consistent estimator for the log-hazard ratio among the treatment groups if the model is correctly specified, and it is interpreted as an average log-hazard ratio over time even if misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restrictive than that required for the maximum partial likelihood estimator and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator with the same property as that of Xu and O'Quigley under the usual assumption for maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and derive a consistent estimator of the asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679-694, 2008).

8.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated measures outcome, such as CD4 count.

9.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators which achieve their robustness without sacrificing first-order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and contiguous alternatives. Despite their asymptotic optimality properties, the small-sample performance of many of the minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of “combined” blended weight Hellinger distances, and identify the members of this family which generally perform well. More generally, we investigate the class of “combined and penalized” blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.

10.
We propose a new class of semiparametric estimators for proportional hazards models in the presence of measurement error in the covariates, where the baseline hazard function, the hazard function for the censoring time, and the distribution of the true covariates are treated as unknown infinite-dimensional parameters. We estimate the model components by solving estimating equations based on the semiparametric efficient scores under a sequence of restricted models in which the logarithms of the hazard functions are approximated by reduced-rank regression splines. The proposed estimators are locally efficient in the sense that they are semiparametrically efficient if the distribution of the error-prone covariates is specified correctly, and remain consistent and asymptotically normal if the distribution is misspecified. Our simulation studies show that the proposed estimators have smaller biases and variances than competing methods. We further illustrate the new method with a real application in an HIV clinical trial.

11.
The currently existing estimation methods and goodness-of-fit tests for the Cox model mainly deal with right-censored data, but they do not extend directly to other, more complicated types of censored data, such as doubly censored data, interval-censored data, partly interval-censored data, and bivariate right-censored data. In this article, we apply the empirical likelihood approach to the Cox model with a complete sample, derive the semiparametric maximum likelihood estimators (SPMLE) for the Cox regression parameter and the baseline distribution function, and establish the asymptotic consistency of the SPMLE. Via the functional plug-in method, these results are extended in a unified approach to doubly censored data, partly interval-censored data, and bivariate data under univariate or bivariate right censoring. For these types of censored data, the estimation procedures developed here naturally lead to Kolmogorov-Smirnov goodness-of-fit tests for the Cox model. Some simulation results are presented.

12.
A useful parameterization of the exponential failure model with imperfect signalling, under a random censoring scheme, is considered to accommodate covariates. Simple sufficient conditions for the existence, uniqueness, consistency, and asymptotic normality of maximum likelihood estimators for the parameters in these models are given. The results are then applied to derive the asymptotic properties of the likelihood ratio test for a difference in failure signalling proportions between groups in a ‘one-way’ classification.

13.
This article considers a class of estimators for the location and scale parameters in the location-scale model based on ‘synthetic data’ when the observations are randomly censored on the right. The asymptotic normality of the estimators is established using counting process and martingale techniques when the censoring distribution is known and unknown, respectively. When the censoring distribution is known, we show that the asymptotic variances of this class of estimators depend on the data transformation and have a lower bound which is not achievable by this class of estimators. However, when the censoring distribution is unknown and estimated by the Kaplan–Meier estimator, this class of estimators has the same asymptotic variance and attains the lower bound for the case of a known censoring distribution. This is different from censored regression analysis, where asymptotic variances depend on the data transformation. Our method has three valuable advantages over the method of maximum likelihood estimation. First, our estimators are available in closed form and do not require an iterative algorithm. Second, simulation studies show that our moment-based estimators are comparable to maximum likelihood estimators and outperform them when the sample size is small and the censoring rate is high. Third, our estimators are more robust to model misspecification than maximum likelihood estimators. Therefore, our method can serve as a competitive alternative to the method of maximum likelihood in estimation for location-scale models with censored data. A numerical example is presented to illustrate the proposed method.
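A minimal sketch of the synthetic-data idea, assuming a Koul–Susarla–Van Ryzin style transformation delta·X/G(X−) with G estimated by the Kaplan–Meier estimator of the censoring distribution. The function names are illustrative, and distinct observation times are assumed for the left-limit handling.

```python
import numpy as np

def km_survival(time, event):
    """Kaplan-Meier survival estimate evaluated at each subject's own time
    (left limit S(t-)); assumes distinct observation times."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    n = len(time)
    out = np.empty(n)
    s = 1.0
    for i in range(n):
        out[order[i]] = s                 # value just before the i-th time
        if event[order[i]] == 1:
            s *= 1.0 - 1.0 / (n - i)      # KM decrement at an event time
    return out

def synthetic_mean(time, event):
    """Synthetic-data mean: average of delta * X / Ghat(X-), where Ghat is
    the KM estimator of the censoring survival function (censorings are the
    'events' for G). With no censoring this reduces to the sample mean."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    G = km_survival(time, 1 - event)      # censoring distribution
    return np.mean(event * time / G)
```

This illustrates the closed-form, moment-based character the abstract emphasizes: no iterative algorithm is needed, and location and scale estimates follow from such weighted moments of the transformed data.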

14.
A model for analyzing release-recapture data is presented that generalizes a previously existing individual covariate model to include multiple groups of animals. As in the previous model, the generalized version includes selection parameters that relate individual covariates to survival potential. Significance of the selection parameters was equivalent to significance of the individual covariates. Simulation studies were conducted to investigate three inferential properties with respect to the selection parameters: (1) sample size requirements, (2) validity of the likelihood ratio test (LRT) and (3) power of the LRT. When the survival and capture probabilities ranged from 0.5 to 1.0, a total sample size of 300 was necessary to achieve a power of 0.80 at a significance level of 0.1 when testing the significance of the selection parameters. However, only half that (a total of 150) was necessary for the distribution of the maximum likelihood estimators of the selection parameters to approximate their asymptotic distributions. In general, as the survival and capture probabilities decreased, the sample size requirements increased. The validity of the LRT for testing the significance of the selection parameters was confirmed because the LRT statistic was distributed as theoretically expected under the null hypothesis, i.e. like a chi-squared random variable. When the baseline survival model was fully parameterized with population and interval effects, the LRT was also valid in the presence of unaccounted-for random variation. The power of the LRT for testing the selection parameters was unaffected by over-parameterization of the baseline survival and capture models. The simulation studies showed that for testing the significance of individual covariates to survival the LRT was remarkably robust to assumption violations.

16.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic χ2 distribution of the likelihood ratio statistic, mean and variance corrections to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are only realized with light or no censoring. The Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.
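A small sketch contrasting a Wald (asymptotic-normality) interval with an uncorrected likelihood ratio interval for the scale parameter of a censored Weibull sample. The simulated data, sample size, grid, and cutoffs are illustrative assumptions, not the paper's Monte Carlo design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 80
k_true, lam_true = 1.5, 2.0
t = lam_true * rng.weibull(k_true, size=n)       # Weibull(shape k, scale lam)
c = rng.uniform(0.0, 4.0, size=n)                # random time censoring
x, d = np.minimum(t, c), (t <= c).astype(int)

def negloglik(theta):
    """Censored Weibull negative log-likelihood, theta = (log k, log lam)."""
    k, lam = np.exp(theta)
    z = (x / lam) ** k
    return -(d * (np.log(k) - k * np.log(lam) + (k - 1) * np.log(x)) - z).sum()

fit = minimize(negloglik, np.zeros(2), method="Nelder-Mead")
lamh = np.exp(fit.x[1])                          # MLE of the scale

# Wald interval for the scale, on the log scale, from a finite-difference Hessian.
h = 1e-3
H = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
        H[i, j] = (negloglik(fit.x + ei + ej) - negloglik(fit.x + ei - ej)
                   - negloglik(fit.x - ei + ej) + negloglik(fit.x - ei - ej)) / (4 * h * h)
se = np.sqrt(np.linalg.inv(H)[1, 1])
wald = (lamh * np.exp(-1.96 * se), lamh * np.exp(1.96 * se))

# Uncorrected likelihood ratio interval: scale values whose profile deviance
# (maximizing over the shape) stays within the 95% chi-squared(1) cutoff 3.84.
def profile(lam0):
    r = minimize(lambda u: negloglik(np.array([u[0], np.log(lam0)])),
                 [fit.x[0]], method="Nelder-Mead")
    return r.fun

grid = np.linspace(0.5 * wald[0], 1.5 * wald[1], 60)
inside = [g for g in grid if 2.0 * (profile(g) - fit.fun) <= 3.84]
lr = (min(inside), max(inside))
```

Repeating this over many simulated datasets and tabulating coverage of `wald` versus `lr` under increasing censoring mirrors the kind of comparison the abstract reports.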

17.
In this paper, we study a nonparametric maximum likelihood estimator (NPMLE) of the survival function based on a semi-Markov model under dependent censoring. We show that the NPMLE is asymptotically normal and achieves asymptotic nonparametric efficiency. We also provide a uniformly consistent estimator of the corresponding asymptotic covariance function based on an information operator. The finite-sample performance of the proposed NPMLE is examined with simulation studies, which show that the NPMLE has smaller mean squared error than the existing estimators and its corresponding pointwise confidence intervals have reasonable coverages. A real example is also presented.

18.
In observational studies, treatment may be adapted to the patient's state during the course of time. These covariates may in turn also react to the treatment under study, and so on. This makes it hard to distinguish between treatment effect and selection bias. Structural nested models aim at estimating treatment effect in such complicated situations, even when treatment may change at any time. We show that structural nested models can often be calculated with standard software, by using standard models to predict treatment as a tool to estimate treatment effect. Robins (Survival Analysis, Volume 6 of Encyclopedia of Biostatistics, John Wiley and Sons, Chichester, 1998) conjectured this, but so far it was unproven. We use a partial likelihood approach to choose the estimators and tests as a subclass of the estimators and tests in Lok (math.ST/0410271 at http://arXiv.org, 2004). We show that this is the class of estimators and tests that can be calculated with standard software. The estimators are consistent and asymptotically normal, and have interesting asymptotic properties.

19.
The two-sample problem of comparing Weibull scale parameters is studied for randomly censored data. Three different test statistics are considered and their asymptotic properties are established under a sequence of local alternatives. It is shown that both the test statistic based on the MLEs (maximum likelihood estimators) and the likelihood ratio test are asymptotically optimal. The third statistic, based only on the number of failures, is not. The asymptotic relative efficiency of this statistic is obtained and its numerical values are computed for uniform and Weibull censoring. Effects of uniform random censoring on the censoring level of the experiment are illustrated. A direct proof of the joint asymptotic normality of the MLEs of the shape and scale parameters is also given.

20.
Chronic disease processes often feature transient recurrent adverse clinical events. Treatment comparisons in clinical trials of such disorders must be based on valid and efficient methods of analysis. We discuss robust strategies for testing treatment effects with recurrent events using methods based on marginal rate functions, partially conditional rate functions, and marginal failure time models. While all three approaches lead to valid tests of the null hypothesis when robust variance estimates are used, they differ in power. Moreover, some approaches lead to estimators of treatment effect which are more easily interpreted than others. To investigate this, we derive the limiting value of estimators of treatment effect from marginal failure time models and illustrate their dependence on features of the underlying point process, as well as the censoring mechanism. Through simulation, we show that methods based on marginal failure time distributions are sensitive to treatment effects that delay the occurrence of the very first recurrences. Methods based on marginal or partially conditional rate functions perform well in situations where treatment effects persist or where the aim is to summarize long-term data on efficacy.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号