Similar Articles
20 similar articles found.
1.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated measures outcome, such as CD-4 count.

2.
In longitudinal clinical trials, when outcome variables at later time points are only defined for patients who survive to those times, the evaluation of the causal effect of treatment is complicated. In this paper, we describe an approach that can be used to obtain the causal effect of three treatment arms with ordinal outcomes in the presence of death using a principal stratification approach. We introduce a set of flexible assumptions to identify the causal effect and implement a sensitivity analysis for non-identifiable assumptions which we parameterize parsimoniously. Methods are illustrated on quality of life data from a recent colorectal cancer clinical trial.

3.
The generalized odds-rate class of regression models for time to event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β'Z, where g_ρ(s) = log{ρ^(-1)(s^(-ρ) − 1)} for ρ > 0, g_0(s) = log(−log s), S(t|Z) is the survival function of the time to event for an individual with q×1 covariate vector Z, β is a q×1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0, this model is equivalent to the proportional hazards model, and when ρ = 1, this model reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
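A minimal numerical sketch (not from the paper) of the link function described above; the function name g_rho is mine. It checks that ρ = 1 gives the log-odds of failure (proportional odds) and that ρ → 0 recovers the complementary log-log link of the proportional hazards model:

```python
import numpy as np

def g_rho(s, rho):
    """Generalized odds-rate link: g_rho(s) = log((s**(-rho) - 1) / rho) for rho > 0,
    with g_0(s) = log(-log(s)) as the limiting case rho -> 0."""
    s = np.asarray(s, dtype=float)
    if rho == 0:
        return np.log(-np.log(s))
    return np.log((s ** (-rho) - 1.0) / rho)

s = np.array([0.2, 0.5, 0.8])

# rho = 1: g_1(s) = log((1 - s)/s), the log-odds of failure (proportional odds model)
print(np.allclose(g_rho(s, 1.0), np.log((1 - s) / s)))             # True

# rho -> 0: approaches the complementary log-log link (proportional hazards model)
print(np.allclose(g_rho(s, 1e-6), np.log(-np.log(s)), atol=1e-5))  # True
```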

4.
Summary: Data depth is a concept that measures the centrality of a point in a given data cloud x_1, x_2, ..., x_n or in a multivariate distribution P^X on R^d. Every depth defines a family of so-called trimmed regions. The α-trimmed region is given by the set of points that have a depth of at least α. Data depth has been used to define multivariate measures of location and dispersion as well as multivariate dispersion orders. If the depth of a point can be represented as the minimum of the depths with respect to all unidimensional projections, we say that the depth satisfies the (weak) projection property. Many depths which have been proposed in the literature can be shown to satisfy the weak projection property. A depth is said to satisfy the strong projection property if for every α the unidimensional projection of the α-trimmed region equals the α-trimmed region of the projected distribution. After a short introduction to the general concept of data depth, we formally define the weak and the strong projection property and give necessary and sufficient criteria for the projection property to hold. We further show that the projection property facilitates the construction of depths from univariate trimmed regions. We discuss some of the depths proposed in the literature which possess the projection property and define a general class of projection depths, which are constructed from univariate trimmed regions by using the above method. Finally, algorithmic aspects of projection depths are discussed. We describe an algorithm which enables the approximate computation of depths that satisfy the projection property.
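As an illustration only, and not the authors' algorithm, the following sketch approximates a depth with the weak projection property by taking the minimum, over many random unit directions, of the univariate Tukey-type depth of the projected point within the projected sample; all function names are hypothetical:

```python
import numpy as np

def approx_projection_depth(x, data, n_dir=500, seed=None):
    """Approximate a depth satisfying the (weak) projection property:
    depth(x) = min over unit directions u of the univariate Tukey depth
    of u'x within the projected sample u'data."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    u = rng.normal(size=(n_dir, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    proj_data = data @ u.T                          # shape (n, n_dir)
    proj_x = x @ u.T                                # shape (n_dir,)
    left = (proj_data <= proj_x).mean(axis=0)       # fraction on or left of x per direction
    right = (proj_data >= proj_x).mean(axis=0)      # fraction on or right of x per direction
    return np.minimum(left, right).min()            # minimize over directions

data = np.random.default_rng(0).normal(size=(200, 3))
# a central point has high depth (roughly 0.4-0.5 here); outlying points approach 0
print(approx_projection_depth(data.mean(axis=0), data))
print(approx_projection_depth(np.array([5.0, 5.0, 5.0]), data))
```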

5.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin and Weissfeld [Regression analysis of multivariate incomplete failure time data by modelling marginal distributions, JASA vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, and the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests, from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with the results from a clinical trial examining recurring opportunistic infections in persons with HIV.

6.
Computing location depth and regression depth in higher dimensions
The location depth (Tukey 1975) of a point θ relative to a p-dimensional data set Z of size n is defined as the smallest number of data points in a closed halfspace with boundary through θ. For bivariate data, it can be computed in O(n log n) time (Rousseeuw and Ruts 1996). In this paper we construct an exact algorithm to compute the location depth in three dimensions in O(n^2 log n) time. We also give an approximate algorithm to compute the location depth in p dimensions in O(mp^3 + mpn) time, where m is the number of p-subsets used. Recently, Rousseeuw and Hubert (1996) defined the depth of a regression fit. The depth of a hyperplane with coefficients (θ_1, ..., θ_p) is the smallest number of residuals that need to change sign to make (θ_1, ..., θ_p) a nonfit. For bivariate data (p = 2) this depth can be computed in O(n log n) time as well. We construct an algorithm to compute the regression depth of a plane relative to a three-dimensional data set in O(n^2 log n) time, and another that deals with p = 4 in O(n^3 log n) time. For data sets with large n and/or p we propose an approximate algorithm that computes the depth of a regression fit in O(mp^3 + mpn + mn log n) time. For all of these algorithms, actual implementations are made available.
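For intuition only, here is a brute-force O(n^2) sketch of regression depth for simple regression (p = 2), under my reading of the nonfit definition quoted above: a fit is a nonfit if, for some vertical split of the x-axis, all residuals are nonnegative on one side and nonpositive on the other, and the depth is the fewest residual sign changes needed to reach such a configuration. This is not the paper's O(n log n) algorithm, and the function name is hypothetical:

```python
import numpy as np

def regression_depth_2d(intercept, slope, x, y):
    """Brute-force regression depth of the line y = intercept + slope * x:
    the smallest number of residuals whose sign must change to turn the
    fit into a nonfit (candidate splits taken at the observed x values)."""
    r = y - (intercept + slope * x)
    best = len(x)
    for v in np.unique(x):
        left, right = x <= v, x > v
        # nonfit pattern A: residuals >= 0 on the left, <= 0 on the right
        viol_a = np.sum(left & (r < 0)) + np.sum(right & (r > 0))
        # nonfit pattern B: the mirror image
        viol_b = np.sum(left & (r > 0)) + np.sum(right & (r < 0))
        best = min(best, viol_a, viol_b)
    return best

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=50)
print(regression_depth_2d(2.0, 0.5, x, y))   # a deep (well-supported) fit
print(regression_depth_2d(8.0, -1.0, x, y))  # a shallow fit has depth near 0
```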

7.
In the exponential regression model, Bayesian inference concerning the non-linear regression parameter γ has proved extremely difficult. In particular, standard improper diffuse priors for the usual parameters lead to an improper posterior for the non-linear regression parameter. In a recent paper Ye and Berger (1991) applied the reference prior approach of Bernardo (1979) and Berger and Bernardo (1989), yielding a proper informative prior for γ. This prior depends on the values of the explanatory variable, goes to 0 as γ goes to 1, and depends on the specification of a hierarchical ordering of importance of the parameters. This paper explains the failure of the uniform prior to give a proper posterior: the reason is the appearance of the determinant of the information matrix in the posterior density for γ. We apply the posterior Bayes factor approach of Aitkin (1991) to this problem; in this approach we integrate out nuisance parameters with respect to their conditional posterior density given the parameter of interest. The resulting integrated likelihood for γ requires only the standard diffuse prior for all the parameters, and is unaffected by orderings of importance of the parameters. Computation of the likelihood for γ is extremely simple. The approach is applied to the three examples discussed by Berger and Ye and the likelihoods compared with their posterior densities.

8.
Four distribution-free tests are developed for use in matched pair experiments when data may be censored: a bootstrap based on estimates of the median difference, and three rerandomization tests. The latter include a globally almost most powerful (GAMP) test which uses the original data and two modified Gilbert-Gehan tests which use the ranks. Computation time is reduced by using a binary count to generate subsamples and by restricting subsampling to the uncensored pairs. In Monte Carlo simulations against normal alternatives, mixed normal alternatives, and exponential alternatives, the GAMP test is most powerful with light censoring, the rank test is most powerful with heavy censoring. The bootstrap degenerates to the sign test and is least powerful.

9.
Summary. Measuring the process of care in substance abuse treatment requires analysing repeated client assessments at critical time points during treatment tenure. Assessments are frequently censored because of early departure from treatment. Most analyses accounting for informative censoring define the censoring time to be that of the last observed assessment. However, if missing assessments for those who remain in treatment are attributable to logistical reasons rather than to the underlying treatment process being measured, then the length of stay in treatment might better characterize censoring than would time of measurement. Bayesian variable selection is incorporated in the conditional linear model to assess whether time of measurement or length of stay better characterizes informative censoring. Marginal posterior distributions of the trajectory of treatment process scores are obtained that incorporate model uncertainty. The methodology is motivated by data from an on-going study of the quality of care in in-patient substance abuse treatment.

10.
In the model of type I censored exponential lifetimes, coverage probabilities are compared for a number of confidence interval constructions proposed in the literature. The coverage probabilities are calculated exactly for sample sizes up to 50 and for different degrees of censoring and different degrees of intended confidence. If not only a fair two-sided coverage is desired, but also fair one-sided coverages, only few methods are quite satisfactory. A likelihood-based interval and a third-root transformation to normality work almost perfectly, but the χ²-based method, which is perfect under no censoring and under type II censoring, can also be advocated.
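As a hedged illustration of the χ²-based construction mentioned above (my own sketch, not the paper's exact comparison): with r observed failures and total time on test T, the interval 2T/χ² with 2r degrees of freedom is exact under type II censoring and only approximate under type I censoring, which is the setting the paper evaluates. Function and variable names are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def chi2_ci_exponential_mean(total_time_on_test, n_failures, conf=0.95):
    """Chi-square-based CI for the exponential mean theta, using
    2*T/theta ~ chi2(2r); exact under type II censoring, approximate
    under type I censoring."""
    alpha = 1.0 - conf
    r, T = n_failures, total_time_on_test
    lower = 2.0 * T / chi2.ppf(1.0 - alpha / 2.0, 2 * r)
    upper = 2.0 * T / chi2.ppf(alpha / 2.0, 2 * r)
    return lower, upper

# simulated type I censored sample: exponential lifetimes (mean 10), censored at t0 = 8
rng = np.random.default_rng(0)
t = rng.exponential(scale=10.0, size=30)
t0 = 8.0
obs = np.minimum(t, t0)                 # observed times (failure or censoring)
r = int(np.sum(t <= t0))                # number of observed failures
print(chi2_ci_exponential_mean(obs.sum(), r))
```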

11.
Permutation tests for symmetry are suggested using data that are subject to right censoring. Such tests are directly relevant to the assumptions that underlie the generalized Wilcoxon test, since the symmetric logistic distribution for log-errors has been used to motivate Wilcoxon scores in the censored accelerated failure time model. Its principal competitor is the log-rank (LGR) test, motivated by an extreme value error distribution that is positively skewed. The proposed one-sided tests for symmetry against the alternative of positive skewness are directly relevant to the choice between these two tests.

The permutation tests use statistics from the weighted LGR class normally used for making two-sample comparisons. From this class, the test using LGR weights (all weights equal) showed the greatest discriminatory power in simulations that compared the possibility of logistic errors versus extreme value errors.

In the test construction, a median estimate, determined by inverting the Kaplan–Meier estimator, is used to divide the data into a “control” group to its left that is compared with a “treatment” group to its right. As an unavoidable consequence of testing symmetry, data in the control group that have been censored become uninformative in performing this two-sample test. Thus, early heavy censoring of data can reduce the effective sample size of the control group and result in diminished power for discriminating symmetry in the population distribution.


12.
Connections are established between the theories of weighted logrank tests and of frailty models. These connections arise because omission of a balanced covariate from a proportional hazards model generally leads to a model with non-proportional hazards, for which the simple logrank test is no longer optimal. The optimal weighting function, and the asymptotic relative efficiencies of the simple logrank test and of the optimally weighted logrank test relative to the adjusted test that would be used if the covariate values were known, are expressible in terms of the Laplace transform of the hazard ratio for the distribution of the omitted covariate. For example, if this hazard ratio has a gamma distribution, the optimal test is a member of the G^ρ class introduced by Harrington and Fleming (1982). We also consider positive stable, inverse Gaussian, displaced Poisson and two-point frailty distributions. Results are obtained for parametric and nonparametric tests and are extended to include random censoring. We show that the loss of efficiency from omitting a covariate is generally more important than the additional loss due to misspecification of the resulting non-proportional hazards model as a proportional hazards model. However, two-point frailty distributions can provide exceptions to this rule. Censoring generally increases the efficiency of the simple logrank test relative to the adjusted logrank test.
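The G^ρ class referred to above weights the logrank increments by the pooled Kaplan–Meier estimate S(t−) raised to the power ρ, with ρ = 0 giving the simple logrank test. Below is a minimal, self-contained sketch of that weighted statistic (my own illustration of the class, not the paper's derivation; all names are hypothetical):

```python
import numpy as np

def g_rho_logrank(time, event, group, rho=1.0):
    """Fleming-Harrington G^rho weighted logrank Z statistic.
    rho = 0 reduces to the simple logrank test; rho > 0 emphasizes early events."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    event_times = np.unique(time[event == 1])
    s_pooled, num, var = 1.0, 0.0, 0.0     # pooled Kaplan-Meier at t-
    for t in event_times:
        at_risk = time >= t
        n_j = at_risk.sum()
        n1_j = (at_risk & (group == 1)).sum()
        d_j = ((time == t) & (event == 1)).sum()
        d1_j = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_pooled ** rho                 # weight uses S(t-)
        num += w * (d1_j - d_j * n1_j / n_j)
        if n_j > 1:
            var += w**2 * d_j * (n1_j / n_j) * (1 - n1_j / n_j) * (n_j - d_j) / (n_j - 1)
        s_pooled *= 1.0 - d_j / n_j         # update pooled KM after using it
    return num / np.sqrt(var)

rng = np.random.default_rng(2)
t = np.concatenate([rng.exponential(1.0, 100), rng.exponential(1.4, 100)])
c = rng.exponential(2.0, 200)
time, event = np.minimum(t, c), (t <= c).astype(int)
group = np.repeat([0, 1], 100)
print(g_rho_logrank(time, event, group, rho=0.0))  # simple logrank
print(g_rho_logrank(time, event, group, rho=1.0))  # early-event emphasis
```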

13.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.

14.
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all these attributes of the interpolant: its complexity, flexibility, smoothness, characteristic scale length, and characteristic amplitude. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of conditional convexity when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error.

15.
Randomly right-censored data often arise in industrial life testing and clinical trials. Several authors have proposed asymptotic confidence bands for the survival function when data are randomly censored on the right. All of these bands are based on the empirical estimator of the survival function. In this paper, families of asymptotic (1 − α)100% level confidence bands are developed from the smoothed estimate of the survival function under the general random censorship model. The new bands are compared to empirical bands, and it is shown that for small sample sizes, the smooth bands have a higher coverage probability than the empirical counterparts.
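For orientation, here is one simple way to form a smoothed survival estimate under random right censoring: replace each Kaplan–Meier jump by a Gaussian kernel CDF of a chosen bandwidth. This is a generic sketch of a kernel-smoothed estimator, not the specific estimator or bands of the paper; all names are mine:

```python
import numpy as np
from scipy.stats import norm

def kaplan_meier_jumps(time, event):
    """Distinct event times and the corresponding jumps of F = 1 - S (Kaplan-Meier)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    s, jumps = 1.0, []
    for t in uniq:
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s_new = s * (1 - d / n_risk)
        jumps.append(s - s_new)
        s = s_new
    return uniq, np.array(jumps)

def smoothed_survival(grid, time, event, bandwidth):
    """Kernel-smoothed survival estimate: each KM jump is spread out with a
    Gaussian kernel CDF of the given bandwidth."""
    t_j, dF = kaplan_meier_jumps(time, event)
    F = np.array([np.sum(dF * norm.cdf((g - t_j) / bandwidth)) for g in grid])
    return 1.0 - F

rng = np.random.default_rng(3)
t = rng.weibull(1.5, 200) * 2.0
c = rng.uniform(0, 4, 200)
time, event = np.minimum(t, c), (t <= c).astype(int)
grid = np.linspace(0, 3, 7)
print(np.round(smoothed_survival(grid, time, event, bandwidth=0.3), 3))
```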

16.
In longitudinal observational studies, repeated measures are often correlated with observation times as well as censoring time. This article proposes joint modeling and analysis of longitudinal data with time-dependent covariates in the presence of informative observation and censoring times via a latent variable. Estimating equation approaches are developed for parameter estimation and asymptotic properties of the proposed estimators are established. In addition, a generalization of the semiparametric model with time-varying coefficients for the longitudinal response is considered. Furthermore, a lack-of-fit test is provided for assessing the adequacy of the model, and some tests are presented for investigating whether or not covariate effects vary with time. The finite-sample behavior of the proposed methods is examined in simulation studies, and an application to a bladder cancer study is illustrated.

17.
We propose exploratory, easily implemented methods for diagnosing the appropriateness of an underlying copula model for bivariate failure time data, allowing censoring in either or both failure times. It is found that the proposed approach effectively distinguishes gamma from positive stable copula models when the sample is moderately large or the association is strong. Data from the Women's Health and Aging Study (WHAS, Guralnik et al., The Women's Health and Aging Study: Health and Social Characteristics of Older Women with Disability. National Institute on Aging: Bethesda, Maryland, 1995) are analyzed to demonstrate the proposed diagnostic methodology. The positive stable model gives a better overall fit to these data than the gamma frailty model, but it tends to underestimate association at the later time points. The finding is consistent with recent theory differentiating catastrophic from progressive disability onset in older adults. The proposed methods supply an interpretable quantity for copula diagnosis. We hope that they will usefully inform practitioners as to the reasonableness of their modeling choices.

18.
In the analysis of survival times, the logrank test and the Cox model have been established as key tools, which do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switchers or the presence of subgroups with differential treatment effects may challenge the assumption of proportional hazards. In practice, weighted logrank tests emphasizing either early, intermediate or late event times via an appropriate weighting function may be used to accommodate an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazard. The model is then applied to study the power of unweighted and weighted logrank tests, as well as maximum tests allowing different time-dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared to the most powerful among the considered weighting schemes and a huge power gain compared to unfavorable weights. The actual sources of non-proportional hazards are not obvious from the resulting populationwise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when assuming non-proportional hazards. We provide the required tools in a software package, allowing users to model data-generating processes under complex non-proportional hazard scenarios, to simulate data from these models, and to perform the weighted logrank tests.
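To make the modeling device concrete, here is a small sketch (mine, not the authors' software package) of simulating from a mixture of piecewise-constant-hazard survival functions, producing a delayed-effect, non-proportional-hazards scenario of the kind described above; the function name rpexp and all rates are illustrative assumptions:

```python
import numpy as np

def rpexp(n, breaks, hazards, rng):
    """Draw event times from a piecewise-constant-hazard distribution by
    inverting the piecewise-linear cumulative hazard.
    breaks: interval start points beginning with 0; hazards: rate per interval."""
    breaks = np.asarray(breaks, float)
    hazards = np.asarray(hazards, float)
    widths = np.diff(np.append(breaks, np.inf))
    cum_h = np.concatenate([[0.0], np.cumsum(hazards[:-1] * widths[:-1])])
    target = -np.log(rng.uniform(size=n))                    # H(T) ~ Exp(1)
    idx = np.searchsorted(cum_h, target, side="right") - 1
    return breaks[idx] + (target - cum_h[idx]) / hazards[idx]

rng = np.random.default_rng(4)
n = 1000
# control arm: constant hazard 0.10 per month throughout
t_control = rpexp(n, [0.0], [0.10], rng)
# treated arm as a mixture: 60% respond only after month 6 (delayed effect),
# 40% never respond -- a simple non-proportional-hazards data-generating process
responder = rng.uniform(size=n) < 0.6
t_treat = np.where(responder,
                   rpexp(n, [0.0, 6.0], [0.10, 0.04], rng),
                   rpexp(n, [0.0], [0.10], rng))
print(np.median(t_control), np.median(t_treat))
```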

19.
This article presents a general Bayesian analysis of incomplete categorical data considered as generated by a statistical model involving the categorical sampling process and the observable censoring process. The novelty is that we allow dependence of the censoring process parameters on the sampling categories; i.e., an informative censoring process. In this way, we relax the assumptions under which both classical and Bayesian solutions have been developed. The proposed solution is outlined for the relevant case of the censoring pattern based on partitions. It is completely developed for a simple but typical example. Several possible extensions of our procedure are discussed in the final remarks.

20.
The issue of modelling the joint distribution of survival time and of prognostic variables measured periodically has recently become of interest in the AIDS literature but is of relevance in other applications. The focus of this paper is on clinical trials where follow-up measurements of potentially prognostic variables are often collected but not routinely used. These measurements can be used to study the biological evolution of the disease of interest; in particular, the effect of an active treatment can be examined by comparing the time profiles of patients in the active and placebo group. It is proposed to use multilevel regression analysis to model the individual repeated observations as a function of time and, possibly, treatment. To address the problem of informative drop-out, which may arise if deaths (or any other censoring events) are related to the unobserved values of the prognostic variables, we analyse sequentially overlapping portions of the follow-up information. An example arising from a randomized clinical trial for the treatment of primary biliary cirrhosis is examined in detail.

