Similar Documents
20 similar documents found (search time: 31 ms)
1.
Consider a subject entered on a clinical trial in which the major endpoint is a time metric such as death or time to reach a well-defined event. During the observational period the subject may experience an intermediate clinical event. The intermediate clinical event may induce a change in the survival distribution. We consider models for the one- and two-sample problems. The model for the one-sample problem enables one to test whether the occurrence of the intermediate event changed the survival distribution. This model provides a way of carrying out a non-randomized clinical trial to determine if a therapy has benefit. The two-sample problem considers testing if the probability distributions, with and without an intermediate event, are the same. Statistical tests are derived using a semi-Markov or a time-dependent mixture model. Simulation studies are carried out to compare these new procedures with the log-rank, stratified log-rank and landmark tests. The new tests appear to have uniformly greater power than these competitor tests. The methods are applied to a randomized clinical trial carried out by the AIDS Clinical Trials Group (ACTG) which compared low versus high doses of zidovudine (AZT).

2.
In the analysis of survival times, the log-rank test and the Cox model have been established as key tools, which do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switchers or the presence of subgroups with differential treatment effects may challenge the assumption of proportional hazards. In practice, weighted log-rank tests emphasizing either early, intermediate or late event times via an appropriate weighting function may be used to accommodate an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazard. The model is then applied to study the power of unweighted and weighted log-rank tests, as well as maximum tests allowing different time-dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared to the most powerful among the considered weighting schemes and a huge power gain compared to unfavourable weights. The actual sources of non-proportional hazards are not obvious from the resulting population-wise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when assuming non-proportional hazards. We provide the required tools in a software package that allows one to model data-generating processes under complex non-proportional hazard scenarios, to simulate data from these models and to perform the weighted log-rank tests.
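As a concrete illustration of the weighting idea, here is a minimal two-sample weighted log-rank test with Fleming–Harrington weights, a common choice for emphasizing early or late events. This is a sketch only; the abstract's software package and its particular weighting schemes may differ.

```python
import math

def weighted_logrank_z(times1, events1, times2, events2, rho=0.0, gamma=0.0):
    """Two-sample weighted log-rank statistic with Fleming-Harrington
    weights w(t) = S(t-)**rho * (1 - S(t-))**gamma, where S is the
    pooled Kaplan-Meier estimate. rho = gamma = 0 gives the ordinary
    log-rank test; gamma > 0 stresses late, rho > 0 early event times."""
    data = sorted([(t, e, 0) for t, e in zip(times1, events1)] +
                  [(t, e, 1) for t, e in zip(times2, events2)])
    at_risk, at_risk1 = len(data), len(times1)
    s_prev = 1.0                  # pooled KM survival just before t
    num = var = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = d1 = removed = removed1 = 0
        while i < len(data) and data[i][0] == t:
            _, e, grp = data[i]
            removed += 1
            removed1 += 1 - grp   # group-1 subjects leaving the risk set
            if e:
                d += 1
                d1 += 1 - grp
            i += 1
        if d > 0 and at_risk > 1:
            w = s_prev ** rho * (1.0 - s_prev) ** gamma
            frac1 = at_risk1 / at_risk
            num += w * (d1 - d * frac1)          # observed - expected
            var += w * w * d * frac1 * (1 - frac1) * (at_risk - d) / (at_risk - 1)
        if d > 0:
            s_prev *= 1.0 - d / at_risk
        at_risk -= removed
        at_risk1 -= removed1
    return num / math.sqrt(var)
```

With identical samples the statistic is exactly zero; a sample with systematically earlier events yields a positive statistic.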

3.
Survival models are used to analyse time-to-event data. They come in several types, including parametric, non-parametric and semi-parametric models. Parametric models require a specific distribution for the survival time, and semi-parametric models assume proportional hazards. Among these, the non-parametric artificial neural network model makes the fewest assumptions and can often substitute for the other models. Given the importance of the Weibull distribution in survival modelling, this simulation study assumed values of 1, 2 and 3 for the shape parameter of the Weibull distribution and applied censoring rates ranging from 0% to 75%. The values predicted by the neural network forecasting model were compared with those of parametric survival and Cox regression models. This comparison, which accounted for different levels of complexity of the hazard model, was carried out using the ROC curve and the corresponding tests.

4.
In longitudinal studies of biological markers, different individuals may have different underlying patterns of response. In some applications, a subset of individuals experiences latent events, causing an instantaneous change in the level or slope of the marker trajectory. The paper presents a general mixture of hierarchical longitudinal models for serial biomarkers. Interest centres both on the time of the event and on levels of the biomarker before and after the event. In observational studies where marker series are incomplete, the latent event can be modelled by a survival distribution. Risk factors for the occurrence of the event can be investigated by including covariates in the survival distribution. A combination of Gibbs, Metropolis–Hastings and reversible jump Markov chain Monte Carlo sampling is used to fit the models to serial measurements of forced expiratory volume from lung transplant recipients.

5.
In this paper, we examine a method for analyzing competing risks data where the failure type of interest is missing or incomplete, but where there is an intermediate event, and only patients who experience the intermediate event can die of the cause of interest. In some applications, a method called “log-rank subtraction” has been applied to these problems. However, there has been no systematic study of this methodology. We investigate the statistical properties of the method and further propose a modified method by including a weight function in the construction of the test statistic to correct for potential biases. A class of tests is then proposed for comparing the disease-specific mortality in the two groups. The tests are based on comparing the difference of weighted log-rank scores for the failure type of interest. We derive the asymptotic properties for the modified test procedure. Simulation studies indicate that the tests are unbiased and have reasonable power. The results are also illustrated with data from a breast cancer study.

6.
In a clinical trial with the time to an event as the outcome of interest, we may randomize a number of matched subjects, such as litters, to different treatments. The number of treatments equals the number of subjects per litter, two in the case of twins. In this case, the survival times of matched subjects could be dependent. Although the standard rank tests, such as the log-rank and Wilcoxon tests, for independent samples may be used to test the equality of marginal survival distributions, their standard error should be modified to accommodate the possible dependence of survival times between matched subjects. In this paper we propose a method of calculating the standard error of the rank tests for paired two-sample survival data. The method is naturally extended to that for K-sample tests under dependence.

7.
Survival models deal with the time until the occurrence of an event of interest. However, in some situations the event may never occur in part of the studied population. The fraction of the population that will never experience the event of interest is generally called the cure rate. Models that account for this fact (cure rate models) have been extensively studied in the literature. Hypothesis tests on the parameters of these models can be performed based on likelihood ratio, gradient, score or Wald statistics. Critical values of these tests are obtained through approximations that are valid in large samples and may result in size distortion in small or moderate sample sizes. In this sense, this paper proposes bootstrap corrections to the four tests mentioned and a bootstrap Bartlett correction for the likelihood ratio statistic in the Weibull promotion time model. In addition, we present an algorithm for bootstrap resampling when the data present a cure fraction and right censoring (random and non-informative). Simulation studies are conducted to compare the finite-sample performances of the corrected tests. The numerical evidence favours the corrected tests we propose. We also present an application to a real data set.
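The general bootstrap-calibration idea behind such corrections can be sketched generically: fit the null model, simulate datasets from it, and recompute the test statistic on each. This is a simplified sketch, illustrated with a fully specified exponential null rather than the Weibull promotion time model and cure-fraction resampling of the paper.

```python
import math
import random

def bootstrap_pvalue(stat, data, fit_null, simulate_null, n_boot=999, seed=1):
    """Parametric-bootstrap p-value: fit the null model to the data,
    simulate n_boot datasets from the fitted null, recompute the test
    statistic on each, and compare with the observed value (the +1
    terms avoid a p-value of exactly zero)."""
    rng = random.Random(seed)
    observed = stat(data)
    params = fit_null(data)
    exceed = sum(stat(simulate_null(params, len(data), rng)) >= observed
                 for _ in range(n_boot))
    return (1 + exceed) / (n_boot + 1)

# Toy example: likelihood ratio statistic for H0: exponential rate = 1.
def lrt_exp_rate1(x):
    m = sum(x) / len(x)
    return 2 * len(x) * (m - 1 - math.log(m))

p = bootstrap_pvalue(lrt_exp_rate1,
                     data=[10.0] * 20,            # clearly far from rate 1
                     fit_null=lambda x: None,     # null fully specified here
                     simulate_null=lambda _, n, rng: [rng.expovariate(1.0)
                                                      for _ in range(n)])
```

For data generated far from the null, `p` lands near the smallest attainable value 1/(n_boot + 1), without relying on the large-sample chi-squared approximation.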

8.
Panel count data occur in many fields and a number of approaches have been developed. However, most of these approaches are for situations where there is no terminal event and the observation process is independent of the underlying recurrent event process, either unconditionally or conditional on the covariates. In this paper, we discuss a more general situation where the observation process is informative and there exists a terminal event which precludes further occurrence of the recurrent events of interest. For the analysis, a semiparametric transformation model is presented for the mean function of the underlying recurrent event process among survivors. To estimate the regression parameters, an estimating equation approach is proposed in which an inverse survival probability weighting technique is used. The asymptotic distribution of the proposed estimates is provided. Simulation studies are conducted and suggest that the proposed approach works well for practical situations. An illustrative example is provided. The Canadian Journal of Statistics 41: 174–191; 2013 © 2012 Statistical Society of Canada

9.
The product limit or Kaplan–Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time-to-event data. Application of this method inherently assumes that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event would not be estimated accurately using the KM method. We develop a method to estimate the true survival distribution by incorporating negative predictive values and positive predictive values into a KM-like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.
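For reference, the ordinary product-limit estimator that the paper corrects can be sketched in a few lines. This is the standard KM estimator, without the NPV/PPV adjustment the paper proposes.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    `times` are follow-up times, `events` are 1 for an observed event
    and 0 for right censoring. Returns (event_times, survival_probs)."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s = 1.0
    out_t, out_s = [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = removed = 0
        while i < len(pairs) and pairs[i][0] == t:
            d += pairs[i][1]          # events at time t
            removed += 1              # subjects leaving the risk set
            i += 1
        if d > 0:
            s *= 1 - d / n_at_risk    # product-limit step
            out_t.append(t)
            out_s.append(s)
        n_at_risk -= removed
    return out_t, out_s
```

With misclassified event indicators the `events` vector itself is wrong, which is exactly the bias the paper's predictive-value adjustment targets.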

10.
In this article, we propose a parametric model for the distribution of time to first event when events are overdispersed and can be properly fitted by a Negative Binomial distribution. This is a very common situation in medical statistics, where the occurrence of events is summarized as a count for each patient and the simple Poisson model cannot account for the overdispersion of the data. In this situation, studying the time of occurrence of the first event can be of interest. From the Negative Binomial distribution of counts, we derive a new parametric model for time to first event and apply it to fit the distribution of time to first relapse in multiple sclerosis (MS). We develop the regression model with methods for covariate estimation. We show that, as the Negative Binomial model properly fits relapse count data, this new model matches the distribution of time to first relapse quite closely, as tested in two large datasets of MS patients. Finally, we compare its performance, when fitting time to first relapse in MS, with that of other models widely used in survival analysis (the semiparametric Cox model and the parametric exponential, Weibull, log-logistic and log-normal models).

11.
In survival data analysis, it is common to observe a significant amount of right censoring, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This fact is not accounted for by ordinary survival theory. Consequently, survival models with a cure fraction have been receiving a lot of attention in recent years. In this article, we consider the standard mixture cure rate model, in which a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact of the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed using the Fisher information and observed information matrices.
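The population survival function of the standard mixture cure rate model with exponential latency is a one-liner, sketched here with hypothetical parameter names (`p0` for the cured fraction, `lam` for the exponential rate):

```python
import math

def mixture_cure_survival(t, p0, lam):
    """S(t) = p0 + (1 - p0) * exp(-lam * t): a fraction p0 is cured and
    never experiences the event, so the survival curve plateaus at p0
    instead of decaying to zero as in ordinary survival models."""
    return p0 + (1 - p0) * math.exp(-lam * t)
```

The plateau at `p0` is what distinguishes cure models from ordinary survival theory, where S(t) tends to zero.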

12.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long term follow-up of participants in order to observe a sufficient number of events to precisely estimate the treatment effect. In such studies, observing the outcome of interest during follow-up may be difficult and high rates of censoring may be observed which often leads to reduced power when applying straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed to take advantage of auxiliary information that may potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for the estimation of survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power could be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that the potential outcomes for each individual under treatment and control are independent of treatment group assignment which is unlikely to hold in an observational study setting. In this paper we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) in the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained by using IPTW and that there is improved efficiency by using auxiliary intermediate event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator. 
We illustrate our resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
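The IPTW adjustment to the Kaplan-Meier curve can be sketched by replacing counts with weighted counts. The weights (inverses of the estimated propensity of the received treatment) are assumed to be precomputed; this is an illustrative sketch of the IPTW Kaplan-Meier comparator, not the landmark estimator the paper develops.

```python
def iptw_kaplan_meier(times, events, weights):
    """Kaplan-Meier estimator with inverse-probability-of-treatment
    weights: each subject contributes its weight w_i = 1/P(A=a_i|X_i)
    to the risk set and to the event count, which (under no unmeasured
    confounding) targets the survival curve of the pseudo-population.
    Returns a list of (event_time, survival) pairs."""
    pairs = sorted(zip(times, events, weights))
    risk = sum(w for _, _, w in pairs)   # weighted number at risk
    s = 1.0
    out = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        dw = rw = 0.0
        while i < len(pairs) and pairs[i][0] == t:
            _, e, w = pairs[i]
            dw += e * w                  # weighted events at time t
            rw += w                      # weighted removals
            i += 1
        if dw > 0:
            s *= 1 - dw / risk
            out.append((t, s))
        risk -= rw
    return out
```

With all weights equal to one this reduces to the ordinary Kaplan-Meier estimator.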

13.
In a clinical trial, we may randomize subjects (called clusters) to different treatments (called groups), and make observations from multiple sites (called units) of each subject. In this case, the observations within each subject could be dependent, whereas those from different subjects are independent. If the outcome of interest is the time to an event, we may use the standard rank tests proposed for independent survival data, such as the log-rank and Wilcoxon tests, to test the equality of marginal survival distributions, but their standard error should be modified to accommodate the possible intracluster correlation. In this paper we propose a method of calculating the standard error of the rank tests for two-sample clustered survival data. The method is naturally extended to that for K-sample tests under dependence.

14.
Occasionally, investigators collect auxiliary marks at the time of failure in a clinical study. Because the failure event may be censored at the end of the follow-up period, these marked endpoints are subject to induced censoring. We propose two new families of two-sample tests for the null hypothesis of no difference in mark-scale distribution that allow for arbitrary associations between mark and time. One family of proposed tests is a nonparametric extension of an existing semi-parametric linear test of the same null hypothesis, while a second family of tests is based on novel marked rank processes. Simulation studies indicate that the proposed tests have the desired size and possess adequate statistical power to reject the null hypothesis under a simple change of location in the marginal mark distribution. When the marginal mark distribution has heavy tails, the proposed rank-based tests can be nearly twice as powerful as linear tests.

15.
In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under the two sampling biases with an unspecified distribution of truncation variables, we propose a nonparametric estimator of the joint survival function of two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage estimation procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.

16.
Non-parametric Tests for Recurrent Events under Competing Risks
We consider a data set on nosocomial infections of patients hospitalized in a French intensive care facility. Patients may suffer from recurrent infections of different types and they also have a high risk of death. To deal with such situations, a model of recurrent events with competing risks and a terminal event is introduced. Our aim is to compare the occurrence rates of two types of events. For this purpose, we propose two tests: one to detect if the occurrence rate of a given type of event increases with time; a second to detect if the instantaneous probability of experiencing an event of a given type is always greater than that of another type. The asymptotic properties of the test statistics are derived and Monte Carlo methods are used to study the power of the tests. Finally, the procedures developed are applied to the French nosocomial infections data set.

17.
Current status data arise in studies where the target measurement is the time of occurrence of some event, but observations are limited to indicators of whether or not the event has occurred at the time the sample is collected: only the current status of each individual with respect to event occurrence is observed. Examples of such data arise in several fields, including demography, epidemiology, econometrics and bioassay. Although estimation of the marginal distribution of times of event occurrence is well understood, techniques for incorporating covariate information are not well developed. This paper proposes a semiparametric approach to estimation for regression models of current status data, using techniques from generalized additive modeling and isotonic regression. This procedure provides simultaneous estimates of the baseline distribution of event times and covariate effects. No parametric assumptions about the form of the baseline distribution are required. The results are illustrated using data from a demographic survey of breastfeeding practices in developing countries, and from an epidemiological study of heterosexual Human Immunodeficiency Virus (HIV) transmission. This revised version was published online in July 2006 with corrections to the Cover Date.
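In the fully nonparametric special case (no covariates), the NPMLE of the event-time distribution from current status data reduces to isotonic regression of the status indicators on the observation times, computable with the pool-adjacent-violators algorithm. A minimal sketch of that special case:

```python
def current_status_npmle(obs_times, indicators):
    """NPMLE of the event-time distribution F from current status data:
    at observation time c_i we only see delta_i = 1{T_i <= c_i}. The
    NPMLE of F at the ordered c_i is the isotonic (nondecreasing)
    regression of the delta_i, via pool-adjacent-violators."""
    pairs = sorted(zip(obs_times, indicators))
    blocks = []                       # each block is [sum, count]
    for _, d in pairs:
        blocks.append([float(d), 1])
        # pool while the previous block's mean violates monotonicity
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)    # expand block means back out
    return [t for t, _ in pairs], fitted
```

The fitted values are automatically nondecreasing in the observation time, as a distribution function must be.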

18.
This article presents a bivariate distribution for analyzing the failure data of mechanical and electrical components in the presence of a forewarning or primer event whose occurrence denotes the inception of the failure mechanism that will cause the component failure after an additional random time. The characteristics of the proposed distribution are discussed, and several point estimators of the parameters are illustrated and compared, in the case of complete sampling, via a large Monte Carlo simulation study. Confidence intervals based on asymptotic results are derived, and procedures are given for testing the independence between the occurrence time of the forewarning event and the additional time to failure. Numerical applications based on failure data of cable insulation specimens and of two-component parallel systems are illustrated.

19.
In this article, we consider Cramér–von Mises type goodness-of-fit statistics for the Generalized Pareto law. The tests involve a certain transformation of the original observations, which, at least in the case of a completely specified null distribution, may be viewed as transforming to uniformity and comparing the resulting moments of arbitrary positive order to those of a uniform distribution. The method is shown to be consistent, and the asymptotic null distribution of the test statistic is derived. Simulation results indicate that the proposed test compares well with standard methods based on the empirical distribution function.

20.
The data are n independent random binomial events, each resulting in success or failure. The event outcomes are believed to be trials from a binomial distribution with success probability p, and tests of p = 1/2 are desired. However, there is the possibility that some unidentified event has a success probability different from the common value p for the other n − 1 events. Then, tests of whether this common p equals 1/2 are desired. Fortunately, two-sided tests can be obtained that are simultaneously applicable in both situations. That is, the significance level for a test is the same when one event has a different probability as when all events have the same probability. These tests are the usual equal-tail tests for p = 1/2 (based on n independent trials from a binomial distribution).
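The usual equal-tail two-sided test referenced above can be sketched exactly with Binomial(n, 1/2) tail sums:

```python
from math import comb

def equal_tail_binom_test(successes, n):
    """Exact equal-tail two-sided test of H0: p = 1/2 given `successes`
    out of n trials: p-value = 2 * min(P(X <= s), P(X >= s)), capped
    at 1, where X ~ Binomial(n, 1/2)."""
    lower = sum(comb(n, k) for k in range(0, successes + 1)) / 2 ** n
    upper = sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n
    return min(1.0, 2 * min(lower, upper))
```

Because both tails are computed under the same symmetric null, the equal-tail construction is what makes the level unchanged when one unidentified event deviates from the common p.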
