Similar Documents
20 similar documents found (search time: 15 ms)
1.
Recurrent event data are frequently encountered in clinical trials and may be stopped by a terminal event. In many applications it is of interest to assess treatment efficacy with respect to both the recurrent events and the terminal event simultaneously. In this paper we propose joint covariate-adjusted score test statistics based on joint models of recurrent events and a terminal event. No assumptions on the functional form of the covariates are needed. Simulation results show that the proposed tests can improve efficiency over tests based on the covariate-unadjusted model. The proposed tests are applied to the SOLVD data for illustration.

2.
Recurrent event data with a terminal event often arise in longitudinal studies. Most existing models assume multiplicative covariate effects and model the conditional recurrent event rate given survival. In this article, we propose a marginal additive rates model for recurrent events with a terminal event and develop two procedures for estimating the model parameters. The asymptotic properties of the resulting estimators are established. In addition, some numerical procedures are presented for model checking. The finite-sample behavior of the proposed methods is examined through simulation studies, and the methods are illustrated with an application to a bladder cancer study.

3.
Recurrent event data occur in many clinical and observational studies (Cook and Lawless, Analysis of recurrent event data, 2007), and in these situations there may exist a terminal event, such as death, that is related to the recurrent event of interest (Ghosh and Lin, Biometrics 56:554–562, 2000; Wang et al., J Am Stat Assoc 96:1057–1065, 2001; Huang and Wang, J Am Stat Assoc 99:1153–1165, 2004; Ye et al., Biometrics 63:78–87, 2007). Sometimes there may also exist more than one type of recurrent event; that is, one faces multivariate recurrent event data with a dependent terminal event (Chen and Cook, Biostatistics 5:129–143, 2004). For the analysis of such data, one must take into account the dependence both among the different types of recurrent events and between the recurrent and terminal events. In this paper, we propose a joint modeling approach for regression analysis of such data, and both finite-sample and asymptotic properties of the resulting parameter estimates are established. The methodology is applied to a set of bivariate recurrent event data arising from a study of leukemia patients.

4.
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one approach for adjusting for confounding between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare group differences of IPTW Kaplan–Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. The hazard ratio, as a causal treatment effect, can also be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can easily be extended to the analysis of treatment-effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results suggested that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD. Copyright © 2009 John Wiley & Sons, Ltd.
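The abstract above describes an exact IPTW log-rank test derived in the paper; as rough orientation only, a generic weighted two-sample log-rank statistic of the kind the IPTW approach builds on can be sketched as follows. The function name is hypothetical, and subject-level weights are assumed precomputed as inverse (generalized) propensity scores; this is not the paper's exact formula.

```python
import numpy as np

def iptw_logrank(time, event, group, weight):
    """Weighted two-sample log-rank statistic: at each event time, compare the
    weighted observed events in group 1 with their weighted expectation under
    the null of equal hazards.  `weight` would hold IPTW weights, i.e. the
    inverse of each subject's (generalized) propensity score."""
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = weight[at_risk].sum()                      # weighted number at risk
        n1 = weight[at_risk & (group == 1)].sum()
        dying = (time == t) & (event == 1)
        d = weight[dying].sum()                        # weighted events at t
        d1 = weight[dying & (group == 1)].sum()
        U += d1 - d * n1 / n                           # observed minus expected
        V += d * (n1 / n) * (1.0 - n1 / n)             # hypergeometric-style variance
    return U / np.sqrt(V)                              # approximately N(0,1) under H0
```

With all weights equal to one, this reduces to the ordinary unweighted log-rank statistic.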

5.
In the traditional design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has frequently been used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a design may not be suitable for immunotherapy cancer trials when both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate for such data and consequently could lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single-arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one-sample log-rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log-rank test performs well and is robust to misspecification of the weight, and that the sample size calculation formula also performs well.
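The paper proposes a weighted one-sample log-rank test under a cure rate model with delayed effect; the classical unweighted one-sample log-rank test it builds on can be sketched as follows (the exponential null hazard and function name are assumptions for illustration, not the paper's model):

```python
import math

def one_sample_logrank(times, events, null_rate):
    """Classical (unweighted) one-sample log-rank test against an exponential
    null with constant hazard `null_rate`.  O is the observed event count and
    E the cumulative null hazard accrued over everyone's follow-up; under H0,
    Z = (O - E) / sqrt(E) is approximately standard normal."""
    O = sum(events)
    E = sum(null_rate * t for t in times)   # exponential: Lambda0(t) = rate * t
    return (O - E) / math.sqrt(E)
```

If the observed event count exactly matches the cumulative null hazard, the statistic is zero; large negative values favor the new treatment.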

6.
Bivariate recurrent event data are observed when subjects are at risk of experiencing two different types of recurrent events. In this paper, our interest is in proposing a statistical model for settings in which a substantial proportion of subjects experience no recurrent events but do have a terminal event. In recurrent event data, zero events can be attributable either to a risk-free group or to a terminal event. To reflect both zero inflation and a terminal event simultaneously in bivariate recurrent event data, a joint model with bivariate frailty effects is implemented. Simulation studies are performed to evaluate the suggested models. Infection data from AML (acute myeloid leukemia) patients are analyzed as an application.

7.
This paper considers the comparison of discrete failure time distributions when the survival time of interest measures the elapsed time between two related events and observations on the occurrences of both events may be interval-censored. Such data are often referred to as doubly interval-censored failure time data. If the occurrence of the first event defining the survival time can be observed exactly, the data are usually referred to as interval-censored data. For the comparison problem based on interval-censored failure time data, Sun (1996) proposed a nonparametric test procedure. In this paper we generalize the procedure of Sun (1996) to the doubly interval-censored case, and the generalized test is evaluated by simulations.

8.
The Buckley–James estimator (BJE) is a well-known estimator for linear regression models with censored data. Ritov generalized the BJE to a semiparametric setting and demonstrated that his class of Buckley–James type estimators is asymptotically equivalent to the class of rank-based estimators proposed by Tsiatis. In this article, we revisit this relationship for censored data with covariates missing by design. By exploring a similar relationship between our proposed class of Buckley–James type estimating functions and the class of rank-based estimating functions recently generalized by Nan, Kalbfleisch and Yu, we establish asymptotic properties of our proposed estimators. We also conduct numerical studies to compare the asymptotic efficiencies of the various estimators.

9.
With the emergence of novel therapies exhibiting mechanisms of action distinct from those of traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event endpoint is increasingly common. In these situations, the hazard ratio may not be a valid measure of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference under various scenarios and design parameter setups. The log-rank test is generally powerful when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, but the performance of the RMST test is similar. Under non-PH scenarios where late separation of the survival curves is observed, the RMST-based test has better performance than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) in the experimental arm is expected, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
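The RMST referred to above is simply the area under the Kaplan-Meier curve up to a truncation time τ; a minimal sketch (function names are mine, and ties are handled only through the KM product-limit form):

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival estimate at each distinct event time,
    returned as an array of (time, survival) rows."""
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        n = (time >= t).sum()                     # number at risk just before t
        d = ((time == t) & (event == 1)).sum()    # events at t
        s *= 1.0 - d / n
        surv.append((t, s))
    return np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time up to tau: area under the KM step curve."""
    area, last_t, last_s = 0.0, 0.0, 1.0
    for t, s in km_curve(time, event):
        if t > tau:
            break
        area += last_s * (t - last_t)             # rectangle up to the next drop
        last_t, last_s = t, s
    return area + last_s * (tau - last_t)         # final rectangle out to tau
```

When no events occur before τ the RMST equals τ itself, which is a convenient sanity check.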

10.
Testing the homogeneity of several rival models is of practical interest. In this article, we consider a nonparametric multiple test for non-nested distributions in the context of model selection. Based on the linear sign rank test and the well-known union-intersection principle, we use the magnitudes of the data to improve the performance of the test statistic. We treat the sample and the non-nested rival models as blocks and treatments, respectively, and introduce an extended version of the Friedman test to compare with the results of the test based on the linear sign rank test. A real dataset on waiting times to earthquakes is considered to illustrate the results.
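The extended Friedman test mentioned above ranks the rival models (treatments) within each sample block; the ordinary Friedman chi-square statistic it extends can be sketched as follows (ties within a block are ignored for simplicity, and the function name is mine):

```python
import numpy as np

def friedman_statistic(data):
    """Friedman chi-square statistic for an (n blocks) x (k treatments) array,
    assuming no ties within a block.  Under H0 of no treatment differences,
    Q is approximately chi-square with k - 1 degrees of freedom."""
    n, k = data.shape
    ranks = data.argsort(axis=1).argsort(axis=1) + 1   # within-block ranks 1..k
    rbar = ranks.mean(axis=0)                          # mean rank per treatment
    return 12.0 * n / (k * (k + 1)) * ((rbar - (k + 1) / 2.0) ** 2).sum()
```

The statistic is then referred to a chi-square distribution with k - 1 degrees of freedom.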

11.
The assessment of overall homogeneity of time-to-event curves is a key element of survival analysis in biomedical research. The commonly used testing methods, e.g. the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests, may suffer a significant loss of statistical power under certain circumstances. In this paper we propose a new, robust testing method for comparing the overall homogeneity of survival curves, based on the absolute difference of the areas under the survival curves and using a normal approximation via Greenwood's formula. Monte Carlo simulations are conducted to investigate the performance of the new testing method against the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests under a variety of circumstances. The proposed method performs robustly, with greater power to detect overall differences than the log-rank, Wilcoxon, and Kolmogorov–Smirnov tests in many simulation scenarios. Furthermore, the applicability of the new testing approach is illustrated in a real data example from a kidney dialysis trial. Copyright © 2009 John Wiley & Sons, Ltd.

12.
In this article, we propose an additive-multiplicative rates model for recurrent event data in the presence of a terminal event such as death. The association between the recurrent and terminal events is modeled nonparametrically. For inference on the model parameters, estimating equation approaches are developed, and the asymptotic properties of the resulting estimators are established. The finite-sample behavior of the proposed estimators is evaluated through simulation studies, and an application to a bladder cancer study is provided.

13.
During follow-up, patients with cancer can experience several types of recurrent events and can also die. Over the last decades, several joint models have been proposed to deal with recurrent events with a dependent terminal event. Most of them require the proportional hazards assumption, which could be violated over a long follow-up. We propose a joint frailty model for two types of recurrent events and a dependent terminal event that accounts for potential dependencies between events, with potentially time-varying coefficients. Regression splines are used to model the time-varying coefficients, and baseline hazard functions are estimated with piecewise constant functions or cubic M-spline functions. Maximum likelihood estimation provides the parameter estimates, and likelihood ratio tests are performed to test the time dependency and the statistical association of the covariates. The model was motivated by breast cancer data with a maximum follow-up of close to 20 years.

14.
Recurrent event data are often encountered in longitudinal follow-up studies in many important areas such as biomedical science, econometrics, reliability, criminology and demography. Multiplicative marginal rates models have been used extensively to analyze recurrent event data but often fail to fit the data adequately. In addition, the analysis is complicated by excess zeros in the data as well as the presence of a terminal event that precludes further recurrence. To address these problems, we propose a semiparametric model with an additive rate function and an unspecified baseline, which includes a parameter to accommodate excess zeros and a frailty term to account for a terminal event. A local likelihood procedure is applied to estimate the parameters, and the asymptotic properties of the estimators are established. A simulation study is conducted to evaluate the performance of the proposed methods, and their application is illustrated on a set of tumor recurrence data for bladder cancer.

15.
Semicompeting risks data, in which a subject may experience sequential non-terminal and terminal events, and the terminal event may censor the non-terminal event but not vice versa, are widely available in biomedical studies. We consider the situation in which a proportion of subjects' non-terminal events is missing, so that the observed data become a mixture of "true" semicompeting risks data and partially observed terminal-event-only data. An illness-death multistate model with proportional hazards assumptions is proposed to study the relationship between the non-terminal and terminal events and to provide covariate-specific global and local association measures. Maximum likelihood estimation based on semiparametric regression analysis is used for statistical inference, and asymptotic properties of the proposed estimators are studied using empirical process and martingale arguments. We illustrate the proposed method with simulation studies and an analysis of data from a follicular cell lymphoma study.

16.
In biomedical studies, the event of interest is often recurrent, and within-subject events cannot usually be assumed independent. In addition, individuals within a cluster might not be independent; for example, in multi-center or familial studies, subjects from the same center or family might be correlated. We propose methods for estimating the parameters of two semi-parametric proportional rates/means models for clustered recurrent event data. The first model contains a baseline rate function common across clusters, while the second features cluster-specific baseline rates. The dependence structures for patients within clusters and events within patients are both left unspecified. Estimating equations are derived for the regression parameters, and for the common-baseline model an estimator of the baseline mean function is proposed. The asymptotic distributions of the model parameters are derived, while finite-sample properties are assessed through a simulation study. Using data from a national organ failure registry, the proposed methods are applied to the analysis of technique failures among Canadian dialysis patients.

17.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of the asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permits nonparametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semiparametric efficient score under the complete-data setting and propose a nonparametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived, and small-sample operating characteristics are evaluated via simulation. Although the proposed semiparametric transformation model and nonparametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

18.
In a clinical trial, we may randomize subjects (called clusters) to different treatments (called groups) and make observations from multiple sites (called units) of each subject. In this case, the observations within each subject may be dependent, whereas those from different subjects are independent. If the outcome of interest is the time to an event, we may use the standard rank tests proposed for independent survival data, such as the log-rank and Wilcoxon tests, to test the equality of the marginal survival distributions, but their standard errors should be modified to accommodate possible intracluster correlation. In this paper we propose a method for calculating the standard errors of the rank tests for two-sample clustered survival data. The method extends naturally to K-sample tests under dependence.

19.
Multiple-event data are commonly seen in medical applications. There are two types of events, namely terminal and non-terminal. Statistical analysis of non-terminal events is complicated by dependent censoring; consequently, joint modelling and inference are often needed to avoid the problem of non-identifiability. This article considers regression analysis for multiple-event data with primary interest in a non-terminal event such as disease progression. We generalize the technique of artificial censoring, a popular way of handling dependent censoring, under flexible model assumptions on the two types of events. The proposed method is applied to analyse a data set from a bone marrow transplantation study.

20.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in immunotherapy trial design could therefore result in a severe loss of efficiency. Although few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect, Ye and Yu recently proposed the use of a maximin efficiency robust test (MERT), a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. A sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests, and a real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
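Because the two log-rank scores in the V0 test share their per-event-time contributions, their sum can be read as a single weighted log-rank score with weight 1 before the lag and 2 after it, with the variance adding with squared weights; a sketch under that reading (the lag is assumed known at the design stage, and the function name is mine):

```python
import numpy as np

def v0_statistic(time, event, group, lag):
    """Approximated maximin (V0-style) statistic: a weighted log-rank test
    with weight 1 before the lag and 2 after, equivalent to summing the
    log-rank score over the full data with the score beyond the lag."""
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        w = 2.0 if t > lag else 1.0                 # double weight after the lag
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        u = d1 - d * n1 / n                         # observed minus expected
        v = d * (n1 / n) * (1.0 - n1 / n) * (n - d) / max(n - 1, 1)
        U += w * u
        V += w * w * v                              # weights enter the variance squared
    return U / np.sqrt(V)                           # approximately N(0,1) under H0
```

With the lag set below the first event time the statistic collapses to twice-weighted contributions everywhere, i.e. the ordinary log-rank test up to a constant factor.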


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号