Similar Articles
20 similar articles found.
1.
The term “intercurrent events” has recently been used to describe events in clinical trials that may complicate the definition and calculation of the treatment effect estimand. This paper focuses on the use of an attributable estimand to address intercurrent events. Those events that are considered to be adversely related to randomized treatment (eg, discontinuation due to adverse events or lack of efficacy) are considered attributable and handled with a composite estimand strategy, while a hypothetical estimand strategy is used for intercurrent events not considered to be related to randomized treatment (eg, unrelated adverse events). We explore several options for how to implement this approach and compare them to hypothetical “efficacy” and treatment policy estimand strategies through a series of simulation studies whose design is inspired by recent trials in chronic obstructive pulmonary disease (COPD), and we illustrate the approach through an analysis of a recently completed COPD trial.
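As a rough illustration of the two estimand strategies contrasted above, the hedged R sketch below applies a composite strategy (assigning a fixed poor outcome) to discontinuations judged related to treatment and a hypothetical strategy (treating the data as missing and imputing it) to unrelated discontinuations. The data, the column names (trt, y, disc_reason), the choice of zero change as the poor outcome, and the crude mean imputation are illustrative assumptions, not the paper's implementation.

```r
## Minimal sketch (not the paper's algorithm) of the attributable-estimand idea
## for a continuous endpoint. Column names and values are hypothetical.
set.seed(1)
n  <- 200
df <- data.frame(
  trt         = rep(c("active", "placebo"), each = n / 2),
  y           = rnorm(n, mean = rep(c(0.8, 0.4), each = n / 2)),  # change from baseline
  disc_reason = sample(c("none", "related", "unrelated"), n,
                       replace = TRUE, prob = c(0.7, 0.2, 0.1))
)

## Composite strategy for attributable events: assign a fixed poor-outcome value.
df$y[df$disc_reason == "related"] <- 0          # e.g., no improvement

## Hypothetical strategy for unrelated events: treat the outcome as missing and
## impute it; a crude arm-mean imputation stands in for multiple imputation here.
df$y[df$disc_reason == "unrelated"] <- NA
for (g in unique(df$trt)) {
  arm_mean <- mean(df$y[df$trt == g], na.rm = TRUE)
  df$y[df$trt == g & is.na(df$y)] <- arm_mean
}

summary(lm(y ~ trt, data = df))                 # estimated treatment effect
```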

2.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and time‐varying treatment effect was demonstrated with exhaustive simulation studies. Moreover, although the method requires the specification of a process for events generation, it seems to be robust under erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is represented by a nonrandomized one‐arm study on gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
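The closed-form formula itself is not reproduced here; as a hedged companion, the sketch below checks a candidate sample size by simulation under a gamma-mixed Poisson (negative binomial) process, using a simple Wald test on the log event rate as a stand-in for the robust one-sample test the formula is built on. All rates, the dispersion, the follow-up length, and the function name power_sim are hypothetical.

```r
## Simulation-based sanity check of a candidate sample size for a one-arm
## recurrent-event trial under a gamma-mixed Poisson process. The Wald test on
## the log event rate is a simplified stand-in for the robust one-sample test.
power_sim <- function(n, rate_alt, rate_null, dispersion, followup = 2,
                      alpha = 0.05, nsim = 2000) {
  set.seed(123)
  rejections <- replicate(nsim, {
    # per-patient frailty: gamma with mean 1 and variance = dispersion
    frailty <- rgamma(n, shape = 1 / dispersion, rate = 1 / dispersion)
    counts  <- rpois(n, lambda = frailty * rate_alt * followup)
    rate_hat <- sum(counts) / (n * followup)
    # sandwich-type standard error of the log rate under overdispersion
    se_log <- sqrt(var(counts / followup) / n) / rate_hat
    z <- (log(rate_hat) - log(rate_null)) / se_log
    z < qnorm(alpha)          # one-sided: the new therapy reduces the event rate
  })
  mean(rejections)            # empirical power at sample size n
}

power_sim(n = 30, rate_alt = 0.5, rate_null = 1.0, dispersion = 0.8)
```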

3.
In the analysis of retrospective data or when interpreting results from a single-arm phase II clinical trial relative to historical data, it is often of interest to show plots summarizing time-to-event outcomes comparing treatment groups. If the groups being compared are imbalanced with respect to factors known to influence outcome, these plots can be misleading and seemingly incompatible with results obtained from a regression model that accounts for these imbalances. We consider ways in which covariate information can be used to obtain adjusted curves for time-to-event outcomes. We first review a common model-based method and then suggest another model-based approach that is not as reliant on model assumptions. Finally, an approach that is partially model free is suggested. Each method is applied to an example from hematopoietic cell transplantation.
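One common model-based option of the kind reviewed above is the "direct adjusted" survival curve: fit a Cox model containing treatment and the imbalanced covariates, predict each patient's curve under each treatment assignment, and average the predictions over the pooled covariate distribution. The hedged R sketch below illustrates this on simulated data; the variable names (group, age, disease_stage) and the transplant-versus-chemo labels are hypothetical and not taken from the paper's example.

```r
## Hedged sketch of "direct adjusted" survival curves: predict each patient's
## curve under each treatment assignment from a Cox model and average over the
## pooled covariate distribution. Data and variable names are hypothetical.
library(survival)
set.seed(7)

dat <- data.frame(
  time          = rexp(300, rate = 0.1),
  status        = rbinom(300, 1, 0.8),
  group         = sample(c("transplant", "chemo"), 300, replace = TRUE),
  age           = rnorm(300, 50, 10),
  disease_stage = sample(1:3, 300, replace = TRUE)
)

fit <- coxph(Surv(time, status) ~ group + age + disease_stage, data = dat)

adjusted_curve <- function(fit, data, group_level) {
  newdat <- data
  newdat$group <- group_level              # set every patient to the same arm
  sf <- survfit(fit, newdata = newdat)     # one predicted curve per patient
  list(time = sf$time, surv = rowMeans(sf$surv))
}

adj_trt  <- adjusted_curve(fit, dat, "transplant")
adj_ctrl <- adjusted_curve(fit, dat, "chemo")

plot(adj_ctrl$time, adj_ctrl$surv, type = "s", ylim = c(0, 1),
     xlab = "Time", ylab = "Adjusted survival probability")
lines(adj_trt$time, adj_trt$surv, type = "s", lty = 2)
```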

4.
The analysis of recurrent event data in clinical trials presents a number of difficulties. The statistician is faced with issues of event dependency, composite endpoints, unbalanced follow‐up times and informative dropout. It is not unusual, therefore, for statisticians charged with responsibility for providing reliable and valid analyses to need to derive new methods specific to the clinical indication under investigation. One method is proposed that appears to have possible advantages over those that are often used in the analysis of recurrent event data in clinical trials. Based on an approach that counts periods of time with events instead of single event counts, the proposed method makes an adjustment for patient time on study and incorporates heterogeneity by estimating an individual per‐patient risk of experiencing a morbid event. Monte Carlo simulations based on real clinical study data demonstrate that the proposed method consistently outperforms other measures of morbidity. Copyright © 2003 John Wiley & Sons, Ltd.

5.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in utilizing recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade/frequency/severity of adverse events may be more important than simply prolonged survival in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model to allow for multiple/recurrent events in the case of marginal models (where all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed and illustrate how to fit these models and provide examples of the methods, together with an interpretation of the results.

6.
The number of patient‐years needed to treat (NPYNT), also called the event‐based number needed to treat, to avoid one additional exacerbation has been reported in recently published respiratory trials, but the confidence intervals are not routinely reported. The challenge of constructing confidence intervals for NPYNT is due to the fact that exacerbation data or count data in general are usually analyzed using Poisson‐based models such as Poisson or negative binomial regression and the rate ratio is the natural metric for between‐treatment comparison, while NPYNT is based on rate difference, which is not usually calculated for those models. Therefore, the variance estimates from these analysis models are directly related to the rate ratio rather than the rate difference. In this paper, we propose several methods to construct confidence intervals for the NPYNT, assuming that the event rates are estimated using Poisson or negative binomial regression models. The coverage property of the confidence intervals constructed with these methods is assessed by simulations. Copyright © 2014 John Wiley & Sons, Ltd.
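The paper's closed-form interval constructions are not reproduced here; the hedged sketch below shows the underlying quantities on simulated data: a negative binomial model with a log follow-up offset gives arm-specific event rates, NPYNT is the reciprocal of their difference, and a bootstrap percentile interval serves as a simple (but computationally heavier) alternative to the proposed methods. All rates, the dispersion, and the follow-up distribution are hypothetical.

```r
## Hedged sketch: NPYNT = 1 / (control rate - treated rate) from a negative
## binomial model with a log follow-up offset, with a bootstrap percentile CI.
library(MASS)
set.seed(42)

n  <- 400
df <- data.frame(
  trt      = rep(0:1, each = n / 2),
  followup = runif(n, 0.5, 1.5)                       # patient-years on study
)
true_rate <- ifelse(df$trt == 1, 0.8, 1.2)            # exacerbations per patient-year
df$events <- rnbinom(n, mu = true_rate * df$followup, size = 1)

npynt <- function(d) {
  fit   <- glm.nb(events ~ trt + offset(log(followup)), data = d)
  rates <- exp(c(coef(fit)[1], sum(coef(fit))))       # control and treated rates
  1 / (rates[1] - rates[2])                           # patient-years needed to treat
}

est  <- npynt(df)
boot <- replicate(500, npynt(df[sample(nrow(df), replace = TRUE), ]))
c(estimate = est, quantile(boot, c(0.025, 0.975)))
```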

7.
A stratified analysis of the differences in proportions has been widely employed in epidemiological research, social sciences, and drug development. It provides a useful framework for combining data across strata to produce a common effect. However, for rare events with incidence rates close to zero, popular confidence intervals for risk differences in a stratified analysis may not have appropriate coverage probabilities that approach the nominal confidence levels and the algorithms may fail to produce a valid confidence interval because of zero events in both the arms of a stratum. The main objective of this study is to evaluate the performance of certain methods commonly employed to construct confidence intervals for stratified risk differences when the response probabilities are close to a boundary value of zero or one. Additionally, we propose an improved stratified Miettinen–Nurminen confidence interval that exhibits a superior performance over standard methods while avoiding computational difficulties involving rare events. The proposed method can also be employed when the response probabilities are close to one.

8.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when it is likely to cause fewer side effects compared to the active control. In the case of time-to-event endpoints, existing methods of sample size calculation are done either assuming proportional hazards between the two study arms, or assuming exponentially distributed lifetimes. In scenarios where these assumptions are not true, there are few reliable methods for calculating the sample sizes for a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained either from a meta-analysis of prior studies, or strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternate method of sample size calculation based on the assumption of Proportional Time is proposed. This method utilizes the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on choice of the non-inferiority margin, and the indirect testing of superiority of treatment compared to placebo.
Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size

9.
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non‐parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
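The hedged sketch below illustrates the modelling framework described above on simulated (not trial) data: a Poisson regression and a negative binomial regression for episode counts, each reported as a rate ratio with a 95% confidence interval; the true rate ratio of 0.7 and the dispersion are assumptions for illustration.

```r
## Sketch of Poisson versus negative binomial (Poisson with a gamma random
## effect) regression for overdispersed counts, reported as rate ratios.
## Data are simulated, not from the solifenacin/mirabegron trials.
library(MASS)
set.seed(11)

n   <- 500
trt <- rep(0:1, each = n / 2)
mu  <- exp(log(4) + log(0.7) * trt)                 # true rate ratio 0.7
y   <- rnbinom(n, mu = mu, size = 1.2)              # overdispersed episode counts

pois <- glm(y ~ trt, family = poisson)
nb   <- glm.nb(y ~ trt)

## Rate ratio (treatment vs control) with 95% CI from each model
exp(cbind(RR = coef(pois)["trt"], confint.default(pois)["trt", , drop = FALSE]))
exp(cbind(RR = coef(nb)["trt"],   confint.default(nb)["trt", , drop = FALSE]))

## Overdispersion makes the Poisson CI too narrow; the negative binomial model
## accounts for it through its dispersion parameter (nb$theta).
```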

10.
11.
In many clinical research applications the time to occurrence of one event of interest, which may be obscured by another (so-called competing) event, is investigated. Specific interventions can only have an effect on the endpoint they address, or research questions might focus on risk factors for a certain outcome. Different approaches for the analysis of time-to-event data in the presence of competing risks have been introduced in recent decades, including some newer methodologies that are not yet frequently used in the analysis of competing risks data. Cause-specific hazard regression, subdistribution hazard regression, mixture models, vertical modelling and the analysis of time-to-event data based on pseudo-observations are described in this article and are applied to a dataset of a cohort study intended to establish risk stratification for cardiac death after myocardial infarction. Data analysts are encouraged to use the appropriate methods for their specific research questions by comparing different regression approaches in the competing risks setting regarding assumptions, methodology and interpretation of the results. Notes on application of the mentioned methods using the statistical software R are presented and extensions to the presented standard methods proposed in statistical literature are mentioned.
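As a hedged illustration of two of the regression approaches listed above, the sketch below fits a cause-specific Cox model (survival package) and a Fine-Gray subdistribution hazard model (cmprsk package) to simulated data in which cardiac death competes with other death; the covariates (age, lvef), the event proportions, and the rates are hypothetical and unrelated to the cohort study analysed in the article.

```r
## Cause-specific versus subdistribution hazard regression on simulated
## competing risks data. Covariate effects are omitted from the data-generating
## step for brevity, so estimates should be near zero.
library(survival)
library(cmprsk)
set.seed(3)

n   <- 600
dat <- data.frame(age = rnorm(n, 65, 8), lvef = rnorm(n, 45, 10))
# event type: 0 = censored, 1 = cardiac death, 2 = other death
dat$etype <- sample(0:2, n, replace = TRUE, prob = c(0.6, 0.25, 0.15))
dat$time  <- rexp(n, rate = 0.1)

## Cause-specific hazard: competing events are treated as censored
cs_fit <- coxph(Surv(time, etype == 1) ~ age + lvef, data = dat)

## Subdistribution hazard (Fine-Gray): competing events stay in the risk set
fg_fit <- crr(ftime = dat$time, fstatus = dat$etype,
              cov1 = as.matrix(dat[, c("age", "lvef")]),
              failcode = 1, cencode = 0)

summary(cs_fit)   # effect on the rate of cardiac death among those still event-free
summary(fg_fit)   # effect on the cumulative incidence of cardiac death
```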

12.
Pragmatic trials offer practical means of obtaining real-world evidence to help improve decision-making in comparative effectiveness settings. Unfortunately, incomplete adherence is a common problem in pragmatic trials. The methods commonly used in randomized controlled trials often cannot handle the added complexity imposed by incomplete adherence, resulting in biased estimates. Several naive methods and advanced causal inference methods (e.g., inverse probability weighting and instrumental variable-based approaches) have been used in the literature to deal with incomplete adherence. Practitioners and applied researchers are often confused about which method to consider under a given setting. The current work aims to review commonly used statistical methods for dealing with non-adherence along with their key assumptions, advantages, and limitations, with a particular focus on pragmatic trials. We have listed the applicable settings for these methods and provided a summary of available software. All methods were applied to two hypothetical datasets to demonstrate how they perform in a given scenario, along with the R code. The key considerations include the type of intervention strategy (point treatment settings, where treatment is administered only once, versus sustained treatment settings, where treatment has to be continued over time) and availability of data (e.g., the extent of measured or unmeasured covariates that are associated with adherence, dependent confounding impacted by past treatment, and potential violation of assumptions). This study will guide practitioners and applied researchers in choosing the appropriate statistical method to address incomplete adherence in pragmatic trial settings for both the point and sustained treatment strategies.
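As a hedged example of one of the reviewed approaches in a point treatment setting, the sketch below applies inverse probability weighting: adherence is modelled on a baseline covariate, adherent subjects are weighted by the inverse of their estimated adherence probability, and a weighted outcome regression is fitted among adherers. The data-generating model, the single covariate severity, and the linear outcome model are illustrative assumptions and deliberately ignore the unmeasured-confounding and sustained-treatment complications discussed in the review.

```r
## Hedged sketch of inverse probability weighting for incomplete adherence in a
## point treatment setting. Variable names and the data-generating model are
## hypothetical; unmeasured confounding is assumed absent.
set.seed(2024)
n  <- 1000
df <- data.frame(
  trt      = rbinom(n, 1, 0.5),
  severity = rnorm(n)
)
# sicker patients are less likely to adhere, and severity also worsens the outcome
df$adherent <- rbinom(n, 1, plogis(1.2 - 0.8 * df$severity))
df$y        <- rnorm(n, mean = 1 - 0.5 * df$trt * df$adherent + 0.7 * df$severity)

# 1. model the probability of adhering given baseline covariates
p_adh <- predict(glm(adherent ~ severity + trt, family = binomial, data = df),
                 type = "response")

# 2. weight each adherent subject by 1 / P(adherent | covariates)
df$w <- 1 / p_adh

# 3. weighted outcome regression among adherers (a per-protocol-type contrast)
adh     <- df[df$adherent == 1, ]
ipw_fit <- lm(y ~ trt, data = adh, weights = w)
summary(ipw_fit)$coefficients["trt", ]
```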

13.
In this paper, we consider the analysis of recurrent event data that examines the differences between two treatments. The outcomes that are considered in the analysis are the pre-randomisation event count and post-randomisation times to first and second events with associated cure fractions. We develop methods that allow pre-randomisation counts and two post-randomisation survival times to be jointly modelled under a Poisson process framework, assuming that outcomes are predicted by (unobserved) event rates. We apply these methods to data that examine the difference between immediate and deferred treatment policies in patients presenting with single seizures or early epilepsy. We find evidence to suggest that post-randomisation seizure rates change at randomisation and following a first seizure after randomisation. We also find that there are cure rates associated with the post-randomisation times to first and second seizures. The increase in power over standard survival techniques, offered by the joint models that we propose, resulted in more precise estimates of the treatment effect and the ability to detect interactions with covariate effects.

14.
Chronic disease processes often feature transient recurrent adverse clinical events. Treatment comparisons in clinical trials of such disorders must be based on valid and efficient methods of analysis. We discuss robust strategies for testing treatment effects with recurrent events using methods based on marginal rate functions, partially conditional rate functions, and methods based on marginal failure time models. While all three approaches lead to valid tests of the null hypothesis when robust variance estimates are used, they differ in power. Moreover, some approaches lead to estimators of treatment effect which are more easily interpreted than others. To investigate this, we derive the limiting value of estimators of treatment effect from marginal failure time models and illustrate their dependence on features of the underlying point process, as well as the censoring mechanism. Through simulation, we show that methods based on marginal failure time distributions are sensitive to treatment effects delaying the occurrence of the very first recurrences. Methods based on marginal or partially conditional rate functions perform well in situations where treatment effects persist or in settings where the aim is to summarize long-term data on efficacy.
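One standard way to operationalize a marginal rate-function comparison of the kind discussed above is an Andersen-Gill counting-process Cox model with a robust sandwich variance, clustered by patient (a Lin-Wei-Yang-Ying type analysis). The hedged sketch below fits this model to simulated recurrent event data; the event rates, follow-up length, and data layout are assumptions for illustration only.

```r
## Marginal rate-function comparison via an Andersen-Gill counting-process Cox
## model with a robust sandwich variance clustered by patient. Rates and the
## two-year follow-up are hypothetical.
library(survival)
set.seed(5)

make_patient <- function(id, trt) {
  rate  <- if (trt == 1) 0.6 else 1.0          # events per year
  gaps  <- rexp(25, rate)                      # candidate inter-event gap times
  times <- cumsum(gaps)
  times <- times[times < 2]                    # two years of follow-up
  stops <- c(times, 2)
  data.frame(id = id, trt = trt,
             start  = c(0, head(stops, -1)),
             stop   = stops,
             status = c(rep(1, length(times)), 0))
}
events <- do.call(rbind, lapply(1:200, function(i)
  make_patient(i, trt = as.integer(i <= 100))))

rate_fit <- coxph(Surv(start, stop, status) ~ trt + cluster(id), data = events)
summary(rate_fit)   # exp(coef) is the rate ratio; robust SE used for inference
```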

15.
For cancer clinical trials of immunotherapy and molecularly targeted therapy, a time-to-event endpoint is often desired. In this paper, we present an event-driven approach for Bayesian one-stage and two-stage single-arm phase II trial designs. Two versions of Bayesian one-stage designs are proposed with executable algorithms, and we also develop theoretical relationships between the frequentist and Bayesian designs. These findings help investigators who want to design a trial using a Bayesian approach gain an explicit understanding of how the frequentist properties can be achieved. Moreover, the proposed Bayesian designs, which use the exact posterior distributions, accommodate single-arm phase II trials with small sample sizes. We also propose an optimal two-stage approach, which can be regarded as an extension of Simon's two-stage design to the time-to-event endpoint. Comprehensive simulations were conducted to explore the frequentist properties of the proposed Bayesian designs, and an R package, BayesDesign, can be accessed via CRAN for convenient use of the proposed methods.

16.
In recent years, different approaches for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event, have been introduced. Approaches focusing either on cause-specific or on subdistribution hazard rates have been presented in the statistical literature. Many new approaches use complicated weighting techniques or resampling methods, which do not allow an analytical evaluation of these methods. Simulation studies therefore often replace analytical comparisons, since they can be performed more easily and allow investigation of non-standard scenarios. For adequate simulation studies, the generation of appropriate random numbers is essential. We present an approach to generate competing risks data following flexible prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen in such a way that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
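The flexible scheme in the abstract allows time-dependent cause-specific hazards; the hedged sketch below shows a simpler, widely used special case (the simulation design of Fine and Gray, 1999) in which the event of interest follows a unit-exponential-mixture subdistribution with a proportional subdistribution hazard in a binary covariate, and the competing event gets an exponential cause-specific time. The parameter values and the function name sim_crisk are hypothetical.

```r
## Generate competing risks data with a prespecified subdistribution hazard
## ratio exp(beta1) for the event of interest, given a binary covariate x.
sim_crisk <- function(n, p = 0.3, beta1 = log(2), beta2 = 0, cens_rate = 0.2) {
  x    <- rbinom(n, 1, 0.5)
  eta  <- exp(beta1 * x)
  # mass of the type-1 subdistribution: P(event type = 1 | x)
  p1   <- 1 - (1 - p)^eta
  type <- ifelse(runif(n) < p1, 1, 2)

  t  <- numeric(n)
  u  <- runif(n)
  i1 <- type == 1
  # invert the conditional subdistribution F1(t | x) / F1(Inf | x) for type-1 times
  inner  <- (1 - u[i1] * p1[i1])^(1 / eta[i1])
  t[i1]  <- -log(1 - (1 - inner) / p)
  # competing event: exponential cause-specific time
  t[!i1] <- rexp(sum(!i1), rate = exp(beta2 * x[!i1]))

  cens <- rexp(n, cens_rate)
  data.frame(time   = pmin(t, cens),
             status = ifelse(t <= cens, type, 0),   # 0 = censored
             x      = x)
}

d <- sim_crisk(2000)
table(d$status)
```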

17.
Endpoints in clinical trials are often highly correlated. However, the commonly used multiple testing procedures in clinical trials either do not take into consideration the correlations among test statistics or can only exploit known correlations. Westfall and Young constructed a resampling-based stepdown method that implicitly utilizes the correlation structure of test statistics in situations with unknown correlations. However, their method requires a “subset pivotality” assumption. Romano and Wolf proposed a more general stepdown method, which does not require such an assumption. There is at present little experience with the application of such methods in analyzing clinical trial data. We advocate the application of resampling-based multiple testing procedures to clinical trials data when appropriate. We have conjectured that the resampling-based stepdown methods can be extended to a stepup procedure under appropriate assumptions and examined the performance of both stepdown and stepup methods under a variety of correlation structures and distribution types. Results from our simulation studies support the use of the resampling-based methods under various scenarios, including binary data and small samples, with strong control of the familywise type I error rate (FWER). Under positive dependence, and for binary data even under independence, the resampling-based methods are more powerful than the Holm and Hochberg methods. Last, we illustrate the advantage of the resampling-based stepwise methods with two clinical trial data examples: a cardiovascular outcome trial and an oncology trial.
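To make the resampling idea concrete, the hedged sketch below implements a permutation-based step-down max-T procedure in the spirit of the Westfall-Young and Romano-Wolf methods (the published algorithms differ in detail, for example in how the null distribution is resampled) for three correlated binary endpoints in a two-arm trial; the data-generating model and effect sizes are hypothetical.

```r
## Permutation-based step-down max-T adjusted p-values for three correlated
## binary endpoints; treatment labels are permuted jointly so the correlation
## structure is preserved under the null.
set.seed(99)
n   <- 120
arm <- rep(0:1, each = n / 2)
latent <- matrix(rnorm(n * 3), n, 3) + rnorm(n)                # shared term induces correlation
y <- (latent + cbind(0.6 * arm, 0.4 * arm, 0 * arm) > 0.5) * 1 # three binary endpoints

tstat <- function(y, arm) {
  apply(y, 2, function(col) {
    p1 <- mean(col[arm == 1]); p0 <- mean(col[arm == 0])
    (p1 - p0) / sqrt(p1 * (1 - p1) / sum(arm == 1) + p0 * (1 - p0) / sum(arm == 0))
  })
}

obs  <- abs(tstat(y, arm))
ord  <- order(obs, decreasing = TRUE)
B    <- 2000
perm <- replicate(B, abs(tstat(y, sample(arm))))               # 3 x B null statistics

## step-down: at rank k, compare against the max over the k-th and less
## significant endpoints, then enforce monotonicity of the adjusted p-values
adj <- numeric(length(obs))
for (k in seq_along(ord)) {
  idx  <- ord[k:length(ord)]
  maxT <- apply(perm[idx, , drop = FALSE], 2, max)
  adj[ord[k]] <- mean(maxT >= obs[ord[k]])
}
adj[ord] <- cummax(adj[ord])
adj                      # FWER-adjusted p-values for the three endpoints
```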

18.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In the past, many clinical trials have withdrawn subjects from the study when they prematurely stopped their randomised treatment and have therefore only collected ‘on‐treatment’ data. Thus, analyses addressing a treatment policy estimand have been restricted to imputing missing data under assumptions drawn from these data only. Many confirmatory trials are now continuing to collect data from subjects in a study even after they have prematurely discontinued study treatment as this event is irrelevant for the purposes of a treatment policy estimand. However, despite efforts to keep subjects in a trial, some will still choose to withdraw. Recent publications for sensitivity analyses of recurrent event data have focused on the reference‐based imputation methods commonly applied to continuous outcomes, where imputation for the missing data for one treatment arm is based on the observed outcomes in another arm. However, the existence of data from subjects who have prematurely discontinued treatment but remained in the study has now raised the opportunity to use this ‘off‐treatment’ data to impute the missing data for subjects who withdraw, potentially allowing more plausible assumptions for the missing post‐study‐withdrawal data than reference‐based approaches. In this paper, we introduce a new imputation method for recurrent event data in which the missing post‐study‐withdrawal event rate for a particular subject is assumed to reflect that observed from subjects during the off‐treatment period. The method is illustrated in a trial in chronic obstructive pulmonary disease (COPD) where the primary endpoint was the rate of exacerbations, analysed using a negative binomial model.
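The hedged sketch below illustrates the general idea (not the paper's exact algorithm): an off-treatment exacerbation rate, taken here as a fixed value that would in practice be estimated from retrieved-dropout data, is used to multiply-impute the unobserved post-withdrawal events of subjects who withdrew; each completed data set is analysed with a negative binomial model and the results are pooled with Rubin's rules. Column names, rates, and the number of imputations are assumptions.

```r
## Hedged sketch: impute post-withdrawal exacerbation counts from an
## off-treatment rate, analyse each completed data set with a negative binomial
## model, and pool with Rubin's rules. Values are hypothetical.
library(MASS)
set.seed(8)

n  <- 300
df <- data.frame(trt = rep(0:1, each = n / 2))
df$withdrew  <- rbinom(n, 1, 0.15)                               # withdrew from the study
df$obs_time  <- ifelse(df$withdrew == 1, runif(n, 0.3, 0.8), 1)  # years observed
df$miss_time <- 1 - df$obs_time                                  # unobserved part of the year
rate_on      <- ifelse(df$trt == 1, 0.8, 1.2)                    # on-treatment rates
df$events    <- rnbinom(n, mu = rate_on * df$obs_time, size = 1)

rate_off <- 1.4   # off-treatment rate; in practice estimated from retrieved dropouts

M <- 20
draws <- replicate(M, {
  extra <- ifelse(df$withdrew == 1,
                  rnbinom(n, mu = rate_off * df$miss_time, size = 1), 0)
  d   <- data.frame(y = df$events + extra, trt = df$trt)
  fit <- glm.nb(y ~ trt, data = d)
  c(est = unname(coef(fit)["trt"]), var = vcov(fit)["trt", "trt"])
})
est <- mean(draws["est", ])
W   <- mean(draws["var", ])                                      # within-imputation variance
B   <- var(draws["est", ])                                       # between-imputation variance
c(log_rate_ratio = est, se = sqrt(W + (1 + 1 / M) * B))          # Rubin's rules
```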

20.
The trimmed mean is a method of dealing with patient dropout in clinical trials that considers early discontinuation of treatment a bad outcome rather than leading to missing data. The present investigation is the first comprehensive assessment of the approach across a broad set of simulated clinical trial scenarios. In the trimmed mean approach, all patients who discontinue treatment prior to the primary endpoint are excluded from analysis by trimming an equal percentage of bad outcomes from each treatment arm. The untrimmed values are used to calculate means or mean changes. An explicit intent of trimming is to favor the group with lower dropout because having more completers is a beneficial effect of the drug, or conversely, higher dropout is a bad effect. In the simulation study, the difference between treatments estimated from trimmed means was greater than the corresponding effect estimated from untrimmed means when dropout favored the experimental group, and vice versa. The trimmed mean estimates a unique estimand. Therefore, comparisons with other methods are difficult to interpret and the utility of the trimmed mean hinges on the reasonableness of its assumptions: dropout is an equally bad outcome in all patients, and adherence decisions in the trial are sufficiently similar to clinical practice in order to generalize the results. Trimming might be applicable to other inter‐current events such as switching to or adding rescue medicine. Given the well‐known biases in some methods that estimate effectiveness, such as baseline observation carried forward and non‐responder imputation, the trimmed mean may be a useful alternative when its assumptions are justifiable.
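The hedged sketch below shows the mechanics described above on simulated data: dropouts are ranked as the worst outcomes, the same percentage of worst outcomes (at least the larger of the two dropout rates) is trimmed from each arm, and the means of the remaining untrimmed values are compared. The outcome scale (lower change from baseline is worse), the dropout rates, and the variable names are illustrative assumptions.

```r
## Hedged sketch of the trimmed-mean analysis. Lower change-from-baseline values
## are assumed to be worse; dropouts are ranked worst. Values are hypothetical.
set.seed(17)
n  <- 200
df <- data.frame(
  trt     = rep(c("drug", "placebo"), each = n / 2),
  y       = c(rnorm(n / 2, 1.0), rnorm(n / 2, 0.4)),             # change from baseline
  dropout = rbinom(n, 1, prob = rep(c(0.15, 0.30), each = n / 2))
)
df$y[df$dropout == 1] <- NA        # outcome not observed after early discontinuation

trimmed_mean <- function(y, dropout, trim_frac) {
  score <- ifelse(dropout == 1, -Inf, y)       # dropouts ranked as the worst outcomes
  keep  <- order(score)[-seq_len(ceiling(trim_frac * length(y)))]
  mean(y[keep])                                # mean of the untrimmed values
}

# trim the same fraction from both arms: at least the larger observed dropout rate
trim_frac <- max(tapply(df$dropout, df$trt, mean))
by_arm <- tapply(seq_len(n), df$trt, function(i)
  trimmed_mean(df$y[i], df$dropout[i], trim_frac))
by_arm["drug"] - by_arm["placebo"]             # trimmed-mean treatment difference
```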
