Similar articles
20 similar articles found (search time: 0 ms)
1.
A sample size justification is a vital part of any trial design. However, estimating the number of participants required to give a meaningful result is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for non‐inferiority and equivalence trials are summarised. Practical advice and examples are provided that illustrate how to carry out the calculations by hand and using the app SampSize. Copyright © 2015 John Wiley & Sons, Ltd.
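For a continuous endpoint, the by-hand calculation summarised above reduces to the usual normal-approximation formula n = 2*sigma^2*(z_{1-alpha} + z_{1-beta})^2 / (margin + true_diff)^2 per group. A minimal sketch, assuming a one-sided test and equal allocation (the function name, defaults, and example numbers are illustrative, not taken from the paper or from SampSize):

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(sigma, margin, alpha=0.025, power=0.9, true_diff=0.0):
    """Per-group sample size for a non-inferiority test of two means.

    Assumes a one-sided test at level `alpha`, equal allocation, and a
    normally distributed endpoint with common SD `sigma`.  `margin` is the
    non-inferiority margin (> 0) and `true_diff` the assumed true
    treatment-minus-control difference (0 = treatments truly equal).
    """
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    n = 2 * sigma**2 * (z_a + z_b) ** 2 / (margin + true_diff) ** 2
    return ceil(n)

# e.g. sigma = 10, margin = 5, one-sided alpha = 0.025, 90% power
print(ni_sample_size(10, 5))
```

For an equivalence trial the same ingredients appear, but two one-sided tests are performed, so the power term changes accordingly.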

2.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six statistical analyses in non‐inferiority trial settings for longitudinal continuous data: four likelihood‐based linear mixed effects models and analysis of covariance (ANCOVA) applied to two different data sets. ANCOVA is valid when the missing data are missing completely at random. Likelihood‐based linear mixed effects model approaches are valid when the missing data are missing at random. The pattern‐mixture model (PMM) was developed to incorporate a non‐random missingness mechanism. Our simulations suggest that two linear mixed effects models (one using an unstructured covariance matrix for within‐subject correlation with no random effects, the other a first‐order autoregressive covariance matrix for within‐subject correlation with random coefficient effects) provide good control of the type 1 error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA using a last‐observation‐carried‐forward imputed data set is the worst method in terms of bias and T1E rate. PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are missing not at random, and is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.

3.
In drug development, non‐inferiority tests are often employed to determine the difference between two independent binomial proportions. Many test statistics for non‐inferiority are based on the frequentist framework. However, research on non‐inferiority in the Bayesian framework is limited. In this paper, we suggest a new Bayesian index τ = P(π1 > π2 − Δ0 | X1, X2), where X1 and X2 denote binomial random variables with numbers of trials n1 and n2 and parameters π1 and π2, respectively, and Δ0 > 0 is the non‐inferiority margin. We show two calculation methods for τ: an approximate method that uses a normal approximation and an exact method that uses the exact posterior PDF. We compare the approximate probability with the exact probability for τ. Finally, we present the results of actual clinical trials to show the utility of the index τ. Copyright © 2013 John Wiley & Sons, Ltd.
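The exact method above integrates the exact posterior PDF; purely for illustration, the same index can be estimated by sampling from the Beta posteriors. The priors, function name, and example counts below are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_mc(x1, n1, x2, n2, delta0, draws=200_000, a=1.0, b=1.0):
    """Monte Carlo estimate of tau = P(pi1 > pi2 - delta0 | X1, X2).

    Independent Beta(a, b) priors on pi1 and pi2 give Beta posteriors;
    tau is estimated by sampling from them.  (The paper derives an
    exact expression; sampling is used here only as an illustration.)
    """
    p1 = rng.beta(a + x1, b + n1 - x1, draws)
    p2 = rng.beta(a + x2, b + n2 - x2, draws)
    return float(np.mean(p1 > p2 - delta0))

# e.g. 42/50 responders vs 40/50 with margin delta0 = 0.10
print(tau_mc(42, 50, 40, 50, 0.10))
```

Non-inferiority would be claimed when τ exceeds a pre-specified threshold such as 0.95.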

4.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between‐treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post‐rescue data. We caution that the commonly used mixed‐effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient‐level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between‐treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
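In miniature, the proposed mean replacement amounts to a pattern mixture: the arm-level mean is a weighted average of the observed completer mean and an explicitly assumed dropout mean. A sketch with hypothetical numbers (the function and all values are illustrative, not from the paper's examples):

```python
def arm_mean(completer_mean, completer_frac, assumed_dropout_mean):
    """Arm-level mean as a mixture over patterns: observed completers
    plus dropouts whose mean is set by an explicit assumption rather
    than implicit MAR-based imputation."""
    return (completer_frac * completer_mean
            + (1 - completer_frac) * assumed_dropout_mean)

# Primary analysis: drug-arm dropouts assigned the placebo mean.
placebo_mean = -0.4          # HbA1c change under placebo (hypothetical)
drug_completer_mean = -1.2   # observed among drug-arm completers (hypothetical)
print(arm_mean(drug_completer_mean, 0.8, placebo_mean))

# Tipping-point sensitivity: make the assumed dropout mean progressively
# worse and report the resulting drug-arm estimate.
for shift in (0.0, 0.2, 0.4):
    print(shift, arm_mean(drug_completer_mean, 0.8, placebo_mean + shift))
```

The sensitivity analysis then asks at which shift the between-arm comparison stops being significant.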

5.
Non‐inferiority trials aim to demonstrate that an experimental therapy is not unacceptably worse than an active reference therapy already in use. When applicable, a three‐arm non‐inferiority trial, including an experimental therapy, an active reference therapy, and a placebo, is often recommended to assess the assay sensitivity and internal validity of a trial. In this paper, we share some practical considerations based on our experience from a phase III three‐arm non‐inferiority trial. First, we discuss the determination of the total sample size and its optimal allocation based on the overall power of the non‐inferiority testing procedure and provide ready‐to‐use R code for implementation. Second, we consider the non‐inferiority goal of ‘capturing all possibilities’ and show that it naturally corresponds to a simple two‐step testing procedure. Finally, using this two‐step non‐inferiority testing procedure as an example, we extensively compare commonly used frequentist p‐value methods with the Bayesian posterior probability approach. Copyright © 2016 John Wiley & Sons, Ltd.

6.
The generalized method of moments (GMM) and empirical likelihood (EL) are popular methods for combining sample and auxiliary information. These methods are used in very diverse fields of research, where competing theories often suggest variables satisfying different moment conditions. Results in the literature have shown that the efficient‐GMM (GMME) and maximum empirical likelihood (MEL) estimators have the same asymptotic distribution to order n^(−1/2) and that both estimators are asymptotically semiparametric efficient. In this paper, we demonstrate that when data are missing at random from the sample, the utilization of some well‐known missing‐data handling approaches proposed in the literature can yield GMME and MEL estimators with nonidentical properties; in particular, it is shown that the GMME estimator is semiparametric efficient under all the missing‐data handling approaches considered but that the MEL estimator is not always efficient. A thorough examination of the reason for the nonequivalence of the two estimators is presented. A particularly strong feature of our analysis is that we do not assume smoothness in the underlying moment conditions. Our results are thus relevant to situations involving nonsmooth estimating functions, including quantile and rank regressions, robust estimation, the estimation of receiver operating characteristic (ROC) curves, and so on.

7.
In the absence of placebo‐controlled trials, the efficacy of a test treatment can alternatively be examined by showing its non‐inferiority to an active control; that is, the test treatment is not worse than the active control by a pre‐specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non‐inferiority setup involves a network of direct and indirect comparisons between the test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta‐analysis that incorporates the uncertainty and heterogeneity of the historical trials into the non‐inferiority trial in a data‐driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non‐inferiority testing are discussed that are analogs of the synthesis and fixed‐margin approaches. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was present only in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non‐inferiority trial. It can further answer other questions of interest, such as the comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non‐inferiority. Copyright © 2015 John Wiley & Sons, Ltd.

8.
In this paper, we describe a method of comparing agreement between two diagnostic contingency tables after adjustment to more clinically relevant marginal distributions using the iterative proportional fitting algorithm. When the categories of a contingency table represent mild, moderate, and severe outcomes, the majority of patients are often in the mild category. Because it is often of more interest to evaluate agreement when patients are uniformly distributed among categories, we present the primary results of two clinical trials with adjustment to this structure. We also describe the relationship between the sponsor's pre‐specified agreement measure for the observed contingency table and kappa for the adjusted table. By either criterion, we then show that the agreement of the new diagnostic tool with the standard diagnostic tool is comparably non‐inferior to the agreement of the standard diagnostic tool with itself. Copyright © 2014 John Wiley & Sons, Ltd.
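The adjustment step can be sketched directly: iterative proportional fitting alternately rescales rows and columns to the target margins, after which an agreement measure can be computed on the adjusted table. The table below is hypothetical, and Cohen's kappa stands in for the sponsor's pre-specified measure, which the paper does not reduce to a formula here:

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=200):
    """Iterative proportional fitting: rescales the table so its margins
    match the targets while preserving the interior odds ratios."""
    t = np.asarray(table, dtype=float)
    row_targets = np.asarray(row_targets, dtype=float)
    col_targets = np.asarray(col_targets, dtype=float)
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]  # match row margins
        t *= col_targets / t.sum(axis=0)             # match column margins
    return t

# Hypothetical mild/moderate/severe agreement table, dominated by the
# mild category, adjusted to uniform margins of 1/3 each.
obs = [[60, 8, 2],
       [10, 15, 5],
       [1, 4, 10]]
adj = ipf(obs, [1 / 3] * 3, [1 / 3] * 3)

po = np.trace(adj)                              # observed agreement
pe = float(adj.sum(axis=1) @ adj.sum(axis=0))   # chance agreement
kappa = (po - pe) / (1 - pe)
print(round(kappa, 3))
```

With uniform margins the chance-agreement term pe is exactly 1/3, which is part of what makes the adjusted table easier to interpret.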

9.
A three‐arm trial including an experimental treatment, an active reference treatment and a placebo is often used to assess the non‐inferiority (NI) with assay sensitivity of an experimental treatment. Various hypothesis‐test‐based approaches via a fraction or pre‐specified margin have been proposed to assess NI with assay sensitivity in a three‐arm trial, but little work has been done on confidence intervals in this setting. This paper develops a hybrid approach to construct simultaneous confidence intervals for assessing NI and assay sensitivity in a three‐arm trial. For comparison, we present normal‐approximation‐based and bootstrap‐resampling‐based simultaneous confidence intervals. Simulation studies show that the hybrid approach with the Wilson score statistic performs better than the other approaches in terms of empirical coverage probability and mesial‐non‐coverage probability. An example is used to illustrate the proposed approaches.
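The Wilson score statistic that the simulations favour is built from the single-proportion Wilson interval; a sketch of that building block only (the hybrid, MOVER-style combination of limits across the three arms is not shown):

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion x/n,
    the per-arm building block a hybrid interval method combines."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

lo, hi = wilson_ci(42, 50)
print(round(lo, 3), round(hi, 3))
```

Unlike the Wald interval, the Wilson interval never escapes [0, 1] and behaves well for proportions near the boundary, which is why score-based hybrids tend to have better coverage.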

10.
Clinical trials are often designed to compare continuous non‐normal outcomes. The conventional statistical method for such a comparison is the non‐parametric Mann–Whitney test, which provides a P‐value for testing the hypothesis that the distributions of both treatment groups are identical, but does not provide a simple and straightforward estimate of the treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases where the Mann–Whitney test produces a significant result. To overcome these problems, we introduce the use of the win ratio for analysing such data. Patients in the new and control treatment arms are formed into all possible pairs. For each pair, the new‐treatment patient is labelled a ‘winner’ or a ‘loser’ if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and CI) and the Mann–Whitney method for calculating the P‐value when comparing continuous non‐normal outcomes with a small number of tied pairs. Copyright © 2016 John Wiley & Sons, Ltd.
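The win ratio and its bootstrap CI are straightforward to compute directly; a sketch for a continuous outcome where larger is better (the data and settings below are illustrative, not the trial data sets analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def win_ratio(new, control):
    """Win ratio over all new-vs-control pairs for a continuous outcome
    where larger is better; tied pairs count as neither win nor loss."""
    new = np.asarray(new)[:, None]
    control = np.asarray(control)[None, :]
    wins = int((new > control).sum())
    losses = int((new < control).sum())
    return wins / losses

def bootstrap_ci(new, control, reps=2000, conf=0.95):
    """Percentile bootstrap CI, resampling patients within each arm."""
    stats = [
        win_ratio(rng.choice(new, len(new)), rng.choice(control, len(control)))
        for _ in range(reps)
    ]
    lo, hi = np.percentile(stats, [100 * (1 - conf) / 2, 100 * (1 + conf) / 2])
    return float(lo), float(hi)

# Illustrative skewed (lognormal) data, not from the paper
new = rng.lognormal(0.4, 1.0, 60)
control = rng.lognormal(0.0, 1.0, 60)
print(win_ratio(new, control), bootstrap_ci(new, control))
```

A win ratio of 1 means no treatment effect, so non-inferiority or superiority statements translate into where the CI sits relative to 1.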

11.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood‐based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group than in the control group, suggesting that the treatment, paroxetine, is effective. Copyright © 2014 John Wiley & Sons, Ltd.

12.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of incorporating historical data is the possible inflation of the type I error rate. A way to control this type of error is the so‐called power prior approach. This Bayesian method does not “borrow” the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two‐armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods by which the required sample size can be reduced. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
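For a binary outcome the power prior is conjugate: raising the historical binomial likelihood to δ keeps the posterior in the Beta family. A sketch for a single arm (the counts and function name are illustrative; the paper's contribution is calibrating δ to control the type I error, which is not shown here):

```python
def power_prior_mean(x, n, x0, n0, delta, a=1.0, b=1.0):
    """Posterior mean of a binomial response rate under a power prior:
    the historical likelihood (x0 successes out of n0) is raised to
    delta in [0, 1] before being combined with a Beta(a, b) initial
    prior, so conjugacy gives the posterior
    Beta(a + x + delta*x0, b + (n - x) + delta*(n0 - x0))."""
    a_post = a + x + delta * x0
    b_post = b + (n - x) + delta * (n0 - x0)
    return a_post / (a_post + b_post)

# delta = 0 ignores the historical arm; delta = 1 pools it fully.
for delta in (0.0, 0.5, 1.0):
    print(delta, round(power_prior_mean(30, 50, 70, 100, delta), 3))
```

Because the historical rate here (0.7) is higher than the current one (0.6), the posterior mean is pulled upward as δ grows, which is exactly the mechanism behind the potential type I error inflation the paper controls.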

13.
14.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians in recent years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress‐test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of the lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data on the basis of pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data. Copyright © 2013 John Wiley & Sons, Ltd.

15.
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277–287) for two‐stage trials with mid‐trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re‐estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.

16.
Missing data in clinical trials is a well‐known problem, and the classical statistical methods used can be overly simple. This case study shows how well‐established missing data theory can be applied to efficacy data collected in a long‐term open‐label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure, assessed at baseline and every 3 months postbaseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t‐test, ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing completely at random assumptions regarding the missing data process, the satisfaction data were re‐examined, and several missing at random (MAR) and missing not at random (MNAR) techniques yielded adjusted estimates of the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open‐label trial design, sensitivity analysis with different assumptions for the nature of dropouts (MAR or MNAR) and with different classes of models (selection, pattern‐mixture, or multiple imputation models) has been found useful and provides evidence of the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify robustness. Copyright © 2012 John Wiley & Sons, Ltd.

17.
In this paper, we investigate Bayesian generalized nonlinear mixed‐effects (NLME) regression models for zero‐inflated longitudinal count data. The methodology is motivated by and applied to colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. Furthermore, for model comparisons, we present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero‐inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log‐transformed count data, which handle zero counts as left censored values, may yield credibility intervals that undercover the true bactericidal activity of anti‐TB drugs. We therefore recommend that zero‐inflated NLME regression models should be fitted to CFU count on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
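The likelihood underlying such a model can be written down compactly; a stdlib sketch of the zero-inflated negative binomial log-pmf (the mean/dispersion parameterization is one common choice and an assumption here, not necessarily the paper's):

```python
from math import lgamma, log, exp

def zinb_logpmf(y, mu, k, pi):
    """Log-pmf of a zero-inflated negative binomial count: a structural
    zero with probability pi, otherwise negative binomial with mean mu
    and dispersion k (variance mu + mu**2 / k)."""
    # Negative binomial log-pmf with size k and mean mu
    nb = (lgamma(y + k) - lgamma(k) - lgamma(y + 1)
          + k * log(k / (k + mu)) + y * log(mu / (k + mu)))
    if y == 0:
        # Zeros arise from either the structural-zero or the NB component
        return log(pi + (1 - pi) * exp(nb))
    return log(1 - pi) + nb

# Zero inflation raises P(Y = 0) above the plain NB value
print(exp(zinb_logpmf(0, mu=5.0, k=2.0, pi=0.3)))
```

In an NLME setting, mu would be driven by a nonlinear function of time and drug effect with subject-level random effects, and this pmf would be summed over observations to form the likelihood.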

18.
This paper gives some background information on what non‐linear mixed effects models are, why they can be useful in analysing repeated measures, and what makes analysing such data challenging. Various software packages and routines are available to perform this kind of analysis, and this paper compares and contrasts them. Copyright © 2003 John Wiley & Sons, Ltd.

19.
20.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号