Similar Literature
20 similar documents found (search time: 31 ms)
1.
Biostatisticians recognize the importance of precise definitions of technical terms in randomized controlled clinical trial (RCCT) protocols, statistical analysis plans, and so on, in part because definitions are a foundation for subsequent actions. Imprecise definitions can be a source of controversies about appropriate statistical methods, interpretation of results, and extrapolations to larger populations. This paper presents precise definitions of some familiar terms and definitions of some new terms, some perhaps controversial. The glossary contains definitions that can be copied into a protocol, statistical analysis plan, or similar document and customized. The definitions were motivated and illustrated in the context of a longitudinal RCCT in which some randomized enrollees are non-adherent, receive a corrupted treatment, or withdraw prematurely. The definitions can be adapted for use in a much wider set of RCCTs. New terms can be used in place of controversial terms, for example, 'subject'. We define terms that specify a person's progress through an RCCT and that precisely define the RCCT's phases and milestones. We define terms that distinguish between subsets of an RCCT's enrollees and a much larger patient population. 'The intention-to-treat (ITT) principle' has multiple interpretations that can be distilled to the definitions of the 'ITT analysis set of randomized enrollees'. Most differences among interpretations of 'the' ITT principle stem from an RCCT's primary objective (mainly efficacy versus effectiveness). Four different 'authoritative' definitions of the ITT analysis set of randomized enrollees illustrate the variety of interpretations. We propose a separate specification of the analysis set of data that will be used in a specific analysis. Copyright © 2016 John Wiley & Sons, Ltd.

2.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment-enhancing interventions. When treatment interventions are delayed until needed, more cost-efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history-dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects ('conditional estimator') or the population-averaged ('marginal') estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population-averaged estimator. We conclude that in specific settings, well-chosen SMAR designs may require fewer data for the development of more cost-efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

3.
Designing Phase I clinical trials is challenging when accrual is slow or sample size is limited. The corresponding key question is: how to efficiently and reliably identify the maximum tolerated dose (MTD) using a sample size as small as possible? We propose model-assisted and model-based designs with adaptive intrapatient dose escalation (AIDE) to address this challenge. AIDE is adaptive in that the decision to conduct intrapatient dose escalation depends on both the patient's individual safety data and the safety data of other enrolled patients. When both indicate reasonable safety, a patient may undergo intrapatient dose escalation, generating toxicity data at more than one dose. This strategy not only provides patients the opportunity to receive higher, potentially more effective doses, but also enables efficient statistical learning of the dose-toxicity profile of the treatment, which dramatically reduces the required sample size. Simulation studies show that the proposed designs are safe, robust, and efficient in identifying the MTD with a sample size that is substantially smaller than conventional interpatient dose escalation designs. Practical considerations are provided and R code for implementing AIDE is available upon request.

4.
Bioequivalence (BE) trials play an important role in drug development for demonstrating the BE between test and reference formulations. The key statistical analysis for BE trials is the use of two one-sided tests (TOST), which is equivalent to showing that the 90% confidence interval of the relative bioavailability is within a given range. Power and sample size calculations for the comparison between one test formulation and the reference formulation have been intensively investigated, and tables and software are available for practical use. From a statistical and logistical perspective, it might be more efficient to test more than one formulation in a single trial. However, approaches for controlling the overall type I error may be required. We propose a method called multiplicity-adjusted TOST (MATOST), combining multiple comparison adjustment approaches, such as Hochberg's or Dunnett's method, with TOST. Because power and sample size calculations become more complex and are difficult to solve analytically, efficient simulation-based procedures for this purpose have been developed and implemented in an R package. Some numerical results for a range of scenarios are presented in the paper. We show that, given the same overall type I error and power, a BE crossover trial designed to test multiple formulations simultaneously only requires a small increase in the total sample size compared with a simple 2 × 2 crossover design evaluating only one test formulation. Hence, we conclude that testing multiple formulations in a single study is generally an efficient approach. The R package MATOST is available at https://sites.google.com/site/matostbe/ . Copyright © 2012 John Wiley & Sons, Ltd.
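A minimal sketch of the multiplicity-adjusted TOST idea by simulation, assuming normally distributed log-ratio estimates with a known standard error, the usual 0.80–1.25 BE limits, and Hochberg's step-up adjustment; the MATOST package itself and the crossover-model details are not reproduced here.

```python
import numpy as np
from scipy import stats

def tost_pvalue(est, se, df, lo=np.log(0.8), hi=np.log(1.25)):
    """TOST p-value: the larger of the two one-sided t-test p-values."""
    p_lo = 1 - stats.t.cdf((est - lo) / se, df)   # H0: log-ratio <= log(0.80)
    p_hi = stats.t.cdf((est - hi) / se, df)       # H0: log-ratio >= log(1.25)
    return max(p_lo, p_hi)

def hochberg_reject(pvals, alpha=0.05):
    """Hochberg step-up: boolean mask of hypotheses rejected (BE declared)."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = np.asarray(pvals)[order]
    reject = np.zeros(m, dtype=bool)
    ks = [k for k in range(m) if sorted_p[k] <= alpha / (m - k)]
    if ks:
        reject[order[: max(ks) + 1]] = True
    return reject

# Two test formulations, both truly bioequivalent to the reference (assumed values):
rng = np.random.default_rng(1)
m, df, se, reps = 2, 34, 0.05, 10_000
both_declared = 0
for _ in range(reps):
    est = rng.normal(0.0, se, size=m)             # simulated log-ratio estimates
    p = [tost_pvalue(e, se, df) for e in est]
    both_declared += hochberg_reject(p, alpha=0.05).all()
print("P(declare BE for both formulations):", both_declared / reps)
```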

5.
The Maximum Likelihood (ML) and Best Linear Unbiased (BLU) estimators of the location and scale parameters of an extreme value distribution (Lawless [1982]) are compared under conditions of small sample sizes and Type I censorship. The comparisons were made in terms of the mean square error criterion. According to this criterion, the ML estimator of σ in the case of very small sample sizes (n < 10) and heavy censorship (low censoring time) proved to be more efficient than the corresponding BLU estimator. However, the BLU estimator of σ attains parity with the corresponding ML estimator when the censoring time increases, even for sample sizes as low as 10. The BLU estimator of σ attains equivalence with the ML estimator when the sample size increases above 10, particularly when the censoring time is also increased. The situation is reversed when it comes to estimating the location parameter μ, as the BLU estimator was found to be consistently more efficient than the ML estimator despite the improved performance of the ML estimator when the sample size increases. However, computational ease and convenience favor the ML estimators.
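As a hedged illustration of the comparison criterion, the Monte Carlo sketch below estimates the mean square error of the ML estimator of the scale parameter σ for a smallest-extreme-value distribution under Type I censoring; the BLU estimator (which requires tabulated coefficients) is not implemented, and the sample size, censoring time, and parameter values are assumed.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, obs, n_cens, t_c):
    """Negative log-likelihood for a Type I censored smallest-extreme-value sample."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    z = (obs - mu) / sigma
    ll = np.sum(z - np.exp(z) - np.log(sigma))     # uncensored density terms
    ll -= n_cens * np.exp((t_c - mu) / sigma)      # censored: log S(t_c) = -exp(z_c)
    return -ll

rng = np.random.default_rng(2)
mu, sigma, n, t_c, B = 0.0, 1.0, 8, -0.5, 2000     # n < 10, ~55% censoring (assumed)
estimates = []
for _ in range(B):
    x = stats.gumbel_l.rvs(loc=mu, scale=sigma, size=n, random_state=rng)
    obs = x[x <= t_c]                              # items failing before t_c
    if obs.size < 2:
        continue                                   # skip near-degenerate samples
    fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0],
                            args=(obs, n - obs.size, t_c), method="Nelder-Mead")
    estimates.append(np.exp(fit.x[1]))             # sigma-hat
print("Monte Carlo MSE of ML sigma-hat:", np.mean((np.array(estimates) - sigma) ** 2))
```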

6.
The success rate of drug development has declined dramatically in recent years, and the current paradigm of drug development is no longer functioning. A major undertaking is required on breakthrough strategies and design methodology to minimize sample sizes and shorten the duration of development. We propose an alternative phase II/III design based on continuous efficacy endpoints, which consists of two stages: a selection stage and a confirmation stage. In the selection stage, a randomized parallel design with several doses and a placebo group is employed to select a dose. After the best dose is chosen, the patients of the selected dose group and placebo group continue into the confirmation stage. New patients will also be recruited and randomized to receive the selected dose or placebo. The final analysis is performed with the cumulative data of patients from both stages. With pre-specified probabilities of rejecting the drug at each stage, sample sizes and critical values for both stages can be determined. As a single trial that controls the overall type I and II error rates, the proposed phase II/III adaptive design may not only reduce the sample size but also improve the success rate. An example illustrates the application of the proposed phase II/III adaptive design. Copyright © 2010 John Wiley & Sons, Ltd.
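To see why a seamless select-then-confirm trial needs specially derived critical values, the toy simulation below estimates the type I error when the final cumulative-data test naively uses an unadjusted critical value after selecting the best of three doses; all design constants are assumed, and the paper's actual two-stage critical-value calculations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
k, n1, n2, sigma, B = 3, 40, 80, 1.0, 20_000
rejections = 0
for _ in range(B):
    # Stage 1: k doses and placebo, all with the same true mean (global null).
    stage1 = rng.normal(0.0, sigma, size=(k + 1, n1))  # row 0 = placebo
    best = 1 + np.argmax(stage1.mean(axis=1)[1:])      # pick the best-looking dose
    # Stage 2: recruit n2 more patients per arm for the selected dose and placebo.
    dose = np.concatenate([stage1[best], rng.normal(0.0, sigma, n2)])
    plac = np.concatenate([stage1[0], rng.normal(0.0, sigma, n2)])
    z = (dose.mean() - plac.mean()) / (sigma * np.sqrt(2 / (n1 + n2)))
    rejections += z > 1.96                             # naive one-sided 2.5% cutoff
print("Type I error with a naive critical value:", rejections / B)  # inflated > 0.025
```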

7.
Interim analysis is important in a large clinical trial for ethical and cost considerations. Sometimes, an interim analysis needs to be performed at an earlier than planned time point. In that case, methods using stochastic curtailment are useful in examining the data for early stopping while controlling the inflation of type I and type II errors. We consider a three-arm randomized study of treatments to reduce perioperative blood loss following major surgery. Owing to slow accrual, an unplanned interim analysis was required by the study team to determine whether the study should be continued. We distinguish two different cases: when all treatments are under direct comparison and when one of the treatments is a control. We used simulations to study the operating characteristics of five different stochastic curtailment methods. We also considered the influence of timing of the interim analyses on the type I error and power of the test. We found that the type I error and power between the different methods can be quite different. The analysis for the perioperative blood loss trial was carried out at approximately a quarter of the planned sample size. We found that there is little evidence that the active treatments are better than a placebo and recommended closure of the trial.
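One widely used stochastic curtailment quantity is conditional power on the Brownian-motion (information-time) scale. The sketch below follows the classic Lan–Simon–Halperin formulation rather than any of the five specific methods compared in the paper, and the interim values are assumed for illustration.

```python
import numpy as np
from scipy import stats

def conditional_power(z_interim, info_frac, theta, alpha=0.025):
    """P(final Z > z_{1-alpha} | interim Z), with theta = expected final Z (drift)."""
    b = z_interim * np.sqrt(info_frac)            # Brownian value B(t) at the interim
    drift = theta * (1 - info_frac)               # expected remaining increment
    z_crit = stats.norm.ppf(1 - alpha)
    return 1 - stats.norm.cdf((z_crit - b - drift) / np.sqrt(1 - info_frac))

# An interim look at a quarter of the planned information, as in the blood-loss trial:
for theta in (0.0, 2.8):   # current-null trend vs an assumed design-alternative drift
    print(f"drift {theta}: CP = {conditional_power(0.5, 0.25, theta):.3f}")
```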

8.
When conducting research with controlled experiments, sample size planning is one of the important decisions that researchers have to make. However, current methods do not adequately address variance heterogeneity under cost constraints when comparing several treatment means. This paper proposes a sample size allocation ratio for the fixed-effect heterogeneous analysis of variance when group variances are unequal and the sampling and/or variable costs are constrained. The efficient sample size allocation is determined either to minimize total cost for a designated power or to maximize power for a given total cost. Finally, the proposed method is verified using the index of relative efficiency, the corresponding total cost, and the total sample size needed. We also apply our method in a pain management trial to decide an efficient sample size. Simulation studies show that the proposed sample size formulas are efficient in terms of statistical power. SAS and R codes are provided in the appendix for easy application.
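A minimal sketch of one classical cost-constrained allocation result in the same spirit: minimizing the total variance Σ σ²ᵢ/nᵢ subject to a budget Σ cᵢnᵢ = C gives nᵢ ∝ σᵢ/√cᵢ. The paper's power-based criterion and relative-efficiency index are not reproduced, and the numbers below are assumed.

```python
import numpy as np

def optimal_allocation(sigma, cost, budget):
    """Group sizes minimizing the summed variance of group means for a fixed budget."""
    sigma, cost = np.asarray(sigma, float), np.asarray(cost, float)
    weights = sigma / np.sqrt(cost)                # n_i proportional to sigma_i / sqrt(c_i)
    n = budget * weights / np.sum(sigma * np.sqrt(cost))
    return np.floor(n).astype(int)                 # round down to stay within budget

# Three treatment groups with unequal variances and unequal per-subject costs:
print(optimal_allocation(sigma=[1.0, 2.0, 3.0], cost=[1.0, 1.0, 4.0], budget=300))
```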

9.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive than the control group. To either minimize the total cost or maximize the statistical power of the test, we used the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for the randomized block design. We then developed approximate sample size formulas for when the distribution of data is non-normal and/or unknown. This study derived the optimal sample size allocation ratio for a given statistical power by considering the cost constraints, so that the resulting sample sizes could minimize either the total cost or the total sample size. Moreover, for a given total cost, the optimal sample size allocation is recommended to maximize the statistical power of the test. The proposed formulas are not only innovative but also quick and easy to apply. We also applied real data from a clinical trial to illustrate how to choose the sample size for a randomized two-block design. For nonparametric methods, no existing commercial software for sample size planning has considered the cost factor, and therefore the proposed methods can provide important insights into the impact of cost constraints.
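For context, a standard distribution-free sample size approximation for the Wilcoxon–Mann–Whitney test is Noether's (1987) formula, sketched below; it ignores costs, so it serves only as a baseline against which a cost-constrained allocation such as the paper's would be compared.

```python
import numpy as np
from scipy import stats

def wmw_total_n(p, alpha=0.05, power=0.8, frac=0.5):
    """Noether's total N for the WMW test; p = P(X < Y), frac = share in group 1."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil((za + zb) ** 2 / (12 * frac * (1 - frac) * (p - 0.5) ** 2)))

print(wmw_total_n(p=0.65))   # about 117 subjects in total for P(X < Y) = 0.65
```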

10.
To increase the efficiency of comparisons between treatments in clinical trials, we may consider the use of a multiple matching design, in which, for each patient receiving the experimental treatment, we match more than one patient receiving the standard treatment. To assess the efficacy of the experimental treatment, the risk ratio (RR) of patient responses between two treatments is certainly one of the most commonly used measures. Because the probability of patient response in clinical trials is often not small, the odds ratio (OR), whose practical interpretation is not easily understood, cannot approximate the RR well. Thus, all sample size formulae in terms of the OR for case-control studies with multiple matched controls per case can be of limited use here. In this paper, we develop three sample size formulae based on the RR for randomized trials with multiple matching. We propose a test statistic for testing the equality of RRs under multiple matching. On the basis of Monte Carlo simulation, we evaluate the performance of the proposed test statistic with respect to Type I error. To evaluate the accuracy and usefulness of the three sample size formulae developed in this paper, we further calculate their simulated powers and compare them with those of a sample size formula ignoring matching and a sample size formula based on the OR for multiple matching published elsewhere. Finally, we include an example that employs the multiple matching study design, about the use of supplemental ascorbate in the supportive treatment of terminal cancer patients, to illustrate the use of these formulae.
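As a baseline, the sketch below implements the standard unmatched sample size formula on the risk-ratio scale, based on the asymptotic variance of log(RR-hat); the paper's three formulae for multiple matched designs are not reproduced, and the response probabilities are assumed.

```python
import numpy as np
from scipy import stats

def n_per_group_log_rr(p1, p2, alpha=0.05, power=0.8):
    """n per group to detect RR = p1/p2 via the asymptotic variance of log(RR-hat)."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    var_terms = (1 - p1) / p1 + (1 - p2) / p2      # variance of log RR-hat, times n
    return int(np.ceil((za + zb) ** 2 * var_terms / np.log(p1 / p2) ** 2))

print(n_per_group_log_rr(p1=0.45, p2=0.30))        # e.g. RR = 1.5, about 170 per group
```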

11.
Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction being needed, controlling the familywise error rate (FWER, the probability of making at least one type I error), with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false-discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective). The number of arms and the proportion of treatments that are truly effective are varied. Controlling the FDR provides good properties. It retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective. It also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
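A minimal sketch of FDR control via the Benjamini–Hochberg step-up procedure, applied to per-arm p-values from a multi-arm trial; the p-values below are invented for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of arms whose null hypotheses are rejected at FDR level q."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m           # step-up thresholds q*i/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below))              # largest i with p_(i) <= q*i/m
        reject[order[: k + 1]] = True
    return reject

# Five experimental arms versus a shared control:
print(benjamini_hochberg([0.001, 0.012, 0.020, 0.300, 0.800]))
```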

12.
Designs for early phase dose finding clinical trials typically are either phase I based on toxicity, or phase I-II based on toxicity and efficacy. These designs rely on the implicit assumption that the dose of an experimental agent chosen using these short-term outcomes will maximize the agent's long-term therapeutic success rate. In many clinical settings, this assumption is not true. A dose selected in an early phase oncology trial may give suboptimal progression-free survival or overall survival time, often due to a high rate of relapse following response. To address this problem, a new family of Bayesian generalized phase I-II designs is proposed. First, a conventional phase I-II design based on short-term outcomes is used to identify a set of candidate doses, rather than selecting one dose. Additional patients then are randomized among the candidates, patients are followed for a predefined longer time period, and a final dose is selected to maximize the long-term therapeutic success rate, defined in terms of duration of response. Dose-specific sample sizes in the randomization are determined adaptively to obtain a desired level of selection reliability. The design was motivated by a phase I-II trial to find an optimal dose of natural killer cells as targeted immunotherapy for recurrent or treatment-resistant B-cell hematologic malignancies. A simulation study shows that, under a range of scenarios in the context of this trial, the proposed design has much better performance than two conventional phase I-II designs.

13.
This article proposes a coefficient of variation (CV) chart that uses the variable sample size and sampling interval (VSSI) feature to improve the performance of the basic CV chart in detecting small and moderate shifts in the CV. The proposed VSSI CV chart allows both the sample size and the sampling interval to vary. The chart's statistical performance is measured using the average time to signal (ATS) and expected average time to signal (EATS) criteria and is compared with that of existing CV charts. The Markov chain approach is employed in the design of the chart.
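A hedged toy illustration of the basic (fixed sample size) CV chart: in-control probability limits for the sample coefficient of variation obtained by simulation rather than by noncentral-t or Markov-chain calculations. The VSSI feature and the ATS/EATS evaluation are beyond this sketch, and the in-control mean, CV, and subgroup size are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)
mu0, cv0, n, alpha = 10.0, 0.10, 5, 0.0027         # in-control mean, CV, subgroup size
# Simulate the in-control distribution of the sample CV for subgroups of size n:
samples = rng.normal(mu0, cv0 * mu0, size=(200_000, n))
cv_hat = samples.std(axis=1, ddof=1) / samples.mean(axis=1)
lcl, ucl = np.quantile(cv_hat, [alpha / 2, 1 - alpha / 2])
print(f"basic CV chart probability limits: LCL = {lcl:.4f}, UCL = {ucl:.4f}")
```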

14.
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to deescalate the dose following a non–dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherency is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval-based methods when using them to plan dose-finding studies.
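A hedged sketch of how incoherent decisions can be counted for an interval-based design with cohorts of size 1: the escalation/de-escalation boundaries below follow the published BOIN defaults (φ₁ = 0.6φ, φ₂ = 1.4φ), but the dose-toxicity curve and trial conduct are invented and simplified (no dose-elimination safety rule).

```python
import numpy as np

def boin_boundaries(phi):
    """Default BOIN escalation/de-escalation boundaries for target DLT rate phi."""
    phi1, phi2 = 0.6 * phi, 1.4 * phi
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

rng = np.random.default_rng(3)
lam_e, lam_d = boin_boundaries(0.30)               # roughly 0.236 and 0.358
tox_curve = [0.05, 0.15, 0.30, 0.50]               # invented true DLT probabilities
dose, n_j, y_j, incoherent = 0, [0] * 4, [0] * 4, 0
for _ in range(36):                                # 36 patients in cohorts of size 1
    dlt = rng.random() < tox_curve[dose]
    n_j[dose] += 1
    y_j[dose] += dlt
    phat = y_j[dose] / n_j[dose]                   # observed DLT rate at current dose
    move = 1 if phat <= lam_e else (-1 if phat >= lam_d else 0)
    if (dlt and move == 1) or (not dlt and move == -1):
        incoherent += 1                            # decision contradicts the last outcome
    dose = min(max(dose + move, 0), len(tox_curve) - 1)
print(f"incoherent decisions: {incoherent} of 36")
```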

15.
Latent class models (LCMs) are used increasingly for addressing a broad variety of problems, including sparse modeling of multivariate and longitudinal data, model-based clustering, and flexible inferences on predictor effects. Typical frequentist LCMs require estimation of a single finite number of classes, which does not increase with the sample size, and have a well-known sensitivity to parametric assumptions on the distributions within a class. Bayesian nonparametric methods have been developed to allow an infinite number of classes in the general population, with the number represented in a sample increasing with sample size. In this article, we propose a new nonparametric Bayes model that allows predictors to flexibly impact the allocation to latent classes, while limiting sensitivity to parametric assumptions by allowing class-specific distributions to be unknown subject to a stochastic ordering constraint. An efficient MCMC algorithm is developed for posterior computation. The methods are validated using simulation studies and applied to the problem of ranking medical procedures in terms of the distribution of patient morbidity.

16.
This paper presents a new randomized response model that combines Kim and Warde's (2004) stratified Warner's randomized response technique using optimal allocation with the unrelated question randomized response model. The empirical studies performed show that, for the prior information given, the new model is more efficient in terms of variance (in the case of completely truthful reporting) and mean square error (in the case of less than completely truthful reporting) than its component models.
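For reference, the sketch below implements the classic Warner (1965) randomized response estimator and its variance, one of the building blocks the combined model draws on; the stratified/unrelated-question combination itself is not reproduced, and the survey constants are assumed.

```python
import numpy as np

def warner_estimate(yes_count, n, P):
    """Estimate the sensitive-trait proportion pi and its variance under Warner's model."""
    lam_hat = yes_count / n                        # observed proportion of 'yes' answers
    pi_hat = (lam_hat - (1 - P)) / (2 * P - 1)
    var_hat = pi_hat * (1 - pi_hat) / n + P * (1 - P) / (n * (2 * P - 1) ** 2)
    return pi_hat, var_hat

# Simulated survey: true prevalence 0.2, design probability P = 0.7, n = 1000.
rng = np.random.default_rng(11)
pi_true, P, n = 0.2, 0.7, 1000
trait = rng.random(n) < pi_true
asks_direct = rng.random(n) < P                    # card reads "I am in group A"
yes = np.sum(np.where(asks_direct, trait, ~trait))
print(warner_estimate(yes, n, P))
```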

17.
In behavioral, educational, and medical practice, interventions are often personalized over time using strategies based on individual behaviors and characteristics and on changes in symptoms, severity, or adherence that result from one's treatment. Such strategies, which more closely mimic real practice, are known as dynamic treatment regimens (DTRs). A sequential multiple assignment randomized trial (SMART) is a multi-stage trial design that can be used to construct effective DTRs. This article reviews a simple-to-use 'weighted and replicated' estimation technique for comparing DTRs embedded in a SMART design using logistic regression for a binary, end-of-study outcome variable. Based on a Wald test that compares two embedded DTRs of interest from the 'weighted and replicated' regression model, a sample size calculation is presented with a corresponding user-friendly applet to aid in the process of designing a SMART. The analytic models and sample size calculations are presented for three of the more commonly used two-stage SMART designs. Simulations for the sample size calculation show that the empirical power reaches expected levels. A data analysis example with corresponding code is presented in the appendix using data from a SMART developing an effective DTR in autism.

18.
We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches.

19.
In clinical trials involving multiple comparisons of interest, the importance of controlling the trial Type I error is well understood and well documented. Moreover, when these comparisons are themselves correlated, methodologies exist for accounting for the correlation in the trial design when calculating the trial significance levels. However, it is less well documented that there are some circumstances where multiple comparisons affect the Type II error rather than the Type I error, and failure to account for this can result in a reduction in the overall trial power. In this paper, we describe sample size calculations for clinical trials involving multiple correlated comparisons, where all the comparisons must be statistically significant for the trial to provide evidence of effect, and show how such calculations have to account for multiplicity in the Type II error. For the situation of two comparisons, we provide a result that assumes a bivariate Normal distribution. For the general case of two or more comparisons, we provide a solution using inflation factors to increase the sample size relative to the case of a single outcome. We begin with a simple case of two comparisons assuming a bivariate Normal distribution, show how to factor in correlation between comparisons, and then generalise our findings to situations with two or more comparisons. These methods are easy to apply, and we demonstrate how accounting for the multiplicity in the Type II error leads, at most, to modest increases in the sample size.
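A minimal sketch of the bivariate-Normal case: the overall power when both correlated one-sided comparisons must be significant, evaluated from the bivariate normal distribution, plus a brute-force search for the per-arm sample size that restores 80% joint power. The effect sizes, correlation, and α are assumed, and the paper's inflation-factor generalization is not reproduced.

```python
import numpy as np
from scipy import stats

def joint_power(delta1, delta2, rho, n_per_arm, alpha=0.025):
    """P(Z1 > z_crit and Z2 > z_crit) for two correlated standardized effects."""
    z_crit = stats.norm.ppf(1 - alpha)
    means = np.sqrt(n_per_arm / 2) * np.array([delta1, delta2])  # E[Z] = delta*sqrt(n/2)
    cov = [[1.0, rho], [rho, 1.0]]
    # P(Z1 > c, Z2 > c) = P(-Z1 <= -c, -Z2 <= -c), evaluated with the MVN cdf:
    return stats.multivariate_normal(mean=-means, cov=cov).cdf([-z_crit, -z_crit])

print(joint_power(0.5, 0.5, rho=0.5, n_per_arm=64))   # below each comparison's marginal power
n = 64
while joint_power(0.5, 0.5, rho=0.5, n_per_arm=n) < 0.80:
    n += 1
print("n per arm for 80% joint power:", n)            # a modest increase, as the paper notes
```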

20.
A randomized exploratory clinical trial comparing an experimental treatment with a control treatment on a binary endpoint is often conducted to make a go or no-go decision. Such an exploratory trial needs to have an adequate sample size such that it will provide convincing evidence that the experimental treatment is either worthwhile or unpromising relative to the control treatment. In this paper, we propose three new sample-size determination methods for an exploratory trial, which utilize the posterior probabilities calculated from predefined efficacy and inefficacy criteria leading to a declaration of the worthwhileness or unpromisingness of the experimental treatment. Simulation studies, including numerical investigation, showed that all three methods could declare the experimental treatment as worthwhile or unpromising with a high probability when the true response probability of the experimental treatment group is higher or lower, respectively, than that of the control treatment group.
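A hedged sketch of the core posterior-probability computation such designs rest on: with independent Beta(1,1) priors, P(p_E > p_C | data) is estimated by Monte Carlo and compared against illustrative efficacy/inefficacy thresholds. The thresholds, priors, and data below are assumed examples, not the paper's three specific methods.

```python
import numpy as np

def posterior_prob_superior(x_e, n_e, x_c, n_c, draws=100_000, seed=5):
    """Monte Carlo estimate of P(p_E > p_C | data) under Beta(1,1) priors."""
    rng = np.random.default_rng(seed)
    p_e = rng.beta(1 + x_e, 1 + n_e - x_e, draws)  # posterior of experimental response rate
    p_c = rng.beta(1 + x_c, 1 + n_c - x_c, draws)  # posterior of control response rate
    return np.mean(p_e > p_c)

prob = posterior_prob_superior(x_e=14, n_e=30, x_c=8, n_c=30)
if prob >= 0.90:                                   # assumed efficacy criterion
    print(f"go (worthwhile), P = {prob:.3f}")
elif prob <= 0.60:                                 # assumed inefficacy criterion
    print(f"no-go (unpromising), P = {prob:.3f}")
else:
    print(f"inconclusive, P = {prob:.3f}")
```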
