Similar articles
Found 20 similar articles (search time: 31 ms)
1.
We consider outcome-adaptive phase II or phase II/III trials to identify the best treatment for further development. Unlike many other multi-arm multi-stage designs, we borrow best-arm identification methods from the multi-armed bandit (MAB) literature developed for machine learning and adapt them for clinical trial purposes. Best-arm identification in MAB focuses on the error rate of identification at the end of the trial, but we are also interested in the cumulative benefit to trial patients, for example, the frequency with which patients are treated with the best treatment. In particular, we consider Top-Two Thompson Sampling (TTTS) and propose an acceleration approach for better performance in drug development scenarios, in which the sample size is much smaller than that considered in machine learning applications. We also propose a variant of TTTS (TTTS2) that is simpler, easier to implement, and has comparable performance in small-sample settings. An extensive simulation study was conducted to evaluate the performance of the proposed approach in multiple typical drug development scenarios.
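The core TTTS loop can be sketched for binary outcomes. This is a minimal illustration under Beta-Bernoulli posteriors, not the authors' accelerated variant or TTTS2; the response rates, the top-two parameter beta = 0.5, and the re-sampling cap are illustrative assumptions:

```python
import random

def ttts_trial(true_rates, n_patients, beta=0.5, seed=1):
    """Top-Two Thompson Sampling sketch for binary outcomes.

    Each arm gets a Beta(1, 1) prior. For each patient, sample every
    posterior and take the apparent best arm; with probability
    1 - beta, re-sample until a different ("second-best") arm wins.
    """
    rng = random.Random(seed)
    k = len(true_rates)
    alpha = [1] * k  # posterior Beta parameters: 1 + successes
    b = [1] * k      # 1 + failures
    counts = [0] * k
    for _ in range(n_patients):
        draws = [rng.betavariate(alpha[i], b[i]) for i in range(k)]
        best = max(range(k), key=draws.__getitem__)
        arm = best
        if rng.random() > beta:
            # look for a challenger; cap the re-sampling (rarely hit)
            for _ in range(100):
                draws = [rng.betavariate(alpha[i], b[i]) for i in range(k)]
                arm = max(range(k), key=draws.__getitem__)
                if arm != best:
                    break
            else:
                arm = best
        response = rng.random() < true_rates[arm]
        alpha[arm] += response
        b[arm] += 1 - response
        counts[arm] += 1
    return counts

# hypothetical 3-arm trial with response rates 0.3, 0.5, 0.7
counts = ttts_trial([0.3, 0.5, 0.7], n_patients=200)
```

The per-arm allocation counts in `counts` give the cumulative-benefit view the abstract mentions: the more patients an arm accrues, the more the design has favoured it.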

2.
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, time intervals for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of population pharmacokinetic models, which are generally nonlinear mixed effects models, no analytical solution is available for determining sampling windows. We propose a method for determining sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable for determining sampling windows for any nonlinear mixed effects model, although our work focuses on an application to population pharmacokinetic models.

3.
4.
We consider response adaptive designs when the binary response may be misclassified and extend relevant results in the literature. We derive the optimal allocations under various objectives and examine the relationship between the power of the statistical test and the variability of treatment allocation. Asymptotically best response adaptive randomization procedures and the effects of misclassification on the optimal allocations are investigated. A real-life clinical trial is also discussed to illustrate the proposed approach.

5.
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs lie in a closed region whose upper boundary (worst case) is given by Efron's biased coin design (EBCD) and whose lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popular permuted block design, EBCD, and Wei's urn design.
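The two boundary designs named above can be sketched as follows. This is a generic illustration, with the bias p = 2/3 and the big-stick bound of 3 chosen for illustration rather than taken from the study's 260 scenarios:

```python
import random

def efron_bcd(n, p=2/3, seed=0):
    """Efron's biased coin design: when one arm lags, assign it with
    probability p; toss a fair coin when the arms are balanced."""
    rng = random.Random(seed)
    d = 0  # running imbalance: (#A - #B)
    seq = []
    for _ in range(n):
        if d == 0:
            a = rng.random() < 0.5
        elif d < 0:                # A lags: favour A
            a = rng.random() < p
        else:                      # B lags: favour B
            a = rng.random() < 1 - p
        seq.append(a)
        d += 1 if a else -1
    return seq

def big_stick(n, bound=3, seed=0):
    """Soares and Wu's big stick design: fair coin tosses, with a
    deterministic assignment whenever |imbalance| reaches the bound."""
    rng = random.Random(seed)
    d = 0
    seq = []
    for _ in range(n):
        if d >= bound:
            a = False              # force arm B
        elif d <= -bound:
            a = True               # force arm A
        else:
            a = rng.random() < 0.5
        seq.append(a)
        d += 1 if a else -1
    return seq

def max_abs_imbalance(seq):
    """One of the imbalance measures: worst |#A - #B| along the sequence."""
    d = worst = 0
    for a in seq:
        d += 1 if a else -1
        worst = max(worst, abs(d))
    return worst

bsd_seq = big_stick(200)
bcd_seq = efron_bcd(200)
```

By construction the big stick design caps the maximum absolute imbalance at the bound, while Efron's coin only pushes the walk back probabilistically; the CG probability would be estimated analogously by replaying each sequence with a "guess the lagging arm" strategy.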

6.
The borrowing of historical control data can be an efficient way to improve the treatment effect estimate of the current control group in a randomized clinical trial. When the historical and current control data are consistent, borrowing historical data can increase power and reduce the Type I error rate. However, when these two sources of data are inconsistent, borrowing may result in a combination of biased estimates, reduced power, and an inflated Type I error rate. In some situations, inconsistency between historical and current control data may be caused by systematic variation in the measured baseline prognostic factors, which can be appropriately addressed through statistical modeling. In this paper, we propose a Bayesian hierarchical model that can incorporate patient-level baseline covariates to enhance the appropriateness of the exchangeability assumption between current and historical control data. The performance of the proposed method is shown through simulation studies, and its application to a clinical trial design for amyotrophic lateral sclerosis is described. The proposed method is developed for scenarios involving multiple imbalanced prognostic factors and thus has meaningful implications for clinical trials evaluating new treatments for heterogeneous diseases such as amyotrophic lateral sclerosis.

7.
Umbrella trials are an innovative trial design in which different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers: equal randomisation; randomisation with a fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and a hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations of efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a linked treatment-biomarker interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation across all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.
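One common form of BAR for a patient positive for several biomarkers is to randomise among that patient's eligible arms with weights proportional to each arm's posterior probability of being best. The sketch below is a generic illustration under that assumption (Beta-Bernoulli posteriors, Monte Carlo weights), not necessarily the exact rule compared in the simulation study:

```python
import random

def bar_weights(successes, failures, eligible, n_draws=2000, seed=0):
    """For one patient, estimate P(arm is best among the patient's
    eligible arms) by Monte Carlo from Beta(1 + s, 1 + f) posteriors;
    the estimated probabilities serve as randomisation weights."""
    rng = random.Random(seed)
    wins = {a: 0 for a in eligible}
    for _ in range(n_draws):
        draws = {a: rng.betavariate(1 + successes[a], 1 + failures[a])
                 for a in eligible}
        wins[max(draws, key=draws.get)] += 1
    return {a: wins[a] / n_draws for a in eligible}

# hypothetical trial state: arm 0 looks promising, arm 1 does not,
# arm 2 has no data yet; this patient is positive for biomarkers 0 and 1
weights = bar_weights([15, 5, 0], [5, 15, 0], eligible=[0, 1])
```

The returned dictionary sums to one over the eligible arms, so it can be fed directly to a weighted draw; arms for which the patient is biomarker-negative are simply excluded from `eligible`.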

8.
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design-based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well-spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
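A minimal version of the local pivotal method can be sketched as follows. This is a generic LPM-style update, in which the two nearest "undecided" units repeatedly compete for their combined inclusion probability, pushing selected units apart in space; it is not necessarily one of the specific variants the paper compares:

```python
import math
import random

def local_pivotal(coords, probs, seed=0):
    """Local pivotal method sketch: pick a random undecided unit, find
    its nearest undecided neighbour, and resolve their inclusion
    probabilities jointly (one moves toward 0 or 1). Returns the
    indices of the selected units."""
    rng = random.Random(seed)
    p = list(probs)
    eps = 1e-9
    while True:
        active = [i for i in range(len(p)) if eps < p[i] < 1 - eps]
        if len(active) < 2:
            break
        i = rng.choice(active)
        j = min((k for k in active if k != i),
                key=lambda k: math.dist(coords[i], coords[k]))
        s = p[i] + p[j]
        if s < 1:
            # one unit takes the whole mass, the other drops out
            if rng.random() < p[j] / s:
                p[i], p[j] = 0.0, s
            else:
                p[i], p[j] = s, 0.0
        else:
            # one unit is included for sure, the other keeps the excess
            if rng.random() < (1 - p[j]) / (2 - s):
                p[i], p[j] = 1.0, s - 1.0
            else:
                p[i], p[j] = s - 1.0, 1.0
    return [i for i in range(len(p)) if p[i] > 1 - eps]

# 10 x 10 grid of units, each with inclusion probability 0.2
coords = [(i % 10, i // 10) for i in range(100)]
sample = local_pivotal(coords, [0.2] * 100)
```

Because each update preserves the sum of the inclusion probabilities, an integer-valued total yields a fixed sample size, and the Horvitz–Thompson estimator keeps its design-based validity while the sample spreads out spatially.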

9.
Few references deal with response-adaptive randomization procedures for survival outcomes, and those that do either dichotomize the outcomes or use a non-parametric approach. In this paper, the optimal allocation approach and a parametric response-adaptive randomization procedure are used under exponential and Weibull distributions. The optimal allocation proportions are derived for both distributions, and the doubly adaptive biased coin design is applied to target the optimal allocations. The asymptotic variance of the procedure is obtained for the exponential distribution. The effect of intrinsic delay in observing survival outcomes is also treated. These findings are based on rigorous theory and are verified by simulation. It is shown that using a doubly adaptive biased coin design to target the optimal allocation proportion results in more patients being randomized to the better-performing treatment without loss of power. We illustrate our procedure by redesigning a clinical trial.

10.
Assessment of the time needed to attain steady state is a key pharmacokinetic objective during drug development. Traditional approaches for assessing steady state include ANOVA-based methods for comparing mean plasma concentration values from each sampling day, with either a difference or equivalence test. However, hypothesis-testing approaches are ill suited for assessment of steady state. This paper presents a nonlinear mixed effects modelling approach for estimation of steady state attainment, based on fitting a simple nonlinear mixed model to observed trough plasma concentrations. The simple nonlinear mixed model is developed and proposed for use under certain pharmacokinetic assumptions. The nonlinear mixed modelling estimation approach is described and illustrated by application to trough data from a multiple dose trial in healthy subjects. The performance of the nonlinear mixed modelling approach is compared to ANOVA-based approaches by means of simulation techniques.

11.
In this paper, a penalized weighted least squares approach is proposed for small area estimation under the unit level model. The new method not only unifies the traditional empirical best linear unbiased prediction that does not take sampling design into account and the pseudo-empirical best linear unbiased prediction that incorporates sampling weights but also has the desirable robustness property to model misspecification compared with existing methods. The empirical small area estimator is given, and the corresponding second-order approximation to the mean squared error estimator is derived. Numerical comparisons based on synthetic and real data sets show superior performance of the proposed method to currently available estimators in the literature.

12.
A bridging study, as defined by ICH E5, is usually conducted in a new region after the test product has been approved for commercial marketing in the original region on the basis of its proven efficacy and safety. However, extensive duplication of clinical evaluation in the new region not only requires valuable development resources but also delays availability of the test product to patients who need it in the new region. To shorten the drug lag, or the time lag for approval, simultaneous worldwide drug development, submission, and approval may be desirable. Recently, multi-regional trials have attracted much attention from sponsors as well as regulatory authorities. Current methods for sample size determination are based on the assumption that the true treatment effect is uniform across regions. However, unrecognized heterogeneity among patients, such as ethnic or genetic factors, will affect patients' survival. Using the simple log-rank test to analyze the treatment effect on survival in studies under heterogeneity may be severely underpowered. In this article, we address the design of a multi-regional trial when the treatment effects differ among regions. The optimal log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for a multi-regional trial, and the consistent trend and the proposed criteria are used to rationalize the partition of the sample size across regions.

13.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial using blinded data. Methods for sample size re-estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, there is debate over whether a reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information from a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation–maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the EM algorithms depend heavily on the initial estimates of the model parameters. Despite the use of a surrogate endpoint, all three methods show large variation in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect.

14.
The odds ratio (OR) has been recommended elsewhere for measuring relative treatment efficacy in a randomized clinical trial (RCT) because it possesses several desirable statistical properties. In practice, it is not uncommon to encounter an RCT in which some patients do not comply with their assigned treatments and some patients' outcomes are missing. Under the compound exclusion restriction, latent ignorability, and monotonicity assumptions, we derive the maximum likelihood estimator (MLE) of the OR and apply Monte Carlo simulation to compare its performance with those of two other commonly used estimators, one assuming data are missing completely at random (MCAR) and one from an intention-to-treat (ITT) analysis based on patients with known outcomes. We note that both the MCAR and ITT estimators may produce a misleading inference about the OR even when the relative treatment effect is equal. We further derive three asymptotic interval estimators for the OR: an interval estimator using Wald's statistic, an interval estimator using the logarithmic transformation, and an interval estimator using an ad hoc procedure that combines the two. On the basis of a Monte Carlo simulation, we evaluate the finite-sample performance of these interval estimators in a variety of situations. Finally, we use data taken from a randomized encouragement design studying the effect of flu shots on the flu-related hospitalization rate to illustrate the use of the MLE and the asymptotic interval estimators developed here.
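The two basic asymptotic intervals mentioned above, a Wald interval on the OR scale and an interval built via the logarithmic transformation, can be sketched for a complete 2x2 table. This simple-case illustration deliberately ignores the noncompliance and missing-data machinery behind the paper's MLE; the cell counts are hypothetical:

```python
import math

def odds_ratio_cis(a, b, c, d, z=1.96):
    """For a complete 2x2 table (a/b = events/non-events on treatment,
    c/d on control): the sample OR, a Wald interval on the OR scale,
    and an interval built via the logarithmic transformation."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    se_or = or_hat * se_log                 # delta-method SE of the OR itself
    wald = (or_hat - z * se_or, or_hat + z * se_or)
    log_ci = (math.exp(math.log(or_hat) - z * se_log),
              math.exp(math.log(or_hat) + z * se_log))
    return or_hat, wald, log_ci

# hypothetical table: 20/80 events on treatment, 10/90 on control
or_hat, wald_ci, log_ci = odds_ratio_cis(20, 80, 10, 90)
```

The log-transformed interval is always positive and asymmetric around the point estimate, which is one motivation for combining the two intervals in an ad hoc procedure as the abstract describes.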

15.
We propose a modification to the variant of link-tracing sampling suggested by Félix-Medina and Thompson [M.H. Félix-Medina, S.K. Thompson, Combining cluster sampling and link-tracing sampling to estimate the size of hidden populations, Journal of Official Statistics 20 (2004) 19–38] that allows the researcher some control over the final sample size, the precision of the estimates, or other characteristics of the sample that the researcher is interested in controlling. We achieve this goal by selecting an initial sequential sample of sites instead of an initial simple random sample of sites as those authors suggested. We estimate the population size by means of the maximum likelihood estimators suggested by the above-mentioned authors or by the Bayesian estimators proposed by Félix-Medina and Monjardin [M.H. Félix-Medina, P.E. Monjardin, Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations: A Bayesian-assisted approach, Survey Methodology 32 (2006) 187–195]. Variances are estimated by means of jackknife and bootstrap estimators as well as by the delta estimators proposed in the two above-mentioned papers. Interval estimates of the population size are obtained by means of Wald and bootstrap confidence intervals. The results of an exploratory simulation study indicate good performance of the proposed sampling strategy.

16.
A clinical hold order by the Food and Drug Administration (FDA) to the sponsor of a clinical trial is a measure to delay a proposed clinical investigation or to suspend an ongoing one. The phase III clinical trial START serves as a motivating data example to explore implications and potential statistical approaches for a trial continuing after a clinical hold is lifted. In spite of a modified intention-to-treat (ITT) analysis, introduced to account for the clinical hold by excluding the patients potentially most affected by it, the results of the trial did not show a significant improvement in overall survival duration, and the question remains whether the negative result was an effect of the clinical hold. In this paper, we propose a multistate model incorporating the clinical hold as well as disease progression as intermediate events to investigate the impact of the clinical hold on the treatment effect. Moreover, we consider a simple counterfactual censoring approach as an alternative strategy to the modified ITT analysis for dealing with a clinical hold. Using a realistic simulation study informed by the START data, with a design based on our multistate model, we show that the modified ITT analysis used in the START trial was reasonable. However, the censoring approach is shown to have some benefits in terms of power and flexibility.

17.
Clinical trials are primarily conducted to understand the average effects treatments have on patients. However, patients are heterogeneous in the severity of their condition and in ways that affect what treatment effect they can expect. It is therefore important to understand and characterize how treatment effects vary. The design and analysis of clinical studies play critical roles in evaluating and characterizing heterogeneous treatment effects (HTE). This panel discussed considerations in design and analysis under the recognition that treatment effects are heterogeneous across subgroups of patients. Panel members discussed many questions, including: What is a good estimate of the treatment effect in me, a 65-year-old, bald, Caucasian-American, male patient? What magnitude of HTE is sufficiently large to merit attention? What role can prior evidence about HTE play in confirmatory trial design and analysis? Is there anything described in the 21st Century Cures Act that would benefit from greater attention to HTE? An example of a Bayesian approach addressing multiplicity when testing for treatment effects in subgroups is provided. We can do more, and better, to understand heterogeneous treatment effects and to provide the best information about them.

18.
Patients often discontinue from a clinical trial because their health condition is not improving or they cannot tolerate the assigned treatment. Consequently, the observed clinical outcomes in the trial are likely better on average than if every patient had completed the trial. If these differences between trial completers and non-completers cannot be explained by the observed data, then the study outcomes are missing not at random (MNAR). One way to overcome this problem, the trimmed means approach for missing data due to study discontinuation, sets missing values to the worst observed outcome and then trims away a fraction of the distribution from each treatment arm before calculating differences in treatment efficacy (Permutt T, Li F. Trimmed means for symptom trials with dropouts. Pharm Stat. 2017;16(1):20–28). In this paper, we derive sufficient and necessary conditions under which this approach identifies the average population treatment effect. Simulation studies show the trimmed means approach's ability to estimate treatment efficacy effectively when data are MNAR and missingness due to study discontinuation is strongly associated with an unfavorable outcome, but the approach fails when data are missing at random. If the reasons for study discontinuation in a clinical trial are known, analysts can improve estimates with a combination of multiple imputation and the trimmed means approach when the assumptions of each hold. We compare the methodology to existing approaches using data from a clinical trial for chronic pain. An R package, trim, implements the method. When its assumptions are justifiable, using trimmed means can help identify treatment effects notwithstanding MNAR data.
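The trimmed means computation itself is simple to sketch. The example below assumes lower outcomes are worse and uses hypothetical toy data; it omits the paper's identification conditions and the combination with multiple imputation:

```python
from statistics import mean

def trimmed_mean_effect(treat, control, n_treat_drop, n_ctrl_drop, trim):
    """Trimmed means sketch for dropout due to study discontinuation:
    score each dropout as the worst observed outcome (here lower is
    worse), trim the fraction `trim` from the bottom of each arm, then
    compare the trimmed means."""
    worst = min(treat + control)
    t = sorted(treat + [worst] * n_treat_drop)   # completers + penalised dropouts
    c = sorted(control + [worst] * n_ctrl_drop)
    return mean(t[int(trim * len(t)):]) - mean(c[int(trim * len(c)):])

# hypothetical toy data: 4 completers and 1 dropout per arm, 20% trim
effect = trimmed_mean_effect([5, 6, 7, 8], [3, 4, 5, 6], 1, 1, trim=0.2)
```

Because both arms are penalised and trimmed symmetrically, an arm with more discontinuations loses more of its genuine (better) observations to the trim, which is how the estimand reflects tolerability as well as efficacy.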

19.
This paper applies the methodology of Finkelstein and Schoenfeld [Stat. Med. 13 (1994) 1747] to consider new treatment strategies in a synthetic clinical trial. The methodology is an approach for estimating survival functions as a composite of subdistributions defined by an auxiliary event that is intermediate to the failure. The subdistributions are usually calculated using all subjects in a study, by taking into account the path determined by each individual's auxiliary event. However, the method can also be used to obtain a composite estimate of failure from different subpopulations of patients. We use this application of the methodology to test a new treatment strategy that changes therapy at later stages of disease, by combining subdistributions from different treatment arms of a clinical trial conducted to test therapies for prevention of Pneumocystis carinii pneumonia.

20.
In this paper we present an approach to using historical control data to augment information from a randomized controlled clinical trial when it is not possible to continue the control regimen long enough to obtain the most reliable and valid assessment of long-term treatment effects. Using an adjustment procedure on the historical control data, we investigate a method of estimating the long-term survival function for the clinical trial control group and of evaluating the long-term treatment effect. The suggested method is simple to interpret and is particularly motivated in clinical trial settings where ethical considerations preclude long-term follow-up of placebo controls. A simulation study reveals that the bias in parameter estimates that arises in the setting of group sequential monitoring is attenuated when long-term historical control information is used in the proposed manner. Data from the first and second National Wilms' Tumor studies are used to illustrate the method.
