Similar Literature
20 similar records found.
1.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

2.
Existing statutes in the United States and Europe require manufacturers to demonstrate evidence of effectiveness through the conduct of adequate and well‐controlled studies to obtain marketing approval of a therapeutic product. What constitutes adequate and well‐controlled studies is usually interpreted as randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, cost, patient preference, or in some cases, ethical concerns. For example, RCTs may not be fully powered in rare diseases or in infections caused by multidrug resistant pathogens because of the low number of enrollable patients. In this case, data available from external controls (including historical controls and observational studies or data registries) can complement information provided by the RCT. Propensity score matching methods can be used to select or “borrow” additional patients from the external controls, for maintaining a one‐to‐one randomization between the treatment arm and active control, by matching the new treatment and control units based on a set of measured covariates, ie, model‐based pairing of treatment and control units that are similar in terms of their observable pretreatment characteristics. To this end, 2 matching schemes based on propensity scores are explored and applied to a real clinical data example with the objective of using historical or external observations to augment data in a trial where the randomization is disproportionate or asymmetric.

3.
This paper addresses the problem of simultaneous variable selection and estimation in the random-intercepts model with the first-order lag response. This type of model is commonly used for analyzing longitudinal data obtained through repeated measurements on individuals over time. This model uses random effects to cover the intra-class correlation, and the first lagged response to address the serial correlation, which are two common sources of dependency in longitudinal data. We demonstrate that the conditional likelihood approach, which ignores correlation among random effects and initial responses, can lead to biased regularized estimates. Furthermore, we demonstrate that joint modeling of initial responses and subsequent observations in the structure of dynamic random-intercepts models leads to both consistency and Oracle properties of regularized estimators. We present theoretical results in both low- and high-dimensional settings and evaluate regularized estimators' performances by conducting simulation studies and analyzing a real dataset. Supporting information is available online.

4.
To accelerate the drug development process and shorten approval time, the design of multiregional clinical trials (MRCTs) incorporates subjects from many countries/regions around the world under the same protocol. After showing the overall efficacy of a drug in all global regions, one can also simultaneously evaluate the possibility of applying the overall trial results to all regions and subsequently support drug registration in each of them. In this paper, we focus on a specific region and establish a statistical criterion to assess the consistency between the specific region and overall results in an MRCT. More specifically, we treat each region in an MRCT as an independent clinical trial, each perhaps with a different treatment effect. We then construct the empirical prior information for the treatment effect for the specific region on the basis of all of the observed data from other regions. We will conclude similarity between the specific region and all regions if the posterior probability of deriving a positive treatment effect in the specific region is large, say 80%. Numerical examples illustrate applications of the proposed approach in different scenarios. Copyright © 2013 John Wiley & Sons, Ltd.
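A minimal numerical sketch of this kind of criterion, assuming normally distributed regional effect estimates with known standard errors; the regions, effect sizes, standard errors, and the 80% threshold below are hypothetical, and the empirical prior for the region of interest is built as a precision-weighted combination of the other regions' estimates, which is one plausible reading of the construction described above.

```python
import numpy as np
from scipy import stats

# Hypothetical regional treatment-effect estimates and their standard errors
effects = {"region_A": 0.30, "region_B": 0.25, "region_C": 0.18}
ses = {"region_A": 0.10, "region_B": 0.12, "region_C": 0.15}
target = "region_C"  # region whose consistency with the overall result is assessed

# Empirical prior for the target region: precision-weighted combination of the other regions
others = [r for r in effects if r != target]
prior_prec = sum(1 / ses[r] ** 2 for r in others)
prior_mean = sum(effects[r] / ses[r] ** 2 for r in others) / prior_prec

# Normal-normal update with the target region's own estimate
like_prec = 1 / ses[target] ** 2
post_prec = prior_prec + like_prec
post_mean = (prior_prec * prior_mean + like_prec * effects[target]) / post_prec
post_sd = np.sqrt(1 / post_prec)

# Conclude consistency if the posterior probability of a positive effect is large, say 80%
p_positive = 1 - stats.norm.cdf(0.0, loc=post_mean, scale=post_sd)
print(f"P(effect > 0 in {target}) = {p_positive:.3f}; consistent: {p_positive >= 0.80}")
```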

5.
The primary objective of a Phase I clinical trial is to identify the maximum tolerated dose (MTD) of a drug, and the accuracy of the MTD estimate affects the results of the subsequent Phase II and Phase III trials. Phase I trials of antitumor drugs are characterized by experimentation directly on patients with small sample sizes, which challenges the construction of statistical designs that estimate the MTD accurately while meeting safety requirements. Three commonly used Phase I trial designs are reviewed: the 3+3 design, the CRM design, and the mTPI design. The 3+3 design is the widely used traditional method, while the latter two are currently popular Bayesian adaptive trial designs. Through extensive simulation studies, the three methods are comprehensively examined with respect to optimal dose allocation, safety, and accuracy of MTD estimation; taking the practical situation in China into account, the mTPI design is concluded to be the Phase I clinical trial design most suitable for recommendation.
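For reference, the traditional 3+3 escalation rule mentioned above can be simulated in a few lines; this is a simplified sketch (no intra-trial de-escalation) with hypothetical dose-toxicity probabilities, not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(2024)
tox_probs = [0.05, 0.12, 0.25, 0.40, 0.55]  # hypothetical true toxicity rate per dose level

def three_plus_three(tox_probs, rng):
    """One simulated 3+3 trial; returns the index of the declared MTD (-1 if none)."""
    dose = 0
    while True:
        dlt = rng.binomial(3, tox_probs[dose])          # first cohort of 3 patients
        if dlt >= 2:
            return dose - 1                             # too toxic: MTD is the dose below
        if dlt == 1:
            dlt += rng.binomial(3, tox_probs[dose])     # expand cohort to 6 patients
            if dlt >= 2:
                return dose - 1
        if dose == len(tox_probs) - 1:
            return dose                                 # highest dose level reached
        dose += 1

mtds = [three_plus_three(tox_probs, rng) for _ in range(10_000)]
# Position 0 corresponds to "no MTD found" (trial stopped below the lowest dose)
print("MTD selection frequencies:", np.bincount(np.array(mtds) + 1) / len(mtds))
```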

6.
Drug developers are required to demonstrate substantial evidence of effectiveness through the conduct of adequate and well‐controlled (A&WC) studies to obtain marketing approval of their medicine. What constitutes A&WC is interpreted as the conduct of randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, and cost. One way to reduce sample size is to leverage information on the control through a prior. One consideration when forming a data‐driven prior is the consistency of the external and the current data. It is essential to make this process less susceptible to choosing information that only helps improve the chances toward making an effectiveness claim. For this purpose, propensity score methods are employed for two reasons: (1) they give the probability of a patient being in the trial, and (2) they minimize selection bias by pairing together treatment and control subjects within the trial and control subjects in the external data that are similar in terms of their pretreatment characteristics. Two matching schemes based on propensity scores, estimated through generalized boosted methods, are applied to a real example with the objective of using external data to perform Bayesian augmented control in a trial where the allocation is disproportionate. The simulation results show that the data augmentation process prevents prior and data conflict and improves the precision of the estimator of the average treatment effect.
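A minimal sketch of one such borrowing step, assuming simulated baseline covariates for in-trial patients and external controls; the gradient-boosted propensity model and the greedy one-to-one nearest-neighbour pairing below are illustrative stand-ins for the matching schemes studied above, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)

# Hypothetical baseline covariates: rows are patients, columns are covariates
x_trial = rng.normal(0.2, 1.0, size=(60, 4))   # current-trial patients (label 1)
x_ext = rng.normal(0.0, 1.0, size=(200, 4))    # external/historical controls (label 0)

X = np.vstack([x_trial, x_ext])
z = np.r_[np.ones(len(x_trial), dtype=int), np.zeros(len(x_ext), dtype=int)]

# Propensity of being in the current trial, estimated by boosted trees
ps = GradientBoostingClassifier(random_state=0).fit(X, z).predict_proba(X)[:, 1]
ps_trial, ps_ext = ps[: len(x_trial)], ps[len(x_trial):]

# Greedy 1:1 nearest-neighbour matching on the propensity score
available = np.ones(len(x_ext), dtype=bool)
matches = []
for i in np.argsort(-ps_trial):  # match the hardest-to-match trial patients first
    j = int(np.argmin(np.where(available, np.abs(ps_ext - ps_trial[i]), np.inf)))
    matches.append((i, j))
    available[j] = False

print(f"borrowed {len(matches)} external controls out of {len(x_ext)}")
```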

7.
Yang Yuan & Lin Ming, 《统计研究》 (Statistical Research), 2016, 33(2): 91-98
This paper proposes an improved multiple-try Metropolis algorithm for Bayesian parameter estimation and model selection in nonlinear dynamic stochastic general equilibrium models. The multiple-try strategy improves the mixing rate of the algorithm by drawing several trial points at each iteration; the new method uses approximations to speed up computation and corrects the resulting bias through the acceptance probability. Numerical experiments show that the new method achieves higher estimation efficiency within the same computing time. Finally, the paper compares how models with different monetary policy specifications fit Chinese economic data and finds that the data favor the model with a time-varying inflation target.

8.
Recently, sponsors and regulatory authorities have paid much attention to multiregional trials because they can shorten the drug lag, that is, the time lag for approval, and allow simultaneous drug development, submission, and approval around the world. However, many studies have shown that genetic determinants may mediate variability among persons in response to a drug, so some therapeutics benefit only part of the treated patients. This means that the assumption of a homogeneous effect size is not suitable for multiregional trials. In this paper, we determine the sample size of a multiregional clinical trial using fixed-effect and random-effect calculations under the assumption of heterogeneous effect sizes. The performance of the fixed-effect and random-effect approaches in allocating sample size to a specific region is compared using statistical criteria for consistency between the region of interest and the overall results.

9.
International Conference on Harmonization E10 concerns non-inferiority trials and the assessment of comparative efficacy, both of which often involve indirect comparisons. In the non-inferiority setting, there are clinical trial results directly comparing an experimental treatment with an active control, and clinical trial results directly comparing the active control with placebo, and there is an interest in the indirect comparison of the experimental treatment with placebo. In the comparative efficacy setting, there may be separate clinical trial results comparing each of two treatments with placebo, and there is interest in an indirect comparison of the treatments. First, we show that the sample size required for a trial intended to demonstrate superiority through an indirect comparison is always greater than the sample size required for a direct comparison. In addition, by introducing the concept of preservation of effect, we show that the hypothesis addressed in the two settings is identical. Our main result concerns the logical inconsistency between a reasonable criterion for preference of an experimental treatment to a standard treatment and existing regulatory guidance for approval of the experimental treatment on the basis of an indirect comparison. Specifically, the preferred treatment will not always meet the criterion for regulatory approval. This is due to the fact that the experimental treatment bears the burden of overcoming the uncertainty in the effect of the standard treatment. We consider an alternative approval criterion that avoids this logical inconsistency.
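A back-of-the-envelope illustration of the first result, assuming normally distributed outcomes with a common known standard deviation, equal per-arm sizes, and a fixed historical estimate of the control-versus-placebo effect; all numerical values are hypothetical.

```python
from scipy import stats

alpha, power = 0.025, 0.90
z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)

sigma = 1.0      # common outcome standard deviation (hypothetical)
delta_tp = 0.40  # true advantage of the experimental treatment over placebo (hypothetical)
var_cp = 0.004   # variance of the historical control-vs-placebo estimate (hypothetical)

# Direct T-vs-placebo trial: per-arm n solving delta_tp / sqrt(2*sigma^2/n) = z
n_direct = 2 * sigma**2 * (z / delta_tp) ** 2

# Indirect: T-vs-C trial plus the historical C-vs-P estimate, whose variance adds on;
# solve 2*sigma^2/n + var_cp = (delta_tp / z)^2 for n
n_indirect = 2 * sigma**2 / ((delta_tp / z) ** 2 - var_cp)

print(f"per-arm n, direct comparison:   {n_direct:.0f}")
print(f"per-arm n, indirect comparison: {n_indirect:.0f}  (always larger)")
```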

10.
With advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients who are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years. Although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Interaction trees (IT), model‐based recursive partitioning, subgroup identification based on differential effect, simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling were proposed for subgroup identification. We compared these methods in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. Therefore, we propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre‐specified threshold. In our simulation study, we evaluated these methods by considering measures for binary classification, like sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.

11.
In recent years, global collaboration has become a conventional strategy for new drug development. To accelerate the development process and shorten approval time, the design of multi-regional clinical trials (MRCTs) incorporates subjects from many countries/regions around the world under the same protocol. After showing the overall efficacy of a drug in a global trial, one can also simultaneously evaluate the possibility of applying the overall trial results to all regions and subsequently support drug registration in each region. However, most of the recent approaches developed for the design and evaluation of MRCTs focus on establishing criteria to examine whether the overall results from the MRCT can be applied to a specific region. In this paper, we use the consistency criterion of Method 1 from the Japanese Ministry of Health, Labour and Welfare (MHLW) guidance to assess whether the overall results from the MRCT can be applied to all regions. Sample size determination for the MRCT is also provided to take all the consistency criteria from each individual region into account. Numerical examples are given to illustrate applications of the proposed approach.
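A crude simulation sketch of the Method 1 consistency idea (the regional estimate retaining at least half of the overall estimate), assuming a common true effect and ignoring the overlap between the regional and overall estimates; all design values below are hypothetical and this is not the sample size procedure proposed above.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, sigma = 0.3, 1.0  # hypothetical common treatment effect and outcome SD
n_total = 470            # total patients (both arms) giving roughly 90% power for delta
pi = 0.5                 # Method 1 retention fraction

for frac in (0.10, 0.20, 0.30):
    n_region = int(frac * n_total)
    # Standard errors of the mean-difference estimates (per-arm size = n/2)
    se_all = sigma * np.sqrt(4 / n_total)
    se_reg = sigma * np.sqrt(4 / n_region)
    d_all = rng.normal(delta, se_all, 100_000)
    d_reg = rng.normal(delta, se_reg, 100_000)  # crude: treats regional and overall as independent
    consistent = np.mean(d_reg >= pi * d_all)
    print(f"regional fraction {frac:.0%}: P(D_region >= 0.5 * D_overall) ~ {consistent:.2f}")
```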

12.
Pragmatic trials offer practical means of obtaining real-world evidence to help improve decision-making in comparative effectiveness settings. Unfortunately, incomplete adherence is a common problem in pragmatic trials. The methods commonly used in randomized controlled trials often cannot handle the added complexity imposed by incomplete adherence, resulting in biased estimates. Several naive methods and advanced causal inference methods (e.g., inverse probability weighting and instrumental variable-based approaches) have been used in the literature to deal with incomplete adherence. Practitioners and applied researchers are often confused about which method to consider under a given setting. This work aims to review commonly used statistical methods for dealing with non-adherence along with their key assumptions, advantages, and limitations, with a particular focus on pragmatic trials. We have listed the applicable settings for these methods and provided a summary of available software. All methods were applied to two hypothetical datasets to demonstrate how these methods perform in a given scenario, along with the R code. The key considerations include the type of intervention strategy (point treatment settings, where treatment is administered only once, versus sustained treatment settings, where treatment has to be continued over time) and the availability of data (e.g., the extent of measured or unmeasured covariates that are associated with adherence, dependent confounding impacted by past treatment, and potential violation of assumptions). This study will guide practitioners and applied researchers in using the appropriate statistical method to address incomplete adherence in pragmatic trial settings for both the point and sustained treatment strategies.
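As a small illustration of one of the methods reviewed above, the sketch below applies inverse probability weighting to a simulated point-treatment trial in which adherence depends on a measured covariate; the data-generating values are hypothetical and the code is not taken from the paper's supplementary material.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)                                # measured covariate driving adherence and outcome
z = rng.binomial(1, 0.5, n)                           # randomized assignment
p_adh = 1 / (1 + np.exp(-(0.5 + x)))                  # adherence probability increases with x
adhere = np.where(z == 1, rng.binomial(1, p_adh), 1)  # controls trivially "adhere"
y = 1.0 * z * adhere + 0.8 * x + rng.normal(size=n)   # true effect of received treatment = 1.0

# Per-protocol contrast: adherent treated patients versus controls
treated = (z == 1) & (adhere == 1)
control = z == 0

# Inverse probability weighting: reweight adherent treated patients by 1 / P(adherence | x)
fit = LogisticRegression().fit(x[z == 1].reshape(-1, 1), adhere[z == 1])
w = 1.0 / fit.predict_proba(x[treated].reshape(-1, 1))[:, 1]

naive = y[treated].mean() - y[control].mean()
ipw = np.average(y[treated], weights=w) - y[control].mean()
print(f"naive per-protocol estimate: {naive:.2f}, IPW estimate: {ipw:.2f} (truth 1.0)")
```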

13.
In rare diseases, typically only a small number of patients are available for a randomized clinical trial. Nevertheless, it is not uncommon that more than one study is performed to evaluate a (new) treatment. Scarcity of available evidence makes it particularly valuable to pool the data in a meta-analysis. When the primary outcome is binary, the small sample sizes increase the chance of observing zero events. The frequentist random-effects model is known to induce bias and to result in improper interval estimation of the overall treatment effect in a meta-analysis with zero events. Bayesian hierarchical modeling could be a promising alternative. Bayesian models are known for being sensitive to the choice of prior distributions for the between-study variance (heterogeneity) in sparse settings. In a rare disease setting, only limited data will be available to base the prior on; therefore, robustness of estimation is desirable. We performed an extensive and diverse simulation study, aiming to provide practitioners with advice on the choice of a sufficiently robust prior distribution shape for the heterogeneity parameter. Our results show that priors that place some concentrated mass on small τ values but do not restrict the density, for example the Uniform(−10, 10) heterogeneity prior on the log(τ²) scale, show robust 95% coverage combined with less overestimation of the overall treatment effect, across varying degrees of heterogeneity. We illustrate the results with meta-analyses of a few small trials.
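A minimal PyMC sketch of such a Bayesian hierarchical meta-analysis with the Uniform(−10, 10) prior on the log(τ²) scale; the event counts are hypothetical, and the binomial likelihood with trial-specific control rates is only one of several possible parameterizations, not necessarily the one used in the paper.

```python
import numpy as np
import pymc as pm

# Hypothetical data: events / patients in the treatment and control arms of three small trials
r_t, n_t = np.array([0, 1, 2]), np.array([12, 15, 10])
r_c, n_c = np.array([1, 0, 3]), np.array([11, 14, 12])

with pm.Model() as meta:
    mu = pm.Normal("mu", 0.0, 10.0)                     # overall log odds ratio
    log_tau2 = pm.Uniform("log_tau2", -10.0, 10.0)      # heterogeneity prior on log(tau^2)
    tau = pm.Deterministic("tau", pm.math.exp(0.5 * log_tau2))
    theta = pm.Normal("theta", mu, tau, shape=3)        # trial-specific log odds ratios
    p_c = pm.Beta("p_c", 1.0, 1.0, shape=3)             # control-arm event probabilities
    p_t = pm.Deterministic("p_t", pm.math.invlogit(pm.math.logit(p_c) + theta))
    pm.Binomial("y_c", n=n_c, p=p_c, observed=r_c)
    pm.Binomial("y_t", n=n_t, p=p_t, observed=r_t)
    idata = pm.sample(2000, tune=2000, target_accept=0.95, random_seed=0)

print(idata.posterior["mu"].mean().item(), idata.posterior["tau"].mean().item())
```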

14.
To shorten the drug lag, that is, the time lag for approval, simultaneous drug development, submission, and approval around the world may be desirable. Recently, multi-regional trials have attracted much attention from sponsors as well as regulatory authorities. Current methods for sample size determination are based on the assumption that the true treatment effect is uniform across regions. However, unrecognized heterogeneity among patients, such as ethnic or genetic factors, will affect patients' survival. In this article, we design a multi-regional trial allowing the treatment effects to differ among regions because of unrecognized heterogeneity that interacts with treatment. The log-rank test is employed to deal with the heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for the multi-regional trial, and a consistent-trend criterion is used to rationalize the partition of the sample size to each region.

15.
Dose‐escalation trials commonly assume a homogeneous trial population to identify a single recommended dose of the experimental treatment for use in future trials. Wrongly assuming a homogeneous population can lead to a diluted treatment effect. Equally, exclusion of a subgroup that could in fact benefit from the treatment can cause a beneficial treatment effect to be missed. Accounting for a potential subgroup effect (ie, difference in reaction to the treatment between subgroups) in dose‐escalation can increase the chance of finding the treatment to be efficacious in a larger patient population. A standard Bayesian model‐based method of dose‐escalation is extended to account for a subgroup effect by including covariates for subgroup membership in the dose‐toxicity model. A stratified design performs well but uses available data inefficiently and makes no inferences concerning presence of a subgroup effect. A hypothesis test could potentially rectify this problem but the small sample sizes result in a low‐powered test. As an alternative, the use of spike and slab priors for variable selection is proposed. This method continually assesses the presence of a subgroup effect, enabling efficient use of the available trial data throughout escalation and in identifying the recommended dose(s). A simulation study, based on real trial data, was conducted and this design was found to be both promising and feasible.

16.
With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary sources of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many challenges, both operational and scientific, in conducting drug development globally. One of the many important questions to answer in the design of a multiregional study is how to partition the sample size into each individual region. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches.

17.
We introduce the problem of estimation of the parameters of a dynamically selected population in an infinite sequence of random variables and provide its application in statistical inference based on record values from a non-stationary scheme. We develop unbiased estimation of the parameters of the dynamically selected population and evaluate the risk of the estimators. We provide comparisons with natural estimators and obtain asymptotic results. Finally, we illustrate the applicability of the results using real data.

18.
Randomized controlled trials are recognized as the 'gold standard' for evaluating the effect of health interventions, yet few such trials of human immunodeficiency virus (HIV) preventive interventions have been conducted. We discuss the role of randomized trials in the evaluation of such interventions, and we review the strengths and weaknesses of this and other approaches. Randomization of clusters (groups of individuals) may sometimes be appropriate, and we discuss several issues in the design of such cluster-randomized trials, including sample size, the definition and size of clusters, matching and the role of base-line data. Finally we review some general issues in the design of HIV prevention trials, including the choice of the study population, trial end points and ethical issues. It is argued that randomized trials have an important role to play in the evolution of HIV control.

19.
In a clinical trial, sometimes it is desirable to allocate as many patients as possible to the best treatment, in particular when a trial for a rare disease may contain a considerable portion of the whole target population. The Gittins index rule is a powerful tool for sequentially allocating patients to the best treatment based on the responses of patients already treated. However, its application in clinical trials is limited due to technical complexity and lack of randomness. Thompson sampling is an appealing approach, since it makes a compromise between optimal treatment allocation and randomness, with some desirable optimal properties in the machine learning context. However, in clinical trial settings, multiple simulation studies have shown disappointing results with Thompson samplers. We consider how to improve the short-run performance of Thompson sampling and propose a novel acceleration approach. This approach can also be applied to situations where patients can only be allocated by batch, and it is very easy to implement without using complex algorithms. A simulation study showed that this approach could improve the performance of Thompson sampling in terms of average total response rate. An application to a redesign of a preference trial to maximize patients' satisfaction is also presented.
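A minimal Beta-Bernoulli sketch of Thompson sampling with batch allocation for two arms; the response rates, batch size, and number of batches are hypothetical, and the acceleration approach proposed above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rates = np.array([0.35, 0.55])   # hypothetical response rates of the two arms
alpha, beta = np.ones(2), np.ones(2)  # Beta(1, 1) priors for each arm
batch_size, n_batches = 10, 20
responses, treated = np.zeros(2), np.zeros(2)

for _ in range(n_batches):
    # One posterior draw per patient and arm; the posterior is updated only between batches
    draws = rng.beta(alpha, beta, size=(batch_size, 2))
    arms = draws.argmax(axis=1)
    y = rng.random(batch_size) < true_rates[arms]
    for a in (0, 1):
        responses[a] += y[arms == a].sum()
        treated[a] += (arms == a).sum()
    alpha, beta = 1 + responses, 1 + treated - responses

print("patients per arm:", treated, "overall response rate:", responses.sum() / treated.sum())
```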

20.
In parallel group trials, long‐term efficacy endpoints may be affected if some patients switch or cross over to the alternative treatment arm prior to the event. In oncology trials, switch to the experimental treatment can occur in the control arm following disease progression and potentially impact overall survival. It may be a clinically relevant question to estimate the efficacy that would have been observed if no patients had switched, for example, to estimate ‘real‐life’ clinical effectiveness for a health technology assessment. Several commonly used statistical methods are available that try to adjust time‐to‐event data to account for treatment switching, ranging from naive exclusion and censoring approaches to more complex inverse probability of censoring weighting and rank‐preserving structural failure time models. These are described, along with their key assumptions, strengths, and limitations. Best practice guidance is provided for both trial design and analysis when switching is anticipated. Available statistical software is summarized, and examples are provided of the application of these methods in health technology assessments of oncology trials. Key considerations include having a clearly articulated rationale and research question and a well‐designed trial with sufficient good quality data collection to enable robust statistical analysis. No analysis method is universally suitable in all situations, and each makes strong untestable assumptions. There is a need for further research into new or improved techniques. This information should aid statisticians and their colleagues to improve the design and analysis of clinical trials where treatment switch is anticipated. Copyright © 2013 John Wiley & Sons, Ltd.
